Articles

It’s Official: 100 Percent Brain Fingerprinting Is For Real

posted May 3, 2016, 4:03 AM by Ellen Pearlman

image

Natural, involuntary responses can give you away - Andrew Halting/Binghamton University - The New Biometric Brainprint

It was just a matter of time before ‘brain fingerprinting’ became the go-to biometric indicator of choice for those in high-security, non-breachable positions of power and authority. And the time is now. Researchers at Binghamton University have developed a biometric identification method called Cognitive Event-RElated Biometric REcognition (CEREBRE) for identifying an individual’s unique brain fingerprint. They discovered that each subject’s brain had a unique reaction to visual images. When a computer analyzed those responses, it identified the subjects with 100 percent accuracy. Just like fingerprints and iris scans, no two people were alike. The best part is that brainprints can’t be stolen and can’t be imitated.
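For the technically curious, here is a rough sketch of the general idea - not the CEREBRE code itself, and all data below is synthetic - of how a classifier might identify people from their averaged brain responses:

```python
# A minimal sketch of ERP-based identification, NOT the CEREBRE pipeline itself:
# each subject's averaged brain response to a set of images is treated as a
# feature vector, and a simple classifier tries to recover who is who.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_sessions, n_features = 10, 8, 64   # hypothetical sizes

# Simulate a stable per-subject "brainprint" plus session-to-session noise.
brainprints = rng.normal(size=(n_subjects, n_features))
X = np.vstack([bp + 0.3 * rng.normal(size=(n_sessions, n_features))
               for bp in brainprints])
y = np.repeat(np.arange(n_subjects), n_sessions)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=4)
print("identification accuracy per fold:", scores)
```

In the real study the features are event-related potentials recorded over many image presentations, not random numbers, but the classification step follows this same pattern.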

image

Professor Sarah Laszlo of Binghamton University, who developed the technique - Andrew Halting/Binghamton University - The New Biometric Brainprint

They are able to do this because each “category of stimulation” engages its own, partly overlapping set of brain regions (see the semantic brain below). A picture of food, a celebrity, or an animal each stimulates different parts of the brain.

image

Prepping a subject for testing - Andrew Halting/Binghamton University - The New Biometric Brainprint

The research crosses cybersecurity, biometrics, cognitive neuroscience and psychology. Professor Laszlo says, “We are unique, our brains are unique, our thoughts are unique, our feelings are unique. It’s really quantifiable; you are not the same as any other person.”

image

One person’s unique brainwave “fingerprint” - Andrew Halting/Binghamton University - The New Biometric Brainprint

The researchers see this being used for “access to very high security locations” like the Pentagon, and for next-generation biometrics. But what is happening at the University of California, Berkeley doubles the fun.

image

Mapped areas of the brain responding to the word “top” (highlighted in green), all images from Nature Video

The other frontier that is progressing quickly is the semantic brain, or brain mapping, out of Jack Gallant’s lab at UC Berkeley. There is now a “brain dictionary.” It’s not really about just one word, since “a single word can activate whole regions.” It’s your entire brain.

image

Word “top” associated with words of clothing and appearances

image

“Top” is also associated with numbers and measurements

image

As well as with “top” as in places and buildings

image

Mapping of types of words

Certain words map to certain parts of the brain, although activity occurs all across the brain and in both hemispheres. Volunteers at Gallant’s lab went into an fMRI scanner and listened to stories from The Moth for two hours. Through blood flow, the researchers figured out which parts were responding to the meaning of the words - the actual semantics.
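The modelling idea can be sketched roughly like this (a hedged illustration, not the Gallant lab’s actual code; all sizes and data are placeholders): regress word-feature time courses against each voxel’s blood-flow signal, so every voxel ends up with its own “semantic tuning.”

```python
# A toy voxel-wise semantic model: predict each voxel's BOLD signal from
# features describing the words heard at each moment of the story.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_timepoints, n_word_features, n_voxels = 300, 50, 1000  # hypothetical sizes

word_features = rng.normal(size=(n_timepoints, n_word_features))  # story annotation
true_weights = rng.normal(size=(n_word_features, n_voxels))
bold = word_features @ true_weights + rng.normal(size=(n_timepoints, n_voxels))

model = Ridge(alpha=10.0).fit(word_features, bold)
# Each row of model.coef_ is one voxel's "semantic tuning" -- the kind of map
# that gets painted onto the cortical surface in the figures above.
print(model.coef_.shape)  # (n_voxels, n_word_features)
```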

image

The red sections here are social words, especially “wife, mother, pregnant and family”. 

It also aligns with words like “house” and “owner” just next door. Words that have to do with how things look, like “stripes,” are, naturally, found near the visual cortex.

image

Different people have the same kinds of concepts in the same kinds of places, though not exactly the same. 

image

So I went to the online mapping of voxels here, and zoomed in on that little orange square, and it was like reading an amazing poem - 

This aligns very closely with the first study mentioned above, on brain fingerprinting, since that study also relies on a kind of semantic classification of images and words to compute the brainprint. When these two areas of research converge - brain fingerprinting and the semantic brain - no matter who you are, there you are, complete and irreplaceable. And trackable and identifiable.

Medical Breakthroughs - Operating Room VR and Chip Implants That Move Paralyzed Limbs

posted Apr 15, 2016, 9:09 AM by Ellen Pearlman

image

iOS app that lets you stream a 360-degree operation

The world’s first VR operation was streamed live, with its own iOS and Android apps, on April 14, 2016. If you really wanted to go full tilt boogie watching the colon cancer surgery, you could have searched ‘VRinOR’ in the Oculus Store and viewed it on a Gear VR.

image

Special 360 Camera - Image from BBC

The operation, carried out by Barts Health staff, was made possible in partnership with healthcare company Medical Realities and live streaming app Mativision. The doctor performing the surgery was Dr Shafi Ahmed.

image

Dr. Shafi Ahmed - Image from BBC

Streaming through the Mativision app allowed thousands of medical students to tune in – and anyone else who wanted to, even people in developing countries. The Medical Realities website allowed anyone with Google Cardboard and a smartphone to see what was going on.

image

Medical students using Google cardboard and their smartphones to watch the operation

In one sense this is like a TV broadcast, but with one huge difference: whoever is watching can move through a 360-degree view of the entire event - something even the doctors can’t accomplish. It’s like being inside someone’s intestines, as well as the operating room. Future plans are to have the operation incorporate virtual touch and feel within a few years.

CHIP IMPLANTS

The other breakthrough, which I have been tracking in my blogs, is about a chip implanted in a paralyzed man’s brain that enables him, if he concentrates hard enough, to bring motion back to his hands. Ian Burkhart broke his neck and lost the ability to control anything below his chest. 

image

The electrical stimulators on Ian’s hand - all photos from Nature Magazine

image

Nerve signals from brain to hand cut off at the neck

image

Chip implanted into the brain - looks painful


image

The red dots are the spots in the brain that light up when Ian thinks about flexing his hand

image

The computer software algorithm learning how to fine tune the brain gripping formula

The computer then sends signals to the muscles in the forearm, and Ian is able to use his hand.
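Conceptually, the loop is: decode the intended movement from neural activity, then translate that decision into a stimulation pattern on the forearm sleeve. The sketch below is a hypothetical illustration of that decode-then-stimulate idea, not the study’s actual algorithm; every number and name in it is made up.

```python
# Hypothetical decode-then-stimulate loop: classify the intended hand movement
# from neural features, then map the decision onto forearm stimulation channels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
movements = ["rest", "wrist_flex", "hand_open", "pinch"]

# Synthetic training data: 96-channel firing-rate features per attempted movement.
X_train = rng.normal(size=(400, 96))
y_train = rng.integers(0, len(movements), size=400)
decoder = SVC().fit(X_train, y_train)

def stimulation_pattern(label: int) -> np.ndarray:
    """Map a decoded movement onto (hypothetical) forearm electrode amplitudes."""
    pattern = np.zeros(130)                 # e.g. a 130-electrode sleeve
    pattern[label * 30:(label + 1) * 30] = 1.0
    return pattern

new_features = rng.normal(size=(1, 96))     # one new window of neural activity
decoded = int(decoder.predict(new_features)[0])
print(movements[decoded], stimulation_pattern(decoded).sum())
```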

image

Here Ian is with the stimulators on his forearm. He can even pinch a straw, all through thoughts conveyed from his brain to his hand while bypassing his injured spinal cord. It’s only being done in the lab at this point, but one day it could work for people outside the lab in a more mobile setting.

The work was done by Rajesh Rao, the director of the Center for Sensorimotor Neural Engineering at the University of Washington, and a scientist I have been tracking. The full paper is here.

This shows the emergence of both 360 VR for live streaming audiences and implantable devices that create truly bionic individuals.

A Ubiquitous State Becomes An Electro-Haptic Reality

posted Jan 14, 2016, 8:37 PM by Ellen Pearlman

image

Artist Tiffany Trenda’s idea of the full body suit is very different from Teslasuit’s - This is a photo from her performance “Body Code,” 2012, taken in Times Square, NY - which shows, once again, that the art world gets it first.

image


The Kickstarter photos of Tesla Studios commercial body suit, 2015

The Teslasuit is a 52-point, climate-controlled haptic feedback suit that is fully upgradable and can be used with VR systems like the Oculus Rift, PlayStation and Xbox. Its slogan is “feel what you play”. It comes equipped with electrodes that “move your muscles around,” according to one user. That is because it uses EMS, or electrical muscle stimulation.

image

User Anthony Brown gets a virtual but real massage through touching the dots

The technology already exists in physical therapy to assist in pain relief and train muscle groups.

image

The touchsuit design

It’s a wireless suit that incorporates tiny electric mesh sensors.

image

Heat mapping and motion sensors

The suit can capture body movements and send them directly to whatever VR application is being used - at least in theory. I would like to see a number of demonstrations to make sure there are no glitches. It also has a climate control system that responds to what you are doing inside the VR environment, i.e., the suit heats up if you are in the virtual tropics or cools down if you are in the virtual Arctic.

image

The Bluetooth enabled control belt

It has its own operating system and apps, like the virtual meeting app where users can transmit virtual hugs. The developers describe it as “durable, washable and comfortable to wear”. They describe its uses as “gaming, virtual dating, health, wellbeing, education and virtual technologies, sport and fitness, science and engineering, psychology and real life training simulations, animation and many more.” You know what is coming next - the sex industry will be all over this in a nanosecond.

image

The all-male team of the UK-based company has a pretty cheeky sense of their ability to be god-like, but all in good humour.

image

The only time a girl shows up in the promo videos is an avatar babe who is on the receiving end of a “virtual hug” from robotic hands. The only game that currently works with the suit is a virtual paintball game, but more should be in the pipeline.

image

Standard military uses - again. Calling DARPA, calling DARPA, MI6, do you read me?

image

Then there is artist Tiffany Trenda’s 3D-printed costume for her piece “Ubiquitous States,” which “synchronises heartbeats through touch and sound to probe deeper complexities of human emotions”. What a totally different approach! Her suit has ECG sensors in the fingertips. When Trenda touches someone’s wrist or neck, that person’s vital signs are picked up. Trenda’s own are monitored the whole time, and both are displayed on a panel on the front of the costume. Both people also wear headphones to hear one another’s heartbeat. When the heartbeats synchronize, the panel display turns from black to white.

image

The piece is a collaboration between 3D Systems, the largest specialised 3D printing company in the world, award-winning designer Janne Kyttenan, and Trenda.

image

Trenda at the Context Art Fair in Miami, December 2015

What I really like is the artist’s statement - 

“In Valéry’s The Conquest of Ubiquity, he describes the current state of our techno-obsessed culture where our tools of precision have changed our behaviors. Today, these tools are primarily our screens (smart phones, tablets, computers, etc.). They give us the ability to download, text, talk, and interact anywhere with the surface of the skin and surface of the computer. Thus, we are living in both the physical and the simulated world simultaneously and we are unable to fully engage in the present. Instead, we are hiding behind our devices and masking our primitive desires of having real time physical interactions. This questions the authenticity of our behaviors using these vernacular technologies. In Ubiquitous States, the performance will unite the heartbeats of both artist and the participant and will measure our emotional connections through the virtual.”

image

Tiffany Trenda - “Proximity Cinema,” 2013 in the  Black Forest, Germany

Magic Leap Gets It Right

posted Dec 12, 2015, 10:45 PM by Ellen Pearlman

image

Whale breaching in a high school gymnasium

Magic Leap has combined holography and augmented reality to produce lifelike, computer-generated optical illusions. In this jaw-dropping video a school basketball court becomes the stage for a full-sized whale to breach from underneath the floorboards before flopping down into the non-existent ocean and disappearing. Its illusions are so lifelike they fool the brain, just as they do in this video clip in front of a real audience.

Most augmented reality displays have been 2D, used on individuals’ mobile phones to superimpose one image over another. Mixed reality places an object into the real world so you can interact with it just as you normally would.

Magic Leap was founded by Rony Abovitz, who previously founded the world’s first robotic haptic surgical arm company, Mako Surgical. He sold that for 1.3 billion dollars to a company called Stryker. With so much money, he dressed up as a spaceman and danced around with two human furballs called “Shaggles” selling ‘space fudge’  during TEDxSarasota. He called his presentation “The Synthesis of Imagination.”

image

Abovitz as spaceman hawking space fudge after his 1.3 billion dollar windfall

However, there was a deeper motive for the shenanigans. He briefly spoke to the audience, saying “A few awkward steps for me, a magic leap for mankind.” Later in his presentation he flashed an overhead sign saying “Creativity Matters: 12/12/12.”

Abovitz has basically made a HoloLens-style headset with one difference - its image-generation capabilities. It uses lightfield signal technology. Lightfields collect all the rays of light that come from a scene. Normally a photograph takes the rays and stores only color and intensity onto a sensor plate (or, in the old days, film). A lightfield does not store the pixels but the entire ray. When you have all the rays emanating from a scene, the image can be refocused at any position. When you do this over a period of time it is called a dynamic lightfield. With the correct lens the saved ray information can be refocused so our eyes perceive depth - a type of 3D microdisplay. It is a specialized way of performing advanced signal processing. Magic Leap is perfecting its own lightfield chip to project images onto the user’s retina using a specialized projector.
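One classic way to illustrate the refocusing idea is “shift-and-add” over a grid of sub-aperture views: shift each view in proportion to its position in the array, then average. The toy sketch below shows that principle with synthetic data; it is emphatically not Magic Leap’s proprietary lightfield chip.

```python
# Toy shift-and-add refocusing over a 2-D grid of sub-aperture views.
import numpy as np

def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
    """views: (U, V, H, W) array of sub-aperture images; alpha sets focal depth."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the array centre.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

views = np.random.rand(5, 5, 64, 64)        # placeholder light field
print(refocus(views, alpha=1.5).shape)      # (64, 64)
```

Changing alpha moves the virtual focal plane after capture, which is the property that makes stored rays more flexible than stored pixels.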

The big issue here is safety for both the human eye and the human brain. Abovitz won’t work with devices that use traditional stereoscopic 3D tech. A lot of his initial insight about the technology came from mechanical engineering researcher Eric Seibel at the University of Washington, whose work has made investigations into true 3D displays feasible.

image

Sparkydog and Friends, illustration by Andy Lanning and Anthony Williams, DC Comics

Abovitz has a virtual rock and roll band he calls “Sparkydog & Friends.” They were the inspiration for Magic Leap, because he wanted to take them on a virtual tour where they would appear in as many places as possible. Preliminary reports about Magic Leap suggest the viewer would wear special sunglasses with the equivalent of a small battery pack attached. It also needs special software to track the viewer’s eyes and fingers, configured through deep learning AI research.

The company has already hired Neal Stephenson as ‘chief futurist’. Stephenson wrote the 1992 sci-fi blockbuster Snow Crash.

image

A recent U.S.  patent filed by Magic Leap of the type of glasses they are envisioning

image

The belt pack you would need to wear

Google has invested half a billion dollars in Magic Leap, and the company just raised 827 million dollars in a third round of funding. This means they will have raised 1.4 billion dollars in startup funding, among the most in the world. Now they can hire the staff they need to make this into a compelling reality.

Instant VR Gonna Get You

posted Nov 9, 2015, 8:00 PM by Ellen Pearlman   [ updated Nov 9, 2015, 8:24 PM ]

image

The New York Times Google VR viewer for your smart phone - courtesy New York Times

The New York Times broke the VR (Virtual Reality) barrier by sending out an inexpensive Google Cardboard virtual reality viewer to its digital subscribers this week, to use in conjunction with their smartphones. Concurrently, the National Football League (NFL) is amping up its VR arsenal for training purposes. The Minnesota Vikings’ coaches now have the option of entering the VR world with their players to work off the rough edges of a touchdown play gone bad.

image

Jeremy Bailenson, the Stanford scientist Mark Zuckerberg of Facebook met with before acquiring Oculus - courtesy NFL Next

I was pleased to see that Jeremy Bailenson, the cutting-edge HCI (Human Computer Interaction) expert, was deeply involved with the NFL in adapting these immersive systems for the league. I encountered Bailenson at the Tribeca Talks: Imagination Panel powered by The Hatchery at the Tribeca Film Festival, where I sampled his designs for VR via an Oculus Rift. Bailenson knows the relationship between the eye, brain, and motor coordination better than anyone else. His VR piece, which I experienced while wearing an Oculus Rift headset, fooled my brain into thinking I had crashed into a solid object, though I was fully aware there was no real object in front of me.

image

Scoping out a virtual football player, courtesy NFL Next

The technology is so developed it can track a player’s physical movements in real time.

image

Tracking a football player’s body motion in real time

STRIVR is the VR system the NFL seems to prefer at the moment. It incorporates a multi-camera setup that allows the player to re-enter the scrimmage after the fact, re-experience it, and reevaluate it virtually, running the play from many different angles as many times as the player, or the player and his coach, want.

image

Multi-cameras ready to record immersive experiences.

In the future, sports stadiums and sports bars will also see fans purchasing virtual viewing tickets at a variety of prices: from box seats at premium prices to spots further up in the bleachers for as little as ten dollars.

One other creative project does not quite use VR, but instead evokes it through drones and projection mapping: the music video for “Cold Stares” by Chance The Rapper and the O’My’s, directed by Daito Manabe and TAKOM.

image

Dancers huddled on stage; the blue circle is the light on top of a camera drone

image

This is part of the projection mapping image that flashes on screen from the processed drone image during the performance

image

This is taking a different view of immersion and cameras, one the NFL did not quite dream of. Though media and sports will power the adoption of these technologies, the arts already thought this one through, starting with Stan VanDerBeek’s movie projection dome in 1969.


And then there is Nonny de la Pena’s post on Facebook about the prototype viewer developed in 2012 by Perry Hoberman and USC ICT - which they gave to Google. For free!

First VR Film Festival ever! - so in your face

posted Sep 28, 2015, 8:02 AM by Ellen Pearlman

image

Still from “Butts” by Tyler Hurd

I don’t know about you, but I can’t wait to have a grown-up, blue, Smurf-like figure fart glitter dust all over my face in an immersive 3D environment! Two minutes of butts and scatological references! “Butts” is supposedly the first ever animated cartoon for virtual reality, part of a virtual reality film festival making a ten-city tour organised by VR video agency Kaleidoscope and video hosting platform Vrideo. It’s going to Boston, Austin, Toronto, New York, LA, San Francisco, Vancouver, Seattle and Portland.

However, if you want something more sobering, and real, the festival also offers “Welcome to Aleppo” by Christian Stephan. His immersive experience takes you on a 360-degree hike around Aleppo, Syria so you can understand, up close and personal, what all those hundreds of thousands of Syrian refugees are running from.

image

Welcome to Aleppo by Christian Stephan

So let’s say animation is not your thing, and war-torn zones are a tad too real. Then you can watch “Surge” by Arjan Van Meerten, a year-long code jamboree about evolution and complexity created in the cutting-edge Unreal engine.

image

Surge by Arjan Van Meerten

But if you want to kick back and watch a docu-drama, then Jessica Kantor has produced a black and white quickie story “The Archer.”

image

Jessica Kantor, The Archer

All of these are seen in a ubiquitous alone-together viewing environment.

image

Let’s be together - but alone while we watch our VR videos.

So what might happen to you, or your significant other, under these conditions? Could you meet and fall in VR Luv? Aaron Bradbury already thought this through and made a pre-rendered 360-degree VR animation about the chemical processes that occur when you glimpse the object of your desire - aptly titled “LoVR.”

image

LoVR by Aaron Bradbury

The implications of this new viewing environment, minus the popcorn, should keep Sherry Turkle in business deconstructing this new type of “Alone Together” for the next fifteen years - at least.

Brains Go Bigtime!

posted Sep 16, 2015, 9:19 AM by Ellen Pearlman

image

Muse headsets all lined up, ready for action - photo by William Wnekowicz

In February there was a big party/exhibition at the Exploratorium in California, with people walking around controlling, or trying to control, various things and mental states with their minds. The team who put together one of the exhibits consisted of Marion Le Borgne, James Bobowski, David Silva, and William Wnekowicz.

Muse, a dry EEG headset (as opposed to gel or saline), allows for live-time streaming of brainwave data, and was the headset of choice for the exhibition. It processed just a few frequency bands of waves: alpha, beta, theta and gamma. The brainwave data was stored, analysed and visualised through an open-source project the team named CloudBrain, which is generating a lot of buzz in the BCI (Brain Computer Interface) world. For visitors who allowed it, their anonymous brain data was stored in a cloud database using the Cassandra system.
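For readers who want to see what “a few frequency bands” means in practice, here is a hedged sketch of how raw EEG is typically reduced to theta/alpha/beta/gamma band powers using Welch’s method. The signal is synthetic, and this is not the Muse SDK or the CloudBrain code.

```python
# Reduce a raw EEG trace to band powers (theta/alpha/beta/gamma).
import numpy as np
from scipy.signal import welch

fs = 256                                    # Muse-like sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake 10 Hz alpha

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(name, np.trapz(psd[mask], freqs[mask]))   # integrated power per band
```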

image

People’s live-time brain feeds projected on a viewer. Are you relaxed, calm, tense, thinking, or are your eyes closed? You can look up on a screen and tell right away.

CloudBrain used an analysing module powered by a custom algorithm. The visualization was set up to look like a “radar” chart, and it projected real-time EEG on a monitor. The code that powered it was written in AngularJS, a JavaScript framework.
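The original visualization was browser-based JavaScript, but the same “radar” idea can be sketched in a few lines of Python with matplotlib (the band values below are made up):

```python
# Rough matplotlib stand-in for the 'radar' style band-power display.
import numpy as np
import matplotlib.pyplot as plt

bands = ["theta", "alpha", "beta", "gamma"]
powers = [0.4, 0.9, 0.6, 0.2]               # one visitor's relative band powers

angles = np.linspace(0, 2 * np.pi, len(bands), endpoint=False).tolist()
angles += angles[:1]                        # close the polygon
values = powers + powers[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(bands)
plt.show()
```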

The tricky part of this exhibit was how to stream everyone’s headset data while they were walking around. Usually consumer-grade BCI headsets work with one headset, one person, and one display. The team chose a publisher/subscriber architecture so that each EEG stream (a publisher) could be routed to a specific booth (a subscriber). They did this through a Spark Core and an RFID tagger. They also used an app called Spacebrew.

image

Spark Core device on the left and RFID taggers on the right

The Spacebrew app was not meant to be used with so much traffic, meaning so many people. Muse sends out a few thousand messages per second, and the exhibition used 20 Muse headsets - thousands of messages per second times twenty. The team had to rewrite the code to work with RabbitMQ, which scales much better on the backend.
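A minimal publisher/subscriber sketch using RabbitMQ (via the pika client) gives the flavor of the headset-to-booth routing described above; the exchange and routing-key names here are hypothetical, not taken from the CloudBrain code, and a RabbitMQ broker is assumed to be running locally.

```python
# Route one headset's EEG samples to the booth that a visitor tapped into.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="eeg", exchange_type="topic")

# Publisher side: one headset tags its samples with its own ID.
sample = {"headset": "muse-07", "alpha": 0.82, "beta": 0.31}
channel.basic_publish(exchange="eeg",
                      routing_key="headset.muse-07",
                      body=json.dumps(sample))

# Subscriber side: a booth binds only to the visitor who tapped in via RFID.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="eeg", queue=result.method.queue,
                   routing_key="headset.muse-07")
connection.close()
```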

They  needed huge server (BrainServer) capacity and a lot of dedicated internet bandwidth. The team also deployed five machines to capture Bluetooth data from the Muse headsets, and forward it to BrainServer. 

image

Lots of Bluetooth enabled headsets at this opening party

As of this writing, CloudBrain has demos for just two models of BCIs - Muse and OpenBCI - showing how they parse data in live time.

image

A screenshot of how the Muse parses live-time data in the ‘radar’ chart.

The first screen shot shows 25 users, and the second shows “all”, though I am not sure how many “all” is.

image

Muse “all” data; the squiggly lines at top are different channels

The CloudBrain data connects back to a repository of raw code on GitHub. On the repository they posted the averages of the aggregated data.

image

So it seems most people went into alpha states of relaxation, or had their eyes closed - hard to say which. That is followed by beta, or alertness and/or agitation. Coming in third was theta, or calmness. So the Muse functioned as a kind of biofeedback device.

image

This is the best representation of the visitors and the ‘radar’ screens in live time.

The developers think it is very interesting for participants to see live-time comparisons of how their brains stack up against others’.

However, this is the first time that data banks of users’ brainwaves are being developed for use in the cloud, and it proves how the field of neurogaming is right on the cusp of cloud storage technology. The implications of this are far-reaching. Cognitive Technology understands the profound implications this will have on issues of bioprivacy, and recently launched the first workshop for the Center for Responsible Brainwave Technologies.

The question will no longer be “What’s In Your Wallet”, but more like “What’s In Your Brain?”

Celebrities On Holograms

posted Sep 12, 2015, 5:10 AM by Ellen Pearlman

image

Plans are in the works for a Whitney Houston holographic dead celebrity world tour

image

Bharatiya Janata Party (BJP) prime ministerial candidate Narendra Modi pictured during a live “3D hologram” telecast in Mumbai. Photo: AFP

image

Jimmy Kimmel Live in both Hollywood (left and right big screens) and as a hologram (smaller center image)  in Nashville. 

Holograms USA puts Madame Tussauds Wax Museum to shame. Soon you will be able to go to the National Comedy Center in Jamestown, New York (hometown of Lucille Ball of ‘I Love Lucy’ fame) and watch live-time holograms of your favorite dead comedians at the Lucy Desi Center for Comedy.

image

Mockup of Richard Pryor and others as holograms at the Lucy Desi Center

Holograms USA is the same company that brought Tupac Shakur to the Coachella Valley Music and Arts Festival in 2012, and they have refined their techniques to enable more realistic stage representations.

image

Snoop Dogg with a hologram of the late Tupac Shakur - Photo by Christopher Polk/Getty Images for Coachella

It’s not stopping there. Plans are being drawn up for a world tour of that candelabra-wielding heartthrob Liberace.

image

Mr. Sparkles & Glitz, Liberace, brought back to life through the magic of technology. Photo Larry Busacca/WireImage

Jonathan Warren, chairman of the Liberace Foundation, told Rolling Stone magazine, “You’ll feel the warmth from his heart, the sparkle of his eye and the pure lightning from his fingertips.” OOOOWeee - can’t wait!!

image

Michael Jackson dancing in live time as a hologram with real flesh-and-blood dancers.

There is also the video from the 2014 Billboard Awards of Michael Jackson dancing with real dancers on stage, even though it launched a nasty lawsuit. Very convincing. The technology behind this is a Mylar foil, known as Musion Eyeliner, that can deliver uncompressed media for clear projected imagery.

For tech geeks, Musion says their Eyeliner system uses “simple video equipment: a video projector, preferably DLP with an HD card/minimum native resolution of 1280 x 1024 and brightness of 5000+ lumens, or a high quality TFT plasma or LCD screen; a hard-disc player with 1920 x 1080i HD graphics card, Apple or PC video server, or DVD player; Musion Eyeliner foil + 3D set/drapes enclosing 3 sides; lighting and audio; show controller (on site or remote). Subjects are filmed in HDTV and broadcast on to the foil through HDTV projection systems, driven by HD Mpeg2 digital hard disc players, or uncompressed full HDTV video/Beta-Cam players.”

Hologram USA is producing “The Fishtank” with 3D viewing, powered by 16-core computers.

image

In the fishtank with a 20 foot wide version on the way

image

Pricey Eyeliner setups

They actually show mockups of their Eyeliner setups for traveling theater shows. Their Eyeliner foil is 1/25,000 of an inch thick and uses three layers.

image

The three layers of holographic foil in orange fades

The image is then projected both from the top and the bottom and reflected back into the foil with surprisingly lifelike effects.

image

Holographic image from the top, and a double of the image from the bottom, reflected back.

However, the real jumping-off point is the telepresence experience called “on stage” that the company has developed. It is just like the Jimmy Kimmel projection pictured above. The plan is to have it go from the big stage, to a business environment, and eventually to the home. It can also be broadcast in live time across an IP network. There is “full interoperability in the IP world, the solution is ‘standards based’” and “any PC with a broadband connection can tune into the broadcast of telepresence.” The real issue is not what product it can be projected onto or into, but the scaling of the architecture across all products and platforms. They say they have hologram telepresence in 28 countries and that it is being strongly adopted by Cisco. The more networks that are created, the more users can use it. They are targeting everything from medical conferences to rural hubs.

image

The man on the left, from HologramUSA TelePresence™, is in Bangalore, India, and Martin DeBeers and Chuck Stukey on the right, from Cisco, are in San Jose, California.

In the picture above the Cisco guys are getting ready to toss a ball out to the audience, but admit they “haven’t yet figured out how to do that.” However, they are partnering with major telecom carriers like BT, Deutsche Telekom, Vodafone and EE.

Cisco is using a NetRiders competition to sponsor a ‘virtual’ team with a sign-on bonus, and to make this a new emerging technology it can develop.

But Holograms USA has not stopped there. They are setting their sights on Washington D.C., the nation’s capital, and hired the Northern Virginia-based firm U.S. Government Relations International to present their technology to politicians. Holograms USA already helped Narendra Modi become prime minister of India, allowing him to address a number of political rallies simultaneously. But does the world really need a virtual Donald?

What Do Computers Dream Of When They Dream?

posted Sep 10, 2015, 3:11 AM by Ellen Pearlman   [ updated Sep 10, 2015, 4:21 AM ]

image

Calling Agnes Martin, with a sprinkle of Sol LeWitt and a dab of Vincent Van Gogh - computer image processed by Günther Noack, Software Engineer

image

Agnes Martin - Cover of the Book “The Nineties And Beyond”

image

Sol LeWitt - Color Bands, 2000

image

Vincent Van Gogh, Madhouse Garden Painting

image

I dream of Spiral Horned Antelopes - Original photo by Zachi Evenor

Google research engineers Alexander Mordvintsev, Christopher Olah, and intern Mike Tyka are deep into artificial neural networks, image classification and speech recognition. According to the authors, the way this works for computer-generated images is that they use “10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the ‘output’ layer is reached. The network’s ‘answer’ comes from this final output layer.” So stacking and layering, with links between the layers that are mathematically determined and tweaked, is how they build it. What one layer looks for in another, and the amount of ‘noise’ involved, is the ‘dreaming’ part of the equation for computers, if you want to draw an analogy. Aren’t dreams collections of random image noise in humans as well?
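The “dreaming” trick itself is gradient ascent on the input image: pick a layer, then nudge the pixels so that layer’s activations grow, over and over. Here is a condensed, generic PyTorch sketch of that idea - not Google’s original code; the layer choice is arbitrary and recent torchvision is assumed.

```python
# Generic "inceptionism"/DeepDream-style loop: maximize a chosen layer's activations.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
layer_acts = {}

def hook(module, inp, out):
    layer_acts["target"] = out

model.inception4c.register_forward_hook(hook)    # pick a mid-level layer

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(20):
    optimizer.zero_grad()
    model(img)                                   # forward pass fills layer_acts
    loss = -layer_acts["target"].norm()          # ascend the activation magnitude
    loss.backward()
    optimizer.step()
```

Choosing an earlier layer tends to amplify edges and simple textures, while a later layer pulls out whole objects, which matches the lower-layer/higher-layer distinction described below.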

The authors say they just pick an image, then pick a layer, and let the network algorithm go to work. Each layer differs in complexity. For example, lower layers produce strokes or “simple ornament-like patterns” because they deal mostly with edges and their orientation.

image

Plain old photo of the simple sky and clouds. Not much going on here.

Using higher-level layers, images begin to emerge, manifesting an ‘over-interpretation’.

image

This is what you might see if you were on LSD or peyote or mescaline. Not these exact images of course, but depending on what you took, how high the dosage, what your life was like, and what environment you took the drugs in, something like this could happen to you - or your machine learning computer.

The engineers call this technique “inceptionism,” after the neural network architecture it runs on.

Some of the more layered images from the engineers’ work in inceptionism look like they came straight out of the 1960s and 1970s psychedelic movement.

image

Tutti Frutti ? Neural net “dreams”— generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory

image

Very radical Timothy Leary-on-LSD-type images generated by inceptionism. Neural net “dreams” - generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory

image

This is your cat on drugs, you on drugs - or inceptionism according to your computer. Take your pick.

I used it on this picture, one from ‘environments’.

image

I Love NY parking lot photo

This is what the neural network saw when I tried it, and what it commented. The image is awful, but it did recognise that it was “outdoor,” “Alley,” “Parking Lot,” “natural light,” “brick,” and “dirty” (hey, that’s not fair!).



This is the way, in a visualisation of the algorithm, they figure it out. They use something called ‘drawNet’ that combines images. How it determines what to use and why is a bit too technical for my understanding at the moment, but it’s clear they run and compare lots of images in a database and end up with a composite image.

Getting To Know You, Getting To Know All About You Redux

posted Sep 5, 2015, 3:42 AM by Ellen Pearlman

image

Man wearing a stylish brainwave interruptor. Photos by Marco Zanin

Are you worried about unknown, roving security forces intercepting your brainwaves? Sure, they can snatch your cell phone memes, Instagram, Pinterest and Snapchat pics, Tinder likes, you name it. But what about your actual thoughts? Who you gonna call? Brainwave Busters?

“Neuro Imaging Surveillance,” or NIS for short, by designer Lisa Cori Chung (as opposed to the strictly medical Surveillance Neuro Imaging) is the stylish, one-size-fits-all anti-brainwave-detection device for those who are worried about being on anyone’s current security profile list.

Fabrica, an Italian design research center, worked with Chung and Catlin Morris to design pieces that take a unique approach to the subject. They don’t stop invasive scan technologies per se but, like a chameleon, change the wearer’s sensory input through diversionary tactics. This means hats that send out impulses of sound through bone conduction, a gently shocking collar, and a mask that sends out pulsing, flashing lights.

image

Flashing Lights sensor unmasked

The piece was commissioned by Wearable Futures. The concept behind it is not actually so far-fetched. In 2013 Edward Snowden revealed that “The NSA can determine whether your dreams indicate a subversive mindset, but they’re not yet at the point where this technology can interpret specific images or words. It’s still being tested, and it promises to be the most powerful tool yet for spying on American citizens. In dreams, people cannot censor themselves like they can while awake, so the NSA hopes this will help unravel even the most carefully hidden terror attacks.”

image

It comes in a variety of flavors, so to speak. The first is the ‘electric shock’ one, which is sort of low level and rather buzzy and involves skin conduction.

image

Electric shock sensor

The next one is bone-conducted sound, which is a bit like cyborg Neil Harbisson’s use of the occipital bone.

image

Bone conducted sound sensor

Finally there is flashing lights, which disrupts all sorts of alpha, beta, theta and gamma waves.

image

Look up in the sky - its a bird, its a plane, its flashing lights sensor

Since new algorithms can detect activities like gait, running, hanging out, and even how one holds onto objects, there is no end in sight for the fun things that can be bounced around big data repositories. The way Snowden suggests this would work is that intense x-rays would be beamed at the human brain from roving cars with specially tuned radar systems. Animals and other living beings would be filtered out with special algorithms, so there’s no chance Fido, wanting his treats, will be included.
