
UC Berkeley Scientists Translate Brainwaves into Imagery - uptown
http://abcnews.go.com/Health/MindMoodNews/scientists-youtube-videos-mind/story?id=14573442
======
jlao
Their website has a better description of how it works that is less ambiguous:
<http://gallantlab.org/>

The description of the first video: "The left clip is a segment of the movie
that the subject viewed while in the magnet. The right clip shows the
reconstruction of this movie from brain activity measured using fMRI. The
reconstruction was obtained using only each subject's brain activity and a
library of 18 million seconds of random YouTube video that did not include the
movies used as stimuli. Brain activity was sampled every one second, and each
one-second section of the viewed movie was reconstructed separately."

So they gathered a lot of fMRI data from people watching several hours of
YouTube videos (the training set). They then use this to train some sort of
machine learning algorithm to make a model. The pictures you see in the
article are from running the model on a test set that does not contain any
of the videos from the training set.
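The pipeline described above can be sketched in a few lines. Everything here is an illustrative toy of my own (the array sizes, the linear encoding model, and the "average the best library matches" step are assumptions based on the lab's description, not their actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes here are illustrative toys, not the study's real dimensions.
n_train, n_voxels, n_features = 200, 50, 10

# Training set: feature vectors for each viewed clip, plus the fMRI voxel
# responses they evoked (simulated here by a hidden linear map plus noise).
train_features = rng.normal(size=(n_train, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
train_voxels = train_features @ true_map + 0.1 * rng.normal(size=(n_train, n_voxels))

# 1. Fit a linear encoding model: stimulus features -> voxel responses.
weights, *_ = np.linalg.lstsq(train_features, train_voxels, rcond=None)

# 2. Reconstruction: given the brain response to an UNSEEN clip, score every
# clip in a big library by how well its predicted response matches the
# measured one, then average the best matches (the averaging is why the
# published reconstructions look blurry).
library = rng.normal(size=(1000, n_features))
predicted = library @ weights

test_clip = rng.normal(size=n_features)        # the clip the subject "watched"
measured = test_clip @ true_map                # its measured brain response

errors = np.linalg.norm(predicted - measured, axis=1)
top = np.argsort(errors)[:30]                  # 30 closest library clips
reconstruction = library[top].mean(axis=0)     # composite of best matches
```

Note that the library clips never include the test clip, matching the "did not include the movies used as stimuli" constraint in the quote.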

~~~
mortenjorck
So in essence, the researchers are intercepting network traffic from the
visual cortex while a subject is given a certain stimulus, then matching that
traffic signature with signatures of similar stimuli. Which is to say, they're
doing some very interesting traffic analysis, but aren't actually decoding any
of the information itself.

~~~
neilk
Yes, but it is still a brilliant, brilliant hack. Reminds me a little of
Norvig's observation that having enormous amounts of data changes everything.

He was referring to AI algorithms, but seriously, who would have thought that
having YouTube would lead to _this_?

------
astrofinch
"The reconstructed videos are blurry because they layer all the YouTube clips
that matched the subject's brain activity pattern." – What does this mean
exactly? This is beginning to sound more like a cool machine learning trick
and less like mind reading.

~~~
frisco
This line means this study is much less interesting than it sounds at first.
Basically, it sounds like they used the fMRI voxel data to build a classifier
that predicted which frames from the videos were showing, and composited each
frame weighted by its probability. In other words, wtf.
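If that reading is right, the compositing step is just a probability-weighted average of candidate frames. A minimal sketch with made-up frames and probabilities (nothing here comes from the paper):

```python
import numpy as np

# Three toy 2x2 grayscale "frames" and the classifier's probability that each
# one is the frame the subject was viewing (values are made up).
frames = np.array([
    [[1.0, 0.0], [0.0, 0.0]],
    [[0.0, 1.0], [0.0, 0.0]],
    [[0.0, 0.0], [1.0, 0.0]],
])
probs = np.array([0.6, 0.3, 0.1])

# Composite each pixel as the probability-weighted average over frames:
# confident matches dominate, unlikely ones contribute a faint "ghost".
composite = np.tensordot(probs, frames, axes=1)
```

The "ghosting" from the low-probability frames is exactly what makes the output look blurry rather than like a single retrieved clip.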

~~~
jerf
This link will decay in the future, but <http://gallantlab.org/> is the
original source. For a similar image of a bird, it has the text: "The left
clip is a segment of the movie that the subject viewed while in the magnet.
The right clip shows the reconstruction of this movie from brain activity
measured using fMRI. The reconstruction was obtained using only each subject's
brain activity and a library of 18 million seconds of random YouTube video
that did not include the movies used as stimuli. Brain activity was sampled
every one second, and each one-second section of the viewed movie was
reconstructed separately." There's also a useful video.

It seems valid to me. There's no reason to ask them to somehow extract this
visual data "unbiased", without bootstrapping off of video clips like that.

Actually, I'd commend that link to anybody posting complaints here; it covers
everything everybody is saying as of this writing.


~~~
rflrob
> It seems valid to me. There's no reason to ask them to somehow extract this
> visual data "unbiased", without bootstrapping off of video clips like that.

I agree that their approach seems valid. There is a reason to ask them to
extract the visual data in an even more unbiased fashion, though: if we
understand how the brain is wired, then it should be "trivial" to back out the
image from the patterns of activation.

Of course, the previous sentence is making a couple of assumptions that I don't
think are anywhere close to valid. 1) "the brain" implies that there is a
single, nearly completely conserved architecture that is remotely similar from
one person to another; 2) I think you'd need to get the activity to much
higher resolution than fMRI can give you; 3) the stimulus <--> response
mapping is moderately close to bijective, so for a given input, there's only
one set of activity, and vice versa. Still, this study is an interesting first
step on what will, no doubt, be a very long journey to improve the technology.

------
ghc
It seems to me that labs at Berkeley always get a pass when it comes to
vastly overstating the significance of their work. This is so far from
actually reading the visual data from the brain that even mentioning the
future of reading memories seems ludicrous.

~~~
LearnYouALisp
Glorious Glasgow, is that you?

------
antimora
The article is misleading. There is no deconstruction or decoding of brain
waves happening; it's simply correlating stimuli with prior learned/trained
images. If the model was trained on images of dogs and cats, the
"reconstructed" images will be in terms of dogs and cats, which is the basic
limitation of the process.

Source: "In practice fitting the encoding model to each voxel is a
straightforward regression problem." (<http://gallantlab.org/>)
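For what it's worth, "a straightforward regression problem" per voxel can be sketched as ridge regression in closed form. The data sizes, the penalty value, and the use of ridge specifically are my own illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 120 time points of stimulus features and 5 voxels' responses,
# simulated from a hidden linear map plus noise.
X = rng.normal(size=(120, 8))                  # stimulus features per time point
Y = X @ rng.normal(size=(8, 5)) + 0.1 * rng.normal(size=(120, 5))

# Ridge regression solved in closed form; lam is an illustrative penalty.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Each column of W is one voxel's encoding model; residuals should be small.
residual = Y - X @ W
```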

~~~
glenstein
The reconstructed image would also be in terms of the abstract features of
dogs and cats: the shades of color, contours of their bodies, their position
on the screen. And the abstract features could be recombined into an average
image that's completely unlike any dog or cat but resembles what a person is
looking at.

------
shabble
The first thing that occurred to me is "How do you show video to people in an
MRI?" My scan involved wearing headphones made entirely of plastic[1] due to
the fact you're inside a ridiculously strong magnetic field. They didn't
actually contain the audio speakers - they were acoustically coupled to
something a few metres away.

I suppose if it were properly anchored / built into the machine, you might be
able to mount an LCD panel, and then somehow calibrate around it.
Alternatively, something complicated involving projection and mirrors, maybe.
The scanning tunnel is pretty damn narrow though.

It's strange, but when reading about all sorts of interesting science, I end
up wondering about the methodology sometimes more than the actual results.

[1] something a bit like these:
<http://www.scansound.com/xcart/product.php?productid=16172&#...> Gave me
flashbacks to X-Men:
<http://comicattack.net/wp-content/uploads/2010/02/42.jpeg> :)

~~~
Kliment
Having done this (showing video to people in an MRI), I think I'm qualified to
respond.

There are two methods we've used. One was a goggle system where an array of
optical fibers, one per pixel, is brought from the scanner tube to the
control room and coupled to an LED display. The resolution is atrocious, and
the thing is heavy, but it gets attached to the head coil so the person inside
does not have to bear the weight.

The method we are using now is to attach a mirror to the head coil, and have a
huge flatscreen outside the tube. You show mirrored images on the screen, and
since the screen is big enough it covers the entire visual field. Works
better.

~~~
noonespecial
Might I suggest a projector, with a zoom showing on a small piece of frosted
plexi? I have used this method to project an image onto a "screen" that was
underwater (and quite invisible until struck by the projector beam).

~~~
Kliment
Yeah, there are solutions like this. But the big screen is what we have, and
it works great.

------
parallel
OK, I think this is a little misleading at a glance. From my read, they didn't
show a video to the subject and then create images from the brainwaves.
Instead, they showed videos to subjects and recorded the responses, which
allowed them to create a mapping of response to video. You then show videos
later, read the response, and look up the video.

The article states: "The reconstructed videos are blurry because they layer
all the YouTube clips that matched the subject's brain activity pattern."

They could have just shown the best match, but that would not have produced a
cool blurry image that is very easy to misinterpret as being generated
directly from the brainwaves. More than a little slippery.

~~~
allending
"The reconstruction was obtained using only each subject's brain activity and
a library of 18 million seconds of random YouTube video that did not include
the movies used as stimuli."

Not totally scifi awesome, but still pretty cool and I think it is a valid
approach. Sounds like the bigger the library of clips mapped to brain
activity, the more the technique converges to a desirable result.

------
0x12
Would this allow you to scan the brains of subjects that are sleeping in an
MRI to reconstruct their dream images?

Reconstruction of what the subjects are currently looking at is interesting
but a direct window into the imagination would be something else.

~~~
zerostar07
As they mention in the article, it is known that dream imagery does invoke
responses in the visual cortices (I believe even in V1). However, the
responses are weak in the early cortices, so it's not currently possible to
"read" the dream imagery (I assume they would have done it if possible).

I am not sure how well the algorithm would work without the V1 activation,
since V1 is retinotopically organized, making it quite easy to decode.

~~~
0x12
Thank you.

So V1 is 'raw' and later cortices have performed more processing on the data
causing it to be higher level, which in turn makes it harder to translate it
back to a visual?

~~~
zerostar07
Yes that's more or less the picture. The extraction of images from V1 has been
performed before, the novelty in this paper is that they reconstruct motion
too. In the case of dreams, the flow of information is reversed: higher level
areas project to lower level ones, creating the illusion of vision. It's not
yet known whether an actual image is formed in low level visual cortices
during dreams.

------
mtinkerhess
It doesn't surprise me that they're able to do this. What does surprise me is
how much the composite video looks like something out of Minority Report. It
fulfills my fantastical right-brained expectation of what this kind of "mind
reading" should look like—blurry, disjointed, imprecise impressions of a
scene—and also makes sense to my left brain as I try to imagine the underlying
machine learning algorithms.

~~~
chime
The straight lines in the background in the reconstructed video seem
unrealistic. If my understanding of human vision is correct, we don't see
things like a 2D pixelated display but with varying degrees of focus on items,
backgrounds, and people. It's not like I see a straight line on a wall or
floor and my brain does Bresenham's.

------
neilk
This is shameless threadjacking, but their videos look a lot like a little art
project I did some years ago with Flickr:

<http://www.flickr.com/photos/brevity/sets/164195/>

It's kind of a similar hack. There's a many to one relationship of images to a
tag. Then that relationship is reversed and averaged out to get a consensus
image. Of course this only works at the linguistic/labelling level, not at a
brain level.
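The core of that Flickr hack is just a per-pixel mean over every image sharing a tag. A toy sketch (the tag, image sizes, and noise model are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Twenty hypothetical photos that all carry the same tag, as 8x8 grayscale
# arrays: each is a noisy variation on one shared underlying scene.
scene = rng.random((8, 8))
photos = scene + 0.3 * rng.normal(size=(20, 8, 8))

# The "consensus image": per-pixel mean across every photo with that tag.
# Idiosyncratic details average out; whatever the photos share remains.
consensus = photos.mean(axis=0)
```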

------
danmaz74
I have been wondering for some time if it would be possible to find a
correlation between some brain imaging techniques (MRI, PET...) and the act of
consciously lying. This could create the ultimate lie-detecting machine.

Potentially, it could be much easier than recognizing full images, as there
would only be two possible outputs to discriminate.
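In that framing, a lie detector is just a binary classifier over voxel patterns. A minimal nearest-centroid sketch on made-up data (the clean mean shift between "truth" and "lie" patterns is a pure assumption; no real fMRI lie signature is implied):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up voxel patterns: class 0 = truthful, class 1 = lying, separated by a
# hypothetical mean shift. Real lie-related signals would be far less clean.
truth = rng.normal(0.0, 1.0, size=(100, 20))
lies = rng.normal(0.8, 1.0, size=(100, 20))
X = np.vstack([truth, lies])
y = np.array([0] * 100 + [1] * 100)

# Nearest-centroid rule: about the simplest two-output discriminator.
c0, c1 = truth.mean(axis=0), lies.mean(axis=0)

def predict(x):
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

accuracy = np.mean([predict(x) == t for x, t in zip(X, y)])
```

With only two outputs, even this trivial rule does well on the toy data; the hard part, as the reply below notes, is whether any such separable pattern exists in the brain at all.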

~~~
zerostar07
Turns out it's actually harder, because we can't locate any 'lie signals' or
'lie areas', while the visual cortex is very large and organized.

~~~
danmaz74
You mean that we already tried to locate them and failed, or that we should
start looking?

Anyway, my point is that _if_ there is some neural activity pattern that is
correlated with lying, it could be easier to extract that information than it
is to extract full images from the visual cortex.

~~~
zerostar07
<http://en.wikipedia.org/wiki/Lie_detection#fMRI>

~~~
danmaz74
So, there are already some indicators... thanks for the link!

------
zerostar07
This is cool engineering work, sure to generate inflated headlines.
Nevertheless, one would use a similar approach to read dreams, if only the
detectors could detect the brain activity in V1 during sleep. Idea for creepy
pillows??

------
prtk
Big brother is "watching" you(tube)!

~~~
zerostar07
/me rushes to domainsquat 'creepypillows.com'

