
Computer can read letters directly from the brain - turing
http://www.ru.nl/english/general/news_agenda/news/@910991/computer-can-read/
======
dnautics
To be clear - if my understanding of this is correct, this is the computer
reading letters directly from the VISUAL CORTEX. So this isn't the computer
reading a mind so much as a computer tapping the visual processing conduit.
You probably couldn't "think of a letter" and have the machine figure it out.
Something similar but cruder was achieved in cats (horizontal lines vs.
vertical lines, using direct electrode implantation) about a decade ago.

What is impressive (if this article is not fraudulent or overinterpreting) is
that it's a) done in humans, which realistically shouldn't be too much of a
stretch from cats, and b) done non-invasively using fMRI. We're NOT entirely
sure what we're measuring with fMRI - it's supposedly increased blood flow in
the brain, but what that has to do with the underlying electrical activity is
not 100% sussed out.

Aside: When I was in grad school there was this brilliant girl who somehow got
sidetracked and burnt out in the lab she was in, and started dropping out for
weeks to isolate psychoactive compounds from desert cacti. For her qualifying
independent proposal, her presentation was basically two PowerPoint slides
that said "test out LSD in cats". Naturally, she failed, but she had this
amazing hypothesis about how LSD works, and I understand why she wanted to do
it in cats... and I'm 99% sure she failed to communicate this to her
committee. She did, however, get a nice severance package and got to attend
Albert Hofmann's 100th birthday party.

~~~
hannibal5
The visual cortex is not just processing what we see through our eyes; there
is also feedback from higher brain regions back to the visual cortex. So when
you are dreaming, your visual cortex activates.

Things get messier, though. For example, images taken from a cat's visual
cortex are much clearer, because the cat is anesthetized and the visual
cortex is only processing data from the eyes. If the cat were awake, the
image would be much harder to read.

Relevant research, with video reconstruction through much of the processing:
[http://newscenter.berkeley.edu/2011/09/22/brain-movies/](http://newscenter.berkeley.edu/2011/09/22/brain-movies/)

~~~
dnautics
Sure, I guess I should have mentioned there's some feedback. That's why I
said you "probably couldn't"... I don't think we know how closely the
activated visual cortex corresponds to the imagery we see when we are
dreaming.

------
1qaz2wsx3edc
_tinfoil_ 20 years from now the headline will be: "TSA brain scanners achieved
by NSA, citizens shocked but docile."

~~~
rhizome
A more apt comparison would be that in 20 years it's discovered that all
baseball caps of the previous decade had the NSA's brain scanners in them,
which NewEra denies the whole time.

~~~
qw
Only those with bad thoughts have something to hide. You can't let the bad
guys win

------
abrichr
_The researchers 'taught' a model how small volumes of 2x2x2 mm from the
brain scans - known as voxels - respond to individual pixels._

There have got to be millions of neurons per 8 mm^3 of brain matter. I'd be
interested to see what the images looked like before the prior knowledge was
introduced.
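
As a toy illustration of the kind of model being described - assuming, purely
for the sketch, a linear pixel-to-voxel encoding with made-up dimensions, fit
by ridge regression; the paper's actual model surely differs - decoding with
no prior at all looks something like this:

```
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 10 * 10   # a tiny 10x10 stimulus, not the paper's real resolution
n_voxels = 1200      # the voxel count quoted downthread
n_train = 500        # number of training presentations

# Hypothetical "true" encoding: each voxel responds as a noisy linear
# combination of pixel intensities.
W_true = rng.normal(size=(n_voxels, n_pixels))

X_train = rng.normal(size=(n_train, n_pixels))   # training images
Y_train = X_train @ W_true.T + rng.normal(scale=5.0, size=(n_train, n_voxels))

# "Teach" the model how voxels respond to pixels: ridge regression per voxel.
lam = 10.0
A = X_train.T @ X_train + lam * np.eye(n_pixels)
W_hat = np.linalg.solve(A, X_train.T @ Y_train).T   # shape (n_voxels, n_pixels)

# Decode an unseen image from its voxel responses by plain least squares
# against the learned model - i.e. no image prior at all.
x_new = rng.normal(size=n_pixels)
y_new = W_true @ x_new + rng.normal(scale=5.0, size=n_voxels)
x_rec, *_ = np.linalg.lstsq(W_hat, y_new, rcond=None)

print("correlation with the true image:", np.corrcoef(x_new, x_rec)[0, 1])
```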

~~~
turing
I think that's part of what makes this so incredible. The current model uses
only 1,200 voxels. With the higher resolution scanner mentioned at the end of
the article, they will be able to use 15,000. With that in mind it seems this
approach could have a lot of potential for further improvement.

~~~
eykanal
While that's something to think about, keep in mind that the level of
correlation between those voxels is ridiculously high. Simply adding more
voxels isn't necessarily adding useful information. What's more, fMRI is
based on the BOLD effect [1], whose signal is blurred across a fairly large
area. While there is potential for improvement, there are a number of pretty
fundamental limitations to this technology.

[1]:
[http://en.wikipedia.org/wiki/Functional_magnetic_resonance_i...](http://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging#BOLD_hemodynamic_response)
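
To make that concrete: if each "voxel" is a moving average of its neighbours
(a crude 1-D stand-in for BOLD blurring - the kernel width and voxel counts
below are invented), the number of statistically independent dimensions grows
far more slowly than the raw voxel count:

```
import numpy as np

def effective_dims(cov):
    # Participation ratio of the eigenvalue spectrum: a rough count of
    # statistically independent dimensions in the measurement.
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

def blurred_covariance(n_voxels, blur):
    # Covariance of a moving-average process: each voxel mixes the activity
    # of `blur` neighbours, a crude 1-D stand-in for BOLD smoothing.
    d = np.abs(np.arange(n_voxels)[:, None] - np.arange(n_voxels)[None, :])
    return np.clip(blur - d, 0, None) / blur ** 2

for n_voxels in (300, 1200):
    for blur in (1, 8):   # blur=1 means perfectly independent voxels
        dims = effective_dims(blurred_covariance(n_voxels, blur))
        print(f"{n_voxels:5d} voxels, blur {blur}: {dims:7.1f} effective dims")
```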

------
lambdaloop
Link to original article:
[https://www.dropbox.com/s/x2z14gw8017ezbg/for%20reddit.pdf](https://www.dropbox.com/s/x2z14gw8017ezbg/for%20reddit.pdf)

(Via reddit:
[http://en.reddit.com/r/Scholar/comments/1kc8mw/request_linea...](http://en.reddit.com/r/Scholar/comments/1kc8mw/request_linear_reconstruction_of_perceived_images/))

The Gallant lab at UC Berkeley did something somewhat similar about 2 years
ago. See here:
[https://www.youtube.com/watch?v=KMA23JJ1M1o](https://www.youtube.com/watch?v=KMA23JJ1M1o)

From what I understand, both reconstructions involve setting up models of
brain activity for vision, learning the parameters by machine learning from
patients, and then using Bayesian inference to determine what is being seen.
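
In the simplest Gaussian case that recipe even has a closed form. A toy
sketch, assuming a linear encoding model and a smoothness prior on the image
(every dimension and parameter below is invented):

```
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_vox = 64, 400

# Stand-in for a learned encoding model: voxel responses linear in pixels.
B = rng.normal(size=(n_vox, n_pix))
sigma = 2.0                                # measurement noise level

# Image prior: neighbouring pixels tend to be similar (smoothness).
d = np.abs(np.arange(n_pix)[:, None] - np.arange(n_pix)[None, :])
prior_cov = np.exp(-d / 4.0) + 1e-9 * np.eye(n_pix)

# Simulate "seeing" an image drawn from the prior, plus a noisy response.
x_true = np.linalg.cholesky(prior_cov) @ rng.normal(size=n_pix)
y = B @ x_true + sigma * rng.normal(size=n_vox)

# Bayesian decoding: the maximum a posteriori image given the responses,
#   x_map = (B^T B / sigma^2 + prior_cov^{-1})^{-1} B^T y / sigma^2
precision = B.T @ B / sigma ** 2 + np.linalg.inv(prior_cov)
x_map = np.linalg.solve(precision, B.T @ y / sigma ** 2)

print("correlation with the seen image:", np.corrcoef(x_true, x_map)[0, 1])
```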

While incredibly cool, this is still a long way from reading thoughts, and
even further if we're not allowed to learn the parameters for that subject
first. Right now we can only _kinda_ reconstruct what someone is seeing,
which is really not much better than a camera.

------
reustle
I'm really looking forward to my brain powered keyboard. I was close to buying
an Emotiv headset a few times to attempt a build, but I don't think the
resolution was there, nor was I able to build the machine learning end.

~~~
dnautics
I'm considering doing this. Have you tried playing around with simple neural
nets? Andrew Ng's machine learning Coursera course is really phenomenal and
drops you into doing neural nets using Octave, which makes the understanding
really easy. After doing it in Octave and writing some simple discriminators,
I was able to really rapidly write neural nets in several languages - I even
wrote one in Python, a language I don't 'know', to play around with on
Quantopian. Needless to say, the neural net lost a lot of virtual money, but
I figured out why. =)
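
For a feel of what "simple" means here, the whole thing fits in a page of
numpy. A toy discriminator trained on XOR (my own sketch - nothing to do with
the course's actual assignments):

```
import numpy as np

rng = np.random.default_rng(3)

# XOR: the classic toy discrimination problem for a first neural net.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, trained by plain batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss: delta = prediction - target).
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```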

------
jacquesm
There was a video a while ago about a paralysed woman controlling a robotic
arm:

[http://www.theguardian.com/science/video/2012/dec/17/paralys...](http://www.theguardian.com/science/video/2012/dec/17/paralysed-woman-controls-robotic-arm-mind-video)

Hard to pick which of the two gives me more of a living-in-the-future
feeling.

Very impressive.

------
Zenst
Not long ago lasers, phones, and computers were all very large. MRI machines
today are very large, but one day... I'm not saying this approach is the
best, or that there aren't alternatives that would be easier to adapt into
something consumer-ready.

One thing I do know: in the not-so-distant future HATS will come back into
fashion, and with that I hope nobody is allowed to patent the use of hats to
contain sensors of any kind. But I have hope that the whole patent area will
be in a far better state of play by then.

I also suspect a whole new kind of social issue will arise in the form of
thought Tourette's, be it having Siri search for porn or downloading the
latest X-ray filter for Glass - interesting times ahead. Me, I'm still
waiting for a grammar nazi app that fixes the mistakes instead of complaining
about them. We all have our dreams, and to think "beer" and have a robot
fetch you a cold one is still a dream. But we're getting closer.

~~~
samatman
A small MRI machine is, to a first approximation, equivalent to a
room-temperature superconductor.

The world that has them, has many wonderful things.

------
networked
How groundbreaking is this? On that note, what is the state of the art for
brain-computer interfaces, invasive or non-invasive, with which the user can
actually input data into a computer?

As far as I understand the method described in the article, it could
eventually be employed as an alternative to eye tracking for computer input,
i.e., instead of determining what letter the user's eyes are looking at by
using cameras pointed at their face and computer vision you would scan the
user's visual cortex directly. One can immediately think of applications this
would have even outside of the assistive technology market, e.g., for mobile
input.

------
shitlord
It would be really cool if we could develop this to the point that it would
work in humans and with a minimal amount of hardware. Imagine the
possibilities, coupled with wearable computing: we could digitize SO much
information, from landmarks to museums to captchas and more... all without an
obnoxious camera.

~~~
emiliobumachar
The reason a small camera is obnoxious is not the hardware. I think people
would raise much the same objections to being recorded without permission if
it was a human eye plus brain scanner.

(I'm not making a statement about whether these objections are right or
wrong; I'm just saying this technology will not change the debate.)

------
Houshalter
This is a much cooler example
[http://youtu.be/nsjDnYxJ0bo](http://youtu.be/nsjDnYxJ0bo)

If I'm understanding the description correctly, they are just training it to
recognize which image the brain activity is closest to, and taking the slice
of a YouTube video that most closely matches it.
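
If that's right, the decoding step is essentially nearest-neighbour search
over a clip library: use the learned model to predict the voxel response for
every candidate clip, then pick the clip whose prediction best matches the
measurement. A toy sketch with random stand-ins for the model and the clip
features (all dimensions invented):

```
import numpy as np

rng = np.random.default_rng(4)
n_feat, n_vox, n_clips = 50, 300, 10000

# Stand-in for a learned encoding model: maps clip features to predicted
# voxel responses.
B = rng.normal(size=(n_vox, n_feat))

# A large library of candidate video-clip features (random stand-ins here).
library = rng.normal(size=(n_clips, n_feat))
predicted = library @ B.T                   # predicted response per clip

# The subject watches clip #1234; we measure a noisy voxel response.
measured = B @ library[1234] + rng.normal(scale=3.0, size=n_vox)

# Decode by nearest neighbour: which clip's predicted response best matches?
# (Correlation rather than Euclidean distance, to ignore overall scale.)
z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
m = (measured - measured.mean()) / measured.std()
best = np.argmax(z @ m)
print("best-matching clip:", best)          # 1234, if the match works
```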

I imagine if they used a more efficient method or trained it more, they
could do way better. It seems like most of the data needed to build an
accurate picture of what they are seeing is already there.

------
andyidsinga
right now I'm thinking ... h e l l o n s a

------
tlrobinson
Get your tin foil hats ready.

~~~
knotty66
Tin foil hat time has long gone.

------
mosselman
Hey, that's my university :) - what a great surprise.

