
IBM: Mind reading is less than five years away. For real. - azazo
http://news.cnet.com/8301-13772_3-57344881-52/ibm-mind-reading-is-less-than-five-years-away-for-real/?tag=rtcol
======
lars
I've recently completed a master's thesis on EEG-based mind reading, and I
think I have a fairly good grasp of the state of the art in this field. I also
have a copy of Kurzweil's The Singularity is Near by my bed, and I'm usually
strongly optimistic about technology. But if IBM are talking about EEG-based
technology here, I would have to bet that they are flat-out wrong on this one.
I'll explain why.

Something like moving a cursor around by thinking about it, or thinking about
making a call and having it happen, requires a hell of a lot of bits of
information to be produced by the brain-computer interface. With the current
state of the art we can distinguish between something like 2-6 classes of
thoughts sort-of reliably, and even then it's typically about thinking of
particular movements, not "call mom".

Importantly, what most people look for in the signal (the feature in machine
learning terms) are changes in signal variance. And there are methods to
detect these changes that are in some sense mathematically optimal (which is
to say they can still be improved a little bit, but there won't be any
revolutionary new discoveries). There may be other features to look for, but
we won't be getting much better at detecting changes in signal variance.
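
To make that concrete, here is a toy sketch of the variance-based pipeline I
mean, in the spirit of Common Spatial Patterns (one standard method of this
kind). It assumes trials are already band-pass filtered, and it is an
illustration rather than anyone's production code:

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_filters=2):
        # trials_*: lists of (channels x samples) arrays, band-pass
        # filtered beforehand (e.g. 8-30 Hz for motor imagery).
        cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
        cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
        # Generalized eigenvalue problem: find directions that maximize
        # variance for one class relative to the other.
        _, vecs = eigh(cov_a, cov_a + cov_b)
        # The most discriminative filters sit at both ends of the spectrum.
        idx = np.r_[:n_filters, -n_filters:0]
        return vecs[:, idx].T

    def log_var_features(trial, filters):
        projected = filters @ trial      # spatially filter one trial
        var = projected.var(axis=1)      # the signal variance in question
        return np.log(var / var.sum())   # normalized log-variance feature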

Some methods can report results like 94% accuracy on a binary classification
problem. Such a result may seem "close to perfect", but it is averaged over
several subjects, and likely varies between, say, 100% and 70%. For the people
with 70% accuracy, the distinguishing features of their signals are hidden for
various reasons. And this is for getting one bit of
information out of the device. Seems like such a device would need to work for
everyone to be commercially successful.
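
To put a number on "one bit": the standard Wolpaw information-transfer-rate
formula converts classification accuracy into bits per decision. A quick
back-of-envelope of my own, nothing from the article:

    from math import log2

    def bits_per_trial(p, n_classes=2):
        # Wolpaw ITR: bits yielded per decision at accuracy p.
        if p >= 1.0:
            return log2(n_classes)
        return (log2(n_classes)
                + p * log2(p)
                + (1 - p) * log2((1 - p) / (n_classes - 1)))

    print(bits_per_trial(0.94))  # ~0.67 bits per decision
    print(bits_per_trial(0.70))  # ~0.12 bits per decision

So even the subjects at 94% get well under one bit per decision, and the
subjects at 70% get almost nothing.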

In computer vision we have our own brains to prove that the problems can be
solved. For EEG-based brain-computer interfaces, such proofs don't exist.
There are certain things you probably can't detect from an EEG signal, meaning
the distinguishing information probably isn't there at all. I'm easily willing
to bet IBM money that who I would like to call cannot be inferred from the
electrical activity on my scalp. (Seriously IBM, let's go on longbets.org and
do this.)

~~~
tokenadult
Thanks for the interesting detailed information about EEG resolution, which I
can attest is in accordance with what I have read about neuroelectrical
interaction in other contexts.

But what is to me implausible about thinking "phone Mom" and having my
computer do it for me is that this scenario envisions an unusually high degree
of usability that no consumer-facing software writers have ever achieved.
Right now, on a BRAND NEW computer system using mostly application programs
recommended by Hacker News readers (for example, I am using Chrome to browse
the Web), I can't count on my computer doing what I want even if I have my
hands on the keyboard or a hand on my mouse. User-interface design appears to
be HARD--or at least, it is rarely done right--so I am very doubtful that in
five years or even twenty-five years I'll be able to use a computer that
really does what I think.

~~~
swombat
Hate to be that guy, but Siri is getting pretty close to this. "Phone Mom"
will work with an iPhone 4S.

------
brendn
Can someone change this to link to the actual IBM blog entry [1] instead of
the CNET fluff piece?

[1] [http://asmarterplanet.com/blog/2011/12/the-
next-5-in-5-our-f...](http://asmarterplanet.com/blog/2011/12/the-
next-5-in-5-our-forecast-of-five-innovations-that-will-alter-the-landscape-
within-five-years.html)

------
narrator
The "No Passwords" prediction is overlooking a big stumbling block: biometric
data is not that secret and cannot be changed once intercepted. You might as
well just walk up to an ATM, and speak your social security number. So the ATM
is secure, but it's just another trusted client with all its associated
problems.

The only thing biometric data is really good for is keeping track of people
when they don't want to be tracked or want to hide their identity. For
example, it would be a useful means of tracking and identifying people in a
prison or a border checkpoint.

------
blhack
Linkbaity headline, there.

"Mind reading" already exists kindof sortof maybe good enough to cnet to write
an article about.

This is at the top of my Christmas list: <http://emotiv.com/>

In fact, here is a comparison of consumer Brain Computer Interfaces:
[http://en.wikipedia.org/wiki/Comparison_of_consumer_brain%E2...](http://en.wikipedia.org/wiki/Comparison_of_consumer_brain%E2%80%93computer_interfaces)

~~~
bgalbraith
As a current PhD student working in this area, I caution you about getting too
excited about the Emotiv EPOC. We've got one in the lab we've started to work
with as a potential low-cost EEG system. The out-of-the-box software is kinda
hokey, so you may end up with an expensive novelty you use once or twice.

On the technical side, it does seem to be the best current option for consumer
EEG, though most of these devices are actually strongly influenced by, if not
heavily reliant on, facial muscle movements.

~~~
blhack
Have you worked with the open source python library for it?

~~~
bgalbraith
No, I haven't, though I'm curious which library you are referring to.
We've been developing our own Python wrapper interface to their API, though
this is to share a common interface with the other EEG DAQ (e.g. g.tec) Python
wrappers we've been developing.

~~~
blhack
<https://github.com/daeken/Emokit/blob/master/Announcement.md>

(I've heard a few people make similar complaints to yours, which is really
really saddening to me. Regardless I'm still going to buy one and see what I
can do with it :))

------
eykanal
lars's comment (<http://news.ycombinator.com/item?id=3371968>) is right on
target. I recently finished my PhD in biomedical engineering, and _the_ hot
field that everyone wants to go into is what we're calling BMI - Brain-Machine
Interfaces. The trick is, there are very few types of signals that can be
reliably determined from these brain-signal reading devices.

Broadly speaking, there are two kinds of tasks that can be easily
accomplished: anything involving moving limbs, or simple, low-degree-of-freedom
tasks (like moving a computer cursor). After months and months of training, a
person can learn to manipulate numerous degrees of freedom with pretty
good reliability (i.e., move a robotic arm, AND control the mechanical pincer
at the end), but this type of work doesn't generalize to other types of
thought. We're nowhere near being able to extract sentences or words or being
able to determine what complex scene is being viewed simply using brain
activity patterns.

------
bgalbraith
When talking about EEG-based "mind reading", there are three primary methods
currently under study (when looking at locked-in patients at least):

1) P300 - This refers to a predictable change in the EEG signal that happens
around 300 milliseconds after something you were expecting happens. For
example, if I am looking for a particular letter to flash amongst a grid of
letters all randomly flashing, a P300 will be triggered when the letter I want
flashes.

2) SSVEP - This stands for steady state visually evoked potential. This
approach uses EEG signals recorded from over the visual cortex, which responds
to constantly flickering stimuli. Given a few seconds, the power at the
frequency of the attended stimulus increases in the EEG, which can then be
detected and used to make a decision (see the sketch below).

3) SMR - This stands for sensorimotor rhythms, and is an approach that looks
for changes in EEG activity over the motor cortex. Successful approaches have
been able to identify when you imagine clenching your left or right fists, or
pushing down on your foot. Unlike the other two, this does not require
external stimuli.

SMR is the most like what we consider mind reading, since the user initiates
the signal, while the other two infer what a person is looking at. It is
limited to only 2-3 degrees of freedom at the moment, however, and is the
hardest signal to work with. It is susceptible to external factors such as the
current environment and mental state, and not everyone seems to be able to
generate the needed signals. SSVEP, while lacking the wow factor of SMR, is
much easier to work with and is a much more stable signal.
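
As a toy illustration of the SSVEP approach (real systems use sturdier
statistics, e.g. canonical correlation analysis, and the sampling rate and
flicker frequencies below are arbitrary placeholders):

    import numpy as np

    def classify_ssvep(eeg, fs=256, stim_freqs=(8.0, 10.0, 12.0, 15.0)):
        # eeg: 1-D array from an occipital channel, a few seconds long.
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

        def band_power(target, width=0.3):
            # power in a narrow band around a frequency of interest
            return spectrum[np.abs(freqs - target) < width].sum()

        # SSVEP responses appear at the flicker frequency and its
        # harmonics, so score each candidate stimulus at both.
        scores = [band_power(f) + band_power(2 * f) for f in stim_freqs]
        return stim_freqs[int(np.argmax(scores))]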

Disclosure: I work in this area. Here's a flashy NSF video highlighting our
lab:
[http://www.nsf.gov/news/special_reports/science_nation/brain...](http://www.nsf.gov/news/special_reports/science_nation/brainmachine.jsp)

------
moocow01
I would say, rather, that the capability may be 5 years away. Whether
consumers want it, I'm skeptical. I knew someone who, for reasons I won't go
into, had a computer that they had to control with their eyes (basically a
webcam that tracks the eyes, moves the cursor, and clicks when you wink). It
made me realize that further integration of computing control with a human's
anatomy/biology can create more problems, because there is no filtering
mechanism. When you type on a computer you choose what your computer does by
making deliberate actions, rather than your computer monitoring you and
interpreting your actions. The problem with the latter is that there are many
things you do that do not involve your computer... pick up the phone, throw a
ball for your dog, talk to a coworker, etc. When your computer is monitoring
you for input, it never knows when an action is for it and when it is not. So
in the case of computers based on eye control, the experience is very
problematic when you have to look somewhere else for any reason.

Now, taking it a step further, I can't even imagine how out of control a
computer based on someone's mind would be. Our minds randomly fire off
thoughts non-stop; it's actually incredibly hard to concentrate on one
deliberate thing for a long time (if you've ever tried meditation you realize
this very quickly). How a computer could separate actions meant for it from
the background randomness of the brain seems incredibly difficult, in that
there really isn't a definitive line there at all.

~~~
mikecsh
You seem to be describing a problem that needs to be solved, rather than
evidence that there is anything wrong with the technology. I used to
have speech control turned on on my computer - you can set it to listen to
everything which is basically the situation you describe with all its
problems. Alternatively you can ask for it to listen for a keyword or only to
listen when you press a key. I imagine similar solutions will present
themselves for brain-computer interfaces.
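
As a toy illustration of such a gating scheme, applied to a hypothetical
stream of decoded "thought tokens" (all the names here are invented):

    def gated_commands(tokens, trigger="attention"):
        # Ignore everything until the trigger appears, treat exactly one
        # following token as a command, then go back to sleep.
        armed = False
        for token in tokens:
            if armed:
                yield token
                armed = False
            elif token == trigger:
                armed = True   # the trigger itself is never a command

    stream = ["stray thought", "attention", "call mom", "daydream"]
    print(list(gated_commands(stream)))  # ['call mom']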

~~~
moocow01
I think there is some truth to that, but I'd say the one difference between
speech recognition and brain recognition is that speech is a voluntary action
you control, while your thoughts have a largely involuntary component to them.
Involuntary meaning when someone says "an idea just popped into my head"; the
idea seemingly was not an action deliberately triggered. Think if, while you
were "mind typing" an email, the thought "god I hate my boss" suddenly popped
into your head. If the filtering mechanism of the computer was poor, your
computer might, assuming it was being helpful, shoot off an email to your boss
saying "god I hate you". I guess what I'm saying is that filtering which of
your own thoughts should be interpreted by your computer seems like an
incredibly difficult proposition.

~~~
shadowfiend
Yes, but I think we can also distinguish, in our own minds, the difference
between a thought that is fleeting or passing and a thought that we want to
take action on. Similarly, a successful brain-computer interface should be
able to make that distinction.

------
brianwillis
Previous "five in five" predictions from IBM can be found here:
[http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_f...](http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_future/examples/index.html)

~~~
colton36
So their 5 year prediction from 2006 fell completely on its face.

~~~
cbr
Including: "Our mobile phones will start to read our minds"

------
kingkawn
I am setting an alert in my calendar 5 years from now with the text of this
article and the author's email address.

------
itmag
Does anyone ever feel that neuroscience is getting more and more Lovecraftian
and challenging basic assumptions of what it means to be human? It sometimes
feels like we're at a point in history where all the basic tenets of existence
are being torn down by science and replaced with... nothing. Am I the only one
who gets existential crises from this kind of stuff? :p

It doesn't help, of course, that I'm currently reading this book:
[http://www.amazon.com/Conspiracy-Against-Human-Race-
Contriva...](http://www.amazon.com/Conspiracy-Against-Human-Race-
Contrivance/dp/098242969X)

The Luddite in me hopes that science will never be able to fully pick apart
the human psyche. Here's to having an inscrutable ghost in the machine to keep
us from being mere deterministic flesh-bots...

~~~
wladimir
I wonder too...

There have been other times in history when scientists had the idea that
science was almost complete, that there were just a few things left to sort
out and we'd understand it all (such as around 1900 with mathematics).

We may think we are very near, and then discover something new and find a
whole lot of new questions around the mind and consciousness. I don't think
we're quite there yet.

However, I'm sure "shallow AI" (and maybe "shallow mind reading") will become
more and more important in the near future, which is what IBM is focusing on.

BTW: Thanks for the pointer. That book looks very interesting.

~~~
itmag
_BTW: Thanks for the pointer. That book looks very interesting._

I'm actually not so happy about posting that link. For me, it states things
that I had already mostly figured out on my own previously. For others, it
might zap a lot of Sanity Points.

I would say Ligotti is a unique writer, in that he's deeply immersed in
existentialist philosophy, neuroscience, cognitive psychology, AND
Lovecraftian horror. It makes for a very... disturbing cocktail.

Of course, everyone on HN is seemingly a Nietzschean overman who can take
these kinds of things in stride. Me, not so much :p

------
JoeAltmaier
A little fanciful, I think. The stuff about generating your own energy through
captured kinetic energy is silly. My house has a 20KW feed; that's about 27
horsepower. On my bike I produce a tiny fraction of a horsepower. It's many
orders of magnitude off.
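
The arithmetic, for anyone checking (the rider wattage is my rough guess):

    watts_per_hp = 745.7                  # 1 mechanical horsepower in watts
    print(20_000 / watts_per_hp)          # the 20 KW feed: ~26.8 hp
    casual_rider_w = 100                  # rough guess for a casual cyclist
    print(casual_rider_w / watts_per_hp)  # ~0.13 hp
    print(20_000 / casual_rider_w)        # the feed is ~200x that output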

~~~
cullenking
You definitely produce more than a tiny fraction of a horsepower :) I have a
watt meter on my bicycle, and I can easily average 250 watts for an hour ride.
Well, easily meaning sweating profusely and worked at the end of it, but
still, that's 1/3rd of a horsepower for an hour. I can put out 1 horsepower
for a few seconds, and I am not a great cyclist...

~~~
JoeAltmaier
Ok, YOU produce more than a tiny fraction. I'm 52 and out of shape. I can ride
50 miles if you give me all day, but no way am I putting out more than 1/10 of
a horsepower. And NO WAY am I putting a generator on my bike while I'm riding
it!

~~~
zifot
Yeah, but isn't the premise of the article that you would combine energy from
multiple sources, like, I don't know, 10-30 per house? Water flowing in pipes,
generated heat, gathered kinetic energy?

It would be interesting to see some calculations of how much could be gained
that way.
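
A crude first pass, with every figure below an order-of-magnitude guess on my
part rather than a measurement:

    daily_household_kwh = 30.0     # typical US household consumption
    harvested_kwh = {
        "cycling, 1 h at 100 W": 0.1,
        "water flowing in pipes": 0.05,   # small inline turbines, generous
        "waste-heat recovery": 0.5,       # thermoelectrics are inefficient
        "door/floor kinetic pads": 0.01,
    }
    total = sum(harvested_kwh.values())
    print(f"{total:.2f} kWh/day harvested vs {daily_household_kwh} used")
    print(f"covers ~{100 * total / daily_household_kwh:.0f}% of demand")

Even with generous guesses it comes out around 2% of demand, which rather
supports the skepticism upthread.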

------
6ren
In a sense, speech is mind reading: you can have in your mind what the
speaker had in theirs.

This isn't just sophistry; it shows there are two problems: 1. transmitting
information into and out of a mind; 2. transforming the information into a
form that can be understood by another. A common language, if you will.

This has analogues in relational databases, where the internal physical
storage representation is transformed into a logical representation of
relations, from which yet other relations may be derived; and in
integrating heterogeneous web services, where the particular XML or JSON
format is the common language and the classes of the programs at the ends are
the representation within each mind.

There's no reason to think that the internal representation within each of our
minds is terribly similar. It will have some common characteristics, but will
likely differ as much as different human languages - or as much as other parts
of ourselves, such as our fingerprints. Otherwise, everyone would communicate
with that, instead of inventing common languages.
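
A toy version of that analogy in code (the representations and field names
are invented):

    import json, math

    class PolarPoint:                  # one "mind": polar internally
        def __init__(self, r, theta):
            self.r, self.theta = r, theta
        def to_common(self):
            # translate the private representation into the shared language
            return json.dumps({"x": self.r * math.cos(self.theta),
                               "y": self.r * math.sin(self.theta)})

    class GridPoint:                   # another "mind": cartesian internally
        def __init__(self, x, y):
            self.x, self.y = x, y
        @classmethod
        def from_common(cls, msg):
            d = json.loads(msg)
            return cls(d["x"], d["y"])

    # Neither side sees the other's internal layout, only the shared format.
    p = GridPoint.from_common(PolarPoint(1.0, 0.0).to_common())
    print(p.x, p.y)   # 1.0 0.0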

~~~
eurleif
>Otherwise, everyone would communicate with that, instead of inventing common
languages.

How would we communicate with it? By directly linking our brains together? I
don't see why it would have a direct translation into sounds.

~~~
6ren
You're right, that particular sentence is unnecessary to my argument and
weakens it.

------
ratsbane
I'm guessing that when mind reading comes it will be more of a machine
learning exercise based on analysis of speech, vocal inflections, visible
features, and previous actions than a portable EEG machine with wires on the
scalp.

See Poe's detective Auguste Dupin, in, for example, "The Murders in the Rue
Morgue."

------
brown9-2
I think it says something about this "prediction" that most of the text on the
IBM page about it ([http://asmarterplanet.com/blog/2011/12/the-
next-5-in-5-mind-...](http://asmarterplanet.com/blog/2011/12/the-
next-5-in-5-mind-reading-is-no-longer-science-fiction.html)) is:

 _Vote for this as the coolest IBM 5 in 5 prediction by clicking the “Like”
button below.

Join the Twitter conversation at #IBM5in5_

------
figital
"Neurofeedback" already exists it's just still under the radar (it's like
teaching yourself to roll your tongue). I've been trying to pull some demos
together to demonstrate that the web browser is the place this will take off:
<http://vimeo.com/32059038> (sorry I haven't pushed more of this extra-rough
demo code yet). Consider using something like the wireless PendantEEG if
you're going to be doing your own development OR be prepared to pay excessive
licensing fees required from a few of the vendors mentioned here. If you are
interested in helping develop this stuff mentioned in that video (and don't
mind springing for some reasonbly cheap hardware) please ping me. I'd also
like to plan a MindHead hackathon/mini-conference this spring in Boston (my
personal interests are improving attention and relaxation, peak perfomance,
and BCI).

------
cjfont
Going down the list of 5, for each one I was thinking to myself, "Yeah right",
then going through the explanations I was thinking, "Oh, well if _that_ is
what you mean by that, sure why not".

------
mw63214
Slightly off-topic, but I've always thought that the first wave of HCI to hit
the market and gain traction would be the integration of affective-sensing
tech products and APIs into popular areas like music, social networks, and
health care. That would bring down costs, increase investment in the HCI/BCI
space, speed up adoption rates, and lead to much faster improvement of HCI
technologies.

------
pmuhar
I don't see this happening, or being very accurate if it does. I don't know
about you guys, but my mind thinks about something new every few seconds, and
one tiny piece of a thought will turn into a whole new thought. It's all very
random, and for a computer to be able to understand and filter that seems a
little too sci-fi.

------
roundsquare
I was under the impression that we were very close to being able to move
sensors with our minds.

[http://www.ted.com/talks/tan_le_a_headset_that_reads_your_br...](http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html)

------
catshirt
probably depends on your definition of "mind reading", but sounds like it
warrants a longbet.

------
benaston
IBM constantly seems to issue press releases about technology it hasn't yet
developed to production quality. Said technology always vanishes without a
trace (as far as I can recall). I'm not holding my breath on this one.

------
bdg
Thanks for the awesome example of putting one of Paul Graham's essays into
action.

<http://www.paulgraham.com/submarine.html>

------
tree_of_item
"ATM machine" in an IBM video? I'm slightly disappointed.

~~~
derekp7
Would you rather they call it an AT machine? I guess it could be "@machine"
in that case. Point being: if you can't choose which word to leave out, then
you have only half redundancy, which is acceptable, if not optimal.

~~~
tree_of_item
What's wrong with "ATM"?

~~~
rimantas
It does not work without PIN number :(

------
mkramlich
The only one who can tell me something is N years away is someone who just
stepped out of a time machine. I see no time machine, I pipe to dev null.

~~~
unwind
You cannot pipe to /dev/null, since it's a device, not a process.

So, to summarize in shell-like syntax, you can redirect your output to
/dev/null, as in

$ me > /dev/null

but you can't sensibly use a pipe like so:

$ me | /dev/null

This has been a message from your friendly neighborhood Unix
fundamentalism/literalism chapter.

~~~
mkramlich
But you had no problem with the time machine. Only on HN.

------
farico
"you can control the cursor on a computer screen just by thinking about where
you want to move it."

Imagine writing code just by thinking.

------
overgard
The great thing about bold predictions is nobody ever remembers them if you're
wrong, but you look like a genius if you're right.

------
zyb09
Well, I guess you could link brain patterns to thoughts, but how are you
going to read them without a 5-ton MRI machine?

~~~
dholowiski
Who's to say an MRI machine has to weigh 5 tons 5 years from now? I imagine
people once said "Sure it's great to store your recipes in a computer, but who
wants to do that on a room sized computer?"

~~~
aheilbut
Gauss and Maxwell say.

~~~
jacquesm
They do and they don't. Yes, it is true that MRI machines weigh 5 tons because
of the magnetic fields generated and the amount of hardware required to
constrain those fields.

But you could imagine an MRI machine that works with lower strength fields and
better sensor technology. Imagining one is a far cry from being able to build
one, though, and I don't see this happening in the near future (if at all).

I think this prediction (like most of IBM's predictions about the future) is
strong on marketing and very weak on science.

------
silon3
How soon will it reach the quantum level of "you can't measure without
changing"?

------
lurchpop
Massively unsettling coming from the company that helped the Nazis streamline
their attempts at genocide.

------
technology
George Orwell's vision from the book 1984 is becoming true.

------
mjwalshe
Seeing as at least 2 of the 5 are, to be blunt, crap, why are we even
discussing this? This is as realistic as the fusion "too cheap to meter"
stories they ran in the '50s, FFS.

