
Reading minds with deep learning - whatrocks
https://blog.floydhub.com/reading-minds-with-deep-learning/
======
emilwallner
While the results are really cool, they're not perfect, and seemingly even the
best companies working with deep learning are still not getting it right every
time. It makes me wonder if this technology will ever be useful for these
tasks... I mean would you be willing to use BCIs even if they made small
errors from time to time?

~~~
cwkoss
There is a lot of variance in error rate in biological human nerves as well.
Certainly a lot of people on the low-end of the bell curve could benefit.

Interesting to consider whether a 'filter' could be made for people with
Parkinson's that would drop tremor noise while preserving intentional actions.
Perhaps with robotic exosleeves the tremor could be offset or counteracted, or
eventually we could feed the filtered signal back into the nerve.
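Mechanically, the 'filter' idea exploits frequency separation: parkinsonian rest tremor sits in a narrow band (roughly 4–6 Hz), well above most intentional movement, so even a crude low-pass can null it while mostly passing intent through. A minimal pure-Python sketch with synthetic signals — the sample rate, frequencies, and amplitudes are illustrative assumptions, not from the article:

```python
import math

def moving_average(signal, window):
    """Crude low-pass filter: average over a trailing window."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

fs = 100  # Hz, assumed sample rate of the nerve/EMG recording
t = [i / fs for i in range(200)]

# Synthetic decomposition: slow intentional movement plus ~5 Hz rest tremor
intent = [math.sin(2 * math.pi * 0.5 * ti) for ti in t]
tremor = [0.3 * math.sin(2 * math.pi * 5.0 * ti) for ti in t]
raw = [a + b for a, b in zip(intent, tremor)]

# A window spanning exactly one tremor period (fs / 5 Hz = 20 samples)
# nulls the 5 Hz component while passing the slow intent nearly unchanged.
filtered = moving_average(raw, fs // 5)
```

A real device would need an adaptive notch (tremor frequency drifts between and within patients) and far better sensors, but the separate-by-frequency idea is the same.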

~~~
samlynnevans
That would be a fantastic application; however, it would differ slightly in
that it would not be a BCI (a traditional brain-computer interface) but a
device within the brain that impeded certain signals, detected as
Parkinsonian, from being sent to motor neurons. This would involve directly
manipulating the output of the brain to the body itself, and would require
some serious bio-mechanical hardware!

~~~
AstralStorm
This hardware already exists as an implantable electrode and is used in really
hard cases.

It is a competitive inhibitor and additionally stimulates dopamine release.

As for putting something like that in the spinal cord or motor centers: very
risky surgery, much riskier than the cranial stimulator. If this device fails
in a way as simple as being jostled, you die.

------
janeybrad
I think BCIs are super exciting. Humans have such high bandwidth in terms of
input; tons of information is interpreted by our nervous system every second.
Yet in terms of output, we are relatively poor; we can only type a few words a
minute! This article suggests we could one day work 'like digital octopuses',
but I'm naturally dubious. Do you think it's possible, as CTRL-labs proposes,
that we could entirely rewire our nervous systems' output to let us operate
computers as if we had multiple hands?

~~~
mrfusion
I’m a bit of a contrarian on this issue. If you look at how you type, you
actually spend more time thinking about what to say than typing it out.

I think language is extremely high bandwidth. We’re bottling up extremely
complex, abstract thoughts into a few symbols.

~~~
gwern
One possibility is that the BCI will allow powerful interaction of the sort
simply not possible now based on using brain activations as supervision for
understanding material in a way which transports those extremely complex
abstractions into the computer in a software-understandable way:
[https://www.reddit.com/r/reinforcementlearning/comments/9pwy...](https://www.reddit.com/r/reinforcementlearning/comments/9pwy2f/wbe_and_drl_a_middle_way_of_imitation_learning/)

To give a quick random example: imagine the BCI records your global
activations as you read that Reddit post about deep learning augmented by
EEG/MRI/etc data; a year later while reading HN about something AI, you want
to leave a comment & think to yourself 'what was that thing about using labels
from brain imaging' which produces similar activations and the BCI immediately
pulls up 10 hits in a sidebar, and you glance over and realize the top one is
the post you were thinking of and you can immediately start rereading it. And
then of course your brain activations could be decoded into a text summary for
the comment which you can slightly edit and then post...
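The retrieval step gwern describes is, mechanically, nearest-neighbour search over stored activation vectors. A toy sketch — the item names, the four-dimensional vectors, and the idea of reducing 'global activations' to short feature vectors are all hypothetical simplifications:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical activation vectors recorded while reading each item
memory = {
    "reddit-drl-brain-labels": [0.9, 0.1, 0.8, 0.2],
    "hn-rust-release-notes":   [0.1, 0.9, 0.0, 0.7],
    "blog-eeg-hardware":       [0.7, 0.2, 0.6, 0.1],
}

def recall(query, store, k=2):
    """Return the k stored items whose activations most resemble the query."""
    ranked = sorted(store, key=lambda name: cosine(query, store[name]),
                    reverse=True)
    return ranked[:k]

# A year later, thinking 'that thing about labels from brain imaging'
# produces a similar activation pattern:
query = [0.85, 0.15, 0.75, 0.2]
```

At realistic scale this would be approximate nearest-neighbour search over millions of high-dimensional vectors, but the sidebar-of-10-hits interaction reduces to exactly this lookup.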

------
twillmas
Curious about differences in brain waves for desire versus intent. I might
desire to say something awful to that person who just honked at me, but I
probably shouldn’t - and right now wouldn’t. Would a BCI system be able to
tell the difference?

~~~
samlynnevans
Haha, that's a fantastic point! And in order to get anywhere close to making
such differentiations, surface EEGs (used for the results in this article)
would surely not suffice. The technology and results we are collecting
currently are just scratching the surface, though I'd suspect that if a BCI
were capable of detecting intent of the complexity of the situation you
described, it would also probably have the capacity to know whether it was
something you planned to act on or not.

------
carapace
If you connect a computer to a brain which element is the more sophisticated
processing device?

The brain.

If you want high-fidelity cheap BCI don't bother with fancy sensors, galvanic
skin response is enough. The trick is to use the high-powered processor to do
the heavy lifting. You use hypnosis and create a simple feedback loop to
calibrate, e.g., one channel per finger and one per thumb: the eight fingers
give you 8 data lines and the thumbs a clock/ack signal. An eight-bit
parallel port from your mind to
your computer. You can now come out of trance and "type" by thinking at the
machine in a certain way (no imaginary keyboard) and your verbal thoughts
appear on the screen. Other output modalities are possible. It's fucking
trivial, if you'll pardon my language.
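Once the channels are thresholded to bits, the 'parallel port' is just edge-triggered latching. A toy decoder, assuming each sample is a (clock, eight-data-bits) tuple produced by thresholding the GSR channels — the frame format and signal names are invented for illustration:

```python
def decode_frames(frames):
    """Each frame: (clock, [b7..b0]). Latch the 8 data lines on a
    rising clock edge, exactly like a hardware parallel port."""
    out = []
    prev_clock = 0
    for clock, bits in frames:
        if clock == 1 and prev_clock == 0:  # rising edge: data is valid
            byte = 0
            for bit in bits:
                byte = (byte << 1) | bit
            out.append(byte)
        prev_clock = clock
    return bytes(out)

# Hypothetical thresholded readings spelling "HI":
frames = [
    (0, [0, 1, 0, 0, 1, 0, 0, 0]),
    (1, [0, 1, 0, 0, 1, 0, 0, 0]),  # latch 0b01001000 = 'H'
    (0, [0, 1, 0, 0, 1, 0, 0, 1]),
    (1, [0, 1, 0, 0, 1, 0, 0, 1]),  # latch 0b01001001 = 'I'
]
```

The clock line is what makes this robust: the data channels can settle slowly and noisily as long as they are stable when the user fires the clock channel.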

It's ridiculous to connect a computer to a brain and then concentrate on the
_computer_ , rather than the brain, to develop control and communication
systems. It's a bit like trying to move your bicycle using a skateboard.

But the greater problem with focusing on the CPU rather than the brain is that
it reinforces a world that doesn't demand much personal growth to use powerful
tools. BCI where the computer does the work relieves the user of learning how
to operate his or her own brain. BCI-by-hypnosis requires the user to learn
enough self-operation to be able to go into a light trance and learn a simple
motor pattern, and the meta-lesson "side effect" is more valuable and more
important than the ability to interact with computers with greater facility.

(BTW, I doubt this would ever be a mass-market product, because you're not
just selling a peripheral at that point...)

- - - -

edit: Folks, I'm gonna ask that you please not downvote me on this one without
saying something about why. I've never been able to get anyone to listen or
take me seriously on this particular subject, and frankly that sucks. I know
this works but _no one cares_ and it's frustrating. So, again, please, if you
think this is worth a downvote can you please take a minute and tell me why?

~~~
trevyn
That’s cool! Have you actually done this? What clock rate can you achieve?

~~~
carapace
I forgot to mention in my main reply: I _did_ set up a binary Boolean signal
system as part of my exploration of hypnosis years ago. At first it was finger
twitching, but later the system was arm twitches: right for affirmative, left
for negatory, and non-response meant I[1] was framing the query in a way that
precluded an unambiguous y/n answer. I made use of this "comm channel" for
years; occasionally friends would remark on my odd arm twitches, and
eventually they "wore off" in the sense that I didn't need them anymore.

Anyhow, this is what I meant about "trivial". These days you could wear e.g. a
Fitbit on each arm and "Ta-da!" you've got a BCI. You wouldn't even have to
stop by RadioShack (RIP) or wire up IMUs to an Arduino.
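The Fitbit version is a two-sensor threshold classifier. A minimal sketch, assuming you can read per-wrist acceleration magnitudes in g — the function, threshold, and labels are hypothetical, not any real Fitbit API:

```python
def classify_twitch(left_accel, right_accel, threshold=1.5):
    """Map two wrist accelerometer traces (magnitudes in g) to an answer.
    A spike above the threshold on exactly one arm counts as a twitch."""
    left = max(left_accel) > threshold
    right = max(right_accel) > threshold
    if right and not left:
        return "yes"        # right arm twitch: affirmative
    if left and not right:
        return "no"         # left arm twitch: negatory
    return "ambiguous"      # both or neither: reframe the question
```

The "ambiguous" branch mirrors the non-response case above: it signals that the query itself was badly framed rather than forcing a yes/no.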

[1] At a minimum you have to postulate three "entities": the perceiver, the
conscious mind, and the unconscious mind, for the model to make sense. In the
above, "I" the conscious mind is posing queries to "me" the unconscious mind,
and I'm not mentioning the Perceiver at all.

