Reading minds with deep learning (floydhub.com)
82 points by whatrocks on Nov 29, 2018 | hide | past | favorite | 26 comments



While the results are really cool, they're not perfect, and seemingly even the best companies working with deep learning are still not getting it right every time. It makes me wonder if this technology will ever be useful for these tasks... I mean would you be willing to use BCIs even if they made small errors from time to time?


There is a lot of variance in error rate in biological human nerves as well. Certainly a lot of people on the low-end of the bell curve could benefit.

Interesting to consider whether a 'filter' could be made for people with Parkinson's that would drop tremor noise while preserving intentional actions. Perhaps with robotic exosleeves the tremor could be offset/counteracted, or eventually we could feed the filtered signal back into the nerve.


That would be a fantastic application; however, it would differ slightly in that it would not be a BCI (as in a traditional brain-computer interface), but a device within the brain that impeded certain signals it detected were Parkinsonian before they reached the motor neurons. This would involve directly manipulating the brain's output to the body itself, and would require some serious bio-mechanical hardware!


This hardware already exists as an implantable electrode and is used in really hard cases.

It is a competitive inhibitor and additionally stimulates dopamine release.

As for putting something like that in the spinal cord or motor centers: very risky surgery, much riskier than a cranial stimulator. If this device fails in a way as simple as being jostled, you die.


I think being able to get 90%+ accuracy is pretty badass. With more data over time, hopefully a good amount of progress can be made.

I don't even know what would make sense to improve the model (such as lowering false positive or false negative rates), but this research sounds pretty exciting!


See the progression of voice recognition. The answer is, yes, a BCI is still useful and will still be used, even if it makes errors. It will of course grow more useful and become more embedded into our lives, as it gets more accurate.


Yes, that's the big question, and I mention it in the article. Let's say for a moment that we can't achieve 100% accuracy; one parallel to this situation would be autocorrect on our phones: it often corrects you when you didn't want it to, or misinterprets your meaning. Yet we still use it because there's a net gain in convenience. If a BCI could instantly put your thoughts onto paper and leave you with just a few typos to mop up, I think we'd all be willing to compromise.


I think BCIs are super exciting. Humans have such high bandwidth in terms of input; tons of information is interpreted by our nervous system every second. Yet in terms of output we are relatively limited; we can only type a few words a minute! This article suggests we could one day work 'like digital octopuses', but I'm naturally dubious. Do you think it's possible, as CTRL-labs proposes, that we could entirely rewire our nervous systems' output to let us operate computers as if we had multiple hands?


> Do you think it's possible, as CTRL-labs proposes, that we could entirely rewire our nervous systems' output to let us operate computers as if we had multiple hands?

Yes. You would not quite do it that way, though. Your motor cortex is capable of handling that sort of thing,[1] but your conscious awareness isn't,[2] so you have to develop a kind of expanded awareness, like a dancer or martial artist, but in an abstract space.[3] (I suppose you could just hallucinate having eight arms, now that I think about it.)

You don't have to wait; you can get results with crude hardware. The trick is to use hypnosis. See my other comment on this page.[4]

We could have had fantastic "magical" UIs ages ago, but people connect a brain to a computer and then focus on programming the computer. It's silly.

(BTW, our output channels are primarily the face and the whole body[5], but there's evidence that sign language predates spoken language. I've heard that pre-verbal babies can learn to communicate with gestures.)

[1] Higher dimensionality or more degrees of freedom

[2] Generally, for most people. However, it is possible to train your mind to carry out two or more simultaneous activities, such as writing a sentence with one hand while solving a math equation with the other. But this is sort of a stunt; most folks would never conceivably need to do something like that.

[3] Obligatory reference to "Cyberspace", and "Minority Report" UI.

[4] https://news.ycombinator.com/item?id=18564989

[5] Gait, posture, overall tension/ease, etc.


I’m a bit of a contrarian on this issue. If you look at how you type, you'll find you actually spend more time thinking about what to say than typing it out.

I think language is extremely high bandwidth. We’re bottling up extremely complex, abstract thoughts into a few symbols.


One possibility is that BCIs will allow powerful interaction of a sort simply not possible now: using brain activations as supervision for understanding material, in a way that transports those extremely complex abstractions into the computer in a software-understandable form: https://www.reddit.com/r/reinforcementlearning/comments/9pwy...

To give a quick random example: imagine the BCI records your global activations as you read that Reddit post about deep learning augmented by EEG/MRI/etc. data. A year later, while reading HN about something AI-related, you want to leave a comment and think to yourself, 'what was that thing about using labels from brain imaging?' That thought produces similar activations, the BCI immediately pulls up 10 hits in a sidebar, and you glance over and realize the top one is the post you were thinking of and can immediately start rereading it. And then, of course, your brain activations could be decoded into a text summary for the comment, which you can slightly edit and then post...
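To make that retrieval step concrete, here's a toy sketch. Everything here is hypothetical (the vector format, dimensions, and similarity measure are made up): it just assumes each reading session gets logged as a fixed-length activation vector, so that "find that thing I read" becomes a nearest-neighbor search.

```python
import math
import random

# Hypothetical sketch: each reading session is logged as a fixed-length
# "activation" vector (say, averaged band power per sensor). Finding
# "that thing I read about X" is then nearest-neighbor search.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, library, k=10):
    """Indices of the k stored sessions most similar to the query."""
    ranked = sorted(range(len(library)), key=lambda i: -cosine(query, library[i]))
    return ranked[:k]

# Toy demo with random stand-in vectors:
random.seed(0)
library = [[random.gauss(0, 1) for _ in range(64)] for _ in range(100)]
query = [x + random.gauss(0, 0.1) for x in library[42]]  # noisy "re-activation"
hits = top_k(query, library)
print(hits[0])  # prints 42: the session you were "thinking of"
```

The hard part, of course, is everything the sketch assumes away: getting stable, repeatable activation vectors out of real neural recordings.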


Same. We're pretty good at adapting the information density of language to fit what we need to convey into the available bandwidth.


Great points. Thomas Reardon (CTRL-labs founder) says the same in a talk you can see on YouTube (definitely check it out). I share your doubts, and wonder exactly how neuroplastic we are. Perhaps we are simply built to operate in a certain way, and merely tapping into the nerves responsible for output won't necessarily allow us to output in hugely different ways. Even if it's possible, it may not be so simple, and becoming a digital octopus may be like learning the piano, requiring years of practice!


Here's that talk you mentioned:

https://www.youtube.com/watch?v=5Z5aZK2C3ew (2018 O'Reilly AI Keynote, Thomas Reardon, CEO, CTRL-labs)


Cheers!


Curious about differences in brain waves for desire versus intent. I might desire to say something awful to that person who just honked at me, but I probably shouldn’t - and right now wouldn’t. Would a BCI system be able to tell the difference?


Haha, that's a fantastic point! In order to get anywhere close to making such differentiations, surface EEGs (used for the results in this article) would surely not suffice. The technology and results we are collecting currently are just scratching the surface, though I'd suspect that if a BCI were capable of detecting intent of the complexity you described, it would probably also have the capacity to know whether it was something you planned to act on or not.


If you connect a computer to a brain which element is the more sophisticated processing device?

The brain.

If you want high-fidelity cheap BCI don't bother with fancy sensors, galvanic skin response is enough. The trick is to use the high-powered processor to do the heavy lifting. You use hypnosis and create a simple feedback loop to calibrate e.g. one channel per finger and one per thumb. That gives you 8 data lines and a clock/ack signal. An eight-bit parallel port from your mind to your computer. You can now come out of trance and "type" by thinking at the machine in a certain way (no imaginary keyboard) and your verbal thoughts appear on the screen. Other output modalities are possible. It's fucking trivial, if you'll pardon my language.
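For what it's worth, the software side of that "8 data lines and a clock/ack" scheme really is simple. Here's a rough sketch of the decoding end; the channel layout, thresholds, and sample format are all invented for illustration, and real sensor input would obviously be far noisier:

```python
# Sketch of decoding the "8 data lines + clock/ack" scheme described
# above. Channel layout and thresholds are hypothetical.

def sample_to_bits(channels, threshold=0.5):
    """Threshold 8 analog channel readings into a tuple of bits."""
    return tuple(1 if v > threshold else 0 for v in channels)

def decode_stream(samples):
    """Latch one byte on each rising edge of the clock channel.

    Each sample is (clock_level, [ch0..ch7]); bytes come out as ints.
    """
    out, prev_clock = [], 0.0
    for clock, channels in samples:
        if clock > 0.5 and prev_clock <= 0.5:  # rising edge = "ack"
            bits = sample_to_bits(channels)
            out.append(sum(b << i for i, b in enumerate(bits)))
        prev_clock = clock
    return out

# Example: two clock pulses latching 'A' (65) and 'b' (98):
stream = [
    (0.0, [1, 0, 0, 0, 0, 0, 1, 0]),
    (1.0, [1, 0, 0, 0, 0, 0, 1, 0]),  # latch 0b01000001 = 65
    (0.0, [0, 1, 0, 0, 0, 1, 1, 0]),
    (1.0, [0, 1, 0, 0, 0, 1, 1, 0]),  # latch 0b01100010 = 98
]
print(decode_stream(stream))  # prints [65, 98]
```

The hard part of the proposal is on the other side of the interface: calibrating a person to produce eight clean, independent channels in the first place.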

It's ridiculous to connect a computer to a brain and then concentrate on the computer, rather than the brain, to develop control and communication systems. It's a bit like trying to move your bicycle using a skateboard.

But the greater problem with focusing on the CPU rather than the brain is that it reinforces a world that doesn't demand much personal growth to use powerful tools. BCI where the computer does the work relieves the user of learning how to operate his or her own brain. BCI-by-hypnosis requires the user to learn enough self-operation to be able to go into a light trance and learn a simple motor pattern, and the meta-lesson "side effect" is more valuable and more important than the ability to interact with computers with greater facility.

(BTW, I doubt this would ever be a mass-market product, because you're not just selling a peripheral at that point...)

- - - -

edit: Folks, I'm gonna ask that you please not downvote me on this one without saying something about why. I've never been able to get anyone to listen or take me seriously on this particular subject, and frankly that sucks. I know this works but no one cares and it's frustrating. So, again, please, if you think this is worth a downvote can you please take a minute and tell me why?


Because you're an unknown, uncredentialed agent using language usually associated with pseudo-science, and asserting it as if it were plain as day. Unfortunately, most people don't have enough time to give every idea that comes across their mind's desk a full and rigorous examination. So we all use heuristics to filter which ideas are worth engaging with. And by and large ideas that start with the word 'hypnosis' are not.

You may well be on to something, but the bar of attention is high, and it's much higher for you than for someone using more normative ideas and terminology. Most people are not going to have the deep, intuitive connection to the idea that you have. They'll need some mechanism for attaining that intuition for themselves: some numbers, a demo, a proof-of-concept. Something that helps them overcome their trained skepticism.


You're advocating the use of the well explored EMG (electromyogram) interface. These exist and work but the amount of training to operate one is actually higher than training a person to move themselves. They're used for subvocal speech and no hypnosis is needed. Galvanic skin response is a subset of EMG.

The winner would be a plain old analog interface which you can operate with body movement, not too fine yet not too gross. A keyboard is a similar thing but digital. A mouse is analog but planar.

Getting higher dimensional than 3D + time is very hard (gestures) and that is what's required for speech or fully operating a machine. Thus the projections.


That’s cool! Have you actually done this? What clock rate can you achieve?


I forgot to mention in my main reply, I did set up a binary Boolean signal system as part of my exploration of hypnosis years ago. At first it was finger twitching, but then later the system was arm twitches, right for affirmative, left for negatory, non-response meant I[1] was framing the query in a way that precluded an unambiguous y/n answer. I made use of this "comm channel" for years, occasionally friends would remark on my odd arm twitches, eventually they "wore off" in the sense that I didn't need them anymore.

Anyhow, this is what I meant about "trivial". These days you could wear e.g. a Fitbit on each arm and "Ta-da!" you've got a BCI. You wouldn't even have to stop by RadioShack (RIP) or wire up IMUs to an Arduino.
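The two-wearable version of that yes/no channel is easy to mock up in software. This is a toy sketch with invented thresholds and fake data; a real IMU feed would need per-user calibration and debouncing:

```python
# Toy sketch of the "Fitbit on each arm" binary channel: classify a
# yes/no/no-answer from acceleration spikes on each wrist.
# Thresholds and windowing are made up for illustration.

def twitched(window, threshold=1.5):
    """True if any |acceleration| sample (in g) exceeds the threshold."""
    return any(abs(a) > threshold for a in window)

def classify(right_window, left_window):
    """Right twitch = yes, left twitch = no, neither/both = no answer."""
    right, left = twitched(right_window), twitched(left_window)
    if right and not left:
        return "yes"
    if left and not right:
        return "no"
    return "no-answer"  # ambiguous query, per the scheme above

print(classify([0.1, 2.0, 0.2], [0.0, 0.1, 0.1]))  # prints "yes"
```

Three outcomes is all the original finger/arm-twitch system needed, so two commodity wearables really would cover it.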

[1] At a minimum you have to postulate three "entities" for the model to make sense: the perceiver, the conscious mind, and the unconscious mind. In the above, "I" the conscious mind is posing queries to "me" the unconscious mind, and I'm not mentioning the Perceiver at all.


@dmreedy Thank you. I really appreciate that.

@AstralStorm As you point out, GSR (or more generally EMG) isn't necessarily the best modality for this. But my point is that, whatever the comm channel's nature and characteristics, once you have connected a brain to a computer, it's useful to recognize that the more powerful and easier to program signal processor is the brain.

@h0p3 First, thanks. I am deeply weird and to me it's a compliment. ;-) Second, OMG your website is a custom TiddlyWiki? It looks fascinating. Email sent. Well met.

@trevyn I have not actually done this, no. To me it's trivial, and I neither want nor need to talk to a computer that intimately. (I'm one of those people who, despite using a computer daily for years, still "hunts and pecks". I actually can kinda touch-type, but my ratio of thinking to typing is such that there's not a great incentive to do better. Typing speed has never been the limiting factor in my life, so... lazy.)

I expect you would be able to input text as fast as you could think it. You're not limited to eight bits. I would also explore methods like, e.g. "Dasher" http://www.inference.org.uk/dasher/ https://en.wikipedia.org/wiki/Dasher_(software)

And with modern NN techniques and an HD camera you could just point the camera at your face and calibrate facial expressions, pulse rate, etc...

@mrfusion It just makes it much easier to self-train and calibrate. In crude terms, the conscious mind has a lot of belief structures and other homeostatic things going on that interfere with communicating with the motor cortex. In a trance most of that stuff is temporarily offline and you can "go in" and "ask" parts of your brain (presumably we all agree that the brain is where this stuff is happening, yes?) to do things like "work with the NN to calibrate unambiguous signals" etc., and it will typically "go along with" this sort of hypnotic suggestion.

The thing is, it's not like this is something you could franchise and open stores in malls or something. There's no mass market for hypnosis, techno-assisted or otherwise.

@kakarot Sure. Email sent.


I'd like to know more about what you have to say.

I've been looking through your post history. Please take this as a compliment: you are weird. I like it (I think I'm weird too). If you are interested in a penpal friendship, hit me up: https://philosopher.life/.


I don’t see what function the hypnosis is providing?


I'm interested in talking with you about this at length in a more private channel; do you have an email I can reach you at?



