One rat brain 'talks' to another using electronic link (bbc.co.uk)
95 points by scholia 1691 days ago | 47 comments

One possible path of development is toward the neural jack - basically a standardized plug port, a USB interface for the brain. Here's a page describing some Japanese research toward it:


I imagine the following: the jack outputs nerve impulses from the brain. Feed these impulses into a robot's control interface/"API", and the user's brain will be able to adaptively learn to control the various functions offered (e.g. flick a finger to make the robot's hand perform the same action). Even better, the robot uses a learning algorithm to associate the nervous data with the correct action through feedback from the user, so that eventually the user moving his hand causes the robot to do the same. Voilà: exo-suits controlled directly by the brain.
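A toy sketch of that feedback loop: an online, perceptron-style learner that associates impulse feature vectors with robot actions and updates its weights from the user's confirmations. Everything here (the feature vectors, the action names, the update rule) is invented for illustration; no such brain-jack API exists.

```python
class AdaptiveController:
    """Toy online learner mapping nerve-impulse patterns to robot actions."""

    def __init__(self, actions, n_features=8):
        self.actions = list(actions)
        # One weight vector per action (perceptron-style scoring).
        self.weights = {a: [0.0] * n_features for a in self.actions}

    def predict(self, impulses):
        # Choose the action whose weights best match the impulse pattern.
        def score(action):
            return sum(w * x for w, x in zip(self.weights[action], impulses))
        return max(self.actions, key=score)

    def feedback(self, impulses, chosen, correct, lr=0.1):
        # The user confirms or corrects the action: reinforce the right
        # association and weaken the wrong one.
        for i, x in enumerate(impulses):
            self.weights[correct][i] += lr * x
            if chosen != correct:
                self.weights[chosen][i] -= lr * x
```

Each round, the robot acts on `predict()` and the user's confirmation (or correction) drives `feedback()`, so the mapping converges on whatever impulse patterns the user naturally produces.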

If you can pass data back into the brain through the jack, you can return tactile feedback from the robot arm, or even sensations that represent other information, e.g. light or gas levels in the environment detected by mechanical sensors. I doubt human brains can process information that doesn't fit in our spectrum of senses, e.g. mathematical patterns, or abstract conceptual knowledge like an understanding of democracy, but maybe they can accept visualizations or memory vignettes generated by other users. Brains are adaptive and can learn to process unfamiliar stimuli.

It's all far in the future and raises a lot of strange and worrisome questions, but it's interesting to think about.

Actually, human brains have been shown to be able to recognize patterns generated by non-standard sensors. Our brains seem to encode sensory data into a standardized protocol before processing it. For instance, in one study researchers encoded visual data from a camera onto a thin tape on the tongue with a grid of pressure points. Subjects received visual data through their taste buds and were able to correctly process it (i.e., catch a ball). I can't find the exact study right now but it is out there.

This is partially the basis of "On Intelligence" by Jeff Hawkins, which led to his founding of Numenta (disclosure: I worked there last summer).

This was originally demonstrated by Professor Bach-y-Rita in his work on neuroplasticity. Interestingly, things other than cameras work just as well; they've used the same kind of system to help people with inner-ear damage learn to balance, to give people built-in compasses, and to give people radar and sonar "vision".

http://en.wikipedia.org/wiki/Paul_Bach-y-Rita
http://en.wikipedia.org/wiki/Brainport

>Subjects received visual data through their taste buds and were able to correctly process it (i.e., catch a ball).

If so, what about less intrusive input methods? I'm thinking of cameras, radars, sonars or entirely synthetic sources of visual data, with input through a pressure-point grid on the user's skin [1]. This could easily have a lot of applications, from supplementing rear-view mirrors in cars with something you "see" all the time to network admins feeling which machines in their server farm are being port scanned. The downside of this kind of dermal input would be the mechanical complexity of the "display" that applies pressure to your skin and the relatively low speed.

A relevant question is whether, if I learned to "see" with pressure applied to one patch of my skin, I would be able to quickly start "seeing" with a different one. Another question is whether this scales: will you be able to use multiple inputs at the same time once you've learned one?

[1] The closest thing to this that I know of is Northpaw, a haptic compass. It's 1D, though. See http://sensebridge.net/projects/northpaw/.
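The mapping step for such a dermal display is simple to sketch: average each region of a camera frame down to one pressure level per actuator. This is hypothetical glue code; the actuator hardware, frame capture, and any real calibration are assumed away, and the frame dimensions are assumed divisible by the grid size.

```python
def frame_to_tactile(frame, grid_rows, grid_cols):
    """Downsample a grayscale frame (2-D list of 0-255 values) to a
    coarse grid of pressure levels in 0.0-1.0 by averaging each cell."""
    rows, cols = len(frame), len(frame[0])
    cell_h, cell_w = rows // grid_rows, cols // grid_cols
    grid = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            # Average the pixel block belonging to this actuator.
            total = 0
            for r in range(gr * cell_h, (gr + 1) * cell_h):
                for c in range(gc * cell_w, (gc + 1) * cell_w):
                    total += frame[r][c]
            row.append(total / (cell_h * cell_w * 255.0))
        grid.append(row)
    return grid
```

The low resolution is the point: a handful of pressure levels is all the skin can resolve, and as noted above the mechanical "display" is the hard part, not the software.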

Another great book with further discussion and examples on this subject, neuroplasticity, is The Brain That Changes Itself. https://en.wikipedia.org/wiki/The_Brain_That_Changes_Itself

In summary from my reading of the book, current research seems to back the perspective that our brains are basically making sense of chaos: there's no standardized protocol - we automatically decode patterns from random streams of input.

Though I haven't seen it myself, my roommate read this and immediately thought of the movie "Surrogates", wherein apparently this type of control has become the norm for standard interaction between people.


Reading the plot of the movie right after the article definitely gave me the sci-fi heebie-jeebies...

So, that is a possible bad outcome. A possible good outcome could be the elimination of much of the loss of information that happens when we ham-handedly convert our thoughts to verbal, or especially written, communication. Imagine how much we might start caring about each other when we can stream each other's emotions and experiences. Imagine how much we might learn about each other.

Face-to-face communication already encodes emotions and experiences fairly well. What's missing from written or audio-only communication is the body-language side channel. While you can feel fear when reading a novel, and joy when talking on the phone, etc., it takes a lot more effort to evoke such emotions. What would be really interesting IMO is if you could add that side channel to recorded video etc. and really feel what a director was aiming for.

PS: Learning to communicate over a neural link would be at least as hard as learning a new language, which is why I suggest the side-channel approach. Body language is useful and much simpler than, say, Spanish.

It's nice that the paper is freely accessible to boot.

For convenience: http://www.nature.com/srep/2013/130228/srep01319/full/srep01...

One observation the BBC didn't note is that the decoder brain ended up with a representation of the encoder's whiskers.

To be a bit hyperbolic, the rat started to think it had two bodies. Or maybe it thought it had an extra set of whiskers somewhere. I can't quite imagine what the rat's body image would be.

Makes me think of Steven Wright's bit about the light switch in his house he couldn't figure out the purpose for, so he flicks it on or off when he passes. After a month, he gets a letter from a woman in Germany saying, "Cut that out!"

I'm so utterly torn on this. On one hand, this sounds like the first step toward a Matrix-like learning tool - downloading information into the brain. And that is, in a nerdy way, quite cool.

But on the other hand this totally creeps me out. I am disgusted that something so possibly sinister is being researched - something with such a potential for misuse.

After reading this quote: "We will have a way to exchange information across millions of people without using keyboards or voice recognition devices or the type of interfaces that we normally use today," he said.

I just thought:

"We are the Borg. We will add your biological and technological distinctiveness to our own. Resistance is futile."

I think it would be difficult to find any scientific research that somebody, somewhere didn't consider to have sinister applications. If we discouraged anybody from doing any scientific research that seemed related to what some science fiction program showed a villain using in the future, we wouldn't have any scientific research left to encourage.

But it's been done to some extent. Here's an ongoing human project - twins with conjoined brain(s):


Tickle one and both know it. There's some discussion of shared personality also. In the end, they're just kids, with a little more sharing than others:


Every tool can be abused. This is not an argument against developing said tool.

And honestly, this isn't like the nuclear bomb. The benefits of a brain-to-brain communication tool vastly outweigh the possible harms.

How about the end of individual consciousness? What benefits could outweigh that?

Perhaps we should first establish the precise benefits of individual consciousness.

Benefit: "Me, conscious, alive!"

Or: Cogito, ergo sum. ;-)

However we feel about it individually, I am certain that in the long run brain-computer interfaces are an inevitable outcome.

I also believe that becoming "Borg" is inevitable for at least a subset of the human population. Furthermore, I think it will be a good outcome. In the long run.

I would prefer our technology to be tools we can put away at any moment.

A wire popping out of my head makes that a bit hard.

The modern (civilized) human already has tons of "wires" sticking out of his head. We are far beyond the level where we could part ways with our technology.

I think you are forgetting the upside to this.

Imagine what it does for people's empathy when they can feel each other's pain.

What you're describing also sounds like an effective psychological weapon. It depends on the application but I can imagine (not condone, but imagine) this being mandatory for, say, felony offenders in a prison setting. You've got nothing to do but sit in a cell awash in a cold tide of human suffering.

Especially if this technology becomes advanced to the point that you don't need the network directly, and you can just in essence record someone's emotional state then broadcast it. Instant purgatory.

Then again, if you can feel someone else's pain, presumably you can feel their pleasure too...

That sounds like something out of A Clockwork Orange! Scary indeed.

A prison where the prisoners constantly feel dread and suffering? Welcome to Azkaban.

How about, "Welcome to Prison?"

Imagine how addictive it will be when they can experience the intense emotions of a person being tortured or killed, without any danger to one's self.

There is always another side. I remember seeing this sort of voyeurism explored in Tad Williams' Otherland series.

See also Joe Haldeman's "Forever Peace".

I'd love to see another novel written about the scenario espoused at the end of the book about the groups of people who couldn't have the jack installed. But then I remember that Forever Peace exists. If you haven't read it, don't.

I wonder if this could have some great research applications. A year ago I watched some lectures from Stanford on ethobiology (i.e. the branch dealing with the biological processes underlying behavior) on YouTube by Robert Sapolsky, and he talked a lot about the difficulty of figuring out what parts of the brain (and which interplays of brain centers) are responsible for behavioral patterns, especially when it comes to the more complex things. One joke was something like: "You know that feeling when someone calls you and you don't really want to talk, but feel uncomfortable with saying that actually you're busy and would rather just read a book than talk? I think we've found the brain center for that." We have learned a lot from what happens when people have certain parts of their brains damaged. But maybe we could learn much, much more about the brain by being able to fiddle around with many different kinds of signals to different brain centers, and trying out hypotheses by stimulating several centers simultaneously in order to produce (or not produce) certain behaviors?

Reminds me of Kevin Warwick's project. As I recall, both he and his wife had electrode arrays implanted into their arms (interfacing with a median nerve), and they were able to perform simple communications - the first direct, purely electronic communication between the nervous systems of two humans. It was done in 2002.

I'd be more impressed if they hadn't trained the second rat beforehand. The second rat may be perceiving the stimulus as anything - a pressure in its tail, who knows? The second rat is trained to press a button when it feels a pressure in its tail.

I just don't see a clear path from this to true 'communication' - unless it takes a form like morse code that one (human) mind taps out and the other (human) mind consciously decodes. But I know nothing about this either ;)

The "decoder" rat was not trained beforehand, unless I am badly misreading the article. The training process that occurred used the real-time signal from the "encoder" rat's brain. It simply took some time for the "decoder" to learn how to process the signal sent by the other rat.

I think he means trained as in, was taught that by pushing one of the levers it got food.

As you say, there was additional training to then handle the encoding / decoding.

The decoder rat isn't just pressing a button, it's choosing the correct button of two choices, and they got that up to 70% accuracy. It doesn't really matter if it feels the pressure in its tail if it can distinguish between two different messages and choose the correct switch.
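Whether 70% on a two-choice task genuinely beats the 50% chance level depends on the number of trials, which an exact binomial test makes explicit. The trial count below (100) is an assumed figure purely to illustrate the calculation; the comment above doesn't state it.

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided P(X >= successes) for X ~ Binomial(trials, p_chance)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# 70% accuracy over an assumed 100 trials is far beyond chance (~4e-5),
# while 55% over the same 100 trials would not be convincing (~0.18).
print(binomial_p_value(70, 100))
print(binomial_p_value(55, 100))
```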

I agree that it doesn't really matter, but I think the concern is that a layperson will read this and assume that the rats are actually exchanging conceptual knowledge, which we can't really prove at this point.

For all we know, the 2nd rat just gets a bloated feeling when it's the 2nd button.

This is just the first step in a potential future that could be huge - with consequences so pervasive I'd almost question whether we could still call ourselves human. Huge changes that could improve communication and general intelligence. Think about how much time people waste simply trying to communicate their mind's vision to each other. This could literally be the most effective form of communication. Additionally, imagine if it's not a person on the other end, but a computer with artificial intelligence. It would literally be like having Google built into your brain. That's the intelligence explosion. Of course, if only a select few people have it, the digital divide would be huge.

If this becomes ubiquitous, then two of the most fundamental states of human nature (we are each alone within ourselves and our minds are obliterated in death) might be shifted. That would certainly mean redefining what 'human' means.

Although this would also mean the transhumanists were right all along which just annoys me.

Reminds me of DARPA using the subconscious human brain as a peripheral for pattern recognition.


I wonder what human brain clusters will be used for? Stock market analysis?

I know this isn't Slashdot, but imagine a Beowulf cluster of them :)

Conjures up memories of the old hamster dance website...

If you wired up hamsters like that, you might be able to get them to dance in unison. The GIFs are preferable, but still execrable.

Is this the beginning of the Borg?

> a way to exchange information across millions of people without using keyboards or voice recognition devices

Brain-to-brain communication will be convenient, because we won't need physical I/O devices. However, I wonder if there will be any advantage to interfacing any deeper into the brain than the corresponding nerves... that is, perhaps direct-brain interfaces will be pretty much like conventional I/O, just without the physical I/O device (like the cochlear implants/bionic eyes/bionic arms of today). So we can't improve fundamentally over things like Google Glass/MYO.

The reason I say this is because we already have brain-to-brain communication in the form of language. This involves reducing our internal representation (whatever it is) into a serial format that follows grammatical rules - not necessarily "proper" grammar, but sufficient to enable the recipient, using shared knowledge, to decode the meaning. This has striking parallels to network communication: we have the internal representation of data structures (e.g. objects), which are serialized to a low-level protocol (e.g. xml, json, yaml, protobuf, asn.1) which follows a higher-level grammar (explicit like xsd or implicit as in json). Well, not entirely surprising, since Chomsky's grammar hierarchy was based on human language.
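The analogy can be made literal with any serialization library. A minimal Python/json illustration of "internal representation → grammatical serial form → receiver's reconstruction", using an arbitrary example structure:

```python
import json

# Sender's internal representation: a rich in-memory structure.
thought = {"concept": "robot arm", "urgency": 0.7, "tags": ["control", "haptics"]}

# Serialize: reduce it to a linear string obeying JSON's grammar.
wire_form = json.dumps(thought)

# Receive: the other party, knowing the same grammar, parses the
# string back into their own structure.
received = json.loads(wire_form)

assert received == thought  # equal content, but a distinct object
```

The "shared knowledge" in the analogy is the grammar itself plus the agreed meaning of the field names; the wire form alone carries neither.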

The interesting question is whether brain-to-brain communication can be any more "direct" than the serial representation (language) we already have... That is, are our internal representations similar enough that they can be "directly" communicated; or are they so different between individuals that converting between them will amount to going through a common grammar in order to interface between them?

Favouring direct interface is the argument that serial representation was only needed to communicate over the distance to another human - i.e. because we didn't have direct brain-to-brain contact - and it is not needed apart from that. Also in favour of direct communication is that if individuals are reasoning about the same things in the same ways, our internal representations must be equivalent in some sense. Other hidden aspects of ourselves also turn out to have strong similarities (such as internal organs); and where they vary, they often vary in strong groups (such as blood types), so perhaps our hidden mental features will also be similar.

Against this is the observation that everyone's perspective is unique - literally, where their eyes are - but also of course their figurative perspective, resulting from cumulative experiences, genetic background, internal thoughts and randomly different wiring/interpretations. Therefore, we don't have the same internal models. We don't even have different models of the same reality, because we perceive reality differently - our models are of a different reality. And this diversity is one of the great creative strengths of our collective intelligence, that different people have different insights into the same problem. Another argument against this is that grammar is not only used in communicating to others, but is also used to communicate to ourselves - when we talk to ourselves, debate a course of action internally, "marshal our thoughts" into words. There's even an argument that the full power of human thought requires language - it's not just a serialization format, but is thought itself. By this view, even if we can communicate directly, it would only be associations, senses, emotions, recollections etc. We couldn't communicate an argument, for example.

SUMMARY: our internal representations will differ at least to the extent that we interpret things differently (i.e. we live in different realities). We will probably fall into groups, like blood types - also by profession/training/personality/social group, i.e. between persons "on the same wavelength". This is a limit on how "directly" we can communicate, i.e. without requiring a shared intermediate general-purpose language.

tl;dr qualia

A few days back I read an article about flowers communicating with bees. Now this. Mother Nature is awesome.

"Although the information was transmitted in real time, the learning process was not instantaneous. [It] takes about 45 days of training an hour a day, said Prof Nicolelis."

So much for Matrix learning style.
