I imagine the following: the jack outputs nerve impulses from the brain. Feed these impulses into a robot's control interface/"API", and the user's brain will be able to adaptively learn to control the various functions offered (e.g. flick a finger to make the robot's hand perform the same action). Even better, the robot uses a learning algorithm to associate the nervous data with the correct action through feedback from the user, so that eventually, the user's moving his hand causes the robot to do the same. Voila, exo-suits controlled directly by the brain.
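The feedback loop described above can be sketched as a tiny online learner. Everything here (the signal dimensions, the action set, the perceptron-style update) is an illustrative assumption, not how a real neural decoder would work:

```python
import numpy as np

class NeuralActionMapper:
    """Toy sketch: learn to map nerve-impulse feature vectors to robot
    actions from user feedback. All names and signals are hypothetical."""

    def __init__(self, n_features, n_actions, lr=0.1):
        self.weights = np.zeros((n_actions, n_features))
        self.lr = lr

    def predict(self, signal):
        # Pick the action whose weight vector best matches the signal.
        return int(np.argmax(self.weights @ signal))

    def feedback(self, signal, intended_action):
        # The user confirms or corrects the robot's action; nudge weights
        # toward the intended action and away from the wrong guess.
        guess = self.predict(signal)
        if guess != intended_action:
            self.weights[intended_action] += self.lr * signal
            self.weights[guess] -= self.lr * signal

# Simulated training: each action has a characteristic signal pattern.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 8))        # 3 actions, 8-dim signals
mapper = NeuralActionMapper(n_features=8, n_actions=3)
for _ in range(200):
    action = int(rng.integers(3))
    signal = prototypes[action] + 0.1 * rng.normal(size=8)
    mapper.feedback(signal, action)

# After feedback training, the mapper decodes the clean patterns.
correct = sum(mapper.predict(prototypes[a]) == a for a in range(3))
print(correct)  # 3
```

The point of the sketch is only that the association can be learned from corrections rather than hard-wired, which is what makes the adaptive-mapping idea plausible.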
If you can pass data back into the brain through the jack, you can return tactile feedback from the robot arm, or even sensations that represent other information, e.g. light or gas levels in the environment detected by mechanical sensors. I doubt human brains can process information that doesn't fit in our spectrum of senses, e.g. mathematical patterns, or abstract conceptual knowledge like an understanding of democracy, but maybe they can accept visualizations or memory vignettes generated by other users. Brains are adaptive and can learn to process unfamiliar stimuli.
It's all far in the future and raises a lot of strange and worrisome questions, but it's interesting to think about.
This is partially the basis of "On Intelligence" by Jeff Hawkins, which led to his founding of Numenta (disclosure: I worked there last summer).
If so, what about less intrusive input methods? I'm thinking of cameras, radars, sonars, or entirely synthetic sources of visual data, with input through a pressure-point grid on the user's skin. This could easily have a lot of applications, from supplementing rear-view mirrors in cars with something you "see" all the time to network admins feeling which machines in their server farm are being port scanned. The downside of this kind of dermal input would be the mechanical complexity of the "display" that applies pressure to your skin and the relatively low speed.
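As a rough illustration of the "display" side of this, a camera frame could be reduced to a coarse grid of pressure levels; the grid size and number of levels below are invented for the example, not based on any real haptic hardware:

```python
import numpy as np

def frame_to_pressure_grid(frame, grid_shape=(8, 8), levels=4):
    """Toy sketch: reduce a grayscale camera frame (values in [0, 1]) to a
    coarse grid of discrete pressure levels for a hypothetical dermal
    display. Grid size and level count are illustrative assumptions."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Average-pool the frame into grid cells (assumes h, w divisible).
    pooled = frame.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantize brightness into the few pressure levels skin can resolve.
    return np.clip((pooled * levels).astype(int), 0, levels - 1)

frame = np.zeros((64, 64))
frame[16:48, 16:48] = 1.0          # a bright square in the center
grid = frame_to_pressure_grid(frame)
print(grid.shape)   # (8, 8)
print(grid[4, 4])   # 3  (strongest pressure at the bright center)
print(grid[0, 0])   # 0  (no pressure at the dark corner)
```

The heavy downsampling and quantization is exactly the "relatively low speed" problem: the channel carries far fewer bits per second than the eye does.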
A relevant question: if I learned to "see" with pressure applied to one patch of my skin, could I quickly start "seeing" with a different one? Another question is whether this scales: will you be able to use multiple inputs at the same time once you've learned one?
The closest thing to this that I know of is Northpaw, a haptic compass. It's 1D, though. See http://sensebridge.net/projects/northpaw/.
To summarize my reading of the book: current research seems to back the perspective that our brains are basically making sense of chaos. There's no standardized protocol; we automatically decode patterns from noisy streams of input.
Reading the plot of the movie right after the article definitely gave me the sci-fi heebie-jeebies...
PS: Learning to communicate over a neural link would be at least as hard as learning a new language, which is why I suggest the side-channel approach. Body language is useful and much simpler than, say, Spanish.
to be convenient: http://www.nature.com/srep/2013/130228/srep01319/full/srep01...
One observation the BBC didn't note is that the decoder rat's brain ended up with a representation of the encoder's whiskers.
To be a bit hyperbolic, the rat started to think it had two bodies. Or maybe it thought it had an extra set of whiskers somewhere. I can't quite imagine what the rat's body image would be.
But on the other hand, this totally creeps me out. I am disgusted that something so potentially sinister, with such potential for misuse, is being researched.
After reading this quote:
"We will have a way to exchange information across millions of people without using keyboards or voice recognition devices or the type of interfaces that we normally use today," he said.
I just thought:
"We are the Borg. We will add your biological and technological distinctiveness to our own. Resistance is futile."
Tickle one and both know it. There's also some discussion of shared personality. In the end, they're just kids, with a little more sharing than others.
And honestly, this isn't like the nuclear bomb. The benefits of a brain-to-brain communication tool vastly outweigh the possible harms.
I also believe that becoming "Borg" is inevitable for at least a subset of the human population. Furthermore, I think it will be a good outcome, in the long run.
A wire popping out of my head makes that a bit hard.
Imagine what it does for people's empathy when they can feel each other's pain.
Especially if this technology becomes advanced to the point that you don't need the network directly, and you can just in essence record someone's emotional state then broadcast it. Instant purgatory.
Then again, if you can feel someone else's pain, presumably you can feel their pleasure too...
There is always another side. I remember seeing this sort of voyeurism explored in Tad Williams' Otherland series.
I just don't see a clear path from this to true 'communication' - unless it takes a form like Morse code that one (human) mind taps out and the other (human) mind consciously decodes. But I know nothing about this either ;)
As you say, there was additional training to then handle the encoding / decoding.
Although this would also mean the transhumanists were right all along which just annoys me.
I wonder what human brain clusters will be used for. Stock market analysis?
Brain-to-brain communication will be convenient, because we won't need physical I/O devices. However, I wonder if there will be any advantage to interfacing any deeper into the brain than the corresponding nerves... that is, perhaps direct-brain interfaces will be pretty much like conventional I/O, just without the physical I/O device (like the cochlear implants/bionic eyes/bionic arms of today). So we can't improve fundamentally over things like Google Glass/MYO.
The reason I say this is because we already have brain-to-brain communication in the form of language. This involves reducing our internal representation (whatever it is) into a serial format that follows grammatical rules - not necessarily "proper" grammar, but sufficient to enable the recipient, using shared knowledge, to decode the meaning.
This has striking parallels to network communication: we have the internal representation of data structures (e.g. objects), which are serialized to a low-level protocol (e.g. XML, JSON, YAML, Protobuf, ASN.1) which follows a higher-level grammar (explicit like XSD or implicit as in JSON). Well, not entirely surprising, since Chomsky's grammar hierarchy was based on human language.
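The parallel can be made concrete with a toy round trip through one of those serial formats (JSON here; the "thought" object is obviously just a stand-in for whatever our internal representation actually is):

```python
import json

# An "internal representation" (a Python object) is flattened to a
# serial format (JSON) under an implicit grammar, and a recipient with
# shared knowledge of that grammar decodes it back.
thought = {"concept": "democracy", "associations": ["voting", "rights"]}

wire = json.dumps(thought)      # serialize: internal -> serial stream
received = json.loads(wire)     # deserialize: serial stream -> internal

print(received == thought)      # True: the meaning survives the round trip
```

The catch, and the crux of the question below, is that both ends here share the same object model; two brains may not.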
The interesting question is whether brain-to-brain communication can be any more "direct" than the serial representation (language) we already have...
That is, are our internal representations similar enough that they can be "directly" communicated; or are they so different between individuals that converting between them will amount to going through a common grammar in order to interface between them?
Favouring direct interface is the argument that serial representation was only needed to communicate over the distance to another human - i.e. because we didn't have direct brain-to-brain contact - and it is not needed apart from that.
Also in favour of direct communication is that if individuals are reasoning about the same things in the same ways, our internal representations must be equivalent in some sense. Other hidden aspects of ourselves also turn out to have strong similarities (such as internal organs); and where they vary, they often vary in strong groups (such as blood types), so perhaps our hidden mental features will also be similar.
Against this is the observation that everyone's perspective is unique - literally, where their eyes are - but also of course their figurative perspective, resulting from cumulative experiences, genetic background, internal thoughts and randomly different wiring/interpretations. Therefore, we don't have the same internal models. We don't even have different models of the same reality, because we perceive reality differently - our models are of a different reality. And this diversity is one of the great creative strengths of our collective intelligence, that different people have different insights into the same problem.
Another argument against this is that grammar is not only used in communicating to others, but is also used to communicate to ourselves - when we talk to ourselves, debate a course of action internally, "marshal our thoughts" into words. There's even an argument that the full power of human thought requires language - it's not just a serialization format, but is thought itself. By this view, even if we can communicate directly, it would only be associations, senses, emotions, recollections etc. We couldn't communicate an argument, for example.
SUMMARY: our internal representations will differ at least to the extent that we interpret things differently (i.e. we live in different realities). We will probably fall into groups, like blood types, and also by profession/training/personality/social group - i.e. between persons "on the same wavelength". This limits how "directly" we can communicate, i.e. without requiring a shared intermediate general-purpose language.
So much for Matrix learning style.