Hacker News
“Social network” of brains lets people transmit thoughts to each other’s heads (technologyreview.com)
153 points by rbanffy on Oct 4, 2018 | 56 comments



> “A cloud-based brain-to-brain interface server could direct information transmission between any set of devices on the brain-to-brain interface network and make it globally operable through the Internet, thereby allowing cloud-based interactions between brains on a global scale,” Stocco and his colleagues say.

I bet they struggled to keep a straight face saying that.


Disappointed they didn't fit blockchain in there somewhere


MIT student: reinvents wheel

MIT press office: "MIT researchers develop matter transporter"


Reading further: The motion of the mechanism has been described as a "harmonious coordination of trochoid trajectories".


This reminds me of a cool thought that I've read someplace else: "imagine a technology so advanced that it can transmit thoughts wirelessly between humans and even change the actual layout of the neurons in our brains". We already do that with our voices...


> We already do that with our voices....

But that requires one side speaking. If they refuse to speak, then we can't know their thoughts. And people can also tell lies with their voices. Now if we could somehow interface directly with the brain...

Voice is like an API the person provides to others, like Twitter's API for accessing user data. But Twitter can shut down the API, or manipulate it to show us what they want us to see. But imagine if you had a direct link to their database?


Something about putting it in technical terms like that made this idea 10x more unsettling.


People will just learn to lie with their brains.


I would argue that this has already happened, and the evidence that eventually proves it will be truly horrifying.

Think about a person--someone you may already know well--that actually believes his own bullshit. Such a person armed with a brain-to-brain interface is an existential threat to objective reality.


Sociopath brain?


I think the point is to eliminate the extremely inaccurate translation from thought to speech back to thought.


Who's to say that there isn't translation to do from thought to thought?

Kind of like we have different voices, accents, speech patterns, languages and manners of speech, I can't imagine that our thoughts, even our most accurate representations of the simplest picture (e.g. "think of a white square"), aren't similarly different.


Feynman had a party trick where he could time a minute in his head by counting the seconds, all while counting up the number of lines in a newspaper article.

A friend of his tried and couldn't do it, but was able to hold a conversation while timing a minute, which Feynman couldn't do.

They realised that while Feynman was counting the minute, he was counting verbally (but silently), while the friend was visualising a counter. In other words, Feynman was using the speech centre, so he could do a visual task at the same time, while the friend was using the visual centre, so he could do a verbal task.

Something as simple as counting in their head was done completely differently.


That is an extremely interesting experiment; I had never heard about it before. Searching for it, I found the original article by Feynman (PDF scan): http://calteches.library.caltech.edu/607/2/Feynman.pdf


His visual thinker friend is https://en.wikipedia.org/wiki/John_Tukey


Agreed. Furthermore, thought has linked layers and context. Even if you could perfectly transmit an island idea from one person to another, it's unlikely that it would happen to contain the rich subjective context to make it nearly as useful to the recipient.

This is why learning and maturation take so much damned time.


How much of thought is encoded as speech to begin with?


Reminds me of this Futurama scene:

[Fry and Leela after Fry had an advertisement in his dream]

Leela: Didn't you have ads in the 21st century?

Fry: Well sure, but not in our dreams. Only on TV and radio, and in magazines, and movies, and at ball games... and on buses and milk cartons and t-shirts, and bananas and written on the sky. But not in dreams, no siree.

If this technology gets refined, this might even be a reality in, what, 50 years?


Or from the final episode of Firefly:

Jayne: Well, I don't like the idea of someone hearin' what I'm thinkin'.

Inara Serra: No one likes the idea of hearing what you're thinking.


Love it!

Seems it works for nearly all social media sites.


Small nitpick: it was the 20th century. Fry got cryogenically frozen on the 31st of December, 1999!


Even better: the XXI century doesn't really start until 2001, because both:

-Each century is 100 years long.

-There was no zero year - only the first.

Fry didn't get to see the XXI century until spoiler alert.


This is so freaking cool. I literally can't believe this is happening, and I am so excited to be involved in technology at a time when such advanced capabilities are right around the corner. I wonder how tricky this is to actually build. I wonder if you could use some of the stuff from OpenBCI and build one of these BrainNets using available resources. Anyone want to try with me? I can't believe they were able to accomplish this using only 32 channels; that seems pretty unbelievable.

Probably out of reach, but it would be good for this kind of tech to not just get gobbled up in the regular patent storm, and for some kind of implementation to stay in the public domain as a result of putting this together. Oh, to be a CS student at the University of Washington right now... must be a fun time. I would trade an arm and a leg to be at the forefront of this domain; it's the culmination of so many different fundamental breakthroughs that have been made over the last 20 years. What a time to be alive.

I can't wait to see where this + AR/VR will take us, but I am also quite concerned about the cavalier attitude that megacorp tech companies take toward privacy issues, and about the implications of true telepathy for social interactions on a broader level.


Like much of the research on BCIs, this one is more a feat of engineering than of science. What is the ultimate goal of the study? We know that flashing stimuli cause a response in the brain (this is called Steady State Visual Evoked Potential, or SSVEP, and is studied a lot). We know that a TMS device can cause the receiving person to perceive flashes. This game could be fun and all, but it is a technology demo and the ability to say "I did this first".

Another thing: SSVEP has two modes of action, one based on vision and the other on attention. That is, if somebody is just looking at a flashing light, the signal is clearly visible in the occipital region of the scalp, but these are _not_ brain signals; the signal comes from the optic nerve. It is (according to a paper) possible to consciously control the SSVEP response amplitude (for example, with two flashing targets in the peripheral vision and an eye-tracker that ensures the subject is not actively looking at a target, merely concentrating on one). This response will be of lower amplitude but will come from the brain.
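For anyone curious what SSVEP detection looks like in practice, here's a minimal sketch of the usual frequency-tagging approach: flicker two targets at different rates and pick whichever frequency dominates the occipital channel's spectrum. The 15/17 Hz targets, 250 Hz sampling rate, and synthetic signal are illustrative assumptions, not the actual setup from the paper.

```python
import numpy as np

def classify_ssvep(signal, fs, f_targets=(15.0, 17.0), harmonics=2):
    """Pick which flicker frequency dominates a single EEG channel.

    Sums spectral power at each candidate frequency (and its
    harmonics) and returns the index of the strongest target.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    scores = []
    for f in f_targets:
        s = 0.0
        for h in range(1, harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f))  # nearest FFT bin
            s += power[idx]
        scores.append(s)
    return int(np.argmax(scores))

# Synthetic 2-second "occipital" trace: a 15 Hz response buried in noise.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 15.0 * t) + rng.normal(0.0, 1.0, t.size)
print(classify_ssvep(eeg, fs))  # prints 0 -> the 15 Hz target won
```

Real pipelines use fancier detectors (e.g. canonical correlation across several channels), but the core idea is just this spectral comparison.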


Cloud-based brain-to-server? So finally I can spend my whole day swooning over Justin Bieber while some coffee barista in the city can't stop thinking about diesel injection timing faults for the Peterbilt I was working on.


Still in its infancy, it seems, but frankly I'd hate for this to develop into a "thing". It's already hard to switch off in this crazy world. Next up: thought advertisements.


This is scarier than you think. So long as most people are okay with it, you may not have much of a choice in whether to participate or not. (Just as you mostly can’t choose to pay for web content instead of watching ads.)


I wonder what a thought ad-blocker would look like. Maybe a tin foil hat :)


A very good article on this topic: https://waitbutwhy.com/2017/04/neuralink.html


Even though this is obviously not as far down the road as the article makes it out to be, it's an amazing achievement that it works at all. I'm pretty surprised that something like this is possible. Considering that, to my layman's mind, this sounds like you are "writing" on the brain, I'm gonna pass on using this myself, though. It seems possible to alter something in the human brain that you really didn't want to tinker with.


At risk of sounding like a broken record, I want to point out that you don't need fancy hardware to experiment with this sort of thing. Simple GSR (galvanic skin response) sensors and a bit of self-hypnosis are enough to get some very interesting effects and communication.

(Also, you have fancy hardware: your eyes and face, ears and voice, etc. It's easy to lose sight of the fact that still today the most sophisticated hardware in the room is the human nervous system. At least for a few more years, eh?)


I know it's gauche to reply to your own messages, but I wanted to be less coy about what I'm saying:

Make nine sensors that can detect twitches of your fingers. It doesn't matter what modality you use (motion, sound/vibration, electrical), but you want to be able to tune the sensitivity of each channel independently.

Connect the sensors to your eight fingers and one thumb.

Use hypnosis[1] to ask your unconscious mind to output binary digits on the fingers and a clock/ready signal on the thumb.

Presto. You have an 8-bit parallel port from your mind to your microcontroller.
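The latching logic for that parallel port is straightforward. A toy sketch, assuming your hardware hands you a clock line plus eight data lines; the (clock, bits) samples below are simulated stand-ins for real GPIO/ADC reads:

```python
# Hypothetical sketch: turn nine twitch sensors (8 data fingers +
# 1 thumb clock) into a byte stream, like a software-strobed
# parallel port.

def decode_frames(samples):
    """samples: iterable of (clock, bits) tuples, bits = 8 booleans.

    Latches one byte on each rising edge of the clock line,
    least-significant bit first.
    """
    prev_clock, out = 0, []
    for clock, bits in samples:
        if clock and not prev_clock:  # rising edge: data lines are valid
            out.append(sum(b << i for i, b in enumerate(bits)))
        prev_clock = clock
    return out

# Two simulated strobes: 0x41 ('A'), then 0x21 ('!').
frames = [
    (0, [1, 0, 0, 0, 0, 0, 1, 0]),
    (1, [1, 0, 0, 0, 0, 0, 1, 0]),   # latch 0x41
    (0, [1, 0, 0, 0, 0, 1, 0, 0]),
    (1, [1, 0, 0, 0, 0, 1, 0, 0]),   # latch 0x21
]
print(bytes(decode_frames(frames)))  # prints b'A!'
```

Edge-triggered latching matters here: finger twitches are slow and jittery, so you only sample the data channels at the instant the thumb says they're ready.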

With a camera and modern ML you would only need two bits, a yes/no and the clock/ready signal, to "train" your computer to read information from your facial expressions. And of course this can be extended to full biometric sensor suites. In fact, if you're wearing e.g. a Fitbit, you already have everything you need to set up a pseudo-telepathic UI. You just have to put some off-the-shelf software together and get your unconscious mind onboard.

Hardware is not the limiting factor.

[1] Self-hypnosis is easy to learn, or you can hire a hypnotist to induce a trance and give you a post-hypnotic suggestion to be able to re-enter trance at will using a self-trigger (often a "mudra"[2] or a brief counting ritual).

[2] https://en.wikipedia.org/wiki/Mudra


> These tools include electroencephalograms (EEGs) that record electrical activity

As soon as I read "EEG", I rolled my eyes -- another clickbait title showcasing a usage of EEGs that, while interesting, is just a novelty. I don't think people are interested in communicating over a two-character alphabet. The "Yo" app, what happened to that again? I don't think the "Yo" app would have been successful, even if you could do it right from your head.


More like a brain telegraph than a brain social network. Also, the sender doesn't actually think, just focuses attention on one of two blinking lights.


'Siri, I'm getting pulled over'

beep, beep

'Yes, Dave, remain calm - the thought police will be with you shortly'


How do they make sure their communicator isn't working through the wrong mechanism? It could be tricky to verify that they aren't just flipping the rotate bit when their head gets zapped, or something else noticeable but not interesting.


We also do this via air pressure and the vocal tract with words.


Am I the only one who doesn't want this? I am spammed enough as it is.


The headline could just as well refer to the practice of talking...


I don't know what to think of it. The article grossly misinterprets the experiment as some sort of magical thought interface, but in fact it's just a utilization of EEG devices and software. But then I read the comment of one of the engineers and I cringe at "cloud-based brain-to-brain server". This sensationalism in science and engineering has to stop.


While I understand that it's a bit sensationalized, at the end of the day they're transmitting 1 bit of information at a very slow rate. While that's only 1 bit more than 0, it's still an infinite percentage increase, and it still leaves me super excited.

Despite my bs-filter having toned it down and given me just the facts (it's 3 people in a room with wires on their heads playing Tetris, with 2 looking at lightbulbs to send signals), it's still got me pumped.

I really believe BCIs will be game changers in the way we communicate with electronics (if not with other humans), and I for one would love to play a game of Tetris over the internet with an unknown mind.

Edit: even at its current transmission rates, I think this tech could have far-reaching implications in a better form factor. 1 slow bit is all I need to take a picture, lock/unlock my car, flip the light switch, turn the tap, open/close the door, start the washing machine, turn the TV on, start the car, close the window, etc.
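One slow bit really can drive a whole menu of commands, via the "switch scanning" pattern long used in assistive technology: the UI highlights each option in turn, and the single bit means "do the highlighted one". A toy sketch (the command list and scanning order are made up for illustration):

```python
# Switch scanning: multiplex one yes/no channel across many commands
# by cycling a highlight and letting the user fire on the one they want.

COMMANDS = ["take picture", "unlock car", "lights", "start washer"]

def scan_select(bit_stream, commands=COMMANDS):
    """bit_stream yields one bool per scan step; True selects the
    currently highlighted command. Returns None if the stream ends
    without a selection."""
    for step, bit in enumerate(bit_stream):
        if bit:
            return commands[step % len(commands)]
    return None

# The user lets three highlights pass, then fires on the fourth.
print(scan_select(iter([False, False, False, True])))  # prints start washer
```

The cost is latency (n/2 scan steps on average for n commands), which is exactly the trade-off you'd expect from a 1-bit channel.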


The 1 bit limit doesn't look like a technological barrier, but a we-don't-feel-comfortable-cutting-into-human-brains-for-research barrier. Of mice and monkeys, the story is more than a bit different: https://www.nature.com/articles/srep11869

This is coming. So fast.


This was a great read, thanks. I found a good precursor study on brain-to-brain interfaces (BTBI) using intracortical microstimulation (ICMS); if you are not familiar with this research, it's a good place to start: https://www.nature.com/articles/srep01319


> 3 people in a room with wires on their head playing tetris

It's not really Tetris; it's so simplified as to be reduced to a simple press/don't-press action.

Also, the whole contraption is the union of two separate devices: an input device that picks up a single codified signal from an EEG (but could more simply and effectively track eye movements for the same result), and another one that produces a visible flicker of light in a person's FOV, albeit not passing through the retina. That's it. It doesn't "transmit thoughts" at all; it transmits coded signals at a much slower rate than me typing on a keyboard right now.


Yeah, we really don't need the bandwidth until media can be sent/received/interpreted directly into the brain.

Hooking up a portable BCI to a Raspberry Pi-equivalent as an intermediary should be all we need to really start connecting the brain electronically to the outside world in useful ways.


Or you could stream information in full 4K instantly through one's eyes.


If your eyes work.


Sure. If someone wants to stream something into your head, the assumption is that the person can see.


Eyeballs are money. Ad-supported media has to go away.


Yup. Why bother with eyeballs if you can stream ads to the brain directly?


Paid trusted curation?


My prediction: if you apply Moore's law to this new brain-computer interface, we will eventually enter the Matrix!


That's one over-hyped article, if I ever saw one.

TL;DR: it lets you send two signals that have two different functions in a Tetris game (move left or rotate), accomplished by looking at different strobing lights that cause slight changes in brain activity.


Yeah, I really like that part:

> A cloud-based brain-to-brain interface server could direct information transmission...

They can barely transmit 1 bit per second, with no prospect of increasing the data rate, yet they start talking BS about the cloud.


One thing I'm always impressed by is what the internet is - what it most fundamentally is.

It's nothing but two or more people connected by a line, able to send a reliable message of 0 or 1 to each other. That such a thing would revolutionize the world and nearly every aspect of people's lives would have sounded unreasonably hyperbolic within the lifetime of many people still alive today.

The most unremarkable-sounding achievements are somehow often what lead to the most revolutionary achievements and developments. On the other end of the spectrum, the most revolutionary-sounding achievements seem to invariably sputter off into nothing.





