I bet they struggled to keep a straight face saying that.
MIT press office: "MIT researchers develop matter transporter"
But that requires one side speaking. If they refuse to speak, then we can't know their thoughts. And people can also tell lies with their voices. Now if we could somehow interface directly with the brain...
Voice is like an API the person provides to others, like Twitter's API for accessing user data. But Twitter can shut down its API, or manipulate it to show us only what they want us to see. Now imagine you had a direct link to their database.
Think about a person (someone you may already know well) who actually believes his own bullshit. Such a person armed with a brain-to-brain interface is an existential threat to objective reality.
Just as we have different voices, accents, speech patterns, languages and manners of speech, I can't imagine our thoughts, even our most accurate representations of the simplest picture (e.g. "think of a white square"), aren't similarly different.
A friend of his tried and couldn't do it, but was able to hold a conversation while timing a minute, which Feynman couldn't do.
They realised that while Feynman was counting the minute, he was counting verbally (but silently), while the friend was visualising a counter. In other words, Feynman was using the speech centre, so he could do a visual task at the same time, while the friend was using the visual centre, so he could do a verbal task.
Something as simple as counting in their head was done completely differently.
This is why learning and maturation take so much damned time.
[Fry and Leela after Fry had an advertisement in his dream]
Leela: Didn't you have ads in the 21st century?
Fry: Well sure, but not in our dreams. Only on TV and radio, and in magazines, and movies, and at ball games... and on buses and milk cartons and t-shirts, and bananas and written on the sky. But not in dreams, no siree.
If this technology gets refined, this might even be a reality in what, 50 years?
Jayne: Well, I don't like the idea of someone hearin' what I'm thinkin'.
Inara Serra: No one likes the idea of hearing what you're thinking.
Seems it works for nearly all SM sites.
-Each century is 100 years long.
-There was no zero year - only the first.
Fry didn't get to see the XXI century until spoiler alert.
Probably out of reach, but it would be good for this kind of tech not to just get gobbled up in the regular patent storm, and for some kind of implementation to stay in the public domain as a result of putting this together. Oh, to be a CS student at the University of Washington right now... must be a fun time. I would trade an arm and a leg to be at the forefront of this domain; it's the culmination of so many different fundamental breakthroughs made over the last 20 years. What a time to be alive.
I can't wait to see where this plus AR/VR will take us, but I'm also quite concerned about the cavalier attitude that megacorp tech companies take toward privacy, and about the implications of true telepathy for social interactions on a broader level.
Another thing: SSVEP has two modes of action, one based on vision and the other on attention. That is, if somebody is just looking at a flashing light, the signal is clearly visible in the occipital region of the scalp, but this is _not_ a brain signal; it comes from the optic nerve. It is (according to a paper) possible to consciously control the SSVEP response amplitude (for example, with two flashing targets in the peripheral vision and an eye-tracker ensuring that the subject is not actively looking at a target, merely concentrating on one). This response will be of lower amplitude but will come from the brain.
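For anyone curious what "the signal is clearly visible" means in practice: SSVEP detection often boils down to comparing spectral power at each stimulus flicker frequency. Here's a toy sketch of that idea; the function name, the synthetic "EEG" trace, and all signal parameters (250 Hz sampling, 12/15 Hz targets) are invented for illustration, not taken from the paper.

```python
import numpy as np

def ssvep_target(eeg, fs, stim_freqs):
    """Guess which flickering target is being attended to, by comparing
    spectral power at each stimulus frequency in one occipital channel."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in stim_freqs:
        # integrate power in a narrow band around the stimulus frequency
        band = (freqs > f - 0.5) & (freqs < f + 0.5)
        powers.append(spectrum[band].sum())
    return int(np.argmax(powers))

# Synthetic demo: 2 s of "EEG" dominated by a 15 Hz response plus noise.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(len(t))
print(ssvep_target(eeg, fs, [12.0, 15.0]))  # → 1 (the 15 Hz target)
```

The attention-driven (non-foveated) response described above would simply show up as a smaller power difference, which is why it's a harder classification problem.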
(Also, you have fancy hardware: your eyes and face, ears and voice, etc. It's easy to lose sight of the fact that still today the most sophisticated hardware in the room is the human nervous system. At least for a few more years, eh?)
Make nine sensors that can detect twitches of your fingers; it doesn't matter what modality you use (motion, sound/vibration, electrical), but you want to be able to tune the sensitivity of each channel independently.
Connect the sensors to your eight fingers and one thumb.
Use hypnosis to ask your unconscious mind to output binary digits on the fingers and a clock/ready signal on the thumb.
Presto. You have an 8-bit parallel port from your mind to your microcontroller.
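The decoding side of that "parallel port" is genuinely trivial. Here's a sketch of the clock-latched byte read, with the sensor stream faked as (clock, bits) tuples since the actual hardware interface is left open above:

```python
def read_bytes(samples):
    """Decode a stream of (clock, data_bits) samples into bytes,
    latching the 8 data bits on each rising edge of the clock line.
    `samples` stands in for whatever the nine sensors actually provide."""
    prev_clock = 0
    out = []
    for clock, bits in samples:
        if clock and not prev_clock:  # rising edge: data is ready
            byte = 0
            for b in bits:  # MSB first
                byte = (byte << 1) | (b & 1)
            out.append(byte)
        prev_clock = clock
    return out

# Two byte transfers: 0b01000001 ('A') and 0b01111010 ('z').
stream = [
    (0, [0] * 8),
    (1, [0, 1, 0, 0, 0, 0, 0, 1]),  # clock high: latch 0x41
    (0, [0] * 8),
    (1, [0, 1, 1, 1, 1, 0, 1, 0]),  # latch 0x7A
]
print(read_bytes(stream))  # → [65, 122]
```

The hard part, of course, is the other end of the wire: getting the unconscious mind to drive those nine channels reliably.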
With a camera and modern ML you would only need two bits, a yes/no and the clock/ready signal, to "train" your computer to read information from your facial expressions. And of course this can be extended to full biometric sensor suites. In fact, if you're wearing e.g. a Fitbit, you already have everything you need to set up a pseudo-telepathic UI. You just have to put some off-the-shelf software together and get your unconscious mind onboard.
Hardware is not the limiting factor.
Self-hypnosis is easy to learn, or you can hire a hypnotist to induce a trance and give you a post-hypnotic suggestion that lets you re-enter trance at will using a self-trigger (often a "mudra" or brief counting ritual).
As soon as I read "EEG", I rolled my eyes -- another clickbait title showcasing a usage of EEGs that, while interesting, is just a novelty. I don't think people are interested in communicating over a two-character alphabet. The "Yo" app, what happened to that again? I don't think the "Yo" app would have been successful, even if you could do it right from your head.
'Yes, Dave, remain calm - the thought police will be with you shortly'
Despite my BS-filter toning it down to just the facts (three people in a room with wires on their heads playing tetris, two of them looking at lightbulbs to send signals), it's still got me pumped.
I really believe BCIs will be game changers in the way we communicate with electronics (if not with other humans), and I for one would love to play a game of tetris over the internet with an unknown mind.
Edit: even at its current transmission rates, I think this tech could have far-reaching implications in a better form factor. One slow bit is all I need to take a picture, lock/unlock my car, flip the light switch, turn the tap, open/close the door, start the washing machine, turn the TV on, start the car, close the window, etc.
This is coming. So fast.
It's not really tetris; it's so simplified as to be reduced to a simple press / don't-press action.
Also, the whole contraption is the union of two separate devices: an input device that picks up a single codified signal from an EEG (but could more simply and effectively track eye movements for the same result), and another that produces a visible flicker of light in a person's FOV (albeit not passing through the retina). That's it. It doesn't "transmit thoughts" at all; it transmits coded signals at a much slower rate than me typing on a keyboard right now.
Hooking up a portable BCI to a raspberry pi-equivalent as an intermediary should be all we need to really start connecting the brain electronically to the outside world in useful ways.
TL;DR: it allows sending two signals that have two different functions in a tetris game (move left or rotate), accomplished by looking at different strobing lights that cause slight changes in the brain's activity.
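Once the EEG classifier has picked a target, the "game" side is just a two-command dispatcher. A toy sketch of that mapping (the action names and game state are made up; the article doesn't describe its implementation):

```python
# Each strobing target maps to one game command; the classifier just
# reports which target (if any) was detected in the last EEG window.
ACTIONS = {0: "move_left", 1: "rotate"}

def dispatch(detected_target, game_state):
    """Apply one decoded command to a toy game state:
    (column index, piece orientation in degrees)."""
    col, orientation = game_state
    action = ACTIONS.get(detected_target)
    if action == "move_left":
        col = max(0, col - 1)
    elif action == "rotate":
        orientation = (orientation + 90) % 360
    return (col, orientation)

state = (4, 0)
state = dispatch(0, state)  # sender attended the "left" strobe
state = dispatch(1, state)  # sender attended the "rotate" strobe
print(state)  # → (3, 90)
```

This makes the "slower than typing" criticism above concrete: each command costs one full stimulus/detection round-trip.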
> A cloud-based brain-to-brain interface server could direct information transmission...
They can barely transmit 1 bit per second, with no prospect of increasing the data rate, yet they're already talking BS about the cloud.
It's nothing but two or more people connected by a line, able to send a reliable message of 0 or 1 to each other. That such a thing would revolutionize the world and nearly every aspect of people's lives would have sounded unreasonably hyperbolic within the lifetime of many people still alive today.
The most unremarkable-sounding achievements are somehow often what lead to the most revolutionary developments. At the other end of the spectrum, the most revolutionary-sounding achievements seem to invariably sputter out into nothing.