One of the biggest challenges with decoding brain signals is getting a large number of sensors that detect voltages from a very localized region of the brain. This study was done with ECoG (electrocorticography), which involves implanting small electrodes directly on the surface of the brain. Nearly all consumer devices use EEG (electroencephalography), which involves putting sensors on the surface of the skin.
Commercially available ECoG is highly unlikely, as it requires extremely invasive brain surgery. For ethical reasons, the implants in the study were likely placed to help diagnose existing life-threatening medical issues.
Decoding speech from EEG won't work as well as ECoG for a number of reasons. First, the physical distance between the sensors and the brain means the signals you pick up aren't localized. Second, the skin and skull are great low-pass filters and attenuate the really interesting signals at higher frequencies, roughly 100 Hz to 2 kHz. Additionally, these signals have very low power because they're correlated with neuronal spiking.
ECoG does a really good job picking up these signals because the sensor is literally on the surface of the brain. It's really hard to pick them up reliably with EEG.
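To make the low-pass point concrete, here is a minimal sketch on purely synthetic data (the sampling rate, cutoff, and noise level are all assumptions) comparing power in that 100 Hz to 2 kHz band before and after a crude low-pass standing in for skull and skin:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 4000                      # Hz; enough to resolve content up to 2 kHz
rng = np.random.default_rng(0)

# Stand-in "cortical" signal with broadband high-frequency content.
cortical = rng.normal(size=fs * 10)

# Crude skull/skin analogue: a ~40 Hz low-pass plus sensor noise.
b, a = butter(4, 40 / (fs / 2))
scalp = filtfilt(b, a, cortical) + 0.1 * rng.normal(size=cortical.size)

def band_power(x, lo, hi):
    """Summed PSD between lo and hi (arbitrary units)."""
    f, pxx = welch(x, fs=fs, nperseg=2048)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()

print("cortical 100 Hz-2 kHz power:", band_power(cortical, 100, 2000))
print("scalp    100 Hz-2 kHz power:", band_power(scalp, 100, 2000))
```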
There's been a forty-year evolution in these techniques. The cheap, noisy technique might still prevail if researchers keep refining it through incremental improvements.
I don't think EEG can give the spatial and temporal resolution we need to extract the necessary information for thought decoding (or encoding).
Working with a NN, the brain can probably negotiate the relevant parameters (let's say frequency modulation on one channel with four bins, across 4 spots on the brain); that's 4^4 = 256 values per whatever timebox of resolution you get. That's enough to encode English letters, which are themselves probably an underprovisioned mechanism of data transfer.
I wouldn't be surprised if biofeedback allows the user to retrain their brain to encode information in low-frequency signals; the raw channel capacity necessary to transmit text is quite low, and the reader NN can also use context if hooked up to a recurrent submodel.
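The back-of-the-envelope arithmetic, treating the bin and site counts above as assumptions:

```python
import math

bins_per_site = 4                      # hypothetical FM bins per channel
sites = 4                              # hypothetical spots on the brain
states = bins_per_site ** sites        # 4**4 = 256 distinguishable states
print(math.log2(states))               # 8.0 bits per timebox

# Raw English letters cost log2(26) ~ 4.7 bits each; with a recurrent
# model exploiting context, text needs roughly 1-1.5 bits per character.
print(math.log2(26))                   # ~4.70
```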
Yeah, you're not putting that in my brain.
This is invasive to the extreme, and seems to open the door for violations of people's intimate thoughts down the road.
You may not think about it much now, but if you pay any attention to things like intrusive thoughts, or even have to carefully maintain a public face in the workplace, it should not be difficult to see why these technologies are legitimately dangerous even as read-only systems.
The real nightmare begins when you finally get fed up with Read-Only and figure out how to write in order to potentially mutate mental state.
I'm normally pretty forward-thinking in terms of embracing the march of technological progress. However, the last decade or so has shown that we as a society have let our grasp exceed the socio-ethical-moral framework for using it responsibly; and the potential abuse a full read/write neural interface would enable is one of the few things that has managed to attain a "full-stop" in my personal socio-ethical-moral framework.
Not to sound like that adult, but we're just not ready.
Before anyone points out that the same moral outrage probably occurred with the printing press: there is a big damn difference between changing someone's mind through pamphlets and having a direct link to the limbic system to tickle on a whim. We do a very bad job of correctly estimating the long-term effects of technological advancement; just look at how destructive targeted advertising has been.
I didn't reach my conclusion from an existing preconception or predisposition, either. I used to be massively for this particular advancement. Only through a long time spent reflecting on it has my viewpoint done a 180.
I'm aware of all of the positive applications for the handicapped, locked-in, and paralyzed; but I'm still reluctant to consider embracing it for their sake when I've seen how prone our legal system is to taking a crowbar to a minor exception or precedent.
Maybe I've just been in the industry long enough not to trust tech people to keep society's overall well-being and stability at heart. Maybe I'm becoming a Luddite coward as I get older. I don't know, and I ask myself every day whether I'm being unreasonable. The answer hasn't changed in a long while, though, even though I do keep seeking out opportunities to challenge it.
I hope that helps, and doesn't make me sound like too much of a nut.
I recently read a short story by Ted Chiang that likened the development of writing to a fundamental cybernetic enhancement of the brain. I found it quite enlightening, as I had never thought about how writing changes how we see ourselves and our environment. Our memories are imperfect, inaccurate, and amplify the biases we have, while writing loses much less information.
> just look at how destructive targeted advertising has been
Can you elaborate? Targeted advertising doesn't even make my top 100 of destructive technologies.
To clarify: we've gone from general audience profiling to the employment of broadband sensors for surreptitious collection of data from which to make ad-serving decisions. There are patents for installing microphones so users can scream a brand name at a TV to skip a commercial, and there's the practice of frame-sampling viewed content on smart TVs. These intrusions into personal privacy exist purely to forward the interests of the ad servers, and they also create a vulnerability: your digital footprint becomes available to anyone else willing to pay, or to ask, for it. You can't have that granular ad targeting without implementing further surveillance capabilities.
Furthermore, there are additional consequences in the filter bubbles that get created. Without your being aware of it, the advertising industry will by default attempt to skew your overall experience toward what it thinks you want to see, rather than what is actually out there or what you asked for. Letting these algorithms run unchecked, without instilling an inoculative knowledge of their tendency to shepherd you far astray given enough time, leads to us throwing around phrases likening our society to being "post-truth", and to multiple recorded instances of widespread, population-level sentiment engineering.
So we've garbage-binned any semblance of a common worldview, and invited Orwellian tiers of data collection into our lives, so that other people can stand a chance at maybe serving us an ad we weren't even actively looking for, in the hopes of modifying our behavior to make a purchase happen, so that they can generate revenue off of our eyeballs and content creation.
Make no mistake. Targeted advertising is a blight. It's one of those things that sounds reasonable, innocent, and possibly even helpful on the surface; but quickly sours once you start digging into the details that make it happen.
I understand some people may feel they get value out of such an arrangement; that having that ad pop up at that time genuinely makes their life easier. I ask the following, however: has an ad ever taught you anything that dedicated research, and purposeful exercise of your will to purchase, couldn't teach you? Has your experience searching and trying to share information online not been adversely affected, in that people's searches of the same terms no longer have any real consistent base? The answer for me in both cases is "no". Throw in the fact that, if I don't regularly clean out every last trace of client-side state, my wanderings through cyberspace are painstakingly mapped and integrated by an industry hell-bent on coaxing every last shred of potential value out of my mere existence, with no regard for the dangers of accumulating all that data in one place.
Nowadays, you have rumblings that we should be using these technical solutions as the basis of social/political policy, and half the people making the assertion, one way or the other, aren't looking at the whole picture.
I don't want the world to time-freeze at early-2000s technology, by a long shot. Let me be clear on that. I do, however, believe we need to seriously take a look at our capabilities and work on creating a cohesive, widespread set of ethical/moral dicta that jibe with what we purport our most valued cultural aspects to be as a society. Yes, I understand that may mean converging on things I don't agree with, and that's fine. I just want as many people as possible to have the whole picture, and I don't think that is actually the case right now.
Also, see the information warfare post from a sibling poster. Information, and tactically imposed voids of information, are just as weaponizable as any object. Over longer timescales, no doubt. Still viable, though.
Shear forces cause glial scarring?
Don't get me wrong, I love Elon as much as the next fanboy but there are serious ethical implications at play here that you can't just engineer your way around.
I am way out of my element talking about brain surgery and sensors. However, one thing that I do well is say "you shouldn't bet against neural networks", which is a great way to be right on a few-year time horizon.
Your point about invasive implants being impractical for commercial use made me wonder, so I searched the first phrase that popped into my head: "non-invasive brain-computer interface". Looks like there's promising research on significantly improving the sensitivity/resolution of EEG signals.
- First Ever Non-invasive Brain-Computer Interface Developed - https://www.technologynetworks.com/informatics/news/first-ev... (2019)
- Noninvasive neuroimaging enhances continuous neural tracking for robotic device control - https://robotics.sciencemag.org/content/4/31/eaaw6844
Still, your prediction of "likely decades" sounds realistic. I'm hoping for an affordable, non-invasive brain-computer interface to be as widely used as the keyboard, mouse, or microphone.
Extremely crude, but it worked. IIRC the instructions were "to go left, think 'go left'; to go right, relax". After some number of sessions (I think I remember it being "a lot"), the user's brain would automatically do the correct thing.
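My guess at the mechanism: relaxing raises alpha-band power, so the decoder could be as simple as a threshold on relative alpha power. A toy sketch; the sampling rate, band edges, and threshold are all assumptions:

```python
import numpy as np
from scipy.signal import welch

FS = 256               # Hz, assumed EEG sampling rate
ALPHA = (8, 12)        # Hz, the classic alpha band
THRESHOLD = 0.3        # tuned per user over the training sessions (assumed)

def relative_alpha_power(window):
    """Fraction of total power in the alpha band for one EEG window."""
    f, pxx = welch(window, fs=FS, nperseg=FS)
    band = (f >= ALPHA[0]) & (f <= ALPHA[1])
    return pxx[band].sum() / pxx.sum()

def decide(window):
    # Relaxing raises alpha power, so high alpha means "go right".
    return "right" if relative_alpha_power(window) > THRESHOLD else "left"

print(decide(np.random.default_rng(0).normal(size=2 * FS)))
```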
If I may add: EEG is also a pain to set up (you have to apply gel, position the cap correctly, etc.), and it's very easy to pollute your signal by merely moving a bit or even blinking.
DLSS kinda works because we "know" what each thing in a photo is.
EEG to ECoG would be like trying to figure out a painting (which could be anything, by any painter) from a significant distance, through frosted glass.
I thought the reason DLSS works is that the same rendering algorithm is used to generate the low-resolution image and the high-resolution image, and the neural network merely learns a filter between the two.
Take a patient with ECoG implant(s), put EEG sensors on the patient, and hit record. You now have the same rendering mechanism (the brain) generating a low resolution signal (EEG) and a high resolution signal (ECoG).
However, back to DLSS: if the low-resolution signal is a single pixel, then generating a 4K image from just that single pixel may not be very fruitful.
Still, it would be interesting to see an attempt at using a generative adversarial network (GAN) to generate an ECoG from an EEG. And if it doesn't work, then make a determination of how much more EEG sensitivity is needed before it will work.
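To sketch what that might look like: a pix2pix-style conditional GAN over 1-D signal windows, trained on simultaneously recorded pairs. This is a toy sketch, not anyone's published method; the channel counts, window length, architectures, and the random tensors standing in for real recordings are all assumptions.

```python
import torch
import torch.nn as nn

N_EEG, N_ECOG, T = 32, 128, 1024   # channel counts, window length (assumed)

# Generator: maps an EEG window to a synthetic ECoG window.
G = nn.Sequential(
    nn.Conv1d(N_EEG, 256, 9, padding=4), nn.ReLU(),
    nn.Conv1d(256, 256, 9, padding=4), nn.ReLU(),
    nn.Conv1d(256, N_ECOG, 9, padding=4),
)

# Conditional discriminator: sees the EEG alongside a real or fake ECoG.
D = nn.Sequential(
    nn.Conv1d(N_EEG + N_ECOG, 256, 9, stride=4), nn.LeakyReLU(0.2),
    nn.Conv1d(256, 256, 9, stride=4), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(eeg, ecog):
    """One pix2pix-style update on a batch of simultaneously recorded pairs."""
    fake = G(eeg)
    ones = torch.ones(eeg.size(0), 1)
    zeros = torch.zeros(eeg.size(0), 1)

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    d_loss = bce(D(torch.cat([eeg, ecog], 1)), ones) + \
             bce(D(torch.cat([eeg, fake.detach()], 1)), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D, plus an L1 term to stay close to the recorded ECoG.
    g_loss = bce(D(torch.cat([eeg, fake], 1)), ones) + \
             100 * nn.functional.l1_loss(fake, ecog)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Placeholder batch standing in for paired EEG/ECoG recordings.
train_step(torch.randn(8, N_EEG, T), torch.randn(8, N_ECOG, T))
```

The L1 term is what anchors the generator to the actually recorded ECoG, rather than to merely any plausible-looking ECoG.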
Corollary: you need at least two microphones, for simplicity's sake, although the repetitiveness of the chants makes it a little easier.
Most likely this will have good use in aphasia research.
The ECoG implants are usually done to pinpoint where seizures start in the brain. The surgeons already have a good idea of the rough area from EEG, but to zero in on the exact locus they need ECoG.
So to find a candidate for a study you need someone who has epilepsy, their epilepsy needs to be bad enough to merit brain surgery, and the epileptic center cannot be in the brain region you care about, but it needs to be close enough to your target region that the same ECoG array covers both areas.
So again, I'm all for pushing this science forward. The more we learn about how the brain works, the more we'll understand what makes us human. However, this isn't a technology problem right now. It's an ethical and medical one.
That sounds like a limitation of the implant technology to me. The reason there are ethical problems with performing invasive brain surgery on a healthy person is because the risks and downsides of currently available implant technology are significant. If getting a brain implant were as cheap, easy, and safe as getting a tattoo, the ethical problem would be largely solved.
Even for patients who suffer from severe health conditions such as OCD, depression, or tremors, the treatment can be very disruptive and emotional well-being may decrease, even with successful treatment of the condition (the doctor is happy, the patient less so).
The dual use possibilities of this technology are extremely scary, and the involvement of Facebook (supposedly for VR) and DARPA (supposedly to treat anxiety and PTSD in soldiers) does not bode too well.
As far as I know, decoding is challenging with totally cooperative subjects doing simple tasks. Wiggling just a few mm is enough to completely destroy a run.
> Brain scans can reveal how you think and feel, and even how you might behave. No wonder the CIA and big business are interested.
Technology vs. Torture (2004)
> Indeed, a Pentagon agency is already funding Functional MRI research for such purposes.
The Legality of the Use of Psychiatric Neuroimaging in Intelligence Interrogation (2005)
> For example, an interrogator could present a detainee with pictures of suspected terrorists, or of potential terrorist targets, which would generate certain neural responses if the detainee were familiar with the subjects pictured. U.S. intelligence agencies have been interested in deploying fMRI technology in interrogation for years. It now appears that they can.
Zero-Shot Learning with Semantic Output Codes (2009)
> As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.
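The core trick of that paper, as a toy sketch with random arrays standing in for real fMRI data and semantic features (all array sizes here are made up):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-ins: fMRI feature vectors for training words, plus a
# semantic code (e.g. corpus co-occurrence features) for *every* word,
# including words never seen during training.
n_train, n_voxels, n_sem = 50, 500, 25
X_train = rng.normal(size=(n_train, n_voxels))        # fMRI per training word
S_train = rng.normal(size=(n_train, n_sem))           # their semantic codes
S_all = np.vstack([S_train, rng.normal(size=(10, n_sem))])  # +10 unseen words

# Step 1: learn a map from brain activity to semantic space.
decoder = Ridge(alpha=1.0).fit(X_train, S_train)

# Step 2: decode a new scan by predicting its semantic code, then returning
# the nearest word in semantic space, which may be a word with no training
# examples at all (the "zero-shot" part).
def decode(x):
    s_hat = decoder.predict(x[None, :])[0]
    dists = np.linalg.norm(S_all - s_hat, axis=1)
    return int(np.argmin(dists))       # index into the full vocabulary

print(decode(rng.normal(size=n_voxels)))
```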
Anecdotal hearsay: I first heard of a brain-reading helmet able to successfully reconstruct a numerical password from subjects the helmet was not trained on (but who consciously had to think about the keycode) in 2001. I also heard this technology was used extensively in Guantanamo Bay and black sites, possibly as a cheap trick to intimidate prisoners into speaking the truth / making them visibly anxious to lie. Such tricks date back to the world wars, when sedatives and/or uppers were disguised as "truth serums", and even the threat of administering these caused subjects to crack.
As for unethical deep brain stimulation research see:
- Assessment of Soviet Electrical Brain Stimulation Research and Applications (1975) https://www.cia.gov/library/readingroom/docs/CIA-RDP96-00792...
- Robert Galbraith Heath (1953+) https://en.wikipedia.org/wiki/Robert_Galbraith_Heath
> Dr Heath's work on mind-control at Tulane was partly funded by the US military and the CIA. Dr Heath's subjects were African Americans. In the words of Heath's collaborator Australian psychiatrist Harry Bailey, this was "because they were everywhere and cheap experimental animals". Following the discovery by Olds and Milner of the "pleasure centres" of the brain [James Olds and Peter Milner, "Positive Reinforcement Produced by Electrical Stimulation of the Septal Area and Other Regions of the Rat Brain," Journal of Comparative and Physiological Psychology 47 (1954): 419-28.], Dr Heath was the main speaker at a seminar conducted by the Army Chemical Corps at its Edgewood Arsenal medical laboratories. Dr Heath's topic was "Some Aspects of Electrical Stimulation and Recording in the Brain of Man." Details of Dr Heath's own involvement in the MK-ULTRA project remain unclear; but Tulane University continues to enjoy close ties with the CIA. Dr Heath also conducted numerous experiments with mescaline, LSD and cannabis.
So we're left with the anecdotal hearsay, which might well be true, but even if true, doesn't really show that fMRI was effective against prisoners beyond its use as a psychological trick.
If you allow an argument from authority of the practical use in counter terrorism interrogation, see the works of bioethicist Jonathan Marks https://scholar.google.com/citations?user=MpKuUlkAAAAJ&hl=en who in his 2007 paper cites "Correspondence between a[n anonymous] U.S. counterintelligence liaison officer and Jean Maria Arrigo" (2002-2005) https://en.wikipedia.org/wiki/Jean_Maria_Arrigo :
> Brain scan by MRI/CAT scan with contrast along with EEG tests by doctors now used to screen terrorists like I suggested a long time back. Massive brain electrical activity if key words are spoken during scans. The use of the word SEMTEX provided massive brain disturbance. Process developed by NeuroPsychologists at London’s University College and Mossad. Great results. That way we only apply intensive interrogation techniques to the ones that show reactions to key words given both in English and in their own language.
[Military interrogation takes two forms, Tactical Questioning or Detailed Interviewing. Tactical Questioning is the initial screening of detainees, Detailed Interviewing is the more advanced questioning of subjects.]
Note that I did not even make the stronger claim of decades of applied usage (the claim you are asking me to defend, and using to invalidate my references); I claimed only that this work has been underway for decades. But the above quote should satisfy even the stronger claim.
CT scans don't tell you anything about brain function (they're structural), and the sorts of MRI that do tell you about brain function tend not to use contrast agents. People have used iron oxide to measure changes in cerebral blood volume, but it swamps the BOLD signal that's usually used to read out task-related activity.
On the other hand, I can imagine that you could figure out whether a non-cooperative subject knew "SEMTEX" was actually a word by using an oddball paradigm. Not sure how much that really helps, but...
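For what it's worth, the oddball idea reduces to comparing event-related potentials: a rare, recognized stimulus should evoke a larger P300 than the frequent standards. A toy sketch on synthetic epochs; the sampling rate, window, and injected "recognition" bump are all assumptions:

```python
import numpy as np

FS = 250                      # Hz (assumed)
WIN = (0.25, 0.50)            # P300 window, seconds after stimulus onset

def p300_amplitude(epochs):
    """Mean amplitude in the P300 window of the trial-averaged ERP.

    epochs: array of shape (n_trials, n_samples), one EEG channel,
    each trial time-locked to a stimulus onset.
    """
    erp = epochs.mean(axis=0)                    # average out unlocked noise
    lo, hi = int(WIN[0] * FS), int(WIN[1] * FS)
    return erp[lo:hi].mean()

# A rare, familiar word ("SEMTEX") should evoke a larger P300 than the
# frequent standard words, if the subject recognizes it.
rng = np.random.default_rng(0)
standard = rng.normal(size=(200, FS))            # placeholder epochs
target = rng.normal(size=(40, FS))
target[:, int(0.3 * FS):int(0.45 * FS)] += 2.0   # injected "recognition" bump
print(p300_amplitude(target) > p300_amplitude(standard))  # True here
```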
Also, the source you're quoting actually seems decidedly skeptical about whether any of this works. Here's Marks's paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1005479
I'm not trying to split hairs. It is possible that, despite the interest, the technology isn't at a state where it would actually be useful in practice. However, your new citation is stronger.
Israeli airport security (arguably the best in the world) deploys derivatives of these systems that look at micro-gestures, elevated heart rate, pupil dilation, and temperature changes to see whether passengers respond with familiarity to terrorist imagery flashed on a screen as they walk by. If that already works in practice, imagine the same, but strapped with hundreds of sensors.
See also the 2010 research on image reconstruction from brain activity, and extrapolate that 10 years in the future and applied to military interrogation: https://www.youtube.com/watch?v=nsjDnYxJ0bo
Well, given that unreliable interrogation techniques are pretty commonly used historically, maybe it's more of a value thing.
i.e. water-boarding is unreliable and cheap, fMRI is unreliable and expensive.
But how realistic is that?
Your brain is a vital organ. It's encased in a hard skull. There is very little margin for error.
It just doesn't strike me as the sort of procedure that could ever be made as cheap, easy, and safe as getting a tattoo -- at least not in our lifetime.
Obviously we won't be getting things quite to the level of cost and safety of tattoos anytime in the near future. Even Elon Musk's goal with Neuralink is somewhat less ambitious; he only wants it to be as safe and convenient as LASIK.
There was a throwaway comment about that at a press conference and lots of rumors (positive and negative), but the white paper only mentions "monkey" once, and in a reference to another group's paper.
It can be unexpectedly hard in so many different ways. The dura covering the monkey brain is much tougher, and the brain itself is larger, more convoluted, and moves more, even just from breathing and heartbeats. The animals have busy, clever little fingers, so the interface itself needs to be mechanically robust and durable, because these implants need to last for years.
I certainly want this to be true: with the exception of Neuropixels, electrode technology has been depressingly stagnant. On the other hand, I need to see data before I get too excited; if I had data like that, I'd be shouting it from the rooftops.
Neuropixels probes are fairly new and offer ~384 recording channels (selectable from ~960 sites). The Neuralink thing would increase this another 10-fold.
Sparse signal reconstruction is a massive field and very much a possible thing to do, IIRC using various forms of the FFT (compressed sensing).
I think this has already been done, and probably consumer devices will be using sparsity to reconstruct cortical signals with sufficient detail for this.
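In that spirit, a toy compressed-sensing sketch: recover a frequency-sparse signal from a random subset of its samples via an L1-regularized fit in a DCT basis. Everything below is synthetic, and the signal length, measurement count, and sparsity level are assumptions:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m = 512, 128          # signal length, number of measurements (assumed)

# A signal that is sparse in the DCT (frequency) basis.
coeffs = np.zeros(n)
coeffs[rng.choice(n, 8, replace=False)] = rng.normal(size=8) * 10
signal = idct(coeffs, norm="ortho")

# Random undersampling: observe only m of the n time samples.
keep = rng.choice(n, m, replace=False)
y = signal[keep]

# Sensing matrix: rows of the inverse-DCT basis at the observed samples.
Psi = idct(np.eye(n), axis=0, norm="ortho")     # columns are DCT atoms
A = Psi[keep, :]

# L1-regularized fit recovers the sparse coefficients (basis pursuit style).
lasso = Lasso(alpha=1e-3, max_iter=50_000).fit(A, y)
recovered = idct(lasso.coef_, norm="ortho")
print(np.max(np.abs(recovered - signal)))       # reconstruction error (small)
```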