The current state of the art is humans, mostly quadriplegic people, with chips which let them control robot arms with enough accuracy to pick up small delicate objects, play games, and type by thinking words. Current state of the art implants let people get tactile feedback too - they can intuitively know the position of the arm they’re controlling and feel the amount of pressure on each finger tip.
I had the pleasure of meeting a man with an implant in 2016 and the technology he was using was incredible. He could pick up a grape with his arm without crushing it and then eat it. If the arm was in another room, he could feel around with it to find objects. He described how the arm just felt like a part of his body when it was turned on.
To use SpaceX as an analogy: this isn't a first flight. This is more like a static test fire. If Neuralink follows the SpaceX trajectory of driving down cost and increasing innovation, its results will be really out of this world. Stuff that would just sound bonkers right now.
Previous discussion of surgical robot https://news.ycombinator.com/item?id=24311152
Remember, Musk is all about scalable manufacturing and products. If this were others' tech from 20 years ago, where's the massive commercialization? They failed. Musk is focused on this aspect, and (given his past achievements) he will succeed.
Here is a video from 2014 of someone controlling a quadcopter with their mind with zero implanted electrodes: https://www.youtube.com/watch?v=lsraC04Mm8w
Having 10x more electrodes that stay active for a long time, attached to self powered unit under skin with low latency wireless comm link is a major step forward.
Besides that, why the pushback? Is it coming from field experts who are like, "bah, old hat as shiny new. humbug!"; disappointed futurists who expected more ("we were promised flying cars and intelligent robotic spouses, and all we got was 2000 electrodes in our brains, and 140 characters"); misMuskists; people who sense unfairness in praises lauded upon this, in their mind, mediocre achievement, echoing to them a theme that the already-anointed and, in their view, undeserving, accrue all praise, while the genuinely meritful, themselves, for example, are overlooked, yet again?
Because the part that you would use to control your extra tentacle or whatever is probably currently used for something. If you are missing an arm, it's obvious what part of your brain you would use; otherwise, some sacrifice probably has to happen.
Even reading, it seems, takes over space that, in people who can't read, is used for some detailed visual processing (1).
Maybe you could add some extra processing in the extra arm, so it's "cheaper" to use.
(1) - I learnt that in "Reading in the brain" by Stanislas Dehaene.
For instance, I'm learning German and I can feel how it's messing with my English, especially with the spelling (English is not my first language).
The new skill you learn may reinforce existing areas if its components are transferrable. Also, I see schemas as compression in the brain; eidetic memory shows how much some brains are capable of.
Forgetting is very good for clarification. I think your example suggests that your spelling in English hasn't been established deeply enough; I understand why you might have an adverse reaction to learning it though (German spelling is far more reasonable).
I suspect it would be possible if you can do some kind of real-time feedback/tuning of the ML model. Reinforcement learning stuff.
By slowly adjusting the model to reward hitting the ball back in pong, but also to penalize moving the real physical hand, you might achieve it.
Obviously tactile and proprioceptive feedback through the chip would enable better / quicker control, given the sufficient plasticity hypothesis.
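The reward/penalty idea above can be sketched in a few lines. This is entirely my own illustration (the function name, penalty weight, and inputs are assumptions, not anything Neuralink has published): combine task success with a penalty on residual physical movement, and feed the result to whatever reinforcement-learning update you're using.

```python
def shaped_reward(ball_hit: bool, hand_movement: float,
                  movement_penalty: float = 0.5) -> float:
    """Reward returning the ball while penalizing real-hand movement.

    ball_hit: whether the paddle returned the ball this step.
    hand_movement: measured physical hand displacement (e.g. from a
    motion tracker), in arbitrary units.
    """
    reward = 1.0 if ball_hit else 0.0
    # Penalize any residual movement of the physical hand so the
    # subject learns to drive the cursor from neural activity alone.
    reward -= movement_penalty * abs(hand_movement)
    return reward
```

Over many trials, maximizing this shaped reward should push control away from overt movement and toward purely neural output, assuming the plasticity hypothesis holds.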
This uncanny sentence is giving me dystopian vertigo. I picture a mob of derelicts home-invading a 130-year-old tech boomer, fighting his remote-controlled spare limbs from room to room.
The risks of these devices are also significant: you have to get risky brain surgery to install them and then you might still reject the implants and have to get them removed.
A very limited number of people have these - the lab this guy worked at had I think about a dozen people, all with BMIs installed using different surgery techniques (some had the implants on the areas where the brain met the nerves going to the muscles so the output to the neural net was a more "direct" representation of movement whereas others had implants on motion planning areas on the brain giving a more "high level" output to the neural net). There are several such labs in the world. My best guess is there are less than one hundred BMIs installed on people today, as well as quite a few more on animals.
For human applications, most of the time the researchers are using patients who require several brain surgeries anyway (e.g. for severe epilepsy) so that they can do some sort of fly-by experiments (several surgeries are needed so that they can also remove the electrodes).
I studied this at uni; my knowledge is probably not up to date.
I’m asking in part because Elon’s suggested this could eventually lead to whole brain uploads and downloads. Partly asking because it’s pretty easy to be very destructive with a comment like yours, and offering a glimpse of where this stands out from the crowd would balance the attack.
Does anyone have books, talks, or other material you'd suggest looking at?
The Annual BCI Research Award highlights interesting advances each year
"Search for Paradise: A Patient's Account of the Artificial Vision Experiment" is an older book (2012) about a research subject's personal experience with an attempted vision-restoring implant
The research put out by the University of Pittsburgh Human Neural Prosthetics Program and Braingate at Brown University
As a starting point, here is a good intro article from WIRED that you might find useful:
Stupid speed of light, making it lag so much.
See https://www.youtube.com/watch?v=TJJPbpHoPWo for a video of the pong demo in humans.
The motor cortex seems to have a clear, learnable signal, though it is rather noisy when sampling <1000 locations.
Mary Lou Jepsen at Openwater has ideas about how to do this non-invasively.
Is there a big loss of time sending the signal down to the arm, or is the reaction time mostly cognitive anyway?
NL’s approach is scalable due to their robotic insertion system, which can implant a (multi-channel) thread every few seconds. It should be possible to hit a few thousand channels within the window of a few-hour surgery. They do face the same challenges with size, weight, and power that everyone else does, which forces trade-offs on bandwidth, the ability to isolate spikes from individual neurons, and the number of active channels.
The primary limitation of this approach is that the needles cannot easily insert deeper than the outer layer of cortex (to my knowledge). This limits the application space to anything with recording or stimulation targets on the surface. Motor prostheses and gaming are perfect for this due to the anatomy, but many other medical applications require deeper targets, which their sensor cannot readily hit at the moment.
I think the applications of this are going to be in research for years, not as implants that paralyzed people can readily use.
Much less happy are the animals we eat, those confined in zoos, and many of the pets people lock in tiny spaces.
Rhesus macaques have a lifespan of thirty years. Thirty years. Thirty years in a massively controlled environment, often sedentary, with limited opportunities to socialise with peers and enjoy the simple pleasures of mutual grooming, lying in the sun, etc. Lab animals that are rehomed in rehab centres / zoos etc. are often overweight, in poor health, and have a range of nervous tics and social inexperience. Many adapt to the non-lab environment, but they tend to have underlying health problems and issues with acceptance by the alpha individuals, because they lack the years of experience required to understand and fit into the complex group etiquette.
Animals may still be the best model for testing drugs / devices that will go into humans, but let's not understate the massive cost to the individual animals concerned.
[Source: personal involvement with a primate rehab centre]
Sounds a lot like work, to be honest.
I think it's a pretty tough topic in ethics but when I was younger, I didn't think much about it because we were doing science. I remember seeing my first rat die. I didn't feel bad at the time because we did it "humanely" with gas (and it's normalized and I was only 20) and they just sort of drifted off and defecated. It's an interesting and perhaps sad use of a life.
I stopped going to zoos because I didn't want to contribute to keeping animals like that, but would like to read information to the contrary if that's the case.
I’ve never been to zoos outside the UK, but here the primary function is often conservation - both of the animals, which are often rescues, and of wider wildlife, which is funded by ticket sales.
For example, London Zoo is managed by the charity Zoological Society of London, and places like Monkey World are essentially rescue centres you can visit.
Can confirm this (disclaimer- I'm a supporter). Many of their animals are rescues, including more than 70 Capuchins that had been lab animals in Chile and four groups of Chimpanzees, many of whom had been rescued from use in circuses or as tourist props - the latter often with teeth knocked out so that they couldn't bite the punters. There is currently a sadly growing collection of marmosets, most rescued from the UK pet trade after tipoffs from animal protection agencies. Many of the marmosets have diseases such as rickets resulting from their owners' lack of animal husbandry skills (e.g. thinking that all they need to eat is bananas). Most of these animals lack the survival skills or health to be released back into the wild. On a more positive note, Monkey World is also a hub for breeding critically endangered species, e.g. Woolly Monkeys and Orangutans.
Sadly, if an animal is being cared for at Monkey World, it generally means that the specific individual has been abused in the past and / or the species faces functional extinction in the wild.
The one where I live has historically been a bunch of very desolate animal display cases, but they've committed to doing what amounts to a slow U-turn, and they have done quite well in that regard.
The old enclosures were clearly built to keep animals in plain view at all times. They've remodelled a lot of them since, and they're completely rebuilding some. There are times where you don't see a single great ape during a whole visit, because their new habitats have some caves and comfy areas hidden from view, with the result that especially the gorillas now appear to have quite a bit of fun interacting with visitors at the four separate points in their enclosure where that is possible and definitely seem much more relaxed than the previous generations (as far as I can tell, I'm no gorilla myself).
Certainly still worse than a life in their natural habitat, but they are part of a multinational conservation project and all their current gorillas were essentially sent to them via this project, with the end-goal of (as I understand it) having a viable captive gorilla population in zoos around the world so there is a "backup" in case the natural populations collapse. The living conditions they provide have earned praise from experts (for what that's worth, seeing they're not gorillas themselves, either).
They still keep their lions in a tiny, cruel pen, but they're currently building a new one that is supposed to be state of the art. Most monkeys got new accommodations last year, I think, and they're pretty involved in keeping Kunekune pigs from going almost extinct a second time, and they're breeding Visayan warty pigs (which are absolutely amazing creatures, and critically endangered); I guess for a relatively provincial central European zoo with a limited budget, that's quite decent.
They've also recently completed a pretty big section with heirloom breeds of common farm animals and they do a lot of education programs and events for schools, which I feel is something my generation missed out on big time, not necessarily from a zoo, but some kind of getting in contact with animals other than the occasional cat or dog might have been quite helpful. I feel there are some really weird misconceptions about animals that are pretty widespread among people my age.
For what it's worth, I've been quite opposed to that zoo in the past, but their efforts over the last decade or so have been enough to convince me to pay for a year pass. Some zoos are much slower adopting this approach, I surely wouldn't be as supportive of one of those.
I had the same reaction. I just recently had a lot of trouble pairing AirPods to a MacBook (both no more than 2 years old). If Apple can't make it work between their own devices, it's doomed. A piece of shit technology, a black stain on the whole stack.
The demo is impressive otherwise, and I don't think it's weird to control a device with your mind. Or, if weird, it's the interesting kind of weird.
I'd like to see a multi-modal GPT successor that learns not just text, image and video but also neural brain signals. It's one modality we haven't touched on yet. Maybe it will be able to extract speech directly from the brain, which is orders of magnitude harder than controlling a joystick.
Check out https://www.nature.com/articles/s41586-019-1119-1
And yes it is definitely orders of magnitude harder than controlling a joystick
It works because there is a direct correlation between activity in the speech-motor cortex, how the vocal articulators (larynx, tongue, etc.) move, and how those combine to produce speech.
Abstract "thoughts", on the other hand, are not so straight-forward. For one, there's no central location in the cortex where the concept of "car", for example, lives. The distributed representation of abstract thoughts within the brain makes it orders of magnitude more difficult than decoding speech for one specific individual. Then add orders of magnitude of orders of magnitude to generalize that to different people.
With a few decades (years?) of refinement, this technology absolutely has the potential to capture private thoughts.
Sure, it may require an implant, but just like we all carry tracking beacons that have become indispensable to daily life today, in 20 years it may be “the done thing” to have an implant to control your home automation, etc. Or maybe the tech will improve to where it just needs a hat. And at that point, certain authorities will have no qualms about using it in interrogations, with or without “due process.”
Yay for planned obsolescence.
For me the strangest part of watching this is not the lab work or the animal, but the implications of this technology and where it might lead to in the future once applied to humans. I really like the idea and premise presented, but let's be honest: There are also way too many evil use cases ...
Also, if I may ask, do you think we have to choose between caring about these individuals and caring about those in factory farms? I personally care about both (as well as plenty of other ethical issues, naturally). And I imagine other people feel the same. So your suggestion that we measure the two causes against each other is rather confusing.
I don't think these monkeys are "not worth discussing" - but I don't think they merit more than a cursory evaluation. The same is true for say, people that are killed by falling coconuts.
There are, in all likelihood, cost-effective causes that you can donate to, in terms of your dollars, skills, and capacity to give a damn. I'd suggest these monkeys shouldn't make the list.
I'd understand if you said they didn't make the cut for you personally, but I'm not sure why you'd be invested in ruling them out for everyone.
(And this is a sidebar, but I think one could quite reasonably believe that advocacy here is worthwhile in exactly the bang-per-buck utilitarian sense you're invoking. For instance, people who are galvanized on behalf of these monkeys might then change their actions towards less visible, less relatable nonhumans like pigs, chickens, and fishes.)
The latter point (being galvanised to support other welfare causes) is roughly what inspired the original post - our intuitive emotional reactions to visible harm should indeed encourage investment in effective harm-reduction advocacy. But if seeing these monkeys makes you sad, you really ought to think about all the less-visible, cheaper-to-address welfare issues that are available.
So I do very much appreciate the spirit of your comment: that we should attend to less visible (and more easily addressed) harms, and not get caught up in 'celebrity causes,' so to speak.
But I think where I differ from you is that I don't view it as an either-or paradigm. I'd say to people "Go ahead and try to help these monkey individuals, and also work for, e.g., food-farmed nonhumans (who are easier to help, etc.)"
I appreciate the risk you're citing, that people could effectively "waste" time and resources on a case like this. But I'd argue there's a greater risk in approaching ethics as A) zero sum, and B) generalizable. I'll elaborate:
A) It's certainly true that we only have so many minutes in the day and so many dollars in our wallet. But I think these zero-sum resources are often not the final limit on what we can do. Rather, the limits we reach are emotional and psychological energy - which is often not zero-sum. Getting engaged in an issue (especially when it's an issue that radicalizes you) can actually increase the amount of time and resources you find for other issues. (I.e., you reclaim it from less important stuff.)
B) I'd argue ethics is patently not generalizable (in the sense that you're suggesting - i.e. that everyone should reach the same conclusion about which cases are worth effort), simply because humans are so varied. One person might have tons of money and be happy to spend on this cause in addition to whatever they give to help farmed nonhumans. Another person might feel a special bond with monkeys that makes this an easy, non-taxing (or even net-energy-positive) issue to engage in. Yet another person might currently find the plight of food-farmed nonhumans overwhelming to consider, but these monkeys will be a stepping-stone issue that help them get there. Etc.
In a more developed market society, you typically have several commoditized, replaceable products, and most tend to use BT. The idea of replaceability belongs to the product itself, but what we see in common across all of them is BT connectivity, so we tend to perceive BT as an identifier of replaceability as well.
Neuralink very casually, tongue in cheek, BT-"paired" with the monkey, reinforcing the idea of replaceability and commoditization of the monkey.
Many people have empathy for animals and monkeys are seen as precursors to humans. It's very easy to see how treating a pre-human species like a replaceable product, leaves humans themselves creeped out and feeling like in the future they may also be treated like a replaceable product.
Neuralink needs an ethicist in a high exec position or board and a better PR manager. They should have done some of these to reduce that perception mess:
- allude to the use of BT in serious medical applications before "pairing" with the monkey
- reduce or change the common BT terms like pairing
- use a computer rather than a smartphone
- don't mention BT, just say wirelessly connected
- don't use a Bond-villain smooth British voice
By talking about it as a PR problem, I don't mean to say that is the root cause, it is just what we can see at the surface.
Do they really care about ethics? Is it execs, engineers, video directors, everyone, no one? Do they care, but are just bad at PR? Do they not care, and this video is a reflection of that? This is what matters, for all those people to genuinely care about ethics.
Can they still genuinely care though, after becoming a corporation? Can bringing shareholder value align with ethics?
One botched video is simply one data point in the public trying to understand Neuralink's genuine stance on ethics.
Only people with close contact to Neuralink will really know. We are all hyper connected, but sadly, only superficially.
1) The deliberately soothing British voice does not come off as soothing. It comes off as insidious, and threatening. This is in part influenced by our own cultural context with media like Black Mirror, but the effect is there nonetheless.
2) The comparison points to "pair with your iPhone" feel WILDLY misaligned with the rest of the message. The premise of Neuralink is that this is a world-changing cutting edge technology for the good of humanity. Then all of a sudden you have a situation where a living sentient being is paired to an iPhone like some sort of bluetooth speaker. It reeks of confused ethics.
I think you put your finger on why that part was unsettling.
Nobody loves them, but they're important in this case to help paralyzed humans, maybe with ALS or some other disease, communicate with loved ones.
Where it gets funky though is trying to quantify that to an extent: how many monkey disfigurements is worth fixing one human disfigurement? Ten? A thousand?
How many chimps would you blind in order to prevent humans being blinded by the latest lash extension cosmetic?
Is it even controversial that yes, of course they are?
I mean, Boeing and the FAA kept 737 MAXes in the air after brown people died in the first crash, when we all know they'd have grounded them if it had been a crash in Kansas. We value human lives differently, let alone animal lives.
> How many chimps would you blind in order to prevent humans being blinded by the latest lash extension cosmetic?
None. But I'd blind as many as are needed to trial human eye transplants, for example.
It's only a matter of time before certain governments start alpha-testing this type of technology in places like Xinjiang or Myanmar.
Who wouldn't want to ultimately be assimilated by the Borg?
First @Neuralink product will enable someone with paralysis to use a smartphone with their mind faster than someone using thumbs
Later versions will be able to shunt signals from Neuralinks in brain to Neuralinks in body motor/sensory neuron clusters, thus enabling, for example, paraplegics to walk again
That aspect of connecting via bluetooth from a phone is most conventionally used to interact with replaceable commodities such as wireless speakers/headphones, but here it's being used to interact with a _live_ monkey. This framing somewhat gives the impression that this living being has been reduced to the status of a replaceable commodity, a mere peripheral that one might connect to via bluetooth.
I agree that given how much modern society relies on animal testing that it's not really a rational response - maybe this reaction could have been mitigated somewhat if they had connected to the monkey via a computer or more involved process rather than simply just the conventional bluetooth pairing flow on a phone.
I understand that, but I think to some degree, you're letting somewhat unrelated things influence your opinion of this.
You know what else is interacted with and controlled wirelessly? Pacemakers. Nobody thinks of those people as replaceable commodities.
Also, what about those that don't live a life of luxury and have access to lots of commoditized devices that wirelessly pair through a cellphone? People in less affluent countries might be less jaded about controlling something wirelessly and still view that as an amazing new technology associated with things they can rarely afford.
So, to what degree are those associations useful and accurate, and to what degree are they you bringing unrelated prior biases to bear?
The good thing about messy, human models of transactions and interaction is that they can take a long time and many different voices can be heard, allowing disputes to occur and be resolved.
Many of these successful tech corporates work to eliminate the human discussion element, and replace it with digitized (and frequently proprietary, or at least gatekept) rules.
I think I've dealt with a few difficult dominant personality types in the past, and it would not surprise me at all to see them consider humans-as-pets as a desirable future. Match that with digitized 'asset ownership' and other non-repudiable mechanisms and there could be a very dystopian and authoritarian future in the mind of some of these people.
Now I'll make sure to sound like a complete nutter (as if I hadn't already) and mention that some of these individuals and companies are now so essential to the U.S., both domestically and internationally, that they are becoming untouchable.
Meanwhile our own tech industry is busy debating and trying to determine what the future of libre software will look like. It's a pivotal moment and I'm optimistic we'll figure it out to everyone's benefit, but there is a lot at stake.
It's great that this tech could help paralyzed people and amputees, but is it worth the cost?
Edit: This is a moot point if it turns out we can repair the damage from paralysis and amputation using bioengineering.
Why don't they do it on humans then?
If you've never gone through the process of bringing a medical device to market, well, I'm not sure there's an easy FAQ that will answer your scepticism. But the basic idea is that you have to do a bunch of testing in the lab, then probably a bunch in animals, then finally a bunch in human clinical trials, then you can actually market the device to the public.
Neuralink is in the 'animal testing' phase, and it sounds like they're likely to start human testing soon.
Note that once they've gotten a device approved for humans, that doesn't mean they will stop testing on animals. There will likely be improvements to the device, new protocols, etc. that will necessitate continued testing as new features are brought to market.
> Among the types of evidence that may be required, when appropriate, to determine that there is reasonable assurance that a device is safe are investigations using laboratory animals, investigations involving human subjects, and nonclinical investigations including in vitro studies.
What smoothing algorithms did they use to guess the 'certainty' of the 'intended' movement in a Y-Plane game vs. the X,Y grid they learned from?
Having researched in this space, you don't get better results from 'training' on a X,Y space and reducing that training to a Y-grid predictor.
There is a ton of smoothing going on in the video... or the metal conductor plays a huge role in the electrical signals they get from the implanted electrodes. I once blew up a demo because of something like this: a metal stick acting as a constant I didn't think to consider.
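To make the smoothing point concrete, here is a minimal sketch of exponential smoothing applied to a stream of decoded cursor values. This is my own illustration, not Neuralink's actual (unpublished) pipeline; heavy smoothing like this can make a noisy decoder look far more "certain" than the raw spike-derived estimates are:

```python
def smooth(decoded, alpha=0.2):
    """Exponentially smooth a stream of decoded cursor values.

    alpha close to 0 means heavy smoothing (slow, stable cursor);
    alpha close to 1 means trusting each raw decoded sample.
    """
    out, state = [], None
    for x in decoded:
        # First sample initializes the filter; afterwards blend the
        # new estimate with the running state.
        state = x if state is None else alpha * x + (1 - alpha) * state
        out.append(state)
    return out
```

A Kalman filter with a velocity model is the more common choice in published BMI work, but the visual effect on a demo video is similar: jitter disappears and trajectories look intentional.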
I highly doubt that, you must be talking about non-invasive electrodes such as EEG. When the electrodes are inside the brain and thus in the cranium, they are effectively protected from outside EM activity since they're in an effective Faraday cage, so your signal has much higher fidelity.
Pretty much this. On a different but somewhat related note, I would still try to check that the algorithm is not just tricking us into thinking it is doing what the monkey wants. It would be nice if they posted some more in-depth material on how the code (or whatever they use) looks in the end, just to be sure we are not in front of a half-machine Clever Hans (https://en.wikipedia.org/wiki/Clever_Hans).
To clarify: we don't have a way to know if the monkey made an error; he won't say, "hey, I didn't want to get that one right or win that pong movement, have your milkshake back please." And we kind of need to know that: in mathematical modeling and AI, having a way to quantify error is a requirement, and one of the first things they teach you about.
I'm just guessing on this, but the "training" session probably involved having the monkey stare at the screen for a while, while the cursor moved, which allowed the device to capture a spatial heatmap of which cells fired at which locations. There's probably some online optimization happening as the monkey then continues the "training" process by completing the task. Overall, this task is completely doable with the current technology, so I would not assume any foul play here. If the monkey was writing Shakespeare, I would definitely doubt it.
After the monkey has played the game a bunch, you can correlate the neuron firings generating motor signals to the arm with the motion of the cursor on the screen. Later, the monkey's arm need not actually move the joystick.
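The calibration step described above can be sketched as a simple linear decoder fit while the joystick is still connected, then reused once it's unplugged. This is a toy illustration under my own assumptions (ridge-regularized least squares on binned firing rates); Neuralink's actual decoder is not public:

```python
import numpy as np

def fit_decoder(rates, cursor_vel, ridge=1e-3):
    """Least-squares map from binned firing rates to cursor velocity.

    rates: (T, n_channels) spike counts per time bin, recorded while
    the monkey physically moves the joystick.
    cursor_vel: (T, 2) cursor velocity over the same bins.
    Returns weights W such that rates @ W approximates cursor_vel.
    """
    X = np.asarray(rates, float)
    Y = np.asarray(cursor_vel, float)
    # Ridge-regularized normal equations: (X^T X + lambda*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def decode(rates, W):
    """Predict cursor velocity from rates after the joystick is unplugged."""
    return np.asarray(rates, float) @ W
```

Once W is fit, the physical joystick is redundant: the same neural activity that used to drive the arm now drives the cursor directly.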
The hippocampus is pretty deep in the brain, and the Neuralink surgery can't go deeper than a centimeter or two.
Yeah, I am speaking from non-invasive experience; however, unless your grounding is LIGO-level sensitive, sticking a 3 ft metal pole into your mouth takes some significant processing to get a good ground, especially if it comes into contact with the earth and you then change your connectivity by opening and closing your mouth.
Again, low probability, but that just looks like it's asking for problems.
Like, you can't just tape a metal coat hanger to an iPhone and start getting text messages through it.
But somehow you can put some wires just as few mm into the surface of a brain and extract fine motor signals??
There is nothing stopping you from using a coat hanger. You just need to make sure it is impedance-matched and properly sanitized for subdermal use. The signal will be much worse, as a coat hanger has a gauge/diameter several orders of magnitude bigger than that of a neuron. Nothing here is so "novel" that it cannot be trivially replicated if you have the budget of one of the FAANGs. If you have 100 million in capital, you can easily find poor neuroscience and EE PhDs who will be willing to work for you for <80K a year just for the opportunity to build something like this. The reason it is difficult and out of reach for smaller companies is that small amplifiers and electronic packages that can be subdermally implanted require significant manufacturing capital, as they fall under semiconductors and biomedical devices. Once the industry settles on a design, expect the price to drop dramatically as the core chips and parts become commoditized.
Here's a graph that summarizes the temporal/spatial tradeoff of the different BMI tech: https://www.researchgate.net/figure/The-various-neural-recor...
*a shortcut by
You can't just read fine motor signals from any one neuron, but given enough neurons, since they're densely interconnected, you can infer a close approximation of the intended motor signals given a known output. That's why they need a lot of sampling wires in the brain: to make sure they have enough data to make that inference. Luckily, machine learning is very good at figuring out patterns in this sort of data.
My guess is that they train an ML model using the inputs and outputs when the joystick is connected. Then they can use the trained model to just generate the output directly from the inputs when the joystick is disconnected.
This will work fine for one particular monkey-brain/interface combo; however, you cannot use the same model on other monkeys. You have to train a new one each time.
You could if we didn't encrypt connections.
On that note, does anyone know how to upgrade libssl across my entire nervous system?
The game examples were super fascinating, but since I was already essentially familiar with the fundamental principles and technology, it didn't quite impact me as much as the researcher pairing his iPhone with a Bluetooth transceiver in the monkey's brain. That sent weird shivers down my spine.
‘Inception’ shows someone who tries to do something like that, but there seems to be a vulnerability.
They are also recording from up to 1000 channels, which is probably overkill for mind pong, tbh. But you'll need that many implanted electrodes to study long term electrode biocompatibility.
Not exactly “Hello, World” but maybe “ToDo List App”.
Granted it moves slower than actual Atari Pong, and takes a few minutes to get the hang of controlling the paddle. But IME it isn't much harder to control the up/down of the paddle with the EEG than it is to play a game like flappy bird, and that's using a couple random scalp electrodes from a consumer device.
So yeah thanks for bringing this up, because I feel some of the comments here are acting like this is much closer to "mind reading" than it actually is. Not that it isn't cool, but the overhyping kinda kills it for me lol. Is this how robotics people feel when Boston Dynamics releases a new video?
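For a sense of how crude the consumer-EEG version of this can be, here is a minimal band-power sketch. The synthetic signal, sample rate, and calibration threshold are all made up for illustration; real devices and pipelines differ.

```python
import numpy as np

# One second of fake single-channel EEG: a 10 Hz (alpha-band) oscillation
# plus noise. A real signal would come from the headset's streaming API.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=fs)

# Power spectrum via FFT; alpha band is roughly 8-12 Hz.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
total_power = spectrum.sum()

# Hypothetical calibration threshold: paddle goes up when the alpha
# fraction is high (e.g. eyes relaxed), down otherwise.
direction = "up" if alpha_power > 0.5 * total_power else "down"
print(direction)
```

A thresholded band-power fraction like this is close to the simplest possible "brain control", which is part of why scalp EEG tops out around paddle-grade tasks.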
It's not totally trivial, but a reasonably skilled person should be able to get a not-too-janky version working with a bit of effort. (Also, where are you that undergrads get to interact with monkeys?!)
For example, a very basic approach is to design experimental tasks that have contrasting conditions, or events to be predicted. They don't do that. Even something as simple as pressing the controller left vs. right vs. not moving at all in Donkey Kong would have been more interesting.
In any case, brains and traditional computer circuits work on different principles, as I noted in my OP. Furthermore, there is quite a lot more redundancy in brains, and partial redundancy is part of how they work: for example, if you have a population of neurons with different tuning curves, the outputs in response to a stimulus can be integrated and a likelihood distribution obtained for the actual value of that stimulus. That's just fundamentally different from how, say, a 5-stage pipelined RISC CPU works.
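The population-coding point above can be made concrete with a toy maximum-likelihood decoder. Gaussian tuning curves and Poisson spiking are standard textbook assumptions here, not a claim about any particular system.

```python
import numpy as np

rng = np.random.default_rng(1)

# A population of 32 neurons with Gaussian tuning curves over a
# 1-D stimulus (say, an angle in degrees).
prefs = np.linspace(-90, 90, 32)     # each neuron's preferred stimulus
width, peak_rate = 25.0, 20.0

def rates(s):
    """Expected firing rate of every neuron for stimulus value s."""
    return peak_rate * np.exp(-0.5 * ((s - prefs) / width) ** 2)

# One noisy population response to a true stimulus of 30 degrees.
true_s = 30.0
counts = rng.poisson(rates(true_s))

# Poisson log-likelihood of each candidate stimulus given the counts;
# the argmax integrates the whole population into one estimate.
grid = np.linspace(-90, 90, 361)
loglik = np.array([np.sum(counts * np.log(rates(s) + 1e-12) - rates(s))
                   for s in grid])
estimate = grid[np.argmax(loglik)]
print(estimate)   # close to 30.0, despite every single neuron being noisy
```

No individual neuron is reliable, but the integrated likelihood over the redundant population recovers the stimulus, which is the contrast with a deterministic CPU pipeline being drawn above.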
You can get a 128-bit key out of most electronic devices in a reasonable amount of time: https://en.wikipedia.org/wiki/Electromagnetic_attack#Known_a...
People invading other people's minds, Borg-like clusters of minds outsmarting anyone who is not connected with others, forcing others to play along to stay competitive. Maybe entities that even never grew up as unaugmented, unconnected persons because they were already linked in young age - one mind with different bodies.
It is absurd to try to protect humankind against a perceived existential threat from AGI by developing a technology that would also destroy humanity if it works as advertised. Add to all that the cases in which it doesn't work properly: inflammation developing around the implants, the transmitter breaking, batteries running out, losing the wireless connection... If you think dead zones while hiking are infuriating, then wait until your cognitive capabilities depend on a network connection.
I am afraid that all the good that SpaceX or Tesla have achieved so far will be undone by the damage that NeuraLink's technology will do to humankind one day.
Also, as this stuff gets further along, there's going to be large incentives for unethical testing. Rockets can be tested with unmanned flights, but you can't really dry run brain implants.
Pong is the absolute POC. Neuralink will most likely be able to achieve higher differentiability and thus be able to complete tasks of higher complexity, so stay tuned for even more impressive tasks. EEG can't get past pong; it's limited by physics itself. The solution is to put the electrodes in the brain, and there is likely no other solution.
Looking forward to the future.
Therefore, pong is less impressive than the previous task.
I think this gets deep into the difference between premeditation and action and what that means at a neural level. I would guess that Neuralink is not processing the "logical thinking" so much as the neurons that dumbly encode an association with ping-pong racket location.
As long as some set of neurons uniquely fires at approximately the times the pong paddle moves along its degree of freedom, different electrodes will pick up different voltages, and that can be learned by an algorithm. So we don't need to know how the monkey arrived at a decision, but rather just be able to index the possible outcomes of the decision. Which is why I thought the entropy of the decision space mattered: it's easier to learn a smaller space.
I would tend to think this is a dumb pattern matching cluster (e.g. see visual encoding) that's being picked up by Neuralink, and not the actual "logical decision making". Although, there may be more subtle neurons that fire in a sort of "premonition" which have learned the correlation of the racket position with the ball over time.
I can link to more papers on that -- it's not just visual encoding, but rats can "plan" where they are going to go, a sort of premonition. Humans also do this. It's like an echo that happens before you yell. That's what the brain does when you plan to go to the grocery store.
This is all really interesting stuff, but I'm not an expert!
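On the earlier point about the entropy of the decision space: a decoder only needs to extract about log2(N) bits per decision, so a smaller outcome set is strictly easier to learn. The outcome counts below are illustrative, not taken from any real interface.

```python
import math

# Bits of information a decoder must extract per decision for
# hypothetical decision spaces of increasing size.
decision_spaces = {
    "pong paddle (up/down)": 2,
    "8-direction cursor": 8,
    "47-key keyboard": 47,
}
bits = {name: math.log2(n) for name, n in decision_spaces.items()}
for name, b in bits.items():
    print(f"{name}: {b:.2f} bits per decision")
```

Going from paddle control to typing is not a small step up; it multiplies the information the neural signal has to carry per decision.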
It’s exciting stuff and I’m sure there are concerns to be addressed down the line wrt altering perception, but let’s not put the cart before the horse. That would be like worrying about impending AI drones in the 70s.
Neuralink in "write" mode could be engineered to do the same thing by introducing an EM current that changes over time. In the past, I've worked with "read-only", passive tetrodes, but the opposite is possible from a physics standpoint. Moving charges is just another way of saying "propagating photons". I think there's not only precedent, but there's proof that this can be done.
The hard part is figuring out the biology (for a given memory, you still need to calculate the correct I(t) that actually implants it), which is a relatively unexplored domain. Neuralink is centuries away from using "write" mode to turn everyone into Neo, but much simpler use-cases can still make a huge impact within the next couple decades.
The signals Neuralink is reading are from the "part of the brain responsible for the actual motor function". I'm not sure what you're suggesting here.
I think you may be confused about how the majority of paraplegic and NDG conditions work - it's not that the brain isn't producing the signals, it's that the signals aren't reaching the muscles they need to reach to create motor function; it's the link that's broken, for a variety of reasons.
Of course, the obvious corollary here is reading the signals and using electrodes implanted in the muscles to directly stimulate a contraction.
Edit: After reading another comment about the link being broken (hence the name Neuralink), this makes more sense now. Although, I still think that maybe you only get one usage out of one chip? I suppose the definition of "usage" is what is important. Presumably you won't need a chip per joint in a leg, and presumably one chip could control two legs? Or perhaps you get finer control with a chip per leg?
Total non-expert here and legitimately curious.
But like I said, traditional PT is a much less invasive solution to that, so I have no idea what GP is talking about.
Or - hear me out - for a small bonus, you could volunteer for a second shift in the Amazon warehouse while you sleep. An algorithm writes bits to drive your body as you shuffle unevenly from station to station in the dark (no need for lights, the computer knows where things are). They can call it the Neuralink Autopilot.
Edit: would it be possible to externally trigger firing of motor neurons in someone who is brain-dead? The future might be a little Ancillary Justice-y, but with a strong "posthumous work contract" capitalist flavor to it.
Also, as this comment suggests, it seems to be more of an advance in wireless data transfer than neuroscience: https://news.ycombinator.com/item?id=26746553.
I hope I'm wrong, but Muskuito's misleading marketing is never far on the horizon.
It could be billed based on the energy required to translate the objects.
More seriously, I actually yearn for the days of Star Trek and where we look at how innovations like this can make people's lives better first and then just stop there.
I was really excited to see the first goal they mentioned:
>Our first goal is to give people with paralysis their digital freedom back: to communicate more easily via text, to follow their curiosity on the web, to express their creativity through photography and art, and, yes, to play video games.
Modes of interaction in society are so poorly designed for anyone whose sensory functions differ from the norm. Bridging directly from the brain to interactions that currently require voice (normed to English-speaking white males) or touch or movement or vision is a game changer, not just for those who are paralyzed but for many different groups for whom technology is not designed but instead adapted.
It could be, but it will be billed with ads inserted directly in to your brain that you can’t skip or turn off - even with your eyes closed.
Two years ago without a brain implant.
The problem is that without an implant, only this is possible. Anything significantly more complex than moving a cursor on a screen is just not possible or feasible without an implant, because of physical (electromagnetic) limitations.
I am not a vegan/whatever/animal activist, but maybe we should allow terminally ill or death-row volunteers for this kind of experiment... with good money for the family. I think some people would like to help mankind in the end; it is better than dying without accomplishing anything in life.
So, you have at most one year with an inoperable tumor, and you will not travel because you want to be close to your family. Would you be a volunteer for this research, for 500 thousand dollars? (monkeys are cheaper, but you are the real deal)
On the other hand, non-human primates are readily available in a controlled environment and can be used to generate huge amounts of data specific to a diverse set of tasks and experiments that you just cannot do on human patients.
The brain is a complex machine that is extremely hard to decipher by studying a single species or one kind of experiment. Studies across species, with a hundred different ways of collecting and analysing data, are essential to first understand the mechanisms of the brain and only then (hopefully) be able to causally alter it for specific applications in medicine and so on.
I don't think it's a good idea to allow death-row inmates to do this, though; they'd likely be pressured into volunteering as the only way to delay the execution.
Imagine being able to virtualize consciousness, back up memories, live forever...
The problem is how invasive it is, and how little reward there is initially. We have to solve the chicken-and-egg problem, but unfortunately cracking open the skull and sticking electrodes into the brain is not something one typically wants to do if they're conscious of their health.
I hope we can gain traction with medical applications and then begin to make steady advances. It'd be neat to have fully virtual, synthetic senses before we die.
However, the medical applications should come much sooner. What they've shown is amazing. The monkey's control of the paddle seems to be precise and fast. If people who are paralyzed and wheelchair bound just had a way to operate the wheelchair, a robot arm, and communicate with the outside world reliably, that would already be amazing. If we could somehow reconnect their nerves or give them some kind of mechanized suit, it could be even better... And if Neuralink's tech works, that could be only 10-15 years away.
The way this idea came to me was in a weird day dream thinking about the first experiment to attempt it... Patient lying on the table all wired up, a Dr. Frankenstein moment of ‘the upload’ followed by a screen witnessing the boot of the relocated consciousness. The doctor interacts with the persona, asking what it’s like and getting direct responses saying how amazing it is.
Meanwhile, amidst the commotion and celebration, you see the ’donor’ patient wake up in a reflection on the monitor and hear them faintly ask ‘did it work?’
PKD's "We Can Remember It For You Wholesale" notwithstanding, "Consumer Neurotech" feels like one of those categories that even the most visionary of sci-fi writers failed to build into all aspects of human existence in the future ;)
No need to waterboard prisoners for information anymore when you can just suck data from their brains.
In most judicial systems, you cannot be compelled to testify against yourself. With this technology, I really worry that that right will slip away.
This feels like the precipice before dystopian hell, to me.
I also think we should restrict such technology. Without restrictions we can play God, and I am in no way approving of doing anything similar to that.
The part about pairing the monkey's Neuralink device "like you do with a speaker" made me laugh. Not because it's funny, but because it is so incredibly absurd. I don't know whether to feel good about how far technology has come, or terrified about all the implications.
(I can't help myself; Nietzsche once said (or wrote) "God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?")
If we want the brain to be an input device then we must treat it like an input device. That means interfacing with it with the most modern/common standards we have, which, for better or worse (mostly worse), is currently Bluetooth.
Do you also think it's weird that people can put on VR headsets to alter their sight, or put in AirPods to subvert their hearing? 'Cause it is, but I think we've all normalized it to the point that it feels like something we're allowed to do. I think the same will happen with Neuralink at quite a fast pace.
You cannot compare VR headsets and AirPods to a chip that is, literally, implanted IN your brain.
Those devices generate extrinsic stimuli. The Neuralink, however, directly READS neuron activity (as I understood it).
We've been playing God for about as long as human civilization has existed. Agriculture is playing God - overwriting the byproducts of nature. Medicine is playing God - overwriting natural selection.
Is it God's will for paralyzed people to remain unable to interact with the world, and live the rest of their lives in misery? If we have a way to do a lot of good for a lot of people, isn't it inherently immoral to not develop the technology further?
While I share your view, I think it's important to point out that this really isn't the best place to start if you want to reduce animal suffering.