What makes this awesome from a scientific point of view:
1) 2-dimensional decoders, where an individual controls a mouse with their mind, have existed for a while. What makes this really cool is that there are actually quite a few degrees of freedom in a robot arm, but they're basically using the same hardware. So the influence of software/algorithms in this case is pretty fundamental. There are a lot of papers on improved neural-decoding methods that at first glance appear really dry and boring, not the 'sexy' kind of science with huge breakthroughs, but they end up being crucial to good performance as the complexity of the robot grows.
2) One participant was implanted with the electrode array 5 years before the study, and had the injury 10 years before that. Usually the signals don't last that long in monkey models. And we know that your cortex changes with disuse, so it's awesome that they were able to get usable signals so many years later.
A couple of things still need to get better:
First, these decoders aren't perfect yet. What the Wired article didn't tell you is that the success rate for the woman's implant was around 20-50% of trials (still awesome compared to no interaction at all).
Second, incorporating sensory feedback is another challenge that is really hard to address, but also very important. Imagine building a robotic controller in which the only information you received about the robot's position was visual. That's the way this works. If we find a reasonable way to mimic sensors of muscle extension (a proxy for joint angle) then we can create more controllable devices.
There are a few things we've talked about:
1) Heat. Patches that can be placed on areas of your skin that will increase/decrease heat quickly as a way of giving force feedback. This was just something we talked about briefly, and I really don't know how workable it is. I don't know offhand of any materials that can rapidly go from hot to cold and back again.
2) Electrodes. This is difficult because skin conditions change. There is [from what we've looked at] a smallish envelope between "I can feel this" and "this is potentially dangerous". The envelope can change with skin conditions [and people]. I've built a little prototype that sits on my tongue and gives me feedback, but that is clunky, and something we'd like to avoid.
3) Vibration. The problem here is that it turns out to be a bit tough to judge different levels of vibration. It ends up as an on/off sensation. What we've talked about is using different patterns of on-off, or a variable frequency of on-off. (Here is one of the motors walking through the different freqs. It sounds kind of funny: http://www.youtube.com/watch?v=C_bGb2Xij8I&feature=youtu...)
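The variable-frequency idea can be sketched in a few lines. This is a toy illustration, not our actual driver code; `motor_on`/`motor_off` are hypothetical stand-ins for whatever actually switches the motor.

```python
# Encode an intensity level as an on/off pulse pattern, since absolute
# vibration amplitude is hard to perceive but pulse rate is easy to judge.
import time

def buzz_pattern(level, motor_on, motor_off, duration=2.0):
    """Map level in [0, 1] to a pulse frequency between 1 and 10 Hz."""
    freq = 1.0 + 9.0 * max(0.0, min(1.0, level))
    period = 1.0 / freq
    pulses = int(duration / period)
    for _ in range(pulses):
        motor_on()
        time.sleep(period / 2)   # 50% duty cycle: half the period on...
        motor_off()
        time.sleep(period / 2)   # ...and half off
    return pulses
```

So a low level buzzes slowly and a high level buzzes rapidly, which is much easier to tell apart than two amplitudes.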
There is a ton of interest in this stuff, at least in my circle.
This was something I put together over an evening, so it's really rough (just a PoC). What I was actually doing here was tying the electrodes to different events on my webserver, the idea being that the "force feedback" could indicate to me how much traffic was hitting different areas.
I almost constantly tail -f /var/log/apache/access_log. This was supposed to be a wearable version of that :).
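The traffic-to-feedback mapping is roughly this (a sketch reconstructed from the description above, not the original code; the log format and the `max_rate` threshold are assumptions):

```python
# Turn access-log lines into a 0-1 feedback level per area of the site,
# by counting hits on each top-level path and scaling by an assumed
# maximum rate.
from collections import Counter

def feedback_levels(log_lines, max_rate=100):
    """Count hits per top-level path and scale each to [0, 1]."""
    hits = Counter()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1].split()  # e.g. ['GET', '/blog/post-1', 'HTTP/1.1']
        if len(request) < 2:
            continue
        area = "/" + request[1].lstrip("/").split("/")[0]
        hits[area] += 1
    return {area: min(1.0, n / max_rate) for area, n in hits.items()}
```

Feed it the last N seconds of the log and each resulting level drives one electrode (or, now, one haptic channel).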
(The project, for me, changed a bit after this. What I'm working on now is a general purpose haptic device that can be tied to arbitrary inputs around your environment)
Haptics is a really awesome field that I feel is going to be really really important moving forward with some of this new HCI stuff.
How the hell is this area of science NOT sexy? Are you kidding me? IMHO, this is the ultimate in science: interfacing the human being directly with technology. That is as "sexy" as it gets, it's right up there with CERN and spacey stuff. Stockings and suspenders couldn't make this more sexy.
And if that ain't good enough for "us" then I merely have this to say: "think Russian".
Not sexy, pah!!!!
Not being in this field is possibly the only professional regret I have. I bow to you and your colleagues. Keep up the SEXY work.
It just strikes me as odd, but I can see the potential for weaponizing this technology.
I think they should be funding this - they owe it to their disabled veterans.
Militarily the primary goal is likely improving the lives of soldiers who have been maimed and dismembered in battle. This is even more of a concern today because the improvements in battlefield medicine have resulted in more and more soldiers surviving with wounds that previously would have been fatal, leading to more veterans with very serious disabilities.
Certainly there is a potential to weaponize this technology but you have to keep in mind the reasons why that's not such a big concern at least in the near-term. First, this requires pretty invasive brain surgery. Second, it has a fairly low success rate at the moment. Third, the amount of control available is significantly diminished compared to functional nerves and muscles. Overall there aren't any good reasons why you would want to try to switch to using a system like this for controlling a tank or a fighter jet, so it's questionable what sort of battlefield potential the technology has now. When we get to the point where these systems can match flesh and blood then that will change, but that's a much larger can of worms than merely military applications.
The smile on that lady's face says more to me than any number of facebook 'likes' or google +1's ever could. That's a real and measurable quality-of-life improvement for a single person; this really makes a difference.
The engineering on that arm is pretty heavy duty, I wonder what it weighs and how fast it could move and what kind of safeguards are built in to avoid the operator injuring themselves due to glitches in the system. If you look at the way the arm moves it is actually quite comparable to the arm of a baby that moves an object to its mouth the first couple of times.
With one big difference, a baby may get it wrong but it is not strong enough to do too much damage, even if it pokes itself in the eye every now and then it is usually with very little force. This arm however looks engineered to be strong enough and fast enough that it could do real damage to the operator or its environment. If that's servo driven there has to be a whole slew of safety systems in case a driver decides to hook a motor to V+ or V- because of a blown FET.
In any real-world machinery situation where the machinery is under software control and has the capability of doing real harm (for instance: machine shops with CNC gear) there are normally countless interlock systems that you'd have to bypass before you could get yourself in contact with a piece of it moving under control of the computer. In this case the operator is extremely exposed and I cringed when the arm sped up towards her face with the bottle. I also found myself sort of 'willing the bottle' in the right direction, the way some movies will have you react to something on the screen. Hard to describe.
This is not a soft hand moving either, it looks like it is made of pretty strong aluminum and fairly heavy.
Really, though, all that engineering is of secondary interest. That look on her face is exactly why I got into robotics. And we've only been at it for 15 years; imagine what we'll be able to do in another 15.
EDIT: the full video (or at least, the one I saw) is here: http://www.youtube.com/watch?feature=player_embedded&v=o...
We tend to forget how necessary movement and control are for our daily routines. There are people out there who even lack the ability to turn the TV on or off. A little hacking with an Arduino and some buttons can make that possible for them.
On hackaday.com, one consistently sees hacks for the disabled. Special game controllers for people missing an arm or some other limb are popular and simple to re-create.
I do have a personal anecdote.
I was once buying some resistors at the local RadioShack. While I looked through the mess of tiny plastic bags, another person started to browse for electronic bits. I started talking to him, and found out he was a doctor whose work was in dealing with disabled people. He had learned how to work with embedded electronics and fiber composites in order to build prosthetic limbs for impoverished patients.
One of my friends has coauthored several papers with some of the BrainGate folks.
We spent a road trip a couple of years ago discussing his research. One thing that jumped out is how similar many of the tools used in this field are to those in a startup. In particular, many big-data analysis tools and techniques that can be developed, optimized, and funded within the context of the web sphere are applied to these bioengineering problems. A few months ago an article floated around HN on how the current tech bubble, if it is one, lacks a "byproduct" that will benefit society when the scraps are cleaned up. You can make a pretty good argument that it's this.
Once the photo sharing app market dries up, people will have to look into other areas. I know that in the next ten years we will have startups (even 1-2 person teams) building much more than simple web apps. Innovation will slowly seep to other areas.
(And thus, Jeff, that is why people need to learn how to program)
The big open problems, as I see them:
1) Stability of the electrode array over long timescales.
2) Increasing the number of degrees of freedom of robust control that we can decode from the neural data. Activity in the motor cortex inherently lives in a (very) roughly 10-dimensional space, which is a bit of a mystery since we use this activity to control hundreds of muscles independently. What this means is that when you record from 96 electrodes simultaneously, many of the neurons picked up by the array are correlated, such that the resulting dimensionality is much lower than the number of neurons.
3) An important step forward will be to develop optogenetic (or other) sensory write-in, using pulses of light to activate neurons in specific patterns that mimic proprioceptive signals from your limbs. This will increase the speed and robustness of movement via faster closed-loop feedback.
4) Processing power. Currently it's typical to run decoders on real-time PCs. Creating an embedded processor capable of sufficiently low-power operation with enough processing power to run the decode is non-trivial, especially if you want to implant it in the brain.
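The dimensionality point in (2) is easy to see in a toy simulation: mix 10 latent signals into 96 "electrode" channels and check how many principal components carry the variance. The numbers here are illustrative only, not real neural data.

```python
# Simulate 96 correlated "electrode" channels that are linear mixtures
# of only 10 latent signals, then count how many principal components
# are needed to explain 99% of the variance.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((10, 5000))   # 10 true dimensions, 5000 samples
mixing = rng.standard_normal((96, 10))     # each electrode mixes the latents
channels = mixing @ latent                 # 96 correlated channels

# PCA via the eigenvalues of the channel covariance
cov = np.cov(channels)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()
n_components = int(np.searchsorted(explained, 0.99)) + 1
print(n_components)  # around 10: far fewer than the 96 electrodes
```

Real recordings aren't exact linear mixtures, of course, but the same analysis on motor-cortex data is where the "roughly 10 dimensions" figure comes from.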
It is very unlikely, as suggested elsewhere, that current web tech will impact BMI research at all. The primary interesting tech is in the decode algorithm, which is a modified form of the Kalman filter. There is, however, a lot of room for hackers in research. I recently made the switch from working in defense, building robots and designing sensor fusion algorithms, to a PhD in neuroscience. It probably won't pay as well long term, but it's far more rewarding and interesting! I'm surrounded by scientists who need better (software and hardware) tools and analysis methods. Also, an infusion of ideas and values aligned with open-access publishing (or changing the scientific publishing model altogether), open-source software, data and code sharing, etc. would generally benefit all and accelerate scientific research, but that's another topic worthy of more discussion elsewhere.
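For the curious, one predict/update cycle of the textbook Kalman filter looks like this. To be clear, this is the standard filter, not BrainGate's modified version; the state x would be something like cursor or arm velocity, and y a vector of binned firing rates.

```python
# One Kalman filter step for the linear models
#   x_t = A x_{t-1} + w,  w ~ N(0, W)   (movement model)
#   y_t = H x_t + q,      q ~ N(0, Q)   (neural observation model)
import numpy as np

def kalman_step(x, P, y, A, W, H, Q):
    """Advance the state estimate x (covariance P) given observation y."""
    # Predict the state and its covariance through the movement model
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update with the observed neural activity
    S = H @ P_pred @ H.T + Q             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In a decoder, A, W, H and Q are fit from training data where the intended movement is known, and then this step runs once per spike-count bin.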
By thinking about moving her own paralyzed arm, one woman in the experiment used an artificial limb to serve herself coffee for the first time in 15 years.
Seeing her smile at the end of that video just made my otherwise terrible day rather special. And it's such great validation for the team that's working on this technology, I hope they continue to see much success in the future.
Your muscles respond to your thoughts not because your brain picks out some random thought from all of them and decides to act on it; that wouldn't work at all. Your muscles are controlled by your brain in a feedback loop where your desire to make a particular movement gets compared with the actual movement you are making, and then that movement is corrected. For something new or exceptionally delicate, that is a full-time conscious job.
For something that you've done a hundred times before you more than likely have abstracted it away and have it 'on call' to the extent that it can be done as a subconscious task.
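The compare-and-correct loop above is essentially proportional feedback control. A toy version (nothing to do with the brain's actual circuitry, just the shape of the loop):

```python
# A minimal compare-and-correct loop: each step, measure the error
# between desired and actual position and correct by a fraction of it.
def move_to(target, position=0.0, gain=0.3, steps=40):
    """Nudge an 'arm position' toward a target with proportional feedback."""
    for _ in range(steps):
        error = target - position   # desire vs. actual movement
        position += gain * error    # correction proportional to the error
    return position
```

The error shrinks geometrically, so the "arm" homes in on the target without ever needing to compute the whole trajectory up front.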
I don't know if you have a driving license or not but because driving is a skill that many people learn only well after they've become conscious of their environment it is my favorite example of this sort of abstraction. When you first learn to drive it is difficult and each and every movement is something that your conscious is fully engaged with. If you learn to drive in a stick-shift car it is not rare to have an instructor operate the clutch and the brake for the first couple of lessons because there is already so much for you to pay attention to.
Fast forward a decade or so and you're listening to your favorite tune and possibly working out some problem in your head while driving on the highway at 90 miles per hour.
That's the power of how your brain is organized: it has an automated process for abstracting skills on board that allows you to build on top of things you've previously acquired, but it still lets you focus on those same things long after they've become abstracted away, if required.