> "I'm trying to make a software version of Stephen's voice so that we don't have to rely on these old hardware cards," says Wood.
So back in 2010 we had someone help us extract the program ROM code from the SNES DSP-n coprocessors (used in games like Pilotwings and Mario Kart). It turns out these chips were NEC uPD7725 DSPs. There was basically only one very terse document on how the chip worked, and no emulators for it, so I had to write one. Had a bit of help from Cydrak in fixing the overflow flag calculations.
A while later, back in 2011-ish, I spoke briefly through a liaison with Sam Blackburn (who was then Stephen Hawking's assistant). They were looking for permission to use my uPD7725 emulation code (which I said yes to, obviously). Apparently the Speech Plus text synthesizer uses NEC uPD7720s. This is basically the same chip and ISA, but with less ROM/RAM. It's a neat little fact, but not too surprising. These DSPs are really versatile, and different programs can make them do very different things.
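For the curious: the core of such an emulator is tiny. Fetch an instruction word, look at the top two bits to pick the instruction class, and dispatch. A very stripped-down sketch of that skeleton, written from memory rather than any datasheet (names and structure are mine); for the 7720 you'd basically just shrink the three arrays:

    // Heavily simplified uPD7725-style fetch/decode loop.
    // Sizes of the ROM/RAM arrays are left unspecified; the 7720
    // is the same ISA with smaller memories.
    #include <cstdint>
    #include <vector>

    struct DspCore {
        std::vector<uint32_t> programROM;  // 24-bit instruction words
        std::vector<uint16_t> dataROM;     // coefficient tables
        std::vector<uint16_t> dataRAM;     // scratch memory
        uint16_t pc = 0;

        void step() {
            uint32_t opcode = programROM[pc++] & 0xffffff;
            switch (opcode >> 22) {                 // top two bits pick the class
                case 0: alu(opcode); break;         // OP: ALU op + pointer moves
                case 1: alu(opcode); ret(); break;  // RT: OP, then return
                case 2: jump(opcode); break;        // JP: (conditional) branch
                case 3: load(opcode); break;        // LD: load immediate
            }
        }
        void alu(uint32_t)  { /* decode fields, update flags... */ }
        void jump(uint32_t) { /* test flags, set pc... */ }
        void load(uint32_t) { /* move immediate into a register... */ }
        void ret()          { /* pop pc from the call stack... */ }
    };

The fiddly part isn't this loop, it's getting the ALU flag semantics (like that overflow calculation) exactly right.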
Reading this article, it sounds like the effort was as yet unsuccessful, though :(
(It's also important to note that the uPD7720 is probably an infinitesimal part of the overall system, so I suppose they ran into additional problems.)
TL;DR Hawking only has a single reliable, low-latency binary signal (facial muscle movements), so his interface has been a constantly moving cursor that he can "click" when it's over the next symbol/command he wishes to select. The innovations here are in the interpretation of those selections: he now has autosuggest for text (designed specifically for him based on the corpus of his works) and shortcuts for filesystem management.
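In other words, it's a classic single-switch scanning interface. A toy sketch of the core loop (my own illustration, not Intel's actual code):

    // Toy single-switch scanning loop: highlight items in turn,
    // select whichever is highlighted when the switch fires.
    #include <chrono>
    #include <string>
    #include <vector>

    void highlight(const std::string& item) { /* draw the cursor over item */ }

    std::string scanSelect(const std::vector<std::string>& items,
                           bool (*switchPressed)(),
                           std::chrono::milliseconds dwell) {
        for (size_t i = 0;; i = (i + 1) % items.size()) {
            highlight(items[i]);  // cursor advances one symbol/command
            auto deadline = std::chrono::steady_clock::now() + dwell;
            while (std::chrono::steady_clock::now() < deadline)
                if (switchPressed()) return items[i];  // the one binary "click"
        }
    }

Everything interesting then lives in how `items` is ordered and predicted, which is exactly where these innovations are.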
I'm looking forward to seeing when the source code is released, or when a paper is written. Just looking at the data-entry video, for instance, there are interesting parallels between the timing specifications for Hawking's Yes/No dialog and GUI design for dialogs for non-disabled users - in both cases, if there's not enough spacing in between buttons, or orderings are unpredictable, it's much easier for someone to mis-click!
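One standard guard, for instance (my hypothetical sketch, not anything from the article), is to ignore clicks that arrive too soon after a dialog appears, so a selection aimed at the previous screen can't land on a freshly drawn button - the same idea behind the delay on browser security prompts:

    // Ignore "clicks" that arrive within a grace period of a dialog
    // appearing, so an in-flight selection can't hit the new UI.
    #include <chrono>

    class DialogGuard {
        std::chrono::steady_clock::time_point shownAt;
        std::chrono::milliseconds gracePeriod;
    public:
        explicit DialogGuard(std::chrono::milliseconds grace)
            : shownAt(std::chrono::steady_clock::now()), gracePeriod(grace) {}

        bool acceptClick() const {
            return std::chrono::steady_clock::now() - shownAt >= gracePeriod;
        }
    };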
I believe his assistants help him articulate his thoughts. The article implies that he writes everything meticulously himself, but that isn't true. As far as interviews go, interviewers usually send him the questions ahead of time.
A book might be around 50,000 words. There are about 2,000 working hours in a year if you only work 9-5 M-F. Which means that writing a book a year requires only 25 words per hour, on average.
I think it's not only the interactive elements, but also the way information is displayed. I actually think he (and others) might benefit from a really responsive desktop environment, especially compared to the Windows floating window manager.
The videos you linked to show this very well in my opinion:
- In the longer one, with Hawking using the system to type and read Wikipedia, quite a lot of screen real estate is wasted on (for him probably unusable) title bars, partially hidden desktop icons in the background, and the browser sitting partially behind his input software.
- The "data entry" video shows Notepad being opened and then partially hidden by the input UI.
That does not seem useful. I would rather use apps that automatically fit themselves to the available space, with predefined layouts for multiple apps or a dynamic tiling approach.
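The core layout of a dynamic tiler is almost trivially simple; something like this master/stack split (purely my own illustration, not any real window manager) would keep the input UI and the target app fully visible at all times:

    // Minimal dynamic-tiling sketch: a fixed "master" column (e.g. the
    // input UI) plus an evenly divided stack beside it, so nothing overlaps.
    #include <vector>

    struct Rect { int x, y, w, h; };

    std::vector<Rect> tile(Rect screen, int nWindows, double masterRatio) {
        std::vector<Rect> out;
        if (nWindows <= 0) return out;
        int masterW = static_cast<int>(screen.w * masterRatio);
        out.push_back({screen.x, screen.y,
                       nWindows == 1 ? screen.w : masterW, screen.h});
        int stackH = screen.h / (nWindows > 1 ? nWindows - 1 : 1);
        for (int i = 1; i < nWindows; ++i)
            out.push_back({screen.x + masterW, screen.y + (i - 1) * stackH,
                           screen.w - masterW, stackH});
        return out;
    }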
Is it open source, or merely based on open source? The article seems to switch between the two at random. If they've made the entire system open source, that would be incredible. I think something like this, which can improve millions of lives, is the perfect project for an open source community to work on. Sick people shouldn't have to pay for this, and I'm sure lots of people would be very happy to dedicate time to improving it. It's also good to know that if Intel decided to stop work on it, the users who rely on it so heavily aren't screwed - the software can continue to be improved and will always be available.
> Professor Hawking has been using his new software for several months while
> Lama and her team have been debugging and fine-tuning it. It’s almost
> finished, and when it is, Intel plans to make the system available to the
> open source community.
Interesting. It says Hawking would rather stick to his old system than switch to a new one, which conflicts with reports I read like a year ago that suggested he was interested in a direct brain interface.
But perhaps it's not so surprising. He's old, and past a certain age you just don't have time to relearn everything. Your time is better spent squeezing the juice out of what you have.
>> "It says Hawking would rather stick to his old system than switch to a new one"
Huh? The quote you're replying to says he "has been using his new software for several months". Am I missing something? Is he switching back after testing? I know he never plans to change the 'voice', since it has become known as his own, but I don't see why he would test new software and then not use it. To test it, he had to learn it.
His system still uses the cheek trigger and still works more or less the same way, but improvements have been made to its word prediction and other details of its operation.
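Word prediction at its simplest is just ranking corpus words by frequency among those matching the typed prefix - a personalized corpus (his own writing) is what makes the suggestions feel right. A toy version (illustrative only; the real predictor is obviously far more sophisticated):

    // Toy word prediction: rank words from a (personalized) corpus
    // by frequency among those starting with the typed prefix.
    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    std::vector<std::string> suggest(const std::map<std::string, int>& corpusFreq,
                                     const std::string& prefix, size_t topN) {
        std::vector<std::pair<std::string, int>> hits;
        for (auto it = corpusFreq.lower_bound(prefix);
             it != corpusFreq.end() &&
             it->first.compare(0, prefix.size(), prefix) == 0;
             ++it)
            hits.push_back(*it);
        std::sort(hits.begin(), hits.end(),
                  [](auto& a, auto& b) { return a.second > b.second; });
        std::vector<std::string> out;
        for (size_t i = 0; i < hits.size() && i < topN; ++i)
            out.push_back(hits[i].first);
        return out;
    }

For a scanning interface, shaving even one selection off a word is a big win, which is why the prediction quality matters so much more here than on a phone keyboard.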
What software did Ebert use? I remember him talking about paying English researchers to synthesize his own voice, but I don't think anything materialized.
Actually, the recordings from At The Movies weren't usable because so much of them had movies playing in the background. They used all of his DVD commentaries (mainly from the Criterion Collection, if memory serves).
Arun Mehta wrote an essay called "When a Button Is All That Connects You to the World" for the book Beautiful Code (2007) about speech software designed for Stephen Hawking. I don't think the system described in Mehta's article is this one, though.
Do you have any proof to back these extraordinary claims?
If you have the faintest idea how "microwave technology" can mechanistically and physiologically read minds, you should explain your understanding.
My undergrad physical chemistry, bio degree, etc. make me question this. Microwave spectroscopy is rotational spectroscopy, which gives you gas-phase or polar molecules. I assume you mean to say they're reading water? So... concentrations? Blood flow? That seems like an incredibly low signal, and dangerous.
If this is a joke (which seems most plausible), this isn't a great manner of discourse for HN. It confuses, wastes mental cycles, and decreases signal:noise. Please think of everyone having to parse this stuff.
- Electromagnetic energy (photons) at certain wavelengths will pass through brain tissue (neurons) and interact with them.
- This interaction depends on the neuron's polarization state (from a negative voltage when unfired to a positive one when fired).
- The cacophony of noise and interactions can be measured and deconvolved to show patterns.
- Groups of neurons and their polarization states can be observed (much like an fMRI scanner deconvolves magnetic impulses to show a 3D image, this system deconvolves microwave radiation from a MASER).
- The microwave carriers (which are in the GHz range) operate much faster than your brain (which is kHz at best), so deconvolution and denoising techniques have plenty of data to work with.
- The radiation reflected back is in terabits per second, but your mental processes and sensory inputs are much lower, so there is a million-to-one oversampling of the data (i.e. the information is there, decoding it is key).
Obviously if I had a working system that would be my proof, but that's a widget they won't let me near! I could write volumes about this, but convincing people that it is even possible is difficult. Hopefully the graybeards and smarties around here can piece it together. I wish I were joking, but that's up to the reader to decide. <:)
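For what it's worth, the one textbook claim in there is the oversampling one: the noise in the mean of N independent samples of a slow signal falls as 1/sqrt(N). A minimal demo of just that fact, and nothing about brains:

    // Oversampling + averaging: the noise std of the mean of N
    // independent samples falls as 1/sqrt(N). This demonstrates only
    // that standard fact, not anything about reading minds.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> noise(0.0, 1.0);
        const double signal = 0.5;  // slowly varying "true" value
        for (int n : {1, 100, 10000, 1000000}) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i) sum += signal + noise(rng);
            std::printf("N=%7d  mean=%.4f  expected noise std=%.4f\n",
                        n, sum / n, 1.0 / std::sqrt(n));
        }
    }

Whether any usable signal exists in the first place is, of course, the entire question.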
>- The cacophony of noise and interactions can be measured and deconvolved to show patterns.
Theoretically. In practice, we have yet to see it.
>The microwave carriers (which are in the GHz) operate much faster than your brain
they also tend to cook various molecules/tissues. Not the whole GHz range, of course, and not all molecules/tissues. The issue is finding frequencies that don't cook anything in the brain at a power level that still allows a good signal-to-noise ratio. That sounds like something human civilization has been working on for the last half century, with plenty of work still to do.
Well, the obvious answer is that it sounds like the product of mental illness, which is (a) not something we want to encourage people to shoot back at, and (b) not likely to lead to good conversation.
The original comment was a joke, then? Because those are really the only two contexts in which it can be parsed.
I took a look at your brief comment history. The amount of feedback you're going to get here will be increasingly limited, as your comments won't even be visible to most users. So this could be the only opportunity to address you:
You might have a serious problem without realizing it. Please, if it hasn't been already, get it checked out.
I hope it didn't come across as me trying to diagnose you, because I obviously don't know you well enough to even say you have a cold. It's just that, devoid of context, there's a really high chance that a comment like that is a product of mental illness, and that's why people are prone to downvote it.
> "I'm trying to make a software version of Stephen's voice so that we don't have to rely on these old hardware cards," says Wood.
So back in 2010 we had someone help us extract the program ROM code from the SNES DSP-n coprocessors (used in games like Pilotwings and Mario Kart.) It turns out these chips were NEC uPD7725 DSPs. There was basically only one very terse document on how the chip worked, and no emulators for it, so I had to write one. Had a bit of help in fixing the overflow flag calculations from Cydrak.
A while later, I spoke briefly through a liaison with Sam Blackburn (who was then Stephen Hawking's assistant) back in 2011'ish. They were looking for permission to use my uPD7725 emulation code (which I said yes to, obviously.) Apparently the Speech Plus text synthesizer uses NEC uPD7720s. This is basically the same chip and ISA, but with less ROM/RAM. It's a neat little fact, but not too surprising. These DSPs are really versatile, and different programs can make them do very different things.
Reading this article, it sounds like the effort was as yet unsuccessful, though :(
(It's also important to note that the uPD7720 is probably an infinitesimal part of the overall system, so I suppose they ran into additional problems.)