
Giving Stephen Hawking a voice - Semetric
http://www.wired.co.uk/magazine/archive/2015/01/features/giving-hawking-a-voice
======
btown
Yahoo has a much more technically detailed article about the software,
including a video showing the software in action.
[https://www.yahoo.com/tech/how-intel-helped-stephen-hawking-...](https://www.yahoo.com/tech/how-intel-helped-stephen-hawking-communicate-with-104112393054.html)

Direct link to the screen-capture video:
[https://www.youtube.com/watch?v=mPU6mnM2i-k](https://www.youtube.com/watch?v=mPU6mnM2i-k)

TL;DR: Hawking only has a single reliable, low-latency binary signal (facial
muscle movements), so his interface has long been a constantly moving cursor
that he can "click" when it's over the next symbol/command he wishes to
select. The innovations here are in the interpretation of those selections: he
now has autosuggest for text (designed specifically for him, based on the
corpus of his works) and shortcuts for filesystem management.
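That scanning-plus-prediction scheme can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Intel's actual code: the toy corpus, the function names, and the one-candidate-per-tick click model are all my own simplification.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which words follow which, so suggestions can be ranked."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    """The k most frequent words seen after prev_word in the corpus."""
    return [word for word, _ in model[prev_word.lower()].most_common(k)]

def scan_select(candidates, clicks):
    """Linear scan: the highlight advances one candidate per tick;
    a True in clicks means the user fired their one binary signal."""
    for candidate, clicked in zip(candidates, clicks):
        if clicked:
            return candidate
    return None  # the user let the scan run out

# Tiny stand-in for "the corpus of his works":
model = train_bigrams("the black hole emits radiation the black hole evaporates")

print(suggest(model, "the"))                                    # ['black']
print(scan_select(suggest(model, "hole", k=2), [False, True]))  # 'evaporates'
```

Even a crude predictor like this cuts the number of scan cycles per word dramatically, which is presumably why a model trained on Hawking's own writing was such a win.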

I'm looking forward to the source code being released, or a paper being
written. Just looking at the data-entry video, for instance, there are
interesting parallels between the timing specifications for Hawking's Yes/No
dialog and GUI dialog design for non-disabled users - in both cases, if
there's not enough spacing between buttons, or orderings are unpredictable,
it's much easier to mis-click!

~~~
notthemessiah
How has he written so many books considering the speed of the interface? Even
in interviews, he seems to take a long time to respond.

~~~
NickHolt
I believe his assistants help him articulate his thoughts. The article implies
that he writes everything meticulously himself, but that isn't true. As far as
interviews go, interviewers usually send him the questions ahead of time.

------
byuu
Interesting article ...

> "I'm trying to make a software version of Stephen's voice so that we don't
> have to rely on these old hardware cards," says Wood.

So back in 2010 we had someone help us extract the program ROM code from the
SNES DSP-n coprocessors (used in games like Pilotwings and Mario Kart). It
turns out these chips were NEC uPD7725 DSPs. There was basically only one very
terse document on how the chip worked, and no emulators for it, so I had to
write one. I had a bit of help from Cydrak in fixing the overflow flag
calculations.

A while later, back in 2011 or so, I spoke briefly through a liaison with Sam
Blackburn (who was then Stephen Hawking's assistant). They were looking for
permission to use my uPD7725 emulation code (which I said yes to, obviously).
Apparently the Speech Plus text synthesizer uses NEC uPD7720s. This is
basically the same chip and ISA, but with less ROM/RAM. It's a neat little
fact, but not too surprising: these DSPs are really versatile, and different
programs can make them do very different things.

Reading this article, it sounds like the effort was as yet unsuccessful,
though :(

(It's also important to note that the uPD7720 is probably an infinitesimal
part of the overall system, so I suppose they ran into additional problems.)

------
k-mcgrady
Is it open source or based on open source? The article seems to switch between
the two at random. If they've made the entire system open source, that would
be incredible. I think something like this, which can improve millions of
lives, is the perfect project for an open-source community to work on. Sick
people shouldn't have to pay for this, and I'm sure lots of people would be
very happy to dedicate time to improving it. It's also good to know that if
Intel decided to stop work on it, the users who rely on it so heavily wouldn't
be screwed - the software could continue to be improved and would always be
available.

~~~
devnonymous
From [http://iq.intel.co.uk/how-intel-keeps-stephen-hawking-talkin...](http://iq.intel.co.uk/how-intel-keeps-stephen-hawking-talking/)

> Professor Hawking has been using his new software for several months while
> Lama and her team have been debugging and fine-tuning it. It’s almost
> finished, and when it is, Intel plans to make the system available to the
> open source community.

~~~
bitwize
Interesting. It says Hawking would rather stick to his old system than switch
to a new one, which conflicts with reports I read like a year ago that
suggested he was interested in a direct brain interface.

But perhaps it's not so surprising. He's old, and past a certain age you just
don't have time to relearn everything. Your time is better spent squeezing the
juice out of what you have.

~~~
k-mcgrady
>> "It says Hawking would rather stick to his old system than switch to a new
one"

Huh? The quote you're replying to says he "has been using his new software for
several months". Am I missing something? Is he switching back after testing? I
know he never plans to change the 'voice' itself, since it has become known as
his own, but I don't see why he would test new software and then not use it.
To test it he would have had to learn it.

~~~
Someone
Three years ago, he was looking for an assistant.
[http://www.theverge.com/2011/12/29/2668408/stephen-
hawking-t...](http://www.theverge.com/2011/12/29/2668408/stephen-hawking-
technical-assistant-speech-system) states that, at the time, his computer was
running Windows XP, so he definitely wasn't on the bleeding edge. However,
that has changed a bit; he upgraded to Windows 7
([http://www.hawking.org.uk/the-computer.html](http://www.hawking.org.uk/the-
computer.html))

------
kps
I wonder how much work went into making it still sound exactly like a DECtalk.

------
kmfrk
What software did Ebert use? I remember him talking about paying English
researchers to synthesize his own voice, but I don't think anything
materialized.

~~~
agildehaus
It materialized, but I think there were limitations and he ended up not using
it:

[https://www.youtube.com/watch?v=_0KUw3xr7cA](https://www.youtube.com/watch?v=_0KUw3xr7cA)

And it wouldn't be possible for Hawking. Ebert's system was put together using
hundreds of hours of recordings from Siskel & Ebert At the Movies.

~~~
cleverjake
Actually, the recordings from At The Movies weren't usable, because so much of
them had movies playing in the background. They used all of his DVD
commentaries (mainly from the Criterion Collection, if memory serves).

------
omaranto
Arun Mehta wrote an essay called "When a Button Is All That Connects You to
the World" for the book Beautiful Code (2007) about speech software designed
for Stephen Hawking. I don't think the system described in Mehta's article is
this one, though.

------
dang
Url changed from [http://www.wired.co.uk/news/archive/2014-12/02/stephen-hawki...](http://www.wired.co.uk/news/archive/2014-12/02/stephen-hawking-intel-communication-system), which points to this.

------
applehammer
If they had been more agile and iterated releases to him, rather than doing
one big-bang release, he might have embraced the technology more easily.

~~~
xahrepap
Unless it takes him a long time to adjust to even small changes. Or maybe his
old software was faster for him or more stable until recently?

------
acd
Where is the source Luke?

~~~
ofcapl_
Such an inspiring article.

