
Using Voice to Code Faster than Keyboard - idleworx
http://ergoemacs.org/emacs/using_voice_to_code.html
======
mechanical_fish
A few months ago I had an RSI problem so bad - able to type only a minute at a
time, even sitting with hands on keyboard hurt - that I started down this
route. This video was, literally, a life-altering motivator for me, and I was
quite obsessed with it.

Ironically, after seeing a physical therapist - which, _let me tell you_, you
should do at the first sign of pain, because while they can't help some people
I personally am batting 1.000 with PTs for RSI over my many-year career - my
recovery is now so complete that I've totally fallen off the voice-computing
path... for now. But I intend to keep going, not just because it is hilarious
but because, well, RSI happens and it really pays to vary the routine sooner
rather than later. There is nothing like trying to do a ton of emergency
scripting in Python and Emacs at the lowest possible point of your
productivity.

The most important hint I have so far is: do _not_ waste time with Mac OS. You
need a PC running the Windows version of Dragon. The Mac version is pretty
good for occasional email but lousy for emacs because it doesn't have the
Python hook into the event loop that a saint hacked into the PC version years
ago before leaving Dragon.

The speechcomputing.com forums are your friend.

Yeah, they say there is an open-source recognition engine that works okay, and
time spent improving free recognition engines is time that _really_ improves
the world for all kinds of injured people, but here's the problem: when you
need a speech system you really _need_ it, and there are a lot of moving
parts. Dragon, and Windows, and a super PC to run it on are super cheap
compared to your time, especially when your time is in six-minute increments
punctuated by pain.

~~~
milos_cohagen
As someone with a disability (quadriplegic), who types/codes with one finger,
I find it appalling that Nuance, Apple and Google haven't opened up their
speech recognition systems through even a rudimentary API that would allow
innovation to _directly_ improve the lives of me and many other disabled
people, whether it's RSI or worse.

~~~
mechanical_fish
It was a shock to me to discover that the livelihood and happiness of so many
people depends on a dubiously-reliable unofficial API that was hacked into
Dragon years ago and that has been lovingly preserved ever since, just below
the radar. It feels like being critically dependent on Windows 95.

------
lifeformed
I guess it depends on the type of software you're working on, but input speed
has never been close to being the bottleneck with coding for me...

Most of the time I'm trying to figure out what to do or how to implement an
algorithm. Rarely do I get those mad-scientist frenzies where I'm typing away
frantically trying to get all the words down as they come into my mind in a
flash of inspiration.

~~~
lambda
It's one thing to not be able to write code as fast as you can type. It's
another to use a speech to text input method that's designed for long-form
prose and try to use it to code. Can you imagine the frustration of trying to
enter longCamelCaseVariableNames without a special macro to do so? I don't
know the usual commands in Dragon, but I imagine it would be something like:
"long delete space uppercase camel delete space uppercase case delete space
upper case variable delete space uppercase names", possibly with a few false
starts and undos in there as it interprets some of your words as commands
rather than code.

To experience something like it, try using your phone keyboard, with word
prediction on, to write code. It will be slow, and frustrating, and have a lot
of false starts.

There's a big difference between "not the fastest way to enter text" and "so
slow it's unusable", and the impression I get is that without extensive macros
like this, most speech to text systems are so slow as to be unusable for
writing code.

~~~
egypturnash
That's kinda the point of this article. He's got a bunch of macros and
idiosyncratic commands.

At 11:30 in the video:

"Camel this is a test" -> thisIsATest
"Studly this is a test" -> ThisIsATest
"Jive this is a test" -> this-is-a-test
"Dot word this is a test" -> this.is.a.test
"Score this is a test" -> this_is_a_test
"Mara" -> selects all text on screen
"Chik" -> delete

He says he'll release his code in a few months.
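For illustration, the prefix transforms above could be sketched in Python like this (the function names and exact behavior are my inference from the demo, not the presenter's unreleased code):

```python
# Hypothetical sketch of the spoken-prefix transforms shown in the video.
def words(phrase):
    """Split a spoken phrase into lowercase words."""
    return phrase.lower().split()

def camel(phrase):     # "this is a test" -> thisIsATest
    w = words(phrase)
    return w[0] + "".join(p.capitalize() for p in w[1:])

def studly(phrase):    # "this is a test" -> ThisIsATest
    return "".join(p.capitalize() for p in words(phrase))

def jive(phrase):      # "this is a test" -> this-is-a-test
    return "-".join(words(phrase))

def dot_word(phrase):  # "this is a test" -> this.is.a.test
    return ".".join(words(phrase))

def score(phrase):     # "this is a test" -> this_is_a_test
    return "_".join(words(phrase))
```

The real system presumably wires these up as Dragon grammar rules rather than plain functions, but the string handling itself is this simple.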

~~~
lambda
Yes, that was my point. I watched the video, and picked the camel case example
from it.

I was replying to someone who said that input speed is not the main bottleneck
in coding, hence implying that it's not all that useful to do things to
improve input speed. While I concede that input speed is not the primary
bottleneck, my point is that without macros like this to speed it up, voice
input would be way too slow to do anything useful.

------
henrik_w
Tangentially related, but I'll throw it in here, since so many developers
aren't taking ergonomics seriously. RSI can happen to you if you are not
careful, and it can wreck your career (almost happened to me). Several years
ago, I started having aches in my arms. Over half a year it got gradually
worse, until it was so bad that I thought I would have to give up coding
altogether.
Fortunately, I managed to get it under control, mostly with the aid of a break
program, and an ergonomic keyboard and mouse. I'm now completely over it, but
I still need to be careful not to get it back. A lot more details in this
post: [http://henrikwarne.com/2012/02/18/how-i-beat-
rsi/](http://henrikwarne.com/2012/02/18/how-i-beat-rsi/)

~~~
muxxa
Personal anecdote: I correlated my RSI directly to drinking coffee (tea is
okay). I notice when I'm caffeinated that my posture is very different and I
hold postures (e.g. holding down the shift key) for much longer. If RSI starts
to blight you, try substituting your morning coffee for tea or water. For me,
a break program just increased the stress levels of 'wanting to get something
done', which I think is the root cause of RSI (stress).

~~~
dctoedt
> _try substituting your morning coffee for tea or water._

Syntax [edit:] tip:

"try substituting tea or water for your morning coffee"

or

"try _replacing_ your morning coffee _with_ tea or water"

EDIT: For the downvoters: Fairly or unfairly, in the non-tech world people
judge you by your choice and arrangement of words. (Compilers do much the same
thing, of course.)

</pedantry>

~~~
mechanical_fish
Nice people also judge those who try to shame people (not all of whom are
native speakers of English) into silence on health-related forum threads by
picking on irrelevancies.

~~~
delluminatus
Yes, let's ignore non-native English speakers' mistakes. That way, they will
never learn, and we can continue to subjugate them, along with those for whom
English is a first language, but cannot speak it correctly, probably because
they were never taught that "should have" is not spelt "should of" and that
their "they're"s aren't quite there.

~~~
mechanical_fish
"Spelt", in my dialect, is incorrectly spelled, and is a noun referring to a
variety of wheat.

Now, was the "correction" I just offered you effective and useful, or was it
merely irrelevant, provincial, chauvinistic, uninvited, uninviting, and just
plain rude?

(A note to downthread grammar trolls: I just used an Oxford comma, boldly,
without apology. Have fun.)

------
mdaniel
My counter-argument to voice-driven coding has been primarily around the input
bandwidth and the fact that you _must_ work from home with that kind of setup.

I guess the presenter conducted the "faster than the keyboard" test under very
controlled circumstances (e.g. only working on his own code, so one doesn't
have to deal with non-english-word variables/functions).

I don't mean to be a hater, because that was an _amazing_ demo, but I don't
believe it's the holy grail the title implies it is.

~~~
applecore
_> you must work from home with that kind of setup._

Or, you have a private office with a door that closes.

~~~
mhd
A high-quality headset, maybe even with some noise-canceling features, should
work okay, too. It wouldn't be that much different from a call center, and
those don't usually get their own offices, either.

Sure, not the ideal, distraction-free environment, but neither is a cubicle
farm.

~~~
WalterSear
Voice recognition isn't good enough for that yet: hyper-cardioid mics can only
do so much.

~~~
mhd
Really, Dragon can't cope with someone sitting ten feet away and speaking at
the same time? So I guess no listening to the radio, either. Is it just the
specifics of speech, or is it _that_ noise-sensitive?

Maybe someone should do some kind of voice rec "groupware" then, where the
relatively louder results of the other person are used to filter out false
positives on my end...

------
fsck--off
"Emacs pinkie" is a non-issue if you use a keyboard with thumb clusters, e.g.
a Maltron or a Kinesis model. Investing in a good keyboard is just as crucial
as investing in a good chair, especially if you make a living by coding. The
time that you spend compensating for a bad input device by hacking your own
workarounds can be more costly than spending money on a proper solution.

Once you are an adequate touch typist, typing speed is only beneficial if you
use a language that requires you to type a lot of boilerplate. Even then, you
can use an IDE for auto-completion. I can type at very high speeds — as fast
as others can input text by using their voice — but I can't remember the last
time I needed to type for more than a minute at a time. If you use a language
that requires you to spend more time thinking about code than it does to
actually type it, typing speed really doesn't matter. Code is like speech in
that it is judged by the eloquence, not the speed, of its delivery.

~~~
ajross
It's similarly much less an issue when you map your keys correctly. Control
goes to the left of "A", meta below "/". Much less pinky travel. Sun got this
right way back in the 80's with the Type 3 keyboard (vi users prefer its
placement of ESC too).

~~~
rwg
FWIW, you can get an unused Unix layout (Control left of A, Esc left of 1,
Backspace directly above Return) Sun Type 6 USB keyboard for around $40.

[http://ep.yimg.com/ca/I/memx_2267_226185665](http://ep.yimg.com/ca/I/memx_2267_226185665)

If you're using X11, you can go nuts with xmodmap and get it functioning at
least as well as it did on Solaris.

I think getting a genuine Sun keyboard beats just remapping keys on a
101/104-key PC keyboard. There are 12 additional keys at the left and top-left
of the keyboard just begging to be remapped for your own nefarious purposes.
You also get meta keys that are separate from the Alt key, as well as Compose
and AltGr keys for your åçcéñtêd character needs.

Plus when you look down and see the Sun logo, you can reminisce about the old
days and have a good cry at your desk.

------
ics
I was trying to work something like this out to try about a month ago but had
to put it aside for later. Running my speech recognition inside a virtual
machine was a dealbreaker, but not all that uncommon for people doing this
sort of thing. I really, really wanted to get Julius[1] running in OS X but
after a couple of tries I couldn't get it to build (a problem on my end - this
is a
good reminder to get it sorted out). If you're looking for an alternative to
CMU Sphinx that's still FOSS, you really should check Julius out. There are
plenty of docs on getting it running with languages other than Japanese. If
you're curious about how well it can work, check out this[2] demo (requires
Chrome).

[1]
[http://julius.sourceforge.jp/en_index.php](http://julius.sourceforge.jp/en_index.php)
[2]
[http://www.workinprogress.ca/KIKU/dictation.php](http://www.workinprogress.ca/KIKU/dictation.php)

~~~
porker
That does work well. I'm happy to pay for Dragon, but I find the Windows
version so superior to the OSX software I refuse to run it on OSX...

~~~
reeses
The OS X "version" is a nightmare. It's guaranteed to break with every major
OS release. Nuance takes months to release working versions. When it does
work, it's hostile to any other apps that use the accessibility hooks, such as
Text Expander, Alfred, etc., which would be _awesome_ with speech input.

The history of the Mac version (acquisition of a company that licensed the
Dragon engine) means that it and the Windows versions are very likely
permanently divergent. Given the relative market sizes, the Windows version
has the best development, the best recognition, and the least schizophrenic
product support.

I am glad that dictation (apparently powered by Nuance's engine anyway) is to
be included in Mavericks, including a disconnected (i.e., non-Siri) mode.
Maintaining an application with a skeleton crew and relying on system services
that change at a fundamental level every couple years is not a path to
customer satisfaction.

~~~
porker
> I am glad that dictation (apparently powered by Nuance's engine anyway) is
> to be included in Mavericks

I'd missed that; very interesting. I need a disconnected mode, since only
being able to dictate short passages, especially with an online system that
doesn't learn from corrections, is a pain.

------
swayvil
~99% of my time coding is spent working through the stuff in my head

Now if they could optimize that...

------
crazygringo
Where is it backed up that it's faster than the keyboard?

For the couple of minutes I watched of him demoing it... I type _waaaay_
faster than that. In fact, I can't possibly _imagine_ how I could speak faster
than I can code on the keyboard.

(Regular English sentences are another story, but code is full of important
punctuation, exact cursor positioning, single characters, etc...)

I mean, this is awesome for people with trouble typing (which was my own case
a few months back), but I don't think it needs to be over-sold by being
"better"...

~~~
cbhl
I think this is a silly point of contention. If I recall correctly, it's
established that for English-language prose, speech recognition is easily
faster (300+ wpm) than typing (150-200 wpm if you're good; 20-50 wpm typical,
IIRC).

All he needs to establish is that he can do things like type
aVariableNameLikeThis in six words (16% overhead) instead of fifteen[0] (200%
overhead) and the rest of the claim follows.

[0] If you tried to type it using the out-of-box dictation in, say, Android or
Dragon, you'd probably start with something like "lowercase a backspace
uppercase variable backspace uppercase name..."

------
sspiff
Whenever I see posts about voice controlling your computer, I spontaneously
think "thank the heavens I don't have to share an office with you." I realize
some people work alone, at home or in a sound proof office, but every work
environment I've worked in has had a shared acoustic space.

These voice control schemes almost always end up as a cool gimmick, and rarely
as a productivity boosting solution.

~~~
asgard1024
Because you're thinking about it wrong. Together with a HUD, it will be a
godsend for anybody who needs to keep their hands free and yet work with a
computer. And if the microphone is close enough to your mouth, you won't have
to talk loudly to it.

For example, I could go tend the garden and still think about some problem,
take notes, even code. Or check email, browse the internet. I can work on a
hardware project and have schematics or specifications appear in front of my
eyes. I can take a walk and make notes. I can eat while working.

Eventually, no office will be required. You can just stroll in the park and
get the work done.

~~~
sspiff
None of those use cases seem like something I would find useful, and talking
with my mouth full doesn't seem convenient; I'm guessing your recognition rate
would go way down.

~~~
andreyf
Strolling through the park coding on your Google Glass doesn't seem useful?
Well, maybe not useful, but it would certainly be cool :)

------
rossjudson
While I've never been able to adapt to using voice to code, what I have done
successfully is use Dragon to document my code. I set up some macros that
could move forwards and backwards between methods in Eclipse, added a "start
doc" macro...Eclipse does a lot of very smart completion so basic features in
Dragon handled it without difficulty.

Dictating your javadoc is pretty damn convenient.

~~~
JabavuAdams
I have a relatively small working memory, and I've been coding since I was a
little kid. Coding is like thinking out loud for me.

My default way to work is to bang some stuff into an editor and then
constantly revise and reshape it. I'll draw diagrams on paper or white-board
as necessary. I also tend to cut and paste "code notes" into a separate window
so I don't have to keep that in my head.

------
MarcScott
This reminded me of the guy who tried some Perl scripting using Windows Vista
voice recognition.

[http://www.youtube.com/watch?v=MzJ0CytAsec](http://www.youtube.com/watch?v=MzJ0CytAsec)

~~~
colinm
First thing that came to mind.

Hilarious!

------
asgard1024
I like it a lot. I wish there were a solution to tie this to, say, Google
Glass, and be able to go on a walk or sit in the woods and code or make notes
with it, hands free. Or while cooking or doing laundry, etc.

It's unfortunate he couldn't get the OSS speech recognition to work, though.

~~~
SimHacker
Yea, Google Glass would be ideal for DoucheScript Brogramming. Everyone could
listen to you reindent your code while you held up the line at Starbucks.

~~~
JabavuAdams
Was just thinking of a way to be able to code on the subway. While it could
annoy some, I'm often annoyed by stupid conversations on the subway. Can't
close ears.

------
daGrevis
Reminds me of VimSpeak.

[https://github.com/AshleyF/VimSpeak](https://github.com/AshleyF/VimSpeak)
[http://www.youtube.com/watch?v=TEBMlXRjhZY](http://www.youtube.com/watch?v=TEBMlXRjhZY)

~~~
reirob
Thanks for sharing!

Just watched it and I find it awesome, not just for the voice recognition but
also as a nicely narrated video of Vim usage. I learned some nice things that
I will now use more regularly in Vim.

------
ohwp
What I think is interesting is that a lot can be done to make typing easier
and more human when you can type like you speak (and think).

For example: we say/think

    for each item in list

but in a lot of languages you need to type something like

    foreach(item in list) {

A step further: we say/think

    let a be the substring of b from 1 to the end

we need to type

    a = b.substring(1)

Of course the last example is much shorter and even more readable (to the
machine for sure), but maybe code could be a little more human.
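As an aside, Python already reads fairly close to the spoken forms above (an illustrative sketch, not a claim about any particular language mentioned in the thread):

```python
# "for each item in list" is nearly literal Python.
items = ["a", "b", "c"]
result = []
for item in items:
    result.append(item)

# "let a be the substring of b from 1 to the end" in slice notation.
b = "example"
a = b[1:]
```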

~~~
gd1
I disagree. You could argue that a musician probably thinks "I have to play a
D# for one and a half beats" as well. Or they can draw a dotted quarter on the
sheet. We have symbolic languages for a reason - they are, once learnt,
superior. If anything code needs to move _further_ away from spoken language,
more in the direction of APL and its descendants.

A skilled musician likely doesn't engage the speech centres of their brain,
they see a note on the sheet and translate it to motion. You should be able to
take in the symbol for "apply a function to each item in a vector" at a glance
without any clumsy English getting in the way. APL had it right, but coding
has been crippled by catering to the lowest common denominator.

~~~
ohwp
_" they see a note on the sheet and translate it to motion"_

Indeed. I think notes are more 'human' than most programming languages. If the
music goes up, the notes go up. If the notes are short they look short (and
more dense).

But I agree that typing "let a be the substring of b from 1 to the end" is no
fun. So I'm glad we have symbolic languages. But I think they could be made
more 'human'.

------
speeq
That was a fun talk to watch. Someone should try something similar using some
kind of brainwave detecting glass gear to make it possible to code by simply
thinking. That'd be awesome.

~~~
therandomguy
At that point why not cut out the coding altogether? Just visualize the output
and there it is.

~~~
dylangs1030
Just materialize whatever you want, in perfect working order, by thinking of
it.

~~~
TeMPOraL
With 2030-grade brainwave gear and 3D printing, why not?

------
charlieflowers
Question (halfway on topic) --

Who makes the best speech recognition software in the world? Regardless of
whether it is available to consumers ... who is the best at it?

In particular, how do Apple (Siri) and Google (Google Now) compare to Nuance's
stuff? Is Nuance so far ahead of everyone else that they're the clear leader?
Or is their codebase "legacy" and vulnerable to better, more accurate software
which can be built now due to better algorithms and approaches?

~~~
DigitalJack
I don't know who makes the best. But I know the history behind Dragon is very
sad.

[http://en.wikipedia.org/wiki/Dragon_NaturallySpeaking#Histor...](http://en.wikipedia.org/wiki/Dragon_NaturallySpeaking#History)

~~~
charlieflowers
Wow! That would lead one to speculate that perhaps they haven't had the best
of engineering teams focused on improving the product over the years! Which
means there might be a huge opportunity here.

------
cbhl
A word of warning -- I started dictating all of my email and Facebook replies
on my Android using Google's voice keyboard on my Nexus One a few years ago in
response to RSI pain in my hands from overusing my cell phone. Within a month,
I started losing my voice.

RSI comes in multiple forms; using your voice exclusively is not going to fix
the problem. The trick is to switch things up, which involves having
alternatives in the first place.

~~~
reeses
Those vocal exercises singers do seem silly until you run into a problem such
as this. They've been working on getting more mileage out of their larynxes
for hundreds of years and have some pragmatic practices that can help.

Lots of water, avoiding nastiness in the air, learning the bare minimum volume
of air you can push through your throat and still get results, and taking
breaks when your body (either by feel or sound) tells you that it's tired.

In this specific case, adding leverage with short macros such as "laip" and
"slap" is essential. There's no way you could work a full day spelling
everything that wasn't in the recognizer's dictionary.

------
klancaster1957
In the video he mentions that he wishes he had known about the previous talk.
Looked it up - [http://pyvideo.org/video/1706/plover-thought-to-text-
at-240-...](http://pyvideo.org/video/1706/plover-thought-to-text-at-240-wpm).
Pretty interesting. They are applying court reporter techniques to coding,
cutting down on the keystrokes immensely.

------
mugenx86
Anyone else find speaking commands out loud to distract from thought?

"slap... slap... jog... dot... word... chk... slap... snore"

~~~
jotux
Most programmers have mnemonics for text motions and symbols, so as long as
they're mnemonics you're familiar with, I'm sure there's no problem.

------
dylangs1030
This is amazing!

If you could speak a bit softer with this, maybe throw in some
noise-cancelling headphones, I could totally see this being useful even in an
office
situation.

I could see a potential pseudo-language developing out of this to abstract a
lot of the individual characters, functions and common invocations used while
coding.

------
unclesaamm
Okay, here's the million dollar question that isn't on the FAQ and no one in
the audience asked.

How the hell did he code it without using his hands? With help?

To his amanuensis: Slap. York. Tork. Jorb. Chomp.

Or maybe he felt his hands going, and he spent the last few months of his pre-
RSI existence coding this up.

~~~
tavisrudd
Once I got the basics working with the DragonFly and Natlink libraries I
mentioned, I bootstrapped the rest mostly by voice.
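Once a recognizer is handing you tokens, the dispatch layer itself can be very small. A hypothetical Python sketch of the idea behind such grammars (my own illustration, not the actual DragonFly/Natlink code):

```python
# Recognized tokens either trigger an editing action or are inserted as text.
ACTIONS = {
    "slap": lambda buf: buf + "\n",   # insert a newline
    "chomp": lambda buf: buf[:-1],    # delete the last character
}

def dispatch(buf, token):
    """Apply a command if the token names one, else append it as text."""
    action = ACTIONS.get(token)
    return action(buf) if action else buf + token

buf = ""
for token in ["hello", "slap", "world", "chomp"]:
    buf = dispatch(buf, token)
```

A real setup would route actions to keystrokes in the editor rather than to a string, but the command-versus-dictation split is the core of it.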

------
bshanks
Here's an open source Python script i wrote a few years ago that allows you to
type with your voice. It's based off of CMU Sphinx. The accuracy is almost
certainly not as good as Dragon, and it doesn't have a macro facility, so you
cannot code as fast as typing. I haven't improved it much over the past few
years because my hands got better and i don't need it anymore.

[https://sourceforge.net/projects/voicekey/](https://sourceforge.net/projects/voicekey/)
(tarball, includes language model)
[https://github.com/bshanks/voicekey](https://github.com/bshanks/voicekey)
(repo, does not include language model)

------
tavisrudd
Hi, I'm the guy in the video. You might also be interested in a presentation I
gave last Sept at Strangeloop with a much longer demo of coding in Clojure and
Elisp: [http://www.infoq.com/presentations/Programming-
Voice](http://www.infoq.com/presentations/Programming-Voice)

There's also this lightning talk
[http://www.youtube.com/watch?v=qXvbQQV1ydo](http://www.youtube.com/watch?v=qXvbQQV1ydo)
from PolyglotConf (warning: crappy audio from a shaky cell phone cam).

I promised to release my duct tape code later this year. I'm a bit behind
schedule with that but it should be out in a month or two.

~~~
brownbat
Strangeloop was a great demo.

What's the next big leap for speech to text programming? A language designed
specifically to be speakable, i.e., all keywords and no symbols?

I mean, I'd like speech recognition to get more natural error correction,
drawing more from the way we use inflection to give feedback about which
syllables to correct. (I love how Google on mobile now gives visual indication
of which syllables it heard clearly, and which it didn't. I just wish it would
understand when I shout "No, X not Y" to replace just that one misheard word.)

It'd be interesting to hear about where voice is heading from someone who uses
the technology far more.

------
unono
There's a lot of potential for multimodal gamified programming using tablets.
A combination of gesturing, shaking the tablet, facial expressions, hand
drawing, Myo sensing, as well as speech, in addition to machine learning in
the compiler
and for regular expression building. Within the next year a whole raft of apps
along these lines will be coming online in the app stores. Big opportunity for
Indie developers on the app store, you can easily charge $20+ if they're good
and disrupt the emacs/vi/eclipse monopoly/monotony.

------
D9u
This is a cool project, as I think a voice interface would be the ultimate in
computing, something like in "2001: A Space Odyssey" or "Star Trek."

I remember first playing with voice recognition and voice command on a PPC Mac
back in 1994.

That the technology hasn't progressed along the same lines as cell phones and
processors is a testament to how difficult voice recognition actually is when
dealing with the wide variation of dialects within any given language.

I would love to be able to use my voice as my main input to my computers and
other devices.

------
balakk
It's awesome that it works, but that looks totally tiring.

------
singularity2001
We need a new programming language optimized for voice:
[https://github.com/pannous/natural-english-
script](https://github.com/pannous/natural-english-script)

------
frakkingcylons
Interesting talk. Naturally it made me think about steps I should take to
prevent any kind of RSI. Should I be seriously concerned if I type for about
4-5 hours on average per day? How can I prevent it?

~~~
DennisP
Anecdotal: I was getting soreness in my finger joints, and about that time
went to a presentation talking about repetitive motion causing arthritis for a
lot of typists. It was pretty grisly. Padding in finger joints wears down, and
little chips of bone start breaking off, causing pain from bone chips and
realignment of fingers to fit the new bone faces. Padding restores with rest,
so it helps a lot to catch it early.

I bought a couple nice mechanical keyboards with Cherry switches (red and
brown). I type very lightly on them, seldom bottoming out the keys. Finger
troubles went away.

------
quantumpotato_
Any good machine intelligence integrated with IDE? I'd love some AI
autocompleting things.

~~~
SimHacker
"Uuuuhh..." should trigger the autocomplete menu.

------
ChrisAntaki
This would be amazing, especially if it one day supported Linux natively.

------
jerogarcia
This is great. Even though it seems complicated and hard to get used to, it's
a fantastic option when nothing else works.

------
krupan
Amazing, but the cubicle farm is noisy enough as it is.

~~~
stretchwithme
Welcome to the call center. How may I annoy you?

------
frozenport
I wonder if we should also be voice coding in a language drastically
different from, for example, C++. Maybe a language more syntactically
friendly for voice?

~~~
singularity2001
Fully agree. This is why we started working on a new language called "english
script": [https://github.com/pannous/natural-english-
script/tree/maste...](https://github.com/pannous/natural-english-
script/tree/master/lib/english-script)

~~~
ufo
Hmm, wouldn't it be better to link to some examples instead of the
implementation?

