Ask HN: Anyone using a “no hands” setup for programming?
78 points by lazyjones on April 8, 2015 | 46 comments
It sometimes happens that programmers lose the ability to use their hands due to accidents/illness. Has anyone here used / seen an efficient setup for programming (i.e. entering code in various languages) that works with voice recognition, perhaps combined with eye movement? Please share info (hardware/software/effective "typing" speed). I'm sure it can be implemented better than using standard voice recognition software and text editors.



[This talk by Tavis Rudd](https://www.youtube.com/watch?v=8SkdfdXWYaI) about a system he used when his hands were afflicted with RSI is pretty interesting.

Not entirely a no-hands approach, but I can't remember the name of the developer who made his first app in the hospital by typing with one or two fingers. He wrote a blog post about his process that was really inspiring.


Ever since I first saw that talk, I've been checking his github page to see if he's pushed his code yet. Not that I'm judging; if I had a dime for every project I totally intended to push to github "once I clean up the duct tape", and then didn't, I'd probably have, like, a dollar.


Other people have pushed source code to GitHub, so you can stop waiting.

http://thespanishsite.com/public_html/org/ergo/programming_b...


After hearing this talk, I used this repo as an inspiration to get started on my own setup: https://github.com/dictation-toolbox/aenea

Works well for me, but with some friction from reliance on a Windows VM.
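
For a taste of what the grammars look like, here's a minimal Dragonfly rule of the sort aenea builds on (a sketch only; the spoken phrases here are invented, and a real setup defines far more):

    # Minimal Dragonfly grammar sketch (spoken phrases invented here).
    from dragonfly import Grammar, MappingRule, Key, Text

    class CodingRule(MappingRule):
        mapping = {
            "deaf": Text("def "),        # saying "deaf" types "def "
            "new line": Key("enter"),
            "dent": Key("tab"),
            "close block": Key("rparen, colon, enter"),
        }

    grammar = Grammar("coding")
    grammar.add_rule(CodingRule())
    grammar.load()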


Would be nice if Dragon could enhance their Mac product to be on par with their PC product. Even with Apple's recent resurgence, the Mac doesn't always get first-class treatment.


I think the quality difference might be due to all the government jobs where the people who use Dragon are on Windows.


Try Dasher: http://www.inference.phy.cam.ac.uk/dasher/

It was developed for people who have only a "one-dimensional" input channel, i.e. who can move just one eye or one muscle.

Given a custom dictionary you can write quite fast, though I don't know how practical this would be for programming.


It would probably be quite fast for everything other than writing out strings or naming variables/methods etc.

IntelliSense in most modern IDEs already does most of the custom dictionary work.


It also has the advantage of using the same interface to enter text character-by-character and to accept predictions; predicted next text becomes easier to enter with a wider area to hit, while unlikely next text can still be entered.
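
The core mechanic is easy to sketch: each candidate next character gets screen area proportional to its predicted probability. A toy illustration (made-up probabilities, not Dasher's actual language model):

    # Toy sketch of Dasher-style target sizing: likelier next characters
    # get proportionally bigger slices of the input column.
    def allocate_slices(probs, height=1.0):
        """probs: char -> probability; returns char -> (top, bottom)."""
        slices, top = {}, 0.0
        for ch, p in sorted(probs.items(), key=lambda kv: -kv[1]):
            slices[ch] = (top, top + p * height)
            top += p * height
        return slices

    print(allocate_slices({"e": 0.5, "t": 0.3, "z": 0.2}))
    # 'e' gets the biggest slice of the column, 'z' the smallest.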


Thanks for posting this one; I saw it a while ago, and some more polished version as well. Seems like a good idea to play around with for touch screens.


What was the more polished version you saw? I've been really interested in Dasher for quite some time, but the last time I looked it appeared to have stagnated in terms of progress.


Here is one that instantly comes to mind: http://pyvideo.org/video/1735/using-python-to-code-by-voice


Any idea where the Dragonfly code is for those commands?


I think he used the Python NLTK library. http://www.nltk.org/

I don't think Dragon NaturallySpeaking is the best NLP tool out there; Stanford's CoreNLP appears to be much more accurate, albeit a little slower and more resource-hungry, due to Java I think.

EDIT OT: I've found this blog, which might be helpful for people trying to keep up with assistive technology. http://www.assistivetechnologyblog.com/

I wish I knew what advancements they[1] made since 2012: http://www.sciencedaily.com/releases/2012/06/120628164426.ht...

Journal Reference: [1] Bettina Sorger, Joel Reithler, Brigitte Dahmen, Rainer Goebel. A Real-Time fMRI-Based Spelling Device Immediately Enabling Robust Motor-Independent Communication. Current Biology, 2012; DOI: 10.1016/j.cub.2012.05.022


A relative of mine has had no hands since birth; he types on the keyboard with his feet. I think most people could learn it with some training.


A kid in my CS program at uni has one hand, with only three fingers on it; he's a great programmer and only slightly slower than most people. Most of my time spent programming is actually reading and thinking, not typing, so this comes as little surprise.


Interesting, because sometimes I neglect all of my elementary-school touch-typing lessons and end up typing with only the index and middle fingers of my right hand (but my full left hand). I'm still one of the faster typists among my friends.


I tend to just use my index finger on each hand unless I'm typing letters right next to each other. For example, in "other", I type the "o" with the middle finger of my right hand, the "t" with the index finger of my left hand, the "h" with the index finger of my right hand, the "e" with the middle finger of my left hand, and the "r" with a small movement of my left hand index finger. I hit the shift key with my ring finger or pinkie and the space key sometimes with my index finger and sometimes with my thumb depending on where my hands are.

I've been playing guitar for longer than I've been seriously typing and guitar places a heavy emphasis on efficiency of the movement of your hands and fingers. There are multiple ways to form a chord and multiple places on the fretboard to play a note, so you find the one way that is easiest to switch to quickly from your previous note/chord. I wonder if that influence has changed the way I type as I tend to slide my hands left and right along the keyboard depending on where I will be hitting the keys next.


[flagged]


I'm not trying to start a pissing contest, just trying to provide more examples of programmers with disabilities who are still productive.


A friend of mine in college typed primarily with her feet (at about 30 words per minute, if I recall correctly).


This New Zealand designer has a great setup that works for her: http://www.looknohands.me/


She's a web designer, though; for programming you'll be using a very different interface. Although it'd be cool if someone built an IDE with more gesture support.


Slightly off-topic: Does anyone use/know of a system for "no eyes" (blind) programming? Does anyone know any blind developers?


This SO thread has some interesting answers: http://stackoverflow.com/questions/118984/how-can-you-progra...


Nolan Darilek is a blind developer who comments on HN occasionally about the topic. https://news.ycombinator.com/user?id=ndarilek


Sam Hartman is an extremely competent blind developer.

http://raphaelhertzog.com/2011/06/24/people-behin-debian-sam...


The Debian project has multiple blind developers. One serves on the Debian Technical Committee.

Useful packages include brltty (for Braille terminals), emacspeak, and orca.


I think blind programmers mostly just use ASCII braille, with standard accessibility equipment like braillers, braille notetakers, and screenreaders.


I'd hire a secretary and dictate whatever I want done (not limited to typing: window switching, etc.).

A sophisticated multi-monitor setup would help.


Not practical for most people, of course.


Might be interesting to set up some sort of system for pairing young, healthy, aspiring programmers with older, more experienced programmers who have lost the use of their hands (or eyes, etc.), such that the younger programmer could learn from the more experienced but disabled programmer while helping them get stuff done.


Hal Finney did, when he was paralyzed by ALS. He used a makeshift Arduino setup and a commercial system to control his chair and keyboard with his eyes somehow. https://bitcointalk.org/index.php?topic=155054.msg1643833#ms...


Here are some resources that I've gathered on programming by voice.

http://thespanishsite.com/public_html/org/ergo/programming_b...

I haven't gotten around to setting up the Windows VM on my Mac and trying it.


There are some solutions if you can use one hand. Matias makes a (really super expensive) one-hand keyboard, and there's also software that allows you to mirror the keyboard in halves with a hotkey. I'm sure there are other options in that realm.
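
The mirroring trick is simple to sketch. The mapping below is the usual half-QWERTY reflection around the keyboard's center; treat it as illustrative rather than any particular product's layout:

    # Sketch of half-keyboard mirroring: while the mirror hotkey (often
    # the spacebar) is held, each left-hand key emits its right-hand
    # mirror image, so one hand can reach the full layout.
    MIRROR = {
        "q": "p", "w": "o", "e": "i", "r": "u", "t": "y",
        "a": ";", "s": "l", "d": "k", "f": "j", "g": "h",
        "z": "/", "x": ".", "c": ",", "v": "m", "b": "n",
    }

    def translate(key, mirror_held):
        # Pass the key through unchanged unless the hotkey is down.
        return MIRROR.get(key, key) if mirror_held else key

    assert translate("f", mirror_held=True) == "j"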




Unfortunately it's only for Windows.


Not an answer, but a thought:

Code is really just a serialized AST. That tree structure should be as modifiable with gestures/voice as any other tree structure.
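
Python's stdlib makes the point concrete (quick sketch):

    # Code really is a tree: Python's own ast module exposes it.
    import ast

    tree = ast.parse("if x:\n    f(a)")
    print(ast.dump(tree))  # If node containing a Call node, etc.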


What are some examples of "any other tree structure" that are readily modifiable by gestures/voice, then? That simply didn't sound very familiar to me, so it made me curious what you're thinking of.


Windows Explorer, Finder.

The DOM in DevTools to a lesser extent.


Seems like a good idea. Only certain things are type-able at any one time.

IntelliJ has a feature called "smart complete": it just types whatever is required at that moment. It must work on this sort of principle.

Seems that you could put voice input into some sort of low-gear mode of just one letter at a time. And that could work well inside IntelliJ with its auto-complete / code gen / refactoring sort of commands.
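
A toy illustration of the "low-gear" idea (function and completion names invented): restrict the recognizer's active vocabulary to whatever the editor says is valid next, IDE-completion style, so each utterance only has to disambiguate among a handful of options.

    # Toy sketch: keep only the completions matching what's been spoken.
    def active_vocabulary(completions, spoken_prefix):
        return [c for c in completions if c.startswith(spoken_prefix)]

    print(active_vocabulary(["getUser", "getUrl", "setUser"], "get"))
    # ['getUser', 'getUrl']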


Good thought. For example, if I were telling another human what to type, I would start saying things like "the child of this if statement" instead of "open brace tab tab", or "in the parent function change parameter a to b" instead of "up up up up up up..."
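
And "change parameter a to b" maps naturally onto a tree rewrite rather than cursor movement. A hedged Python sketch (ast.unparse needs Python 3.9+; the names come from the example above):

    # "In the function change parameter a to b" as an AST edit.
    import ast

    class RenameParam(ast.NodeTransformer):
        def visit_arg(self, node):      # the parameter itself
            if node.arg == "a":
                node.arg = "b"
            return node

        def visit_Name(self, node):     # uses of the parameter
            if node.id == "a":
                node.id = "b"
            return node

    src = "def f(a):\n    return a + 1"
    tree = RenameParam().visit(ast.parse(src))
    print(ast.unparse(tree))  # prints the function with parameter b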


You're not really exposed to the tree structure though, unless you're writing in Lisp.


Sure, and you'd have to expose the developer to that tree structure for this to happen.

You could take steps to make it look closer to the original serial data (e.g., the DOM doesn't have opening and closing tags, but DevTools shows it that way because developers expect it). Or you could come up with a more efficient but less familiar form.


There may be "good enough" parallels in other languages though, for example structured-haskell-mode for emacs.


It seems like the demand for no-hands coding is quite high, but there's only one dude doing it, and only on Windows (or at least needing a VM)?



