I want to solve speech recognition for programmers, so that we can create and edit source code efficiently without using a keyboard or a mouse. This will be great for RSI sufferers (unless voice also fails because of overuse).
Accurate speech recognition is a hard problem because human language is highly ambiguous. As far as I know, there's currently no working solution for programming by voice in any programming language. DragonDictate or NaturallySpeaking with custom macros may come close, but both are difficult to set up. I found some related material, but most of it seems to be either academic research or unmaintained software.
Initially, I'm going to focus on speech recognition for Python. I know there's probably a bigger market for C++ or Java programmers, but Python code is more similar to human speech and is rather concise, using fewer lines of code than Java for the same task. Python has a large standard library which we can pre-parse and digest to reduce ambiguity during recognition.
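As a rough illustration, here is a minimal sketch of what "digesting" the standard library could look like: collect the public names from a handful of stdlib modules into a vocabulary the recognizer could prefer over free-form words. The module list and the stdlib_vocabulary helper are placeholders I made up for this sketch, not part of any existing tool.

    # Toy digest of the standard library: gather public attribute names
    # from a few modules so the recognizer can favor names that exist.
    # (A full digest would walk pkgutil.iter_modules() over everything.)
    import importlib

    def stdlib_vocabulary(modules=("time", "math", "os.path")):
        """Return the sorted public names exposed by the given modules."""
        vocab = set()
        for name in modules:
            mod = importlib.import_module(name)
            vocab.update(attr for attr in dir(mod) if not attr.startswith("_"))
        return sorted(vocab)

    print(stdlib_vocabulary()[:8])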
Python's interactive interpreter with speech recognition and voice output would make an awesome demo. You could say "three times five" and the computer would respond with "fifteen". Or you could say "from time import localtime (pause) call localtime without parameters slice the first three elements" and the system would say "two thousand nine eight seventeen".
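Here's a toy sketch of the arithmetic half of that demo, assuming the utterance has already been recognized as text. The word tables and the phrase_to_source helper are invented for illustration; the import-and-slice phrase would need a real grammar rather than this word-for-word substitution.

    # Translate a spoken arithmetic phrase into Python source, then
    # evaluate it in the interpreter. Recognition is assumed done.
    WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
             "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}
    OPS = {"plus": "+", "minus": "-", "times": "*", "over": "/"}

    def phrase_to_source(phrase):
        """'three times five' -> '3 * 5'"""
        return " ".join(WORDS.get(w, OPS.get(w, w)) for w in phrase.split())

    print(eval(phrase_to_source("three times five")))   # 15

    # The second utterance would translate to:
    #   from time import localtime
    #   localtime()[:3]        # e.g. (2009, 8, 17)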
The speech recognition software itself could be free and open source, maybe based on CMU Sphinx-4 (which is written in Java). The business model could revolve around a web service that lets people upload their utterances (snippets of recorded speech) during or after a programming session. We could use these files to improve the recognition engine and train the speaker-independent acoustic model, so recognition would get better over time. The catch is that speaker-independent models only work well for fairly "standard" pronunciation without a strong accent.
For a small fee (e.g. $49), users could download a personal acoustic model for improved accuracy, generated from the voice snippets they have uploaded. Training such a model takes minutes or even hours of CPU time, but an email could be sent to the user when the model is ready for download. As the software improves and they record more utterances, they can pay another fee and generate an even better model.
If I can get speech recognition for Python code to work, maybe SQL or bash could work too (both support auto-completion, which can be useful for reducing ambiguity).
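Python's own completion machinery already enumerates what's legal at the cursor, and a recognizer could shrink its active vocabulary to just those candidates. A small sketch using the stdlib's rlcompleter module (the candidates helper is mine, not part of the library):

    # List the identifiers that are valid continuations of a prefix;
    # a recognizer could restrict itself to these instead of trying
    # to match against the whole dictionary.
    import rlcompleter

    def candidates(prefix):
        """Collect every completion rlcompleter offers for a prefix."""
        completer = rlcompleter.Completer()
        matches, state = [], 0
        while True:
            match = completer.complete(prefix, state)
            if match is None:
                return matches
            matches.append(match)
            state += 1

    print(candidates("loc"))   # e.g. ['locals(']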
Please let me know what you think. I'm planning to implement a simple demo in Seattle during the next few weeks. Want to brainstorm with me over beer or coffee? We could be co-founders if we work well together.
It's similar to math - when I was in university, I broke several bones in my writing hand about two months before a calculus exam. I knew I wouldn't be able to write for my finals, so I spoke to the disability office to arrange for someone to write for me, and I hired a guy for a few weeks to get comfortable doing math verbally, the way I would have to in the final.
After about a week I decided it was pretty hopeless - my whole workflow was gone, and I couldn't sketch out ideas because there was such a mental load in "writing down" half an idea. In the end I just learned to write semi-legibly with my off hand, and had the guy whose job it was to transcribe my math recopy what I scribbled with my left hand into something readable.
Moral of the story, at least for how I work, is that you should have very good support for moving text around.