"If you are unsure which computer/laptop/tablet to purchase
and are considering spending a lot of money then please email me
- I can offer personal advice on how to target the sweet spot
between cost and performance (and screen size)."
Truly an inspiring person.
Open source is the better model for every single industry. It's coming.
Furthermore, a company that would lose more profit than it would gain in hard-to-monetize goodwill by releasing something its competitors can use against it has about as much chance as an airplane made of lead.
Open source can help small products get initial traction, but it makes them easy to knock off and easy to steal (unless you make the build process hard and fragile)... It just won't scale, because of human nature.
Life experience and wisdom cannot be taught easily.
A lot of interesting information from the author over there.
META: Interesting to note how much cool stuff is turning up on reddit first before it appears here... I feel like it used to be the other way around.
People will stop posting to a community when they feel their posts aren't up to what they believe the community tends to accept as par for the course.
Then I came to HN and searched for optikey and I just found this and several other threads by others...
It's out and has 22 public forks already. Who knows how many git clones...
The trackers he listed have an accuracy of about 0.3-0.5 degrees, which at a distance of 50cm from the screen still maps to a fairly large area of about 50x50 pixels (depending on screen resolution and PPI). That's far too big to guess where you want your cursor to be placed.
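For reference, the arithmetic behind that estimate (a rough sketch; the half-degree error cone, 50cm distance, and pixel density are assumed round numbers, not any particular tracker's spec):

    import math

    accuracy_deg = 0.5     # assumed tracker accuracy, degrees of visual angle
    distance_mm = 500      # eye-to-screen distance: 50 cm
    px_per_mm = 96 / 25.4  # assumed ~96 PPI monitor

    # Treat +/- 0.5 degrees as a 1-degree cone of uncertainty on screen.
    error_mm = 2 * distance_mm * math.tan(math.radians(accuracy_deg))
    print(f"~{error_mm:.1f} mm, ~{error_mm * px_per_mm:.0f} px")  # ~8.7 mm, ~33 px

That comes out around 33 pixels at 96 PPI, and closer to 50 on denser screens, so the 50x50 figure is the right order of magnitude.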
Moreover, the fact that you'll see the cursor moving creates a feedback loop in which you follow the delayed cursor around with your gaze.
And on the other end of the speed spectrum, it seems unlikely that this will ever be quite as quick as operating even a QWERTY keyboard with fully functional hands.
For the average person, we're pretty close to being able to use voice instead of typing.
Throw in eye tracking and precise gestures (http://www.youtube.com/watch?v=0QNiZfSsPc0) and the keyboard isn't necessary.
I'm waiting for the windows version to be released, so I don't have any first hand experience. The videos look promising though.
I use Dragon and Python extensions to supplement typing for now.
A b c d E f g h I j k l m n O p q r s t U v w x y z.
The issue is that while I can certainly find the letters when they are alphabetical, clusters of commonly used letters end up further apart. Having "th" and "ng" visually close to each other helps a lot.
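To make that concrete, here's a toy comparison of key-to-key travel distance for those two bigrams on an alphabetical grid vs. QWERTY (a rough sketch; the 10-column alphabetical grid is my assumption, not OptiKey's actual layout):

    import math

    # Alphabetical letters on an assumed 10-column grid.
    ALPHA = {c: (i % 10, i // 10) for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
    # Standard QWERTY rows.
    QWERTY = {c: (x, y)
              for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
              for x, c in enumerate(row)}

    def dist(layout, bigram):
        (x1, y1), (x2, y2) = (layout[c] for c in bigram)
        return math.hypot(x2 - x1, y2 - y1)

    for bg in ("th", "ng"):
        print(bg, round(dist(ALPHA, bg), 2), round(dist(QWERTY, bg), 2))
    # th: 2.24 keys apart alphabetically vs 1.41 on QWERTY; ng: 3.16 vs 1.41

Both bigrams roughly double in distance on the alphabetical grid, which is exactly the effect described above.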
Unfortunately, none of the VR headsets shipping in the next six months have eye trackers (much less iris scanners). The only headset I found was Fove, which was successfully kickstarted and ships in 2016. Fove did a promotion with a Japanese school for wheelchair-bound children, in which a child played a piano using the headset's eye tracking.
There are some iris scanner-equipped phones shipping in the near future, but none with eye trackers as far as I could tell.
I can share the post link if people are interested.
This will explode
As it progresses, input devices will need to evolve quite a bit, since it won't always be convenient to have a full-size keyboard, and part of the value of having a screen on your eyes vs. in your hand is that it frees up your hands for other things. Being able to use slight muscle gestures (à la "Rainbow's End" by Vernor Vinge) in combination with eye tracking, audio controls, etc. seems like a logical next step until we can decode brainwaves well enough to use them as input.
I imagine something that shows a swipe-style keyboard overlaid on screen with your current finger position is also a likely solution.
That said, my big beef with this is that it forces a traditional keyboard layout on screen, which takes up a massive amount of screen real estate. I'd much prefer thinking outside the traditional keyboard layout and making better use of the space.
Assume you have a bunch of HUDs/AACs/Glasses/Whatever that are using this tracking tech. Assume that they are ONLY looking at the real world, not some online data/webpage etc...
There is a camera that is either forward-looking or 360-degree.
Use the tech to eye-track exactly what MANY people are looking at, in order to train an AI on which items in the real world are important.
i.e. "the ground exists and we know it's there, thus its information priority is low," whereas "these signs that are being looked at have a higher contextual priority, and require understanding."
By doing this at a fair scale... you could use the information about what's visually important to human navigation to train an AI to navigate. This augments the other ML/AI work that has already been going on...
I do not know if this is basically how self-driving cars were developed -- but now that you have a seed of this tracking tech in open source, it could blossom.
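If anyone wants to play with the idea, the aggregation step might look something like this (a hypothetical sketch, not anything that exists; the fixation format and Gaussian spread are my assumptions):

    import numpy as np

    def gaze_saliency_map(fixations, height, width, sigma=25.0):
        # fixations: (x, y) pixel coordinates where many wearers looked.
        # Each fixation is spread with a Gaussian to model gaze uncertainty;
        # the normalized result can serve as a per-pixel attention weight
        # when training a vision model on the matching camera frame.
        heat = np.zeros((height, width))
        ys, xs = np.mgrid[0:height, 0:width]
        for x, y in fixations:
            heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        return heat / heat.max() if heat.max() > 0 else heat

    # Many people fixate on a sign near (120, 40); almost nobody on the ground.
    salience = gaze_saliency_map([(120, 40), (118, 42), (122, 39)], 240, 320)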
Basically this probably won't happen until Google (or Facebook, or Apple, etc) decides that knowing what people look at is worth the effort/cost.
The accessibility space needs as much open-source development as possible - most of the commercial tech, if you can find it, is locked down and outdated.
From looking at the tracking software, it seems to use the contrast between the dark pupil and the surrounding eye to track where the eye is looking.
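For what it's worth, the usual contrast-based approach looks roughly like this (my guess at the general technique using OpenCV, not OptiKey's actual pipeline; the threshold value is an assumption):

    import cv2

    def find_pupil(eye_gray):
        # Threshold the dark pupil against the brighter iris/sclera,
        # then take the centroid of the largest dark blob as the pupil center.
        blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
        _, dark = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) pupil center

Map that pupil center (plus a corneal glint, in most commercial trackers) through a per-user calibration and you get a gaze point on screen.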
Does this exist yet?
(You can see the performance of the eye-tracking warping + mouse at 2:41 of the video: http://youtu.be/7BhqRsIlROA?t=2m41s).
(Not a criticism of the excellent work here. It's a fundamental bandwidth problem.)
Concretely: in the time the demo took to write "Meet OptiKey. Full computer control and speech using only your eyes.", I wrote in an Android text area: "I am just typing words as quickly as I can in the android interface. I'm even having to think a bit about it more that I've written so much. uh, hello world? and other stuff. I'm still going and going and going." That includes the missing "now" between "more" and "that"; you can't see it, but I had to correct 3 words, too. That's about three times faster on the phone I have in my pocket. And let me just say... that is impressive performance for a free eye-tracking suite, in my opinion, to still be that fast with just eyes.
>Microsoft patents eye-tracking keyboard software
The idea’s just like swipe-based keyboard software, but instead of tracking the motion of your fingertip, the system tracks eye movement.
As your glance moves from key to key, darting around and changing direction, the device's camera records your gaze, and with a little algorithm magic, Microsoft tries to work out just which keys you were looking at – and ultimately, what word you were attempting to construct.
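A toy sketch of what that decoding step might look like (my reading of the description, not Microsoft's actual algorithm; the grid coordinates and word list are made up):

    import math

    # Snap each gaze sample to its nearest key, collapse repeats, then keep
    # dictionary words whose letters appear in order along that key sequence.
    KEY_POS = {c: (x, y)
               for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
               for x, c in enumerate(row)}

    def nearest_key(x, y):
        return min(KEY_POS, key=lambda c: math.hypot(KEY_POS[c][0] - x, KEY_POS[c][1] - y))

    def decode(gaze_path, words):
        seq = []
        for x, y in gaze_path:
            k = nearest_key(x, y)
            if not seq or seq[-1] != k:
                seq.append(k)
        def is_subsequence(word):
            it = iter(seq)
            return all(c in it for c in word)  # 'in' advances the iterator
        return [w for w in words if is_subsequence(w)]

    # A glance sweeping t -> h -> e leaves "the" as the only candidate.
    print(decode([(4.1, 0.2), (5.0, 0.9), (2.2, 0.1)], ["the", "cat", "toe"]))

Real decoders score path shape and language-model probability rather than exact subsequences, but the input signal is the same: the path your gaze traces over the keys.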
The author seems to be responsive, but the Reddit thread got /r/bestof'd so he's probably RIP Inbox'd. And he's a new dad. Might be worth a followup in a few days if he doesn't get back to you beforehand.
I hope it meets your needs though!
Edit: He's also in this thread!