OptiKey – Full computer control and speech with your eyes (optikey.org)
437 points by aw3c2 on Sept 10, 2015 | 86 comments



From the Get Started page:

  "If you are unsure which computer/laptop/tablet to purchase
  and are considering spending a lot of money then please email me 
  - I can offer personal advice on how to target the sweet spot 
  between cost and performance (and screen size)."
So not only is he giving away this amazing software, but he is also offering free, personal advice on what you need to buy within your budget!

Truly an inspiring person.


"The way to take over an industry is not to fix the current model, but to completely destroy it and replace it with a model you know is better"

Open source is the better model for every single industry. It's coming.


It's not a panacea. Open source software is nice when it doesn't put contributors in the poorhouse while corporations depend on the code at scale and don't support what they take. Open source software is great for SDKs and other supporting materials, but GitHub and Google aren't going to make their core infrastructure available to all... it defies common sense.

Furthermore, a company that would lose more profit than it would gain in hard-to-monetize goodwill by releasing something its competitors can use against it has about as much chance as an airplane made of lead.

Open source can help small products gain initial traction, but it makes them easy to knock off and easy to steal (unless you make the build process hard and fragile)... It just won't scale, because of human nature.


Define common sense please. Also please define human nature... Who would steal this software? What for? To compete against something free?


Please gain more points by not demanding impossibly-unreasonable answers.

Life experience and wisdom cannot be taught easily.


Do you think open source fine art is better than art from a single artist's vision?


Those aren't necessarily mutually exclusive options. Something can be "open source" and still have only one contributor.


Art has been quite open source in its workings for centuries. Look at how artists joined masters to learn the craft and how they "forked" their masters' techniques to create their own. Personal vision is not precluded by open source; on the contrary, it is encouraged.


R&B, EDM, pop and so on wouldn't have much left without remixes and sampling.


But in that case money changes hands. That's pretty different from open source.


Nothing about open source precludes money from changing hands. Case in point: Red Hat.


Yes. Open source is about way more than the finished product; it's about the shared community, techniques, processes, and tools.


What's amazing is that I could see a cottage industry growing around this, with respect to Google Glass-like overlays that allow for visual navigation of the overlay... I think we have our first entry into the cyberpunk visual-navigation future, seeded by this guy's great work.


https://www.reddit.com/r/programming/comments/3ke7ug/eye_tra...

A lot of interesting information from the author over there.

META: Interesting to note how much cool stuff is turning up on reddit first before it appears here... I feel like it used to be the other way around.


Re: META

People will stop posting to a community when they feel their posts aren't up to what they believe the community accepts as par for the course.


That's my impression as well.


The way posts work on HN is a bit puzzling and not immediately obvious - I'll post something and then refresh the front page and won't even see it. The Reddit model seems simpler - I post something and it appears.


I think you're correct. A Show HN that goes nowhere fast will disappear and be seen by virtually no one. An equivalent in r/startups or similar will still be seen by a few people even if it doesn't attract comments or upvotes.


The popularity of reddit over HN (then and now) means that more interesting things have always shown up there first.


Interestingly, as soon as I saw this post on reddit I PMd Julius to please post to HN.

Then I came to HN and searched for optikey and I just found this and several other threads by others...


Also... That reinforces "front page of the Internet"


Could this be covered by existing patents? Tools like these for the disabled are a big business, and patent holders have shut down competition in the past: http://www.disabilityscoop.com/2012/06/14/dispute-ipad-app-p...


This is a good point - how does intellectual property law protect against open source alternatives? I think this is a really good thing to open source, but the law can be tricky to navigate. Almost like you need a degree in it...


Well, good luck shoving this code back in the bottle (read: git archive).

It's out, with 22 public forks already. Who knows how many git clones...


I wonder if you can combine this with Vim so that you could use it for lightning fast cursor movement and selections and still use the keyboard for anything else.


The problem with gaze tracking is that achieving the kind of accuracy you're thinking about isn't possible right now.

The trackers he listed have an accuracy of about 0.3-0.5 degrees, which at a distance of 50 cm from the screen still maps to a fairly large area of about 50×50 pixels (depending on screen resolution and PPI). That's far too big to guess where you want your cursor placed.
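A quick sanity check of those numbers - a sketch assuming a typical ~96 PPI desktop monitor (the error radius scales with PPI, so a denser laptop screen gives a bigger pixel area):

```python
import math

def gaze_error_px(accuracy_deg, distance_cm, ppi=96.0):
    """Approximate on-screen error radius for a given angular tracker accuracy."""
    error_cm = distance_cm * math.tan(math.radians(accuracy_deg))
    return error_cm / 2.54 * ppi  # cm -> inches -> pixels

for acc in (0.3, 0.5):
    print(f"{acc} deg at 50 cm -> ~{gaze_error_px(acc, 50):.0f} px radius")
```

At 96 PPI a 0.5-degree error is roughly a 16-pixel radius (a ~33-pixel-wide circle); on a higher-PPI screen that easily approaches the 50×50-pixel area mentioned above.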

Moreover, the fact that you'll see the cursor moving creates a loop in which you will follow the delayed cursor around with your gaze.


I wonder how well the hardware would work with an input system like Dasher. Dasher was an experiment in statistical inference that also tries to be an accessibility tool: http://www.inference.phy.cam.ac.uk/dasher/. I found it really fun to play around with and could get decent speeds, but not great ones. It's true that eye-tracking hardware is prohibitively expensive; when we ordered an eye tracker for the lab, it ended up costing us £10K.


Yeah, I remember Dasher. Quite effective and pretty fun - agreed, a mashup of eye tracking and Dasher could be worthwhile.


In the demo, he was quick and he didn't even seem to use much autocompletion. However, I wonder if a different keyboard layout would help. Dvorak and Colemak, for example, place the most frequently used letters on the home row, so your eyes would spend more time there. You can evaluate different layouts with a tool like this:

http://patorjk.com/keyboard-layout-analyzer/
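A tiny sketch of the kind of comparison such an analyzer makes - home-row hit rate for a sample string (the row definitions are the standard ones for each layout; the real tool also scores finger travel, which doesn't map directly to eye movement):

```python
# How often each layout keeps the "gaze" on the home row for a given text.
HOME_ROWS = {
    "qwerty":  set("asdfghjkl"),
    "dvorak":  set("aoeuidhtns"),
    "colemak": set("arstdhneio"),
}

def home_row_share(text, layout):
    letters = [c for c in text.lower() if c.isalpha()]
    hits = sum(1 for c in letters if c in HOME_ROWS[layout])
    return hits / len(letters)

sample = "the quick brown fox jumps over the lazy dog"
for layout in HOME_ROWS:
    print(layout, f"{home_row_share(sample, layout):.0%}")
```

Even on a short pangram, Dvorak and Colemak keep well over twice as many letters on the home row as QWERTY.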


I wonder if it really matters with your eyes vs. your hands and fingers. Flicking your gaze around the keyboard seems much easier than moving your fingers around - you'll never have two fingers hitting each other.

And on the other end of the speed spectrum, it seems like it's unlikely that this will ever quite be as quick as operating even a QWERTY keyboard with fully functional hands.


This tool is for people who can't use their hands, of course.

For the average person, we're pretty close to being able to use voice instead of typing.

https://www.extrahop.com/blog/2014/programming-by-voice-stay...

http://ergoemacs.org/emacs/using_voice_to_code.html

Throw in eye tracking and precise gestures (http://www.youtube.com/watch?v=0QNiZfSsPc0) and the keyboard isn't necessary.


Instead of coming home from the office with Carpal Tunnel we'll be coming home with hoarse voices. Color me skeptical.


Some people will gladly take a hoarse voice. Ever see someone have to resort to using their nose?

http://www.looknohands.me


There's also this http://voicecode.io/

I'm waiting for the windows version to be released, so I don't have any first hand experience. The videos look promising though.

I use Dragon and python extensions to supplement typing for now.


Thanks for the Project Soli video!

Mind blown!


If we're talking about what's optimized for the eyes and their movement, you wouldn't even want a keyboard layout; you'd want something more like a dartboard (or oval-shaped dartboard) where the spaces are letters and the most common letters sit towards the center. QWERTY is fine for someone familiar with it, like an experienced typist who's already mapped those keys mentally, but I expect someone with a lifelong disability would have no true familiarity with QWERTY (since they can't type) and would have to learn any layout from scratch anyway - so you might as well start from scratch.


Someone did start from scratch, almost: https://en.wikipedia.org/wiki/FITALY


There were keyboard layouts optimized for PalmOS pen input: FITALY and ATOMIK. The most used letters are clustered in the middle so movement is minimized, theoretically speeding up eye typing as well. It seems to be a faster entry method than using a switch like Stephen Hawking does, or a single-switch Morse-type system at perhaps 20 wpm, but indeed much slower than having ten fingers (switches) and perhaps two feet available at the same time.


For visual search I would disregard keyboards entirely and go alphabetical:

A b c d E f g h I j k l m n O p q r s t U v w x y z.


I tried to eye out "Batmobile" and had a surprisingly hard time compared to Colemak or QWERTY layouts. (I tested by making fake layouts in Photoshop.)

The issue being that, while I can certainly find the letters when they're alphabetical, clusters of commonly used letters end up further apart. Having "th" and "ng" visually close to each other helps a lot.


Arranged in a clock face perhaps?


Something like the radial keyboard used in Steam's big picture mode could be really useful here. It would reduce potential error surface by quite a bit.


Good idea, it would make a more memorable position for every letter.


Excellent resource. Thanks for that. Julius


This is amazing software and I'm glad something like this is open source. Accessibility hardware often costs a fortune (for understandable reasons) and software isn't typically cheap either. This seems to be very high quality and it has the potential to really improve people's lives. I'm glad things like this exist.


I have been using voicecode.io (with dragon naturally speaking) and IR head tracking so that I don't have to use my hands for anything now with pretty good success. The head tracking mouse has a much smaller learning curve, but programming by voice has been rewarding as well.


Has anyone gotten the other solutions to work? I've been collecting my notes but I haven't gotten around to it.

http://thespanishsite.com/public_html/org/ergo/programming_b...


voicecode.io is far ahead of anything else in terms of out of the box programming by voice. Everything else requires a lot of up front effort and essentially designing your own language before you can be productive.


This could be a really nice keyboard interface for VR too.


It's curious that OptiKey was posted today, because just yesterday I wrote a post about the promise of iris-scanning technology to deliver a seamless payments experience in VR headsets.

Unfortunately, none of the VR headsets shipping in the next six months have iris trackers (much less scanners). The only headset I found was Fove, which was successfully kickstarted and ships in 2016. Fove did a promotion with a Japanese school for wheelchair-bound children, in which a child played a piano using the iris tracking function of the headset.

There are some iris scanner-equipped phones shipping in the near future, but none with iris trackers as far as I could tell.

I can share the post link if people are interested.


https://theeyetribe.com/products/ looks kind of interesting regarding eye tracking, I'm curious how well it performs


Heh. I came to the same conclusion as well re iris tracking, so clearly it's an inspiring work by Julius...

This will explode


Was thinking the same thing. The inevitable trend with VR is that it gets miniaturized into slimline eyewear, contact lenses, or embedded directly in wetware.

As it progresses, input devices will need to evolve quite a bit, since it won't always be convenient to have a full-size keyboard, and part of the value of having a screen on your eyes vs. in your hand is that it frees your hands for other things. Being able to use slight muscle gestures (à la "Rainbows End" by Vernor Vinge) in combination with eye tracking, audio controls, etc. seems like a logical step until we can decode brainwaves well enough to use them as input.

I imagine something that shows a swipe-style keyboard overlaid on screen with your current finger position is also a likely solution.

That said, my big beef with this is that it forces a traditional keyboard layout onto the screen, taking up a massive amount of screen real estate. I'd much prefer thinking outside the traditional keyboard layout and making better use of the space.


Here's an idea - tell me if it's stupid:

Assume you have a bunch of HUDs/AACs/glasses/whatever that use this tracking tech. Assume they are ONLY looking at the real world, not some online data/webpage, etc.

There is a camera that is either forward-looking or 360-degree.

Use the tech to track exactly what MANY people are looking at, to train an AI on which items in the real world are important.

i.e. "the ground exists and we know it's there, thus its information priority is low," whereas "these signs being looked at have a higher contextual priority and require understanding."

By doing this at a fair scale, you could train an AI on what's visually important to human navigation and use that to teach it to navigate. This augments the other ML/AI work already going on...

I don't know if this is basically how self-driving cars were developed - but now that a seed of this tracking tech is open source, it could blossom.


I don't think this is stupid, but it probably won't happen any time soon. A project correlating what humans look at with objects in the real world is probably possible, but it would take significant effort to make it work reliably and accurately. Scaling it to a large population and processing the data to the point of providing useful insight into how we behave and what we look at would be another hurdle.

Basically, this probably won't happen until Google (or Facebook, or Apple, etc.) decides that knowing what people look at is worth the effort/cost.


I'd be happy if all those time-and-space-stamped photos uploaded to facebook could be stitched together to make a photoessay of my vacation. That way I wouldn't even have to take a camera with me; facebook could use Other People's Pictures that I just happened to be in proximity to.


Great work. I wonder if the author had heard of the Eyewriter project [1]? Similar system, fully open source, but I'm not sure how active development is nowadays. It's been up since ~2011, though, so quite old by software standards (uses openFrameworks/openCV). Still, it worked impressively well when I built a derivative system.

The accessibility space needs as much open-source development as possible - most of the commercial tech, if you can find it, is locked down and outdated.

[1]: http://www.instructables.com/id/The-EyeWriter-20/


One of the main motivators behind OptiKey - fantastic project.


Can someone elaborate on using eye trackers in dark rooms (rooms lit only by dimmed lights or the monitor itself)? Does it work?


My mom has ALS and has an eye tracker. The camera is infrared, so it works with whatever light is in the room.

From looking at the tracking software, it seems to calculate the contrast between the pupil and the surrounding eye to track where it is looking.


Thank you.


That's correct. I find they work better in darker environments. They struggle with direct sunlight as it contains lots of IR light.


You should take a look at EyeFluence - they have developed some incredible techniques to type with your eyes using non-dwell methods.


I will, thanks for the heads up


Readers here may appreciate: I spoke with the dev for a piece on OptiKey: http://www.businessinsider.com/an-eye-tracking-interface-hel...


As someone who suffers from RSI, I'd be interested in an Opti-mouse.

Does this exist yet?


Tobii bundles some (Windows only) integration software with their EyeX. It allows you to click on whatever you are looking at with the press of an action button (left ctrl key for me), along with a few other features (eye tracking the alt-tab switcher is what I really like). It works okay, but eye tracking isn't quite as accurate as a mouse and you either need to make your icons big and widely spaced or you have to zoom in (with an action click) before selecting the icon precisely.


Even if eye tracking is not a full replacement, it can still be used to initially teleport your cursor near your target; you then use the mouse to place the cursor precisely (Tobii EyeX (and Sentry?) has this feature).

(You can see the performance of the eye-tracking warping + mouse at 2:41 of this video: http://youtu.be/7BhqRsIlROA?t=2m41s.)
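The warp-then-refine behaviour described above can be sketched as a small state machine - jump to the gaze point only when the physical mouse starts moving again, then hand control back to relative mouse motion. This is a hypothetical reconstruction for illustration, not Tobii's actual code:

```python
class WarpCursor:
    """Teleport the cursor to the gaze point on the first mouse movement
    after an idle period, then refine with ordinary relative motion."""

    def __init__(self):
        self.pos = (0.0, 0.0)
        self.gaze = (0.0, 0.0)
        self.idle = True  # True until the physical mouse moves again

    def on_gaze(self, gaze_point):
        self.gaze = gaze_point  # just remember the latest fixation

    def on_mouse_delta(self, dx, dy):
        if self.idle:
            self.pos = self.gaze        # coarse: warp near the target
            self.idle = False
        x, y = self.pos
        self.pos = (x + dx, y + dy)     # fine: normal relative movement

    def on_mouse_idle(self):
        self.idle = True                # next movement warps again
```

For example, after `on_gaze((800.0, 400.0))`, a first `on_mouse_delta(3, -2)` lands the cursor at `(803.0, 398.0)`, and subsequent deltas move it normally until the mouse goes idle again.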


I use a TrackIR with linux-track with good success. The only issue is that you have to have a reflective dot (or three) on your head, ideally on the bill of a hat. It's accurate enough that I have to use the trackpad maybe once a day. I don't think it would work well for drawing or anything like that, though...


https://www.justgiving.com/Julius-Sweetland - he's also raising money for a cancer charity at the same time.


Thank you for including this. People are being very generous.


I wonder if a version of this for mobile phones could make typing on a phone faster than fumbling with a touch screen.


Mmmm... that was a great deal slower than me using the latest Android swiping support, even counting errors. Can't speak to the other mobile tech stacks, but if your typing on a phone is slower than that, you might want to poke around - you may be missing a technology already installed on it.

(Not a criticism of the excellent work here. It's a fundamental bandwidth problem.)

Concretely: In the time the demo took to write "Meet OptiKey. Full computer control and speech using only your eyes.", I wrote in an Android text area: "I am just typing words as quickly as I can in the android interface. I'm even having to think a bit about it more that I've written so much. uh, hello world? and other stuff. I'm still going and going and going." Including the missing "now" between "more" and "that", and you can't see it but I had to correct 3 words, too. That's about three times faster on the phone I have in my pocket. And let me just say... that is impressive performance for a free eye tracking suite, in my opinion, to still be that fast with just eyes.


You'd perhaps need something like this to come closer:

>Microsoft patents eye-tracking keyboard software

>The idea's just like swipe-based keyboard software, but instead of tracking the motion of your fingertip, the system tracks eye movement.

>As your glance moves from key to key, darting around and changing direction, a device's camera records its gaze, and with a little algorithm magic, Microsoft tries to work out just which keys you were looking at - and ultimately, what word you were attempting to construct.

http://pocketnow.com/2014/12/24/eye-tracking-keyboard
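The "algorithm magic" is essentially path matching: map each gaze sample to its nearest key, then score candidate words against the resulting key sequence. A toy sketch - the key coordinates and the exact-match scoring rule are invented for illustration; a real decoder would use a probabilistic model over noisy paths:

```python
from itertools import groupby

# Hypothetical key centres on an arbitrary grid (only a few keys for the sketch).
KEYS = {"h": (5.5, 1), "e": (2, 0), "l": (8.5, 1), "o": (8, 0)}

def nearest_key(point):
    """Map a gaze sample to the closest key centre (squared distance)."""
    return min(KEYS, key=lambda k: (KEYS[k][0] - point[0]) ** 2
                                   + (KEYS[k][1] - point[1]) ** 2)

def decode(gaze_samples, vocabulary):
    # Collapse consecutive duplicates: dwelling on a key yields one letter.
    seq = [k for k, _ in groupby(nearest_key(p) for p in gaze_samples)]

    def dedup(word):  # "hello" -> ['h', 'e', 'l', 'o']
        return [c for c, _ in groupby(word)]

    # Toy scoring rule: keep words whose deduplicated letters match exactly.
    return [w for w in vocabulary if dedup(w) == seq]

samples = [(5.4, 1.1), (2.1, 0.2), (8.4, 0.9), (8.0, 0.1)]  # gaze path h-e-l-o
print(decode(samples, ["hello", "help", "halo"]))  # -> ['hello']
```

Collapsing repeated letters is what lets a single fixation on "l" still match the double letter in "hello", the same trick swipe keyboards use for finger paths.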


No, you're right. Dexterous fingers beat eye tracking for text input, but my target audience can't use their hands, so this is (hopefully) a good alternative.


Please allow me to reiterate: I'm impressed with your work. Nothing there was a criticism. In fact it struck me as well-designed.


Pretty cool, fellow Julius!


This is amazing software, but will it work for me? My left eye is a glass eye with little to no movement.


If you reach out to the author, I'd imagine he'll get back to you as soon as he's able. He's answered a few questions over on Reddit, and his email address is on the page linked in the HN OP.

The author seems to be responsive, but the Reddit thread got /r/bestof'd so he's probably RIP Inbox'd. And he's a new dad. Might be worth a followup in a few days if he doesn't get back to you beforehand.

I hope it meets your needs though!

Edit: He's also in this thread!


I have a Tobii EyeX and it has the option to track a single eye. You would basically need to set it to track just your right eye, which is a piece of cake in the config (on Windows at least).


And if OptiKey doesn't support it yet, open an issue on github! :)


OptiKey will listen to whatever it's told to! If you're using a Tobii tracker you should be able to change your configuration to track just one eye. OptiKey just cares about the coordinates that the Tobii engine spits out so it should be fine. Best of luck.


Ir rorkr rrrlly rrll ir you only rypr on rhr rirhr rirr or rhr kryborrr.


It works really well if you only type on the right side of the keyboard?


yrr


this is so mind blowingly good.



