Cool experiment! But maybe I'm not understanding something. When a human can't see an object in sufficient detail, they lean forward to increase its apparent size in their visual field. This breaks that fundamental interaction: when you lean forward, it stays the same size. Argh! In other words, if your eyes are good enough to read the large text, they'll be good enough to read the small text when you lean in; nothing has changed, since the system maintains the text's apparent size. If, on the other hand, your eyesight isn't good enough to read the large text, leaning in won't help!
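The geometry behind this complaint can be made concrete. A rough sketch (my own illustration, not part of the demo): the apparent angular size of text of height h viewed from distance d is 2·atan(h/2d), so if the page scales h proportionally with d, the angle never changes.

```javascript
// Apparent angular size (in degrees) of an object of height `heightCm`
// viewed from `distanceCm` away: theta = 2 * atan(h / (2d)).
function apparentAngleDeg(heightCm, distanceCm) {
  return (2 * Math.atan(heightCm / (2 * distanceCm))) * 180 / Math.PI;
}

const baseHeight = 0.5;   // text height in cm at the reference distance
const baseDistance = 50;  // reference viewing distance in cm

// Fixed-size text: leaning in from 50cm to 25cm enlarges the apparent size...
const fixedFar  = apparentAngleDeg(baseHeight, baseDistance);
const fixedNear = apparentAngleDeg(baseHeight, 25);

// ...but if the page rescales the text proportionally with distance,
// the apparent angle stays exactly the same -- leaning in buys nothing.
const scaledNear = apparentAngleDeg(baseHeight * (25 / baseDistance), 25);
```

Which is exactly the complaint above: under proportional scaling, the lean-in interaction is a no-op.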
I don't think the application of this technology would be the way it is on this page: likely it would detect your face and then set the text accordingly, and then leave it alone. A constantly changing text size is bad interaction anyway.
It also shouldn't be in websites. This would make an awesome browser plug-in, which would also let you calibrate it (as mentioned by someone else) to be the "right size" for your vision.
I think a U-shaped curve could also be good: big text when you're far away, small text when you're at normal viewing distances, and zoom further [esp for full-page zoom] when you lean in very close.
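A minimal sketch of that U-shaped mapping (the thresholds and the function name are made up for illustration):

```javascript
// U-shaped distance-to-zoom curve: scale text up when the user is far
// away, leave it alone at normal viewing distances, and zoom back in
// when they lean very close. Thresholds are illustrative only.
function uShapeZoom(distanceCm) {
  const NEAR = 30; // closer than this => treat the lean-in as a zoom request
  const FAR  = 80; // further than this => grow text with distance
  if (distanceCm < NEAR) return NEAR / distanceCm;  // full-page zoom in
  if (distanceCm > FAR)  return distanceCm / FAR;   // bigger when far away
  return 1.0;                                       // normal viewing range
}
```

At the two thresholds the factor is exactly 1, so the curve is continuous and there's no visible jump when crossing between regimes.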
To solve this, the responsive website should first calibrate on you as a user (either by the user supplying their eyesight quality or by going through some simple OK/not-OK tests).
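One toy way such an OK/not-OK calibration could work (entirely hypothetical, not from the page): show progressively smaller sample text and keep the last size the user confirmed as readable.

```javascript
// Walk a descending list of candidate font sizes; `canRead(size)` stands
// in for the user clicking OK (true) or NOT OK (false) on sample text.
// Returns the smallest size the user confirmed as readable.
function calibrate(sizesPx, canRead) {
  let best = sizesPx[0];
  for (const size of sizesPx) {
    if (!canRead(size)) break; // user pressed NOT OK: stop shrinking
    best = size;               // user pressed OK: try the next size down
  }
  return best;
}
```

A real implementation would presumably store the result and combine it with the tracked distance, rather than re-running the test per page.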
As a workaround, change the font size to suit you (on desktop browsers, Ctrl+Mousewheel).
Hi, creator of the headtrackr library here. As far as the headtrackr library goes, it does include a smoother (moving average) to avoid jittering of the tracking. It's set pretty low by default though, since heavy smoothing will lead to lag in tracking sudden movements. Any kind of moving average is a trade-off between responsiveness and smoothness, unfortunately...
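For anyone curious what such a smoother looks like, here's a minimal exponential-moving-average sketch (illustrative only; headtrackr's internal smoother may differ). A smaller alpha gives smoother output but more lag on sudden movements, which is the trade-off described above.

```javascript
// Exponential moving average: each new sample is blended with the
// previous smoothed value. alpha near 1 => responsive but jittery;
// alpha near 0 => smooth but laggy.
function makeSmoother(alpha) {
  let value = null;
  return (sample) => {
    value = value === null ? sample : alpha * sample + (1 - alpha) * value;
    return value;
  };
}
```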
One of the best ways to deal with this is to impose a minimum lag on direction changes, with the lag depending on the magnitude of the change.
That is, suppose you just bumped v from 10 to 11 based on a change from 100.9cm to 101cm. Going back to 100.9cm then shouldn't change anything for, say, 1 second, while going to 99cm takes effect within 100ms, and likewise 98cm within 100ms.
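A rough sketch of that hysteresis idea (names and thresholds are mine, not from any library): small reversals are held back until they persist past the hold time, while large movements take effect immediately.

```javascript
// Commit a new value only if it differs from the last committed value
// by more than `deadband`, or if the hold time has elapsed since the
// last committed change. Small back-and-forth jitter is suppressed.
function makeHysteresis({ deadband = 1.0, holdMs = 1000 } = {}) {
  let committed = null; // last value actually reported downstream
  let lastChange = 0;   // timestamp (ms) of the last committed change
  return (sample, nowMs) => {
    if (committed === null) {
      committed = sample;
      lastChange = nowMs;
      return committed;
    }
    const delta = Math.abs(sample - committed);
    // Big moves pass immediately; small reversals must outlast the hold.
    if (delta >= deadband || nowMs - lastChange >= holdMs) {
      committed = sample;
      lastChange = nowMs;
    }
    return committed;
  };
}
```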
Neat, but this completely ignores the real reason people have trouble with small fonts: bad eyesight. The size the font needs to be is a function of both distance and eyesight. The real solution is to just use the default font size and have users adjust that to their preferences.
and the ambient light conditions, the amount of sleep someone has had, color blindness (could be classified as bad eyesight), the actual size of the screen (for scrolling/line length), the font itself, the amount of experience the reader has with the latin script, the quality of font hinting, the quality of sub-pixel rendering, the quality of kerning.
I wonder if, with a good enough camera, you could detect people with non-ideal vision by watching their eyes and "anti-blur" the screen in such a way that it's perfectly in focus for their eyes and head position?
This is a really interesting thought! It's something that probably wasn't viable before the advent of High-DPI displays. Making it worth the effort in a real-world (rather than lab-based) setting is probably insanely difficult though.
Neat, clicking through the attributions, Headtrackr (https://github.com/auduno/headtrackr/) by auduno of Opera Software looks quite useful. That's in turn based on ccv (https://github.com/liuliu/ccv), which I knew about, but Headtrackr looks much nicer to use if you just want headtracking out of the box. It does some trigonometry, based on some assumptions about field of view, to provide the 3d coordinate estimates needed for demos like this, whereas ccv focuses on object identification/tracking within the 2d image (and is much more general, so more complex to use out of the box).
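The distance estimate works roughly like this pinhole-camera sketch (my own simplification with assumed numbers; headtrackr's actual math may differ): given an assumed horizontal field of view and an assumed real-world face width, the face's width in pixels yields a distance estimate.

```javascript
// Pinhole model: focal length in pixel units follows from the image
// width and the horizontal field of view; distance then comes from the
// ratio of real face width to its projected width in pixels.
function estimateDistanceCm(faceWidthPx, imageWidthPx, fovDeg, realFaceWidthCm) {
  const fovRad = fovDeg * Math.PI / 180;
  const focalPx = (imageWidthPx / 2) / Math.tan(fovRad / 2);
  return (realFaceWidthCm * focalPx) / faceWidthPx;
}
```

Since webcams don't report their field of view and faces vary in width, both have to be assumed, which is why demos like this give estimates rather than true distances.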
Cool, another potential tool against poorly readable sites.
I just created http://cantheysee.it/ for web developers to (roughly) simulate and test for users with poor eyesight.
Whatever that font is (Source Sans Pro), it's fairly unreadable to me (Chrome on Ubuntu).
My eyesight is not great, but having a font that is too thin doesn't help either.
I wish body text just stayed in either serif or sans-serif at 1em. Use custom fonts for headers, but let my browser preferences determine the body text.
I don't have a webcam installed on this machine so I can't test the implementation but what a BRILLIANT idea. This is exactly how cellphones should work; judge distance and then resize the reading pane to accommodate.
Maybe (like other comments mention). Sometimes I bring my phone closer to my face because I'm having trouble reading something because of my eyesight. Making the text smaller in response can be counter-productive.
Brilliant! Couldn't get it running in FF, but works like a dream in Chrome.
Very useful for interfaces which may run on, say, a TV. Knowing the physical size, or the viewport dimensions, doesn't tell you how far away the user is.
I was going to comment to say the same thing. I've been playing Dust 514 on my PS3 recently; it's enjoyable, but most of the text is too small for me to read comfortably. If it could detect a player sitting further away and increase the text size, it'd make life a lot easier.
I'm mostly impressed by the image analysis performance, and the fact that it runs in JavaScript. I'd heard of ccv.js before, but hadn't seen its capabilities for pixel analysis. Now I have even fewer excuses not to reimplement that broken real-time image analysis app I made for my bachelor thesis.
Hmm, there are no explicit licence terms in the repositories of ccv or headtrackr though. :/
This is really awesome, although a bit jumpy at times. I'm not sure if this is the proper way to use this concept, but I could see it being used in similar fashions for configuring settings: set it once, and then have that size apply to the whole site or something. Either way, very clever idea, and well done! :)
This is awesome. The algorithm seems to have some trouble with glasses and headphones, but works pretty well otherwise. It's such a simple idea, and with the new web technologies becoming widespread, I expect that we will see more of this sort of thing in the future.
The main issue I could see implementing this is that you'd have to constantly get permission from the user to use their webcam. I'm not sure I'd trust a site to just use my mug for improved readability.
Lighting is pretty important for the quality of the face tracking. If it doesn't work the first time, try reinitializing a couple of times; the face detector sometimes tends to lock onto objects other than the face.
This is really slick. I think it'd be great as a browser plugin so I could use it almost like an accessibility tool on sites with horrid typography. It seems better than cmd++.
If only my laptop's built-in webcam weren't sitting off to my side instead of in front of me, since I always use the laptop with an external monitor, keyboard, and mouse :p
That's so cool! I wonder if anything like this will ever be standard. I imagine people would be creeped out if their webcam was always on, but it's a really cool concept.
This is pretty cool! Does something like this exist for eye tracking, i.e. zooming the areas of text you're looking at? It could be great for people with poor vision.
Very neat. Perspective bug: nodding your head up or down, or turning it to the side, makes your face appear smaller to the camera, which will increase the text size.