Realtime Responsive Typography Based on Viewing Distance via Webcam (maratz.com)
254 points by tblancpain on Feb 11, 2013 | 59 comments


Cool experiment! But maybe I'm not understanding something. When a human can't see an object in sufficient detail, they lean forward to increase its apparent size in their visual field. This breaks that fundamental interaction: when you lean forward, the text stays the same apparent size. Argh! In other words, if your eyes are good enough to read the large text, they'll be good enough to read the small text when you lean in - nothing has changed, since the system maintains the text's apparent size. If, on the other hand, your eyesight isn't good enough to read the large text, leaning in won't help!


I don't think the application of this technology would be the way it is on this page: likely it would detect your face and then set the text accordingly, and then leave it alone. A constantly changing text size is bad interaction anyway.


It also shouldn't be in websites. This would make an awesome browser plug-in, which would also let you calibrate it (as mentioned by someone else) to be the "right size" for your vision.

I think a U-shaped curve could also be good: big text when you're far away, small text when you're at normal viewing distances, and zoom further [esp for full-page zoom] when you lean in very close.


To solve this, the responsive website should first calibrate on the user (either by having them supply their eyesight quality or by running some simple OK/not-OK tests).

As a workaround, change the font size to suit you (on desktop browsers, Ctrl+Mousewheel).


Well obviously we can solve that by detecting how far the user is from the camera and increasing the "root size" as they get closer. /s


Great idea, a couple of small improvements would make it even better:

* Use a moving average, to avoid flickery transitions

* Animate the text to the target size rather than changing in steps. This would mitigate the flickering problem too.


Hi, creator of the headtrackr library here. As far as the headtrackr library goes, it does include a smoother (moving average) to avoid jittering of the tracking. It's set pretty low by default though, since heavy smoothing will lead to lag in tracking sudden movements. Any kind of moving average is a trade-off between responsiveness and smoothness, unfortunately...
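
A minimal sketch of that trade-off, not headtrackr's actual internals (the headtrackingEvent name and its z field are assumptions based on the demos):

    // Minimal sketch: an exponential moving average over the estimated viewing
    // distance. alpha near 1 reacts quickly but jitters; alpha near 0 is
    // smooth but lags behind sudden movements.
    function createDistanceSmoother(alpha) {
      var smoothed = null;
      return function update(rawDistanceCm) {
        if (smoothed === null) {
          smoothed = rawDistanceCm; // seed with the first measurement
        } else {
          smoothed = alpha * rawDistanceCm + (1 - alpha) * smoothed;
        }
        return smoothed;
      };
    }

    // Hypothetical usage, assuming a tracking event that carries a distance
    // estimate in event.z (as in the headtrackr demos):
    var smooth = createDistanceSmoother(0.2);
    document.addEventListener('headtrackingEvent', function (event) {
      var distanceCm = smooth(event.z);
      // ...map distanceCm to a font size here...
    });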


One of the best ways to deal with this is to impose a minimum lag on changes in direction, with the lag dependent on the magnitude of the change.

For example, assuming you just bumped v from 10 to 11 based on a change from 100.9cm to 101cm: going back to 100.9cm won't change anything for, say, 1 second, while going to 99cm or 98cm changes things within 100ms.
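
A rough sketch of that scheme (all names and thresholds here are made up for illustration):

    // Only accept a reversal of direction after a delay, and shrink that delay
    // as the magnitude of the change grows, so tiny flickers are ignored but
    // large movements get through quickly.
    function createReversalDamper(maxDelayMs) {
      var accepted = null;      // last value we actually acted on
      var lastDirection = 0;    // +1 growing, -1 shrinking
      var reversalSince = null; // when the current reversal first appeared

      return function update(value, nowMs) {
        if (accepted === null) { accepted = value; return accepted; }
        var delta = value - accepted;
        if (delta === 0) { return accepted; }
        var direction = delta > 0 ? 1 : -1;
        if (direction === lastDirection || lastDirection === 0) {
          accepted = value;
          lastDirection = direction;
          reversalSince = null;
          return accepted;
        }
        // Direction reversed: bigger changes wait less before being accepted.
        var requiredDelayMs = maxDelayMs / (1 + Math.abs(delta));
        if (reversalSince === null) { reversalSince = nowMs; }
        if (nowMs - reversalSince >= requiredDelayMs) {
          accepted = value;
          lastDirection = direction;
          reversalSince = null;
        }
        return accepted;
      };
    }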


Hmm, that's interesting. Is there any name for that kind of method? I might look at implementing it.


Hysteresis


Yes, but consider a Kalman filter.
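
For reference, a 1D Kalman filter for this case is tiny. A sketch, assuming the true distance is roughly constant between frames; q and r are tuning knobs, not values from the demo:

    // Minimal 1D Kalman filter for a distance estimate, assuming the true
    // distance changes only slowly between frames.
    // q: process noise variance, r: measurement noise variance.
    function createKalman1D(q, r) {
      var x = null; // estimated distance
      var p = 1;    // variance of the estimate
      return function update(measurementCm) {
        if (x === null) { x = measurementCm; return x; }
        p = p + q;                        // predict: uncertainty grows a bit
        var k = p / (p + r);              // Kalman gain
        x = x + k * (measurementCm - x);  // correct towards the measurement
        p = (1 - k) * p;                  // uncertainty shrinks after correcting
        return x;
      };
    }

    // e.g. var filter = createKalman1D(0.01, 4); then call filter(rawDistanceCm)
    // once per frame.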


Neat, but this completely ignores the real reason people have trouble with small fonts: bad eyesight. The size the font needs to be is a function of distance and eyesight. The real solution is to just use the default font size and have users adjust that to their preferences.


> The size the font needs to be is a function of distance and eyesight.

And the PPI of the monitor/viewing device.


and the ambient light conditions, the amount of sleep someone has had, color blindness (could be classified as bad eyesight), the actual size of the screen (for scrolling/line length), the font itself, the amount of experience the reader has with the latin script, the quality of font hinting, the quality of sub-pixel rendering, the quality of kerning.


I wonder if, with a good enough camera, you could detect people with non-ideal vision by watching their eyes and "anti-blur" the screen in such a way that it's perfectly in focus for their eyes and head position?


That only works if your screen can produce a lightfield. Maybe future 3D screens will have such a feature.


This is a really interesting thought! It's something that probably wasn't viable before the advent of High-DPI displays. Making it worth the effort in a real-world (rather than lab-based) setting is probably insanely difficult though.


Neat, clicking through the attributions, Headtrackr (https://github.com/auduno/headtrackr/) by auduno of Opera Software looks quite useful. That's in turn based on ccv (https://github.com/liuliu/ccv), which I knew about, but Headtrackr looks much nicer to use if you just want headtracking out of the box. It does some trigonometry, based on some assumptions about field of view, to provide the 3d coordinate estimates needed for demos like this, whereas ccv focuses on object identification/tracking within the 2d image (and is much more general, so more complex to use out of the box).

The Headtrackr guy also put up a demo of a game controlled using head movement: http://www.shinydemos.com/facekat/
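
For the curious, that distance estimate boils down to pinhole-camera trigonometry along these lines (a sketch, not headtrackr's actual code; the ~16cm head width and the field-of-view value are assumptions):

    // Estimate viewing distance from the detected face's width in pixels,
    // assuming an average real head width (~16 cm) and a known horizontal
    // camera field of view.
    function estimateDistanceCm(faceWidthPx, frameWidthPx, hFovDegrees, realHeadWidthCm) {
      realHeadWidthCm = realHeadWidthCm || 16;
      // Focal length in pixels, derived from the horizontal field of view.
      var hFovRad = hFovDegrees * Math.PI / 180;
      var focalLengthPx = (frameWidthPx / 2) / Math.tan(hFovRad / 2);
      // Pinhole model: size_in_pixels / focal_length = real_size / distance.
      return realHeadWidthCm * focalLengthPx / faceWidthPx;
    }

    // e.g. a 120px-wide face in a 640px frame with a 60-degree webcam:
    // estimateDistanceCm(120, 640, 60) is roughly 74 cm.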


Great idea; however, you should use a transform rather than font-size, so the line breaks won't change.
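
Something along these lines, perhaps (a sketch only; the .reading-pane selector, the 60cm baseline, and the clamping range are all made up, and 2013-era browsers would need the -webkit- prefix):

    // Scale a wrapper element instead of changing font-size, so the layout
    // (and thus the line breaks) never reflows.
    var baselineDistanceCm = 60;
    var pane = document.querySelector('.reading-pane'); // hypothetical wrapper
    pane.style.transformOrigin = 'top center';

    function applyScaleForDistance(distanceCm) {
      // Farther away => scale up proportionally, clamped to a sane range.
      var scale = Math.min(3, Math.max(0.75, distanceCm / baselineDistanceCm));
      pane.style.transform = 'scale(' + scale + ')';
    }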


Cool, another potential tool against poorly readable sites. I just created http://cantheysee.it/ for web developers to (roughly) simulate and test for users with poor eyesight.


Whatever that font is (Source Sans Pro), it's fairly unreadable to me (Chrome on Ubuntu).

My eyesight is not great, but having a font that is too thin doesn't help either.

I wish body text just stayed in either serif or sans-serif at 1em. Use custom fonts for headers, but let my browser preferences determine the body text.


Fixed, you're absolutely right - that font should only be used for headers.


Only because you are using the extremely thin version. There's a thicker version of the font that's perfectly readable as body.


Thank you, most kind


And this is the answer to the "race against the machine": how automation that is destroying jobs can provide value no one ever thought of.

Total "cat-flap" moment: it's not something you ever think of, but once you see it, it's obvious.


I don't have a webcam installed on this machine so I can't test the implementation but what a BRILLIANT idea. This is exactly how cellphones should work; judge distance and then resize the reading pane to accommodate.


Maybe (like other comments mention). Sometimes I bring my phone closer to my face because my eyesight makes something hard to read. Making the text smaller in response can be counter-productive.


Brilliant! Couldn't get it running in FF, but works like a dream in Chrome.

Very useful for interfaces which may run on, say, a TV. Knowing the physical size, or the viewport dimensions, doesn't tell you how far away the user is.


I was going to comment to say the same thing. I've been playing Dust 514 on my PS3 recently; it's enjoyable, but most of the text is too small for me to read comfortably. If it could detect a player sitting further away and increase the text size, it'd make life a lot easier.

Definitely a cool concept.


I’m using Firefox Aurora and it worked fine, so it is at most 7 weeks away from release.


I'm mostly impressed by the image analysis: the performance, and the fact that it runs in JavaScript. I'd heard of ccv.js before but hadn't seen its capabilities for pixel analysis. Now I have even fewer excuses not to reimplement that broken real-time image analysis app I made for my bachelor's thesis.

Hmm, there are no explicit licence terms in the repositories of ccv or headtrackr though. :/



Neato!

Does the emerging browser camera support allow requesting permission for a single snap (to calibrate distance) as opposed to constant-video?
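
As far as I know there's no separate single-snapshot permission; the closest workaround is to request the stream, grab one frame for calibration, and stop the tracks immediately. A sketch using the current promise-based getUserMedia (the 2013 API was prefixed and callback-based):

    // Grab a single frame for calibration, then release the camera so the
    // recording indicator turns off right away.
    function grabCalibrationFrame() {
      return navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
          var video = document.createElement('video');
          video.srcObject = stream;
          return video.play().then(function () {
            var canvas = document.createElement('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            canvas.getContext('2d').drawImage(video, 0, 0);
            // Stop every track so the camera is released immediately.
            stream.getTracks().forEach(function (track) { track.stop(); });
            return canvas; // run face detection on this single frame
          });
        });
    }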



This is really awesome, although a bit spastic at times. I'm not sure if this is the proper way to use the concept, but I could see it being used in a similar fashion for choosing settings: set it once, and then have that size apply to the whole site or something. Either way, very clever idea, and well done! :)


Cool experiment! It works especially well if you substitute an optical illusion poster for your face (http://www.popartuk.com/g/l/lgpp0906+cogs-twisting-cogs-mind...)

In all seriousness, it seems to lock onto that poster quite often.


This is awesome. The algorithm seems to have some trouble with glasses and headphones, but works pretty well otherwise. It's such a simple idea, and with the new web technologies becoming widespread, I expect that we will see more of this sort of thing in the future.


This is very neat!

The main issue I could see with implementing this is that you'd have to constantly get permission from the user to use their webcam. I'm not sure I'd trust a site to just use my mug for improved readability.

Could be great for games though!


The face tracking didn't work well at all for me. Maybe I have an odd face.


Lighting is pretty important for the quality of the face tracking. If it doesn't work the first time, try reinitializing a couple of times; the face detector sometimes locks onto objects other than the face.


Hi everyone, thanks for your feedback! :)

More ideas: https://twitter.com/markodugonjic/status/301013228463476736


This is really slick. I think it'd be great as a browser plugin so I could use it almost like an accessibility tool on sites with horrid typography. It seems better than cmd++.


Nice, I built something similar with a buddy using a Kinect a couple of months ago. Think poster, not website.

http://youtu.be/Xy8oRmoV8Ag


Nice experiment...

If only my laptop-mounted webcam weren't sitting next to me (along with my laptop) instead of in front of me, since I always use it with an external monitor, keyboard, and mouse :p


That's so cool! I wonder if anything like this will ever be standard. I imagine people would be creeped out if their webcam was always on, but it's a really cool concept.


This is pretty cool! Does something like this exist for eye tracking, i.e. have it zoom areas of text you are looking at? It could be great for people with poor vision.


I took it one step further and integrated rotation into the mix:

Check it out: http://codepen.io/JAStanton/pen/meDLB

Too far?


Not far enough!


You, sir, are mad!


THREE DEE. Gogogo!


This is an impressive demo; however, I usually move my face closer to the screen because the font is too small.

It would be neat to see a demo of parallax using a webcam.


Check out http://auduno.github.com/headtrackr/examples/targets.html (though you'll need webgl support)


Cool idea but it just blew up the font to an unreadably huge size for me. Using an external camera mounted on the monitor in front of me.


What do you have to assume about, e.g., the user's monitor PPI and the webcam's field of view to make this work?


Wow! Just WOW!!! Although, of course, there is a lot to improve, like jittery zooms.

Love the idea though!


Very neat. Perspective bug: nodding your head down or up, or turning it to the side, makes your face appear smaller to the camera, which will increase the text size.


Oh please, someone make an extension for Chrome so I can scroll just by nodding. Thank you!


How does one enable the webcam for this to work?


Just wow



