Hacker News
iPad 2 gets glasses-free 3D display using front-facing camera for head tracking (tuaw.com)
275 points by jashmenn on April 11, 2011 | hide | past | favorite | 50 comments



See also http://www.youtube.com/watch?v=h5QSclrIdlE , which is actually a product that you can buy, though I think it was never localized for the US.

Edit: Also, http://johnnylee.net/projects/wii/ , under "Head Tracking for Desktop VR Displays using the Wii Remote" (down a bit).

(Also, to be clear, I just mean these as interesting "see alsos". I am not saying "it's been done" (so what?) or "it doesn't work" (I have no direct experience to make a judgment). Nor am I saying anyone accused me otherwise. The iPad is still a potential interesting sweet spot for this tech.)


http://kotaku.com/#!5541044/looksleys-line-up-micro+review-b...

Looks like it did come out in the US, although from the review it looks like we might not have heard about it because the effect just didn't work well enough.

Maybe this implementation will deliver? It looks fairly promising, and the effect might be more pronounced on the iPad's large screen (compared to the DSi's). Of course, I'd like to pass judgement on it personally, so here's hoping for a demo we can hold in our hands soon.


This seems like a big deal even if it didn't work in previous attempts, in part because I feel like it eventually will.

I'm excited and impressed by it, although I'm a little disappointed that I didn't think of it. What the heck is my brain doing in my free time?!


What the heck is my brain doing in my free time?!

If you're bored... people ask for a touch-sensitive rear surface - how about using the rear camera for gesture control?

Wave your finger in front of it to scroll, that kind of thing.

Also, some acoustic gesture recognition would be good to pick up one/two/three fingernail taps on the back.


Apple's already patented it. I don't want to get sued into oblivion. :-)

http://www.addictivetips.com/mobile/iphone-control-through-c...


That's annoying, but it only seems to cover tap detection by accelerometer, whereas I was thinking of tap detection by audio click, which definitely seems plausible from a quick test of the sound recorder.

Also, it only covers finger swipe detection/direction determination, whereas I was imagining gestures made by holding fingers a few centimeters away, such as scissoring two fingers, a thumb/finger flick, a two-finger scroll movement, two fingertips touching, etc. A quick test with the iPhone camera suggests that might have to be too carefully positioned to be practical, but it should be possible at some level.


I'm excited and impressed by it mainly because I did think of it and started implementing it, thinking to develop it as a desktop interface, or possibly a CAD program interface. Lots of linear algebra involved...

However, I stopped mainly because at the time (about 3-4 years ago) my webcam and opencv introduced a delay that was unacceptable and caused visual jarring in a small application (I think I was using a window of 320x240 or so).

I've also seen a demo done in flash by MrDoob of Three.js fame (IIRC), so it's been done by several people. I haven't really seen a useful implementation yet, so this is getting more interesting with these smaller, higher spec integrated devices...


Johnny Lee demonstrating Wii-based head tracking using the same plates-on-sticks imagery in 2007: http://www.youtube.com/watch?v=Jd3-eiid-Uw

Guy mounts two large framed LCDs in a wall and uses this effect to create a very effective virtual window which adjusts the view depending on viewer's position: http://www.engadget.com/2010/04/15/winscape-virtual-window-f... ("Winscape") in April 2010.


Does anyone have a reference for the camera transforms this is doing? I've got as far as figuring out that it's using an asymmetric frustum, but plugging the "obvious" numbers into OpenGL gave me obviously wrong results.
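For what it's worth, here's a minimal sketch of the off-axis projection math as I understand it; `off_axis_frustum` is just a name I made up, and the setup assumes the screen is centered on the origin in the z=0 plane with the eye at (ex, ey, ez) in the same units:

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Frustum bounds (left, right, bottom, top) at the near plane for
    an eye at (ex, ey, ez), with the screen centered on the origin in
    the z=0 plane. Scaling the screen edges by near/ez makes the
    frustum's side planes pass through the physical screen edges,
    which is what produces the "window" effect as the head moves."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen rectangle onto the near plane
    left   = (-screen_w / 2.0 - ex) * scale
    right  = ( screen_w / 2.0 - ex) * scale
    bottom = (-screen_h / 2.0 - ey) * scale
    top    = ( screen_h / 2.0 - ey) * scale
    return left, right, bottom, top
```

With the eye on the center axis this degenerates to an ordinary symmetric frustum; move the eye to the right and the whole frustum shifts left, which is the parallax shift in the demo. The "obvious" mistake I made at first was forgetting the near/ez scaling, which gives visibly wrong results.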


I wonder what happens when several people try to look over your shoulder. How does it handle multiple heads?


It could only render for a single viewpoint. I would imagine the simplest approach (i.e. without tracking) would be to render for the viewpoint of the closest detected head.


Don't be fooled: this is not 3D as in "stereoscopic" 3D like Avatar. You will not get the feeling of depth, because for that you need a different image for each eye, only achieved with glasses or a special screen (like the 3DS or at the movies). This is just a 3D geometry trick...


Do be fooled. What do you gain by refusing to be fooled?


Well, I'm just annoyed because many people still say that current 3D TVs don't bring anything, or that the technology with glasses will disappear soon. They think that someone will find a magic software trick to make it 3D without glasses, maybe by tracking eyes. But that is impossible; it's a misunderstanding of how stereoscopic 3D actually works. Current 3D technology is actually really good, though the glasses will surely get much lighter.


You're probably correct in saying that glasses won't disappear anytime soon, and it is impossible with eye/head tracking alone to achieve stereoscopic 3D, but I'd be willing to bet that sometime in the future a holographic display could be created: one that sends different images to different regions of the room, such that your left eye literally sees a different view than your right eye, without shutters or polarization to mimic it.

To truly get there, you'd have to actually change the path length of each ray, to allow your eyes' focus to agree with their convergence (a common complaint with current 3D, where the eyes are forced to focus on light coming from 3 meters away but converge on an object that's "1 meter" away).
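That mismatch is easy to put rough numbers on. A back-of-the-envelope sketch, assuming a typical ~63 mm interpupillary distance (the function name and default are mine, not from any spec):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when both converge
    on a point distance_m straight ahead: each eye rotates inward by
    atan((ipd/2) / distance), so the total angle is twice that."""
    return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / distance_m))
```

Converging on the screen 3 m away takes about 1.2 degrees; converging on an object rendered at "1 m" takes about 3.6 degrees, while the eyes still have to focus at 3 m. That disagreement between convergence and focus is the discomfort described above.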


While I understand that "3D" is a lot easier to say than "stereoscopic", I don't think any 3D movies warrant being called any more three-dimensional than this sort of thing. IMHO, it's not 3D until I can move around and see different things (in each eye), i.e. until the medium understands itself in 3D space and reflects accordingly on my eyes.


It is widely accepted that 3D in the context of movies and games means "stereoscopic", i.e. you have a perception of depth. I believe the perception of depth gives a very different experience; personally I was stunned when I saw a 3D movie for the first time, since you really feel like you are present in the scene. Looking around and seeing different things is already implemented in 99% of games, since the camera is controlled by the player.


Yeah, I know it's widely accepted (I grew up being bombarded with Spy Kids 3D Volume 72 advertising throughout my whole childhood). I'm just of the opinion that it doesn't leave much room for consumer understanding of advances in 3D technology by dominating the terminology.


What's important is tricking my brain into perceiving depth in something that has none. How that's achieved is irrelevant.


This type of 3D (head tracking) combined with 3DS type 3D (glasses free depth) would be amazing.

You would get both depth and the ability to "see" a different perspective as you look around just like real life.


The 3DS depends on a very narrow viewing angle, so the different approaches cannot be combined at this time.


Some more advanced autostereoscopic displays use head tracking as well to configure the paths of light instead of arbitrary prescribed fixed sweet spots. It's also pretty cool in that you could use it to give a different full sized image to different people watching it. So for playing games, you don't have split screen, but rather each player sees their content full size sans glasses. Maybe you could use the ultrasound interference patterns to create directional sound too.


So you have to constantly move your head to get the 3D illusion; otherwise, if you're not moving your head, it's just plain 2D. It would be more interesting combined with stereoscopic 3D, but even with that we are still far from real 3D. The next interesting thing will be real hologram displays using rewritable holograms: http://www.youtube.com/watch?v=b84RBl-jgZM


Not just moving your head... you'll also get the effect from moving just the iPad. It will be interesting to see how much of the effect holds up with subtle movement of either.


Mr Doob tried something similar a while back...

http://mrdoob.com/blog/post/643


This is pretty amazing. I think that this coupled with multi-touch interactivity could be pretty revolutionary. It turns the iPad into an "iWindow".


Yes! As I was looking at the cube, I wanted the viewer to reach out and manipulate it: spin, pinch, what-have-you.

I think the worst thing that could happen is that this sort of experience gets baked into the core of the iPad's UI (or the UI of any tablet). But I could see imaging applications (MRIs?) where it would be valuable.


I suspect that touching the screen would, at all but one depth plane, break the illusion. The brain is eerily good at fooling itself, until it receives contradictory information.


This is pretty cool. I'm most impressed that the iPad 2 has the horsepower to do all of the face-tracking and geometry rendering.

As others have mentioned here, Johnny Lee had something similar running with the Wii hardware but as far as I remember the display was all driven by a PC which was (presumably) much more powerful than the iPad.


Johnny Lee's version was definitely cool, but it was more of a clever hack. It took advantage of the fact that the Wiimote is basically a very cheap IR camera, which normally tracks the IR light coming from the sensor bar. By using the Wiimote as a stationary camera, and wearing infrared LEDs on his face (attached to safety goggles, iirc), he could track the position of the user's head. I'd imagine the Wii hardware would definitely be up to that task, as it is basically just the inverse of what it does already.
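A rough sketch of the distance half of that tracking, assuming two goggle LEDs a known distance apart and the commonly quoted ~45 degree FOV over a 1024-px-wide Wiimote sensor (those numbers are approximations I haven't verified against a spec):

```python
import math

def head_distance_m(px_separation, led_spacing_m=0.15,
                    fov_deg=45.0, sensor_px=1024):
    """Similar-triangles range estimate: two IR LEDs led_spacing_m
    apart appear px_separation pixels apart on the sensor, so
    distance = spacing * focal_length / separation, where the focal
    length in pixels is derived from the horizontal field of view."""
    focal_px = (sensor_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    return led_spacing_m * focal_px / px_separation
```

The closer the head, the farther apart the two dots appear, so the estimate falls as the pixel separation grows; the dots' midpoint then gives the left/right and up/down position.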

Also, if you really want to be impressed with the iPad's rendering capabilities, check out Infinity Blade. That will blow your mind.


This looks like it would support some fun new game mechanics on the iPad. I can imagine some sort of game where you have to look at the screen from different angles in order to perceive the route that your character (or a ball, vehicle, etc.) ought to take. Even something like 3D Pong would be kind of neat.


It's starting to feel like the future.


Yup. I'm wondering when more average people are going to start noticing this.


I'm sure your father was saying the same thing forty years ago.


But this is what I was told the future would be when I was a kid.


As someone said, this was done for the Wii several years ago. I think the idea is more practical for the iPad, since you can rotate the screen instead of moving around. I would probably be more likely to use this if the position tracking was done via gyroscope.


This could be extended with eye tracking to create even more impressive functions.


I doubt the iPad's camera is good enough for eye tracking but maybe in future devices.


Is this more or less what the new Nintendo handheld does? How do they differ if not?


The 3DS screen is closer to one of those lenticular "3d" things you'd see on a ruler or a bookmark as a kid. It uses two interlaced images that are fed to each eye separately. The iPad 2 screen is just a normal screen.

http://loot-ninja.com/2010/06/22/how-the-nintendo-3ds-screen...

Basically, the iPad 2 demo only gives the illusion of being 3D when you move it, but since your head and hands are pretty much constantly moving it should look "mostly 3D" if it has enough sensitivity.
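In case "two interlaced images" is unclear, here's a toy sketch of the column interleave a parallax-barrier screen like the 3DS's performs (real panels work at subpixel granularity and bake the barrier into the hardware, so this is only illustrative):

```python
def interlace(left_view, right_view):
    """Column-interleave two equal-sized views, as a parallax-barrier
    display does: the barrier hides the odd columns from the left eye
    and the even columns from the right eye, so each eye sees only
    its own image. Views are lists of rows of pixels."""
    return [[l if x % 2 == 0 else r
             for x, (l, r) in enumerate(zip(lrow, rrow))]
            for lrow, rrow in zip(left_view, right_view)]
```

The barrier geometry only lines up with the eyes from one narrow viewing position, which is why the 3DS effect breaks when you move your head.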


Thx, yeah I knew the iPad screen was a standard display, but having only seen ads for the new Nintendo I wasn't sure what their tech was.


I'm pretty sure that the 3DS isn't doing any head tracking. If you move your head much at all the 3D effect breaks; you have to be looking at it practically dead-on.


Yep. The way the 3D effect works, there are essentially physical barriers inside the screen separating the different views being rendered for each eye. Move your head so that you're not viewing the screen straight-on and the barriers don't work properly, ruining the effect.


So this is "3d" now? Wow, I've got a lot of cool stuff that is 3d, including a prototype compiz plugin that does roughly the same thing. Gives you spatial context for windows on your desktop environment via Kinect and OpenCL.

I had no idea I could get away with calling this "3d", and have people in the comments thinking it's the same technology as the 3ds.


3D on a screen is always an illusion of some sort. The 3DS is just a different illusion (albeit a more technically involved one, for sure). It's not any more "3D" than this. Perspective not changing when your head moves is one of the things that breaks a 3D illusion; with this, you don't have that problem.


There is a big difference between old-style 3D (which games have had for a dozen years) and stereoscopic 3D, where your brain captures the depth because each eye sees a _different_ image. Seeing depth makes it a much more immersive experience.


Do you still see 3D with one eye closed?

Most people instinctively say "no". But in reality there are far more cues for depth than just stereoscopic vision. Someone with one eye still perceives depth by judging relative object sizes, changes in focus, and subtle changes in objects as they move relative to them.

Head tracking will almost certainly play a big part in coming 3D techs (with or without stereoscopic tech). Add to this eye tracking to adjust virtual focus, and you start to get "real" 3D. One day!


I agree, there is. Stereoscopic 3D isn't an end-all solution, however, and this isn't just ordinary "old style 3D", either. Head-tracking 3D is still a pretty novel technique, adding the illusion that you can look around an environment just by moving. I imagine that combining the two you could create a pretty immersive experience, although I can also admit there might be some difficulties doing so.


That hurts my retinal neurons just thinking about perceiving that.


that is sick



