What I did was rig it to track contours in the depth image and attempt to pick out a rectangular object. Then, from the detected locations of the four corners, I compute a perspective transform and apply it to my laser projector's output. The end result is that the cardboard box I'm holding becomes a "virtual screen" that is tracked by the laser projection in real time and in perspective :)
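The corner-to-projector mapping described above is a planar homography: four detected corners are enough to solve for the 3x3 perspective transform. Here's a minimal numpy sketch of that step (the corner coordinates and projector resolution are illustrative, not from the actual project):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective transform H mapping src[i] -> dst[i],
    given four point correspondences (h33 fixed at 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, pt):
    """Apply H to a 2-D point (with homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# The projector's full output rectangle (assuming a 640x480 raster)...
src = [(0, 0), (640, 0), (640, 480), (0, 480)]
# ...mapped onto the four detected corners of the box (made-up values).
dst = [(112, 85), (490, 120), (470, 400), (95, 360)]

H = homography(src, dst)
```

Warping every point of the projected image through `H` then makes the laser output land on the box's face in correct perspective; the same idea is what `cv2.getPerspectiveTransform` computes for you if you're already using OpenCV.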
Also, there seem to be a fair number of Kinect projects popping up nowadays. Is there a website to keep track of all this? If not, perhaps it would be worth someone's effort to start a blog on the topic...
Reminds me of Johnny Lee's awesome Projector-Based Location Discovery and Tracking work: http://johnnylee.net/projects/thesis/