BTW, this post reveals a bit more information than Leap Motion would like developers to share. Essentially, OP broke the agreement.
IIRC, they released the code as open source.
I am imagining something along the lines of paint with silk, but volumetric.
Hopefully you'll come back and post something, if not here then on reddit.
Fingers will disappear without notice when nothing all that crazy is happening, and the frame rate of the device (which is specced at 120+ fps) is in practice much closer to 45-55 fps. This leads to some major problems with long-term finger acquisition that have to be handled by the developer. Quite frustrating to do things yourself that should be handled by the SDK.
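For anyone hitting the same thing, the bookkeeping you end up writing looks something like this - the per-frame input shape ({raw_id: (x, y, z)} in mm) and the thresholds are my own assumptions, not anything from the SDK:

    import time

    # Sketch: re-associate fingers that briefly drop out by matching a
    # newly appearing finger to a recently lost one that is close by.

    MATCH_RADIUS = 30.0   # mm within which a "new" finger is a lost one
    GRACE_PERIOD = 0.25   # seconds a lost finger stays eligible

    class FingerTracker:
        def __init__(self):
            self.active = {}  # stable_id -> position
            self.lost = {}    # stable_id -> (position, time_lost)
            self.alias = {}   # raw SDK id -> stable_id

        def update(self, frame_fingers):
            now = time.time()
            seen = set()
            for raw_id, pos in frame_fingers.items():
                sid = self.alias.get(raw_id)
                if sid is None:
                    sid = self._rematch(pos) or raw_id
                    self.alias[raw_id] = sid
                self.lost.pop(sid, None)
                self.active[sid] = pos
                seen.add(sid)
            # Park unseen fingers instead of dropping them outright.
            for sid in list(self.active):
                if sid not in seen:
                    self.lost[sid] = (self.active.pop(sid), now)
            # Forget fingers that have been gone too long.
            self.lost = {s: (p, t) for s, (p, t) in self.lost.items()
                         if now - t < GRACE_PERIOD}
            return self.active

        def _rematch(self, pos):
            best, best_d = None, MATCH_RADIUS
            for sid, (lost_pos, _) in self.lost.items():
                d = sum((a - b) ** 2 for a, b in zip(pos, lost_pos)) ** 0.5
                if d < best_d:
                    best, best_d = sid, d
            return best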
While I understand that this SDK release is a "beta/alpha" test, it is much buggier than it should be. The SDK will hang the entire OS quite often, and there is simply no way to detect whether the device is actually plugged in: it will report invalid finger data rather than telling you that no device exists.
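The only workaround I can think of (purely a guess - these frame fields are placeholders, not real SDK calls) is to treat a sustained run of implausible finger data as "device probably absent":

    INVALID_LIMIT = 60  # consecutive bad frames before assuming no device

    def looks_valid(fingers):
        # fingers: list of (x, y, z) in mm. Placeholder sanity check:
        # real fingertips should fall inside a plausible interaction
        # volume; wildly out-of-range values suggest garbage data.
        return all(abs(c) < 600.0 for pos in fingers for c in pos)

    class DevicePresence:
        def __init__(self):
            self.bad_streak = 0

        def update(self, fingers):
            if not fingers:
                # An empty frame is no evidence either way.
                return self.bad_streak < INVALID_LIMIT
            self.bad_streak = 0 if looks_valid(fingers) else self.bad_streak + 1
            return self.bad_streak < INVALID_LIMIT  # False => likely unplugged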
Overall a cool device with lots of hype, but it needs a lot more work to be even mildly useful for anything beyond simple gestures.
Radio had advanced beyond touchscreen and into motion detection. It meant you could control the radio with minimal effort but had to sit annoyingly still if you wanted to keep listening to the same channel.
If you have an informed (hands-on) opinion to the contrary, I would be very interested.
I personally imagine an interface not unlike a wand - people might carry around a pointing stick of some kind, with a button built into the handle to switch gesture input on or off (at least until we can start doing crazy things like gauging intent - what if the computer only responded to your motions if it knew you were looking at it?).
Probably a terrible idea.
Painting, drum machines, games, and many more applications come to mind. DJs moving from turning knobs on impenetrable devices to conducting a live mix on a huge-ass screen? Yes, please.
Hands-on experience
So, as much as Gorilla Arm would be a problem for everyday/all-day use of no-touch gestural interfaces, such interfaces are still a great solution for specific existing problems.
You can play/pause or skip to the next or previous song while your hands are deep in batter. Works from 1-6 feet.
Extracting gestures is indeed a problem. Most of the approaches I know depend on a state triggered by the appearance of a new input (in the video, when you add or remove a finger) and then work by doing a temporal sum of the movement to get a shape.
This of course introduces problems about how fast or how slow the person draws the shape in the air - unless you reset the "frame of reading" when a finger is added, when a finger is removed (as explained before), OR when you have just successfully detected a gesture boundary - I don't mean identified the gesture, but a quick deceleration of the finger followed by a short immobilization of the finger can reset the "frame of reading".
You may or may not have successfully grasped what came before that shape, but a human will usually stop and try again, so you get to join the right "frame of reading".
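To make that concrete, here's a rough sketch of that reset logic - open a stroke when the finger speeds up, close the "frame of reading" after a quick deceleration followed by a short immobilization (all thresholds illustrative):

    import math

    SPEED_START = 80.0   # mm/s: motion faster than this opens a stroke
    SPEED_STOP = 20.0    # mm/s: slower than this counts as "immobile"
    DWELL_FRAMES = 8     # consecutive slow frames that close a stroke

    def segment_strokes(samples):
        """samples: iterable of (t, x, y) fingertip positions; yields
        strokes, i.e. lists of (x, y) points between immobilizations."""
        stroke, slow, prev = [], 0, None
        for t, x, y in samples:
            if prev is not None:
                dt = (t - prev[0]) or 1e-6
                speed = math.hypot(x - prev[1], y - prev[2]) / dt
                if stroke:
                    stroke.append((x, y))
                    slow = slow + 1 if speed < SPEED_STOP else 0
                    if slow >= DWELL_FRAMES:
                        if len(stroke) > DWELL_FRAMES:
                            yield stroke[:-DWELL_FRAMES]  # drop the dwell tail
                        stroke, slow = [], 0
                elif speed > SPEED_START:
                    stroke = [(prev[1], prev[2]), (x, y)]
            prev = (t, x, y)
        if len(stroke) > 1:
            yield stroke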
I've done a little work (computer vision MA thesis) on using Gestalt perceptual grouping on 3D+t (video) imaging. The goal was automating sign language interpretation (especially when shapes are drawn in the air, something very popular in French Sign Language - and therefore, I suppose, in American Sign Language, considering how close they are linguistically).
However, we were far from that in 2003, and we used webcams only. A lot of work went into separating each finger - which depends, among other things, on its position relative to the others: at the extremity of the row you have either the index or the pinky, and you guess which one if you know which hand it is and which side is facing the camera.
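Roughly, that guess looked like the following (a simplified reconstruction from memory, using only fingertip x-coordinates and ignoring the thumb):

    NAMES = ["pinky", "ring", "middle", "index"]  # thumb handled separately

    def label_fingers(tip_xs, hand="right", palm_faces_camera=True):
        """tip_xs: image x-coordinates of the four non-thumb fingertips.
        For a right hand with the palm toward the camera, the pinky is
        the leftmost finger in the image; flipping either the hand or
        the facing side mirrors the order."""
        pinky_leftmost = (hand == "right") == palm_faces_camera
        order = NAMES if pinky_leftmost else NAMES[::-1]
        return dict(zip(sorted(tip_xs), order))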
I don't think it is, or even was, that innovative. I've stopped working on that, so I guess there must have been a lot of new approaches since. So once again, go read more about computer vision. It's fascinating!
I'd be happy to send anyone a copy, but it's in French :-)
This has been tried many times before, but this is the first product I've seen that is accurate, extensible (with an SDK), and offered at a decent cost.
But after reading the OP article, you'll realize that all he did was play with the hello-world-like demo that comes with the SDK and create nothing (and, BTW, he broke the SDK agreement by giving too many details about how it works, since he couldn't do anything else with it - oh, I forgot, he complained about the bent mini-USB cable too).
So no, I would not call what he did innovative by any stretch of the imagination. Using the algorithms I proposed, he could have done something better than hello world. At least I had the excuse of having no real hardware implementation - I would have killed for something like that Leap thingy.
I cannot tell you how incredible this product is. I'm a first year CS student, and I've never done anything even remotely close to gesture tracking before. But at the end of the night, I was able to play rock paper scissors with my computer. The API is that simple to use.
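The core of such a demo really can be tiny - this is not their code, just a guess at its shape, taking the extended-finger count from whatever the frame reports:

    def classify(extended_fingers):
        """Map a per-frame extended-finger count to a throw. The exact
        thresholds are a guess; the slack absorbs flaky finger detection."""
        if extended_fingers <= 1:
            return "rock"
        if extended_fingers <= 3:
            return "scissors"
        return "paper"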
Yet, as mentioned, it's so incredibly accurate. One of the biggest bugs we faced was that even when we thought our hands were still, the device registered imperceptible movements, which were translated into false moves.
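A simple dead zone is one way to paper over that - here's a minimal sketch (threshold and input shape are illustrative):

    DEAD_ZONE = 2.0  # mm of cumulative drift to ignore

    class DeadZone:
        def __init__(self, threshold=DEAD_ZONE):
            self.anchor = None
            self.threshold = threshold

        def filter(self, pos):
            """pos: (x, y, z) in mm. Returns a position that only moves
            once the hand has clearly moved past the threshold."""
            if self.anchor is None:
                self.anchor = pos
                return pos
            dist = sum((a - b) ** 2 for a, b in zip(pos, self.anchor)) ** 0.5
            if dist > self.threshold:
                self.anchor = pos
            return self.anchor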
Overall it's a great product, especially for the price.
While this is about the extraction of finger gestures from 2D video, many of the techniques carry over and are applicable to 3D point cloud data (if that is what the raw Leap Motion device exposes).
Should be a fun read; other papers by the same authors are also relevant to the problem domain.
EDIT: Some form of running debouncing filter should kill the jitter - the Savitzky-Golay filter family is good for that.
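For example, with SciPy (window and polynomial order are just starting points; the standard filter is non-causal, so for live input you'd refit on a trailing window):

    import numpy as np
    from scipy.signal import savgol_filter

    track = np.cumsum(np.random.randn(200, 3), axis=0)  # fake noisy 3D path

    # Smooth each coordinate over a 15-sample window with a cubic fit.
    smoothed = savgol_filter(track, window_length=15, polyorder=3, axis=0)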
Even after doing the calibration, I could never get to all four corners of a 24-inch screen. I suspect they will get it ironed out in the end; it does seem like AI/software issues rather than hardware issues.
I will say that when it's working, it's a really magical feeling. It feels accurate, like I am truly controlling something; but beyond that it didn't feel nearly robust enough for real-world use.
It was cool because it had kick-ass software behind it that could do all the work.
I see this constantly with things like glass/mirrors that can be touchscreens, etc.
They look awesome because the (imaginary) software demoing them does awesome things.
Leap could be a cool device, but you’ll need to think outside the box to see how.
My fat ass is not going to wave at anything it can do with a mouse, let alone give up the quicker speed at which we can type/shortcut/mouse compared to physical movement.
Personally, if I were a developer, I'd look at totally new things.
Stop thinking about it as a computer accessory and start thinking about it as a general UI device that can be used in everyday situations.
For example, most gamers would say 3D shooters work well from the keyboard. However, if you could just point your index finger and then simulate the recoil motion (saying 'poof' would be optional), I think one could make a nice shooter (probably a bit slow, as the software could only fire after it detected the 'recoil', but if all users suffer from that, it can be designed around).
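A sketch of that trigger might look like this (thresholds invented; the velocity and pointing direction are assumed to come from the tracker):

    import numpy as np

    RECOIL_SPEED = 400.0  # mm/s of backward motion that counts as a shot
    COOLDOWN = 0.3        # seconds between shots

    class RecoilTrigger:
        def __init__(self):
            self.last_shot = float("-inf")

        def update(self, t, tip_velocity, pointing_dir):
            """Fire when the fingertip jerks sharply backward along its
            own pointing direction. Both vectors are 3-tuples."""
            d = np.asarray(pointing_dir, dtype=float)
            norm = np.linalg.norm(d)
            if norm == 0:
                return False
            backward = -np.dot(np.asarray(tip_velocity, dtype=float), d / norm)
            if backward > RECOIL_SPEED and t - self.last_shot > COOLDOWN:
                self.last_shot = t
                return True  # bang
            return False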
Everyone seems to be trying to replicate iPad gestures in 3D, e.g., point your finger at something, drag something around by pointing.
How about instead we create a virtual hand that shows up on your screen, mirrors the movements of your real hand, and lets you interact with objects on the screen?
I just think it would be awesome to be able to virtually reach into your screen and move things around! And it seems like it would be quite intuitive, no?
As some examples: I'm picturing moving your hand to the top of a window, making a grabbing motion, and dragging the window around; grabbing the corner of a window to resize it. You could even have the hand type on a virtual keyboard shown inside the screen. What do you guys think?
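Something like this, maybe - the Window class and the tracker inputs here are placeholders, not any real windowing API:

    from dataclasses import dataclass

    @dataclass
    class Window:
        x: float
        y: float
        w: float
        h: float

    EDGE = 12  # px: how close to a corner counts as a resize grab

    class GrabController:
        def __init__(self, find_window):
            self.find_window = find_window  # callback: (x, y) -> Window or None
            self.grab = None                # (window, mode, dx, dy)

        def on_frame(self, x, y, grabbing):
            """x, y: hand position projected to screen; grabbing: bool."""
            if not grabbing:
                self.grab = None
                return
            if self.grab is None:
                win = self.find_window(x, y)
                if win is None:
                    return
                near_corner = (abs(x - (win.x + win.w)) < EDGE and
                               abs(y - (win.y + win.h)) < EDGE)
                mode = "resize" if near_corner else "move"
                self.grab = (win, mode, x - win.x, y - win.y)
            else:
                win, mode, dx, dy = self.grab
                if mode == "move":
                    win.x, win.y = x - dx, y - dy
                else:
                    win.w, win.h = max(50, x - win.x), max(50, y - win.y)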
What would be amazing is using this to interact with a 3D display system. Not the in-a-box 3D that you get with TVs, but something closer to what we often think of as holograms.
Holograms are probably a ways off (though they are doing interesting things with targeting light at retinas).
The Oculus Rift, though (as others have mentioned), may be able to do something truly awesome when paired with an interface like this.
Both devices are incredibly exciting. If they manage to mesh up...
Furthermore, my pinkies and thumbs take a beating pressing the command, control, and shift keys. I would much rather wave my thumb or pinky in a particular direction to get those modifiers. This may not be possible with the current Leap, but it will no doubt be possible soon.
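The mapping itself would be trivial once a tracker can report which fingers are extended (the flags here are assumed inputs):

    MODIFIERS = {"thumb": "cmd", "pinky": "shift", "index": "ctrl"}

    def active_modifiers(extended):
        """extended: e.g. {'thumb': True, 'pinky': False, 'index': True}.
        Returns the set of modifier keys to hold down this frame."""
        return {MODIFIERS[f] for f, on in extended.items()
                if on and f in MODIFIERS}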
CAD work is much better done with a mouse; you want quick, precise movement that takes little physical effort.
(I have been doing AutoCAD, and now Revit, for 22 years.)
Now Blender, on the other hand, is ripe for experimentation.
Can you talk a little bit about the construction of the unit? How does the craftsmanship look?
That said, I had a fun time writing stuff. Now, if only it had proper Linux support (instead of the hacky VirtualBox workarounds I have to use).
Every time? I've been prompted twice, I think. Once the very first time I set up the SDK, and then once again after, I think, an update.
And, yeah, I'm waiting for the Linux support too. :)
"we will announce a ship date later this month."
Besides, the thing is really tiny.