I'm working on a wearable device that uses a camera + ultrasonic sensor for accurate hand pose estimation, with sign language recognition as the main application (extending later to gesture recognition and HCI). The device should be able to integrate into a smartwatch. I'm only at the 'will it work?' stage, working on the algorithms for recognising ASL words from synthetic data. Would appreciate criticism and pointers on what to keep in mind :)
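To give a rough idea of the recognition stage I'm prototyping: classify a short sequence of hand keypoints into an ASL word. This is just a sketch, the keypoint format, vocabulary size, and the GRU choice are placeholders rather than a settled design:

```python
# Sketch: classify ASL words from sequences of 21 hand keypoints (synthetic for now).
# Names and sizes below are placeholders, not the actual implementation.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21          # one 3D point per hand joint
NUM_WORDS = 50              # size of the synthetic ASL vocabulary (placeholder)
SEQ_LEN = 60                # frames per sample, e.g. ~2 s at 30 fps

class SignClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(
            input_size=NUM_KEYPOINTS * 3, hidden_size=hidden, batch_first=True
        )
        self.head = nn.Linear(hidden, NUM_WORDS)

    def forward(self, poses):                  # poses: (batch, SEQ_LEN, 21, 3)
        x = poses.flatten(2)                   # (batch, SEQ_LEN, 63)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        return self.head(h[-1])                # word logits: (batch, NUM_WORDS)

# Quick smoke test on random "synthetic" poses.
model = SignClassifier()
fake_batch = torch.randn(4, SEQ_LEN, NUM_KEYPOINTS, 3)
print(model(fake_batch).shape)                 # torch.Size([4, 50])
```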
Hey there, I've been passively working on something along similar lines: a wristband with both input and output modalities, gestures and haptics.
Here's a dump of some of my bookmarks and ideas; you might find a few things useful:
very cool, hard and exciting problem!
How are you handling the stability of the ultrasound sensor relative to the hand? Usually that's the shakiest/blurriest part.
honestly, I haven't thought about it much yet. I was thinking of calibrating with a fixed hand gesture, or prompting the user to press their palm against a flat surface, but yeah, come to think of it, that might not happen frequently enough.
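For illustration, the one-shot calibration I had in mind would be something like a Kabsch/Procrustes alignment against a known template pose, then reusing that transform on later frames. Sketch only; the keypoint arrays and helper names here are hypothetical:

```python
# Sketch: with the hand held in a known template pose, solve for the rigid
# transform between the sensor frame and the hand frame (Kabsch alignment).
import numpy as np

def calibrate(measured, template):
    """Rigid transform (R, t) mapping sensor-frame keypoints onto the template.

    measured, template: (N, 3) arrays of corresponding hand keypoints.
    """
    mu_m, mu_t = measured.mean(axis=0), template.mean(axis=0)
    H = (measured - mu_m).T @ (template - mu_t)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_m
    return R, t

# Usage idea (hypothetical names): calibrate once, then correct every frame.
# measured = sensor_keypoints_for_calibration_pose()    # (21, 3)
# R, t = calibrate(measured, TEMPLATE_POSE)
# corrected = (R @ live_keypoints.T).T + t
```

The weak point is exactly what you raised: this only holds until the band shifts on the wrist, so it probably needs to be re-triggered or combined with some continuous drift estimate.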