I really wouldn't have expected this to work but once you read about it.... of course it would.
This is one of those things that, while there maybe wasn't a 'problem' to begin with, shows how, knowing a few critical things, you can solve problems / come up with solutions that nobody else would ever have thought of (well, I wouldn't have...).
It's a fun DIY project and you can get it almost working. I suspect edge cases (stemming from different lighting conditions and imperfect finger detection) would render this method unworkable in general.
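Those lighting edge cases show up even in a toy fingertip detector. Here's a minimal sketch (hypothetical, not the project's actual pipeline) that finds the fingertip as the topmost pixel of a brightness-thresholded blob — the fixed threshold is exactly the assumption that breaks when ambient lighting changes:

```python
import numpy as np

def find_fingertip(frame, threshold=200):
    """Return (row, col) of the topmost pixel brighter than `threshold`,
    or None if no pixel qualifies. `frame` is a 2-D grayscale array."""
    mask = frame >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    i = rows.argmin()              # topmost bright pixel ~ fingertip
    return int(rows[i]), int(cols[i])

# A bright "finger" on a dark background is located correctly...
dark = np.zeros((120, 160), dtype=np.uint8)
dark[40:100, 75:85] = 230          # simulated finger blob
print(find_fingertip(dark))        # → (40, 75)

# ...but raise the ambient brightness and the fixed threshold fails:
bright = np.clip(dark.astype(int) + 210, 0, 255).astype(np.uint8)
print(find_fingertip(bright))      # → (0, 0): the whole frame passes
```

A real pipeline would adapt the threshold per-frame (or use color/motion cues), but the same fragility just moves around rather than disappearing.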
The bonding process adds a bit more cost.
It's all been done before.
Again, fun DIY project, limited commercial value.
Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case, but when it is it's pretty magical.
I tried doing this with the internal webcam, using a clip-on fisheye lens, and mirrors. And eventually punted, using the bare webcam only for head tracking, and adding usb cameras on sticks perched on the laptop screen. With more sticks to protect the usb sockets from the cables. And lots of gaff tape.
Leap Motion has finally been acquired, so the future of the product is unclear. And it's Windows-only (the older, even cruftier version that supports Linux doesn't do background rejection, so it can't be used pointing down at a keyboard). But it has APIs, so you can export the data. My fuzzy impression is that it's not quite good enough for surface touch events, but it's ok-ish for pose and gestures. When the poses don't have a lot of occlusion. And the device is perched on a stick in front of your face.
> Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case
I fuzzily recall some old system (Lisp Machine?) as having a status bar with a little picture of a mouse, and telling what its buttons would do if pressed. And a key impact of VR/AR is having more and cheaper UI real estate to work with. So always showing what gestures are currently available, and what they do, should become feasible.
Even on a generic laptop screen, DIYed for 3D, it seems you might put such secondary information semitransparently above the screen plane. And sort of focus through it to work. Making the overlay merely annoying, rather than intolerable.
But when it all works, yeah, magical. Briefly magical. The future is already here... it just has a painfully low duty cycle. And ghastly overhead.
But once it got warm and humid it was unusable. I had to turn off the touchscreen, and then I could at least use the mouse.
This hack would work even when the air is humid.
The only thing I don't get is why Apple didn't make MBP touchpads compatible with their digitizer. When I saw the announcement of the first MacBook with that huge new touchpad, it seemed so obvious to me, and I was baffled that they only went for that strange Touch Bar.
I use a mouse most of the time, but when I have to press buttons in dialog boxes it's easier to raise my hand from the keyboard and push the button. Same for touching a window to raise it above the others.
The size of the button, or the exposed area of the window, makes the difference. There must be plenty of space, or mistouches kill the experience. Resizing windows is something I do only with the mouse.
A hybrid approach, mouse/touchpad plus touchscreen, is the best IMHO. If my next laptop has a touchscreen option, I'll buy it.
* Not counting the "Metro" monstrosity that briefly reared its head during Windows 8; I'm not sure if it's still around.
Of course, even 4 years after the Windows 10 launch you still have both the new UWP Settings app and the old Control Panel. It's a bit of a mess.