
That is amazing.

I really wouldn't have expected this to work, but once you read about it... of course it would.

This is one of those things that, while there maybe wasn't a 'problem' to begin with, shows how knowing a few critical things lets you solve problems / come up with solutions that nobody would have ever thought of (well, I wouldn't have...).




>I really wouldn't have expected this to work, but once you read about it... of course it would.

It's a fun DIY project and you can get it almost working. I suspect edge cases (stemming from different lighting conditions and imperfect finger detection) would render this method unworkable in general.


The “Filter for skin colors” step would probably need to be fixed. That’s the kind of kludge that pops up all the time in demo programs but doesn’t generalize very well.
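
For reference, the demo-grade version of that step is usually just a fixed threshold in HSV space, something like this (the bounds here are illustrative guesses, not taken from the article):

    import cv2
    import numpy as np

    def skin_mask(frame_bgr):
        # Rough skin filter: fixed thresholds in HSV space. The bounds are
        # illustrative guesses, and exactly the kind of hard-coded kludge
        # that works in a demo but breaks under other lighting or skin tones.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove speckle so the finger shows up as one contiguous blob.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        return mask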


That one's a minefield with lots of issues that _still_ aren't worked out in shipping products: https://www.mic.com/articles/124899/the-reason-this-racist-s...


Right - for example, which skin colors?


This could be worked into the calibration phase pretty easily, I think.
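
One sketch of how (my assumption, not anything from the article): the calibration taps already tell you where the fingertip is, so sample the pixels around it, build a hue/saturation histogram of that user's skin under the current lighting, and back-project it at runtime instead of using hard-coded thresholds:

    import cv2
    import numpy as np

    def learn_skin_histogram(frame_bgr, fingertip_xy, radius=10):
        # During calibration the fingertip location is known, so sample a
        # small patch around it and build a hue/saturation histogram.
        x, y = fingertip_xy
        patch = frame_bgr[y - radius:y + radius, x - radius:x + radius]
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return hist

    def skin_mask(frame_bgr, hist):
        # At runtime, back-project the learned histogram instead of relying
        # on hard-coded skin-color thresholds.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
        _, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
        return mask

The patch radius and histogram bins are still guesses, but at least they're informed by the user's actual hand and lighting rather than a constant baked into the demo.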



Not to mention the constant image processing going on in the background. I wonder how it'd affect battery life / fan noise.


I wonder if this has ever been tentatively tested by the laptop makers, perhaps even by Apple. Surely thought about -- but tested?


I doubt it. An estimate for the iPad puts the cost of the touchscreen controller at $2 [1]. I'm not sure that actually includes the capacitive sensors, but I doubt they add much more cost.

[1] https://www.macobserver.com/imgs/tmo_articles/20100129ipadbo...


Even then, isn't the 'capacitive sensor' just a conductive coating on the glass?


It's a conductive coating, but applied in an intricate pattern and connected to the controller with some very delicate connectors or anisotropic conductive film.

The bonding process adds a bit more cost.


Retrofitting touch onto a non-touch device isn't really a novel idea, though. There were, historically, lots of attempts, especially after the iPhone/iPad release. Here's a more recent (commercialized) one: https://air.bar/pc and here's one from 2008: https://web.archive.org/web/20080709071620/http://www.magict... Here's a 2012 hands-on preview of a product from Leap Motion: https://www.theverge.com/2012/6/26/3118592/leap-motion-gestu... And here's a product using Kinect to add multi-touch functionality to a radiology viewer: https://www.gestsure.com/

It's all been done before.

Again, fun DIY project, limited commercial value.


Possibly in the past, but at this point I think everyone expects touchscreens to be multi-touch, which this can't be, not fully. (Since a higher finger can obstruct a lower one.) It's an awesome gimmick, but no one would try to sell a touch-integrated product with only single-touch now.


Use another mirror on a side, reflecting onto the top one.


There are plenty of Windows laptops with a touchscreen. In Apple's case, macOS isn't really designed for touch.


No, no - I mean using the webcam and a mirror, i.e. there would be a little mirror that would fold out of the screen.


I think his point was that it would be a bit silly to put this kind of solution into a product -- which is, in a word: jank -- when capacitive touchscreens are already a tried and true solution in production devices.


I often think “hand-pointing” interfaces have unexplored / unrealised potential... like, who would want a touch TV screen?! Sure, you could have the touch UX for the TV on your phone... but even better would be if you could just point at the TV and MAGIC. The biggest problem I see is that humans aren't that good at pointing - we can only “aim” with one eye; keeping both eyes open messes it up.


If you had feedback for where you were pointing I think most people would get pretty good pretty quickly (excepting people with movement disorders). It wouldn't be very precise, but I think it would be great for lazily triggering multitouch things like zoom/scroll/page-up/page-down/switch-app etc.


I've often wondered what having one of these on a laptop, tracking space above the keyboard / in front of the screen, would be like: https://www.leapmotion.com/

Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case, but when it is it's pretty magical.


> on a laptop, tracking space above the keyboard

I tried doing this with the internal webcam, using a clip-on fisheye lens, and mirrors. And eventually punted, using the bare webcam only for head tracking, and adding USB cameras on sticks perched on the laptop screen. With more sticks to protect the USB sockets from the cables. And lots of gaff tape.
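
A rough sketch of the bare-webcam head-tracking part, using OpenCV's stock Haar face cascade (a minimal illustration, not the actual setup, which was considerably jankier):

    import cv2

    # Frontal-face Haar cascade that ships with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # internal webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Track the largest detected face; its center stands in for head position.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            cv2.circle(frame, (x + w // 2, y + h // 2), 5, (0, 255, 0), -1)
        cv2.imshow("head tracking", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()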

> leapmotion

Leap Motion has finally been acquired, so the future of the product is unclear. And it's Windows-only (the older and even cruftier version that supports Linux doesn't do background rejection, and so can't be used pointing down at a keyboard). But it has APIs, so you can export the data. My fuzzy impression is it's not quite good enough for surface touch events, but it's ok-ish for pose and gestures. When the poses don't have a lot of occlusion. And the device is perched on a stick in front of your face.
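
Exporting the data looks roughly like this, assuming the classic Leap Motion v2 Python bindings (the Leap.py that shipped with the old SDK; newer SDKs differ):

    import Leap  # ships with the old Leap Motion SDK, not on PyPI

    controller = Leap.Controller()

    def poll_hands():
        # Poll the most recent frame and pull out per-hand data to feed
        # into your own pose/gesture logic.
        frame = controller.frame()
        for hand in frame.hands:
            pos = hand.palm_position      # Leap.Vector, millimetres
            pinch = hand.pinch_strength   # 0.0 (open) .. 1.0 (pinched)
            print(hand.is_left, pos.x, pos.y, pos.z, pinch)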

> Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case

I fuzzily recall some old system (Lisp Machine?) as having a status bar with a little picture of a mouse, and telling what its buttons would do if pressed. And a key impact of VR/AR is having more and cheaper UI real estate to work with. So always showing what gestures are currently available, and what they do, should become feasible.

Even on a generic laptop screen, DIYed for 3D, it seems you might put such secondary information semitransparently above the screen plane. And sort of focus through it to work. Making the overlay merely annoying, rather than intolerable.

But when it all works, yeah, magical. Briefly magical. The future is already here... it just has a painfully low duty cycle. And ghastly overhead.


I've often wanted some eye tracking device when working on more than one screen. Like, when looking at my terminal screen I just want to start typing, instead of alt-tab'ing to it first.
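
A crude version of that is doable today on X11, given any gaze source at all: map the gaze point to a screen and activate a chosen window there. A minimal sketch, with a hypothetical get_gaze_point() and made-up window IDs (real ones come from xdotool search):

    import subprocess

    def get_gaze_point():
        # Hypothetical gaze source: a real setup would read screen-space
        # (x, y) coordinates from an eye-tracker SDK or an open-source tracker.
        raise NotImplementedError("plug your eye tracker in here")

    # Map each physical screen to the window that should get focus there.
    # The window IDs below are placeholders; find real ones with
    # "xdotool search --name <window title>".
    SCREENS = [
        {"x_range": (0, 1920),    "window_id": "0x04000007"},  # left: terminal
        {"x_range": (1920, 3840), "window_id": "0x04200003"},  # right: editor
    ]

    def focus_follows_gaze():
        gx, gy = get_gaze_point()
        for screen in SCREENS:
            lo, hi = screen["x_range"]
            if lo <= gx < hi:
                # X11 only: raise and focus the window on the screen being looked at.
                subprocess.run(["xdotool", "windowactivate", screen["window_id"]])
                break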


I actually saw a demo of this (plus more; it was a “universal remote”) at Cal Hacks a few years ago. They used a Myo for gesture recognition and the whole thing was really slick.


So a Wii with fewer steps?


WRONG! I had a working touchscreen on my Dell laptop, but when I went on vacation to Cuba the mouse pointer moved at random. I noticed it worked properly in my room after half an hour, or early in the morning if I was outside.

But once it got warm and humid it was unusable. I had to turn off the touchscreen, and then I could at least use the mouse.

This hack would work even when the air is humid.


That sounds more like a failure to do humidity tests during development than a fundamental failure mode of capacitive touchscreens.


And more laptops without any touchscreen. Because it's far quicker and more convenient to move a single finger across a little touchpad instead of the whole hand across a 13+ inch near-vertical screen. I used a Surface Pro for 2 years and in "laptop mode" I used the touchpad 95% of the time, and now and then the touchscreen to scroll or zoom a website.

The only thing I don't get is why Apple didn't create MBP touchpads compatible with their digitizer. When I saw the announcement of the first MacBook with this new huge touchpad, that seemed so obvious to me, and I was baffled they only went for that strange Touch Bar.


I've got a Samsung tablet with Linux on Dex. It's Ubuntu 16.04 with Unity in a LXC container running on the Android Linux kernel.

I use a mouse most of the time, but when I have to press buttons on dialog boxes it's easier to raise my hand from the keyboard and push the button. Same for touching a window to raise it above the others.

The size of the button, or the exposed size of the window, makes the difference. There must be plenty of space or mistouches kill the experience. Resizing windows is something I do only with the mouse.

A hybrid approach, mouse/touchpad plus touchscreen, is the best IMHO. If my next laptop has a touchscreen option I'll buy it.


macOS supports touch input. I used it with a Dell touchscreen monitor back in around 2012, I think. Windows* isn't really "designed for" touch either; it's tacked on. Most proper desktop apps (on any OS) are a pain to use with touch for more than a few minutes at a time.

* Not counting the "Metro" monstrosity that briefly reared its head during Windows 8; I'm not sure if it's still around.


Microsoft is working on transitioning the Windows ecosystem to UWP (which uses the Metro design language), which is designed to work equally well with mice and touch (and everything else, like the Xbox and HoloLens).

Of course even 4 years after the Windows 10 launch you still have the new UWP settings app and the old Control Panel. It's a bit of a mess.


Just a detail: Metro is dead; the current design philosophy from Microsoft is Fluent: https://www.microsoft.com/design/fluent/#/


Can you, say, scroll in different windows simultaneously with multiple fingers on a Windows tablet, like you can on iPad Split View? How does it handle focus?


*equally well*, in this context, of course means the same as *equally poorly*.


My Windows touchscreen system failed when it got too humid.



