Turning a MacBook into a Touchscreen with $1 of Hardware (2018) (anishathalye.com)
1724 points by soegaard on Aug 6, 2019 | 214 comments



That is amazing.

I really wouldn't have expected this to work but once you read about it.... of course it would.

This is one of those things that, while there maybe wasn't a 'problem', shows how knowing a few critical things lets you solve problems / come up with solutions that nobody would have ever thought of (well, I wouldn't have...).


>I really wouldn't have expected this to work but once you read about it.... of course it would.

It's a fun DIY project and you can get it almost working. I suspect edge cases (stemming from different lighting conditions and imperfect finger detection) would render this method unworkable in general.


The “Filter for skin colors” step would probably need to be fixed. That’s the kind of kludge that pops up all the time in demo programs but doesn’t generalize very well.


That one's a minefield with lots of issues that _still_ aren't worked out in shipping products: https://www.mic.com/articles/124899/the-reason-this-racist-s...


Right - for example, which skin colors?


This could be worked into the calibration phase pretty easily, I think.
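Roughly something like this, maybe (a minimal OpenCV sketch of the idea, not the project's actual code; the ROI and the 2.5-sigma band are made up): sample a patch where the user holds a fingertip during calibration and build a per-user HSV range from it, instead of hard-coding "skin color".

```python
# Hypothetical calibration-based skin filter: sample the user's fingertip once,
# then threshold on a per-user HSV range instead of a hard-coded "skin color".
# (Ignores hue wrap-around for very red skin tones; good enough for a sketch.)
import cv2
import numpy as np

def calibrate_skin_range(frame_bgr, roi):
    """roi = (x, y, w, h): where the user held a fingertip during calibration."""
    x, y, w, h = roi
    patch = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    pixels = patch.reshape(-1, 3).astype(np.float32)
    mean, std = pixels.mean(axis=0), pixels.std(axis=0)
    lower = np.clip(mean - 2.5 * std, 0, 255).astype(np.uint8)
    upper = np.clip(mean + 2.5 * std, 0, 255).astype(np.uint8)
    return lower, upper

def skin_mask(frame_bgr, lower, upper):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening knocks out speckle so the finger stays one blob.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```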



Not to mention the constant image processing going on in the background. I wonder how it'd affect battery life / fan noise.


I wonder if this has ever been tentatively tested by the laptop makers, perhaps even by Apple. Surely thought about -- but tested?


I doubt it. An estimate for the iPad puts the cost of the touchscreen controller at $2 [1]. I'm not sure that actually includes the capacitive sensors but I doubt they add much more cost.

[1] https://www.macobserver.com/imgs/tmo_articles/20100129ipadbo...


Even then, isn't the 'capacitive sensor' just a conductive coating on the glass?


It's a conductive coating, but applied in an intricate pattern and connected to the controller with some very delicate connectors or anisotropic conductive film.

The bonding process adds a bit more cost.


Bootstrapping a non-touch device for touch isn't really a novel idea though. There were, historically, lots of attempts, especially after the iPhone/iPad release. Here's a more recent (commercialized) one: https://air.bar/pc and here's one from 2008: https://web.archive.org/web/20080709071620/http://www.magict... ... Here's a 2012 hands-on preview of a product from Leap Motion: https://www.theverge.com/2012/6/26/3118592/leap-motion-gestu... . Here's a product using Kinect to add multi-touch functionality to a radiology viewer: https://www.gestsure.com/

It's all been done before.

Again, fun DIY project, limited commercial value.


Possibly in the past, but at this point I think everyone expects touchscreens to be multi-touch, which this can't be, not fully. (Since a higher finger can obstruct a lower one.) It's an awesome gimmick, but no one would try to sell a touch-integrated product with only single-touch now.


Use another mirror on a side, reflecting onto the top one.


There are plenty of Windows laptops with a touchscreen. In Apple's case, macOS isn't really designed for touch.


No-no - I mean using the webcam and a mirror. I.e. there would be a little mirror that would fold out of the screen.


I think his point was that it would be a bit silly to put this kind of solution into a product -- which is, in a word: jank -- when capacitive touchscreens are already a tried and true solution in production devices.


I often think “hand-pointing” interfaces have unexplored / unrealised potential... like, who would want a touch TV screen?! Sure, you could have the touch UX for the TV on your phone... but even better would be if you would be able to just point to the TV and MAGIC. The biggest problem I see is that humans aren’t that good at pointing - we can only “aim” with one eye, keeping both eyes open messes it up.


If you had feedback for where you were pointing I think most people would get pretty good pretty quickly (excepting people with movement disorders). It wouldn't be very precise, but I think it would be great for lazily triggering multitouch things like zoom/scroll/page-up/page-down/switch-app etc.


I've often wondered what having one of these on a laptop, tracking space above the keyboard / in front of the screen, would be like: https://www.leapmotion.com/

Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case, but when it is it's pretty magical.


> on a laptop, tracking space above the keyboard

I tried doing this with the internal webcam, using a clip-on fisheye lens, and mirrors. And eventually punted, using the bare webcam only for head tracking, and adding usb cameras on sticks perched on the laptop screen. With more sticks to protect the usb sockets from the cables. And lots of gaff tape.

> leapmotion

Leap Motion has finally been acquired, so the future of the product is unclear. And it's Windows-only (the older and even cruftier version that supports Linux doesn't do background rejection, and so can't be used pointing down at a keyboard). But it has APIs, so you can export the data. My fuzzy impression is it's not quite good enough for surface touch events, but it's ok-ish for pose and gestures. When the poses don't have a lot of occlusion. And the device is perched on a stick in front of your face.

> Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case

I fuzzily recall some old system (Lisp Machine?) as having a status bar with a little picture of a mouse, and telling what its buttons would do if pressed. And a key impact of VR/AR is having more and cheaper UI real estate to work with. So always showing what gestures are currently available, and what they do, should become feasible.

Even on a generic laptop screen, DIYed for 3D, it seems you might put such secondary information semitransparently above the screen plane. And sort of focus through it to work. Making the overlay merely annoying, rather than intolerable.

But when it all works, yeah, magical. Briefly magical. The future is already here... it just has a painfully low duty cycle. And ghastly overhead.


I've often wanted some eye tracking device when working on more than one screen. Like, when looking at my terminal screen I just want to start typing, instead of alt-tab'ing to it first.


I actually saw a demo of this (plus more; it was a “universal remote”) at Cal Hacks a few years ago. They used a Myo for gesture recognition and the whole thing was really slick.


So a Wii with fewer steps?


WRONG! I had a working touch screen in my Dell laptop, but when I went on vacation to Cuba the screen mouse moved at random. I noticed it worked properly in my room after half an hour, or early in the morning if I was outside.

But once it got warm and humid it was unusable. I had to turn off the touchscreen, and then I could at least use the mouse.

This hack would work even when the air is humid.


That sounds more like a failure to do humidity tests during development than a fundamental failure mode of capacitive touchscreens.


And more laptops without any touchscreen. Because it's far quicker and more convenient to move a single finger across a little touchpad instead of the whole hand across a 13+ inch near-vertical screen. I used a Surface Pro for 2 years and in "laptop mode" I used the touchpad 95% of the time, and now and then the touchscreen to scroll or zoom a website.

The only thing I don't get is why Apple didn't create MBP touchpads compatible with their digitizer. When I saw the announcement of the first Macbook with this new huge touchpad that seemed so obvious to me, and I was baffled they only went for that strange touchbar.


I've got a Samsung tablet with Linux on Dex. It's Ubuntu 16.04 with Unity in a LXC container running on the Android Linux kernel.

I use a mouse most of the time but when I have to press buttons on dialog boxes it's easier to raise my hand from the keyboard and push the button. Same for touching a window to raise it above the others.

The size of the button or the exposed size of the window makes the difference. There must be plenty of space, or mistouches kill the experience. Resizing windows is something I do only with the mouse.

A hybrid approach, mouse/touchpad and touchscreen is the best IMHO. If my next laptop has a touchscreen option I'll buy it.


macOS supports touch input. I used it with a Dell touchscreen monitor back in around 2012 I think. Windows* isn't really "designed for" touch either; it's tacked on. Most proper desktop apps (on any OS) are a pain to use with touch for more than a few minutes at a time.

* Not counting the "Metro" monstrosity that briefly reared its head during Windows 8; I'm not sure if it's still around.


Microsoft is working towards transitioning the Windows ecosystem towards UWP (which uses Metro design), which is designed to work equally well with mice and touch (and everything else like the XBox and HoloLens).

Of course even 4 years after the Windows 10 launch you still have the new UWP settings app and the old Control Panel. It's a bit of a mess.


Just a detail: Metro is dead, the current design philosophy from Microsoft is Fluent https://www.microsoft.com/design/fluent/#/


Can you, say, scroll in different windows simultaneously with multiple fingers on a Windows tablet, like you can on iPad Split View? How does it handle focus?


"Equally well", in this context, of course means the same as "equally poorly".


My Windows touchscreen system failed when it got too humid.


So I was expecting one of those sensors that can turn a TV into a touchscreen, with like an IR sensor or something, so I checked this wondering how they were so cheap.

I was delighted to see this is so much better. Figuring out touch events from the reflection is really cool. Well done.


I remember following the work of this guy, Johnny Chung Lee, who did a ton of really neat human-interface stuff with the Wii remote's IR sensor: http://johnnylee.net/projects/wii/


I remember making a smart whiteboard with his software back in the day! It worked surprisingly well, even in a teacher's classroom, with an IR LED pen bought from an online store.


What they did not factor in was the cost of the software.

You can make junkyard bound stuff do amazing things when you program them right, but you pay for that programming or time spent.


The software is a "one-time cost" (in quotes due to maintenance etc.), but speaking in broad strokes it wouldn't impact the claim of a $1 touchscreen.

Or to put it another way, you wouldn't invent a new way to desalinate seawater and then say the cost is $10/litre instead of 2 cents because of R&D on the device you made...


> invent a new way to desalinate seawater and then say the cost is $10/litre

Depends on the marketing strategy, most likely. But I like GP's point about undervaluing programmer time. It is non-free and everyone down to the programmers themselves tend to grossly underestimate its cost.

Humans are kind of expensive to keep alive and happy enough to program willingly

/That last statement had more caveats than I wanted...


Yes, dead humans are probably cheaper to sustain and never get too unhappy to program willingly.


It really depends on their intention, whether they consider that part of the work DIY or not.

If I paint my wall, I would generally say it costs the price of the paint, not whatever I'd have to pay to have someone do it for me.

Or what about the people who tinker with their own smart home solutions with Arduinos and Raspberry Pis. If you factor in the price of work/research/coding, it would be very hard to justify not buying an off-the-shelf solution instead.


> If I paint my wall, I would generally say it costs the price of the paint

That's the mentality I'm challenging here. Mentally you don't account for the cost of your time, but from an outside perspective, you're spending it all the same.

That you have to factor in the price of labor with DIY electronics is the main reason you don't see more of those easy electronics projects available as some off-the-shelf contained product (where's my IoT API-accessible thermometer for $10!!!???).


In economic terms, the marginal cost is low.


We already socialize and use effectively 'free' labor from universities.

And then it gets turned around and copyrighted/patented anyways.

So yeah, it should cost $0.02/gal. It's only because of profiteering goons at every level that essential human needs get privatized and 'owned'.


Pharma would like to have a word with you.


The programmer has donated their time for free. The code has an MIT license.


Next step is to gauge the amount of blood underneath the fingernail to tell the amount of pressure being applied.


I had an idea years ago using a webcam and the 'blood under fingernail' to have a trackpadless-trackpad (i.e. same as this project but on your table).

Maybe some time-rich person will fork it and do so.


Or track your eye/ eye lens


Haha, acid.


This is exactly what a hackathon project should be. Delightful, innovative and scoped small enough that they produce results without too much bullshit. Not the usual "world changing" hacks that end up abandoned the moment the event concludes. If only events encouraged more hacks like these.


I especially liked the way it was presented on the website, with short videos for each step - made it really easy to follow!


>that end up abandoned the moment the event concludes

What makes this one different in this regard?

(I'm not trying to say it's a meaningless project.)


The difference is that when this is abandoned, it is complete.


Bingo.


The last hackathon I attended, the winners just built something from an Instructable that existed well before the hackathon.

My friends and I built a time machine radio (i.e. you tune it to a decade, with a knob and everything, and radio news from the decade was included if we could find it) and won best hardware hack, so it wasn't a total loss. Had a blast either way.


> The last hackathon I attended, the winners....

(Not a comment on you, but you poked the button)

While my hackathon experience is limited, I think any attempt to crown a "winner" reduces the experience. Of course, maybe I'm too sensitive, because everyone can still just enjoy it... but as soon as there's a winner there start to be rules that people try to game, there are added restrictions and limits that reduce what is being created, etc. And creating is the point.


However, you do want to show off the more interesting/higher quality projects, to encourage further work, and get the whole thing jiving (people enjoy competition; it oils the machines)

The problem, I think, is that the judges themselves are incompetent/careless — when you promote a "bad" winner, you just discourage everyone from trying, because they realize there is no value to this kind of promotion; it's bullshit.

I had a similar experience in my university — some kind of business project idea competition; we submitted my lab's current project, and there were other interesting projects involved that I would have been happy to lose to.

Instead I lost to the girl with the contentless idea for edible spoons for developing countries... an idea I’d seen repeatedly in recent news cycles because some NGO in India was doing it for the last three years.

Five minutes of googling would have discovered this, but instead we've simply ignored the competition since.

Attending itself was valuable, along with the half-attempt at winning — we clarified the project and ideas, and had an excuse to clean things up — so it would have been a good exercise to do yearly (and probably for the other projects involved), and the competition was healthy.

But it just takes an incompetent judge to spoil the whole thing.

There's nothing wrong with people trying to game the competition; it's a natural function of competing. But the judges are meant to operate as experts, and an expert should have (some) ability to discern bullshit from something real.


Agreed. I wish hackathon judges actually understood how to question and analyze projects critically. I attended probably one of the most prestigious and well-regarded college hackathons, where the grand prize was won by a project that did speech to text, then text to sign language. What nobody pointed out or acknowledged was that deaf people can read. But since the project was done on a HoloLens the judges got swept away by the fancy AR and awarded them the prize.

I've become extremely skeptical of the hackathon model in general. It doesn't really produce good projects or products, it encourages an unsustainable style of working and it's a terrible place to recruit good developers. It can encourage people to build stuff, but only monkey patched, bursting with buzzwords, overly flashy projects that are thrown away at a moment's notice.


>What nobody pointed out or acknowledged, was that deaf people can read

I actually did something similarly stupid in college; for an embedded development class, my team needed a final project. I proposed a hat for the blind, with distance sensors and buzzers on the inside to indicate the distance from walls and such. We also used black conductive thread on a black hat, so it was impossible to work with except for the girl who threaded it in the first place.

Easily the professor's favorite project, and I guess he still uses it as an example for future projects (and at some point he had some team extending it into some kind of backpack?)

Later a friend working at a retail store selling products for the disabled contacted me about the project; apparently his boss was interested.

What no one acknowledged is that blind people are not walking into walls... the cane is a far more practical, efficient, and effective tool than this thing could ever be.

I had meant it to be a joke.


Rebrand this as a device for people to not waste precious time looking where they're walking, and it's a gold mine.


"Ever worried about those people who are reading their phone while walking around? Now you can randomly put this hat on their head."

The best thing about it is, the people who need and want the device aren't the ones walking around reading their phone, it's the ones who can't help but mind someone else's business - and this merely feeds their need rather than satisfies it. So there will be a never-ending market of unsatisfied busybodies.


Yep, this is truly awesome. These types of hacks are what got me into computing as a kid. Reminds me of early Hack-A-Day content before most of their posts became Arduino/Raspberry Pi centric.


https://www.playosmo.com/en/ <-- this also uses a mirror to refocus the built-in camera but on another surface, not the screen.


Hmm, given the camera is on top, computing a homography and rectifying the perspective image would give fairly good precision around the camera, falling off quickly with distance and likely not being very useful at the bottom of the screen. Also, it's likely sensitive to both the ambient light and the dominant light on the screen; in dark rooms with dark themes it might be pretty far off, as most computer vision algorithms need constant tuning for changing conditions.
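For what it's worth, the rectification step itself is only a few lines with OpenCV. A sketch, assuming the four screen corners were picked out of the webcam image during calibration (the coordinates below are made up):

```python
# Sketch: rectify webcam coordinates to screen coordinates via a homography.
# The four corner points are assumed to come from a manual calibration step.
import cv2
import numpy as np

# Screen corners as seen in the webcam image (TL, TR, BR, BL) - made-up values.
image_corners = np.float32([[102, 40], [538, 52], [600, 410], [60, 420]])
# Corresponding screen coordinates for a 1440x900 display.
screen_corners = np.float32([[0, 0], [1440, 0], [1440, 900], [0, 900]])

H = cv2.getPerspectiveTransform(image_corners, screen_corners)

def to_screen(x, y):
    """Map a detected fingertip (x, y) in the webcam image to screen pixels."""
    pt = np.float32([[[x, y]]])          # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(pt, H)[0, 0]
```

The precision falloff drops out of this directly: near the camera one image pixel covers a small patch of screen, while at the far end of the screen the same one-pixel error maps to a much larger screen distance.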


Maybe you could add an IR LED that could be observed by the webcam, but not in the visible spectrum. I’m not sure if the MacBook webcam includes an IR filter or not. It wouldn't help the precision, but might make the camera less susceptible to changes in ambient light.


Even $5 webcams include an IR filter; the image quality without one is pretty awful, so I'm pretty sure that Apple webcams have it.


I recently shined a remote control into my late 2013 MacBook Pro's camera and I totally saw the IR light. Not saying that you're wrong; I don't know if there are filters that only block certain specific wavelengths, or something like that.


Remote controls are pretty bright IR, so it could be just that it only blocks up to a certain brightness.


Makes sense. Direct IR light is probably harder to block.


Even in my webcam, before removing the IR filter, I could see the faint remote light when pointing it directly at the camera. After removing the IR filter I could literally see in a room without lights, lit just by the remote.


Could use a mirror "broken" in half to get stereo vision, maybe.


Wow, this is a great idea! Looking at the video, it looks like the latency isn't great; what kind of problems would need to be solved for this to get to ~100ms latency?

Rewriting it in C/C++/Rust instead of Python would be one... Any ideas?


Webcams usually have somewhat poor latency, at least in my experience. And I think the performance critical code is all in OpenCV, not Python.


Webcams don't have memory, so they can't have latency. It's all in the software. In fact, I've done some apps on Linux that just did readout of the frame from the camera sensor to ram, and displayed the buffer on the next framebuffer flip with zero copies. There was no perceived latency.

It just takes some work.


Web cams do have memory, and do a remarkable amount of post processing on the frames, hence the latency.

Source: have used used webcam cube cams in a robotics application where latency needed to be accounted for.


I guess it depends on how the camera chip is configured.

But all kinds of post-processing can be (and is) done on readout (pixel by pixel) without needing to keep the whole frames and go through them. You just keep some calculated parameters from previous frames (and not the whole frames).

You can do a lot of processing this way, incl. scaling, white balance, color correction, effect filters, etc.

Web cams add usb interface and an additional controller chip, so maybe there's some added latency there. But if you use the camera sensor directly over CSI, you can get pretty low latency.


There was ~100ms of latency measured for CSI going directly into an FPGA. The vendor considered the post processing to be part of their value add, and had no way to access the non buffered output, even if you disabled the obvious stuff like noise reduction.


Not to drag this too far off topic, but what kind of post-processing? And is it standardized across different types of webcams?


Noise reduction is a big one. You usually need to interpolate over several frames to do that.

Also, depending on the bus being used, cameras often compress the image using JPEG to reduce the bandwidth (very common with USB cameras; I doubt that the Mac camera does that).

Then on a computer you have the latency of whatever pipeline is being used and how much buffering is involved. For instance since typically the display doesn't run exactly synchronized with the camera you'll have some frames that'll end up "stuck" waiting for the monitor's vsync. If the app then does some internal buffering on top of that you can very easily reach a noticeable amount of latency.

In general you can consider that anything above 100ms is easily noticeable, and that's only 3 frames at 30fps and 6 at 60; it's really not that much when you consider the complexity of modern video capture pipelines.


Why use averaging of multiple frames instead of just raising the exposure time? I guess it can lead to smaller rhomboid distortion due to rolling shutter? But that should not increase latency, compared to an equivalent increase in exposure time - so that we're comparing apples with apples.


I expect it's more complicated than just simple averaging.


Not really standardized, as there is a broad range of camera hardware out there.

A pipeline might look something like demosaicing -> denoising -> light/exposure adjustment -> tone mapping -> encoding

Some of these steps will likely have hardware support (e.g. demosaicing, encoding), some won't. You may carry a few frames around for denoising, for example. You have to sync up with the display at some point. Some of the steps might change order too, depending on what you are trying to do and what hardware support you have.

I haven't looked at available hardware specs recently. You used to be able to get very basic capture cameras that offloaded nearly everything to the computer - so that would give you minimal latency on the capture side (limited by your communication channel, obviously); however, keeping a stable image as resolution and frame rate grow is really hard that way, and eventually impossible without a realtime system. More modern cameras are going to do much of that on board.


While not post-processing, often you can configure the exposure duration and frame rate which will make things feel more or less responsive.


This is, as much as possible, what the Camera cue in QLab [1] does (I'm the lead video developer), and there most definitely is perceived latency. It's better than it used to be, when webcams were DV over FireWire, but it's still enough that webcams are generally a poor choice for latency-sensitive situations like live performance.

The alternative that usually gets used in productions with a moderate budget is a hardware camera with HDMI or SDI output into a Blackmagic capture device. There's still some latency, but it's better than a webcam. It's pretty much down to the minimum of what you can do with a computer -- the capture device still has to buffer a frame, transmit it to the computer, which copies it into RAM, then blits it to video RAM, where it waits for the next vertical refresh.

All that said, the human finger doesn't travel all that fast, so for a HID application like this, you could probably get the latency way down by extrapolating the near-future position of the finger, much like iPadOS does with Pencil.

[1] https://figure53.com/qlab
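The extrapolation part, at least, is cheap. A toy constant-velocity predictor (purely illustrative, not what iPadOS actually does, and the lookahead value is a guess):

```python
# Toy constant-velocity predictor: report where the finger will probably be a
# couple of frames from now, to hide part of the capture/processing latency.
class TouchPredictor(object):
    def __init__(self, lookahead_s=0.066):   # roughly 2 frames at 30 fps
        self.lookahead_s = lookahead_s
        self.prev = None                      # last observation: (t, x, y)

    def update(self, t, x, y):
        if self.prev is None:
            self.prev = (t, x, y)
            return x, y
        pt, px, py = self.prev
        dt = max(t - pt, 1e-6)
        vx, vy = (x - px) / dt, (y - py) / dt   # finite-difference velocity
        self.prev = (t, x, y)
        # Extrapolate along the current velocity; overshoot on sharp reversals
        # is the usual failure mode of this kind of prediction.
        return x + vx * self.lookahead_s, y + vy * self.lookahead_s
```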


>Webcams don't have memory, so they can't have latency.

You don't need memory to have latency. Speed of light / electrons is enough...

That said, webcams absolutely have extra processing (and thus latency).


If you can't store the frame, and have to send it out on a shift out from CMOS, where do you get the latency? The processing has to be real-time if you can't store multiple frames.


YMMV, but I've experienced slight webcam lag on Macbooks even with Apple-optimized applications such as Photo Booth and FaceTime. Perhaps the biggest bottleneck is at the hardware / driver level instead of at the CPU-cycles level?


> Rewriting it in C/C++/Rust instead of Python would be one...

Would it? It's using libraries for the heavy lifting already.


Slightly off topic, but C/C++/Rust are always the go-to languages when talking about high performance. People love the speed of C but it's so bare-bones that they move to C++, or it's too dangerous so they move to Rust. But those are much too complicated, so people came up with Zig and Nim and D instead, but those are very far from C, even though they're not as far as C++ and Rust. So I'm wondering, are there any languages that stay almost as close to C but are slightly safer and slightly less bare-bones? Because I like C a lot; I think it's a great language with an excellent simplicity, except for some slight rough edges.


The gap between C's model and a somewhat memory safe language is unfortunately very large. You can either limit semantics a lot, use GC (mostly-D, Go, ...) or have relatively complex ways to describe life times (Rust).


If you're looking for "slightly", there are libraries to use that expose macros and function calls, instead of directly accessing pointers, along with the "n" version of libc functions.


Unfortunately no.

We could have a better/safer C but alas we don't. It's either C or all the way to Zig/Rust/etc territory (or using C++ like C).


Zig seems very close to C to me. What makes you think it's different?


Crystal? Hasn't reached 1.0 though.


I think Crystal is garbage-collected, right?


Yeah, the whole language is designed around it. Impossible to remove without rewriting it from scratch: https://github.com/crystal-lang/crystal/wiki/FAQ#language-x-...


Maybe ATS? C--? Cython?


I'd be interested in a version that used the Vision Framework. Back when I looked at the Apple frameworks for iOS, they utilized the Neural Engine hardware available on iPhones while OpenCV didn't (that may have changed).

The laptop hardware is different of course, but it would be an interesting performance comparison.

Here's an interesting overview of using Core frameworks with Python: https://developer.apple.com/videos/play/wwdc2018/719/


It looks like it's just applying a bunch of filters/convolutions to the image. You could write it in C++ to be pretty quick, in CUDA you could get it down to microseconds I think.

Though that does not take account of the latency of the webcam itself.


Too bad you can't get CUDA easily on a recent Macintosh.



You’ll also need an NVIDIA GPU. All of the MacBooks that came with NVIDIA GPUs are from 2010 and earlier. That’s quite a while ago in computer age.

https://support.apple.com/en-us/HT204349#nvidia


You could use one via the eGPU support in most newer Macs: https://support.apple.com/en-us/HT208544


Yeah but then you haven’t turned a MacBook into a Touchscreen with $1 of Hardware. You’ve turned a MacBook into a Touchscreen with $price_of_GPU + $price_of_eGPU_adapter + $1.

And additionally you’ve made your MacBook a lot less portable.


I'm always amused when Mac people ask me why I don't use Macs and I tell them "I program CUDA for a living" and they respond "But you can jerry-rig a PCIx slot in a box over a wonky cable and plug it into your Thunderfire port" or something. As if that's a real solution.


There’s no need to do any of that. Commercially-available eGPU boxes[1] work well, no jank required.

[1]: https://www.sonnettech.com/product/egfx-breakaway-box.html


Commercially available jank is still jank. And the matter of portability (why else would you be using a laptop in the first place?) still remains. Forget walking around town with it in a bag; if I am trying to work on my deck chair outside, where do I put that mother of all dongles? Balance it precariously on the chair's armrest? No thanks!


I wonder if Mac users are pulling our legs, or if they are True Believers. Like it or not, a significant number of people doing signal processing, simulation, AI, and even playing games, need or strongly prefer Nvidia and CUDA. And for them, Macs are not an option.


I know we frown upon the kind of comment I’m about to make but I really feel the need to point this out right here; relevant username!


Just can't update past High Sierra. No Metal drivers for Nvidia = no longer working even with scripts.


MacBooks haven't had Nvidia GPUs for a few years.


They have also supported eGPUS for years though...


Huh? You have to go back two versions of the OS in order to follow those instructions.


You can do all that in Metal -- though as GP said, it doesn't matter, because there's fixed latency at the camera input.


I had a laptop with a touchscreen and the only time I ever used it was when the linux mouse drivers were playing up. So I guess that would be one use.


They are asking what kind of problems need to be solved in order to speed up the latency.

They are not asking what kind of problems having a touch screen on a laptop would solve.


Wow! This is really creative. Gets my gears turning on all the other applications for using opencv as an interface!


I love this. When I was young I saw a hack for an Amstrad which added light pen support. If I recall, it effectively synced with the scans on the CRT, which tells you your coordinates. I wanted to build it so much as a kid, but never could. It's the sort of thing that really fired my imagination and helped me fall in love with computers.


I wonder what the field of view is like. The demos only show it being used towards the middle of the lower half of the screen - can the camera really see the top left or right corners?


From the conclusion:

> With some simple modifications such as a higher resolution webcam (ours was 480p) and a curved mirror that allows the webcam to capture the entire screen, Sistine could become a practical low-cost touchscreen system.


The hovering aspect of this makes it something quite special, a feature normal touchscreens don't have. I imagine CSS :hover elements would feel nice with this.


Maybe next someone will build a “camera keyboard“ for my iPad. Turn any surface into a keyboard with a camera and machine vision.


Do you mean similar to a projection keyboard[0] but without actually projecting a keyboard onto the surface and using the integrated camera?

[0] https://en.wikipedia.org/wiki/Projection_keyboard


Sure. Ideally the keyboard wouldn’t have to look exactly like a standard keyboard. It could be split, and perhaps tented.

Found this Kickstarter project that’s now a company. http://serafim-tech.com/products/serafim-keybo-world-s-most-...


Interesting, thank you for the link!

I'm not at all interested in this as a keyboard per se, but the fact that it functions as a piano keyboard as well gives me ideas. I wonder if the latency and resolution would allow for things like automatic transcription of written documents as you write, a mouse replacement, or other cool things that I haven't thought of yet :)

They claim they have an SDK. I'll have to explore this some more when I have time later this evening.


Ooh, how about this?

1. Point a camera at a surface (be it a piece of paper or whatever).

2. Write/print a word on that paper and use CV/OCR to parse it.

3. Detect when a person has pressed on that word and exec the command specified by the text.

Instant macro pad?
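A rough sketch of steps 2 and 3 (hypothetical; it assumes pytesseract for the OCR, some touch detector like Sistine's feeding in (tx, ty), and a command whitelist rather than executing whatever text happens to be on the paper):

```python
# Hypothetical "paper macro pad": OCR the labels once, then fire a whitelisted
# command when a touch lands inside a label's bounding box.
import subprocess
import pytesseract                  # assumes the Tesseract binary is installed
from pytesseract import Output

# Whitelist of label -> command; never exec arbitrary OCR'd text.
COMMANDS = {"build": ["make"], "test": ["make", "test"]}

def find_labels(frame_rgb):
    """Return {label: (x, y, w, h)} for whitelisted words found on the paper."""
    data = pytesseract.image_to_data(frame_rgb, output_type=Output.DICT)
    labels = {}
    for word, x, y, w, h in zip(data["text"], data["left"], data["top"],
                                data["width"], data["height"]):
        word = word.strip().lower()
        if word in COMMANDS:
            labels[word] = (x, y, w, h)
    return labels

def on_touch(tx, ty, labels):
    """Run the command whose printed label contains the touch point."""
    for word, (x, y, w, h) in labels.items():
        if x <= tx <= x + w and y <= ty <= y + h:
            subprocess.call(COMMANDS[word])
            return word
    return None
```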


4. Scatter bunches of 'rm -rf /' pieces of paper at your local wifi hotspot.


I see, you thought the same as I did. :-D


In my mind I can already see people running around with malicious code printed out and holding it in front of such devices in Apple stores or whatever.


I did that a few years ago, although it used a Kinect: https://pdfs.semanticscholar.org/b07f/4732e5d631cf182a59b1f2... . It works with gestures, without a projected keyboard and has some audio and visual feedback on screen.


Who needs a camera when you have a microphone? https://phys.org/news/2005-09-recover-text-audio-keystrokes....


Then you need a physical keyboard with which to make noise.


Or maybe it works with tapping on the desk too. I can distinguish different noises when typing on my desk.


Has anyone found a stylus that works with the MacBook's trackpad? The use case is to be able to draw some diagrams during video calls/screencasts.

Trying to avoid the wacom/tablet path to keep it lightweight :)


I bought a case in India for my Kindle Paperwhite which came with a stylus [1]. That stylus works, and works quite well with my 2015 MBP.

[1] - http://www.proelite.co.in/Amazon-Kindle/All-Amazon-Kindle-10...


Some accompanying software would definitely be required so that it ignores/reduces the velocity factor, and has a way to differentiate movement with pen-up vs movement with pen-down.


Very neat! I am wondering whether it could also recognize multiple fingers and thus sort of emulate multitouch gestures, although my guess is latency is not great with one finger already, so this might not work well.

Also interested to see how it works with different lighting, and how dirty the screen can get before recognition fails because the reflection is too "muddled" ;)


One issue I imagine is coverage. How do you get decent horizontal resolution on top and on bottom of the screen?

Either you have a mirror that converts your webcam into ultra wide angle so you cover the top edge of the screen (which you pay for by poor resolution on the bottom) or you decide not to cover the whole top area (which is where most window control elements are).

The video action all happens on the lower half of the screen for a reason.

A solution would be some sort of orthographic lens that basically splits the sensor into multiple distributed lenses to avoid the FOV problems. This, however, would not be trivial to get right without serious tools.


Use a pinball, not a flat mirror, and map back to something like a flat plane before processing. The registration is a bit harder, but not by an intractable amount.


A long time ago there was a PC vs. Mac ad. The PC guy had a USB camera taped on top of his head, claiming how PCs are entering a new modern age. The Mac had the camera built in. This kind of feels similar.


Pure brilliance. While many commenters seem to focus on computer applications, this could be a very viable solution for touch controls in industrial environments.


That is great!

On the topic of touchscreen Macbooks, I've been using my Mac a lot less since I've spent a lot of time on a touchscreen Windows laptop. It is super handy! A lot of times when using the Mac I'll absentmindedly drag something on the screen only to be disappointed. Apple's resistance to the idea reminds me of their stubbornness on the single button mouse. Admitting you're wrong is not a weakness.


I don't miss it. While working I hate finger grease on my MacBook screen much more than I do on my iPhone or iPad. Also, my wife has a Surface laptop with touch screen and she never uses it.


Yes! Takes me back to the mid-90s, when my then 12-year-old daughter would point out things on my computer screen BY TOUCHING THEM. Drove me NUTS!


Touchscreens, like RGB lights or a numpad, are one of those things I specifically want my laptop to not have, regardless of if it can be disabled. I don't want to have to pay extra for it, I don't want the battery hit, and I don't want something designed by someone who thinks it's useful. All the touchscreen laptops I've seen have poor picture quality and the actual screen looks recessed from the glass, making it jarring to look at. It's not a good workflow for me to have to take my hands off of home row to navigate either - I do all my mousing with a thumb on the trackpad.


Are you saying that because you don't want it I can't have it?


What hardware do you have? All the touchscreen windows laptops I've tried have the responsiveness and accuracy of a touchscreen from 1990.


Resistive touchscreens which were dominant in 1990 typically have higher accuracy than the capacitive touchscreens of today. But they have terrible usability because you have to apply pressure to the screen to use them.


Have you tried the Dell XPS? Even with shitty linux touchscreen drivers/experience I end up using it almost every day when scrolling. On Windows it felt very natural.


I think Apple is trying to push folks to iPad Pros since they're touch capable and now support a mouse.


For some reason I really like hitting the 'submit' button via the touchscreen after filling out a web form.


Apple sells products, not hacks. Apple doesn't clutter their SKUs with niche features that work poorly and are almost never useful. Adding touch to a laptop or desktop just so you can occasionally touch-scroll is deeply overkill.

They sell iPads that are touch-enabled, that work well with touch.


how does the battery life compare to the touch enabled windows laptops?


And they called it Sistine, which flabbergastingly shows that coming up with a good name for something is indeed possible.


This probably needs [2018] in the title?


Stunning out of the box thinking. Fantastic idea and implementation. Color me impressed.


This is super interesting, I'm about to undergo a small DIY smart mirror project and wanted touchscreen capabilities but all the hardware solutions seemed impractical and expensive. This might be a super easy way to make that work.


It's a good idea and great implementation.

However, I do strongly feel that a touchscreen is the opposite of productivity; maybe it adds usability for end users, but it's definitely not what developers need/want.


I have a coworker who uses a touchscreen and changed my view on this because she does not use just the touchpad or just the touch screen. She uses both at the same time with one hand on the touchpad and one pointing at the screen.

I agree it's less productive on its own, but it works well when used as a hybrid where you have two types of pointers on one screen.


I use touch screens on all my devices: desktops, laptops, tablets, phones. I spend a lot of time writing code, and I need and want those touch screens, and working without them is often slow and clunky. Your feeling is irrelevant, and in general just wrong. Developers are end users.


It helps older users who have trouble with the mouse/touchpad.


A touch screen is definitely a more approachable method to interact with computers for people who are not as tech literate. My grandma had a computer for years and only kind of used it. Then she got an iPad from a relative, and she uses it every day, from casting Netflix to her Chromecast to reading ebooks she rented from the library.

I may not prefer touch screens but they can be a complete game changer for some users.


There is nothing wrong with touch screens when they are horizontal like a tablet. Vertical touch screens like on laptops or desktops are not ergonomic, and especially so for the elderly.


Entirely horizontal touch screens have the same problems as trackpads/keyboard/mice (or piano playing) that they can have in terms of wrist strain/RSIs (including carpal tunnel), and sometimes elbow movement strain/RSIs (including things like "tennis elbow").

Vertical touch screens trade the more traditional wrist/elbow strain/RSIs for other arm/elbow/shoulder strain/RSIs (including things like "gorilla arm").

Given the nature of repetitive stress, the growing consensus seems to be not that one orientation is better than the other, but that a range of orientations, and a mixture of movement types / input methods is better. For instance, move your arm some to touch a vertical screen, to mix things up from mouse movements and keyboard movements.


Those users need a touch OS like iOS or Android (or maybe Windows RT/Metro), not a touchscreen on their desktop OS.


This is great. I notice it's Python 2; could this be ported to Python 3 easily, or is there some inherent issue in the upstream deps?


I did a similar exercise late last year, using opencv v4 beta and python. I defaulted to py3. And immediately had problems, even with example-ish code. Camera misconfiguration and intermittent latency excursions, IIRC. So I backtracked to the "road well traveled" of py2. Which just worked.


While I understand the desire for a unified Python ecosystem as much as anyone - this kind of pettifogging is not going to convince anyone to switch to Py3.

Consider if the authors of the article had used (for example) Ruby - you probably wouldn't ask why they didn't do it in Python 3, so why do it when they write it in Python 2?


I think they asked in a respectful manner, so I don’t see anything wrong with their comment really.

Python 2 will be EOLed soon.

https://pythonclock.org/

4 months 25 days 5 hours left until Python 2 is retired. After that point it will no longer be maintained by the Python project.

Of course Linux distros and companies will keep Python 2 running for years to come because of all of the tools that are Python 2 only.

New projects ought to be written in Python 3. However, in this case my guess would be that OP probably used Python 2.7 because macOS ships with Python 2.7 preinstalled.

So if we could, Apple are the ones that we ought to speak to. Convince Apple to ship macOS with Python 3 pre-installed.


The switch from Python 2 to 3 was slowed down tremendously because major libraries did not fully support Python 3 immediately. The question regarding the dependencies was legitimate.


I agree with your general point, but your example of Ruby is very different. Python 2 is expected to be EOL Jan 1; as far as I know, Ruby is not.


With respect, if nobody finds the issues in the upstream dependencies and works to make them compatible with Python 3, they won't be supported. That would be a shame for this project and others that use the same upstreams.

I'd have a hard time finding support for some Ruby 1.8 upstream dependencies for recent projects, so I don't agree with you.


Put Python 2 on the dunzo list!!! https://www.youtube.com/watch?v=acLp0S3K6mk


Incredible. And here I thought a good assortment of lollies at the local corner store was the best $1 could do


I hope this hack gets so popular that the folks at Apple finally make a MacBook with a proper touch screen.


I normally hate touchscreen computers, but this project and its implementation are really cool.


Could we just add some shiny window tint foil to normal laptop screens to achieve this effect?


This is a normal laptop screen. What do you mean?


This project makes laptop vendors who sell touchscreen laptops at a premium price cry.


Curious what kind of CPU usage the OpenCV code requires on the backend...


There is some well-informed speculation about latency up-thread, but I've tinkered with OpenCV from time to time. Specifically, I used ROS and OpenCV to do face detection on a netbook in about 2012. That worked out pretty well - it would track multiple faces in realtime. Edge detection is a different algorithm, but isn't super likely to be the origin of the latency issues they're having. It's just a threshold check over a window of a few (e.g. 3 or 5) pixels, multiplied by the dimensions of the image.
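If you want to sanity-check the cost on your own machine, timing an edge pass over a 480p-sized frame takes only a few lines (a quick sketch; the random array just stands in for a webcam frame):

```python
# Rough timing of a Canny edge pass on a 640x480 frame, as a stand-in for the
# kind of per-frame filtering this project does.
import time
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake webcam frame
runs = 100
start = time.time()
for _ in range(runs):
    edges = cv2.Canny(frame, 50, 150)
print("Canny on 640x480: %.2f ms/frame" % ((time.time() - start) * 1000.0 / runs))
```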


What sorts of improvements could you do if you used a stylus instead?


The basic principle behind Sistine is simple. Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.

This is especially true of MacBook screens!
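For anyone wondering what "touching its own reflection" looks like in code, here's a toy version (not the authors' implementation; it assumes you already have a binary mask containing the finger and its reflection): find the two blobs and call it a touch when the gap between them closes.

```python
# Toy reflection test: given a binary mask containing the finger and its mirror
# image in the glossy screen, report a touch when the two blobs meet.
import cv2

def touch_point(mask, max_gap_px=2):
    # [-2] keeps this working across OpenCV 3 and 4 return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    blobs = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    if len(blobs) == 1:
        # Finger and reflection have merged into one blob: definitely touching.
        x, y, w, h = cv2.boundingRect(blobs[0])
        return (x + w // 2, y + h // 2)
    if len(blobs) == 2:
        boxes = sorted((cv2.boundingRect(b) for b in blobs), key=lambda b: b[1])
        (x0, y0, w0, h0), (x1, y1, w1, h1) = boxes
        gap = y1 - (y0 + h0)               # vertical space between the two blobs
        if gap <= max_gap_px:
            return (x0 + w0 // 2, y0 + h0)  # approximate contact point
    return None
```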


Yay now their software can see everything on your screen.


As opposed to a driver that may have greater permissions?


Do you think this MIT grad student wants to steal your data?


No it can't. Look at the photos. The glare is part of what makes this method work in the first place.


This could be extended for multitouch too, yeah?


Yes, but it would have a possibly annoying limitation of occlusion making the multi-touch fail at some positions. Fingers very close to the camera would appear huge and occlude everything else.


Sort of. Actual optical multitouch systems use multiple cameras, so fingers blocked by other fingers can be seen from other angles.


This is really f'ing cool. Nice work!!


Is this something he should have patented?


The main idea was probably patented decades ago. There have been optical touch screens around for ages (that use multiple cameras for good multitouch, and edge IR illumination to make detection more reliable and precise).


That... is a _hack_! Awesome :D


Now, that is just smart.


whoa! amazing. out of box thinking


Brilliant


Prototypes are cool, but I think this is dead.

Anything that requires tons of visual calibration is serious work. Both for the programmer and the computer.

It worked for a planned demonstration, but I imagine this would be impossible for mass production. And I wonder if the users found it useful, given the latency and accuracy problems.


I think you may be approaching it differently than the creators are.

They hacked something cool together, I suspect that is the point, and it is awesome.

This isn't a product.


The authors didn't publish thinking "this is going to rock the world, it's ready for mass production"... More along the lines "hey fellow hackers, we found a way to emulate a touch screen at 1% of the price".


But if the latency and accuracy isn't great, did they solve anything?


They weren't trying to solve anything. They wanted to see if they could make it work, and they did.


Yes, they figured out a way to get something similar for 1% of the price.


This is a hackathon project, the aim is to make something fun and sorta useful and this is exactly that.


I don’t think it was intended for mass production, and that’s okay. It’s a really cool and impressive hack.


This is a project share, not a YC pitch



>Are reposts ok?

>If a story has had significant attention in the last year or so, we kill reposts as duplicates. If not, a small number of reposts is ok.

That post is from over a year ago, and didn't get much traction.


Although it was indeed posted over a year ago, 239 points is certainly traction.


Missed the old post. Thought reposts were rejected automatically (like on Reddit).


FWIW it does, although it's fussy and only works on posts in the last few days.

tbh Reddit's repost detector causes more problems than it solves (it still triggers if the original post was removed).



