I really wouldn't have expected this to work but once you read about it.... of course it would.
This is one of those things that, while there maybe wasn't a 'problem' to begin with, shows how knowing a few critical things lets you solve problems / come up with solutions that nobody would ever have thought of (well, I wouldn't have...).
>I really wouldn't have expected this to work but once you read about it.... of course it would.
It's a fun DIY project and you can get it almost working. I suspect edge cases (stemming from different lighting conditions and imperfect finger detection) would render this method unworkable in general.
The “Filter for skin colors” step would probably need to be fixed. That’s the kind of kludge that pops up all the time in demo programs but doesn’t generalize very well.
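For reference, the kind of skin-color filter that shows up in demos is usually just a fixed HSV threshold, roughly like the sketch below (the bounds are illustrative, not the project's actual values), which is exactly why it breaks under different lighting and skin tones:

```python
import cv2
import numpy as np

def skin_mask(bgr_frame):
    """Return a binary mask of 'skin-colored' pixels using a fixed HSV range.

    The hard-coded bounds are typical demo values; they fail easily under
    different lighting or skin tones, which is the fragility at issue here.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # hue, saturation, value
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle so stray skin-toned pixels don't register as fingers.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```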
I doubt it. An estimate for the iPad puts the cost of the touchscreen controller at $2 [1]. I'm not sure that actually includes the capacitive sensors, but I doubt they add much more cost.
It's a conductive coating, but applied in an intricate pattern and connected to the controller with some very delicate connectors or anisotropic conductive film.
Possibly in the past, but at this point I think everyone expects touchscreens to be multi-touch, which this can't be, not fully. (Since a higher finger can obstruct a lower one.) It's an awesome gimmick, but no one would try to sell a touch-integrated product with only single-touch now.
I think his point was that it would be a bit silly to put this kind of solution into a product -- which is, in a word: jank -- when capacitive touchscreens are already a tried and true solution in production devices.
I often think “hand-pointing” interfaces have unexplored / unrealised potential... like, who would want a touch TV screen?! Sure, you could have the touch UX for the TV on your phone... but even better would be if you could just point at the TV and MAGIC. The biggest problem I see is that humans aren’t that good at pointing - we can only “aim” with one eye; keeping both eyes open messes it up.
If you had feedback for where you were pointing I think most people would get pretty good pretty quickly (excepting people with movement disorders). It wouldn't be very precise, but I think it would be great for lazily triggering multitouch things like zoom/scroll/page-up/page-down/switch-app etc.
I've often wondered what having one of these on a laptop, tracking space above the keyboard / in front of the screen, would be like: https://www.leapmotion.com/
Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case, but when it is it's pretty magical.
I tried doing this with the internal webcam, using a clip-on fisheye lens, and mirrors. And eventually punted, using the bare webcam only for head tracking, and adding usb cameras on sticks perched on the laptop screen. With more sticks to protect the usb sockets from the cables. And lots of gaff tape.
> leapmotion
Leap Motion has finally been acquired, so the future of the product is unclear. And it's Windows-only (the older and even cruftier version that supports Linux doesn't do background rejection, and so can't be used pointing down at a keyboard). But it has APIs, so you can export the data. My fuzzy impression is it's not quite good enough for surface touch events, but it's ok-ish for pose and gestures. When the poses don't have a lot of occlusion. And the device is perched on a stick in front of your face.
> Gestural stuff is nice when it's transparent and guess-able. Sadly not often the case
I fuzzily recall some old system (Lisp Machine?) as having a status bar with a little picture of a mouse, and telling what its buttons would do if pressed. And a key impact of VR/AR is having more and cheaper UI real estate to work with. So always showing what gestures are currently available, and what they do, should become feasible.
Even on a generic laptop screen, DIYed for 3D, it seems you might put such secondary information semitransparently above the screen plane. And sort of focus through it to work. Making the overlay merely annoying, rather than intolerable.
But when it all works, yeah, magical. Briefly magical. The future is already here... it just has a painfully low duty cycle. And ghastly overhead.
I've often wanted some eye tracking device when working on more than one screen. Like, when looking at my terminal screen I just want to start typing, instead of alt-tab'ing to it first.
I actually saw a demo of this (plus more; it was a “universal remote”) at Cal Hacks a few years ago. They used a Myo for gesture recognition and the whole thing was really slick.
WRONG! I had a working touchscreen in my Dell laptop, but when I went on vacation to Cuba the cursor moved at random. I noticed it worked properly in my room after half an hour, or early in the morning if I was outside.
But once it got warm and humid it was unusable. I had to turn off the touchscreen, and then I could at least use the mouse.
And more laptops without any touchscreen. Because it's far quicker and more convenient to move a single finger across a little touchpad instead of the whole hand across a 13+ inch near-vertical screen. I used a Surface Pro for 2 years and in "laptop mode" I used the touchpad 95% of the time, and now and then the touchscreen to scroll or zoom a website.
The only thing I don't get is why Apple didn't create MBP touchpads compatible with their digitizer. When I saw the announcement of the first Macbook with this new huge touchpad that seemed so obvious to me, and I was baffled they only went for that strange touchbar.
I've got a Samsung tablet with Linux on Dex. It's Ubuntu 16.04 with Unity in a LXC container running on the Android Linux kernel.
I use a mouse most of the time, but when I have to press buttons on dialog boxes it's easier to raise my hand from the keyboard and push the button. Same for touching a window to raise it above the others.
The size of the button or the exposed size of the window make the difference. There must be plenty of space or mistouches kill the experience. Resizing windows is something I do only with the mouse.
A hybrid approach, mouse/touchpad and touchscreen is the best IMHO. If my next laptop has a touchscreen option I'll buy it.
macOS supports touch input. I used it with a Dell touchscreen monitor back in around 2012, I think. Windows* isn't really "designed for" touch either; it's tacked on. Most proper desktop apps (on any OS) are a pain to use with touch for more than a few minutes at a time.
* Not counting the "Metro" monstrosity that briefly reared its head during Windows 8; I'm not sure if it's still around.
Microsoft is working on transitioning the Windows ecosystem to UWP (which uses the Metro design language), which is designed to work equally well with mice and touch (and everything else, like the Xbox and HoloLens).
Of course even 4 years after the Windows 10 launch you still have the new UWP settings app and the old Control Panel. It's a bit of a mess.
Can you, say, scroll in different windows simultaneously with multiple fingers on a Windows tablet, like you can on iPad Split View? How does it handle focus?
So I was expecting one of those sensors that can turn a TV into a touchscreen with like an IR sensor or something so I checked this wondering how they were so cheap.
I was delighted to see this is so much better. Figuring out touch events from the reflection is really cool. Well done.
I remember following the work of this guy, Johnny Chung Lee, who did a ton of really neat human-interface stuff with the Wii remote's IR sensor: http://johnnylee.net/projects/wii/
I remember making a smart whiteboard with his software back in the day! It worked surprisingly well, even in a teacher's classroom, with an IR LED pen bought from an online store.
The software is a "one-time cost" (in quotes because of maintenance etc.), but speaking in broad strokes that wouldn't impact the claim of a $1 touchscreen.
Or to put it another way, you wouldn't invent a new way to desalinate seawater and then say the cost is $10/litre instead of 2 cents because of the R&D on the device you made.
> invent a new way to desalinate seawater and then say the cost is $10/litre
Depends on the marketing strategy, most likely. But I like GP's point about undervaluing programmer time. It is non-free and everyone down to the programmers themselves tend to grossly underestimate its cost.
Humans are kind of expensive to keep alive and happy enough to program willingly
/That last statement had more caveats than I wanted...
It really depends on their intention, whether they consider that part of the work DIY or not.
If I paint my wall, I would generally say it costs the price of the paint, not whatever I'd have to pay to have someone do it for me.
Or what about the people who tinker with their own smart home solutions with Arduinos and Raspberry Pis? If you factor in the price of work/research/coding, it would be very hard to justify not buying an off-the-shelf solution instead.
> If I paint my wall, I would generally say it costs the price of the paint
That's the mentality I'm challenging here. Mentally you don't account for the cost of your time, but from an outside perspective, you're spending it all the same.
That you have to factor in the price of labor with DIY electronics is the main reason you don't see more of those easy electronics projects available as some off-the-shelf contained product (where's my IoT API-accessible thermometer for $10!!!???).
This is exactly what a hackathon project should be. Delightful, innovative and scoped small enough that they produce results without too much bullshit. Not the usual "world changing" hacks that end up abandoned the moment the event concludes. If only events encouraged more hacks like these.
The last hackathon I attended, the winners just built something from an Instructable that existed well before the hackathon.
My friends and I built a time machine radio (i.e. you tune it to a decade, with a knob and everything, and radio news from the decade was included if we could find it) and won best hardware hack, so it wasn't a total loss. Had a blast either way.
While my hackathon experience is limited, I think any attempt to crown a "winner" reduces the experience. Of course, maybe I'm too sensitive, since everyone can still just enjoy it... but as soon as there's a winner, there start to be rules that people try to game, there are added restrictions and limits that reduce what is being created, etc. And creating is the point.
However, you do want to show off the more interesting/higher quality projects, to encourage further work, and get the whole thing jiving (people enjoy competition; it oils the machines)
The problem, I think, is that the judges themselves are incompetent/careless — when you promote a “bad” winner, you discourage everyone from trying, because they realize there is no value in this kind of promotion; it's bullshit.
I had a similar experience in my university — some kind of business project idea competition; we submitted my lab’s current project, and there were other interesting projects involved that I would have been happy to lose to.
Instead I lost to the girl with the contentless idea of edible spoons for developing countries... an idea I’d seen repeatedly in recent news cycles because some NGO in India had been doing it for the last three years.
Five minutes of googling would have discovered this; instead, we've simply ignored the competition since.
Attending itself was valuable, along with the half-attempt at winning — we clarified the project and ideas, and had an excuse to clean things up — so it would have been a good exercise to do yearly (and probably for the other projects involved), and the competition was healthy.
But it just takes an incompetent judge to spoil the whole thing.
There’s nothing wrong with people trying to game the competition; it's a natural function of competing. But the judges are meant to operate as experts, and an expert should have (some) ability to discern bullshit from something real.
Agreed. I wish hackathon judges actually understood how to question and analyze projects critically. I attended probably one of the most prestigious and well regarded college hackathons, where the grand prize was won by a project that did speech to text, then text to sign language. What nobody pointed out or acknowledged, was that deaf people can read. But since the project was done on a HoloLens the judges got swept away by the fancy AR and awarded them the prize.
I've become extremely skeptical of the hackathon model in general. It doesn't really produce good projects or products, it encourages an unsustainable style of working and it's a terrible place to recruit good developers. It can encourage people to build stuff, but only monkey patched, bursting with buzzwords, overly flashy projects that are thrown away at a moment's notice.
>What nobody pointed out or acknowledged, was that deaf people can read
I actually did something similarly stupid in college; for an embedded development class, my team needed a final project. I proposed a hat for the blind, with distance sensors and buzzers on the inside to inform distance from the walls and such. We also used black conductive thread on a black hat, so it was impossible to work with except for the girl who threaded it in the first place.
Easily the professor's favorite project, and I guess he still uses it as an example for future projects (and at some point he had a team extending it into some kind of backpack?).
Later a friend working at a retail store selling products for the disabled contacted me about the project; apparently his boss was interested.
What no one acknowledged is that blind people are not walking into walls... the cane is a far more practical, efficient, and effective tool than this thing could ever be.
"Ever worried about those people who are reading their phone while walking around? Now you can randomly put this hat on their head."
The best thing about it is, the people who need and want the device aren't the ones walking around reading their phones; it's the ones who can't help but mind someone else's business - and this merely feeds their need rather than satisfying it. So there will be a never-ending market of unsatisfied busybodies.
Yep, this is truly awesome. These types of hacks are what got me into computing as a kid. Reminds me of early Hack-A-Day content before most of their posts became Arduino/Raspberry Pi centric.
Hmm, given the camera is on top, computing a homography and rectifying the perspective image would give fairly good precision near the camera, falling off quickly with distance, likely not being very useful at the bottom of the screen. Also, it's likely sensitive to both ambient light and the dominant light on the screen; in dark rooms with dark themes it might be pretty far off, as most computer vision algorithms need constant tuning for changing conditions.
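For reference, the rectification part is just a planar homography; a minimal OpenCV sketch (assuming the four screen corners have already been located in the webcam image; the corner coordinates below are made up) would look like:

```python
import cv2
import numpy as np

# Pixel coordinates of the four screen corners as seen by the webcam
# (made-up values; in practice these come from a calibration step).
src = np.float32([[120, 40], [520, 35], [600, 470], [40, 475]])

# Target rectangle in "screen space", e.g. the display resolution.
dst = np.float32([[0, 0], [1440, 0], [1440, 900], [0, 900]])

H = cv2.getPerspectiveTransform(src, dst)

def to_screen_coords(cam_point):
    """Map an (x, y) point from the camera image into screen coordinates."""
    p = np.float32([[cam_point]])   # shape (1, 1, 2), as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]
```

The precision falloff described above shows up here too: a one-pixel error in the camera image turns into a much larger screen-space error wherever the homography has to stretch the image the most.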
Maybe you could add an IR LED that could be observed by the webcam, but not in the visible spectrum. I’m not sure if the MacBook webcam includes an IR filter or not. It wouldn't help the precision, but might make the camera less susceptible to changes in ambient light.
I recently shined a remote control into my late-2013 MacBook Pro's camera and I totally saw the IR light. Not saying that you're wrong; maybe there are filters, but only for certain specific wavelengths or something like that.
Even with my webcam, before removing the IR filter I could see the faint remote light when pointing it directly at the camera.
After removing the IR filter I could literally see in a room without lights, lit just by the remote.
Wow, this is a great idea!
Looking at the video, it looks like the latency isn't great; what kinds of problems would have to be solved to get this down to ~100 ms latency?
Rewriting it in C/C++/Rust instead of Python would be one... Any ideas?
Webcams don't have memory, so they can't have latency. It's all in the software. In fact, I've done some apps on Linux that just did readout of the frame from the camera sensor to RAM, and displayed the buffer on the next framebuffer flip with zero copies. There was no perceived latency.
I guess it depends on how the camera chip is configured.
But all kinds of post-processing can be (and are) done on readout (pixel by pixel) without needing to keep whole frames and go back through them. You just keep some calculated parameters from previous frames (not the whole frames).
You can do a lot of processing this way, incl. scaling, white balance, color correction, effect filters, etc.
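To make the "keep some calculated parameters from previous frames" idea concrete, here's a rough gray-world white balance sketch in Python/numpy (illustrative only, not what any particular ISP does): only three per-channel means survive frame N, and they get applied to frame N+1, so no full frame ever needs to be buffered.

```python
import numpy as np

def grayworld_gains(frame):
    """Compute per-channel white balance gains from one frame.

    Only the three channel means are kept, so the *next* frame can be
    corrected pixel-by-pixel on readout without buffering this one.
    (Gray-world is just one simple AWB heuristic, used for illustration.)
    """
    means = frame.reshape(-1, 3).mean(axis=0)       # mean per channel
    return means.mean() / np.maximum(means, 1e-6)   # gain per channel

def apply_gains(frame, gains):
    """Apply previously computed gains; this is a purely per-pixel operation."""
    return np.clip(frame.astype(np.float32) * gains, 0, 255).astype(np.uint8)

# gains computed from frame N get applied to frame N+1:
#   gains = grayworld_gains(prev_frame)
#   corrected = apply_gains(curr_frame, gains)
```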
Webcams add a USB interface and an additional controller chip, so maybe there's some added latency there. But if you use the camera sensor directly over CSI, you can get pretty low latency.
There was ~100ms of latency measured for CSI going directly into an FPGA. The vendor considered the post processing to be part of their value add, and had no way to access the non buffered output, even if you disabled the obvious stuff like noise reduction.
Noise reduction is a big one. You usually need to interpolate over several frames to do that.
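For instance, the simplest temporal denoiser is just an exponential moving average over frames (a sketch, not what any particular camera ISP actually does), which is why some of what ends up on screen is effectively a blend of older frames:

```python
import numpy as np

class TemporalDenoiser:
    """Blend each new frame with an accumulator of previous frames.

    Higher alpha means stronger denoising but more motion smearing,
    i.e. more of the displayed image effectively comes from the past.
    """
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.acc = None

    def process(self, frame):
        frame = frame.astype(np.float32)
        if self.acc is None:
            self.acc = frame
        else:
            self.acc = self.alpha * self.acc + (1 - self.alpha) * frame
        return self.acc.astype(np.uint8)
```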
Also, depending on the bus being used, cameras often compress the image using JPEG to reduce the bandwidth (very common with USB cameras, though I doubt the Mac camera does that).
Then on a computer you have the latency of whatever pipeline is being used and how much buffering is involved. For instance, since the display typically doesn't run exactly synchronized with the camera, you'll have some frames that end up "stuck" waiting for the monitor's vsync. If the app then does some internal buffering on top of that, you can very easily reach a noticeable amount of latency.
In general you can consider that anything above 100 ms is easily noticeable, and that's only 3 frames at 30 fps and 6 at 60; it's really not that much when you consider the complexity of modern video capture pipelines.
Why use averaging of multiple frames instead of just raising the exposure time? I guess it can lead to smaller rhomboid distortion due to rolling shutter? But it should not increase latency compared to an equivalent increase in exposure time - so that we're comparing apples with apples.
Not really standardized, as there is a broad range of camera hardware out there.
A pipeline might look something like demosaicing -> denoising -> light/exposure adjustment -> tone mapping -> encoding
Some of these steps will likely have hardware support (e.g. demosaicing, encoding); some won't. You may carry a few frames around for denoising, for example. You have to sync up with the display at some point. Some of the steps might change order too, depending on what you are trying to do and what hardware support you have.
I haven't looked at available hardware specs recently. You used to be able to get very basic capture cameras that offloaded nearly everything to the computer, so that would give you minimal latency on the capture side (limited by your communication channel, obviously). However, keeping a stable image as resolution and frame rate grow is really hard that way, eventually impossible without a realtime system. More modern cameras are going to do much of that on board.
This is, as much as possible, what the Camera cue in QLab [1] does (I'm the lead video developer), and there most definitely is perceived latency. It's better than it used to be, when webcams were DV over FireWire, but it's still enough that webcams are generally a poor choice for latency-sensitive situations like live performance.
The alternative that usually gets used in productions with a moderate budget is a hardware camera with HDMI or SDI output into a Blackmagic capture device. There's still some latency, but it's better than a webcam. It's pretty much down to the minimum of what you can do with a computer -- the capture device still has to buffer a frame, transmit it to the computer, which copies it into RAM, then blits it to video RAM, where it waits for the next vertical refresh.
All that said, the human finger doesn't travel all that fast, so for a HID application like this, you could probably get the latency way down by extrapolating the near-future position of the finger, much like iPadOS does with Pencil.
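The crudest version of that prediction is linear extrapolation from the last two tracked positions (a sketch; the prediction Apple does for the Pencil is more sophisticated than this):

```python
def predict_position(p_prev, p_curr, dt_frame, dt_ahead):
    """Linearly extrapolate a fingertip position dt_ahead seconds into the future.

    p_prev, p_curr: (x, y) positions from the last two frames.
    dt_frame: time between those frames (e.g. 1/30 s for a 30 fps webcam).
    dt_ahead: how far ahead to predict, roughly the pipeline latency to hide.
    """
    vx = (p_curr[0] - p_prev[0]) / dt_frame
    vy = (p_curr[1] - p_prev[1]) / dt_frame
    return (p_curr[0] + vx * dt_ahead, p_curr[1] + vy * dt_ahead)

# e.g. hide ~100 ms of latency with a 30 fps camera:
#   predicted = predict_position(last_pos, current_pos, 1 / 30.0, 0.1)
```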
If you can't store the frame, and have to send it out on a shift out from CMOS, where do you get the latency? The processing has to be real-time if you can't store multiple frames.
YMMV, but I've experienced slight webcam lag on Macbooks even with Apple-optimized applications such as Photo Booth and FaceTime. Perhaps the biggest bottleneck is at the hardware / driver level instead of at the CPU-cycles level?
Slightly off topic, but C/C++/Rust are always the go-to languages when talking about high performance. People love the speed of C but it's so bare-bones that they move to C++, or it's too dangerous so they move to Rust. But those are much too complicated, so people came up with Zig and Nim and D instead, but those are very far from C, even though they're not as far as C++ and Rust. So I'm wondering, are there any languages almost as close to C but slightly safer and slightly less bare-bones? Because I like C a lot, I think it's a great language with an excellent simplicity, except for some slight rough edges.
The gap between C's model and a somewhat memory-safe language is unfortunately very large. You can either limit semantics a lot, use a GC (D for the most part, Go, ...), or have relatively complex ways to describe lifetimes (Rust).
If you're looking for "slightly", there are libraries to use that expose macros and function calls, instead of directly accessing pointers, along with the "n" version of libc functions.
I'd be interested in a version that used the Vision Framework. Back when I looked at the Apple frameworks for iOS, they utilized the Neural Engine hardware available on iPhones while OpenCV didn't (that may have changed).
The laptop hardware is different of course, but it would be an interesting performance comparison.
It looks like it's just applying a bunch of filters/convolutions to the image. You could write it in C++ to be pretty quick; in CUDA you could get it down to microseconds, I think.
Though that does not take account of the latency of the webcam itself.
Yeah but then you haven’t turned a MacBook into a Touchscreen with $1 of Hardware. You’ve turned a MacBook into a Touchscreen with $price_of_GPU + $price_of_eGPU_adapter + $1.
And additionally you’ve made your MacBook a lot less portable.
I'm always amused when Mac people ask me why I don't use Macs and I tell them "I program CUDA for a living" and they respond "But you can jerry rig a PCIx slot in a box over a wonky cable plug it into your Thunderfire port" or something. As if that's a real solution.
Commercially available jank is still jank. And the matter of portability (why else would you be using a laptop in the first place?) still remains. Forget walking around town with it in a bag; if I am trying to work on my deck chair outside, where do I put that mother of all dongles? Balance it precariously on the chair's armrest? No thanks!
I wonder if Mac users are pulling our legs, or if they are True Believers. Like it or not, a significant number of people doing signal processing, simulation, AI, and even playing games, need or strongly prefer Nvidia and CUDA. And for them, Macs are not an option.
I love this. When I was young I saw a hack for an Amstrad which added light pen support. If I recall it effectively synced with the scans on the crt which tells you your coordinates. I wanted to build it so much as a kid, but never could. It’s the sort of thing that really fired my imagination and helped me fall in love with computers.
I wonder what the field of view is like. The demos only show it being used towards the middle of the lower half of the screen - can the camera really see the top left or right corners?
> With some simple modifications such as a higher resolution webcam (ours was 480p) and a curved mirror that allows the webcam to capture the entire screen, Sistine could become a practical low-cost touchscreen system.
The hovering aspect of this makes it something quite special, a feature normal touchscreens don't have. I imagine CSS :hover elements would feel nice with this.
I'm not at all interested in this as a keyboard per se, but the fact that it functions as a piano keyboard as well gives me ideas. I wonder if the latency and resolution would allow for things like automatic transcription of written documents as you write, a mouse replacement, or other cool things that I haven't thought of yet :)
They claim they have an SDK. I'll have to explore this some more when I have time later this evening.
In my mind I can already see people running around with malicious code printed out and holding it in front of such devices in Apple stores or whatever.
Some accompanying software would definitely be required so that it ignores/reduces the velocity factor, and has a way to differentiate movement with pen-up vs movement with pen-down.
Very neat! I am wondering whether it could also recognize multiple fingers and thus sort of emulate multitouch gestures, although my guess is latency is not great with one finger already, so this might not work well.
Also interested to see how it works with different lighting, and how dirty the screen can get before recognition fails because the reflection is too „muddled“ ;)
One issue I imagine is coverage. How do you get decent horizontal resolution on top and on bottom of the screen?
Either you have a mirror that converts your webcam into ultra wide angle so you cover the top edge of the screen (which you pay for by poor resolution on the bottom) or you decide not to cover the whole top area (which is where most window control elements are).
The video action all happens on the lower half of the screen for a reason.
A solution would be some sort of orthographic lens that basically splits the sensor into multiple distributed lenses to avoid the FOV problems. This, however, would not be trivial to get right without serious tools.
Use a pinball, not a flat mirror, and map back to something like a flat plane before processing. The registration is a bit harder, but not by an intractable amount.
A long time ago there was a PC vs. Mac ad. The PC guy had a USB camera taped to the top of his head, claiming PCs were entering a new modern age. The Mac had the camera built in. This kind of feels similar.
Pure brilliance. While many commenters seem to focus on computer applications, this could be a very viable solution for touch controls in industrial environments.
On the topic of touchscreen Macbooks, I've been using my Mac a lot less since I've spent a lot of time on a touchscreen Windows laptop. It is super handy! A lot of times when using the Mac I'll absentmindedly drag something on the screen only to be disappointed. Apple's resistance to the idea reminds me of their stubbornness on the single button mouse. Admitting you're wrong is not a weakness.
I don't miss it. While working I hate finger grease on my MacBook screen much more than I do on my iPhone or iPad. Also, my wife has a Surface laptop with touch screen and she never uses it.
Touchscreens, like RGB lights or a numpad, are one of those things I specifically want my laptop to not have, regardless of if it can be disabled. I don't want to have to pay extra for it, I don't want the battery hit, and I don't want something designed by someone who thinks it's useful. All the touchscreen laptops I've seen have poor picture quality and the actual screen looks recessed from the glass, making it jarring to look at. It's not a good workflow for me to have to take my hands off of home row to navigate either - I do all my mousing with a thumb on the trackpad.
Resistive touchscreens, which were dominant in the 1990s, typically have higher accuracy than the capacitive touchscreens of today. But they have terrible usability because you have to apply pressure to the screen to use them.
Have you tried the Dell XPS? Even with shitty linux touchscreen drivers/experience I end up using it almost every day when scrolling. On Windows it felt very natural.
Apple sells products, not hacks. Apple doesn't clutter their SKUs with niche features that work poorly and are almost never useful. Adding touch to a laptop or desktop just so you can occasionally touch-scroll is deeply overkill.
They sell iPads that are touch-enabled, that work well with touch.
This is super interesting, I'm about to undergo a small DIY smart mirror project and wanted touchscreen capabilities but all the hardware solutions seemed impractical and expensive. This might be a super easy way to make that work.
However, I do strongly feel that a touchscreen is the opposite of productivity; maybe it adds usability for end users, but it's definitely not what developers need/want.
I have a coworker who uses a touchscreen and changed my view on this because she does not use just the touchpad or just the touch screen. She uses both at the same time with one hand on the touchpad and one pointing at the screen.
I agree it's less productive on its own, but it works well when used as a hybrid where you have two types of pointers on one screen.
I use touch screens on all my devices: desktops, laptops, tablets, phones. I spend a lot of time writing code, and I need and want those touch screens, and working without them is often slow and clunky. Your feeling is irrelevant, and in general just wrong. Developers are end users.
A touch screen is definitely a more approachable way to interact with computers for people who are not as tech literate. My grandma had a computer for years and only kind of used it. Then she got an iPad from a relative and she uses it every day, from casting Netflix to her Chromecast to reading ebooks she rented from the library.
I may not prefer touch screens but they can be a complete game changer for some users.
There is nothing wrong with touch screens when they are horizontal like a tablet. Vertical touch screens like on laptops or desktops are not ergonomic, and especially so for the elderly.
Entirely horizontal touch screens have the same problems as trackpads/keyboards/mice (or piano playing) in terms of wrist strain/RSIs (including carpal tunnel), and sometimes elbow strain/RSIs (including things like "tennis elbow").
Vertical touch screens trade the more traditional wrist/elbow strain/RSIs for other arm/elbow/shoulder strain/RSIs (including things like "gorilla arm").
Given the nature of repetitive stress, the growing consensus seems to be not that one orientation is better than the other, but that a range of orientations, and a mixture of movement types / input methods is better. For instance, move your arm some to touch a vertical screen, to mix things up from mouse movements and keyboard movements.
I did a similar exercise late last year, using opencv v4 beta and python. I defaulted to py3. And immediately had problems, even with example-ish code. Camera misconfiguration and intermittent latency excursions, IIRC. So I backtracked to the "road well traveled" of py2. Which just worked.
While I understand the desire for a unified Python ecosystem as much as anyone - this kind of pettifogging is not going to convince anyone to switch to Py3.
Consider if the authors of the article had used (for example) Ruby - you probably wouldn't ask why they didn't do it in Python 3, so why ask when they write it in Python 2?
4 months 25 days 5 hours left until Python 2 is retired. After that point it will no longer be maintained by the Python project.
Of course Linux distros and companies will keep Python 2 running for years to come because of all of the tools that are Python 2 only.
New projects ought to be written in Python 3. However, in this case my guess would be that OP probably used Python 2.7 because macOS ships with Python 2.7 preinstalled.
So if we want to change that, Apple is who we ought to speak to. Convince Apple to ship macOS with Python 3 pre-installed.
The switch from Python 2 to 3 was slowed down tremendously because major libraries did not fully support Python 3 immediately. The question regarding the dependencies was legitimate.
With respect, if nobody finds the issues in the upstream dependencies and works to make them compatible with Python 3, they won't be supported. That would be a shame for this project and others that use the same upstreams.
I'd have a hard time finding support for some Ruby 1.8 upstream dependencies for recent projects, so I don't agree with you.
There is some well-informed speculation about latency up-thread, but I've tinkered with OpenCV from time to time. Specifically, I used ROS and OpenCV to do face detection on a netbook in about 2012. That worked out pretty well - it would track multiple faces in realtime. Edge detection is a different algorithm, but isn't super likely to be the origin of the latency issues they're having. It's just a threshold check over a window of a few (e.g. 3 or 5) pixels, multiplied by the dimensions of the image.
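For scale, the whole edge-detection step is a few lines of OpenCV and runs comfortably in real time on modest hardware (a sketch; Sistine's actual filter chain may differ):

```python
import cv2

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Canny is a small-kernel gradient plus hysteresis thresholding:
    # a handful of operations per pixel, cheap even on a netbook-class CPU.
    edges = cv2.Canny(gray, 50, 150)
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```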
The basic principle behind Sistine is simple. Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.
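In code, that test can be as crude as checking whether the fingertip blob and its mirror image have merged into a single connected component in the webcam frame. This is a hedged sketch of the idea only, not the authors' actual implementation:

```python
import cv2
import numpy as np

def is_touching(finger_mask, max_gap_px=3):
    """Heuristic touch test on a binary mask containing both the finger
    and its reflection on the glossy screen.

    While hovering, the finger and its reflection are two separate blobs;
    at the moment of contact they join. (Illustrative only.)
    """
    if cv2.countNonZero(finger_mask) == 0:
        return False  # no finger in view

    # Close gaps of up to roughly max_gap_px so near-contact counts as contact.
    kernel = np.ones((max_gap_px, max_gap_px), np.uint8)
    closed = cv2.morphologyEx(finger_mask, cv2.MORPH_CLOSE, kernel)

    # connectedComponents counts the background as one label, so a single
    # merged finger+reflection blob yields exactly two labels.
    n_labels, _ = cv2.connectedComponents(closed)
    return n_labels <= 2
```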
Yes, but it would have a possibly annoying limitation of occlusion making the multi-touch fail at some positions. Fingers very close to the camera would appear huge and occlude everything else.
The main idea was probably patented decades ago. There have been optical touch screens around for ages (that use multiple cameras for good multitouch, and edge IR illumination to make detection more reliable and precise).
Anything that requires tons of visual calibration is serious work. Both for the programmer and the computer.
It worked for a planned demonstration, but I imagine this would be impossible for mass production. And I wonder whether users would find it useful, given the latency and accuracy problems.
The authors didn't publish this thinking "this is going to rock the world, it's ready for mass production"... It's more along the lines of "hey fellow hackers, we found a way to emulate a touchscreen at 1% of the price".