
I had always thought it was a complex computer vision problem that made this work. Very interesting that it was more than just software; in hindsight that makes sense.

I am tempted to think that this was the first widely adopted (and accepted) instance of augmented reality. Very cool in that context!




A lot of this has changed since then, of course. No grass swatches anymore! We have a lot more computing horsepower to throw at the problem now, plus a lot of accumulated experience.

We have a bay in our lab with a to-scale football field where we can run indoor tests, but on the field it is computer vision that is doing it all. The cameras are instrumented to report attitude, zoom, and so on (we have to modify the lens to get the accuracy we need), and then of course you have to compensate for lens distortion. If you look closely, the "line" is not a line but a curve. If we drew a straight line it would look stupid and not register well in most parts of the image.
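
To make the distortion point concrete, here is a toy sketch (Python/NumPy, a simple Brown-Conrady radial model with made-up intrinsics and coefficients, not our actual code): push a perfectly straight segment through the model and the projected points stop being collinear, which is why the overlay has to be rendered as a sampled curve.

    import numpy as np

    def distort_points(pts, k1, k2, fx, fy, cx, cy):
        """Apply a Brown-Conrady radial distortion model to normalized
        camera coordinates, then project to pixel coordinates."""
        x, y = pts[:, 0], pts[:, 1]
        r2 = x**2 + y**2
        scale = 1 + k1 * r2 + k2 * r2**2
        return np.column_stack([fx * x * scale + cx, fy * y * scale + cy])

    # A perfectly straight "first-down line" in normalized camera coordinates.
    line = np.column_stack([np.linspace(-0.8, 0.8, 9), np.full(9, 0.3)])

    # Illustrative intrinsics and distortion coefficients (made up).
    pixels = distort_points(line, k1=-0.28, k2=0.08, fx=1400, fy=1400, cx=960, cy=540)

    # The projected points are no longer collinear: a straight-line fit
    # leaves visible residuals, so the overlay must be drawn as a curve
    # sampled point-by-point through the distortion model.
    fit = np.polyfit(pixels[:, 0], pixels[:, 1], 1)
    residuals = pixels[:, 1] - np.polyval(fit, pixels[:, 0])
    print("max deviation from a straight line (pixels):", np.abs(residuals).max())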


Very cool. Are you guys doing statistical filtering at all, or is it strictly palette matching? It seems like your data inputs are specific enough that you're trying to get to the point where you don't need much buffer on either side of the wavelength.
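
To be clear about what I mean by palette matching, something naive like this (a sketch, not anyone's real keyer): classify each pixel as paintable grass by its distance to a few reference colors, and only composite the line over those pixels. The max_dist threshold here is the "buffer" I'm asking about; statistical filtering would presumably replace it with a fitted per-venue color distribution.

    import numpy as np

    def grass_mask(frame_rgb, palette, max_dist=30.0):
        """Naive palette matching: a pixel is 'paintable' if it lies within
        max_dist (Euclidean, in RGB) of any reference grass color.
        frame_rgb: (H, W, 3) uint8, palette: (K, 3) reference colors."""
        f = frame_rgb.astype(np.float32)
        p = palette.astype(np.float32)
        # Distance from every pixel to every palette entry: (H, W, K)
        d = np.linalg.norm(f[:, :, None, :] - p[None, None, :, :], axis=-1)
        return d.min(axis=-1) < max_dist

    def composite_line(frame_rgb, line_mask, mask, color=(255, 255, 0), alpha=0.6):
        """Blend the line color in only where the line overlaps grass,
        so players and officials occlude it naturally."""
        out = frame_rgb.astype(np.float32)
        paint = line_mask & mask
        out[paint] = (1 - alpha) * out[paint] + alpha * np.array(color, np.float32)
        return out.astype(np.uint8)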


I will have to ask. I don't work with this technology, and haven't touched the code. I work in computer vision for baseball, with a completely different set of challenges.


Is an automated "yellow box" to show the strike zone ever going to happen? I feel like that would have a huge impact on the viewing experience!


I dunno. I kind of like trying to judge it visually and waiting for the ump's call.

I can also imagine MLB having concerns similar to the NFL's. The NFL, for instance, insisted on removing the line while the ball is being spotted. That is, if they showed the strike zone and the ump's call seemed inconsistent with it, it wouldn't be a "good look" for the sport.


There might be trouble with depth perception there. When I watch baseball, I can't easily tell at what point the ball crosses the plate in the behind-the-mound view. When a pitcher throws a breaking ball, I usually have to look at the non-overlay strike zone they show afterwards to know where the ball was when it passed the batter.


That might be an interesting approach. Instead of physically tracking the orientation of the cameras, you could place markers at set points around the playing surface. Then, as long as enough of the markers were visible, the software could fit the virtual geometry of the surface onto the video. It's the same idea used in QR codes, where the three large squares mark three of the corners. I suspect it could work, but it would never be as reliable and precise as the real implementation.
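
A minimal sketch of that fitting step, assuming OpenCV, a calibrated camera, and surveyed marker positions (every number below is illustrative): with four or more markers of known field coordinates detected in the frame, solvePnP recovers the camera pose, and any virtual geometry can then be projected into the image.

    import numpy as np
    import cv2

    # Surveyed 3D positions of four fiducial markers, in meters
    # (illustrative field-centered coordinates, z = 0 on the turf).
    object_points = np.array([
        [0.0,  0.0,  0.0],
        [50.0, 0.0,  0.0],
        [50.0, 25.0, 0.0],
        [0.0,  25.0, 0.0],
    ], dtype=np.float64)

    # Where a detector found those markers in the current frame (pixels).
    image_points = np.array([
        [212.0, 480.0],
        [1710.0, 455.0],
        [1505.0, 210.0],
        [402.0, 225.0],
    ], dtype=np.float64)

    # Camera intrinsics and distortion (would come from calibration).
    K = np.array([[1400.0, 0.0, 960.0],
                  [0.0, 1400.0, 540.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume distortion already corrected

    # Recover the camera pose from the marker correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

    # Project a virtual line (e.g., a first-down line) into the frame.
    line_world = np.array([[30.0, 0.0, 0.0], [30.0, 25.0, 0.0]])
    line_pixels, _ = cv2.projectPoints(line_world, rvec, tvec, K, dist)
    print(line_pixels.reshape(-1, 2))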


Well, the field just becomes one large marker at that point, and you can use its boundaries as easy reference points. That is where the real future of AR and CV combined lies.
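
And since the surface is planar, you don't even need a full camera pose: a single homography from a few boundary reference points is enough to map field coordinates to pixels. A sketch with OpenCV (illustrative coordinates, not a real tracker):

    import numpy as np
    import cv2

    # Field corners in field coordinates (meters) and where they appear
    # in the frame (pixels); both sets of numbers are made up.
    field_pts = np.array([[0, 0], [100, 0], [100, 50], [0, 50]], dtype=np.float32)
    frame_pts = np.array([[150, 600], [1750, 580], [1500, 180], [380, 195]],
                         dtype=np.float32)

    # One homography maps the planar field into the image.
    H, _ = cv2.findHomography(field_pts, frame_pts)

    # Map a virtual line on the field into the frame.
    line = np.array([[[40.0, 0.0]], [[40.0, 50.0]]], dtype=np.float32)
    print(cv2.perspectiveTransform(line, H))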



