
Captchas are hard because there's only so much an algorithm can extract from spatial information. Computers are excellent with temporal data, given essentially unlimited memory for past video frames. Computers benefit more from increased information than humans do.


This is computer vision, and I'm somewhat of an expert in the area. I can tell you that video sequence recognition is a _much_ harder problem than image recognition.

For example, if you showed a letter made out of random noise moving through random noise, current computer vision algorithms would not be able to recognize anything, yet you would pick out that letter immediately. The human visual system is really amazing in that sense.


It should be possible to do this with an animated GIF. Do you have any references/examples I could use as a starting point?


Oh. I remember reading some vision paper where the supplementary materials included a couple of videos with moving letters. I doubt I'll be able to find it that easily.

Should be relatively easy to code with any library that can draw text on a bitmap, like PIL, matplotlib, etc. Use ffmpeg to make a video out of the frames.

1. Draw the letter masks (just black/white).
2. Fill the letters with noise.
3. Fill the background with noise.
4. Copy the letters onto the background using the mask, at position X,Y.
5. Add a little bit of new noise to the letters.
6. Modify the X,Y coordinates (move the letters SLIGHTLY).
7. Go to step 3.
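
Here's a minimal sketch of that recipe, assuming numpy and Pillow (PIL) are available. The letter "A", the frame size, the noise amplitudes, and the frame count are arbitrary choices for illustration, not taken from any paper. It writes an animated GIF directly, which also covers the GIF question upthread.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    W, H = 200, 80            # frame size
    N_FRAMES = 40
    rng = np.random.default_rng()

    # 1. draw a black/white letter mask (default font is small; load a TTF for bigger letters)
    mask_img = Image.new("L", (60, 60), 0)
    ImageDraw.Draw(mask_img).text((5, 5), "A", fill=255, font=ImageFont.load_default())
    mask = np.array(mask_img) > 0

    # 2. fill the letter with noise
    letter_noise = rng.integers(0, 256, mask.shape, dtype=np.uint8)

    x, y = 20, 10             # letter position
    frames = []
    for _ in range(N_FRAMES):
        # 3. fill the background with fresh noise
        frame = rng.integers(0, 256, (H, W), dtype=np.uint8)
        # 4. copy the noisy letter onto the background at (x, y) using the mask
        region = frame[y:y + mask.shape[0], x:x + mask.shape[1]]
        region[mask] = letter_noise[mask]
        frames.append(Image.fromarray(frame))
        # 5. add a little new noise to the letter so its texture never repeats exactly
        letter_noise = np.clip(letter_noise.astype(int)
                               + rng.integers(-20, 21, mask.shape), 0, 255).astype(np.uint8)
        # 6. move the letter slightly, staying inside the frame
        x = int(np.clip(x + rng.integers(-2, 3), 0, W - mask.shape[1]))
        y = int(np.clip(y + rng.integers(-2, 3), 0, H - mask.shape[0]))

    # save as an animated GIF
    frames[0].save("captcha.gif", save_all=True, append_images=frames[1:],
                   duration=50, loop=0)

If you'd rather have a video, save each frame as a numbered PNG instead and stitch them with something like ffmpeg -framerate 20 -i frame_%04d.png out.mp4.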



This is simple, brilliant. The best solution I have ever seen.

Do you have a patent already? :)



