Hacker News

Maximizing probability naively sometimes works, but of course it can produce misleading garbage.

And then you can be fooled, instead of correctly concluding that the image was unreadable.

There is no free lunch, even with robust estimators: they will still make mistakes. For image-quality assessment, a mistake here or there is fine. For actual recognition? Terrible.

Better than the human brain? Show it.

People are pretty good at reading blurry text when trained, but I'm not aware of a test pitting trained people against a machine.

(No, Mechanical Turk does not count as trained at a specific task.)




The human brain can just as easily predict erroneously; we just seldom have only a single shot at it. For visual recognition we usually look at the image for an extended amount of time, withholding "judgement" until the probability that what we see is indeed what we think it is becomes sufficiently high. Neural networks also output a probability (when trained on problems that require it), which can signal their confidence in their answer.
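A minimal sketch of that idea, assuming a plain softmax classifier (the function names and the 0.9 threshold are hypothetical, not from any specific system): take the top class only when its probability clears a confidence threshold, and abstain otherwise, analogous to a human withholding judgement.

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    z = np.asarray(logits, dtype=float) - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def classify_with_confidence(logits, threshold=0.9):
    """Return (class_index, probability) if the top probability clears
    the threshold; otherwise (None, probability) to signal abstention,
    e.g. 'this text is unreadable'."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return best, float(probs[best])
    return None, float(probs[best])

# Sharply peaked logits -> a confident prediction for class 0.
print(classify_with_confidence([8.0, 0.5, 0.1]))
# Nearly flat logits -> the model abstains rather than guessing.
print(classify_with_confidence([1.0, 0.9, 0.8]))
```

The threshold trades coverage for reliability: raising it makes the system abstain more often but be wrong less often when it does answer, which matters more for recognition than for image-quality scoring.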



