
Yes, that is explicitly part of the point Doctorow is making. It’s why the essay mentions the fact that humans see faces in clouds, etc. Humans typically know when they are “hallucinating” a face, and ML algorithms don’t. When humans see a face in the snow, they post it to Reddit; they don’t warn their neighbor that a suspicious character is lurking outside. This is the distinction the essay draws.



People perceive nonexistent threats all the time and call the police. The threshold is simply higher than current AI's, but that's a question of magnitude rather than an inherent difference. Fine-tune a reinforcement learning model on five years of 16-hours-a-day video and I'm sure it will also end up with a better threshold.
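To make the "question of magnitude" point concrete, here's a toy sketch (every name and number in it is invented for illustration and is not how any real detector works): both the model and the human are treated as applying a report-it threshold to the same kind of confidence score, and only the threshold differs.

    # Toy illustration of the "magnitude, not kind" argument (all values invented).
    def should_report(face_confidence, threshold):
        """Act on a detection only if confidence clears the threshold."""
        return face_confidence >= threshold

    cloud_face = 0.62        # pareidolia: face-like, but weak evidence
    person_at_door = 0.97    # strong evidence

    model_threshold = 0.50   # hypothetical loosely tuned detector
    human_threshold = 0.90   # hypothetical human bar for acting on it

    for score in (cloud_face, person_at_door):
        print(score,
              "model reports:", should_report(score, model_threshold),
              "human reports:", should_report(score, human_threshold))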


Humans have general knowledge about the world that tells them there isn't a giant human in the sky, no matter how good the face looks.

Train it on as many images as you want; as long as a good-enough face shows up, the model is going to report a positive match. The entire problem is that it's missing the upper level of intelligence that asks, "That looks like a face, but could it actually be a human?"
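To put that missing upper level in concrete terms, here's a toy sketch (the size-and-location check is a crude stand-in I invented for "world knowledge", not a description of any real system): recognition alone says "that looks like a face", and a separate plausibility check is what rejects the giant face in the sky.

    # Toy illustration: recognition plus a made-up sanity check against world knowledge.
    def recognized_as_face(confidence):
        return confidence > 0.8

    def plausible_in_context(apparent_size_m, location):
        # Crude stand-in for general world knowledge: people are not
        # hundreds of metres wide and do not float in the sky.
        return apparent_size_m < 3 and location != "sky"

    def probably_a_person(confidence, apparent_size_m, location):
        return recognized_as_face(confidence) and plausible_in_context(apparent_size_m, location)

    print(probably_a_person(0.95, 500.0, "sky"))    # False: face-like, but can't be a person
    print(probably_a_person(0.95, 0.3, "porch"))    # True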


>Humans have general knowledge about the world that tells them there isn't a giant human in the sky, no matter how good the face looks.

Is there? Humans used to think the gods were literally watching them from the sky and that the constellations were actual creatures placed in the night sky. So this seems like behavior learned from data rather than some inherent part of human thinking.

>Train it on as many images as you want; as long as a good-enough face shows up, the model is going to report a positive match.

So will a human, if something is close enough to a face. A shadow at night, for example, might look just like a human face. Children will often think there's a monster in the room or under their bed.


Children will think there is a monster under their bed based on no evidence at all. That speaks to something beyond object recognition that happens at a different processing layer.

Humans do not need to be trained on billions of images from around the globe to semantically understand where human faces are not expected to appear. Modern AI can certainly recognize faces very well with that level of training, but it still doesn't even understand what a face is (i.e., it has no model of reality to verify its identifications against).


But very seldom do they do that because of a hallucination.


Well, we seem to experience such things in a split second, and then we correct ourselves. We use some kind of reasoning to double-check suspicious sensory experiences.

(I was thinking of this when I was driving in a new place. Suddenly it looked like the road ended abruptly and I got ready to act, but of course it didn't end and I realized that just a split second later.)



