
Humans don’t need LiDAR to recognize billboards





Self-driving cars can't rapidly and continuously move their cameras in multiple spatial directions the way humans do.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.


People don't have eyes in the back of their heads. Self-driving cars don't get drunk or distracted by cell phones. Comparing humans with AVs is apples & oranges. The only meaningful comparison is in output metrics such as Accidents & Fatalities per mile driven. I'd be receptive to conditioning this metric on the weather... so long as the AV can detect adverse conditions and force a human to take control.

Chimps have us beat when it comes to short-term visual memory (Humans can't even come close).

Mantis shrimp have us beat when it comes to quickly detecting colors since they have twelve types of color photoreceptors vs. our three.

Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.

Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs in spatial directions like we do but still have excellent visual navigation skills.


Humans have visual processing which converts the signals from our three types of cones into tens to hundreds of millions of shades of color. Mantis shrimp don't have this processing. Mantis shrimp can only see 12 shades.

Human color detection is about six orders of magnitude greater than mantis shrimp's.
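The "orders of magnitude" claim above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming the commonly quoted (and rough) figure of ~10 million distinguishable human shades against one "shade" per mantis shrimp receptor class:

```python
import math

# Rough numbers from the comments above; both are illustrative estimates,
# not measurements.
human_shades = 10_000_000   # commonly quoted estimate for human color discrimination
shrimp_shades = 12          # one shade per photoreceptor class, no combinatorial mixing

ratio = human_shades / shrimp_shades
print(f"ratio ≈ {ratio:,.0f}, about {math.log10(ratio):.1f} orders of magnitude")
```

With these assumed inputs the ratio lands near six orders of magnitude, consistent with the comment; picking "hundreds of millions" instead would push it closer to seven.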


Right, but the theory is that they have us beat when it comes to speed since they are directly sensing the colors whereas we are doing a bunch of post-processing.

I think the point was that brains are the best pattern and object detection computers, not necessarily just human brains.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.


Humans have about 2° field of sharp vision. Computers with wide angle lenses don't have to oscillate like the eyes do.

Humans are underrated.

On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave, and right now traffic density is too low for that.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)


No, they're underrated. We all know the stats. Driving isn't the safest activity. Having said that, there's a lot of wishful thinking that the current state of ML could do any better if we were to just put it on the roads today as-is.

You are right; for example, humans don't need anywhere near the amount of training data that AIs need.

I learned to drive a car when I was 13. My older cousin took me to Warped Tour, got hammered, and told me I had to drive home. I didn't know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents' house in Pasadena. Love to see an AI handle that cold start problem.

Cold start? You had 13 years!

Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we're great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.


Exactly. The way to improve performance on a lot of AI problems is to get past the human-like tendency toward individualistic AI, where every AI implementation has to deal with reality all on its own.

As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.

So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.
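The experience-sharing idea above resembles federated averaging: each agent learns locally, then a merge step folds every agent's update into one shared model. A minimal sketch, with all names, shapes, and the toy gradient entirely illustrative (not any real AV stack):

```python
import numpy as np

def local_update(weights, gradient, lr=0.01):
    """One car's local learning step on its own experience."""
    return weights - lr * gradient

def federated_average(all_weights):
    """Merge every car's weights into one shared model (simple mean)."""
    return np.mean(all_weights, axis=0)

rng = np.random.default_rng(0)
shared = np.zeros(4)                      # the fleet-wide model
for _ in range(3):                        # a few sync rounds
    # each of 5 cars takes a local step on its own (here, random) gradient
    fleet = [local_update(shared, rng.normal(size=4)) for _ in range(5)]
    shared = federated_average(fleet)     # "culture", updated every round
print(shared)
```

In a real system the "gradient" would come from each car's driving experience, and the sync cadence would be limited only by network bandwidth, which is the point the comment is making.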


It seems to me that humans require and get orders of magnitude more training data than any existing machine learning system. High "frame rate", high resolution, wide angle, stereo, HDR input with key details focused on in the moment by a mobile and curious agent, automatically processed by neural networks developed by millions of years of evolution, every waking second for years on end, with everything important labelled and explained by already-trained systems. No collection of images can come close.
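That claim can be put in rough numbers. A back-of-envelope sketch, where every input (effective frame rate, retinal resolution stand-in, waking hours) is a crude assumption chosen only for illustration:

```python
# Very rough estimate of raw visual "training data" a human sees by adulthood.
effective_fps = 10             # assumed effective "frames" per second
pixels_per_frame = 10_000_000  # crude stand-in for retinal resolution
bytes_per_pixel = 1            # one byte per pixel, for simplicity
waking_seconds_per_year = 16 * 3600 * 365
years = 18

total_bytes = (effective_fps * pixels_per_frame * bytes_per_pixel
               * waking_seconds_per_year * years)
print(f"~{total_bytes / 1e15:.0f} petabytes of raw visual input")
```

Even with these conservative assumptions the total comes out in the tens of petabytes, far beyond any labeled image collection.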

Depends on how you quantify data a human processes from birth to adulthood.

You're forgetting the millions of years of evolution.



