Self-Driving Cars Are a Dangerous Pipe Dream [video] (youtube.com)
9 points by askl 9 months ago | 9 comments



I can't say I'm a fan of self-driving cars, but these days, with everyone texting and talking on their phones while driving, I'm not a fan of people-driven cars either.

Of course there are going to be mistakes with self-driving cars, and they will kill people. But at some point - and maybe we're already there - self-driving cars are likely to be safer overall than distracted people attempting to drive. If a self-driving car kills or injures someone, it is all over the news as a major problem. In that same news story, they should also tell us how many human-driven cars cause death and injury because of human driver mistakes.


Self-driving cars have the same problem as "AI".

Limited "AI" is good, like GPT-4 and the image generation tools. But an "AI" that is similar to the ones in the movies? Nope.

Limited self-driving is good: on freeways and larger roads where the traffic is predictable, it's perfectly doable. Many cars can do it even today, even if they don't advertise it as "self-driving".

But letting a car loose on the roads of a large city? Hell no. Unless it starts reading the thoughts of everyone nearby, we don't have the technology for a self-driving car to navigate without issues.


Right, this has been the problem with FSD: it is incapable of dealing with situations it did not see in its training dataset. It seems to lack the abstract general reasoning that AGI would have, and that humans do have.

This is important for all the generalized cases of mayhem you see driving on local roads: wrong-way drivers, cars driving in reverse, emergency vehicles driving in opposing lanes, bikes going the wrong way outside the bike lane, etc.

There was a nice YouTube video last summer of a Tesla FSD car perfectly identifying that there was a tram moving in its direction & rendering it in 3D on the center screen. However, FSD tried to make an unprotected left turn in front of the tram anyway, the excuse being the usual "well it hadn't seen an unprotected left in front of a tram in its training data".

AGI, or more abstract and generalized reasoning, is required for the "things that never happened before happen all the time" state of the real world.

None of the self-driving systems seem able to infer intent from other vehicles' behavior or pedestrians' body language, something any B- or better human driver can do. From signals, coasting, or braking in certain manners, you can infer that another driver is about to turn, change lanes, or try to park, or that they see an obstacle in the road. This is free information that these systems do not attempt to use.
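
To make the idea concrete, here is a toy rule-based sketch. The features and return values are invented for illustration; a real system would need learned models over tracked trajectories, not three if-statements:

    # Toy rule-based intent guesser. Feature names are invented for
    # illustration, not taken from any real driving stack.
    def infer_intent(signal_on, decelerating, drifting_to_lane_edge):
        if signal_on and decelerating:
            return "likely turning or parking"
        if drifting_to_lane_edge and not signal_on:
            return "possible unsignaled lane change"
        if decelerating:
            return "may be reacting to something ahead"
        return "no clear intent"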


Human beings can't read the minds of everyone nearby, and they can drive reasonably well.


But you, as a human, can look at another human walking parallel to your car and, based on your life experience and knowledge, determine the probability of that human ending up in front of your moving car. And you'll do this in a split second. And you can do this for every other object in your vicinity.
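
As a very crude sketch of that computation, here is a constant-velocity projection in the car's frame. Everything about it (the coordinates, the 2-second horizon) is illustrative, and it captures none of the body-language cues a human actually uses:

    # Crude constant-velocity projection: will the pedestrian reach the
    # car's lane within our horizon? All numbers are illustrative.
    def may_cross_path(ped_pos, ped_vel, lane_y=0.0, horizon_s=2.0):
        x, y = ped_pos           # metres, in the car's frame
        vx, vy = ped_vel         # metres per second
        if vy == 0:
            return False
        t = (lane_y - y) / vy    # seconds until they reach the lane line
        return 0.0 <= t <= horizon_s

    # Pedestrian 1.5 m to the side, walking toward the lane at 1 m/s:
    may_cross_path((10.0, -1.5), (0.0, 1.0))   # True (t = 1.5 s)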

When can an FSD AI system do the same?


I would only say: why couldn't an AI do the same? I don't think they can at the moment; it's just a problem that needs more work.


There is no technical reason why we couldn't have an AI system do the same thing.

But the "just a problem that needs more work" requires impractical amounts of computing power and storage so it's not feasible for vehicle use. Unless the vehicle is a van where the whole back is a full-on datacenter and batteries to run it.

Tesla, in the end, is on the correct path by going camera-only. You CAN do everything with just a ton of cameras, computing power and a neural network. But it's not easy or computationally light.

For example: a LIDAR can get multiple datapoints from a single object in one pass (like the exact distance to every point on the object). A camera system has to calculate or guesstimate that every single time. It's possible, but it burns cycles that could be used elsewhere.
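
To put rough numbers on it: the standard camera approach is stereo triangulation, depth = focal length x baseline / disparity. The formula and values below are textbook ones, not anything vendor-specific, and finding the disparity itself (matching pixels across the two images) is where the cycles actually go:

    # Stereo triangulation: depth = f * B / disparity. A LIDAR return
    # gives the range directly; a camera pair must first find the
    # disparity by matching pixels between images.
    FOCAL_PX = 1000.0    # focal length in pixels (illustrative)
    BASELINE_M = 0.12    # camera separation in metres (illustrative)

    def depth_from_disparity(disparity_px):
        return FOCAL_PX * BASELINE_M / disparity_px

    depth_from_disparity(24.0)   # -> 5.0 metres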

Same with parking radars in cars. They're dirt cheap and tell you how far stuff is from the sensor. You CAN do it with cameras, but again, that's a ton of calculation and guesstimating.
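
Compare how little math the sensor side needs. Parking sensors are typically ultrasonic, and the range falls straight out of the echo time (values illustrative):

    # Ultrasonic parking sensor: range comes directly from the echo's
    # round-trip time, no heavy computation required.
    SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C

    def range_m(round_trip_s):
        return SPEED_OF_SOUND * round_trip_s / 2.0

    range_m(0.006)   # -> ~1.03 m to the obstacle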

Maybe we'll get there at one point, but we're not there yet.


Bottom line: any "intelligence" in AI is extremely limited, and self-driving is just one example.


Having owned a Tesla for a few years and followed all the AP/FSD and other vision-based self-driving projects for a while... I really think true self-driving is quite close to being a large subset of what some call AGI.

It is not some small trick on the path to a long and far-away AGI. It is a broad, general, and very difficult problem. Add in the need to make instantaneous decisions while operating at the limits of your sensor/compute suite, plus the consequences of being wrong, and it's a very large problem to crack.

Owning a Tesla from 2018 to 2022, experiencing the "progress" and observing it before and after, I think self-driving cars are closer to 20 years away than to 2 years away.

Watching the latest builds of FSD attempt to blow through stop lights and signs in 2023, and regress on the same stupid intersection test the well-known YouTubers have been running for half a decade... it would almost be comical if it weren't so scary.



