We created nukes, landed on the moon, took sludge out of the ground and used it to power the world. We connected this world with wires and glass fibers to build a real-time global communication system that also gathered all of the world's information in a singular and immediately accessible place.
Building a self-driving car is hard, but really not as tough as any of those achievements.
How quickly do you think we'd get self-driving cars if the USA spent 4% of the federal budget on it, like NASA received in the '60s? (That's about $40B a year for a decade.)
> How quickly do you think we'd get self-driving cars if the USA spent 4% of the federal budget on it, like NASA received in the '60s? (That's about $40B a year for a decade.)
At that cost we could adapt all infrastructure to suit self driving cars, instead of developing self driving cars to adapt to human infrastructure. But I think that kind of cost is always going to be beyond what's acceptable.
I think the discussion is mostly pointless because of diminishing returns: if you can have "99.9% full self driving" for a tiny fraction of the cost, who would want to pay to go from 99.9% to 100%?
Initially, human remote drivers will take care of the rest. Then there's a very slow commercial race toward using fewer humans, which drives the very slow march to 99.9% and 99.99% self-driving and so on. Handling the very last edge case on the very last route is basically something that requires AGI (as long as we don't adapt infrastructure).
I think remote drivers will probably have to "rescue" cars without piloting them, usually just assessing a situation and overriding something (driving through an obstacle etc). A passenger (if there is one) could do the same. But sometimes actual remote driving would be required of course.
I don’t think we’ve demonstrated anything autonomous beyond the most trivial kinds of autonomy (e.g. the V2) in all of our technology history.
It's an extremely narrow set of problems that have to be solved incredibly well. It mostly just comes down to creating an accurate 3d representation of the world from a bunch of sensors. You also have to correctly segment and label each object in that 3d representation. If you did those two things extremely well, the actual driving logic can be hardcoded.
The problem is that each of these systems has problems so they all have to improve and compensate for each other.
I don't know whether or not AGI needs to be developed to make a useful self-driving car, but as time goes on I'm beginning to believe that's the case.
Predicting motion once you have small time slices and very accurate 3d representations is very, very easy. You can easily calculate expected paths. You have to remember that computers see the entire situation at the same time. A bike doesn't just cut off a self-driving car the way it does a human. Humans are slow: our increments of time are large, in the hundreds of milliseconds, and we can only focus on a couple of things at a time. A computer will notice a slight change in velocity and acceleration within single-digit milliseconds. Then it just has to predict the probability of collision. These calculations are simple.
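To make the "these calculations are simple" claim concrete, here's a minimal sketch (all numbers and names are hypothetical): estimate a cyclist's velocity from two closely spaced position samples, extrapolate both paths at constant velocity, and flag any predicted near-miss within a horizon. Real systems use far richer motion models, but the core arithmetic really is this small.

```python
DT = 0.005  # assumed 5 ms between sensor frames

def estimate_velocity(p_prev, p_curr, dt=DT):
    """Finite-difference velocity estimate from two position samples."""
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

def predict_position(p, v, t):
    """Constant-velocity extrapolation t seconds ahead."""
    return tuple(pi + vi * t for pi, vi in zip(p, v))

def will_collide(p_car, v_car, p_bike, v_bike,
                 horizon=2.0, radius=1.5, step=0.05):
    """Walk the next `horizon` seconds in `step` increments and flag any
    moment where the predicted positions come within `radius` meters."""
    t = 0.0
    while t <= horizon:
        cx, cy = predict_position(p_car, v_car, t)
        bx, by = predict_position(p_bike, v_bike, t)
        if ((cx - bx) ** 2 + (cy - by) ** 2) ** 0.5 < radius:
            return True
        t += step
    return False

# Car heading +x at 10 m/s; bike 20 m ahead and 9 m to the side,
# cutting across the car's path at roughly 5 m/s.
v_bike = estimate_velocity((20.0, 9.025), (20.0, 9.0))
print(will_collide((0.0, 0.0), (10.0, 0.0), (20.0, 9.0), v_bike))  # → True
```

A human watching the same bike would need a large, obvious swerve to react; the finite-difference step picks the drift up from a 2.5 cm displacement between frames.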
Deciding what to do in these situations can very much be efficiently hardcoded using decision trees. No one right now working on self-driving cars dares to use a neural network or any other unexplainable & unbounded ml algorithm for policy. You have to be able to hard code in new edge cases as they emerge. You have to be able to study specific crashes or incidents and then adjust the decision-making scheme to specifically avoid that situation in the future.
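A toy illustration of what "hardcoded, auditable policy" means in practice (the scene keys and action names are invented for this sketch, not any vendor's API): an ordered list of explicit rules over labeled perception output. After an incident, you can point at the exact branch that fired and add or reorder rules.

```python
def plan_action(scene):
    """scene: dict of labeled facts produced by perception.
    Returns one of 'EMERGENCY_BRAKE', 'YIELD', 'SLOW', 'PROCEED'.
    Rules are checked in priority order; first match wins."""
    # Imminent obstacle overrides everything else.
    if scene.get("obstacle_in_path") and scene.get("time_to_collision", 99) < 1.0:
        return "EMERGENCY_BRAKE"
    # Yield whenever a pedestrian is near a crossing, right of way or not.
    if scene.get("pedestrian_near_crossing"):
        return "YIELD"
    if scene.get("light") == "red":
        return "YIELD"
    # Degrade gracefully under ambiguity.
    if scene.get("light") == "yellow" or scene.get("visibility") == "poor":
        return "SLOW"
    return "PROCEED"

print(plan_action({"light": "green", "pedestrian_near_crossing": True}))  # → YIELD
```

The point isn't that real stacks are this crude; it's that every branch is inspectable and individually adjustable after a crash investigation, which a monolithic learned policy is not.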
Truly, the hardest problem is taking in data from multiple sensors, segmenting it, and then labeling it, all in real time. The sensors are faulty and super expensive. There are also so many different objects out there. If you actually look at the ancillary startups in this industry, they're not working on "common-sense" general intelligence algorithms. They're working to make better and cheaper lidar. They're working on computer vision problems. They're working on image segmentation.
Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time were 0.00 seconds, it's too late to avoid a collision. The problem is that the robot didn't anticipate that the pedestrian would cross despite not having the right of way. Humans are better at reading social cues than robots. Maybe robots can learn that, but it's a significantly harder problem than path planning and image segmentation. This applies beyond pedestrians to other drivers as well, and to predicting their behavior on the road. And if you try to drive cautiously to avoid this potential scenario, you effectively stop and crawl every time you see a pedestrian and are not very useful for moving from point A to point B (not to mention all the pissed-off traffic behind you).
The reason it's difficult is because it's an uncontrolled environment, and the robot has to be able to anticipate what other drivers/cyclists/pedestrians will do. Robots have done wonders in controlled environments, but trying to bring them to the real world has always been a struggle.
The standard isn't "perfect under all conditions", it's "better than a human". Humans are, honestly, pretty bad at driving. The bar is not that high, perhaps unfortunately.
Why does a robot driver need to anticipate this? Does a human driver need to?
If I'm walking up to a pedestrian crossing and a car is approaching, I don't just step out into the road, even though I have the right of way. I try to make eye contact with the driver to see if they recognize I'm crossing. They'll often nod or do something similar to signal that they're letting me cross.
A machine has to understand these social cues as well. It might even be helpful if the machine has a way to signal its intentions back to pedestrians.
Computers can also have a much faster reaction time, so a human may need to predict one second ahead, but computers may be able to get away with less.
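Some rough arithmetic backs this up (the reaction times and deceleration here are assumed round numbers, not measured figures): stopping distance is reaction distance plus braking distance, d = v·t_react + v²/(2a). Shrinking the reaction term is the one advantage a computer gets for free.

```python
def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
    """Reaction distance (v * t) plus braking distance (v^2 / 2a),
    assuming a flat ~7 m/s^2 deceleration on dry pavement."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 50 / 3.6  # 50 km/h in m/s
human = stopping_distance(v, reaction_s=1.0)     # ~1 s human reaction
machine = stopping_distance(v, reaction_s=0.05)  # ~50 ms compute latency
print(round(human, 1), round(machine, 1))  # → 27.7 14.5
```

At 50 km/h the braking distance (~13.8 m) is identical for both; the computer only removes most of the ~14 m a human travels before touching the pedal. So faster reactions halve the stopping distance here, but they don't eliminate the need to anticipate.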
This is an assumption, and it has not been shown to be correct or even probable.
Alternatively you could develop braking technology which gives vehicles a stopping distance of 0 m, but this might be a bigger technological advance than full self-driving AI, and I'm not sure it would be that comfortable for the passengers...
One key part of driving is communicating - with pedestrians, cyclists, other drivers. This happens through body language and other fairly subtle cues.
When you can't make AI work for responding to questions given in text form on an extremely limited problem domain, how on earth would it work for something that's orders of magnitude less well defined and more broad?
I mean, it _could_ be hardcoded, but there are millions of edge cases, so it's pretty infeasible. I agree with the parent comment that full level 5 requires something close to AGI - the difficult part in getting a self-driving car is giving the AI something along the lines of "common sense", the ability to reason about what to do in an unfamiliar situation.
What happens when a street is temporarily closed but doesn't have the correct signage? What if there's a police officer or road worker signalling instructions by waving their hands? What if the lights at an intersection stop working? What happens if there's a car burning on the side of the highway and drivers need to change lanes to go around it?
And these are just some of the problems in a large American city. Think about rural areas, places with more aggressive traffic, places with wildly different written and unwritten traffic rules.
It does require AGI, at least if you're planning to drive on most of the world's roads, and not only on some "pampered" streets in the middle of the desert or on heavily-regulated and very well maintained streets like in Norway or Switzerland.
As a human, I have a quite "accurate 3d representation of the world", but even so, many times I'm left dumbfounded by what the people driving on the same streets as me are doing. And even if you do manage to replace all those other people with self-driven cars, how do you account for cows ending up in the middle of an interstate road (it happened to me at least once), for wrong street markings or no street markings at all, or for drunken bicyclists whom you can't see at night?
Here's someone following right along with the suggestion that regulations, not lack of technology is to blame: https://www.wired.com/story/outdated-auto-safety-regulations... The author, part of the Competitive Enterprise Institute, works for Google: https://services.google.com/fh/files/misc/trade_association_...
That was page one of my search results, but suffice to say Google's been insinuating this for a while, both from the Chris Urmson era and the John Krafcik one.
I specifically referred to Tesla in my post as well. I saw the suggestion that Elon's claims about release dates for Autopilot features were effectively timed to manipulate the market. I'd lend credence to that theory, or to the idea that Elon just has no clue how far he actually is from success. One of the two.
Both companies horribly obscure the fact that self-driving isn't around the corner.