Obviously, as there is very little alternative to human error right now. But there's no reason to believe that to be the case when you introduce autonomous vehicles. There's a whole new variable.
Still, a required step for progress.
If you still don't believe me, go compare the "optimization" of your local DMV with the Google labs where these autonomous cars are being created, and see if you walk away with any illusions of Government grandeur.
Manufacturing defects, e.g. faulty brakes.
Have we? The highest-profile one I know of turned out not to be software at all (referring to the Toyota accelerator issue), but rather a combination of floor mats and human error.
Care to expand on why you think that?
I think there are many reasons to believe human errors will still be the majority cause of accidents after some cars are autonomous. The main, and most touted reason, is that driving is one of the types of tasks computers are much better suited to than human beings. Driving well is, mostly, just following a few clear repetitive rules over and over. We still fail at that very often, but computers excel at following clear rules repetitively.
So there are many reasons to believe computers will outperform humans at driving. And, in my opinion, by a large margin. I'm intrigued to know why you wouldn't think so.
Edit: As I read it, this is pretty straightforward, doing the following things:
- Makes it explicitly legal to operate an autonomous car on public roads, if your car has met a safety standard yet to be devised.
- Authorizes the establishment of safety standards for autonomous vehicles by the California Highway Patrol.
- Until these standards are devised, it does not prohibit autonomous cars from operating on CA public roads.
"Autonomous Cars" in this case are defined fairly narrowly: a car capable of driving "without active control and continuous monitoring of a human operator".
I think most drivers would instinctively take control of the car if they felt in danger, whether or not it's statistically in their interest.
On the other hand, I have to say that I don't exactly have the greatest level of faith in the California Assembly based on past performance. Here's hoping they buck the trend and establish a framework to encourage rather than inhibit innovation.
If I were making said cars, I would not sell them without that feature. Otherwise, if something ever happened to the car I would have no decent record of what my software was trying to do in the incident. You would have no way of fixing serious issues, or absolving yourself of fault.
This is the reasoning behind airline black boxes.
Unless of course you're being accused of a crime that was either caught on film two minutes ago or involved a car crash.
EDIT: Or was caught on film two minutes prior and involved a car crash.
> I'd prefer not to have all my actions recorded
Fair enough, but it's too late for some countries.
There is a cost to that, of course: Google's system uses $100k of sensors and isn't well integrated into the body of a car. This means it's farther off from being seen in a production car. I could see it being sold as an aftermarket add-on for specific cars, maybe even only to business customers who then want to sell the service an automated car can provide.
Several teams competed, entering driverless cars which interacted with each other and some vehicles operated by real people on a closed course. Carnegie Mellon's Boss car completed all the DARPA conditions and won the grand prize.
A system like this is only going to be as good as the data coming to the car, and given the knowledge that all cars will react a certain way to a certain stimulus, it's a lot easier to design a low-tech hack that would kill a lot of people. Here's one that comes to mind:
Given a two-lane road with a narrow shoulder and an embankment, place a small boulder on the right side of each lane. A human being will either swerve off the embankment or rip their transmission out on the rock. What's a bot going to do?
Go back to the beginning of the 20th century and think how much damage could be caused by creating a nationwide electric grid. People could electrocute people at will. Personally, I'm kind of glad we went ahead with it.
That's true. But blowing up a diesel truck is hard to do remotely, and requires someone to actively attack at a specific place and time.
I'm not sure why I got downrated for my post; I'm just asking, doesn't this create a lot of security holes and attack vectors that need to be studied before allowing it? I mean, I haven't heard anything about security, at all. The focus seems to be on a safe driving experience, but where's the white paper on counter-hacking measures? Can you imagine if Google launched Gmail without any kind of plan to mitigate stolen passwords or hijacked accounts? As it is, there are plenty of people who do have their accounts hijacked. Luckily, that doesn't lead to collisions and deaths.
Consider for a moment how many people run Windows and IE, with the latest security updates, who are still vulnerable to zero day exploits. Consider how many don't update their software and get swept up in botnets a few days later. Now imagine each and every compromised PC has physical control over 2-3 tons of rolling aluminum and steel, that can go anywhere on a public highway, with human beings inside it.
An attacker who had taken control over a botnet of compromised autonomous cars could drive swarms of them wherever they wanted by remote control.
Now, rather than downvote me, tell me what security protocols will be in place to prevent the scenarios I've outlined.
It really isn't. Rebels/terrorists/freedom fighters† the world over could tell you how to detonate an explosive using an off-the-shelf prepaid cell phone.
† pick your preference
I wouldn't discount having your car stolen remotely, but hijacking with humans inside is unlikely to work, and so is crashing into things.
Replace wall and cliff with the obstacle/dangerous environment of your choice, and the sentence will always end with "some coder somewhere made a mistake." It really is as simple as that. Sensor wins over map when it comes to avoiding a crash. What possible condition can you come up with that would make it desirable for a car to ignore its sensors and go with what a map says is supposed to be in front of it?
If sensors say the road turns and you go with the sensors, what happens to the navigation? Eventually they become irreconcilable. The car will completely lose track of where it actually is, having only local (and perhaps some limited amount of historical) sensor data.
The scenarios aren't just limited to "STOP or CRASH", there's a lot of subtle ways things can go wrong.
..Yeah, as opposed to driving nonsensically? I think I'll take the car that defaults to whatever won't kill everyone around me.
>Or if the map is outdated, which will of course happen.
Assuming the cars download new maps on a regular basis, I'd consider this situation pretty unlikely on any official road or highway (unless we are to accept that in certain locations every car will consistently stop driving).
However, if this situation were to occur, option one: sync with the latest map data; failing that (network issues, etc.), option two: pull to the side of the road, stop, and enter manual mode.
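That fallback chain can be sketched as a tiny decision function. This is purely illustrative; the states and function name are hypothetical, not any manufacturer's actual API:

```python
from enum import Enum, auto

class FallbackAction(Enum):
    CONTINUE = auto()   # map and sensors agree; keep driving
    SYNC_MAP = auto()   # map looks stale; fetch the latest map data
    PULL_OVER = auto()  # can't reconcile; stop safely and enter manual mode

def choose_action(map_matches_sensors: bool, network_available: bool) -> FallbackAction:
    """Hypothetical decision logic for a stale-map situation."""
    if map_matches_sensors:
        return FallbackAction.CONTINUE
    if network_available:
        return FallbackAction.SYNC_MAP   # option one: sync with the latest map
    return FallbackAction.PULL_OVER      # option two: pull over, hand control to a human
```

The point is just that "outdated map" doesn't have to mean "undefined behavior": it can be an explicitly handled state with a safe default at the bottom.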
>If sensors say the road turns and you go with the sensors, what happens to the navigation? Eventually they become irreconcilable. The car will completely lose track of where it actually is, having only local (and perhaps some limited amount of historical) sensor data.
What do you mean by this? Google Maps and most GPS navigation systems recalculate routes perfectly fine.
* Being possible is not the same as being easy or likely. For example, what if the system only accepts maps digitally signed by the company? You now have to either get the signing cert from the company or break the signature scheme.
* If there's a person inside, they can still take over and drive it themselves, or tell it to park and ask for technical support.
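The verify-before-accept idea in the first bullet can be sketched in a few lines. A real deployment would use an asymmetric signature (e.g. RSA or Ed25519), so the car only ever holds a public key; the sketch below uses Python's standard-library HMAC with a shared secret purely as a stand-in to keep it self-contained, and the key and map bytes are made up:

```python
import hmac
import hashlib

# Stand-in for the manufacturer's signing key. In practice this would be an
# asymmetric key pair, with only the public half on the vehicle.
SIGNING_KEY = b"manufacturer-secret"  # hypothetical

def sign_map(map_bytes: bytes) -> bytes:
    """What the manufacturer's build pipeline would attach to a map release."""
    return hmac.new(SIGNING_KEY, map_bytes, hashlib.sha256).digest()

def accept_map(map_bytes: bytes, signature: bytes) -> bool:
    """The car rejects any map update whose signature doesn't check out."""
    expected = hmac.new(SIGNING_KEY, map_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

legit = b"road graph v42"
sig = sign_map(legit)
accept_map(legit, sig)            # authentic update: accepted
accept_map(b"evil detour", sig)   # tampered update: rejected
```

With something like this in place, feeding the car a malicious map means stealing the signing key, not just spoofing an update server.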
Not networking vehicles because they might be hacked into a botnet is a little bit like not networking personal computers for the same reason. We could have just decided to not have an internet. Or we could have decided that we wouldn't allow people to print flyers because they might organize a revolution.
We will definitely want communications security though.
Of course, there is the possibility of things going wrong, but, on average, things generally get better faster than the alternative. (At least, I hope so ;).
Please see Halting State by Charles Stross, aka cstross on this forum.