California Proposes Rules For Autonomous Cars (motorauthority.com)
84 points by phreeza on Mar 10, 2012 | 54 comments



>"The vast majority of accidents are due to human error."

Obviously, as there is very little alternative to human error right now. But there's no reason to believe that will remain the case once you introduce autonomous vehicles. They're a whole new variable.

Still, a required step for progress.


Manufacturing defects, bad roads, poor signage: government has optimized these pretty well.


Weather conditions as well. Black ice is one hell of a thing.


An interesting note from Sebastian Thrun (I think in the second office hours of his CS373 course): Google's self-driving car can't currently drive in snow, because the snow covers the road and the sensors can't see the lane lines (though they can adjust for rain).


With roads and poor signage, citizens tend to "optimize" the government (vote them out) if it fails to do a reasonable job on those issues, given how much tax it swallows up on them. And with manufacturing defects, the common law provides remedies and puts the worst offenders out of business (along with a free press providing consumer reviews for the milder cases). I'm not some foaming-at-the-mouth libertarian, but let's not pretend government is the one responsible for common-sense things (just look at the efforts Ford went through to get people to wear seatbelts, or Saab's safety obsession that arguably put them out of business).

If you still don't believe me, go compare the "optimization" of your local DMV with the Google labs where these autonomous cars are being created, and see if you walk away with any illusions of government grandeur.


very little alternative

Manufacturing defects, e.g. faulty brakes.


Having the car make driving decisions is a different realm from manufacturing defects. We have already seen some software problems with various cars, and as the software gets more complex, I would expect that to be magnified. Look at the aerospace industry for examples (e.g. the F-22, Airbus).


> We have seen some software problems with various cars already

Have we? The highest-profile one I know of turned out not to be software at all (referring to the Toyota accelerator issue[1]), but rather a combination of floor mats and human error.

[1] http://www.nhtsa.gov/PR/DOT-16-11


The Civic Hybrid is one example (actually, the base-model Civic needed a firmware upgrade). Auto manufacturers do not produce error-free software, and there are a lot more cars than planes.


> But there's no reason to believe that to be the case when you introduce autonomous vehicles.

Care to expand on why you think that?

I think there are many reasons to believe human error will still be the majority cause of accidents after some cars are autonomous. The main, and most touted, reason is that driving is one of the types of tasks computers are much better suited to than human beings. Driving well is, mostly, just following a few clear repetitive rules over and over. We still fail at that very often, but computers excel at following clear rules repetitively.

So there are many reasons to believe computers will outperform humans at driving. And, in my opinion, by a large margin. I'm intrigued to know why you wouldn't think so.


Here's the proposed text, for anyone who's interested: http://leginfo.ca.gov/pub/11-12/bill/sen/sb_1251-1300/sb_129...

Edit: As I read it, this is pretty straightforward, doing the following things:

- Makes it explicitly legal to operate an autonomous car on public roads, if your car has met a safety standard yet to be devised.

- Authorizes the establishment of safety standards for autonomous vehicles by the California Highway Patrol.

- Until these standards are devised, it does not prohibit autonomous cars from operating on CA public roads.

"Autonomous Cars" in this case are defined fairly narrowly: a car capable of driving "without active control and continuous monitoring of a human operator".


Prediction: at some point, there will be an accident involving an autonomous car. The event data recorder (aka the "black box") will indicate that the human operator took control of the vehicle before the accident occurred. The driver/passenger, however, will claim this was not true, and a lawsuit will commence, with claims that the EDR was hacked or that the car manufacturer/software provider modified the EDR to falsely blame the human in the event of an accident.


More generally, how can we evaluate an autonomous car's effectiveness in avoiding an accident, if there is always a human sitting in the driver's seat?

I think most drivers would instinctively take control of the car if they felt in danger, whether or not it's statistically in their interest.


Won't the driver perhaps claim the vehicle was going to crash and they attempted to stop it?


I would really love this if it creates a minimally-invasive legal framework to enable innovation and curb nonsense like pedestrians jumping in front of cars and trying to sue the manufacturer when they get hit (hopefully while providing a mechanism where manufacturers can indemnify their vehicles without too much liability danger in such cases).

On the other hand, I have to say that I don't exactly have the greatest level of faith in the California Assembly based on past performance. Here's hoping they buck the trend and establish a framework to encourage rather than inhibit innovation.


Have cameras on the car. Have them record into a 120-second ring buffer. When the car detects a crash, write the buffer to disk with a timestamp.
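
A minimal sketch of that scheme in Python (the names and the frame source/crash signal are hypothetical): a deque with a fixed maxlen gives you the rolling 120-second window for free, and nothing touches disk until a crash is flagged.

    import collections
    import time

    class CrashRecorder:
        """Hold only the last `seconds` worth of frames in memory."""

        def __init__(self, seconds=120, fps=30):
            self.buffer = collections.deque(maxlen=seconds * fps)

        def add_frame(self, frame_bytes):
            # Oldest frames silently fall off the far end of the deque.
            self.buffer.append((time.time(), frame_bytes))

        def on_crash_detected(self):
            # Persist the buffered window, stamped with the crash time.
            path = "crash_%d.raw" % int(time.time())
            with open(path, "wb") as f:
                for _ts, frame in self.buffer:
                    f.write(frame)
            return path

    # recorder = CrashRecorder(); feed add_frame() from the cameras,
    # call on_crash_detected() from the crash sensor.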


Ugh, as a driver I'd prefer not to have all my actions recorded regardless of whether it's beneficial to me. This reeks of invasion of my privacy and possibly opens me up to prosecution if the camera were subpoenaed in a criminal proceeding for something unrelated to the car's function. Same reason I don't leave a GPS trail everywhere by choice.


It's highly likely that any autonomous car will have something very similar but with sensor data instead of video.

If I were making said cars, I would not sell them without that feature. Otherwise, if something ever happened to the car, you would have no decent record of what the software was trying to do in the incident, no way of fixing serious issues, and no way of absolving yourself of fault.

This is the reasoning behind airline black boxes.


I don't think you're going to be outed by a 120-second buffer, especially if it isn't written to disk until there's a crash. (Heck, two minutes is probably a shorter time than it takes to open the hood and spray the RAM with liquid nitrogen.)

Unless, of course, you're being accused of a crime that was either caught on film two minutes ago or involved a car crash.

EDIT: Or was caught on film two minutes prior and involved a car crash.


In the UK we have automatic number plate recognition and a nationwide network of CCTV. Some places can recognise a stolen car and dispatch police within 60 seconds. (Notably the City of Westminster and Heathrow airport.)

> I'd prefer not to have all my actions recorded

Fair enough, but it's too late for some countries.


I mean no offence (I fully recognize that the UK is a sovereign state with rules that largely satisfy its own culture and people), but I feel that if we universalize this idea ("Fair enough, but it's too late for some countries.") the result will be nothing more than a race to the bottom.


Does anyone know if Nevada or California have discussed operating an autonomous vehicle while intoxicated yet?


:-) The SEO kings are going to be waaaaayyy out in front on those terms now. Shall we start the domain rush?


Are there companies working on this besides Google?


Car companies. Google certainly wasn't the first to work on self-driving cars; they just got a lot of time in the tech press for it. The big car companies have been working on it for years. I can't find any details, but I remember hearing about some companies working on it 8-10 years ago. More recently, BMW have been working on it: http://www.motorauthority.com/news/1072117_on-the-road-with-...


There is a difference in complexity, though. Most systems developed by the automotive companies are just a step above highway lane keeping and adaptive cruise control. Google's system handles city roads with pedestrians and more complex traffic patterns.

There is a cost to that, of course: Google's system uses $100k of sensors and isn't well integrated into the body of the car. That means it's farther off from being seen in a production car. I could see it being sold as an aftermarket add-on for specific cars, maybe even only to business customers who want to sell the service an automated car can provide.


You're right, there is a difference. Google has better software; the car companies have better cars. If Google does continue to push the technology, I hope they license it. I dread to think what an actual Google car would be like. It's an interesting market that's been progressing for years, and it seems we are now very close to seeing it in production models.


Just look at the participant list for the DARPA Grand Challenge and Urban Challenge competitions: http://archive.darpa.mil/grandchallenge/teamlist.asp (though you also need to look past some of the university affiliations to see the partner companies: VW for the Stanford team, for example).


Is there any article/video (I didn't investigate) showing multiple autonomous cars interacting? I'd be curious to see if there's any behavioral resonance leading to epic havoc.


DARPA Urban Challenge (2007): http://www.google.com/search?tbm=vid&q=darpa+urban+chall...

Several teams competed, entering driverless cars which interacted with each other and some vehicles operated by real people on a closed course. Carnegie Mellon's Boss car completed all the DARPA conditions and won the grand prize.


Thanks a lot. I wish there were a higher density of cars. Also, different teams means different systems, which might introduce good jitter to avoid systemic locks. Anyhow, it feels pretty safe; can't wait to see it become standard.


One issue that I haven't seen talked about much is system-wide control instability if everything is automated, as you have alluded to. I also don't know if anyone is working on preventing crosstalk and interference when you have several dozen vehicles, all with similar lasers and radars, in close proximity. If you operate two of the popular Velodyne lasers (used on the Google vehicles) near each other, you get a good amount of noise on both sensors.


Excellent. The sooner we get the laws into place, the sooner autonomous cars become a reality. Does anyone know if California's laws are modeled after Nevada's?


Looking at http://www.leg.state.nv.us/register/2011Register/R084-11I.pd..., it appears Nevada decided to define a class of license (G) for the operation of autonomous vehicles. The CA legislation just makes operating an autonomous car legal if it meets certain safety and performance criteria, which it doesn't define (instead, it asks the CHP to come up with those rules). The way this bill is drafted, it looks to me like CA would be much less restrictive of autonomous car operation than Nevada.


Has anyone written or thought deeply about all the ways that self-driving cars could be tricked or hacked into causing accidents, kidnapping passengers, driving off cliffs, running people over, or otherwise creating havoc? Seems like it was just a couple years ago that Toyota was recalling cars over a "sudden acceleration" problem.

A system like this is only going to be as good as the data coming to the car, and given the knowledge that all cars will react a certain way to a certain stimulus, it's a lot easier to design a low-tech hack that would kill a lot of people. Here's one that comes to mind:

Given a two-lane road with a narrow shoulder and an embankment, place a small boulder on the right side of each lane. A human being will either swerve off the embankment or rip their transmission out on the rock. What's a bot going to do?


Or you could fill a truck with diesel and fertilizer and blow it up. This technology is far more likely to reduce deaths associated with cars than to increase them.

Go back to the beginning of the 20th century and think how much damage could be caused by creating a nationwide electric grid. People could electrocute people at will. Personally, I'm kind of glad we went ahead with it.


For the first time in history, a large portion of adult lives will hinge on the unhackability of consumer gear.


Yes, exactly my point. Before, it was relatively rare for a hack or a computer crash to lead to death. This radically changes the probability of that occurring, so saying we'll take standard security measures and stay complacent until something happens is probably not a responsible reaction, no matter how great the technology or how gung-ho people are to see it deployed.


[EDIT] Again, I really don't understand why I'd be downvoted for asking a legitimate question like this. It's rather disconcerting. While it's great to think about a future in which this technology is safe and widely available, it seems to me I'm being attacked for simply asking the basic security questions I would ask about any large system going through trials prior to mass rollout. And rather than hearing any specific answers, I'm just being attacked and voted down. I think anyone who's spent time in IT would probably ask these things before deploying a new system in their company office, so I don't think it's unreasonable to ask them about a potentially game-changing social innovation.[/EDIT]

That's true. But blowing up a diesel truck is hard to do remotely, and requires someone to actively attack at a specific place and time.

I'm not sure why I got downrated for my post; I'm just asking, doesn't this create a lot of security holes and attack vectors that need to be studied before allowing it? I mean, I haven't heard anything about security, at all. The focus seems to be on a safe driving experience, but where's the white paper on counter-hacking measures? Can you imagine if Google launched Gmail without any kind of plan to mitigate stolen passwords or hijacked accounts? As it is, there are plenty of people who do have their accounts hijacked. Luckily, that doesn't lead to collisions and deaths.

Consider for a moment how many people run Windows and IE, with the latest security updates, who are still vulnerable to zero day exploits. Consider how many don't update their software and get swept up in botnets a few days later. Now imagine each and every compromised PC has physical control over 2-3 tons of rolling aluminum and steel, that can go anywhere on a public highway, with human beings inside it.

An attacker who had taken control of a botnet of compromised autonomous cars could drive swarms of them wherever they wanted by remote control.

Now, rather than downvote me, tell me what security protocols will be in place to prevent the scenarios I've outlined.


> But blowing up a diesel truck is hard to do remotely

It really isn't. Rebels/terrorists/freedom fighters† the world over could tell you how to detonate an explosive using an off-the-shelf prepaid cell phone.

† pick your preference


No network access. The computer that controls the car doesn't need to talk to anything other than the car. That eliminates a slew of attack vectors right there.


Is "no network access" going to be part of the legal framework under which the vehicles operate? Presumably they need to download maps from somewhere, along with traffic updates, road hazards, etc. Most new cars have network capabilities as it now stands. So it's unreasonable to assume that they won't have any network access. And as we know, anything with network access can eventually be rooted.


My guess is that the basic systems (keeping the car from going off-road or colliding, letting people take over, etc.) will run on a barebones real-time OS on an embedded system, with a very simple and well-defined interface to a full machine that runs all the crap like the UI, navigation system, and such.

I wouldn't discount having your car stolen remotely, but hijacking it with humans inside is unlikely to work, and so is crashing it into things.
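
To make the idea concrete, here's a toy sketch of that split (the names and limits are hypothetical): the navigation box gets a tiny command vocabulary, and the real-time side clamps and vetoes everything against its own sensors and hard limits.

    MAX_SPEED_MPS = 35.0   # assumed hard limit compiled into the safety side
    MAX_STEER_RAD = 0.6

    def braking_distance_m(speed_mps, decel=6.0):
        # s = v^2 / (2a): rough flat-road stopping distance
        return speed_mps ** 2 / (2.0 * decel)

    def safety_controller(command, obstacle_distance_m):
        """Runs on the embedded real-time OS; treats the nav box as untrusted."""
        if command.get("kind") == "set_speed":
            speed = max(0.0, min(float(command["value"]), MAX_SPEED_MPS))
            if obstacle_distance_m <= braking_distance_m(speed):
                return ("brake", 0.0)   # local sensor data vetoes the request
            return ("drive", speed)
        if command.get("kind") == "set_steering":
            angle = max(-MAX_STEER_RAD, min(float(command["value"]), MAX_STEER_RAD))
            return ("steer", angle)
        return ("reject", None)         # anything else is simply dropped

    # The nav side can ask for 60 m/s; the safety side will never grant it.
    print(safety_controller({"kind": "set_speed", "value": 60.0}, 500.0))

Even if the UI/navigation machine is fully compromised, the worst it can do through an interface like this is request legal maneuvers.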


If it's navigating according to a map that's downloaded from a network, upload a faulty map to it. Then it will go wherever you want it to.


These aren't "dumb" cars. For example, Google's have a high-fidelity laser rangefinder that builds dense 360-degree point clouds 15 times a second. If the car is given a bad map that sends it into a building or other cars, the collision avoidance system will recognize that fact before an accident occurs and stop the vehicle.


It's a lot more complicated than that. The car can't always stop when the map disagrees with the sensor. There are any number of situations where that's a bad idea.


If a car's sensor says there is a wall in front of the car and it goes with the map, then some coder somewhere made a mistake. If the car's sensor says there is a cliff in front of the car and it goes with the map, some coder somewhere made a mistake.

Replace wall and cliff with the obstacle or dangerous environment of your choice, and the sentence will always end with "some coder somewhere made a mistake." It really is as simple as that: the sensor wins over the map when it comes to avoiding a crash. What possible condition can you come up with that would make it desirable for a car to ignore its sensors and go with what a map says is supposed to be in front of it?
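
That priority is easy to express. A minimal sketch (the inputs are hypothetical): the map proposes, the perception system disposes, and the sensed free distance always bounds what the planner may do.

    def plan_speed(map_clear_m, sensed_free_m, cruise_mps=15.0):
        """Never let map optimism override measured obstacle distance."""
        usable_m = min(map_clear_m, sensed_free_m)   # sensor caps the map
        if usable_m < 5.0:
            return 0.0                 # wall, cliff, pedestrian: full stop
        # Otherwise slow down proportionally as free space shrinks.
        return min(cruise_mps, usable_m / 4.0)

    # Map claims 200 m of clear road but the lidar sees a wall 3 m ahead:
    print(plan_speed(map_clear_m=200.0, sensed_free_m=3.0))    # -> 0.0
    print(plan_speed(map_clear_m=200.0, sensed_free_m=120.0))  # -> 15.0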


If the car stops every time the map differs from the sensors, then I just give you a nonsensical map and you go nowhere. Or if the map is outdated, which will of course happen.

If sensors say the road turns and you go with the sensors, what happens to the navigation? Eventually they become irreconcilable. The car will completely lose track of where it actually is, having only local (and perhaps some limited amount of historical) sensor data.

The scenarios aren't just limited to "STOP or CRASH", there's a lot of subtle ways things can go wrong.


>If the car stops every time the map differs from the sensors, then I just give you a nonsensical map and you go nowhere.

...Yeah, as opposed to driving nonsensically? I think I'll take the car that defaults to whatever won't kill everyone around me.

>Or if the map is outdated, which will of course happen.

Assuming the cars download new maps on a regular basis, I'd consider this situation pretty unlikely on any official road or highway (unless we are to accept that in certain locations every car will consistently stop driving).

However, if this situation were to occur: option one, sync with the latest map data; failing that (network issues, etc.), option two, pull to the side of the road, stop, and enter manual mode. (A rough sketch of this fallback is at the end of this comment.)

>If sensors say the road turns and you go with the sensors, what happens to the navigation? Eventually they become irreconcilable. The car will completely lose track of where it actually is, having only local (and perhaps some limited amount of historical) sensor data.

What do you mean by this? Google Maps and most GPS navigation systems recalculate routes perfectly fine.
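
For what it's worth, the fallback mentioned above is simple enough to sketch (the stub names are hypothetical; a real car would expose equivalents):

    class DemoCar:
        """Stub stand-ins so the sketch runs."""
        class maps:
            @staticmethod
            def sync_latest():
                raise ConnectionError("no signal")   # simulate a dead zone

        def pull_to_shoulder(self):
            print("pulling over")

        def enter_manual_mode(self):
            print("entering manual mode")

    def resolve_map_conflict(car):
        """Map and sensors disagree: try to re-sync, else hand back control."""
        try:
            car.maps.sync_latest()       # option one: fetch fresh map data
            return "resynced"
        except ConnectionError:          # network issues, etc.
            car.pull_to_shoulder()       # option two: stop safely...
            car.enter_manual_mode()      # ...and give the human the wheel
            return "manual"

    print(resolve_map_conflict(DemoCar()))   # -> manual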


What if someone games the sensor? There are some pretty heinous examples of road rage out there, everything from attacking people's cars with a nine-iron to throwing their poodles into traffic at a stoplight. What happens to that point cloud if someone throws a bunch of silver ball bearings out their window in front of you? What if they aim a laser pointer at the receiver? What's to stop someone from developing a universal remote that you can point at any car to make it think there's a wall 3 feet in front of it? And how do you design a countermeasure against that and still ensure the car does stop if there really is a wall?


Well, two things:

* Being possible is not the same as being easy or likely. For example, what if the system only accepts maps digitally signed by the company? You now have to either get the signing key from the company or break the cryptography. (A rough sketch of this follows below.)

* If there's a person inside, they can still take over and drive themselves, or tell the car to park and call for technical support.
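
A sketch of the signed-map idea, using Ed25519 via the third-party "cryptography" package (the package choice is an assumption; any asymmetric scheme works the same way). The car ships with only the public key, so even tearing one car apart reveals nothing that lets you forge maps for the fleet:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # At the company: sign the serialized map with the private key.
    private_key = Ed25519PrivateKey.generate()
    map_bytes = b"...serialized map tiles..."
    signature = private_key.sign(map_bytes)

    # In the car, which holds only the public key:
    public_key = private_key.public_key()
    public_key.verify(signature, map_bytes)      # passes silently
    print("map accepted")

    # A tampered map fails verification and is discarded.
    try:
        public_key.verify(signature, map_bytes + b"evil detour")
    except InvalidSignature:
        print("tampered map rejected; keep the old one")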


Actually I think that the advantages of networked communicating vehicles will outweigh the possible disadvantages related to security. Vehicles that communicate are much safer and more effective.

Not networking vehicles because they might be hacked into a botnet is a little bit like not networking personal computers for the same reason. We could have just decided to not have an internet. Or we could have decided that we wouldn't allow people to print flyers because they might organize a revolution.

We will definitely want communications security though.


Every advance comes with the ability to cause problems (for the record, I didn't downvote). The question is: are we better off without the advance? Personally, I think the answer is rarely (if ever) "yes". Quality of life has gone up over time, and I don't expect this will change that.

Of course, there is the possibility of things going wrong, but, on average, things generally get better, faster, than the alternative. (At least, I hope so ;).


Has anyone written or thought deeply about all the ways that self-driving cars could be tricked or hacked into causing accidents, kidnapping passengers, driving off cliffs, running people over, or otherwise creating havoc?

Please see Halting State by Charles Stross, aka cstross on this forum.



