In the same way that testing has already begun in the rest of the world. The same legal/insurance issues stand (as elsewhere) and are likely the bigger barrier to adoption.
Nowhere in the world has permitted non-test driverless cars because non-test driverless cars aren't a thing yet.
The whole "rest of the world" remark seems like smug condescension, which is a little odd, as only a small handful of places even allow test cars to be run on public roads. Plus, isn't this a welcome addition? The UK's roads are quite a different test bed from many West Coast US cities: there is no grid pattern, roads are smaller, and they're curvier.
Apologies if there was any condescension; I was only trying to convey that it's not that different from what's been going on elsewhere. I'm from the UK myself.
Here is a Google autonomous vehicle navigating Lombard Street in San Francisco: http://youtu.be/eXeUu_Y6WOw?t=1m34s
The UK certainly has plenty of surprising road layouts, often involving roundabouts, one-way systems, medieval street plans, and roadworks.
The places I know of - Lincolnshire and East Lothian - where roads follow field boundaries have a lot of straight roads with 90-degree bends. Interesting that you suggest field-boundary-following would make roads more "bendy" (suggesting non-straight edges and non-right angles).
Perhaps the fields of Salisbury weren't subdivided for inheritance purposes, or are older and follow more natural lines?
Two wide cars can't pass there, one of you has to go back to a place without the hedge. I'd like to see how that kind of negotiation is handled by self-driving cars.
(I wonder if you can tell any of the digits of someone's ATM card by their hand positioning?)
I love the idea of self driving cars and they should make roads like that a bit safer.
This obviously presents challenges such as driver etiquette (the car needs to play nicely with others but still get you to your destination) and safety (children can lurk unseen between the cars and leap out).
Perhaps there are parts of America that are like this, but Hollywood hasn't shown many of them to me.
Have such tests already begun in "the rest of the world"?
In addition, I suspect that the cars will be 'self-driving' cars rather than 'driverless' (the distinction being that the former has a human behind the wheel, who can take over at a moment's notice).
As for the rest of the world, from the article:
The US States of California, Nevada and Florida have all approved tests of the vehicles. In California alone, Google's driverless car has done more than 300,000 miles on the open road.
In 2013, Nissan carried out Japan's first public road test of an autonomous vehicle on a highway.
And in Europe, the Swedish city of Gothenburg has given Volvo permission to test 100 driverless cars - although that trial is not scheduled to occur until 2017.
Last time I went to California I saw one up close, that was pretty neat (though I don't know if it was driving itself at the time)!
And I know they have been testing driverless farm machinery near where I live since the late '70s.
It'd be like taking "FDA approves new drug for human testing" and making the headline "FDA to allow human use of new untested drug."
Both are technically true, but the latter makes it sound like a widespread thing being done regardless of poorly understood risks. And that's misleading, since it's being done in very limited scope to help us understand and reduce those risks.
I think the two main questions will be liability & drivability -
1) When (not if) these cars get into a serious crash, who is to assume liability? Is it Google, who created the algorithm? Is it the Audi integrator who fused Google technology into the Audi? Is it the fault of the mapping software that was not updated to reflect that the signals had been moved to a different position on that street?
2) More mundane: will a driverless car be able to drive every single place that a human-driven car would? When a flash flood closes down the freeway, will this autonomous beast be able to drive on the back road that is normally closed to traffic?
Having said that I cannot wait for Cars-As-A-Service where the Cars park themselves & disappear when I don't need them and magically reappear when I do (without humans - Lyft, Uber et al. need not apply).
1) Liability is assessed exactly as it is now. We use the same system to determine which car was at fault. If it's your car, you are at fault. Someone will insure a self-driving car, especially once the safety record is established. Your current provider might not, but someone will. If you were using the self-driving system improperly (e.g. in weather it can't yet handle well), you may also be subject to different rules or even criminal prosecution.
Vendors may also be willing to pay for that insurance as part of a monthly fee just to shut people up over this very easy-to-solve "concern."
If no one is willing to insure your self-driving car, I'll start that business. I could charge a premium for a service that costs me less to provide!
2) People seem to think self-driving cars should be perfect before they're introduced. They won't be. Neither are we. It just has to be better (under given weather/traffic conditions) than we are to save lives. We'll have steering wheels in the cars for a long time which you'll have to use when the car can't safely drive itself.
The Wired article is just clickbait using the fear angle. We face the same decisions every day, and "unavoidable" accidents will become even rarer than they are now as this technology evolves.
> If it's your car, you are at fault.
The control system itself is then a part of the car, which will dramatically increase the likelihood that it is at fault in the case of an accident. It presumably would have to be auditable after an accident, to identify whether it made the 'correct' decisions or not, adding additional legal and technical complexity.
If privately owned, would there be regular sensor cleaning/calibration tasks that need to be met before the manufacturer is deemed liable? What about tire pressure?
There will be a lot of factors that could go into deciding whether the user or manufacturer was at fault in the case of an accident that simply wouldn't be a problem now, because the user of the vehicle is also the one responsible for maintenance.
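The auditability point above can be made concrete: one cheap way to make a decision log tamper-evident is to hash-chain the records, so any after-the-fact edit to an entry is detectable. A minimal sketch in Python (the record fields and format here are invented for illustration, not any real event-data-recorder standard):

```python
import hashlib
import json

def append_record(log, record):
    """Append a decision record to a hash-chained, tamper-evident log.

    Each entry's hash covers the previous entry's hash plus its own
    body, so editing any earlier record breaks the chain. The record
    fields below are purely illustrative."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": digest})
    return log

log = []
append_record(log, {"t": 0.0, "event": "pedestrian_detected", "range_m": 12.4})
append_record(log, {"t": 0.1, "event": "brake_command", "decel_mps2": 4.0})
# Any later edit to the first record changes its hash, so the second
# entry's "prev" field no longer matches and the tampering is evident.
```

A court (or insurer) auditing a crash could then at least trust that the log it sees is the log the car wrote, separate from the harder question of whether the decisions in it were 'correct'.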
It will be determined via litigation, as it is now. Technological complexity of subject matter has yet to present any serious roadblock, or cause any significant change, in the prosecution of the law.
Similarly, whether neglected maintenance (or third-party modifications/parts) contributed to a collision will be determined in court, just as it is now.
As a side note: given the service opportunities afforded by self-driving vehicles, I would be surprised if operators and insurers didn't subsidize or operate "while you sleep" maintenance service plans. e.g. Once a month or so, while you sleep, your car will drive itself to the shop to get checked out, ensure updates are received, recall services performed, etc.
Manufacturers would add a revenue stream and lower their legal costs/exposure and consumers would have yet-another-hassle of car ownership removed.
Ridiculous. To cite only one counterexample, self-driving cars will make the controversy over the EU's "right to be forgotten" law look like a polite discussion over a beer.
That said, the problems are solvable and we'll all ultimately be better off for facing them.
You misunderstand me. Of course self-driving cars will generate many and expensive legal fights. But those lawsuits will look much like any of our current lawsuits.
That the courts don't understand the technology will not stop jury trials from deciding liability, just as courts not understanding genetic evidence doesn't stop them from throwing people in jail for life based on misunderstandings.
That legislators don't understand the technology will not stop laws from being written any more than their not understanding criminal justice stops them from writing self-defeating "tough on crime" laws and "prison as punishment" regulations that only increase recidivism and multiply the social cost of crimes.
My point is that courts and legislators not having an understanding, let alone answers, is not a stumbling block to self-driving cars. It won't prevent self-driving cars from moving forward in the meantime.
They'll just blunder through it, making a mess, making mistakes, as they've done with everything else.
> To cite only one counterexample, self-driving cars will make...
Hey, something's not right... ;).
Please clarify your point with a non-circular example.
> ... it has yet to happen ...
Ridiculous. To cite only one counterexample, it will happen.
As someone from the UK, I'm amazed that isn't the case. See http://en.wikipedia.org/wiki/MOT_test#Overview_of_the_test
In order to put an autonomous vehicle on the road, it's going to have to be insured. Insurance companies will have to vet any and all autonomous vehicles anyway, assume direct financial liability for all harms resulting from anything that could go wrong with such a vehicle, and charge whatever premiums they think are appropriate to cover those liabilities to the owner of each vehicle.
How they vet autonomous vehicles is up to them, and it's probably going to be tricky at first - maybe on a comparable order of trickiness to actually making an autonomous vehicle. But they're going to have to do it, if only to get their premiums right. Maybe they'll require certain development practices: using specific programming languages or programming conventions within a language, third-party static analysis of all the code (e.g. Coverity), mandatory review of every check-in by at least one developer other than the author, a certain level of automated test coverage, or "passing" a certain percentage of simulated situations.
Whatever the insurance companies do, they're going to have to get some measure of how dangerous these vehicles are.
The better a manufacturer does on the vetting process, the more open they are, the lower the premiums will be to any owners of their cars.
At that point, the insurance companies should be in a position to assume all direct financial liability resulting from an accident - as they are now (discounting your excess) - including negligence, as that's the only incentive we can put on them to ensure they've done their part right. The lawyers will of course ensure that if a vehicle as supplied is not up to the standard that the insurance company confirmed, either in terms of hardware or software, then the insurance company will have a way to take that up with the manufacturer.
Naturally, if you mod your autonomous vehicle in any ways not permitted by your cover, you may void your insurance. At that point the liability falls entirely on you, the owner. But again, that's just as things are now.
There will of course be actuarial exercises involved, but rather than trying to predict everything in advance and imposing unnecessarily limiting restrictions on the development process (which is not to say there shouldn't be any), they'll accept the increased risk in order to bootstrap feedback from actual daily use.
Once there's a better understanding of the issues that will occur in the real world, traditional insurers will join the fray.
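The "get some measure of how dangerous these vehicles are" step is, at bottom, turning an observed incident rate into a price. A toy sketch of that arithmetic (the function name, figures, and margin factor are all made-up assumptions, not actuarial practice):

```python
def estimate_premium_per_mile(simulated_miles, incidents, avg_claim_cost,
                              margin=1.5):
    """Toy pricing: expected claim cost per mile times a safety margin.

    All names and numbers are illustrative assumptions, not real
    actuarial practice."""
    incident_rate = incidents / simulated_miles   # incidents per mile
    return incident_rate * avg_claim_cost * margin

# A fleet that logged 300,000 vetted test miles with 3 incidents,
# assuming an average claim of $20,000:
per_mile = estimate_premium_per_mile(300_000, 3, 20_000)
annual = per_mile * 12_000   # a car driven 12,000 miles/year
# roughly $0.30 per mile, so about $3,600 per year; a cleaner record
# or more vetted miles drives the premium straight down.
```

This also shows why openness lowers premiums: every extra verifiable test mile shrinks the uncertainty (and hence the margin) the insurer has to price in.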
This page is a hub for legal information regarding autonomous vehicles: http://cyberlaw.stanford.edu/wiki/index.php/Automated_Drivin...
The federal body (the National Highway Traffic Safety Administration) seems to be focused primarily on the development of technical standards that will allow for a uniform certification. But questions of driver liability will likely be left to the states, as most accidents resulting in lawsuits are tort actions alleging driver negligence, and different states have different rules on both driving and allocation of fault.
The aviation industry is a fantastic parallel that this should be modelled on.
Until then it has to be a case of "tough" if you want to sue. The Victorians had the right attitude to future technology and it's something we need to get back to.
I'd also expect any accident to end up as worldwide news, simply because it would be the first ever of its kind. The same thing happened with the first car accident in the world, in 1891 in Ohio.
I'd imagine that #2 will take longer to sort out properly, but with the traffic, accident, and road-closure monitoring that goes on now, I'm betting it'll at least be kind of decent. I'm actually more worried that it won't be able to follow detours properly and will cause gridlock around construction areas during the transition from human-driven cars to driverless ones.
2) I think there should be always an option to drive the car yourself.
In the train case you would do things like get movable property out of the way, for the cars you can think about things like a defensible yard barrier. Some people already do this for drunks, but putting a 12 - 14" 'step up' along the edge of the property that borders the street will stop most out of control passenger vehicles. Laws will get tested and litigated, new ways to analyze risk will be developed, planners will want to think about how they design roads/signage/maintenance around them.
I suspect this is a 'moonshot' technology, which is one where you can demonstrate it in 1969 but can't actually repeat it commercially until 50 years later in 2019.
This strikes me as something which is particularly difficult for an algorithm to process effectively (without generating lots of false positives, which also fails that segment of the test) especially based on the fairly low resolution video human users are presented with in normal test conditions.
Hope they're not going to waive that for the bots, even if they do have 360 degree vision and superior concentration and reaction times.
Surely the point is to not kill people?
So which is more important: recognising potential hazards early, or unwavering attention and superior concentration? I'm pretty sure it's the humans who're ahead at the moment, but I'm not sure it'll always stay that way. Bots may never match humans at a hazard perception test, but if bot reaction, vision, and AI get good enough, they may not need to.
Don't bother relying on body language or eye contact; just automatically sense the person-shaped object, and if it is close to the roadway, slow to a speed where you can avoid easily if they step out. Assume that they will. Heck, add in a buffer so you don't bother the passengers of the car by having to slam on the brakes.
A person driving should do the same.
I don't see the discrepancy.
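The "slow to a speed where you can avoid easily" rule is just stopping-distance physics: pick the highest speed at which the car can still stop within the gap to the person-shaped object. A minimal sketch (the braking deceleration and latency figures are illustrative assumptions, not measured values):

```python
import math

def max_safe_speed(gap_m, decel=6.0, latency=0.2):
    """Highest speed (m/s) at which the car can still stop within gap_m.

    Solves gap = v*latency + v^2 / (2*decel) for v, where decel is the
    braking deceleration (m/s^2) and latency is the sensing/actuation
    delay (s). Both default figures are illustrative assumptions."""
    at = decel * latency
    return -at + math.sqrt(at * at + 2.0 * decel * gap_m)

# A person-shaped object 10 m ahead of the car's path:
v = max_safe_speed(10.0)   # about 9.8 m/s, roughly 22 mph
```

Which speaks to the range question below: the formula makes the trade-off explicit, since assuming everyone nearby might step out only forces a crawl when the gap itself is small.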
> A person driving should do the same.
At what range? If you assume someone could always jump out in front of you, and that running them down will always be unacceptable, you'll be doing a handful of miles an hour whenever there are humans around. In practice I doubt too many people are in favour of taking safety to that extreme.
It seems clear to me that driverless cars are the future, which is going to become real very soon. Any progressive government would (and should) allow such testing, even given the inflexibility of the state's bureaucratic machine. Allowing is the easy part. The hard part is actually building those cars, and to my knowledge no UK firm does this at the moment.
The fact that no major consumer automotive marques are British doesn't mean there isn't a very strong auto industry in the country.
After the taxis come the trucks. I am quite certain shipping companies will be quick to ditch humans. After all, this means no more travel expenses and salaries, no more breaks, and probably automatic unloading too! Goods will get cheaper, although hundreds of thousands of people will be out of work.
I am scared.
Also, I think the shipping cos will do away with their drivers first. Highway traffic is easier to navigate with an algorithm - nearly trivial if you could compel a retrofit of all vehicles with a low-power radio broadcasting position, so that any autonomous vehicle around can perceive it via its own sensors, any external GPS or radar, and the other cars' pings. Even without that, it is easier to juggle the variables of a highway (multiple lanes, moving in one direction, merging traffic, maintaining safe stopping distances, etc.) than to navigate intersections and pedestrian traffic.

Also, 18-wheelers don't need human interfacing, whereas self-driving taxis will require that licensed non-owners sit in the driver's seat, since for a while nobody will let self-driving cars pilot themselves without someone ready to take manual control. Admittedly, that remains a problem for the 18-wheeler too, but I see pilotless self-driving tractor-trailers coming much sooner than self-driving taxis without a passenger in the driver's seat.
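The low-power beacon idea can be sketched as a tiny broadcast message plus a following-distance check; the two-second rule stands in for whatever policy a real system would use (all field names and thresholds here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    """Hypothetical low-power position broadcast from a nearby vehicle."""
    vehicle_id: str
    lane: int
    position_m: float   # distance along the highway
    speed_mps: float

def safe_to_follow(own_position_m, own_speed_mps, ahead):
    """Apply the two-second rule against the beacon of the vehicle ahead."""
    gap_m = ahead.position_m - own_position_m
    return gap_m >= own_speed_mps * 2.0

truck_ahead = Beacon("truck-42", lane=1, position_m=1500.0, speed_mps=25.0)
ok = safe_to_follow(own_position_m=1440.0, own_speed_mps=25.0,
                    ahead=truck_ahead)
# 60 m gap at 25 m/s is a 2.4 s gap, above the two-second threshold
```

The appeal for highways is exactly that the whole decision reduces to simple one-dimensional checks like this, with no pedestrians or cross-traffic in the picture.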
Consider, what if several shipping companies delegated fuel administration to a company that had manned staff 24/7 at places like Flying J diesel and gas fuel locations?
If you have autonomous trucks coming in, I don't see tons of places upgrading their facilities to support robotic administration. (At least for a few years)
There's also maintenance when there are problems.
I expect that trucks driven by a human-computer team will be safer than trucks driven by either one alone.
Only because trucks have to stop overnight. If you remove the driver, they will be able to work 24/7 and only refuel in automated, safe locations.
They would need to set up a way for the trucks to refuel - just an attendant at the truck stop who would be paid a small amount for refueling them, or maybe a new kind of pump that would automatically fuel the trucks when they pulled up.