If anyone is looking to get a little deeper, check out some of his literature during his time at Stanford:
Also, if anyone is interested, my thesis:
In real life, there is obviously a huge engineering component, but you can definitely see how the ideas could come together to build something like these automated cars.
Then the only miles I put on my car are the ones I drive for my own pleasure.
When you can just ask for a car to pick you up and one will arrive, corporate robo-taxi fleets will have economies of scale over family cars sent off to earn their keep. With ordinary people not needing to buy a car, facilities for maintaining and supplying them will dry up too, as the manufacturing effort shifts to taxi-ready cars for fleets. And the price will shift up out of your range to reflect the earning potential.
The robo-car will end the private car as anything but a hobby comparable to the classic car.
It's also worth noting that this type of service is only effective in dense urban centers. Plenty of people live in rural and suburban areas where owning a car is a necessity.
Americans spend a ridiculous amount of money on cars as status symbols. It'll be a long, long time before that ceases to be the case.
Maybe you meant the robo-car will end the private car in cities where its citizens focus on different status symbols? I can see that being the case in SF and maybe NY in the near future, but not anywhere else in the US.
Robo-taxis have an economic attractor at the "runs forever, gets good mileage" end of the design scale, because energy and maintenance are the biggest ongoing costs. There would be incentives to make them small and efficient, maybe sized for one occupant. And they would not sit around idle, so they would need a long MTBF, which rules out designing for speed or beefiness and suggests electric (it's got fewer moving parts).
All of which means fewer, smaller, cleaner, slower, simpler vehicles. Aesthetes will pine for the days of muscle cars. In terms of green-ness, they may come to eclipse buses etc, while retaining the advantages of cars (they go where you want, when you want, and are not full of strangers).
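As a back-of-the-envelope sketch of the economics above, here is a toy per-mile cost model. Every number in it is an invented assumption for illustration, not real fleet data:

```python
# Toy per-mile cost model for a robo-taxi fleet. All figures below are
# made-up assumptions chosen only to illustrate the shape of the argument.

def cost_per_mile(purchase_price, lifetime_miles, energy_per_mile, maintenance_per_mile):
    """Amortized cost of one vehicle-mile: depreciation + energy + upkeep."""
    depreciation = purchase_price / lifetime_miles
    return depreciation + energy_per_mile + maintenance_per_mile

# Small, simple electric taxi: cheap energy, few moving parts, long service life.
small_ev = cost_per_mile(25_000, 500_000, energy_per_mile=0.04, maintenance_per_mile=0.03)

# Large, powerful gas car: pricier fuel and upkeep, shorter service life.
big_gas = cost_per_mile(35_000, 250_000, energy_per_mile=0.15, maintenance_per_mile=0.08)

print(f"small EV: ${small_ev:.2f}/mile")  # 0.05 + 0.04 + 0.03 = $0.12/mile
print(f"big gas:  ${big_gas:.2f}/mile")   # 0.14 + 0.15 + 0.08 = $0.37/mile
```

Since a fleet operator pays these costs across millions of miles, even a small per-mile gap becomes a strong pull toward the small, efficient, electric end of the design space.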
This is sort of the whole point: everyone thinks they can eat their cake and have it too.
DUI? I'm not the driver, the car is! Hic!
Run over a kindergartener? I'm not the driver, blame Google!
Auto insurance rates go through the ceiling because of untested technology? Wait, I'm the same driver I ever was, this is unfair!
Can't get insurance at all? Help, the Man is a luddite! I'm being repressed!
So, "The Man" in this case is a luddite, and making laws that require humans to be in control of the vehicle actually decreases public safety.
I don't mind public transit but I'd much rather be driven around by a cab.
Likewise, it may not be possible in the short term to convince everyone to stop driving cars. Self-driving cars won't fix the long-term problems of transportation efficiency, urban sprawl, or emissions, but they can reduce the harm by efficiently providing car-like service to the public.
The great thing about an automated fleet is that you could actually switch cars during the journey. Take a small car to leave the busy urban areas and then switch to a larger car when you get to the motorway.
Hey, there's a start-up idea: Buy a fleet of these and deploy them around the Bay as the Robot Cab Company. Have them queue at designated points or hailed via smartphone. I expect to see this at Y-combinator in 3-5 years.
Now, if we could only wedge a social networking component into it...
“I drove to West Virginia in this car,” he recalls. “It’s about a nine-hour drive from Detroit. Out of that entire nine hours, I drove for 45 minutes. And the only reason I drove that much was because there was this nice mountain pass, with sharp S-turns.”
If we are going to undertake such a major shift in how we all get from point A to point B, shouldn't that be a significant point of consideration? It is rarely even mentioned in the context of this discussion, which I find to be disappointing.
However, that being said, here are some of the points I've seen:
Car sharing is easier: since you can pull out a smartphone and "order" a car to come pick you up at any point, there is less of a need to actually own a car. This should lessen the total number of vehicles manufactured/maintained/etc. The huge amounts of energy needed to make steel come to mind here. Also, if you are using CaaS (Car as a Service), many would probably opt for a cheaper option where the car stops and picks up other passengers on the way, offsetting the cost and therefore reducing the price. A person doesn't have to arrange a carpool this way. If you make it easier and cheaper to carpool than not to (currently it is cheaper, but a hassle), then people will do it more.
Reduced traffic congestion: This sets the stage for the other benefits. Cars that stop and accelerate less use less fuel. And many more autonomous vehicles than human-driven ones can share a roadway without reducing speed.
More suburbanization/low population density: This is a big possibility. Living in a distant exurb would mean you probably have to own an autonomous vehicle, or schedule one to pick you up well in advance, so car sharing suddenly works less well... but a 90-minute commute today would be much faster on autonomous-vehicle roads, and people can read/chill/watch news/surf the net etc while cruising to work. Commuting is made more relaxing, so more people do it...
It's a very complex issue to try to understand the implications, but I really think it would be a net environmental gain: if everyone is using Car as a Service, then the CaaS providers will have cost minimization as a priority and will work to constantly increase efficiency. People only care about fast acceleration and performance when they are driving themselves. Computers don't have small penises to compensate for.
I also don't see why this tech couldn't be used in public transit/shipping, thereby reducing one of the largest cost factors (salaried drivers, poor lifestyle) and actually increasing the amount of available public transit.
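The congestion point above can be made concrete with a toy car-following model: lane capacity is limited by how much road each vehicle "occupies", which is its own length plus the gap it keeps (speed times headway time). All numbers here are illustrative assumptions:

```python
# Toy model of how following headway limits lane capacity.
# Each vehicle occupies its own length plus speed * headway_time of gap.
# All numbers are illustrative assumptions, not traffic-engineering data.

def lane_capacity(speed_kmh, headway_s, car_length_m=4.5):
    """Vehicles per hour one lane can carry at a given speed and headway."""
    speed_ms = speed_kmh / 3.6
    spacing_m = car_length_m + speed_ms * headway_s  # meters of road per vehicle
    return 3600 * speed_ms / spacing_m

human = lane_capacity(100, headway_s=1.5)   # typical human following time
robot = lane_capacity(100, headway_s=0.3)   # hypothetical tight automated headway

print(f"human-driven: {human:.0f} vehicles/hour/lane")
print(f"automated:    {robot:.0f} vehicles/hour/lane")
```

Under these assumed headways, cutting the following time from 1.5 s to 0.3 s more than triples what the same lane can carry at the same speed, which is the sense in which "many more autonomous vehicles can share a roadway".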
So long as it's ready by the time I'm too old to safely drive, I'll be happy.
There is also a lot of legal work that will begin as these roll out, especially around liability, and those things take time that a plan to sell millions of these cars in 3-4 years simply does not allow for.
Well think of that, but with cars, with people in them...
Edit: I think it's being misconstrued that I am somehow against Google Cars, when in fact I'm very much for them. Regardless of the likelihood of the aforementioned scenario, I agree that it's still far better than the wildly unpredictable human factor. And ultimately I think that self-driven cars will be a boon for road safety as well as fuel economy and overall emissions. (Not to mention traffic; I can't wait for a world where traffic is basically non-existent.)
One thing I realized after making this comment too, is that road situations are far easier to predict than the randomness of the market, and the consequences are much higher than 0's in a bank account, so I'm sure there will be fail-safes.
When it comes to making accurate split-second decisions, I'll take an algorithm over a person any day of the week.
Also: garbage in, garbage out, etc.
You're assuming too much. Just because it's an algorithm doesn't mean it'll know what to do in every situation. It'll take a lot of work to develop a set of algorithms that can handle all the events that confront a driver on a regular basis.
Don't get me wrong, I would be comfortable being driven by a computer but not just yet. Before I say "I'll take an algorithm over a person any day of the week." I want to make sure that algorithm is tested and works well.
That event which was exciting if you were an HFT, but which you can't actually find in a daily stock chart?
I'm hoping that self-driving cars will be as robust as our electronic trading systems.
edit: Also, to add: the reason flash crashes happen is that the algorithms are unaware of what the others are doing (although they can try to guess). Also, self-driving cars are designed to follow laws, not beat the competition.
I can't even imagine them all networked; it may actually be better, but it seems like even more danger.
Hah, yeah it kind of ended up being that way didn't it? :) Even though I don't feel I was arguing against them in the first place.
Far more likely is a human driver screwing things up.
However, I suppose a safe distance for an autocar should be considerably smaller than for a human-driven car, since the start of braking would be essentially instantaneous.
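That intuition can be sketched with the standard stopping-distance formula: total stopping distance is reaction distance (speed times reaction time) plus braking distance (v² / 2a). The reaction times and deceleration below are illustrative assumptions:

```python
# Why an automated car can safely follow closer: most of the difference in
# stopping distance comes from reaction time, not brakes.
# Reaction times and deceleration are illustrative assumptions.

def stopping_distance(speed_kmh, reaction_s, decel_ms2=7.0):
    """Total distance to stop: reaction distance + braking distance."""
    v = speed_kmh / 3.6                    # convert to m/s
    reaction_dist = v * reaction_s         # distance covered before braking starts
    braking_dist = v**2 / (2 * decel_ms2)  # distance covered while braking
    return reaction_dist + braking_dist

human = stopping_distance(100, reaction_s=1.5)   # typical human reaction time
robot = stopping_distance(100, reaction_s=0.1)   # near-instant sensor-to-brake

print(f"human:     {human:.1f} m to stop from 100 km/h")
print(f"automated: {robot:.1f} m to stop from 100 km/h")
```

At 100 km/h the braking distance itself is identical; the automated car saves roughly 40 m purely because braking starts almost instantaneously, which is why its safe following distance can be considerably smaller.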
The same isn't true for cars, and that'll likely affect the code testing process.
Plus, if you're going to reference the flash crash, I'll point out that people, not computers, caused the Great Depression.
Not to mention the flash crash...
Both the market and the road have a number of autonomous agents interacting with each other under some rule set. The nature of the players has an enormous impact on how the game is played. A set of humans is physically incapable of "flash crashing" a stock market because they are literally physically incapable of trading fast enough for that to happen. The introduction of other effectively-autonomous agents into the market changes the nature of what the collection considered as a whole can and does do.
It is true that introducing computer-controlled cars onto the road in quantity will almost certainly qualitatively change the nature of driving on the road, and it is valid to be curious or even concerned about what this effect may be. It would be particularly bad to imagine that all the cars are running the exact same code; it is a completely valid concern that one particular bug could be triggered and cause mass failure of some type, including true automotive crashes, but possibly also just software crashes. If you dig into real automotive code, you'll find that similar things have already happened.
Long term, I think it is likely to be a net positive effect. You'll have a lot more drivers on the road taking what will probably be a very conservative approach to driving, with much more careful management and maintenance of margin for error, including in situations where humans tend to play fast and loose without even realizing it. It seems likely to me that computer cars will eventually refuse to drive in certain bad conditions, like icy roads, and that over time we will consider that to be an acceptable reason to not drive. But I do think we also want to be careful to ensure that there are many implementations of self-driving cars; monoculture has too significant a chance of a Black Swan event. But it isn't impossible we'll pass through a period where the net result is a bit more dubious.
There are a lot of corner cases to work out, and that includes things that today we wouldn't even consider. Suppose 99% of the cars on the road are computer controlled. What do they do when a teenager hops on an overpass and starts throwing paint balloons? What happens when teenagers start jumping into highways on a dare? As the system becomes a computer system rather than a human system, we must also consider how it will be attacked, not only by the real world, but by humans as well. Google's really being too optimistic here. They've made enormous strides, truly enormous strides, and now we're seriously talking about them as a thing that may happen for real, rather than the ever-nebulous "someday", and that's big. But they've got a long way to go before we can truly put them in the hands of the public.
False. They did it in 1962.
So, it's okay for tens of thousands of human drivers to be killed while driving, but inconceivable for tens of people to die at the hands of a computer?
I think you summed it up nicely. That's the way the general public will react.
Note that we're living in a world where hardly a day passes by without some major security exploit being discovered, and hardly a day passes by without hearing about some foreign government taking control of domestic computers. What if Iran manages to gain control of all the autonomous vehicles? Isn't that worth a thousand atomic bombs?
And, yes, there's a psychological problem when there's no human to blame, just a computer or software (and, actually, ultimately the entire chain, up to the politicians who allowed such cars on the road and the programmers who wrote the software). It's alienating to society as a whole and it's a real issue. People are trying to find solutions to deal with this problem.
I mean: it's amazing to see all these people saying, on one side, "It's normal that there are hundreds of millions of zombies part of gigantic botnets, we'll never have 100% secure software" and, on the other side, these very same people saying: "Can't wait to have an autonomous car in 3 years / 15 years".
So what is it? Insecure software or safe cars?
So are there going to be theorem provers used to prove that the OSes in these cars are bulletproof? And the compilers, of course, shall have been proven secure too?
If not, it means there are software security issues, right? Someone commanding one million zombie PCs is not cool. Someone commanding one million zombie autonomous cars is downright scary.
How many of these are part of botnets? A whole lot.
Software ain't secure yet. I'm not saying it can't be. But we're simply not there yet.
So what if there are actually zero deaths? The cars so good, with a feedback loop so fast, that you simply can't have accidents anymore...
Will the speed limits then be raised to 100mph on streets and 200mph or more on highways?
... wat. Google is at most a kilo-corp.
As of now, the problem is mitigated by having someone behind the wheel. Even when the motor shuts down and you lose power assistance, you can still turn the steering wheel, brake, and/or use the handbrake. This has saved many lives.
But then, how often do OSes and software have bugs? Security issues? I can't imagine for a second that these cars are going to be fully autonomous and not connected to a network. How can we possibly make that network safe?
Lately on HN the consensus seemed to be: "Every software has bugs, every software can be exploited if the attacker spends enough time".
So is the software in these cars going to be safe? Safe from ill-intentioned exploits, safe from bugs potentially threatening lives?
Now if there are fewer deaths than what we typically have in a year, OK. You'll have to check the numbers.
But there are going to be deaths. That is just a fact.
And the public reaction when there's a death due to a "computer driving" is going to be terrible.
So what is it? Insecure software or safe cars?
According to the NTSB, unintended acceleration was not caused by electronic errors. It was caused by "pedal misapplication" and mechanical failures (gas gets stuck under the floor mat).
Your implication being that there is such a thing as a safe car. I think that people are excited for self-driving cars on the basis that they will kill fewer people on average than human drivers do.
It's a slightly depressing way to think about things, but it's realistic. However, the media hysteria that will surround the first self-driving car accident/killing will be critical.
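The "fewer people on average" claim is easy to sanity-check with rough circa-2012 US figures (approximately 33,000 road deaths per year over roughly 3 trillion vehicle-miles; the 10x safety factor below is a purely hypothetical assumption):

```python
# Break-even arithmetic for "safer on average". US figures are approximate
# (circa 2012); the 10x safety multiplier is a hypothetical assumption.

human_deaths_per_year = 33_000
vehicle_miles_per_year = 3.0e12

human_rate = human_deaths_per_year / vehicle_miles_per_year * 1e8  # per 100M miles
robot_rate = human_rate / 10  # hypothetical: self-driving is 10x safer per mile

print(f"human drivers: {human_rate:.2f} deaths per 100M miles")
print(f"automated:     {robot_rate:.2f} deaths per 100M miles")

# Even at 10x safer, a fully converted fleet would still see thousands
# of deaths per year, and each one is likely to make headlines.
print(f"expected deaths/year if all miles were automated: {human_deaths_per_year // 10}")
```

Which is exactly why the media reaction matters: "10x safer" still means thousands of computer-caused deaths per year, each far more newsworthy than a routine human-caused crash.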
I mean, realistically, that is the only barrier between a human being and driving. Jaywalkers get hit by humans. Cars running red lights get hit by humans. Sometimes they don't even brake, because they were looking in their mirrors or fiddling with the radio.
The risk of your car being hacked is the one area that I find really scary. It can already happen to an extent, but this could be a lot, lot worse.
You're talking, what, another 2-5 years of testing and adaptation from car manufacturers, if they choose to go with Google? Almost all of them have their own projects and, IMO, are unlikely to be tied to Google and their likely demands.