Very cool, but I sometimes worry that the illusion of safety is more dangerous than the original danger itself.
In Boulder, where I live, several crosswalks were identified as being particularly dangerous for pedestrians, and pedestrian crossing signs with big flashing lights were installed at these locations over the past 5 years. This spring, a report was released[1] that showed accidents at many of these crosswalks had actually increased since the lights were installed. "Taken together, the data suggests that approximately eight additional crossing accidents per year occur at these locations," says the report.
There are lots of theories about why, but I think it boils down to one thing: when pedestrians can hit a button and light up big signs that are supposed to make everyone stop, they feel much safer, to the point that many will hit the button and start walking almost immediately, without taking the time to make sure that all lanes of traffic have seen them and are stopping.
The prospect of automated cars scares me because, obviously, they cannot be perfect, and they will not be able to identify every dangerous driving scenario. Of course, there is a manual override, but I fear that the car being right 99% of the time will lead to such complacency in "drivers" that, in the 1% of cases where the car is wrong and about to hit something, we will not be able to stop in time. The more accurate the car is, the safer we feel, and the less likely we are to monitor it closely and notice when it is wrong.
Anyway, I rather hope I'm wrong. I'd really like to drink my coffee and read during my commute as my car drives me to work.
1) neither are people; far from it. (you already knew this though, so...)
2) I wholly agree... as long as other people are driving. Get rid of people's irrationality / drunken behavior on the road, and automated car driving becomes a much easier problem. A computer can take more into account than a human, and decide faster.
The transition is almost guaranteed to occur at some point, unless we reach mass transit for everyone prior to that (and who drives those?). It has to start somewhere, and that start will probably be harder than at any other point along the road.
> 2) I wholly agree... as long as other people are driving. Get rid of people's irrationality / drunken behavior on the road, and automated car driving becomes a much easier problem. A computer can take more into account than a human, and decide faster.
Even if you have near perfect data about where the other cars are there's always uncertainty about other objects on the road: tree branches, deer, a child running after her puppy. I'd speculate that it's better to rely on a vision system that can recognize those as well as cars well enough to be safe instead of treating different classes of objects differently.
True, but once you know what other cars are doing, and get data from cars ahead, you should be able to identify things like that before you even encounter them.
A tree falling on a car while it's driving: relatively unavoidable. Cars hitting that car: far less likely with computers. They don't get "shocked" or hit the wrong pedal, and they have faster reaction times. And pile-ups due to an accident are often far more deadly than the accident itself, especially in busy areas, where automated driving could make its biggest improvement in safety.
Deer / kids / unexpected obstacles: can be detected by cars ahead before they even enter the road, and flagged for caution for the cars behind. IR cameras will spot living things for you; motion / blockage detection will spot things near or in the road. Once such a thing is flagged, it's reasonable to assume that other cars would leave more room between each other and around the obstacle, and probably slow down. All things people should do, but rarely actually do, and never collaboratively except at far greater cost and risk than something organized.
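To make the flagging idea concrete, here's a minimal sketch, assuming some car-to-car channel exists; the message schema and the thresholds are entirely my own invention:

    import json
    import time

    def hazard_message(lat, lon, kind, confidence):
        """What a lead car might broadcast after its IR / motion detection
        spots something near the road. Schema invented for illustration."""
        return json.dumps({
            "lat": lat, "lon": lon,
            "kind": kind,              # e.g. "deer", "child", "unknown-moving"
            "confidence": confidence,  # 0.0 - 1.0
            "ts": time.time(),         # lets trailing cars discard stale flags
        })

    def react(message, gap_m, speed_mps):
        """Trailing cars widen their following gap and slow down near a flag."""
        hazard = json.loads(message)
        if hazard["confidence"] >= 0.3:        # made-up caution threshold
            return gap_m * 2, speed_mps * 0.7  # more room, less speed
        return gap_m, speed_mps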
For many of those below, a rebuttal, and some more meta discussion:
I have made no claims that computers can make flawless decisions. I make no claims they can detect / avoid everything avoidable. I don't know where you're getting the idea that I've said this, unless you're dragging in past battles with other people - everywhere I've claimed computers would do better I've included qualifiers like "likely".
The point I was trying to make is that the instructions of a computer, barring external phenomena which cannot be accounted for, are fully deterministic and accurate. If the high-level decisions are made incorrectly, then the end result will be incorrect, but all down the path everything will be accurately performed.
Not so with humans (just look at someone with epilepsy, or at your own memory), or at least not in the minds of some people, which is the only reason I include it. And, if nothing else, we're far further from accurately simulating an entire human body / neural net than from simulating another computer.
Unless they're using a random number generator seeded with outside data, all computer operation can be called deterministic and 100% predictable. Because it's 100% repeatable given identical preconditions. People? Not so / not currently doable.
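A trivial illustration of what I mean by repeatable (a seeded PRNG gives the same "random" sequence on every run):

    import random

    def run(seed):
        # Identical preconditions (the seed) yield an identical sequence,
        # no matter how many times or on which machine you run it.
        rng = random.Random(seed)
        return [rng.random() for _ in range(5)]

    assert run(42) == run(42)  # deterministic, 100% repeatable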
If the wrong operation for a situation were performed, it'd be due to a higher-level decision going wrong, not the command. Meanwhile, people can "hit the brakes" (high-level command) and instead slam the accelerator (command not carried out). And fully believe they hit the brakes, and that the car accelerated uncontrollably.
Wow. I don't even know where to start on that one, because that is about 100% the opposite of true.
I work on autonomous driving robots. Teams I have been on in the past won extremely competitive autonomous vehicle competitions. Our robots crashed all the time. ALL THE TIME!
There is nothing deterministic about how autonomous systems work. In fact our algorithms were by definition random number generators! We randomly generated "hypotheses" and then had heuristic tests that figured out which was the most likely trail and then chose it. It worked well, 99% of the time. But that 1% was a killer.
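For the curious, the pattern I'm describing looks roughly like this toy RANSAC-style line fitter (not our actual competition code, just the generate-hypotheses-then-score shape of it):

    import random

    def fit_line(points, iterations=200, tol=0.5, seed=None):
        """Randomly hypothesize lines from pairs of noisy 2D points, score
        each by how many points agree with it, and keep the best one."""
        rng = random.Random(seed)
        best, best_score = None, -1
        for _ in range(iterations):
            (x1, y1), (x2, y2) = rng.sample(points, 2)
            if x1 == x2:
                continue                  # skip degenerate vertical pairs
            m = (y2 - y1) / (x2 - x1)     # hypothesis: y = m*x + b
            b = y1 - m * x1
            score = sum(abs(y - (m * x + b)) < tol for x, y in points)
            if score > best_score:
                best, best_score = (m, b), score
        return best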
Mind you, since we were winning, we were doing better than the other solutions, which tried to be more deterministic and use maps or things like that. We were basically stateless, which reduces some of the feedback from getting messed up in your estimations.
The problem is all at the low level, not the high level.
Give a robot a perfect map and it is easy to find a way around safely. The problem comes in determining what the hell the camera is seeing (lasers are a lot easier, but not as flexible and more expensive). That is why the bulk of the effort is focused on computer vision, not path finding; path-finding algorithms are simple.
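To show just how simple that half is, here's a toy breadth-first search over a grid map (my example, obviously not what any real vehicle runs):

    from collections import deque

    def shortest_path(grid, start, goal):
        """Shortest route on a grid where '#' cells are blocked. Given a
        perfect map, this really is the easy part of the problem."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:   # walk back to the start
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and nxt not in prev):
                    prev[nxt] = cell
                    queue.append(nxt)
        return None  # no safe route exists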
Sure, actuating the pedals is fairly trivial, but to call autonomous systems "deterministic" is scary to someone who knows how fickle they are. I'm a huge advocate of self-driving cars, but we are not there yet, and when we get there we need to do a pile of peer review on the code (which had better be open source). You never know what race conditions will blow it all up.
It's deterministic. You can run a fully-accurate simulation of your entire setup in a computer, feed in the current input data, and get the precise output in the simulation that the real one would get.
That's essentially the definition of deterministic. If your machine is powerful enough, you can even do it before the real one completes its operations. Anything deterministic is predictable in this manner.
That you can't predict it, or that you don't have a computer powerful enough to predict it, has nothing to do with it not being deterministic or predictable. Even a massive, PRNG-influenced neural network is deterministic.
------------
As an aside, for people who are thinking of quoting the halting problem at me:
The halting problem requires that the detector returns an answer, and it must be true or false and accurate, or the proof through paradox fails. Therefore, it cannot simulate the operation of what it's detecting, because there could be an infinite loop which would cause it to run without end. You also cannot merely abort it after a while, as that would be either a 3rd return state ("unknown") or an inaccurate response.
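For concreteness, the standard paradox construction, sketched in code (illustrative only; `claimed_halts` stands in for any alleged perfect detector):

    def claimed_halts(program) -> bool:
        """Stand-in for a hypothetical perfect halting detector.
        Any fixed strategy it picks is defeated the same way."""
        return True

    def adversary():
        """Does the opposite of whatever the detector predicts about it."""
        if claimed_halts(adversary):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately

    # With this stub, adversary() loops forever even though the detector
    # said it halts, so claimed_halts is wrong about it. And any detector
    # that tried to *simulate* adversary would itself never return.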
Ok yes, if I give the robot the same exact pictures and random generator seeds, it is deterministic.
But in the field, even if it is stationary, the pictures are never the same from frame to frame, and everything relies on a lot of randomness and random noise.
Simulators are almost a waste of time because the real challenge is handling that noise, NOT dealing with the information extracted after the noise. Once you have a clear view of the world, everything is easy; getting the eyes to actually understand anything is the hard part.
Even still, let's imagine they're using a pseudo-random number generator, and let's imagine they're using a trained neural network in some part of the program. The training (genetic or back-propagation based) didn't account for a very specific scenario, and the tool mis-classifies it, telling the computer to do the wrong thing.
If you're suggesting that we know what a computer will do on every single given input, we don't. That's in a class of unsolvable problems, as it requires the halting problem to be solved.
[Added the following section after being down-voted]
A "shock", in the deterministic computer world, could be an incorrect classification resulting in an incorrect action. The results are similar to when a human is shocked and doesn't act correctly.
Is the scenario I've painted limited to neural nets? No, any trained classifier, even any program at all may have "minor" slips like this. (When dealing with an analogue world, there's a ridiculously large number of situations that may exist). But you know what? Humans don't perform perfectly; and a computer that's been shown to perform the correct final* response 100% of the experimental time, and 99.9997% of the real time is competent at the activity, and trustworthy.
Disclaimer: I love machine learning, I love neural networks, and I see great usefulness for them in this world; however, I also believe that the perfect combination of variables could yield unusual results. Is it unlikely? Incredibly. Possible? Yes.
* Consider a situation where a child is playing with a ball. The computer may not recognize that anomaly on the side of the road as the child and the ball, or even as something to be avoided. But suppose the computer recognizes the anomaly as something that needs more information, tells the car to slow down until it can acquire that information, then recognizes it as something that needs to be avoided and does so successfully, safely, and with sufficient space for the conceivable outcomes involving the child. Then, while the computer couldn't discern the object at first, it did in the end and adapted to the situation appropriately. The "final response" was correct.
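In code terms, that caution-first policy might look like this (function name and thresholds invented purely for the sake of the example):

    def respond(object_confidence, is_hazard, speed_mps, crawl_mps=2.0):
        """Toy policy for the child-and-ball scenario: unknown objects
        trigger caution first, avoidance once classified."""
        if object_confidence < 0.9 and is_hazard is None:
            # Can't discern the object yet: slow down and gather data.
            return max(crawl_mps, speed_mps * 0.5), "slow, acquire more data"
        if is_hazard:
            # Recognized as something to avoid: wide, safe berth.
            return crawl_mps, "swing wide and prepare to stop"
        return speed_mps, "proceed normally"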
[edit: rather than simply down-vote me, please tell me where I'm wrong so I can learn.]
Even discounting buggy software, think hardware and wiring. It is quite possible that a malfunction will take place (they do, constantly).
I could go for "fully computerised traffic would be significantly safer than human traffic", but absolutes about the former being unfailing are just silly.
I think we'll need multiple processors thinking independently and reaching consensus before the gas pedal is pushed and letting any single processor hit the brakes (and disengage the gas, of course). Also good would be to allow other cars to request that a car stop, perhaps even communicating with the traffic control network to make it stop.
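A minimal sketch of that consensus rule (my own toy version, not any real automotive architecture):

    def drive_command(votes):
        """Accelerate only on majority agreement among independent
        processors; any single processor can force the brakes."""
        if "brake" in votes:
            return "brake"    # one dissenting processor stops the car
        if votes.count("accelerate") > len(votes) // 2:
            return "accelerate"
        return "coast"

    print(drive_command(["accelerate", "accelerate", "brake"]))  # brake
    print(drive_command(["accelerate", "accelerate", "coast"]))  # accelerate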
And of course, there will be the Android app the pedestrian can use as a last resort to avoid being run over, which stops any cars heading for the phone. :-)
Reminds me of a 0.1-version of this application that I heard about some years ago: some guy saw a fairly fast-moving car heading toward a red light with pedestrians crossing, the driver looking down at something. Guesstimating the car was about to be too close to stop in time, the guy threw his cell at the car and got the driver's attention.
I agree computers will be able to drive much better than humans, and this transition will occur at some point--the question is, when? Is the 8 years mentioned in the article anywhere close to realistic? I don't think so.
Not only does the technology have to become better in the aggregate than humans, it has to become so much better that there aren't any crash situations where human drivers perform better. There also can't be many crash situations where the computer doesn't significantly outperform a human. Otherwise both public sentiment against the cars and legal liability will be too great.
Imagine: whenever there's a software bug and an automated car crashes in any situation where a human might not have, the GoogleCar manufacturer will get sued, the local paper will write an article about how Google killed someone, etc. And then, once a year, there will be another story about how crashes are down 2%... that's a lot of lives saved, but which article do you think will get more attention? And do you think "our cars usually make driving safer, just not the night of October 10th when John was killed" will be a winning defense?
I can't wait for this technology, I think it really will deliver all the benefits the article mentions long-term. But it's a long way off (I hope I'm wrong). I give a lot of credit to Google for investing in it now--this is one of the things I read and think "don't be evil" is more than just marketing-speak.
Keep in mind that there are already millions of lines of code running in modern cars. The worst-case failure mode of all that code would certainly result in loss of life, and legal liability on the part of the manufacturer/insurer/etc. So really the question is just how much automation we'll see in automobiles in the future, not if.
The Toyota Pedal case certainly points towards liability demanding less code.
If there were an automation feature that prevented accidents in 90% of the situations where a human would crash, but caused an accident in a situation that was 10% as likely to occur as a human crash, the result would still be huge liability, since the manufacturer would "definitely" be at fault in the latter case.
Suppose a robot driver is 1/10th as likely to hit and kill a person as a human driver. It would still be something that happened. And when it did, the results would be very messy, since the law presently sees any accident through the lens of finding a person at fault. When a robot kills (and a robot will kill if they are "out there"), is the fault that of the owner, the manufacturer, or the programmer?
I assume the programmer works for a company, and the manufacturer is simply building the device. If the problem was related to faulty components, then I would blame the manufacturer. A flaw in the software or design? I would then attribute it to the company that designed the bot. If the device is improperly used, then it would be the owner. I think it all comes down to the specifics of the context.
All of the sorting-out approaches you mention would certainly come into play.
Normally, the programmer is in the clear, but obviously there would be exceptions - if you could prove they intentionally introduced code that made the car do something bad, they would be liable. Now introduce a plaintiff's lawyer to muddy up the definition of these terms.
Let me point out that determining liability/culpability/etc. in a messy accident is a difficult task when you've got just two human decision makers involved. It's decided by jurors who aren't necessarily geniuses but probably think they understand human reactions and decisions. Introduce two or three more decision makers into the discussion and you've got a messy process indeed.
We saw how far the faulty Toyota accelerator pedal charge went. A big factor in the reactions to evidence was the spookiness of "drive by wire" (I think we've gone over how the actual cause was driver fault but consider how the complexity of the system made a lot of people less likely to believe this).
I mean, it's terribly ironic. Driving in general is one of the most dangerous activities a modern person can engage in. If robots drove our cars the way we drive our cars, the protests would be huge. In a sense, the way we've socially normalized driving is through the huge insurance and liability complex. This assures people that when someone drives wrong, they will be punished somehow. A similar or even significantly smaller number of accidents with no person to punish would be unsatisfying socially.
And I suppose we'll have to get rid of bicycles and pedestrians, too, in this perfect world where intuition of human behavior is unnecessary to safely pilot a vehicle on the road?
Far from it. If cars were less dangerous to bikers and pedestrians, people could actually use bike lanes safely, and cross at walk signals.
Besides. I never said nor implied "perfect". Merely better, and being better is easier without other human drivers, who can far more quickly take a bad (or good) situation and make it worse. Bikes don't go 60+ mph and weigh over a ton. Neither do pedestrians. And very few of either of those carry large quantities of gasoline.
I walk a lot and the times where I've been nearly hit by a car have ALWAYS been at crosswalks and traffic lights (where the pedestrian light is green and cars should be stopping).
Crosswalks are a problem because you can't predict car behaviour: some will stop. Some will completely ignore them (for reasons unknown). If there were no crosswalk you know exactly what cars will do. That's much safer for both parties.
The "illusion of safety" is most dangerous at traffic lights. 2-3 times I've had cars just sail through, completely oblivious. Note: they're sailing through on RED lights. Not amber going red. It's not one of those borderline cases.
Once I had a cop have a chat to me about jaywalking but I've never been fined. I would be pissed if I ever was. In a fight between a car and a pedestrian the pedestrian loses (big time) so pedestrians have a higher vested interest in their own safety. I know what the light changes are at intersections I cross a lot. Where are these cops when cars sail through red lights or nearly run over pedestrians when turning when the pedestrians have right of way?
I get the distinct impression these skills will come in handy when I move to New York next month!
Anyway, back to the self-driving cars: this I believe will be a painful transition that will take many, many years. It's nice to see Google working on this.
"Once I had a cop have a chat to me about jaywalking but I've never been fined. I would be pissed if I ever was. In a fight between a car and a pedestrian the pedestrian loses (big time) so pedestrians have a higher vested interest in their own safety"
Look at how many people drive off cliffs, into trees, or into other cars when a deer runs across the road. Vested interests in safety aside, it's always dangerous to do something unexpected in traffic.
Right turn on red (http://en.wikipedia.org/wiki/Right_turn_on_red) is the source of almost all dangerous situations I've had on the road as a pedestrian. The one time I can remember when the cause was different was when somebody drove the wrong way down a one-way street while I was crossing the road, assuming that this was impossible.
Where I recently moved to there are a lot of one way streets. I feel stupid looking to see if anyone is turning down the wrong way, but what you described is exactly why I do.
Where I live the government is putting up big fences in the middle of the road to prevent jaywalking, because people sprint across roads at night, even the highway, and there are lots, lots, lots of car-vs-pedestrian accidents because of people running across the street to catch a bus instead of using the pedestrian bridge or crosswalk not far away. At night. Wearing dark clothing. (No, I'm not making any of this up.)
> Once I had a cop have a chat to me about jaywalking but I've never been fined. I would be pissed if I ever was. In a fight between a car and a pedestrian the pedestrian loses (big time) so pedestrians have a higher vested interest in their own safety. I know what the light changes are at intersections I cross a lot. Where are these cops when cars sail through red lights or nearly run over pedestrians when turning when the pedestrians have right of way?
So pedestrians can do whatever they want because it's more dangerous for them? Did you know that a flashing red hand means "don't walk"? You can be ticketed for entering the crosswalk when it is flashing. How many of your close calls would that eliminate?
As a driver, what is your first instinct when you see someone in the middle of your path on the road? I also hate people who jaywalk onto medians - how do I know they're going to stop when they get there? Especially if they're running at full speed. I've also had several pedestrians almost jaywalk right out in front of my car, some not noticing me until I honked at them (too close to stop).
We would all be safer if we all followed the rules. Not just cars.
If anything goes wrong, there won't be time to hit the manual override. Mainly because we as passengers won't be paying much attention to whatever is going on outside. At the same time, panicky braking maneuvers made by humans randomly overriding their steering system would probably be a bad idea anyway.
Besides, humans are really lousy drivers. They should not be trusted to operate such vehicles at all. Machines will make mistakes too, but I bet we'll see a lot fewer of them compared to the era of human pilots.
"Anyway, I rather hope I'm wrong. I'd really like to drink my coffee and read during my commute as my car drives me to work."
I was struck by this comment, because I do this every day...by riding the train. Automated cars, flying cars, double-decker highways, etc. are fantastic innovations, but I sometimes wonder if we couldn't do better than the car, at least in terms of our regular commuting. I ride my bike and take the train because I have that option where I live and I love it.
OK, forget my dumb last comment...
So imagine... cars are automated, and possibly pooled. The bus is the most energy-efficient form of transport. So you want to go from A to B. Instead of looking up static train/bus schedules, you just type that into your phone. A server system looks at how many people want to make a similar trip, and automatically starts an automated taxi that has just the needed size and will show up 10-15 min later. That way you get the best of both worlds. And if that's not enough privacy for some, you could just plug mini-cars together, reducing air resistance but giving everyone their own closed module. Even better, you could switch modules between road trains. The possibilities are endless. I think this marks the beginning of a new era in transport.
Very simple. It takes me 30 min to drive to work in my car, and 1 hour by train in the good case. The train can take up to 2 hours overall in the worst case. And this is partly time spent switching trains, getting to the train station, etc. Not much time you can use to read or have breakfast quietly.
I commute via bike in good weather and train in bad weather, but that option isn't available for many people, nor for many potential trips. The problem with trains is that the whole train goes on one route, so if you want to go elsewhere, you need to change trains, and have all the extra hassle and delay of switching trains and waiting for another train (and of course, there are many places that aren't served by subways, or are served only by commuter trains that come only once an hour).
In a car, you can choose your own route, which makes a lot more routes feasible to do in a reasonable amount of time. Plus, in an automated car, you can spend your entire trip reading and drinking coffee, instead of having the walk to and from the train, trying to juggle your coffee and umbrella and laptop all while trying to hurry to catch the train.
Problem is, while a robot car for every person is far from the optimal solution, there's a very large gap between the current public transit technology and the required technology to allow public transit to approach or match the advantages of a personal car.
http://www.ruf.dk/ - I heard about this invention years ago and apparently they are still working on it.. Don't know if I believe in it but it is an interesting concept.
This especially scares me as a cyclist. I'm sure they'll put a lot of effort into preventing automated cars from hitting other vehicles or slow-moving pedestrians. Now what about bicycles traveling around 20 mph in the right-hand side of the lane?
Many people seem to neglect or completely fail to understand the risk of increased perception of safety leading to an overall decreased safety in these systems. We can't have the best of both worlds – a machine's constant attention and reaction time coupled with a human driver's intuition – because the moment you give him an autopilot that he thinks will keep him out of trouble, the human sits back and stops paying attention to the road.
These automated systems may eventually become all-around better than any human driver, but it will take many iterations to get there, and the first versions will certainly lack a human's ability to anticipate trouble before it arises. It will be a long while before an autopilot can say "hey, that car ahead is weaving unnervingly, the driver might be drunk so I should keep my distance" or "the cars ahead of me are curving slightly to the left, there might be a cyclist riding on the shoulder so I should slow down a bit and do likewise." And until we have that level of automation, I'm afraid our roads are about to become a lot more dangerous.
I'm an avid cyclist. I think I'd prefer a deterministic, law abiding driver to the average human.
What makes me feel unsafe are the drivers who break the law (you don't know who you are) and the drivers who hate cyclists and pass us as closely as possible, throw bottles at us, etc (you do know who you are).
What makes me feel unsafe is the unpredictability. I would expect a computer would just follow the rules exactly and be very easy to bike around.
depends on where you are. I've never seen it happen in the sf bay area, but it was a pretty common occurrence where I grew up in the central valley, several hundred miles away.
I am a cyclist who was run off the road by an suv while training for the Westchester half ironman. Was fortunate to walk (ride) away with only one broken bone (4th metacarpal). I've been riding my entire life in NYC without ever having an incident (thankfully) and the one time I go ride in the 'country' I get my ass handed to me.
My gut tells me that drivers in unpopulated areas are less aware and less able to 'share the road'.
Yep, when I ride my bike in the early morning, I purposefully do not wear light-colored or reflective clothing. I don't want to ever believe for a nanosecond that my safety is anyone's job but mine so I prefer to view the cars as actively trying to hit me and work to make it impossible for them to do so. The relationship (cars and bikes) is fundamentally adversarial in nature, best to act like it.
It does seem tricky for the car to properly account for the presence of a cyclist using just the cameras and radar and whatnot. What if as a cyclist you carried a special RFID that robot cars would all be programmed to recognize?
If there was a special device which made you more recognizable in an automated traffic environment (be it on a bike, or as a pedestrian), I would buy such a device even if it wasn't mandatory to have one.
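Something as simple as this would do (the payload format is entirely hypothetical; no such standard exists today):

    import json
    import math
    import time

    def cyclist_beacon(lat, lon, speed_mps):
        """Hypothetical payload a bike-mounted tag might broadcast a few
        times per second so automated cars can track the rider."""
        return json.dumps({"type": "bicycle", "lat": lat, "lon": lon,
                           "speed_mps": speed_mps, "ts": time.time()})

    def nearby_cyclists(beacons, car_lat, car_lon, radius_deg=0.001):
        """Car side: keep only fresh beacons within a rough radius."""
        parsed = [json.loads(b) for b in beacons]
        return [b for b in parsed
                if time.time() - b["ts"] < 2.0   # drop stale beacons
                and math.hypot(b["lat"] - car_lat,
                               b["lon"] - car_lon) < radius_deg]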
why? Have you seen lidar work? It's pretty effective. I understand lidar wouldn't be used, as it's dangerous (laser, yay!), but I imagine other radar is similarly effective
"The prospect of automated cars scares me because, obviously, they cannot be perfect."
They can, at least 1000 times more perfect than humans are. A computer could have an exact model of the road and compare it to what it sees, and could report bugs in an objective way, the same way computers on airplanes work, having made flying way safer.
This is true, but the fear that could prevent adoption is that you want that control in your own hands, not the hands of something else. It's like trusting someone to catch you when you fall - statistically you know it's pretty likely, but there's something unsettling about knowing you're relying on an extraneous factor, not YOU, to prevent harm.
But don't you also have to trust a bus driver, airplane pilot, etc.? And all kinds of things can go wrong with these human drivers: they might be drunk or under some medication, they can die of a sudden heart attack, they can just make bad decisions, or even worse - they can be commanded to fly the plane into some tower.
If the computer driver is correct 99% of the time and the human driver 98%, then it is a vast improvement. I imagine that ultimately this technology would result in almost 0 human input other than e.g. typing in a destination.
I go food shopping every week at about the same time. I email trip plans between friends and schedule them. I don't drive to work but certainly if I did that's an obviously predictable event.
Between synergy among the various google properties and good pattern-recognition in the google cars (or whatever) system itself I'd be surprised if predicted trips didn't hugely overwhelm explicitly-ordered ones.
I agree. They'll have to accurately distinguish between toddlers walking into traffic and birds flying in front of cars.
One thing I read said the human driver had to take control because of a cyclist. Obviously there's more work to do.
But eventually these systems will get there. And they'll probably be able to share information with other vehicles too. So robotic vehicles will be able to see around corners. And tell each other exactly what they'll do before they start doing it.
Safety could wind up being six sigma instead of whatever it is currently. And we'll actually have the ability to measure it, because collecting such statistics will be more feasible.
Right now, most of our screw-ups don't end up causing accidents and they never get recorded. And we prefer it that way! But we'll be able to do data mining on every little thing robotic cars do.
I strongly suspect that the reason the human took control was because there was a journalist in the car. In videos I've seen, the vehicles are able to handle bicycles just fine.
And they choose which videos they show, correct? So it's all carefully presented. Which means they are not completely confident with how the situation is handled.
Which is not saying this effort shouldn't continue. Human drivers don't always handle cyclists well either.
I don't think there is as much filtering on stuff for internal consumption as external. (I'm an employee. Internal videos are the ones I was referring to.)
I recall an article on here not too long ago about mine cars (I believe in Australia) that ran 20 kph and 10 meters apart when driven by humans, but over 120 kph and 6 cm apart when driven by robots (numbers pulled completely from memory and almost certainly all wrong). Robotic drivers will be better in almost every way. All of your concerns apply to people as well, and people have proven themselves very bad at driving.
My biggest fear is that there will be a manual override.
I know we're talking hypothetical percentages here, but if the automated car is right/safe in 99% of driving circumstances, how much help do you think a human will be? It's not as if humans are right 100% of the time. I'd wager we're closer to 97% or so, or possibly even less.
It seems to me, at some point, computers will easily out drive humans.
> I'd really like to drink my coffee and read during my commute as my car drives me to work.
unfortunately, I think a lot of Americans are already doing this, or something close to it, today -- but without an AI driving the car. I've lost track of the number of times I've seen women driving huge SUV's with kids in the back, and the woman is yapping on a cellphone while checking her makeup in the mirror, etc. Oh and, while driving.
This is such good news -- probably some of the best tech news so far this year -- that I keep looking for problems. It's too good.
So, although I'm hugely supportive of this, and I think it's possible to have fully autonomous vehicles within a decade, I want to see it. In production. Then I'll believe. Up until that point it's like one of those science stories where it works in the lab but nobody has any idea whether or not it will work with humans. There is a long, long way to go from the mechanics of AI and auto-driving and the real thing happening. It's not just the technology. There are about 17 large groups of people and organizations that need to be re-aligned before this would ever fly, even if it were perfect. This could easily end up like pot legalization, where it's obvious for years that the current laws are idiotic but it takes an entire lifetime to get the politicians on the same page as the public.
I'd love to go all blue-sky on this, going on and on about the massive changes true auto-drive would entail. But instead I'm opting to be very cautious about separating press releases from real products and benefits.
The key question is why would Google produce this technology and enter the autonomous driving market?
I think one of the key issues for Google, from a corporate strategy standpoint, is "freeing up people's time." Driving is one of the last places where we spend significant time awake without being able to use the Internet and hence any of Google's services (except if you use a smartphone, which is now illegal in some states, and in any event isn't an ideal place to be clicking on ads). There is a safety issue here as well. Although using a smartphone while driving is illegal in some states, people are driving while using their smartphones with increasing frequency. We need our Internet "fix."
I am sure this driving technology also taps into several of Google's key capabilities: e.g., programming expertise, its voice recognition technologies, search, and its mapping software (Google Maps and Navigation).
"Dr. Thrun is known as a passionate promoter of the potential to use robotic vehicles to make highways safer and lower the nation’s energy costs. It is a commitment shared by Larry Page, Google’s co-founder, according to several people familiar with the project."
It sounds like this may be as much an idealistic pet project as it is any kind of corporate strategy.
Larry's idealistic pet projects _are_ corporate strategy. That's why Google dominates - they are unpredictable and creative.
Crazy stuff Google did:
* mapped all the world's roads using cameras and a mechanical turk, destroying billions in market cap for map data providers
* Android, groundbreaking in both technology and pushing the envelope on patent law
* launched a satellite
* deep tendrils of fiber optics and community oriented wi-fi projects to connect the world
It doesn't surprise me at all Google is building autonomous cars. Search and adwords are not even the most intriguing things to me about Google these days.
Imagine a mad scientist who has $12,000,000,000. That is Google's strategy... also lots of testing.
Small correction: Google didn't "map all the world's roads using cameras and a mechanical turk, destroying billions in market cap for map data providers." That is more than a stretch. They mapped SOME of the roads in the US, augmenting existing TIGER data from the US government, and used it in place of TeleAtlas. The "new" data is pretty bad, so I use Bing for directions because of it. They still rely on TeleAtlas (one of the map data providers they "destroyed") for almost every other country.
That's probably what Google Translate and speech recognition and text-to-speech all sounded like ten years ago. But now all of these are able to work together to make a universal translator, turning speech in one language into speech in another language. Pretty fking cool, I'd say!
A company like Google can turn an idealistic pet project into something truly game-changing with consistent time and resources.
I don't know Sebastian that well, but I believe his motivations are largely idealistic. At least, that's the pitch he and his group used to make to prospective members - that solving the autonomous driving problem could save tens of thousands of lives a year.
Google currently hires people to drive their streetview cars. They could save a significant amount by automating that collection process. Beyond not having to pay drivers, you also don't have to have vehicles designed to keep their occupants safe, which could be significantly cheaper to produce, operate, and maintain.
And even though it's not a core part of their business today, the first company that can offer a viable automated trucking platform is going to be strongly positioned in a multi-trillion dollar industry.
People criticize Google for being "just an ad company". Branching out with tech like this would put an end to that pretty quickly, wouldn't it?
It's the other way around too, at least for the first trips.
A human-driven Google Street View car would not only retrieve mapping data, photographs for the Street View and cellphone tower and wifi signatures for positioning; it would also serve to annotate the roads for easier navigation by autonomous vehicles.
I assume this is sarcasm, but perhaps not. If Google is developing this automobile technology, perhaps it should enter and promote telecommuting technologies to reduce the amount of time that people are in their cars?
Google is good at hard computer-science problems, which this is. It also has an obvious market application. Even if they don't market this themselves, it'd be easy to spin it off or sell it for lots of money.
I think it's more likely that someone high enough up thought it was "cool", decided how "cool" they thought it was (1M? 10M? 50M?), and then approved the project.
One possible reason is what you just pointed out: If you have a 20 minute commute and you could trust your robotic driver to take care of it, that is 20 minutes you could be on your smart phone interacting with google advertising.
(personally I think the other comments about google doing it mostly because it is a hard, cool, worthwhile challenge are most on the money though)
And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?
The insurance company, just like today. Possibly paid for by the manufacturer and entirely included in the sales price, or possibly paid for by the driver/owner like today.
Either way, expect a discount, at least after the technology is proven. I know I'd rather insure a robot than an often sleepy, distracted, intoxicated human!
For years, even decades, I've heard this cited as if it's a legal can of worms that makes the whole notion of automated driving a political impossibility, so to speak, in the foreseeable future. Sure, it would save a lot of lives, the argument goes, but the rare failures would bring down a deluge of lawsuits that would crush any manufacturer foolish enough to try to mass-produce something like this.
I don't think that's the case, even taking as a given America's disgusting obsession with lawsuits. There's an obvious answer to the liability question, and one that I think Google is already depending on in allowing their robot cars to drive all over California: The person in the driver's seat is responsible as always. That person can choose to press the autopilot button but it's not fundamentally different than cruise control. Touching the brakes (or the steering wheel) immediately transfers control back to the driver. The car is doing what it does, as always, strictly as the driver's proxy.
Now of course it will happen that some driver sues the manufacturer because there was an accident they believe wouldn't have happened if they'd never gone on autopilot. Just like there have been (I blithely presume) lawsuits over cruise control. But the manufacturer will weather those lawsuits and reasonable legal precedent will be established soon enough.
But now suppose you press autopilot, go to sleep, and wake up to find you've committed vehicular homicide. That's a scary thought, but is it a potential dealbreaker in bringing this to the mass market? I think the answer is no, because we've already established the liability question: You were, officially and legally speaking, criminally negligent to go to sleep. Unofficially it might well have been a perfectly reasonable risk to have taken, if the probability of it ending in tragedy is sufficiently low. Logically, you could argue, citing statistics on human and robot drivers, that going to sleep with the car on autopilot was markedly less negligent than not putting the car on autopilot at all. I don't know if that would fly in court. But as far as legal obstacles to adoption of robot cars, I don't see it as much of an issue.
Civil liability is only half the issue. What about criminal liability? If your autonomous vehicle runs someone down because of a bug, who goes to jail?
My understanding is that criminal liability almost always requires either intent or negligence.
If the automated system is less error-prone than a human, the driver can't reasonably be negligent for doing nothing ("it would be incredibly foolish and arrogant for me to think I could handle that situation better than a computer designed by 10 PhDs, especially when the insurance company agrees that the computer drives better").
For the manufacturer the reasonable position would be that as long as they followed certain verification/testing standards, and fixed any discovered bugs as fast as reasonably possible (probably not issuing a recall or telling people to turn it off, assuming the system was still safer than a human driver), they should be fine. The industry might need to get together and come up with standards and a certification body for the first part to work, and I have no idea how well the no-recall thing would work without something explicitly permitting it.
This is surely the rational way to handle it. But I'm not convinced a judge and jury would see it the same way.
The prosecution would likely argue that no computer program can be a match for a human's ability to handle new and surprising situations: the "driver" must still pay full attention to the road, whether a computer is driving or not. It would not be enough for a computer to be statistically better at driving; it would also have to not make mistakes that a reasonable person wouldn't make.
Also, we don't have information on how error-prone this system actually is. All we know is that a car it was driving had one accident. We don't know how many times the human driver had to take control because it would have otherwise crashed.
> argue that no computer program can be a match for a human's ability to handle new and surprising situations
I suspect that this ability is due to our having more awareness of context. I imagine it wouldn't be terribly hard to dump a larger amount of contextual knowledge into a car, probably just attach an expert system and pay a few tens of millions of dollars to have people fill it with data. I'm not sure what the state of machine learning is, but they could perhaps pay owners to upload logs (maybe especially after any accident / close call, or maybe that would give a skewed perception) and then crunch those to find new rules.
> Also, we don't have information on how error-prone this system actually is. All we know is that a car it was driving had one accident. We don't know how many times the human driver had to take control because it would have otherwise crashed.
Yup, that the humans ever took control at all says it probably isn't quite ready yet. But it looks very promising. If I ever have kids I might not need to deal with them learning to drive, and maybe my next car could let me browse the internet on the way in to socialize at work (well, why else won't they let me telecommute?).
What happens when this is due to a hardware bug, like with the Toyotas recently? I don't think criminal liability is involved except in special circumstances. Why should software be any different?
I think the difference between this and the Toyota incident is that the driver still maintains absolute control over the autonomous google car. So it could be argued that anything the AI does is still ultimately the driver's responsibility.
We deal with that issue today. Who is liable when brakes fail? It's usually determined on a case-by-case basis. Why would this system be any different?
I guess this could be different just because 'the public' does not accept it to be the same. That may be irrational, but it also is quite natural. We accept traffic deaths more than plane crashes, invest disproportionately in research on some diseases, etc.
Suppose there's an extreme edge case where a bicyclist's fluorescent jacket appears as a road to the car, so the car chases down the cyclist and runs him over. This is not a hardware failure, but an active decision made by the car to cause injury to a person. So in such a situation, who would be responsible? I guess it is a question for technolawyers to decide.
How is that not a system failure? Because the car has some kind of intent? Well, we have an analogue for that today, too: who gets punished when a dog attacks someone?
I'm not saying there are no details to work out. I'm just saying that it's not some unfathomable chasm we have to cross. We deal with similar questions all the time right now.
> The insurance company, just like today. Possibly paid for by the manufacturer and entirely included in the sales price, or possibly paid for by the driver/owner like today.
duh, of course! The question is - at what premium? Because you know, it's actually the customer who pays the price in the end.
The first few years the premium would probably be much higher, especially since these would be luxury cars and you just know there would be attempts to fool the cars into compromising situations (or simply steal them). After the adoption curve hits the early majority the premium balance will shift the other direction (you'll pay more if your car doesn't have the safety feature of being able to drive itself). And at some point it will (assuming that automatic cars are in fact safer) become illegal to operate a vehicle manually on a public road unless it's an emergency.
Also fully automated taxis start to look like a mass transit system, and if Zip car offered to pick you up at your house...
Cancel the bullet train, and instead plow all that money into rapidly deploying this.
There was a period before cars where trains were the most efficient form of long distance travel, so cities were built around this fact. Then cars got popular, so we built cities for those. Then they pushed light rail to combat suburban sprawl. It would be funny if automation made cars the ideal mode of travel again.
This is not a solution to much, apart from getting stupid people away from control of vehicles. We still need to move to mass transportation, to reduce pollution, decrease traffic, stop our reliance on fossil fuels, increase productivity, remove noise, free up space in cities, and lots of other reasons.
While cars that drive themselves are not the final solution they could be a necessary stepping stone to mass transportation.
Imagine a future where cars are not owned by individuals but are owned by a few companies (or maybe even government, the way public transport is) and participate in what is essentially a global taxi network.
Instead of using your own car you would just step out of your home, enter a destination address into your phone, and press a button. That would be sent to a server which, given your location and the position and destination of all cars en route, would pick the one closest to you that has an available seat and is going in that direction. That car would just pull up, and you would be notified in real time how far away it is.
If a car is not available, it would be dispatched from the closest holding area.
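The matching step could be as simple as this toy dispatcher (the data model is invented for illustration):

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def pick_car(rider, cars, detour_limit=2.0):
        """Choose the nearest car with a free seat whose destination is
        close enough to the rider's; None means none is available, so
        dispatch one from the closest holding area instead."""
        candidates = [c for c in cars
                      if c["free_seats"] > 0
                      and dist(c["dest"], rider["dest"]) <= detour_limit]
        if not candidates:
            return None
        return min(candidates, key=lambda c: dist(c["pos"], rider["pos"]))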
Think ZipCar except with cars that come to you, drive themselves and you can share with other people while it drives itself (or you can have it for yourself, if you prefer, except at a higher cost).
This kind of a network would drastically reduce the number of cars needed to drive people around, and given global knowledge of where people are and where they want to go, it could plan the absolutely most efficient route.
For popular routes (e.g. if it detects a pattern that a lot of people commute from San Francisco to Mountain View around 9 AM on Mondays through Fridays) it could deploy bigger cars (i.e. buses), further reducing traffic.
When you think about it, it's not surprising that car companies will not work on a technology like this, since it would drastically reduce demand for cars; and if this comes anywhere close to reality, expect lots of legislative and political battles (after all, this will mean that Americans in Detroit will lose jobs).
You have described my personal vision of a PRT-connected future even better than I have myself.
What's so beautiful about it is that it takes a massive infrastructural liability -- a century of building our world around the automobile instead of around rail or other mass transit systems -- and turns it on its head. We get out of infrastructure jail free.
Think of the time humans spend behind the wheel of a car today. How many hours a day? How many solid days of driving a year? How many man-years of humanity are wasted on fruitless idling in traffic? A society with autonomous personal transport becomes a measurably more productive one and likely a happier and more satisfied one as well.
You also get to choose the best car for the task: electric cars for short hops, other technologies for longer trips. You may not need to do a trip in one shot, so local electrics can shuttle a bunch of people to a bus that may go 30 miles without stopping. Public transport would get a lot more economical as a result, too, and determining the best place to locate it would be a relatively simple computer problem.
Car companies may fight it but it won't matter, the economics are overwhelmingly against them on this, because signing up for this service will be overwhelmingly cheaper than owning your own car, probably by at least an order of magnitude for most people.
>and given global knowledge of where people are and where they want to go, it could plan the absolutely most efficient route.
Random digression, but I think you will find this is an NP-hard problem - I'm quite sure you can encode the Travelling salesman problem[1] within it. So the absolutely most efficient route is probably not feasible, although the system could certainly do a good job optimizing over what we have now.
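For example, a greedy nearest-neighbor pass is cheap and usually decent, just not optimal:

    import math

    def nearest_neighbor_route(start, stops):
        """Greedy TSP heuristic: always drive to the closest remaining
        stop. Polynomial time, but can be well off the true optimum."""
        route, remaining = [start], set(stops)
        while remaining:
            nxt = min(remaining, key=lambda s: math.dist(route[-1], s))
            remaining.remove(nxt)
            route.append(nxt)
        return route

    print(nearest_neighbor_route((0, 0), [(5, 5), (1, 0), (2, 3)]))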
That would be nice for commuters, but not practical for parents of small children. They need a bunch of stuff in the car all the time (child seats, diapers, clean clothes, sand toys, stroller, etc.).
I totally disagree. We don't need mass transport to deal with pollution, we need transportation that doesn't rely on fossil fuels.
Personally I view mass transport as the stop gap to deal with the fact that cars really suck today. Mass transport will never be optimal because (a) it is mass transport, which means sharing space with a bunch of (potentially sick) strangers and (b) the bigger the group the less common the final location. A car can go exactly from point A to point B but a train can only go between stations. Every change between train (city-to-city) to tram/bus (location within city) is wasted time, as is every tram/bus stop that you don't use.
In my perfect world we would get rid of mass transport and just have a sort of network of cars. Most trips people take are planned well in advance, so you could schedule when you want to leave, walk outside and step into the car that's waiting there for you, kick back, and wait to be let out at the door of wherever it is you're going. The car won't need to park because when it's done with you it goes on to the next person.
Something like this would be much more efficient than mass transport that has to just keep on making the same trips regardless of how many or how few people are using it at those times because people might need to use it.
Packet-switched? Hmm. A Mini Cooper weighs about 2500 pounds, so with four people per Mini Cooper your packet header is still going to put at least twice as much load on the network as the packet data.
Ha, I saw these cars multiple times this past week in San Francisco, including turning from Broadway onto Columbus in heavy traffic. If that was done autonomously, then I'd be very impressed since I have trouble navigating through those intersections without hitting the many bold pedestrians.
It is clear that one day automated cars will make fewer mistakes than human drivers, and that such systems can reduce the number of traffic accidents.
There is one big problem - convincing some people this is true: that the cars will make mistakes, but far fewer mistakes than humans.
The problem is there are many people who fail to see it that way. More than once I have been in the situation of explaining the concept of automated cars to others, and the opposing arguments always go something like: "But, can you be absolutely certain that there isn't going to be one single case where a car makes a mistake and kills someone?" No, I cannot, and I will not.
And, of course, once the self-driving cars become a reality, such cases will happen. And then there will be an article in some scandal-seeking newspaper about how an automated car killed a father of two, and how this is what happens when you let science control your life and let all those over-educated people do what they want... Or something along those lines...
The thing that excites me about this isn't the driving, it's the parking. Living in SF, it would be AMAZING to be able to pull up to wherever you were going, get out, and tell your car to go find its own parking spot. Then, when it's time to leave, click that button on the key fob, and the car is waiting outside in five minutes.
In Google's blog post, they said that they think automated cars will "significantly reduce car usage". I have no idea what they're basing that claim on.
Automated cars eliminate many of the costs of driving--lost time, frustration, etc. When you decrease the cost of an activity, people are going to do it more, not less.
In fact, if you ask people that can afford cars but commute by train why they do it, many will say that it's because they can work on the train and not have to deal with driving in traffic. Those people would definitely consider switching from the train to automated cars.
Because an automated system would be less prone to congestion, and the average time for travel would go way down. You'd have to do the math to find the break-even point, but I'd imagine that you could add many more cars into an optimally flowing traffic system and still end up with less total running time.
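Here's the crude version of that math (every number below is an assumption for illustration, not a measurement):

    def lane_throughput(speed_mps, car_length_m, headway_s):
        """Vehicles per hour for one lane: speed divided by the road
        space each car occupies (its length plus the reaction gap)."""
        gap_m = speed_mps * headway_s
        return 3600 * speed_mps / (car_length_m + gap_m)

    # Illustrative: 30 m/s (~67 mph), 5 m cars.
    human = lane_throughput(30, 5, headway_s=1.5)  # ~1.5 s human reaction
    robot = lane_throughput(30, 5, headway_s=0.3)  # hypothetical tight gap
    print(round(human), round(robot))  # roughly 2160 vs 7714 vehicles/hour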
Like I said, you'd have to do the math. It's very possible that's true. But it's not a certainty. People have a limit on how much they want to drive regardless of how easy it is. After all, the store itself will still suck when everyone decides to go at 5.
I'm in NYC, I could hop on the subway any time I want without any hassle at all, but it's not like I do just because the option's there. Transportation is a means much more than an end for most people.
Uh, maybe you'd have to look at the history of urban planning for the past 50 years? In densely populated urban areas, just about every time people add more roads, the capacity gets used up by increased development in the outlying areas served by those roads.
Network bandwidth and CPU capacity are subject to the same phenomenon. Do end-user desktops really have more functionality than 10 years ago? Besides things like increased 3D graphics capability and more things to do with a web browser and an internet connection, not so much. As bandwidth has increased, the amount of data in a webpage has increased, and it becomes practical to use a web browser for more.
Basically, there can be a certain amount of pent-up "latent demand" which is there but which can't manifest until the roads are built and such travel becomes practical.
I'm granting that miles driven or number of cars on the road could go up, maybe way up. I'm just guessing, subject to actually doing the math, that increased efficiency and assumptions of reasonable limits on how much latent demand is actually out there could be sufficient to make time spent in cars, fuel consumption, emissions, or other metrics better regardless. Since we're using networking analogies, basically I'm guessing that current uncoordinated human-driven traffic is copper wire to an automated system's fiber-optic cable.
But again, it's just a guess, and I'd be the first to accept that I was wrong if the numbers didn't actually work out.
It doesn't say "significantly reduce car usage"; it says "reduce carbon emissions by fundamentally changing car use".
In the PR post, the two savings mentioned are "highway trains" and car sharing. Car sharing gets interesting if the car can drive itself to pick up the next person, to leverage all the normal downtime in a typical day where your car is just sitting there waiting for you to use it next.
Nova did an excellent (and surprisingly technical) episode [1] on the 2005 race mentioned in this piece, including Thrun's Stanford team. It was especially impressive as a triumph of Thrun's software-based approach over several of the other veteran teams that relied heavily on complex hardware (e.g., those teams used camera gimbals while Thrun's team just compensated for camera bumps using code).
For those of you on Netflix, it's available as a streaming video [2], and I highly recommend it.
Speaking of being impressed by Google, I wouldn't have been able to decipher that reference a decade or so ago, but today, I can type "i have no * and i must *" into a text box, and the answer is presented to me. The future is pretty great.
I wonder if this is legal. I mean, if one of these cars went haywire and mowed down a bunch of pedestrians before the "driver" was able to take control, I reckon Google would have some splainin to do. With autonomous vehicles coming closer and closer to a reality, it's somewhat alarming that there appears to be no legal body attempting to define some guidelines on this subject. At the rate our government works, we'll have fully functioning, marketable autonomous vehicles long before it's legal to use them.
According to the article, Google's lawyers decided it was OK, because a human was sitting in the driver's seat ready to override the system at any time. I guess it's a little bit like student drivers being supervised by experienced drivers, but who knows what a random cop would think?
Well, cruise control is legal, and since there's a trained professional behind the wheel at all times accepting liability, there's no legal issue. The real issue comes when you want to let the car drive itself alone...
I would guess that the drivers behave just like they would normally. If the car hasn't started braking by the time the driver typically would, he just does it himself.
If you're interested in more, Brad Templeton has been singing the praises of a future with robot cars, and how to get there, for many years, both in all the futurism un-conferences I've visited in the bay area, and on his very thorough website:
That's very promising. Living in a pretty safe place (Canada), driving is probably the most dangerous thing that I regularly do. I'm very happy with the safety advances of the past decade (more airbags, electronic traction control, brake force distribution, better crumple zones, more high-strength materials, etc), but it's not quite "active" enough to make me feel truly safe (especially because it doesn't address human error).
Truly looking forward to the commercialization of this technology.
Me too. I bet this could bring accident rates down substantially and maybe we will also get rid of those insidious traffic jams. But the best part: I'll have a ton of free time on my hands during transit!
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”
The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.
In the short term, I could easily see the value in a souped-up cruise control for driving on interstates. Combined with infrared/night vision, this could easily make nighttime driving safer.
Also, this has great potential (though it would probably generate a lot of controversy) as a countermeasure to drunk driving. Imagine if your car had a safety mode that could watch out for potential accidents and prevent them even with a human driver.
If these systems can be trained to learn your habits, you might be surprised to find your car driving you to that spot where you used to see the other girl...
Some cars out today have automated emergency braking systems. There are also assisted cruise control systems that use a variety of techniques for maintaining a safe distance from the car ahead of you, even in low-visibility situations, and sensor systems that will alert you when you start to drift from your lane (or someone starts to drift into yours). Not to mention traction control systems that have been in cars for years...
My car automatically throws on the brakes if it thinks I'm about to hit something (tested accidentally, once; it stopped me with about one car length to spare), and my cruise control watches for cars in front of me, so it will go the speed I set unless there's a slower car in front of me, in which case it leaves a safe distance.
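For anyone curious how that "leave a safe distance" behavior can work, it's often described as a constant time-gap policy. A minimal sketch of the idea in Python; the gains, limits, and one-dimensional physics are invented for illustration, not anyone's production controller:

    # Minimal constant time-gap cruise control sketch. The gains, limits,
    # and vehicle model are illustrative assumptions only.

    TIME_GAP = 1.8    # desired seconds of headway behind the lead car
    MIN_GAP = 5.0     # meters to keep even when stopped
    KP_GAP = 0.5      # proportional gain on gap error
    KP_SPEED = 0.8    # proportional gain on speed error

    def acceleration_command(own_speed, set_speed, gap, lead_speed):
        """Return an acceleration command in m/s^2, capped for comfort."""
        # Plain cruise control: chase the driver's set speed.
        cruise = KP_SPEED * (set_speed - own_speed)

        # Following mode: track a gap that grows with speed.
        desired_gap = MIN_GAP + TIME_GAP * own_speed
        follow = KP_GAP * (gap - desired_gap) + KP_SPEED * (lead_speed - own_speed)

        # Take the smaller command so a slow lead car always wins over
        # the set speed; clamp to comfortable acceleration/braking.
        return max(-3.0, min(1.5, min(cruise, follow)))

    # Example: set to 30 m/s, lead car 40 m ahead doing 25 m/s -> brake.
    print(acceleration_command(own_speed=30.0, set_speed=30.0,
                               gap=40.0, lead_speed=25.0))

The key trick is the min(): whichever of the two modes demands less acceleration wins, so a slower car ahead always overrides the driver's set speed.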
The Google car seems very reliant on its GPS data. In the article there is this passage: "He did [interrupt auto mode] so twice, once when a bicyclist ran a red light and again when a car in front stopped and began to back into a parking space. But the car seemed likely to have prevented an accident itself."
"Seemed likely" is not really good enough, though; you have to be confident enough never to interrupt the auto-driving at all for this to be usable.
The biggest problem with auto-driving cars that have a human supervisor is that there is nothing more dangerous than having to stay alert for long stretches when nothing happens. Over tens of thousands of miles, you will invariably be distracted at the moment something finally does happen.
Regular traffic on mapped streets is one thing, but the hundreds of thousands of emergency situations that a normal career on the road throws at you over a lifetime include quite a few where I really wonder how a 'robot' driver would cope.
I'm all for 'automated drivers', but I'd like the researchers to be sitting in the passenger seat, with their kids playing around the car (or coming out on their tricycles from between two parked cars), while it drives, to prove that it is safe. "Overrode, but the car would probably have done it" is not good enough yet; even though the achievement is very impressive, this is not as close as it seems.
Computer programs are great at dealing with everything that you can think of beforehand, it's the exceptions and the response to those exceptions that matters in an application like this.
I also wonder how they'd deal with liability issues if their driver was not actually paying attention to traffic and the software made a mistake and caused an accident.
Where they'll get their foot in the door is closed campuses. I bet the military could run this on the base and keep the liability issues in the family. Similarly, perhaps a large corporation would license it. Or even better, airports.
Seems like quite a few people are working on projects such as these. I heard some hairdo on TV the other day musing that they will reach general availability "as early as 2022". How do they even come up with those numbers?
>Rather than cameras and lasers providing situational awareness and plotting course, the TTS uses high-resolution comparative GPS to follow a pre-plotted course at an accuracy up to 1 cm of deviation.
Don't think I like the sound of that...
>The only trouble with this set up is the car will be incapable of reacting to unexpected obstacles, say a boulder rolling onto the road or a spectator jumping out to get a picture. Careful spectators.
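For contrast with the sensor-heavy approach, following a pre-plotted course really can be almost this simple, which is exactly why it can't react to that boulder. A toy Python sketch, with waypoints, gain, and geometry all invented for illustration:

    import math

    # Toy pre-plotted course follower: steer proportionally to how far
    # the GPS fix says we are from the recorded line. Note that nothing
    # in this loop perceives obstacles at all.

    waypoints = [(0.0, 0.0), (50.0, 0.0), (100.0, 20.0)]  # pre-plotted (x, y), meters

    def cross_track_error(pos, a, b):
        """Signed distance from pos to the line through a and b (left positive)."""
        ax, ay = b[0] - a[0], b[1] - a[1]
        px, py = pos[0] - a[0], pos[1] - a[1]
        return (ax * py - ay * px) / math.hypot(ax, ay)

    def steering_angle(pos, a, b, gain=0.05):
        # Proportional steering back toward the line, clamped to +/- 30 degrees.
        angle = -gain * cross_track_error(pos, a, b)
        return max(-math.radians(30.0), min(math.radians(30.0), angle))

    # One meter left of the first segment -> a small correction to the right.
    print(math.degrees(steering_angle((25.0, 1.0), waypoints[0], waypoints[1])))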
I'd expect a situation to arise where cars are allowed to go into automatic driving mode on motorways/freeways but not in cities. Gradually more and more roads would become available for automatic driving as more and more cars get the automatic driving capability. I don't think people will ever be ready for an instant complete switch over. I'd expect it to be a gradual switchover which takes years.
I think we all see where Google is headed with this: free self-driving cars in return for advertising, like http://www.kahdo.com.au/kahdo.htm
Driver, there's items you like on sale at that store. Driver, I think we should stop and check it out. Driver, we are stopping.
Dear, I don't feel safe having Lisa drive with that new Mark boy. You can just tell he dials aggressive.
But seriously, from a marketing perspective, I think the low-hanging fruit here is highway driving. It's a small, logical and therefore adoptable step up from cruise control, just with a little auto-steering (they already have auto-braking IIRC), and perhaps some networking for predicting lights, congestion and co-ordinating with other (online) cars.
I had no idea the technology was this advanced. Last I heard was the DARPA challenge where they were still struggling to navigate traffic-free dirt roads. Google blows my mind once again.
It's gone way beyond that. The Stanford team is now trying to race up Pikes Peak in an autonomous Audi, drifting around corners at full speed. They are already doing trials at 130 mph, drifting, on the Nevada salt flats.
DARPA had two follow-on challenges: a second Grand Challenge in 2005, which five vehicles successfully completed, and the 2007 Urban Challenge, in which vehicles had to autonomously navigate a closed-off urban area, obey traffic rules, merge into traffic, and avoid obstacles.
So much of the groundwork for what Google is doing today was already in place three years ago thanks to DARPA. Google's contribution, as far as I can tell, has been to pour more funding into the project and contribute its extensive mapping data and its phenomenal capabilities in collecting and crunching huge datasets.
When fully autonomous road transport does finally become a reality, it'll owe as much for its inception to DARPA, with its innovative X-Prize-style Grand Challenges, as it does to Google. And naturally to all the competing teams who over the course of three DARPA challenges drastically advanced the state of the art and overturned previous assumptions about what was believed possible.
This is a fascinating example of academic research, innovative government funding and long-sighted private funding combining to create something really cool and useful. I hope this sort of thing will continue.
I'm surprised nobody has mentioned one of the biggest issues with general acceptance of this system:
Police will be able to force anyone to pull over anywhere, at any time, and for any or no reason.
They can already demand that now, but people still decide to pull over; they are not forced to.
And what if police forces compile a database showing the history of everyone's movements? That database could potentially be leaked.
I think the privacy implications of this should get at least some thought.
I don't know how many episodes of World's Wildest Police Videos you've seen, but police can already force you to pull over anywhere, at any time, and for any or no reason. Spike strips, PIT maneuvers, etc. Letting them pull you over with a remote command would be a huge improvement--high speed chases are unsafe for all involved and (due to frustration and adrenalin) increase the risk of police brutality.
Of course, with computer-controlled cars, you can eliminate traffic infractions and hence most, if not all, need for police to pull you over anyway.
I am very happy to be an engineer. I would be very uncomfortable turning the world over to robots if I were in some other line of work, but as an engineer I will hopefully be one of the people MAKING those robots, which is the only end of the stick I want to be on when robots reach the critical point of taking care of us.
It will take a decrease in sensor costs for this to become commercially viable. The "device" on top of the car is a Velodyne LIDAR system, which if I recall correctly, costs around $75k. Using a sensor that costs more than the vehicle it controls makes selling this difficult, to say the least.
Aren't car companies essentially all about wringing huge efficiencies out of mass production? Compare an automotive product to something requiring a similar manufacturing technique with more niche demand and there's usually a huge price difference.
> Can it tell when a head gasket blows and the engine is overheating?
Sorry, how do YOU know when your head gasket has blown and the car is overheating? What's that, your first indicator is the electronic temperature sensor on your dashboard!? My, there certainly is no way a computer could detect that.
If you are about to say, you detect it through loss of power, steam from the engine bay, coolant in the oil... I got news for you, you just detected it slower than a computer would have. A properly designed automatic driving system would monitor the vitals of the engine, and pull over in the event something goes critical, much the same way your PC shuts off when the CPU hits some predetermined temperature. Even better, if all cars were automatically driven, it would communicate this abnormality with the other cars on the road. It would be able to determine that this is unusual (it's not just a hot day), and it would also be able to co-operate with the other cars on the road to pull over safely as fast as possible.
The things you list are not "this is why automatic driving is impossible". They are just "here are special cases we have to make sure to catch". In the cases engineers miss, good feedback control systems can often cope with the unexpected event, and when they can't, there's always manual control. Hell, let's be honest here: most drivers would not detect a blown head gasket until the car simply stopped functioning, and the engine was completely destroyed anyway. There are tons of cases where humans perform terribly, including some on your list. Accidents are caused all the time by people braking for squirrels when they shouldn't.
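To make the vitals-watchdog idea concrete, here is roughly the shape such a monitor could take. The sensor names, thresholds, and pull_over() hook are all invented placeholders; a real system would read these values off the engine's CAN bus:

    import time

    # Sketch of an engine-vitals watchdog. All sensor names and limits
    # below are illustrative assumptions, not real specifications.

    LIMITS = {
        "coolant_temp_c": 115.0,    # assumed red-line coolant temperature
        "oil_pressure_kpa": 100.0,  # assumed minimum safe oil pressure
    }

    def read_sensors():
        """Placeholder: a real implementation would query the CAN bus."""
        return {"coolant_temp_c": 92.0, "oil_pressure_kpa": 250.0}

    def vitals_critical(readings):
        return (readings["coolant_temp_c"] > LIMITS["coolant_temp_c"]
                or readings["oil_pressure_kpa"] < LIMITS["oil_pressure_kpa"])

    def monitor(pull_over, poll_seconds=1.0):
        """Poll the vitals; hand off to the pull-over routine if they go critical."""
        while True:
            if vitals_critical(read_sensors()):
                pull_over()  # hypothetical hook into the driving system
                return
            time.sleep(poll_seconds)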
> The things you list are not "this is why automatic driving is impossible". They are just "here are special cases we have to make sure to catch".
You are exactly correct; this is actually all I meant to express in my post. I suppose I should not be surprised that my comment was taken to be espousing one of the prevailing extremes! Sorry about that--I should have been a little less terse.
> most drivers would not detect a blown head gasket until the car simply stopped functioning, and the engine was completely destroyed anyway.
Ironically, this exact thing happened to me at the end of last year. That's actually why I included it in my list... not because I think it's un-doable, but because I really really hope it's an edge case they cover! :-)
This will never take off, because Google or the carmakers cannot assume more liability for accidents than hundreds of millions of individual drivers currently do. Technically cool; from a practical legal perspective, a non-starter.
Actually, most of the time we hear about their failures (Wave, Buzz, etc.), but this is to be expected if they are as fearful of opportunity costs as they're reputed to be.
I really hope that cars can become safely automated in my life time but I'm worried that people will start "jailbreaking" their cars so they can exceed the speed limit.
I expect that once these cars become common enough for that to be a realistic problem, there will be some kind of certification devised and cars running uncertified software will be prohibited from public roads.
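One plausible low-tech shape for that certification check: hash the installed driving software and compare it against a regulator-published allow-list. Purely a sketch; the allow-list, digest, and file layout here are my own assumptions:

    import hashlib

    # Sketch of a software-certification check: hash the installed
    # driving software and compare it against an allow-list published by
    # a (hypothetical) regulator. The digest below is a made-up example.

    CERTIFIED_SHA256 = {
        "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
    }

    def firmware_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_certified(path):
        return firmware_digest(path) in CERTIFIED_SHA256

Someone determined enough to jailbreak the car could of course patch out the checker too, so enforcement would more likely live in inspections or registration than in the vehicle itself.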
I've pretty much assumed this to be a given at some point in the next 75 years, but what's most interesting to me here is that rapid-iterating, money-is-no-object Google is doing this instead of a slow-moving auto giant or even an unpredictably-funded university program.
Could Google shorten the timescale for this massive shift to just a decade or two instead of most of a century?
[1] http://www.dailycamera.com/boulder-county-news/ci_14859190