A First Look at How Google's Self-Driving Car Handles City Streets (theatlanticcities.com)
106 points by jbredeche on April 28, 2014 | hide | past | favorite | 86 comments

The amount of sensor data that Google could pick up from a fleet of self-driving cars is staggering.

If Google wanted, they could use these self-driving cars to:

1. Convert Streetview to show same-day photos.

2. Bring in Traffic data that is WAY more precise than phone GPS pings.

3. Scan for and report open parking spots

4. Monitor for design-based congestion problems (e.g. no-bike-lane street with lots of bike traffic, areas where speeding is norm)

5. Track location of food trucks in real time.

6. Monitor foot traffic by street address.

Throw in some license plate recognition and they could track individual car locations with a pretty wide breadth.

Or track individuals with facial recognition; my understanding is that Facebook's facial recognition algorithm is very accurate, not sure where Google's tech is in this regard.

Eric Schmidt said a while back that they have the technology, and it's probably pretty accurate, but they want to avoid building a facial recognition database.


I've heard from people working on Facebook's stuff that it's hilariously inaccurate if it has to choose between more than about 20-30 people. Most of the work is spent on figuring out which of your friends is likely in the photo.

If you wanted to solve the "identify person on street" problem, you'd probably have to augment it with things like scans of NFC enabled credit cards and phone MAC addresses to know who is in the area -- not an entirely impossible set of sensors to put in a self-driving-car.

Isn't Picasa owned by Google? Their facial recognition is pretty accurate too, not noticeably different from Facebook's imho.

OT here, but I've used Picasa's facial recognition extensively on a very large library of photos (about 50,000). Anecdotally, I would say it started out very good, but, strangely, it got much worse over time.

Part or all of this effect may have been due to UI bugs or deficiencies that didn't clearly show what it was really trying to tell me (does it really think this is a new person, or separate for some other reason?), or didn't allow for subtle variations on what I was trying to tell it (such as "no, but good guess", or "this looks nothing like the person, but actually is")[1].

I can only guess why this is, but at first it seemed to be quite good at finding new faces that looked very similar to the ones it had found so far. Over time, it's as if the wide variety of confirmed positives reduced the confidence in finding any new faces at all.

[1]: It was somewhat confusing whether I should select and move a bunch of wrong faces to the right person, or just say no and let it try again. It might have also helped if I'd been able to say "yes these are all the same person, but I don't want to name them in my database".

Imagine googling a license plate and finding 1) where it was last seen by a google car (perhaps just a few minutes ago), and 2) a picture of the car and its occupants.

Of course, no reason to stop there, might as well throw in some facial recognition around all those folks you're passing on the sidewalks.

(Don't get me wrong. I'm both excited and optimistic about self-driving cars. The privacy concerns and having Google collect yet more data about the universe? Not so much.)

"The privacy concerns and having Google collect yet more data about the universe? Not so much."

Why? As far as I'm aware, they haven't been snooping into people's private data. Nor have they been explicitly selling it to any third party. And they haven't been abusing the data we have given them.

Now, don't get me wrong, they have most certainly been monetizing that data. But that's a different story.

Are you simply just scared of what Google will store/infer about you?

You are responding to a charge I never made. I said I had privacy concerns, not that they were snooping into data, selling it to third parties, or abusing the data we gave them (not that we would all ever agree on what "abuse" meant, or ever know what might happen to all of our data in the foreseeable future).

No, you misunderstand what I was saying/asking. I asked "Why?" and then I proceeded to enumerate all the reasons I could think of that would lead a reasonable person to have "privacy concerns" about Google. And I also said that they did none of those.

Agreed. Abuse wouldn't be even very feasible to define, let alone get the majority of people to agree on.

Every repo company has license plate readers mounted on all of its cars. They pay for themselves after the first hit.

https://www.aclu.org/alpr Police at least more or less have some form of retention policy; repo companies will keep that data forever.

How about scoring people's driving ability based on their cars' actions, recording license plate numbers, and reporting it to insurance companies? As someone who is a good-but-fast driver, I am not sure how I feel about this.

> As someone who is a good-but-fast driver, I am not sure how I feel about this

Every fast driver thinks this about themselves.

Slow drivers typically also think they're safer, because they equate speed to safety.

If all other circumstances are equal, this is true from a physical standpoint, as E = 1/2 m v^2, at least as long as no high-speed driver crashes into them.
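The quadratic dependence on speed is easy to check numerically. A small sketch (the mass and speeds here are arbitrary illustrative values):

```python
def kinetic_energy(mass_kg, speed_ms):
    """E = 1/2 * m * v^2 -- energy grows with the square of speed."""
    return 0.5 * mass_kg * speed_ms ** 2

# A ~1500 kg car at 30 m/s carries four times the energy it does at 15 m/s,
# even though it's only going twice as fast.
e_slow = kinetic_energy(1500, 15)
e_fast = kinetic_energy(1500, 30)
print(e_fast / e_slow)  # 4.0
```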

It's relative velocity that counts, and in most situations it's the other cars that are more likely to cut in front of you or rear-end you than the tree on the side of the road. People who think slow (relative to the ground) is safer are oversimplifying.

The problem is that we humans throw a wrench into the mix. Slow drivers even cause irrationality in other drivers; I would like to consider myself competent, but I'm willing to admit that a bumbling old fart in his Mercedes can get on my nerves.

I realize that, "if all other circumstances are equal," was specifically mentioned.

90% of drivers think they are above average.

Assuming a non-symmetric distribution of driving skill, 90% of drivers may in fact be above average.
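That claim is easy to demonstrate with a toy skewed distribution (the skill scores below are made up purely for illustration): one terrible driver drags the mean down, so nine out of ten drivers really are above average.

```python
# Hypothetical skill scores: one very bad driver, nine decent ones.
skills = [1.0] + [10.0] * 9

mean = sum(skills) / len(skills)       # 9.1 -- pulled down by the outlier
above = sum(s > mean for s in skills)  # 9 of the 10 drivers exceed the mean
print(above / len(skills))             # 0.9
```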

If it reduces insurance rates and accident rates, is that good?

If such scoring technology is mandatory at the behest of insurance agencies, is that bad?

If 'naked' cars -- cars that lack the scoring technology -- are prohibited on certain roads at certain times, is that good?

Self-driving cars will be the gold standard that humans are measured against; how we apply that standard is something else entirely.

#5 is probably more easily done by selling it as a service to the food trucks themselves, who then self-report.

Twitter serves this purpose currently (not terribly well, but with food trucks trying to be hip, that's almost the point).

This would be a great ecological experiment.

The last few struck me a few years ago when this was first announced. The first company that rolls these out will have so much power over commerce in general. I can imagine Google offering a service to help optimize those kinds of variables for businesses and consumers alike.

> 2. Bring in Traffic data that is WAY more precise than phone GPS pings.

They already have that with Waze.

On the subject of city situations that a self-driving car might have trouble with, I'll relate a story I heard years ago when living in the Boston area:

An out-of-state driver pulled up to a Y intersection -- for concreteness, let's imagine this as an upside-down Y, with our driver arriving on the right leg. There was continuous traffic flowing up the left leg and out the top, oblivious to the stop sign on their side of the intersection. (On top of that, there was a police car parked on the side of the road nearby, oblivious to their obliviousness. It was clear that the social contract was that that particular stop sign was inoperative when traffic was heavy.)

So the driver turned to his passenger, a local, and asked "what do I do? how do I get in?" The answer came back: "You have to convince them you're serious."

That's the problem with the 99% human / 1% driverless scenario; the inverse is equally troubling to me.

Driving currently involves a range of aggressiveness strategies; it may have hit a certain equilibrium mix of insane and polite drivers.

If 99% of the cars dodge whatever you do, then you're messing with the metagame. Suddenly driving like a maniac, forcing all the polite computers out of your way gets you everywhere faster.

If driverless systems become common, they'll almost have to be required.

As long as my self-driving car is allowed to report you to the Highway Patrol, I can't see this becoming a problem. It's just another one of those issues that's a lot more easily solved than what's been done to get driverless cars to their current state of capability.

I had similar experiences with merges around Boston: the painted lane markings, if they were even visible, seemed to be treated as mere suggestions and people just squeezed in semi-randomly. Of course traffic barely moved, so it wasn't really dangerous, but I can imagine an automated, defensive-driving robot just waiting until the end of days.

This is an excellent point. My guess is that computer drivers will just have to be polite and will be taken advantage of.

But maybe they could offer some sort of incentive to other drivers on the road. Imagine getting a check in the mail for allowing a Google car to merge into your lane. Or on the flip side, a ticket for cutting one off?

I learned the same lesson crossing streets (as a pedestrian) in Rome by watching the locals. Make eye contact, go, and be consistent. The drivers are so alert (because you have to be), that they will stop. Once I learned how to walk across the street there, I had everything I needed to know in order to attempt driving.

The only way it could probably handle other drivers breaking the law in that type of scenario is for it to be programmed to break the law in a similar manner, or to do nothing.

Which oddly enough, is often the same choices we are given as human drivers.

All the more reason to move everyone to driver-less cars that respect the law and the space of other vehicles on the road.

While the technology they have built is very impressive, I can't help but think they are doing the wrong thing, in a way. I think the huge wins from automated cars will be in interstate cargo transport: semis that drive themselves from city to city on the freeways and are dropped off and picked up by human drivers over the last mile. City driving is so chaotic and unpredictable that I'm not sure there's a win there.

For human transport, including in urban areas, I think the target should be aerial: automated electric air taxis that recharge themselves as needed. Automating flight is a whole lot easier. No pedestrians, no bikes, no couches or trash cans in the road.

Dolgov logs the road work incident in the computer. He explains that feedback from the driving teams is critical to the car's development. "Every disengage has a severity associated with it," he says. "That was not the end of the world. We would have gotten through the cones. But it was a problem. Once we go back we'll pull the disk out of the car. We'll import the log from this run. This will get flagged to developers. It will go into our database of scenarios and test cases we track. We'll have more information about this on the desktops, but from what I saw on the screen it looks like we detected [the cones] correctly, but for some reason the planner was conservative and decided not to change lanes. We'll create a scenario that says, here, the right thing would have been to change lanes, and the next versions will have it addressed."

Interesting that this is done so much in a scenario-by-scenario way. I would have thought something like this would be covered by a general rule, such as "choose an unobstructed path if one exists, otherwise stop".

I suppose such general rules can conflict sometimes, and that's why you need the database of scenarios; but still, this case surprises me, since there was no reason the car couldn't simply change lanes.

And I would think you would need the general rules because there will always be scenarios you couldn't have anticipated. Maybe the answer is that as long as you have the alert test driver, it's better for the car not to try to apply general rules; but once the car is truly autonomous with no driver, it shouldn't be quite so conservative, particularly in a situation like this where a danger-free solution existed.

It sounds like they're just adding unit test cases--the new scenario is the test and what actually changes is separate (be it a new algorithm, a special case, changes to existing algorithms, etc).

This is exactly how it works - testing reveals corner cases and you address them.
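The workflow Dolgov describes maps naturally onto scenario-based regression testing: each logged disengage becomes a fixture that future planner versions must pass. A toy sketch of that idea (the planner logic and all names here are hypothetical, not Google's actual code):

```python
def plan(lanes_blocked, current_lane, lane_count):
    """Toy planner: keep the current lane if clear, else change to the
    first unobstructed lane, else stop."""
    if current_lane not in lanes_blocked:
        return ("keep", current_lane)
    for lane in range(lane_count):
        if lane not in lanes_blocked:
            return ("change_to", lane)
    return ("stop", current_lane)

def test_cone_scenario():
    # Logged case: cones block lane 0, lane 1 is free.
    # The right behavior is a lane change, not a conservative stop.
    assert plan(lanes_blocked={0}, current_lane=0, lane_count=2) == ("change_to", 1)

def test_fully_blocked():
    # No unobstructed path exists -> stopping is correct here.
    assert plan(lanes_blocked={0, 1}, current_lane=0, lane_count=2) == ("stop", 0)

test_cone_scenario()
test_fully_blocked()
```

Once a scenario like this is in the database, any future change that regresses to the over-conservative behavior fails the suite immediately.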

The video of the LIDAR data and the software's interpretation of the environment is fantastic. The "augmented reality" of the long barrier in front of potential collisions, and the green/red road-ahead indicator makes it easy to see what the software is seeing and thinking.

I can't imagine not wanting to work on this. It must be really complicated at Google -- I'd take a 50-75% pay cut to work on this project at Google, rather than ads optimization. It must be an interesting organizational challenge figuring out how people get assigned to it; sort of like PARC vs. optimizing photocopier software at Xerox back in the day, except back then they kept PARC pretty much secret from engineering (from what I've read).

"It's those little tweaks that bridge the gap between a jerky robotic ride and an amazingly smooth one."

Properly taught robots know about jerk, snap, crackle and pop (the third, fourth, fifth and sixth derivatives of the position vector with respect to time, https://en.wikipedia.org/wiki/Jounce ). "Amazingly smooth" is probably the minimum jerk trajectory.
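For reference, the classic minimum-jerk trajectory between two rest states is the quintic x(tau) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5). A quick sketch confirming the property that makes the ride feel smooth: velocity (and acceleration) vanish at both endpoints.

```python
def min_jerk(x0, xf, t, T):
    """Minimum-jerk position profile from rest at x0 to rest at xf over duration T."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def velocity(x0, xf, t, T, h=1e-6):
    # Central-difference numerical derivative of the position profile.
    return (min_jerk(x0, xf, t + h, T) - min_jerk(x0, xf, t - h, T)) / (2 * h)

# Endpoints are reached exactly...
print(min_jerk(0, 100, 0, 10), min_jerk(0, 100, 10, 10))  # 0.0 100.0
# ...and velocity is ~0 at both ends: no lurch at start or stop.
print(round(velocity(0, 100, 0, 10), 3), round(velocity(0, 100, 10, 10), 3))
```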

From the article: "The first rule of riding in Google's self-driving car, says Dmitri Dolgov, is not to compliment Google's self-driving car. (...) I have just announced that so far the trip has been "amazingly smooth." "The car knows," says Dolgov."

I'd say that, considering the remarkably complex work, compliments are absolutely justified. Why shouldn't we compliment a creator for a job well done anymore? What's wrong with these people?

It's just a superstition. I give demos all the time, and I hate it when I do a test run and people tell me how awesome it's working. Tell me afterwards, you'll jinx it if you tell me before. It doesn't actually work that way, but hot damn it feels like it does.

And the demos I give don't put anyone's lives at risk!

The point is not to jinx it. It's not about not giving credit where credit is due.

I read that to mean that the car knows and handles compliments poorly, i.e. an obvious joke.

It's pretty clear that autonomous vehicles are going to be standard, sooner or later. The technology will be mandatory for any new vehicle allowed on public roads.

I'm just curious what's Google plan for it. Are they solving the problems, getting patents and licensing to car manufacturers? Selling devices, with data collection built-in? Something else?

Regardless of Google's plans for the cars themselves, if people are freed from having to watch the road, they will probably use most of their newfound free time to stare at a screen. That benefits Google.

> autonomous vehicles are going to be standard, sooner or later.

It's hard to disagree, but I think it will be closer to later than sooner. At least decades, I'd guess.

Do the radar systems on these vehicles have any effects on health?


> The 76-77 GHz range is assigned by the Federal Communications Commission (FCC) for collision avoidance radar systems.

Some info on what that means: http://www.rfcom.ca/faq/answers.shtml

I want to know how it handles rain.

I think a self-driving car will do a much better job operating in the rain than a human being. Rain makes it difficult to see, and wipers are sometimes worn out and make visibility even worse.

Slick driving conditions are also dangerous for human drivers, but these days there are electronic stability systems in place to make it less dangerous to drive in the rain. If you have a car that can turn all traction controls off, you can easily see the difference between assisted and non-assisted driving in the rain.

I think the bigger problem will be heavy snow. What happens when you park your car at the mall and come back to find the sensors covered in 4 inches of snow? Very curious to see how they deal with that issue.

> What happens when you park your car at the mall and come back to find the sensors covered in 4 inches of snow? Very curious to see how they deal with that issue.

You turn off the self driving and manually drive home? The car doesn't have to be perfect to have value. It's perfectly ok if the first version doesn't handle rain, or snow, or fog.

Not yet. That's why Google's been testing in Nevada.

Wait, did you actually want to know how it handles rain, or were you asking a rhetorical question implying that it cannot handle rain? If the latter, it would've been friendlier to say "it doesn't handle rain well, here's why: [...]"

I think he probably started searching for answers after he posed his original question, and found some. Not that he was asking a question he knew the answer to.

Since the article didn't touch on environmental factors I suspect "it doesn't" is the answer to handling rain/snow. However, since one of the cameras is a laser that builds a 3d model of the world, I suspect that once the algorithms are developed it will have the potential to surpass human performance when visibility is poor (heavy rain/fog).

The LIDAR sensors should be able to handle most rain conditions, but it breaks down when you get into heavy rain. I'm not precisely sure where the line is drawn.

Snow however is a different problem. I think even light snow is impossible for LIDAR to penetrate. Not to mention the issues that arise when tires slip on ice, snow is covering signage, etc, etc.

Do you have any sources for any of this information?

Because they seem entirely made up, and the "issues when tires slip on ice" have been mitigated by computerized ABS systems for the past 20 years.

Well, I have some sources for that information:


It's a long article, but search for "rain." Here's the most important quote for our purposes:

    Left to its own devices, Thrun says, it could go only
    about fifty thousand miles on freeways without a major
    mistake. Google calls this the dog-food stage: not quite 
    fit for human consumption. “The risk is too high,” Thrun 
    says. “You would never accept it.” The car has trouble in 
    the rain, for instance, when its lasers bounce off shiny 
    surfaces. (The first drops call forth a small icon of a 
    cloud onscreen and a voice warning that auto-drive will 
    soon disengage.) It can’t tell wet concrete from dry or 
    fresh asphalt from firm. It can’t hear a traffic cop’s 
    whistle or follow hand signals.

ABS allows you to take maximum advantage of the traction you do have, but it doesn't magically improve your traction. The problem the GP is referencing is that the driver (be it human or computer) operates the vehicle using some sort of model for the maximum available traction. (Some) Humans are able to notice when it is snowing, re-tune their model and drive at an appropriate speed. It doesn't seem impossible that some heuristic could be invented to do the same for a computer driver, but I don't think anyone has demonstrated that heuristic yet.

Funny thing about ABS systems is that the pump actually forces the brake pads open. The designer's thinking is that a turning wheel is able to steer, while a vehicle with sliding wheels is directed pretty much by momentum alone. This results in longer stopping distances.

So far as ice goes: its coefficient of friction in the real world is not a constant; it varies based on local conditions. So you can't predict your ability to stop and steer on it; you can only react. Your best defense is to install ice-rated tires in early winter to increase your traction. Google will likely have to do some testing in Wisconsin to see how the software reacts to icy conditions, and whether it can automatically recognize ice/slush and adjust stopping/acceleration rates appropriately.

Well, in principle ABS shouldn't necessarily result in longer stopping distance: sliding friction is lower than static friction, so if you can control the braking precisely enough, not locking the wheels should result in shorter stopping distance along with not losing control. I'm not saying current systems are that good, though.
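The physics behind that point: under constant deceleration, stopping distance is d = v^2 / (2 * mu * g), and the kinetic (sliding) friction coefficient is lower than the static one, so a locked-wheel skid stops later than braking held at the static limit. A rough sketch with illustrative coefficients (real values vary with tires and surface):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, mu):
    """d = v^2 / (2 * mu * g) for constant-deceleration braking."""
    return speed_ms ** 2 / (2 * mu * G)

# Illustrative dry-asphalt coefficients, not measured values.
mu_static, mu_kinetic = 0.9, 0.7

d_threshold = stopping_distance(27.0, mu_static)  # braking at the static limit
d_locked = stopping_distance(27.0, mu_kinetic)    # wheels locked, tires sliding
print(round(d_threshold, 1), round(d_locked, 1))  # locked wheels stop later
```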

On dry surfaces, a professional driver is able to do threshold braking, which in non-ABS cars maximizes stopping power by taking braking just up to the limit before the tires start sliding. On cars with ABS, the intent is to minimize the rate at which the ABS pump activates in order to maximize stopping power (pulse ... pulse ... pulse vs. pulse.pulse.pulse).

ABS has an advantage over regular systems in that the computer is able to control braking effort on a per-wheel basis. Something a human with only one pedal to push can't do. IIRC, the best ABS systems are able to cycle at about 10Hz.

Something I vaguely recall reading about is that in a panic stop on ice, the heat from the sliding friction will melt a very small bit of ice under the contact patch, turning it into a hydroplaning situation. I need to see if I can find that reference again...

"ABS generally offers improved vehicle control and decreases stopping distances on dry and slippery surfaces for many drivers; however, on loose surfaces like gravel or snow-covered pavement, ABS can significantly increase braking distance, although still improving vehicle control." (with citations) http://en.wikipedia.org/wiki/Anti-lock_braking_system

I've done a lot of work with perception systems and sensors. GP is right on the money with the problems. Rain is even difficult.

Snow is the worst though - it is both wet and super reflective. Super reflective dazzles your laser/obscures cameras. Wet screws up your radar.

Maybe it's just me, but the idea that Google is testing these cars on regular streets is pretty amazing. What happens if someone dies while they're running their tests? I'm not trying to troll, but it seems to me that the liability concerns here would be huge.

I believe the tests are closely supervised, with a fully attentive person sitting in the front seat ready to take over. If anyone died or there was some accident, that person would be liable. (Although the public outcry/fear mongering over a death in that situation would likely be a huge setback. I'm sure they're being extraordinarily careful.)

> The Google car can now recognize temporary stop signs, making it less reliant on pre-programmed maps

1. Obtain handheld stop sign

2. Troll self-driving cars

My point is, a human would know when a kid is just messing with you by holding up a stop sign; would a self-driving car slam on its brakes?

Picture taken of offender. Authorities called, charges filed: impeding traffic.

The technology inside the self-driving car is amazing. It almost makes me think that we will have AI by the time we have a self-driving car.

Interesting article. Anyone catch this?

> Larry Burns ... says taxi-like fleets of shared autonomous vehicles can become viable business models if they can capture just 10 percent of all city trips.

"Just" 10%? Sounds like typical startup BS numbers.

The defensive style driving of the Google car won't work in San Francisco where you literally have to break the law in order to turn left (by stopping in the intersection and often finishing the turn on a red).

That's actually legal. Provided you entered the intersection legally, you have the right-of-way over cross traffic (but not over oncoming traffic, obviously) to exit it.

I don't have the California Vehicle Code in front of me, but I'm pretty sure about this because the situation arose during my road test to get my first license (yes, I still remember this, even though it was almost 40 years ago!). I had entered the intersection to turn left, and when the light turned yellow, the tester lady said "clear the intersection" or words to that effect. (I passed.)

Anyway it has to be that way -- you can't have cars stuck in the middle of the intersection while cross traffic tries to go around them!

I have actually gotten a ticket for this near Mendocino. From the DMV handbook:

> If you are turning left, make the turn only if you have enough space to complete the turn before creating a hazard for any oncoming vehicle, bicyclist, or pedestrian. Do not enter the intersection if you cannot get completely across before the light turns red. If you block the intersection, you can be cited.

I would think this rule (law?) applies more to prevent gridlock... If you don't have enough room to actually be on the other side of the intersection after you've finished the turn, you shouldn't start it.

What happened, exactly? I'm guessing you entered the intersection when the lane you wanted to turn into was backed up, so you would not have been able to clear the intersection even in the absence of oncoming traffic. Is that right?

I think your reading is correct.

I haven't lived in a lot of states but they've always said "one car can enter the intersection and wait to turn left." Implicit is the assumption that they have some place to be when the opposing traffic clears.

Actually it was non-stop oncoming traffic. The last person entered the intersection on a deep yellow. By the time they cleared my car, I was in a red in the middle of the intersection (as were they, but I got the ticket).

Assuming you were the lead car in the left-turn lane, I think the officer erred in citing you. The law can't require you to predict when other people are going to act illegally so you can plan accordingly.

It also doesn't make sense that not even one car per cycle is allowed to turn left across such a traffic flow. You could have sat there for hours following that advice.

That isn't against the law, that's the DMV-handbook-prescribed way to turn left.

This has changed somewhat recently.

A few years ago the Colorado (yes, I know, not California, but bear with me) driver's education handbook said that this was legal and advised... Now it too says the opposite. Too many people I guess couldn't handle the left turn on yellow.

I suspect if there are large numbers of these vehicles, San Francisco (or more likely California) will have to change the rules its lights follow instead of the other way around.

It's hard to argue with a perfectly reasonable, perfectly safe driver.

What is the alternative here? Not even enter the intersection until you believe you can complete the turn before the yellow light?

You sound like the kind of person who would watch an entire light sequence and not turn. Ugh.

It already does this just fine.

How is this a realistic future with an ambulance chaser around every corner? What company is going to insure the vehicle owner? It's gotten so bad that the killers are suing the victims. http://www.usatoday.com/story/news/nation/2014/04/26/newser-...
