If Google wanted, they could use these Self-Driving cars to:
1. Convert Street View to show same-day photos.
2. Bring in traffic data that is WAY more precise than phone GPS pings.
3. Scan for and report open parking spots.
4. Monitor for design-based congestion problems (e.g. a no-bike-lane street with lots of bike traffic, areas where speeding is the norm).
5. Track the location of food trucks in real time.
6. Monitor foot traffic by street address.
If you wanted to solve the "identify person on street" problem, you'd probably have to augment it with things like scans of NFC-enabled credit cards and phone MAC addresses to know who is in the area -- not an entirely impossible set of sensors to put in a self-driving car.
Part or all of this effect may have been due to UI bugs or deficiencies that didn't clearly show what it was really trying to tell me (does it really think this is a new person, or separate for some other reason?), or didn't allow for subtle variations on what I was trying to tell it (such as "no, but good guess", or "this looks nothing like the person, but actually is").
I can only guess why this is, but at first it seemed to be quite good at finding new faces that looked very similar to the ones it had found so far. Over time, it's as if the wide variety of confirmed positives reduced the confidence in finding any new faces at all.
It was somewhat confusing whether I should select and move a bunch of wrong faces to the right person, or just say no and let it try again. It might have also helped if I'd been able to say "yes, these are all the same person, but I don't want to name them in my database".
Of course, there's no reason to stop there; might as well throw in some facial recognition on all those folks you're passing on the sidewalks.
(Don't get me wrong. I'm both excited and optimistic about self-driving cars. The privacy concerns and having Google collect yet more data about the universe? Not so much.)
Why? As far as I'm aware, they haven't been snooping into people's private data. Nor have they been explicitly selling it to any third party. And they haven't been abusing the data we have given them.
Now, don't get me wrong, they have most certainly been monetizing that data. But that's a different story.
Are you simply scared of what Google will store/infer about you?
Agreed. Abuse wouldn't even be very feasible to define, let alone get the majority of people to agree on.
Police at least more or less have some form of retention policy; repo companies will keep that data forever.
Every fast driver thinks this about themselves.
I realize that, "if all other circumstances are equal," was specifically mentioned.
If such scoring technology is mandatory at the behest of insurance agencies, is that bad?
If 'naked' cars -- cars that lack the scoring technology -- are prohibited on certain roads at certain times, is that good?
Self-driving cars will be the gold standard that humans are measured against; how we apply that standard is something else entirely.
They already have that with Waze.
An out-of-state driver pulled up to a Y intersection -- for concreteness, let's imagine this as an upside-down Y, with our driver arriving on the right leg. There was continuous traffic flowing up the left leg and out the top, oblivious to the stop sign on their side of the intersection. (On top of that, there was a police car parked on the side of the road nearby, oblivious to their obliviousness. It was clear that the social contract was that that particular stop sign was inoperative when traffic was heavy.)
So the driver turned to his passenger, a local, and asked "what do I do? how do I get in?" The answer came back: "You have to convince them you're serious."
Driving currently involves a range of aggressiveness strategies; it may have hit a certain equilibrium mix of insane and polite drivers.
If 99% of the cars dodge whatever you do, then you're messing with the metagame. Suddenly, driving like a maniac and forcing all the polite computers out of your way gets you everywhere faster.
If driverless systems become common, they'll almost have to be required.
But maybe they could offer some sort of incentive to other drivers on the road. Imagine getting a check in the mail for allowing a Google car to merge into your lane. Or on the flip side, a ticket for cutting one off?
Which, oddly enough, are often the same choices we are given as human drivers.
All the more reason to move everyone to driver-less cars that respect the law and the space of other vehicles on the road.
For human transport, including in urban areas, I think the target should be aerial... automated electric air taxis that recharge themselves as needed. Automating flight is a whole lot easier. No pedestrians, no bikes, no couches or trash cans in the road.
Interesting that this is done so much in a scenario-by-scenario way. I would have thought something like this would be covered by a general rule, such as "choose an unobstructed path if one exists, otherwise stop".
I suppose such general rules can conflict sometimes, and that's why you need the database of scenarios; but still, this case surprises me, since there was no reason the car couldn't simply change lanes.
And I would think you would need the general rules because there will always be scenarios you couldn't have anticipated. Maybe the answer is that as long as you have the alert test driver, it's better for the car not to try to apply general rules; but once the car is truly autonomous with no driver, it shouldn't be quite so conservative, particularly in a situation like this where a danger-free solution existed.
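The general rule proposed above can be sketched in a few lines. This is a toy illustration of the decision logic only (the path representation and function name are mine, not anything from an actual autonomous-driving stack):

```python
def choose_action(paths):
    """Toy version of the commenter's rule: take the first
    unobstructed path if one exists, otherwise stop.

    `paths` is a list of dicts like {"name": ..., "obstructed": bool},
    ordered by preference (e.g. current lane first).
    """
    for path in paths:
        if not path["obstructed"]:
            return ("go", path["name"])
    return ("stop", None)
```

In the lane-change scenario from the comment, the current lane is obstructed but the adjacent lane is free, so the rule picks the adjacent lane rather than stopping; the hard part in practice is, of course, deciding whether a path really is unobstructed.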
Properly taught robots know about jerk, snap, crackle and pop (the third, fourth, fifth and sixth derivatives of the position vector with respect to time, https://en.wikipedia.org/wiki/Jounce ). "Amazingly smooth" is probably the minimum jerk trajectory.
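For a rest-to-rest move, the classic minimum-jerk position profile is a quintic polynomial in normalized time. A minimal sketch (the function name is mine; the polynomial is the standard form, which has zero velocity and acceleration at both endpoints):

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position profile from rest at x0 to rest at xf
    over duration T:  x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5)
    where tau = t / T. Velocity and acceleration vanish at t = 0 and t = T,
    which is what makes the motion feel smooth."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

At the midpoint the profile is exactly halfway (`min_jerk(0, 1, 1, 0.5)` is 0.5), and because the cubic-and-higher terms start the move gently, there is no step change in acceleration -- i.e., bounded jerk.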
Considering the remarkably complex work involved, I'd say compliments are absolutely justified. Why shouldn't we compliment a creator for a job well done anymore? What's wrong with these people?
And the demos I give don't put anyone's lives at risk!
I'm just curious what Google's plan for it is. Are they solving the problems, getting patents, and licensing to car manufacturers? Selling devices, with data collection built in? Something else?
It's hard to disagree, but I think it will be closer to later than sooner. At least decades, I'd guess.
> The 76-77 GHz range is assigned by the Federal Communications Commission (FCC) for collision avoidance radar systems.
Some info on what that means: the FCC sets that millimeter-wave band aside specifically for vehicular radar, so collision-avoidance sensors can operate there without interfering with (or being interfered with by) other services.
Slick driving conditions are also dangerous for human drivers, but these days there are electronic stability systems in place to make it less dangerous to drive in the rain. If you have a car that can turn all traction controls off, you can easily see the difference between assisted and unassisted driving in the rain.
I think the bigger problem will be heavy snow. What happens when you park your car at the mall and come back to find the sensors covered in 4 inches of snow? Very curious to see how they deal with that issue.
You turn off the self driving and manually drive home? The car doesn't have to be perfect to have value. It's perfectly ok if the first version doesn't handle rain, or snow, or fog.
Snow however is a different problem. I think even light snow is impossible for LIDAR to penetrate. Not to mention the issues that arise when tires slip on ice, snow is covering signage, etc, etc.
Because they seem entirely made up, and the "issues when tires slip on ice" have been mitigated by computerized ABS systems for the past 20 years.
It's a long article, but search for "rain." Here's the most important quote for our purposes:
> Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces. (The first drops call forth a small icon of a cloud onscreen and a voice warning that auto-drive will soon disengage.) It can’t tell wet concrete from dry or fresh asphalt from firm. It can’t hear a traffic cop’s whistle or follow hand signals.
So far as ice goes -- its coefficient of friction in the real world is not a constant; it varies based on local conditions. So you can't predict your ability to stop and steer on it -- you can only react. Your best defense is to install ice-rated tires in early winter to increase your traction. Google will likely have to do some testing in Wisconsin to see how the software reacts to icy conditions, and whether it can automatically recognize ice/slush and adjust stopping/acceleration rates appropriately.
ABS has an advantage over regular systems in that the computer is able to control braking effort on a per-wheel basis. Something a human with only one pedal to push can't do. IIRC, the best ABS systems are able to cycle at about 10Hz.
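The per-wheel control idea can be sketched as a toy slip controller. This is a deliberately simplified illustration, not how a production ABS module works (real systems modulate pressure continuously and estimate vehicle speed indirectly); the function names and the 0.2 target slip are my own placeholder choices:

```python
def slip_ratio(vehicle_speed, wheel_speed):
    """Longitudinal slip: 0 = wheel rolling freely, 1 = wheel fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_step(brake_demand, vehicle_speed, wheel_speeds, target_slip=0.2):
    """One control cycle of a toy ABS: release pressure on any wheel
    slipping past the target, keep applying on wheels still gripping.
    Returns one brake command per wheel -- exactly the per-wheel
    modulation a driver with a single pedal cannot perform."""
    commands = []
    for ws in wheel_speeds:
        if slip_ratio(vehicle_speed, ws) > target_slip:
            commands.append(0.0)           # near lockup: release this wheel
        else:
            commands.append(brake_demand)  # gripping: apply requested braking
    return commands
```

Run at the cycle rates mentioned above, this release/re-apply loop keeps each wheel near the slip ratio where braking force peaks, independently of the other three wheels.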
Something I vaguely recall reading about is that in a panic stop on ice, the heat from the sliding friction will melt a very small bit of ice under the contact patch, turning it into a hydroplaning situation. I need to see if I can find that reference again...
Snow is the worst, though -- it is both wet and super reflective. Super-reflective surfaces dazzle your laser and obscure your cameras; wet screws up your radar.
1. Obtain handheld stop sign
2. Troll self-driving cars
My point is, a human would know when a kid is just messing with you by holding up a stop sign; would a self-driving car slam on its brakes?
> Larry Burns ... says taxi-like fleets of shared autonomous vehicles can become viable business models if they can capture just 10 percent of all city trips.
"Just" 10%? Sounds like typical startup BS numbers.
I don't have the California Vehicle Code in front of me, but I'm pretty sure about this because the situation arose during my road test to get my first license (yes, I still remember this, even though it was almost 40 years ago!). I had entered the intersection to turn left, and when the light turned yellow, the tester lady said "clear the intersection" or words to that effect. (I passed.)
Anyway it has to be that way -- you can't have cars stuck in the middle of the intersection while cross traffic tries to go around them!
> If you are turning left, make the turn only if you have enough space to complete the turn before creating a hazard for any oncoming vehicle, bicyclist, or pedestrian. Do not enter the intersection if you cannot get completely across before the light turns red. If you block the intersection, you can be cited.
I haven't lived in a lot of states but they've always said "one car can enter the intersection and wait to turn left." Implicit is the assumption that they have some place to be when the opposing traffic clears.
It also doesn't make sense that not even one car per cycle is allowed to turn left across such a traffic flow. You could have sat there for hours following that advice.
A few years ago the Colorado (yes, I know, not California, but bear with me) driver's education handbook said that this was legal and advised... Now it too says the opposite. Too many people I guess couldn't handle the left turn on yellow.
It's hard to argue with a perfectly reasonable, perfectly safe driver.