
Inside Waymo's Secret World for Training Self-Driving Cars - jpm_sd
https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/?single_page=true
======
Animats
There's a lot of misunderstanding about self-driving. Mostly because nobody is
publishing much.

If you want to do it right, you start with geometry. The first step is
capturing range imagery and grinding it down to a 3D model of the world. This
tells you where you physically can go. That's where we were at the DARPA Grand
Challenge over a decade ago.

Then comes moving object popout. What out there isn't a stationary obstacle?
That comes from processing multiple frames and range rate data from radars.
Moving objects have to be tracked.
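
Here's popout in toy form (my own sketch; a real system also uses radar range rate and multi-frame tracking, and all names here are invented):

```python
import math

def popout_moving(prev_points, curr_points, min_motion=0.2):
    """Flag points that moved more than min_motion metres since the
    previous frame as candidate moving objects; the rest are treated
    as static geometry (part of the drivable-space model)."""
    moving, static = [], []
    for p in curr_points:
        nearest = min(prev_points, key=lambda q: math.dist(p, q))
        if math.dist(p, nearest) > min_motion:
            moving.append(p)
        else:
            static.append(p)
    return moving, static

prev = [(0.0, 5.0), (2.0, 5.0), (10.0, 3.0)]  # last frame, (x, y) metres
curr = [(0.0, 5.0), (2.0, 5.0), (10.0, 4.0)]  # third point moved 1 m
moving, static = popout_moving(prev, curr)
```

Anything that lands in the moving list gets handed to the tracker; everything else just becomes geometry you can't drive through.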

Only then does object recognition come into play. Moving objects should be
classified if possible, and fixed objects of special interest (signs,
barricades, etc.) identified. Not all objects will be identified, but that's
OK. If it can't be identified and it's moving, you have to assume the worst
case - it's vulnerable and it doesn't follow the road rules. This will force a
slowdown and careful maneuvering around the unknown object. (See Chris
Urmson's SXSW video of their vehicle encountering someone in a powered
wheelchair chasing a duck with a broom.)

Predicting the behavior of moving objects is a big part of handling heavy
traffic. That's what Google is working hard on now.

Machine learning is a part of this, but not the whole thing. You can't really
trust machine learning; it's a statistical method and sometimes it will be
totally wrong. You can trust geometry.

If you want to do it wrong, you start out with lane keeping and smart cruise
control, tack on recognition of common objects, throw in some machine learning
and hope for the best. This produced the Tesla self-crashing car, noted for
running into unusual obstacles which partially block the lane and for trusting
lane markings way too much. Look closely at the videos from Tesla and Cruise,
slowing them down to real time. They speed them up so you can't easily see how
bad the object recognition is.

~~~
dxbydt
> If you want to do it wrong, you start out with lane keeping...recognition of
> common objects...some machine learning....hope for the best

Thank you! Can I ask why self-driving car programs like the 3-semester
Udacity one deliberately focus on going about it the wrong way? Why spend one
whole semester on lane detection using basics like OpenCV, one whole semester
on object classification using CNNs with Keras, and so much time on combining
lidar & radar data using Kalman filters for moving object tracking... surely
all of these are peripheral issues and the wrong way to go about the
business. Then why? Is it all one big clueless scam?

To be fair, I am enjoying the course very much as a student. But it's becoming
clear from talking to colleagues in the autonomous car industry that what is
being taught is not the real deal. This isn't what happens in production, so
to speak.
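
(For what it's worth, the Kalman filter part at least isn't busywork; the core of the fusion idea fits in a few lines. This is my own 1-D toy sketch, not course code, and the noise numbers are made up:)

```python
def kalman_1d(readings, noise_vars, q=0.01):
    """Fuse successive noisy readings of one quantity (e.g. range to a
    tracked object). noise_vars gives each sensor's measurement-noise
    variance; q is the process noise added at each predict step."""
    x, p = readings[0], noise_vars[0]   # initialise from first reading
    for z, r in zip(readings[1:], noise_vars[1:]):
        p += q               # predict: uncertainty grows over time
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x += k * (z - x)     # update estimate toward the measurement
        p *= (1 - k)         # uncertainty shrinks after the update
    return x, p

# A precise lidar range (var 0.01) followed by a coarse radar range
# (var 1.0): the estimate stays close to the lidar reading.
est, var = kalman_1d([10.0, 10.5], [0.01, 1.0])
```

The real coursework version is multi-dimensional (position plus velocity, matrix form), but the predict/gain/update loop is the same.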

~~~
ghaff
Isn't the practical reality that any course like that will address the theory
and essentially deal with more or less toy problems? It would be hard to get
into cutting edge research in an online course.

I'm not sure learning about natural language processing or image
classification would be all that different--although those problems are
arguably better understood and more bounded.

Even a site reliability engineering course isn't really going to give you deep
insights into what, say, Google does on a day-to-day basis.

~~~
joshuamorton
>Even a site reliability engineering course isn't really going to give you
deep insights into what, say, Google does on a day-to-day basis.

I think this really gets to the core of the issue. There aren't many courses
that will accurately reflect the actual state of the art in any field.
Sometimes, graduate courses with a small scope and a relevant professor taught
in person can partially address the state of the art, but an online course
about "how to self driving car" is not that.

~~~
pas
Lectures, tech talks, blogs, random experts in relevant subreddits, but yes,
an online course is not going to give more than the fundamentals and some
advanced, but already well understood examples.

------
pgodzin
Really well-written and engaging article. It seems the main point of an
article like this from Google's perspective is gaining the public's trust. I'm
sure most people are worried about the edge-cases that are so hard to run into
in the real world, but are so important to get right. This is Google saying
"we may not see that edge-case exactly, but we know when it could happen and
simulate the situation a million times to make sure we get it right if we ever
see it again."

~~~
stevenwoo
I occasionally come across construction that is marked in an unusual way - at
65 miles an hour when the speed limit is 75. Also, when they are changing
freeways and restriping them (leaving the original old faded lanes before the
new lines go up), it's sometimes confusing to me even though traffic is
flowing at 80 miles an hour. IIRC this was happening on 880 for a while about
10 years back.

If the AI is confused by a roundabout as described I wonder how they handle
that or if this is something that only fools me because I am only using visual
sensors.

~~~
pgodzin
Aren't those examples in which it'd be fine for it to be confused? That
confusion shouldn't lead to any dangerous situations, just a different speed
than other cars.

It would also be interesting to know whether Waymo factors in other cars'
behavior. If everyone is going 80 while the speed limit is 65, will it factor
that in and go 80 or keep with the speed limit?

~~~
user5994461
>>> If everyone is going 80 while the speed limit is 65, will it factor that
in and go 80 or keep with the speed limit?

No; it's illegal to drive over the speed limit.

~~~
ghshephard
Google's driverless cars designed to exceed speed limit

See:
[http://www.bbc.com/news/technology-28851996](http://www.bbc.com/news/technology-28851996)

------
Animats
Now this is doing it right. That's how car companies do it. Big test tracks,
and years of test track time, plus simulation and test rigs.

As the article points out, Google has far more autonomous driving miles than
everybody else put together. They're on the hard cases, too; not just driving
on freeways.

~~~
hueving
Do they have anything that works in rain/snow?

~~~
cbhl
Snow is challenging because the lane markers are obscured by the snow -- human
drivers guess at where the lanes should be, or remember where the lanes were
yesterday.

Tesla being able to record sensor/map data from human drivers driving through
snow will help them mitigate this. On the other hand, electric batteries have
less range in cold weather, so I imagine that in the near term electric cars
will be less popular in snowy markets versus, say, California.

~~~
turkeywelder
While I take the point that cold weather affects range, Norway, Sweden,
Finland and many other cold climates have very high electric car usage.
That's mostly due to the subsidies, but it appears range isn't a major
concern there.

[https://cleantechnica.com/2017/06/23/tesla-electric-cars-
con...](https://cleantechnica.com/2017/06/23/tesla-electric-cars-conquered-
norway/)

------
skywhopper
I'm disappointed the article wasn't a bit more skeptical of some of the
claims. Certainly the simulation-based testing is a good thing, but stats
about how many billions of simulated miles have been driven can create a self-
reinforcing delusion if everyone involved isn't careful to remember that the
simulations can only work with well-known and expected situations. It sounds
like Waymo realizes this and is building a huge library of scenarios to
evaluate, but one million simulated miles are not worth a hundred real miles
in terms of confidence in the system, and that is not the message this article
portrays.

Overall this article does give hints though that autonomous vehicles are truly
much further away than anyone wants to admit. The story about being flummoxed
by multilane roundabouts is depressing. If they didn't know such things
existed then they were incredibly sloppy in their data gathering. If they
truly thought scaling from a simple roundabout to a multilane one would be
trivial then their staff don't have the right mindset for this problem space.

Also note that all the testing is happening in flat, desert landscapes where
there are no weather or lighting challenges. Their pictured model residential
street only has stubs of driveways. No houses, no trees. I'm sure they know
these are gaps but I worry they underestimate the challenges of adapting to
entirely different driving environments. Especially when machine learning is
involved and you're simulating 99% of your mileage...

Looking forward to checking back in 2040 though.

~~~
ghshephard
"One million simulated miles are not worth a hundred real miles in terms of
confidence in the system"

I'm wondering why you say that? Wouldn't the more relevant factor be the type
of miles, regardless of whether they are simulated or real? For example, 1
mile of simulated "Interesting Scenarios" (Duck on Street, Bicyclist going
wrong way, pedestrian running onto road) is likely worth 100,000 miles of
normal Freeway driving.

------
flamedoge
> But Peng also presented the position of the traditional automakers. He said
> that they are trying to do something fundamentally different. Instead of
> aiming for the full autonomy moon shot, they are trying to add driver-
> assistance technologies, “make a little money,” and then step forward toward
> full autonomy. It’s not fair to compare Waymo, which has the resources and
> corporate freedom to put a $70,000 laser range finder on top of a car, with
> an automaker like Chevy that might see $40,000 as its price ceiling for
> mass-market adoption.

> “GM, Ford, Toyota, and others are saying ‘Let me reduce the number of
> crashes and fatalities and increase safety for the mass market.’ Their
> target is totally different,” Peng said. “We need to think about the
> millions of vehicles, not just a few thousand.”

I wonder if an easier task to reduce fatalities is to target impaired driving
and develop a detection mechanism that tries to identify whether the driver is
impaired. It seems like an easier and more localized problem than building a
fully autonomous vehicle.

~~~
jooke
Who's going to buy a car that might tell them they can't drive?

~~~
grkvlt
Someone who saves a huge amount on their car insurance because they own such a
vehicle (and who therefore should never subject the insurance company to
claims caused by impaired driving), I assume?

------
stirlo
Looks like this is the location of the test facility they talk about...
[https://www.google.com.au/maps/@37.3705986,-120.5747932,237m...](https://www.google.com.au/maps/@37.3705986,-120.5747932,237m/data=!3m1!1e3)
Interesting choice of name for their Expressway...

EDIT: Apple maps has up to date satellite imagery
[https://maps.apple.com/?q=37.3718,-120.5749&t=k](https://maps.apple.com/?q=37.3718,-120.5749&t=k)

~~~
ipsum2
Curious - does anyone know why the Apple maps redirect to Google maps? I'm on
Chrome.

~~~
slazaro
[https://techcrunch.com/2012/09/22/mapsception/](https://techcrunch.com/2012/09/22/mapsception/)

Apparently it's been doing this for years, and in different platforms.

------
foota
I found this to be especially interesting "She spent countless hours going up
and down 101 and 280, the highways that lead between San Francisco and
Mountain View. Like the rest of the drivers, she came to develop a feel for
how the cars performed on the open road. And this came to be seen as an
important kind of knowledge within the self-driving program. They developed an
intuition about what might be hard for the cars. “Doing some testing on newer
software and having a bit of tenure on the team, I began to think about ways
that we could potentially challenge the system,” she tells me."

------
altotrees
Clearly the progress being made in the area of self-driving cars is
undeniable. Seeing articles and discussions crop up on a daily basis makes me
wonder whether we are headed for an AI Winter scenario in this area within
the next decade, or whether this is the real deal and we will see self-
driving cars at dealerships within 20 years.

This is so far from being my area of expertise. Just one observer's
thoughts/questions. Insane how far we have come so quickly, at any rate.

~~~
arcanus
I don't disagree that AI is over-hyped. However, I think the research
environment is not as likely to dry up this time.

That is because much of the machine learning research is coming from industry,
particularly the mega-corps. And for extremely important disruptive events
such as AI, they are willing to drop $$ into collecting a stable of research
scientists to not fall behind in this space. So while the AI winter was driven
by a drop in academic and government spending on research, I think our present
situation is far less likely to result in a similar drying up.

Start ups could get hit, however.

~~~
dlubarov
Agreed. As a Google engineer, I think a lot of our AI efforts are "safe" (as
in we'll keep investing in them for a long time) because they're already
providing substantial business value. For example,

\- Speech recognition and synthesis (TTS) have lots of uses in Android
(phones, Wear, Auto, etc.)

\- Object recognition is very useful for photo search

\- Face recognition is also useful for photo search (if you tag people in
Google Photos)

\- Neural machine translation provides better translation accuracy (for
languages with enough training data)

It's possible Google will cut back on some more speculative ML research
efforts, but certainly not those four, I would think.

~~~
peoplewindow
I doubt Google will cut back any time soon, but as a former Google engineer, I
find this an entertaining definition of business value ;)

How much money does Google Photos make? Last time I used it, there were no ads
in it.

Isn't speech recognition nearly at human levels of comprehension, at least for
US English and outside of highly technical jargon? Speech patterns change
slowly, so would Android continue to make money if investment in speech R&D
were cut back? My guess is yes, it'd be fine.

Does Google Translate make money in some specific, measurable way? I don't
mean "well it's neat to have translation links in web search", I mean, would
people cut back on the commercial queries that are Google's main source of
income if Translate quality stopped improving? I doubt it.

Many of their products are like this. They're products, but they aren't
businesses. It's very easy to lose sight of the basics of business from
inside the Google bubble. The things you mention are not providing business
value in the conventional sense of the term because they aren't yielding
independent profits. So they're all vulnerable.

That said, vulnerable to what? I think Google is at risk of losing a lot of
trust around web search and political and informational search more generally,
but commercial queries are probably pretty much impregnable.

~~~
dragonwriter
> How much money does Google Photos make? Last time I used it, there were no
> ads in it.

Google Photos is, in a sense, a big ad with prominent calls to action for
selling storage space.

> Isn't speech recognition nearly at human levels of comprehension, at least
> for US English and outside of highly technical jargon?

Recognition, maybe (though I think no). Applying recognized text to get the
desired outcome, not even close.

> Does Google Translate make money in some specific, measurable way?

It powers a paid API, so, yes, it makes money in a specific, direct,
measurable way.

~~~
peoplewindow
But selling disk storage is a very marginal business and few people take huge
numbers of photos they value enough to pay for storage of (vs sticking them on
Facebook or Instagram which compresses the hell out of them but does it for
free). I really doubt this is a significant business for Google.

~~~
wastedhours
But this will only compound, no? If people keep using it, the switching
"cost" (cognitive/time) keeps increasing, and people will near the storage
limits.

------
bit_logic
Self-driving cars are based on machine learning, which is basically
processing massive amounts of past data. This is great for routine
situations, and it seems Waymo is making good progress covering most of
these. However, the key weakness is the lack of true intelligence. When
anything unusual or unexpected happens, the best the car can do is safely
shut down and wait for a human to intervene. And the car can't really analyze
and understand its environment, because that requires intelligence.

It seems a best of both worlds solution would be a new type of job: Data
gathering driver. It would be basically like the Google Streetview cars, but
with many more sensors and input from the driver.
assigned neighborhoods to drive through and note anything unusual or
noteworthy. Stop sign is down because of last night's storm? Driver notes this
and now all Waymo cars know. Street closed due to construction? All Waymo cars
will avoid that street. This type of data requires intelligence to analyze and
understand and it would be fed into the Waymo system for all self driving cars
to benefit from it. There could be millions of self driving cars and a few
thousands of these drivers feeding daily updated data to the system. You could
even increase the frequency of the drivers depending on weather conditions.
Maybe if there's a storm, the drivers keep driving in a loop hourly instead of
once daily.

~~~
iUsedToCode
There aren't that many novel scenarios on the road. Sure, Google can't
program around the possibility of an airplane falling on you, but how often
does that happen?

It doesn't have to be perfect. Just very good and improving. Some time ago
google shared a gif of a wheelchair chasing a duck in the middle of the road.
The car didn't understand it, so it just stopped. Good enough for me.

Obviously, they have a lot of work ahead of them, but don't be so pessimistic.
Most people drive shitty (myself included), we aren't impossible to improve
upon.

~~~
notahacker
Novel scenarios might not be common compared with miles of traffic-following
drudgery, but even really bad human drivers deal with novel scenarios on the
road more often than they have accidents.

Stopping might be a sensible safety protocol in some situations, but it isn't
in others (not to mention the situations where the car may stop too late
because it doesn't actually recognise that a novel scenario is _about_ to
occur).

So if you want Level 5 driving, and not just a very impressive demo which is
safe provided a human watches it attentively enough to take emergency measures
and is able to override it when it decides it can't process a situation well
enough to proceed, you need the AI to be pretty damn close to perfect in its
judgement of how to react to a huge number of edge cases.

~~~
Retric
Except self-driving cars are paying full attention all the time and can react
significantly faster, which makes the difference in stopping distance
dramatic. Remember, humans are basically going to do the same thing for the
first 0.25 seconds of any emergency situation, and that's the best case.

So, self driving cars can simply be very cautious without seeming to.
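
That 0.25-second figure is enough to make the point with back-of-envelope numbers (the speed and deceleration below are assumptions for illustration, not measured values):

```python
def stopping_distance(speed_ms, reaction_s, decel=8.0):
    """Distance covered during the reaction delay, plus braking
    distance at constant deceleration (metres, SI units)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel)

# At roughly highway speed (30 m/s ~ 67 mph): best-case 0.25 s human
# reaction vs. an assumed ~0.05 s machine reaction.
human = stopping_distance(30.0, 0.25)
robot = stopping_distance(30.0, 0.05)
advantage = human - robot  # metres saved purely by reacting sooner
```

The braking-distance term is identical for both; every bit of the advantage comes from the reaction delay, which is why it matters most at high speed.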

~~~
notahacker
I agree that self driving cars' response time can _sometimes_ compensate for
lack of general intelligence to anticipate a visible roadside activity
developing into a hazard or non-routine situations in which another driver
might cut into their lane. But lightning reflexes aren't going to eliminate
situations in which buggy, late or nonexistent responses to things an AI
hasn't been trained to deal with endanger other road users, especially when
said other road users don't have lightning reflexes themselves.

~~~
Retric
Can you give an actual example? Not being able to identify something is not
necessarily an issue, as long as the car notices that something is there and
that it should not hit it.

EX: I am sure the car had no idea what this was:
[https://youtu.be/Uj-rK8V-rik?t=26m11s](https://youtu.be/Uj-rK8V-rik?t=26m11s)
but as long as it can tell it's bigger than a bread box and should not be
hit, that's enough.

~~~
notahacker
The obvious problem scenario is when unpredictable evasive action puts an
autonomous vehicle into the path of other drivers (with human response times).
That's when very sharp responses to an uncategorised "obstacle" that's
actually a drifting plastic bag or a reflection cause more problems than they
solve.

Similarly, instantaneous harsh braking might help an AI save the small child
it didn't anticipate might chase the football that flew past moments earlier,
but a human capable of grasping that footballs are associated with pedestrians
making rash decisions might have braked early and gently enough to not get
bumped by the car behind. (If they didn't, they might find their late reaction
blamed for the accident and possibly even get prosecuted for driving without
due care and attention).

The UK requires every learner driver to sit an exam consisting of identifying
CGI "developing hazards" where they're scored on ability to rapidly identify
_stuff that might happen_ before they're allowed to do the full driving test.
I'm sure a key focus of the teams the article discusses is teaching AI similar
cases like gently slowing in the event of a football-shaped object moving near
the road (which is likely far from the most difficult or obscure novelty to
teach an AI to handle) but the problem space of novelties humans handle by
understanding what things might be and how/if they are likely to move isn't
small or one there's good reason to believe plays to AI's strengths

(Meta: not sure why you're being downvoted, your contribution seems
constructive and on topic to me)

~~~
Retric
A bag* or child running into a street is not an unusual event, and a car is
not going to 'evade' into another car. Rare events are the things people
don't see across multiple human lifetimes, not just things you don't see
every month.

Which IMO is what's missing from the debate: with ~10+ million miles of
training data before these things are in production, unusual events are
clearly covered to a point, but I doubt people would react well to, say,
someone falling from an overpass onto the road either. So it's that narrow
band of really odd events that a person would nonetheless respond to
correctly that's the 'problem'.

PS: Of course the bag might relate to a bug, which is likely. But IMO that's
a completely different topic.

~~~
notahacker
> A bag* or child running into a street is not an unusual event, and a car is
> not going to 'evade' into another car

Opting to drive around a (stationary, visible from a distance) bag in an
unpredictable manner is literally how Waymo's first "at fault" accident
occurred...

The point is that a human has a concept of a "ball" linked to the concept of
"children play football" and an understanding that if one sees the former,
one should be prepared for the latter to burst onto the road from behind the
partially-obscured roadside. Appropriate action probably involves easing off
the accelerator and lightly tapping the brake _so the car behind gets a hint
that you might have to stop suddenly_ in case a child emerges from behind a
bush. An autonomous car which fails to anticipate, even though it's lightning
fast at slamming the brakes on, is going to get rear-ended a lot more.

The neural network of a self-driving car might be able to classify small
coloured spheres in the vicinity of the roadway as balls, and the AI will
certainly have been taught that a human-shaped obstacle moving across the
roadway is a "need to stop" situation, but it is unlikely to "learn" the
association between the two through a few tens of millions of miles of
regular driving, because only a very small proportion of "need to stop"
events involve balls (and only a very small number of sightings of spheres
moving in the vicinity of the roadway result in "need to stop" events).

Of course, you can hard-code a machine to respond to ball-shaped objects
moving near roads by slowing down, and you can construct a huge number of
artificial test scenarios involving balls to teach the AI the association
between balls and small children. But either of these options involves
engineers envisaging the low-frequency hazard and teaching the AI enough
permutations of the sensory input for that hazard to let it anticipate it
(and there's a balance to be struck, because nobody wants a paranoid AI which
drives through the city braking every time it sees something its neural
network identifies as a pedestrian, or the front of a parked car protruding
from a driveway).

Suffice it to say, we take for granted our ability to know how to react to
things like children chasing balls, staggering 4am drunks, tiny puddles the
car in front just drove through versus a raging torrent of water through the
usually safely navigable ford, sandbags versus shopping bags, vehicles laden
down with loads which look like things which are not vehicles, and people
frantically gesturing to stop.

~~~
EmployedRussian
> Opting to drive around a (stationary, visible from a distance) bag in an
> unpredictable manner is literally how Waymo's first "at fault" accident
> occurred...

There wasn't anything unpredictable about the behavior of Google SDC. From the
official statement: [https://www.engadget.com/2016/02/29/google-self-driving-
car-...](https://www.engadget.com/2016/02/29/google-self-driving-car-
accident/)

"Our car had detected the approaching bus, but predicted that it would yield
to us because we were ahead of it.

Our test driver, who had been watching the bus in the mirror, also expected
the bus to slow or stop. And we can imagine the bus driver assumed we were
going to stay put. Unfortunately, all these assumptions led us to the same
spot in the lane at the same time. This type of misunderstanding happens
between human drivers on the road every day."

------
jpelecanos
Recently, Waymo and Lyft launched a self-driving vehicle partnership [0].
With Trump's nomination of Derek Kan (Lyft General Manager in Southern
California) to serve as Under Secretary of Transportation for Policy [1],
would it ease Waymo's path towards autonomous car supremacy?

[0] [https://www.reuters.com/article/us-lyft-waymo-
collaboration-...](https://www.reuters.com/article/us-lyft-waymo-
collaboration-idUSKCN18B02L)

[1] [https://www.whitehouse.gov/the-press-
office/2017/04/06/presi...](https://www.whitehouse.gov/the-press-
office/2017/04/06/president-donald-j-trump-announces-intent-nominate-derek-
kan-department)

------
philjohn
Unless I misunderstood, in the animated image of the car turning, which
includes a wireframe representation of the scene as well as the camera
views:
[https://cdn.theatlantic.com/assets/media/img/posts/2017/08/W...](https://cdn.theatlantic.com/assets/media/img/posts/2017/08/WaymoMovieGIF/a7325709f.gif)

The generated geometry doesn't seem to include the bike that quickly passes
behind the car at the intersection...

~~~
euyyn
The bike is the red box that crosses the intersection. It doesn't pass behind
the car, but by its side. The white wireframe box that "detaches" from the
Waymo car is where the cameras are (so what the car actually did). I assume
the Waymo car that doesn't stop in the animation is what it should have done
instead.

~~~
jvolkman
More specifically, the article indicates that the car that doesn't stop is
what the software - after changes to fix the original problem - would do now
if it encountered the same situation.

------
monkpit
> for an average of once every 890 miles, or 0.80 disengagements per 1,000
> miles.

Isn’t that ~1.12 per 1,000 miles?
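
(The arithmetic checks out, assuming the quoted 890-mile figure is the right one:)

```python
miles_per_disengagement = 890
rate_per_1000 = 1000 / miles_per_disengagement
print(round(rate_per_1000, 2))  # 1.12 -- not 0.80
```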

~~~
0xB31B1B
Wow, and I would think we're looking to squeeze ~3 more orders of magnitude
of reliability before going full driverless (~1M miles/disengagement).

~~~
notatoad
is there a specific reason you think that, or is a million just a sufficiently
impressive-sounding number?

~~~
nopinsight
Motor vehicles are interestingly less fatal than our intuition suggests:
"only" 11.3 deaths per billion vehicle-miles. [1]

To my surprise, railroad travel is more fatal at 29.0 deaths per billion miles
traveled. [2]

[1]
[https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...](https://en.wikipedia.org/wiki/Transportation_safety_in_the_United_States#Traffic_safety_by_mode_by_traveled_distance)

[2] [http://www.caranddriver.com/features/safety-in-numbers-
chart...](http://www.caranddriver.com/features/safety-in-numbers-charting-
traffic-safety-and-fatality-data)

~~~
Judgmentality
Does that mean trains are more dangerous than cars? It seems that way, I just
wonder if I'm missing something.

~~~
waqf
Looking at the BTS statistics:

[https://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/pu...](https://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/national_transportation_statistics/html/table_02_01.html_mfd)

you can see that 95% of railroad fatalities are classified as either
"Trespassers" or "Highway–rail grade crossing". In both of those cases it
seems likely that the victims are generally not actually passengers on the
train, but rather are people doing something obviously dangerous near railroad
tracks.

(Also the denominator wasn't clear for the quoted statistic: is it per
passenger-mile, or per vehicle-mile? In any case here is a sensible
denominator:
[https://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/pu...](https://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/national_transportation_statistics/html/table_01_40.html.))

------
jacquesm
Interesting stuff. I wonder to what extent Google and Tesla could benefit from
sharing each others datasets, Tesla has far more real world data than Google
at this point in time but Google has the better virtual environment to test
in.

~~~
kyrra
Not all data is equal. Waymo test vehicles likely record a bunch of data from
all the sensors and can download it all at the end of the day.

Tesla only enabled the option for people to upload data to them in the last
few months. Determining which data to upload is its own problem as well.

~~~
jacquesm
[https://qz.com/694520/tesla-has-780-million-miles-of-
driving...](https://qz.com/694520/tesla-has-780-million-miles-of-driving-data-
and-adds-another-million-every-10-hours/)

Tesla has an enormous amount of data at their disposal. The recent change
concerned video footage from the onboard cameras.

~~~
danblick
And not much lidar data, right? I don't work on this, but I imagine the
quality of the data set makes a big difference.

From what I've learned, lidar and radar are typically used for object
detection (i.e., avoiding other cars; perhaps augmented with camera data),
while cameras are used for things like lane detection and traffic sign
detection. If Tesla is trying to solve the same hard problems as everyone
else, using weaker equipment, I wouldn't bet on them having a lot of success.

Also, kind of the point of this article is that Waymo has a test area where
they can get real data along with the ground truth. That seems much more
valuable than unlabeled/random data.

~~~
iancarroll
Tesla has decided to use only radar (+ cameras), so none of their vehicles
have, or are planned to have, lidar at all.

------
nopinsight
The fact that Waymo revealed their "secret" tools for advancing this crucial
technology implies that either:

1) They believe no one can quite catch up before they can launch the
technology. Since they know several competitors have huge resources and
brilliant people, it means they are quite close to launch.

2) These tools have become an open secret within the industry, so no harm is
done to their competitive position by revealing them to the public; only good
PR to be gained, perhaps attracting more bright engineers.

I suspect 2) is more likely since several key players have moved around so
much in the past couple of years. Relatively high-level knowledge about how
autonomous vehicles are being developed at Waymo might have become well-known
within the industry by now.

~~~
nharada
No doubt it's number 2. I've seen quite a few articles about other self-
driving efforts, and the general sentiment on HN is "Cruise looks like they're
far ahead" and "What about Waymo? Did they fall behind?" This seems like Waymo
PR showing off some secrets to re-assert their technical strength.

------
joering2
Great to see such progress made by Waymo and Co. But as Murphy's law says, if
something can break, it eventually will. With possibly millions of self-
driving cars and trillions of unique situations, it's inevitable that someone
will get hurt. So my question is: what kind of progress is being made to draft
a legal framework for the situation in which I ride my bike in the bike lane
and for whatever reason a self-driving vehicle strikes me? Who do I go after
for my piling-up medical expenses? Do I sue the driver? The car company? The
company that delivered the self-driving hardware? The software manufacturer? I
get that self-driving cars come with incredible advantages: fewer crashes
(hopefully), no drunk drivers, etc. But a legal framework for such automobiles
should be in the works as we speak.

Edited: of course Murphy's law; thanks for pointing it out!

~~~
Zigurd
Globally, the death toll on roads is about 1 million per year. If autonomous
vehicles make a significant cut in that number, people will start to see human
drivers as the more-difficult liability issue.

------
Judgmentality
That's pretty fucking cool.

------
diasp
Machine Learning for Self-Driving Cars. High-level Development Process for
Autonomous Vehicles. [https://www.slideshare.net/jwiegelmann/machine-learning-
for-...](https://www.slideshare.net/jwiegelmann/machine-learning-for-
selfdriving-cars)

------
skoocda
Recently got turned down for a job here, and now I'm remarkably glad I was -
this is an incredibly difficult problem to tackle, and they need absolutely
brilliant people to achieve success.

I wish them the best of luck in their continued progress! I'll stick to easier
stuff :)

------
senatorobama
Waymo should just get training data in India.

~~~
pault
I'm having a nightmarish vision of a fleet of self driving cars trained to
ceaselessly honk and drive on the sidewalks.

------
senatorobama
Can they simulate a hurricane?

~~~
jacquesm
They don't have to simulate one, merely reliably detect one and then refuse to
drive. One of the most important lessons drilled into new drivers is that the
best way to avoid accidents is to know when _not_ to drive; self-driving cars
should not be an exception to that rule. If you attempt to drive in a
hurricane you are essentially committing suicide, and your best course of
action is to find a safe place to weather the storm. And if you _really_ want
to drive, a self-driving car might give you the option to override the
safeties, but those should only be used in actual emergencies, not just to
putter around in really bad and possibly dangerous weather.

The same goes for dense fog, flooding and other situations where normal
driving is no longer possible and/or responsible.

~~~
jvolkman
And Waymo has already said that its cars will safely pull over when the
weather gets too rough.

~~~
jpindar
So during each winter storm, the roads will be lined with cars full of people
freezing to death (literally) because their cars decided they can't drive
home. And people will try to walk (in the roadway) to shelter, which isn't
much safer.

~~~
bluGill
Most cars can stay warm for several days just running the engine at idle.
Assuming you get enough fresh air (not always a given in a snow storm, which
can block air vents), stopping and just running the engine for heat is safe
unless your fuel tank is low - which it shouldn't be.

~~~
jacquesm
Electric cars can't.

