
Off road, but not offline: How simulation helps advance Waymo Driver
https://blog.waymo.com/2020/04/off-road-but-not-offline--simulation27.html
======
khazhoux
This headline (and article) is IMHO promoting a unit-less metric.

There's no mention of the yield in quality improvement per hour of simulation.
I.e., how much better does the vehicle drive using the learnings from these
876,000 sim hours?
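
(For reference, the 876,000-hour figure and the submitted title's "100 years per day" are the same number; a quick sanity check, variable name mine:)

```python
# "100 years of driving per day", converted to simulated hours per day:
sim_hours_per_day = 100 * 365 * 24
print(sim_hours_per_day)  # 876000
```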

Once a sim pipeline is set up, it's "easy" to scale up the number of hours it
runs (throw it more compute resources, throw it more scenarios). But that
doesn't mean the analysis scales up, or the quality of the sim results, or the
application of analysis to bug fixes or feature development.

~~~
angstrom
As far as I'm concerned the only number that matters is the real world mean
distance between failure/incident.

~~~
catalogia
As a driver, I think too much emphasis is placed on those global mean
crash/fatality rate statistics. That sort of analysis is too reductive. By not
being an alcoholic and choosing to always wear a seatbelt, not driving in bad
weather, choosing to obey the speed limit, etc.. individuals can deliberately
boost their chances well above the national mean. Insurance companies charge
different people different rates because they aren't so reductive as to only
consider national averages.

The response I usually get when I talk about this is _"everybody thinks
they're better than average, so chances are equally good you're not"_, but that too is
too reductive. Loads of people drive drunk, while I know that I don't. That
alone significantly stacks the odds in my favor for the likelihood that I'm
correct in thinking I'm above the average.

------
dang
Please don't editorialize titles. This is in the site guidelines: "_Please
use the original title, unless it is misleading or linkbait; don't
editorialize._"
([https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html))
Cherry-picking the detail you think is most important from an article is
editorializing.

If you want to say what you think is important about an article, please post
it as a comment to the thread. Then your view will be on a level playing field
with everyone else's:
[https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...](https://hn.algolia.com/?dateRange=all&page=0&prefix=false&query=by%3Adang%20%22level%20playing%20field%22&sort=byDate&type=comment)

If it's necessary to change a title, please do that by picking the most
neutral and accurate phrase you can find on the actual page which represents
what the article is about. More on that here:
[https://news.ycombinator.com/item?id=22932244](https://news.ycombinator.com/item?id=22932244)

(Submitted title was "Waymo's simulators are doing 100 years of driving per
day during WFH")

------
pochamago
I feel like HN is too skeptical of self driving. Having driven a Tesla for a
road trip, I would feel safe letting it do all the work on interstates without
the feedback it currently requires. That's a big step forward. Everyday use
cases, even if it's not every moment you're in your car, are certainly within
reach.

~~~
microcolonel
> _I feel like HN is too skeptical of self driving. Having driven a Tesla for
> a road trip, I would feel safe letting it do all the work on interstates
> without the feedback it currently requires._

How many times does a Tesla have to actively accelerate and careen into a
solid concrete barrier before HN's modicum of skepticism for self driving,
particularly Tesla's implementation, is justified?

~~~
freepor
1/2 as many times as a human does. I've driven about 250,000 miles in my life
and had 5 accidents, none with injury. But I think a self driving car could do
a lot better.

~~~
ashtonkem
My skepticism of self driving tech is all weather based. Here in LA, I’m
confident that a self driving car could do fine; slow traffic (normally), good
roads, low precipitation and clear lines make for easy driving.

But back when I lived in Colorado I had multiple times when I had to drive on
a highway with completely obscured lines, poor weather, and extremely
dangerous consequences for failure. I recall one drive in a blizzard where all
the cars driving had to negotiate their own space without any indication of
where the lines were supposed to be. If I recall correctly we all settled on 1
fewer lanes than the road “should” have. I do not believe that self driving
tech is capable of doing that yet, especially on mixed human/robot roads.

~~~
colejohnson66
Maybe lane marker tracking isn’t the way to go? I’ve always wondered if it
would be possible (in the US; through a massive initiative) to just paint
lines (or embed something in or on the road) in the center of the lanes and
have the cars just follow the lines. How much would that cost?

~~~
ashtonkem
No paint will solve the problem of snow covering said paint. So you’ll have to
embed them. The hardware alone would cost billions of dollars to purchase, and
installation would cost either trillions of dollars to rework all the roads
now, or decades to install as our road network ages out.

Long story short, it’s not feasible.

~~~
colejohnson66
But isn’t that what people said of the National Highway System? That it wasn’t
feasible? And yet here we are almost half a century later...

~~~
ashtonkem
I doubt it.

Sure, I’m sure there were complaints about the costs, but the value of a
massive road network has been known for millennia.

Spending a trillion ripping up a perfectly good road network in order to try
out a new technology that’ll generate profits for a small number of companies
is an insane plan, and genuinely might not work.

~~~
colejohnson66
We don’t necessarily need to rip up the existing roads. We could just put some
radio based markers on the road. Then when they do construction, they could be
easily moved, too.

~~~
ashtonkem
It’s true that markers _on_ the pavement are cheaper to implement than markers
_in_ the pavement, I agree.

I’m also not convinced it solves anything. In the scenario above the problem
wasn’t the fact that the lines weren’t visible, the problem was _other cars_.
This is particularly problematic when many of those other cars are human
beings who will never have access to modified road information. In this
scenario a smart car that knows where the lane should be is still forced to
dynamically negotiate for space with other vehicles, sans communication
protocols. That is a _very_ hard problem, one I do not believe is currently
solved.

------
01100011
Nvidia has a similar system: [https://www.nvidia.com/en-us/self-driving-cars/drive-constellation/](https://www.nvidia.com/en-us/self-driving-cars/drive-constellation/)

~~~
typon
That was incredible... I know that video games like GTA look similar, but this
whole system with hardware-in-the-loop is amazing

------
xnx
It's a shame that Waymo (as well as Google Streetview?) isn't operating on the
road at this time, since there's probably no better chance to do mapping than
when there are far fewer cars and people to obstruct the view.

~~~
anonymousCar
Mapping isn't trivial, but I'm sure it's not what's blocking expanding
operations

------
lolc
I'm unnerved by the anthropomorphism of "driver". What would we think of a
human saying "I've got 100+ years of sim experience, but out on the street I
need a safety watch"? It obviously means their "driver's" accumulated
experience doesn't count for much.

The closest human analogue to what their machine is doing is "dreaming". When
we dream, we improve our reactions without having to go through the actual
situation. But again, picture this human saying "I'm a safe driver because
I've dreamt about driving a lot!"

Now you could say that theirs is just a product name. Surely they'd agree
machines are different. And maybe I'm pedantic when I say they shouldn't use
words that liken their system to a human. Still I think it's important to
acknowledge that machines won't drive like we do. Our driving is too
probabilistic on multiple levels.

I think we wouldn't be stuck in uncanny valley if it were acknowledged that
first we have to equip highways and participants on them with transponders.
Then we can let the machines take over on boring, predictable highways.

------
axguscbklp
Waymo's simulators are doing 100 years of playing video games per day. They're
not doing any driving.

~~~
jrockway
Simulators can be pretty good. The FAA certifies some pretty low-end simulator
setups for commercial/private pilots to get IFR training in, for example. And
of course, airline pilots almost exclusively train in simulators. Very few
pilots have experienced an engine fire in their 787, but I'd bet that about
100% of them would handle the situation correctly because of simulator
training (and good documentation).

I don't know how good driving simulators are, but at least self-driving cars
are modular enough to test components. Even if you can't test the machine
vision, you can still invent data coming out of the vision to test the rest of
the system. (You can probably fuzz test it, too. Generate a bunch of random
scenarios, fail if an obstacle hitbox and vehicle hitbox touch. Tweak that,
and then set up the scenario in the field when COVID-19 is over.)
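
The hitbox fuzzing idea could look something like this minimal sketch. Everything here (the axis-aligned box representation, the stand-in for the planner's output) is my assumption for illustration, not anyone's actual setup:

```python
import random

def boxes_overlap(a, b):
    """Each box is (x_min, y_min, x_max, y_max); True if the boxes touch."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def random_box(rng, size=2.0, extent=100.0):
    """A random axis-aligned box somewhere in a square world."""
    x = rng.uniform(0, extent)
    y = rng.uniform(0, extent)
    return (x, y, x + size, y + size)

def fuzz_scenarios(n_scenarios=1000, n_obstacles=5, seed=42):
    """Generate random scenarios; collect the ones where the (stub)
    planner's ego box ends up touching an obstacle hitbox."""
    rng = random.Random(seed)
    failures = []
    for i in range(n_scenarios):
        obstacles = [random_box(rng) for _ in range(n_obstacles)]
        ego = random_box(rng)  # stand-in for the planner's final pose
        if any(boxes_overlap(ego, ob) for ob in obstacles):
            failures.append((i, ego, obstacles))
    return failures
```

Seeding the randomness keeps failing scenarios reproducible, so a collision found by the fuzzer can be replayed exactly and, as suggested above, later staged in the field.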

Certainly the vision part of the equation is one of the greatest engineering
challenges. That is why they are testing cars in the field. But just having a
working vision system doesn't get you a working self-driving car, so quite a
bit of effort has to be invested in the rest of the system. With no one allowed
to go to work, now is a great time to invest in that.

------
ape4
Can they randomly flip bits in the simulation? What happens when the
stoplight is green and red at the same time? An upside-down stop sign, etc.
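
A minimal sketch of what that kind of fault injection might look like, with a deliberately defensive stub planner (all names here are hypothetical):

```python
import random
from dataclasses import dataclass

@dataclass
class TrafficLightState:
    red: bool
    green: bool

def inject_contradiction(state, rng, rate=0.1):
    """With probability `rate`, corrupt the perceived light into an
    impossible red-and-green state, mimicking a flipped bit upstream."""
    if rng.random() < rate:
        return TrafficLightState(red=True, green=True)
    return state

def planner_decision(state):
    """Stub planner: any contradictory input maps to the safe default."""
    if state.red and state.green:
        return "stop"  # impossible input -> stop rather than guess
    return "go" if state.green else "stop"
```

The interesting test is asserting that every contradictory input maps to the safe default, rather than to whichever branch the code happens to hit first.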

------
tanilama
But still not human-level performance.

------
btian
[DELETED]

~~~
rrmm
Depends on how dumb you are. Autopilots seem pretty dumb at this point.

------
Dahoon
And it will still not drive safely on a real road for at least 5 years.

~~~
axguscbklp
My prediction is that self-driving cars will not dominate the public roads
until sometime between 40 years from now and never.

------
unstatusthequo
That’s great, but it’s 100 years a day of completely unusual inputs. It’s
great they can navigate fairly empty streets, but what happens when the flood
of traffic returns?

~~~
gregkerzhner
This article is about cars driving in simulation, not on the real streets. But
I do wonder how they can introduce truly "unpredictable" events in the
simulations.

I don't know anything about self driving cars, but it seems like solving self
driving isn't far off from requiring general artificial intelligence. How well
does a car react when there is a construction flagger telling it to turn
around, or when there are no signs and the traffic lights go down due to a
power failure?

As an aside, my recent experience ordering a burrito from a virtual assistant
over the phone did not leave me too dazzled with the state of the AI industry.
Of course, Chipotle's phone robots are probably not training on 100 years of
simulated burritos...

~~~
dmortin
> How well does a car react when there is a construction flagger telling it to
> turn around, or when there is no signs and the traffic lights go down due to
> a power failure?

It tells the driver to take over when it can't interpret the situation.

Total self driving is far in the future. Realistically in a few years we can
get to the point where the car can drive itself in usual situations, but the
human is needed for anything unusual.

~~~
colejohnson66
The problem in the meantime is that you have to keep the driver alert the
whole time. Deaths involving Teslas are almost always because the driver had
Autopilot on and was ignoring the “take control” warning.

