
Tesla’s Push to Build a Self-Driving Car Sparks Dissent Among Its Engineers - dcgudeman
https://www.wsj.com/articles/teslas-push-to-build-a-self-driving-car-sparks-dissent-among-its-engineers-1503593742
======
empath75
The level-2 driving that Tesla is pushing seems like a worst case scenario to
me. Requiring the driver to be awake and alert while not requiring them to
actually _do_ anything for long stretches of time is a recipe for disaster.

Neither the driver nor the car manufacturer will have clear responsibility
when there is an accident. The driver will blame the system for failing and
the manufacturer will blame the driver for not paying sufficient attention.
It's lose-lose for everyone: the company, the drivers, the insurance
companies, and other people on the road.

~~~
JumpCrisscross
> _The level-2 driving that Tesla is pushing seems like a worst case scenario
> to me_

What are you measuring? The current autopilot already appears to be materially
safer, in certain circumstances, than human drivers [1]. It seems probable
Level 2 systems will be better still.

A refrain I hear, and used to believe, is that machine accidents will cause
public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot
accidents have produced no such reaction. Perhaps assumptions around public
perceptions of technology need revisiting.

> _Neither the driver nor the car manufacturer will have clear responsibility
> when there is an accident_

This is not how courts work. The specific circumstances will be considered.
Given the novelty of the situation, courts and prosecutors will likely pay
extra attention to every detail.

[1] [https://www.bloomberg.com/news/articles/2017-01-19/tesla-
s-a...](https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-autopilot-
vindicated-with-40-percent-drop-in-crashes)

~~~
barrkel
That's not what the concern is based on. It's rooted in what we've learned
about autopilot on planes and dead men's switches in trains. Systems that do
stuff automatically most of the time and only require human input occasionally
are riskier than systems that require continuous human attention, even if the
automated portion is better on average than a human would be. There's a cost
to regaining situational awareness when retaking control that must be borne
exactly when it can't be afforded, in an emergency.

~~~
JumpCrisscross
> _It's rooted in what we've learned about autopilot on planes and dead men's
> switches in trains_

Pilots and conductors are trained professionals. The bar is lower for the
drunk-driving, Facebooking and texting masses.

> _Systems that do stuff automatically most of the time and only require human
> input occasionally are riskier than systems that require continuous human
> attention, even if the automated portion is better on average than a human
> would be_

This does not appear to be borne out in the data [1].

[1] [https://www.bloomberg.com/news/articles/2017-01-19/tesla-
s-a...](https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-autopilot-
vindicated-with-40-percent-drop-in-crashes)

~~~
abalone
You're misunderstanding the data and the concern. Currently, Tesla Autopilot
frequently disengages as part of its expected operation, handing control back
to the driver. Thus, the human driver remains an attentive and competent
partner to the autopilot system. That data is based on today's effective
partnership between human and computer.

The concern is that as level 2 autopilot gets better and disengagements go
down, the human's attentiveness will degrade, making the remaining
disengagement scenarios more dangerous.

~~~
JumpCrisscross
> _The concern is that as level 2 autopilot gets better and disengagements go
> down, the human's attentiveness will degrade, making the remaining
> disengagement scenarios more dangerous_

A Level 2 autopilot should be able to better predict when it will need human
intervention. If the autopilot keeps itself in situations where it does better
than humans most (not all) of the time, the system will outperform.

My view isn't one of technological optimism. It's derived from the low bar set
by humans.

~~~
nostrademons
The problem is that in L2, the bar for the system as a whole is _set_ by the
low bar for humans, specifically their reactions in an emergency. If the
computer safely drives itself 99% of the time but in that 1% when the human
needs to take control, the human fucks up, the occupants of the vehicle are
still dead. And what people are saying here is that L2 automation increases
the risk that the human will fuck up in that 1%, by decreasing their
situational awareness in the remainder of time.

That's why Google concluded that L5 was the only way to go. You only get the
benefit of computers being smarter than humans if the computer is in charge
100% of the time, which requires that its performance in the 1% of situations
where there is an emergency must be better than the human's performance.
_That_ is the low bar to meet, but you still have to meet it.
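The back-of-envelope version of this argument can be written down directly. Every number below is a made-up assumption for illustration (the 99%/1% split, the relative safety factors, and the inattention penalty are not measurements):

```python
# Illustrative model of L2 risk; all rates are hypothetical assumptions.
human_baseline = 1.0   # fatal-crash risk per mile, normalized to 1
auto = 0.99            # fraction of miles the computer handles on its own
computer_risk = 0.1    # assume the computer is 10x safer than a human
handover_risk = 20.0   # assume an inattentive human in an emergency is 20x worse

# Total system risk is a weighted mix of the two regimes.
l2_risk = auto * computer_risk + (1 - auto) * handover_risk
print(round(l2_risk, 3))  # 0.299: better than the human baseline of 1.0

# But the total is dominated by the handover term, so if inattention makes
# emergency handovers bad enough, the advantage flips.
flip_risk = auto * computer_risk + (1 - auto) * 91.0
print(flip_risk > human_baseline)  # True
```

On these made-up numbers L2 still wins; the disagreement in this thread is really about how large the handover penalty becomes as drivers tune out.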

~~~
JumpCrisscross
> _If the computer safely drives itself 99% of the time but in that 1% when
> the human needs to take control, the human fucks up, the occupants of the
> vehicle are still dead. And what people are saying here is that L2
> automation increases the risk that the human will fuck up in that 1%, by
> decreasing their situational awareness in the remainder of time._

Humans regularly mess up in supposedly safe scenarios. Consider a machine that
kills everyone in those 1% edge cases (which are in reality less frequent than
1%) and drives perfectly 99% of the time. I hypothesise it would _still_
outperform humans.

Of course, you won't have 100% death in the edge cases. Either way, making the
majority of travel safe in exchange for making edge cases more deadly to
untrained drivers has a simple solution: a higher bar for licensing human
drivers.

~~~
abalone
_> making the majority of travel safe in exchange for making edge cases more
deadly to untrained drivers has a simple solution: a higher bar for licensing
human drivers._

You are still misunderstanding the concern. The problem is not poorly trained
drivers. The problem is that humans become less attentive after an extended
period of problem-free automated operation.

I hear you trying to make a Trolley Problem argument, but that is not the
issue here. L2 is dependent on humans serving as a reliable backup.

~~~
JumpCrisscross
> _You are still misunderstanding the concern. The problem is not poorly
> trained drivers. The problem is that humans become less attentive after an
> extended period of problem-free automated operation._

I understand the concern. I am saying the problem of slow return from periods
of extended inattention is not significant in comparison to general human
ineptitude.

Level 2 systems may rely on "humans serving as a reliable backup," but they
won't always need their humans at a moment's notice. Being able to predict
failure modes suggests several possible solutions: (a) giving ample warning
before handing over control, (b) taking a default action, _e.g._ pulling over,
and/or (c) refusing to drive when those conditions are likely.
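A minimal sketch of what such a fallback policy could look like. The thresholds, the `failure_prob`/`seconds_until_failure` inputs, and the function itself are entirely hypothetical, not anything Tesla has described:

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "keep driving"
    WARN_DRIVER = "(a) give ample warning, then hand over control"
    PULL_OVER = "(b) take a default action: pull over"
    REFUSE = "(c) refuse to drive in these conditions"

def fallback_policy(failure_prob: float, seconds_until_failure: float) -> Action:
    """Choose a fallback from a (hypothetical) self-assessed failure forecast."""
    if failure_prob < 0.01:
        return Action.CONTINUE
    if failure_prob > 0.5:
        # Failure is likely: don't engage in these conditions at all.
        return Action.REFUSE
    if seconds_until_failure > 10.0:
        # Plenty of lead time: the human can be warned well in advance.
        return Action.WARN_DRIVER
    # Too little time to trust an inattentive human: stop safely instead.
    return Action.PULL_OVER

print(fallback_policy(0.2, 30.0))  # Action.WARN_DRIVER
```

The whole scheme, of course, stands or falls on whether the system can actually predict its own failures with useful lead time.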

In any case, I'm arguing that the predictable problem of inattention is
outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the
time. Yes, from time to time Level 2 autopilots will abruptly hand control
over to an inattentive human who runs off a cliff. But that balances against
all the accidents humans regularly get themselves into in situations a Level 2
system would handle with ease. It isn't a trolley problem, it's trading a big
problem for a small one.

~~~
snaily
If you actually look at the SAE J3016_201609 standard, your goalpost-moving
takes you beyond level 2. "Giving ample warning" puts you in level 3, whereas
"pulling over as a default action" puts you in level 4.

The original point - that level 2 is a terrible development goal for the
average human driver - still stands.

------
Animats
Tesla's system doesn't have enough sensors. Musk forced his engineers to try
to do this almost entirely with vision processing, and that was a terrible
decision. Vision processing isn't that good yet. Everybody else uses LIDAR.

I've been saying for years that the right approach was to take the technology
from Advanced Scientific Concepts' flash LIDAR and get the cost down. I first
saw that demonstrated in 2004 on an optical bench in Santa Monica. It became
an expensive product, mostly sold to DoD. It's expensive because the units
require exotic InGaAs custom silicon and aren't made in quantity. SpaceX uses
one of their LIDAR units to dock the Dragon spacecraft with the space station.

Last year, Continental, the big century-old German auto parts maker, bought
the technology from Advanced Scientific Concepts and started getting the cost
down.[1] Volume production in 2020. Interim LIDAR products are already
shipping in volume. Continental is quietly making all the parts needed for
self-driving. LIDAR. Radar. Computers. Actuators. Cameras. Software for sensor
integration into an "environment model". They design and make all the parts
needed, and provide some of the system integration.

Apple and Google were trying to avoid becoming mere low-margin Tier I auto
parts suppliers. Continental, though, is quite successful as a Tier I auto
parts supplier. Revenue of €40 billion in 2016. Earnings about €2.8 billion.
Dividend of €850 million. They can make money on low-margin parts.

Continental may end up quietly ruling automatic driving.

[1] [https://www.continental-automotive.com/en-gl/Passenger-
Cars/...](https://www.continental-automotive.com/en-gl/Passenger-Cars/Chassis-
Safety/Advanced-Driver-Assistance-Systems/Lidars/High-Resolution-3D-Flash-
Lidar)

~~~
tyrw
It depends on what you're optimizing for. Others using LIDAR are optimizing
for speed to market, while potentially sacrificing the ability to solve the
problem as fully. Musk's argument is that we know for certain that the entire
road system can be navigated by visual cues, because that's how humans do it.
We do not know for certain that this is possible with LIDAR.

~~~
sbierwagen

      Musk's argument is that we know for certain that the 
      entire road system can be navigated by visual cues
    

We know for certain that human brains can be assembled out of regular atoms,
but raising funds for a company that manufactures brains would be getting
rather ahead of our current level of technology. The same might be true of
computer vision and autonomous vehicles.

~~~
tyrw
So you believe the technology gap for assembling a human brain out of regular
atoms is similar to that for navigating a road using cameras?

~~~
bllguo
JFC. I don't get why otherwise rational people become so stupidly aggressive
when faced with analogies.

------
crocal
The only industry to have produced truly driverless public transportation
systems is the rail industry. Not aeronautics. Rail systems happen to be my
business, and what I read here makes me very worried.

I don't think the majority understands what safety means in mass
transportation. It's not about running miles and miles without accidents and
basically saying "see?" It's about demonstrating /by design/ that the
/complete/ system over its /complete/ lifetime will not kill anyone. In terms
of probability of failure, it translates into demonstrated hazard rates of
less than 1E-9 /including the control systems/. This takes very special
techniques, and if it could have been done using only vehicle sensors, we
would have adopted it long ago. I am also sorry to report that doubling
cameras and sensor fusion will not get you an acceptable safety level. We've
tried that too, rookies.
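To put 1E-9 in perspective, here is what that hazard rate implies over a system's lifetime. The fleet size and operating hours below are round illustrative figures, not data from any real railway:

```python
# What a demonstrated hazard rate of 1e-9 per operating hour buys you.
hazard_rate = 1e-9       # dangerous failures per operating hour (the target)
fleet_size = 500         # vehicles in the system (illustrative)
hours_per_year = 6000    # operating hours per vehicle per year (illustrative)
lifetime_years = 40      # system lifetime

total_hours = fleet_size * hours_per_year * lifetime_years
expected_failures = hazard_rate * total_hours
print(total_hours)                  # 120000000 operating hours
print(round(expected_failures, 2))  # 0.12: under one dangerous failure
                                    # expected over the entire lifetime
```

That is what "will not kill anyone over the complete lifetime" translates to in numbers: the expected count of dangerous failures stays below one even across a hundred million fleet-hours.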

Is it "fair", to use Elon's argument? After all, isn't additional safety
enough compared to the existing situation? Ah, but we have been there too! For
driver assistance it is indeed better. Similar systems were deployed during
the second half of the 20th century (e.g. KVB, ASFA, etc.). But the limit is
clear: they only /improve/ the driver's failure rate. They do not substitute
for the driver. If you substitute, you have to do much, much, much better.
Nobody will ride a driverless vehicle on the strength of the explanation that
it is, you know, "already an improvement when compared to a typical driver".
Is it fair? Maybe not, but that's the whole point of entrusting lives to a
machine.

~~~
cameldrv
This is a good summary of modern risk culture. It has good sides and bad
sides. On the good side, you have things like commercial aircraft and trains
that, despite being inherently dangerous, manage to achieve extremely low
fatality rates.

On the bad side, this risk culture destroys large scale innovation, and even
safety in the long run. The problem is that we've adopted an attitude that
safety is always first, meaning that it is immoral to do something in a less
safe way, no matter what the other benefits might be. This means we get a
regulatory, tort, and engineering culture that is willing to use existing
systems, because they are grandfathered and therefore "reasonable", but will
only adopt new systems if they can be shown to be perfectly safe.

This culture is fairly new. I date it to sometime in the seventies. Ralph
Nader and the Pinto were both symptoms and causes. You can see the transition
in, for example, how America responded to the Apollo 1 fire vs. the Challenger
accident.

Since you're a rail engineer, let's look at rail mass transit systems. The
NYC subway has a limit of roughly 30 trains per hour, i.e. two-minute headways.
All of the braking rates, margins of safety, and signaling systems do their
job, and you never see two trains hitting each other. During rush hour though,
trains are packed way over capacity, and this is mostly because of this
headway limitation. If you were to imagine this on the freeway, you'd have to
leave two miles between you and the next car. The cost for this level of
safety is that about half a million people have to spend an hour every day in
miserable conditions. Many of them choose to drive or take cabs instead, to
avoid this.
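The headway-to-freeway translation above is straightforward arithmetic (60 mph is an assumed freeway speed):

```python
# 30 trains per hour is a 2-minute headway; the same headway on a freeway.
trains_per_hour = 30
headway_min = 60 / trains_per_hour    # minutes between trains
speed_mph = 60                        # assumed freeway speed
gap_miles = speed_mph * headway_min / 60  # following gap at that headway
print(headway_min)  # 2.0
print(gap_miles)    # 2.0 miles between you and the next car
```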

When they make this decision, none of them even give the slightest thought
that driving is maybe 10x more dangerous than taking the train. They
understand that safety for both is plenty good, and the way that they spend
two hours of their day, 13% of their waking hours, is a lot more important
than some tiny difference in their risk of dying on the way to work.

Ultimately, while very well intentioned, this safety culture is inhuman. It's
pessimistic. It says "nothing in your life could possibly be so important that
it's worth any possibility of you being injured."

So, this is what we face now, over and over. With self driving cars, it's,
"sure, the chance of you getting killed is half as much as if you had driven,
plus you get all of that time back to have a nice conversation, read a book,
or idly stare out the window, but since it doesn't meet our aviation/rail
transit level of safety, you can't have it." You even see it with kids -- your
twelve-year-old kids can't be allowed out of the sight of an adult, or you're
a dangerous, neglectful parent. It does not matter that childhood is the
process of learning how to be an adult, and that becoming an adult is a
process of progressively mastering greater and greater freedoms -- the most
important thing is that we are never seen to expose children to even the most
minute level of risk.

I really want us to have a real conversation about acceptable risk, informed
consent, and human progress. We need it badly to regain our souls.

~~~
maxcan
> On the bad side, this risk culture destroys large scale innovation, and even
> safety in the long run.

So true. It's an issue in General Aviation, where there has been huge progress
in safety devices (electronic cockpit gauges, airbag seat belts, etc.) but it
has been illegal to install them in older aircraft because of the FAA's very
slow, expensive certification process. Finally, in the last year or so, the
FAA got serious about what's called "Part 23 reform", which will vastly
streamline the process for safety upgrades in older aircraft.

Also, I don't want this comment to be interpreted as shitting on the FAA. I'm
a libertarian-leaning former liberal who generally has very low confidence in
our government, but I consider the FAA, whose insane safety record makes me a
massive fanboy, to be an exception.

------
cletus
What befuddles me is that in all these discussions about self-driving cars
seemingly no one refers to the massive body of knowledge in this area that
comes from the aviation world.

I've posted variants of this same comment several times and I'm starting to
feel like a broken record.

Look at studies of efforts to make planes safer by removing the human element.
While efforts like autopilot have made things safer, it reaches the point
where more automation can reduce safety, as pilots are no longer alert, don't
trust the instruments, and/or can't fully manually override the automation.

Call it the uncanny valley of automation safety.

Bridging those last few percent for true automation (i.e. where vehicles
aren't designed to have drivers or pilots at all) is going to be _incredibly_
difficult, to the point where I'm not convinced it won't require something
resembling a general AI.

All of this is why I think driverless cars are going to take much longer than
many expect.

~~~
tlb
There's a big difference: commercial pilots are highly trained, even-tempered,
and take their job seriously. Most drivers are lazy, distracted, and apt to do
something stupid in an emergency. It's very hard to make something safer than
a commercial pilot. It's much easier to make something safer than a typical
driver.

~~~
archagon
> _Most drivers are lazy, distracted, and apt to do something stupid in an
> emergency._

Er, citation needed. I think the vast majority of drivers are good drivers —
otherwise, vehicular transport would be a disaster.

~~~
tlb
There are around 1.25M vehicle fatalities every year worldwide [0]. It is a
disaster. Driving has killed more people than the world wars.

"Good drivers" -- we have no benchmark to measure against. Maybe it's amazing
that 10x more people aren't killed, or maybe it's dismal that anyone is killed
at all. When we have autonomous vehicles, we'll have a reference to compare
against. I predict that "bad" will be the only word to describe the current
situation.

[0]
[https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...](https://en.wikipedia.org/wiki/List_of_countries_by_traffic-
related_death_rate)

------
manyoso
Biggest news buried at the end. It says that several engineers have quit since
October 2016 (including the head of autopilot) when Tesla started selling
"fully autonomous driving" hardware upgrade packages. Says the engineers don't
agree the hardware is capable of supporting this and that it was ultimately a
marketing decision.

~~~
RonanTheGrey
If I were one of those engineers and didn't believe in the claims being made,
I'd be quite worried about being held personally liable if the company gets
sued in the case of accidental death. Last thing I'd want is my bug being
responsible for someone dying.

Which is to say, I would quit too.

~~~
Facemelters
Good thing it would be almost impossible to attach personal liability to an
employee of a corporation unless they designed the system to be unsafe with
malicious intent.

~~~
castle-bravo
Frankly, I would be more worried about someone dying using a system that I had
a hand in than whatever effect it might have on my career.

~~~
jacquesm
> Last thing I'd want is my bug being responsible for someone dying.

Is exactly what was written above.

~~~
Dylan16807
In the context of a lawsuit. Which is different from not wanting it for its
own sake.

But quitting doesn't retroactively remove your hand from the system if that's
your main worry...

------
tdees40
I just ordered a Model S with Autopilot, and as I've been reading the comments
on the various Tesla forums, I'm not sure I'm ever going to use it. Some of
the stories are honestly terrifying (sudden deceleration on the highway,
swerving into other lanes, etc).

~~~
aerovistae
_Absolutely_ do not use it.

I am one of the biggest Tesla fans out there. I fucking love the company. But
Autopilot in its current form is nothing short of dangerous.

I took a test drive in a Model S a couple months ago and enabled Autopilot at
the Tesla rep's encouragement while on a straight stretch of route 90 near
Boston. We were going 70mph, a safe speed.

The car came to a point where the highway curved, and a slight deceleration is
required to navigate the curve correctly.

Little did I know, Autopilot stays at the speed you set and does not alter it
as the environment requires, short of not hitting the car in front of you. So
of course it tried to take the curve at 70mph and swung out of the lane almost
instantly, prompting immediate corrective action from me to avoid a serious
accident.

I couldn't believe the Tesla rep hadn't made this clear. I was required to
have my hands on the wheel, but the position of my hands doesn't ensure that
I'm mentally ready for egregious errors on the car's part and prepared to
correct them at a split second's notice at all times.

Operational questions aside, as an investor I was also astonished that the
software was still in such a rudimentary state that it didn't know to slow
down on curves. I found this troubling. It was scarcely more advanced than
cruise control, to be honest.

It's the one place where I think Elon is really gambling with people's lives
as well as his company's credibility, the former being an infinitely worse
transgression than the latter.

~~~
kuschku
It gets worse: if your Tesla is following a car in front of you, and that car
switches lanes, but you can't switch lanes because another car is coming from
behind in that lane, the Autopilot will switch nonetheless.

This almost killed a tester from the German federal motor vehicle approval
agency. Their overall report is devastating, and shows the Tesla autopilot is
little more than a glorified cruise control, marketed in a very deceptive way.

~~~
cr0sh
I'm not sure what to say when the code I recently turned in (and passed) for
the path planning project of term 3 in Udacity's Self-Driving Car Engineer
program works better at changing lanes than Tesla's system:

[https://github.com/andrew-ayers/udacity-
sdc/blob/master/Term...](https://github.com/andrew-ayers/udacity-
sdc/blob/master/Term%203/06%20-%20Project%20-%20Path%20Planning/CarND-Path-
Planning-Project-Writeup.md)

Then again, it does have a failure mode where occasionally, for some reason,
it will direct the car to change lanes into the path of a much faster moving
vehicle in the lane being changed to. Most of this is because it only runs the
behavior planner every second or so in the simulation, and probably doesn't
get everything perfectly correct in the prediction part (I honestly am not
sure where the problem lies, though).

~~~
kuschku
The problem Tesla has is that their system only has ~40 meters of visibility
to the rear or front.

That means if you're on the Autobahn, at say 130 km/h in the right lane, and a
Porsche is coming from behind at 300 km/h, the Tesla will not be able to see
it, and will consider the lane free.
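The closing-speed arithmetic makes this concrete, taking the ~40 m range claimed above and the speeds from the example at face value:

```python
# Warning time a 40 m rear sensor range buys against a much faster car.
sensor_range_m = 40.0
tesla_kmh = 130.0
porsche_kmh = 300.0

closing_speed_ms = (porsche_kmh - tesla_kmh) / 3.6  # km/h -> m/s
warning_s = sensor_range_m / closing_speed_ms
print(round(closing_speed_ms, 1))  # 47.2 m/s closing speed
print(round(warning_s, 2))         # 0.85 s from first detection to contact
```

Well under a second from first detection to contact: not enough time for the car, let alone a human, to react.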

~~~
boznz
Remind me never to drive on a German Autobahn.

~~~
kuschku
It’s quite interesting, because obviously an entirely different class of
issues becomes apparent when the speed between two lanes on a highway can
differ by a factor of 4.

This is what you get when the speed limit actually is "unlimited".

------
abalone
_> In May 2015, Eric Meadows, then a Tesla engineer, engaged Autopilot on a
drive in a Model S from San Francisco to Los Angeles. Cruising along Highway
1, the car jerked left toward oncoming traffic. He yelped and steered back on
course, according to his account and a video of the incident._

Is this video online?

~~~
revelation
This isn't exactly an isolated incident, YouTube has lots of videos of
autopilot steering wildly off course. The biggest problem is that Tesla allows
turning on autopilot on roads that are not a highway and feature significant
turns and hills obscuring "perfect lane vision", and the system is not
prepared to handle that at all:

[https://www.youtube.com/watch?v=ZBaolsFyD9I](https://www.youtube.com/watch?v=ZBaolsFyD9I)

[https://www.youtube.com/watch?v=IOnuKrzCLYc](https://www.youtube.com/watch?v=IOnuKrzCLYc)

~~~
Sohcahtoa82
I'd like to point out that the first video you linked is nearly 2 years old.

~~~
speakingmachine
What's your point? The second video is only a month old and in that video the
car still drives in the way only a total jackass would drive, in the most
favorable possible conditions for a self-driving car: clear and sunny with
little traffic, clear road markings, and no pedestrians or bicycles.

------
twsted
A reference to Chris Lattner:

"In recent months, the team has lost at least 10 engineers and four top
managers—including Mr. Anderson’s successor, who lasted less than six months
before leaving in June."

------
frgtpsswrdlame
Since we're finally getting some refutations of the self-driving hype, let me
drop some quotes here:

 _“I tell adult audiences not to expect it in their lifetimes. And I say the
same thing to students.”

"Merely dealing with lighting conditions, weather conditions, and traffic
conditions is immensely complicated. The software requirements are extremely
daunting. Nobody even has the ability to verify and validate the software. I
estimate that the challenge of fully automated cars is 10 orders of magnitude
more complicated than [fully automated] commercial aviation."_

- Steve Shladover, transportation researcher at the University of California,
Berkeley

[http://www.automobilemag.com/news/the-hurdles-facing-
autonom...](http://www.automobilemag.com/news/the-hurdles-facing-autonomous-
vehicles/)

 _"With autonomous cars, you see these videos from Google and Uber showing a
car driving around, but people have not taken it past 80 percent. It's one of
those problems where it's easy to get to the first 80 percent, but it's
incredibly difficult to solve the last 20 percent. If you have a good GPS,
nicely marked roads like in California, and nice weather without snow or rain,
it's actually not that hard. But guess what? To solve the real problem, for
you or me to buy a car that can drive autonomously from point A to point
B—it's not even close. There are fundamental problems that need to be
solved."_

- Herman Herman, director of the Carnegie Mellon University Robotics
Institute

[https://motherboard.vice.com/en_us/article/d7y49y/robotics-l...](https://motherboard.vice.com/en_us/article/d7y49y/robotics-
lab-uber-gutted-says-driving-cars-are-not-even-close-carnegie-mellon-nrec)

 _"While I enthusiastically support the research, development, and testing of
self-driving cars, as human limitations and the propensity for distraction are
real threats on the road, I am decidedly less optimistic about what I perceive
to be a rush to field systems that are absolutely not ready for widespread
deployment, and certainly not ready for humans to be completely taken out of
the driver’s seat."_

- Mary Cummings, director of the Humans and Autonomy Laboratory at Duke

[https://www.commerce.senate.gov/public/_cache/files/c85cb4ef...](https://www.commerce.senate.gov/public/_cache/files/c85cb4ef-8d7f-40fb-968c-c476c5220a3c/8BC0CC7E137483CEFD0C928ECB14E74E.cummings-
senate-testimony-2016.pdf) [pdf]

All quotes pulled from this article (which is really quite good and you should
read it in full):

[https://www.nakedcapitalism.com/2016/10/self-driving-cars-
ho...](https://www.nakedcapitalism.com/2016/10/self-driving-cars-how-badly-is-
the-technology-hyped.html)

~~~
redler
The easy part is relatively easy, but it's hard to conceive how the hard part
will be solved.

Just driving around New York City for a while makes me think that generalized
autonomous driving, as a problem, is essentially "solving" strong AI. Consider
the case of approaching a complex intersection during rush hour. There's a
traffic cop in the intersection waving his hands around. You reach the
intersection, the light is green, you want to proceed straight, but cars are
blocking the way because they're backed up into the intersection on the cross
street. The cop points directly at you, making eye contact, blows a whistle,
and shouts at you, pointing and yelling "right right right". You're uncertain
whether he means you should try to weave around the blocking cars, but he
blows the whistle again and it becomes clear he is telling you you cannot go
straight, and must instead turn right at the intersection.
He gestures again, indicating that he wants you to turn into the nearest lane
on the cross street, then points to the car behind you and indicates that it
should also turn, but into the center lane. You nod, and he looks away to
another car.

This happened. So a self-driving car would presumably have to understand and
interpret shouted commands, realize that they are the one being shouted at by
someone with the right authority, recognize gestures, somehow be able to
engage in the equivalent of recognized eye contact, be able to make an OK
gesture, and have some sort of theory-of-mind about the traffic cop as well as
the drivers of other cars.

Not easy.

~~~
archagon
Even worse: I've been on several mountain roads with stretches of one-way
traffic, where either an officer has to signal a switch in lane direction
every few minutes (which might not be clear otherwise, especially around a
bend), or _cars have to occasionally reverse to let opposing traffic through_.
Don't think I'll ever be letting an AI do _that!_

~~~
ghaff
I think you can argue that at least those are outliers, very rural sorts of
places. It's harder to write off major US cities. (To be clear, interstates
are still compelling use cases, but they're not universal self-driving.)

~~~
jpindar
That kind of thing happens occasionally around Boston, when snow piles turn
two-lane streets into one lane, and it's common anywhere construction or an
accident partly blocks a road.

------
jijojv
Level 2 still does not drive smoothly, as many have confirmed on their forums.
They do require you to jiggle the wheel every few minutes to ensure you're
alert.
There's also [https://www.hbsslaw.com/cases/tesla-
autopilot-2-ap2-defect](https://www.hbsslaw.com/cases/tesla-
autopilot-2-ap2-defect)

~~~
strange_quark
It's amazing to me that Tesla is able to sell a car in its price range that
lacks basic features that come standard on a $17k Corolla, like adaptive
cruise control or automatic emergency braking, especially since they
effectively reduced the capabilities of their cars by rolling out AP2. If any
other company tried to pull that, they'd be laughed out of the room, but
somehow, Tesla is cheered.

~~~
mph1
You've clearly never driven a Tesla if you think a $17k Corolla has more
advanced adaptive cruise control. I've driven my Model S over 15,000 miles,
with probably over 50% of that on Autopilot (mainly interstates), and NEVER
had an issue with the adaptive cruise control. It's absolutely flawless.
(Autopilot has its quirks, although it's still an extremely useful feature,
and the primary reason I bought a Tesla.)

~~~
modeless
Do you have the Mobileye hardware?

------
timdorr
Paywall bypass:
[http://facebook.com/l.php?u=https://www.wsj.com/articles/tes...](http://facebook.com/l.php?u=https://www.wsj.com/articles/teslas-
push-to-build-a-self-driving-car-sparks-dissent-among-its-
engineers-1503593742)

~~~
misterbowfinger
how'd you do that?

~~~
timdorr
Add [http://facebook.com/l.php?u=](http://facebook.com/l.php?u=) to the front
of the URL.

------
jballanc
Honestly, I don't understand why the automobile industry doesn't learn from
the airline industry. Airplanes worked out years ago how to balance autopilot
capabilities with the need for pilots to remain engaged and attentive. Simply
implement drive-by-wire, similar to Airbus' fly-by-wire systems: a driver's
inputs to the controls would still be required, but the autonomous systems
could prevent or limit certain actions (such as accelerating into a stopped
vehicle or swerving off the road).

~~~
alexanderstears
There's a norm of competence in airline pilots. We can't say the same for
automobile drivers.

~~~
onewhonknocks
Can't we? (about the vast majority of automobile drivers)

~~~
alexanderstears
Depending on where you set your standards, sure. But we have an expectation in
the United States that safety technology has to accommodate darn near everyone.

We have a law that airbags have to accommodate unbelted passengers:
[http://www.iihs.org/iihs/sr/statusreport/article/35/6/1](http://www.iihs.org/iihs/sr/statusreport/article/35/6/1)

Now sure, a passenger can be different from the driver, but it's the same
philosophy.

The number of illegal maneuvers I see every day on my commute is astonishing:
not using blinkers, intruding on crosswalks, not moving over for emergency
vehicles, following too closely, etc. It doesn't help that the only real
traffic law enforcement is around speeding, running red lights, and DUIs.

The problem with autonomous cars isn't the autonomous cars - it's
accommodating non-autonomous actors. It only takes one Google car hitting an
old lady who's chasing a duck in the street to become CNN breaking news for
the next 3 months, à la that airliner that disappeared.

------
arcanus
Good for the engineers having more ethics than VW, and resigning when they are
asked to go farther than the technology allows.

~~~
pkulak
VW engineers broke laws to do what they were asked to do. Tesla engineers are
just doing the best that they can and taking longer than marketing would like.
If that's a moral equivalence, then I guess we're all VW engineers.

------
localhost
FYI I've found a way around the WSJ paywall - copy and paste the title into
Facebook and click on that link.

------
manyoso
[http://www.cetusnews.com/tech/Tesla%E2%80%99s-Push-to-
Build-...](http://www.cetusnews.com/tech/Tesla%E2%80%99s-Push-to-Build-a-Self-
Driving-Car-Sparks-Dissent-Among-its-Engineers.B15UfxtwY3O-.html)

around paywall

------
rsp1984
Non-paywalled version:

[https://twitter.com/newsycombinator/status/90078007990679142...](https://twitter.com/newsycombinator/status/900780079906791424)

------
desireco42
Please no paywall articles. Please.

------
jaytaylor
Typical wsj paywall. Web link no longer seems to get around it, either,
unfortunately.

------
true_tuna
Please stop posting paywalled articles. Especially WSJ. This community
represents the future of the internet. I don't know what the answer is for
making sure content providers get paid, but the WSJ model isn't it. So let's
vote with our attention (or lack thereof) and kill this annoying practice
before it makes the internet an even more walled and unpleasant place.

~~~
Strom
Your post doesn't explain why the WSJ model isn't acceptable.
Paying money for a newspaper isn't exactly unprecedented, and there are plenty
of people who are fine with that model.

~~~
devrandomguy
They want us to commit to a subscription, using a credit card, in US currency.
That is significantly different from the newspaper model, because it requires
some trust on my part, and it is quite awkward; my CC sits in a firesafe when
I am not traveling. Subscriptions also encourage FOMO behavior, something I am
currently having a bit of trouble with, personally.

I too would prefer that we do not allow subscription content here. If the
story is significant, then it will be covered elsewhere.

