
Tesla Is Decades Away from Full Self-Driving Cars - velmu
https://www.thestreet.com/investing/tesla-is-decades-away-from-full-self-driving-cars-14853271
======
alexknvl
> If you are at 98% accuracy for autonomous driving right now, you are
> practically nowhere. In 100 seconds of driving, you will crash and possibly
> die, twice.

98% accuracy, 100 seconds, crash twice. Take that, (data) scientists!

------
ricardobeat
The author has absolutely no idea what he is talking about. In 100ms of
driving you'd also die twice, apparently.
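To make the objection concrete: if "98% accuracy" is read as a per-interval success probability, the predicted crash count is purely an artifact of the interval length you choose, which appears nowhere in Musk's comment. A minimal sketch (the one-decision-per-second framing is the article's assumption, not Tesla's):

```python
# Sketch: why "98% accuracy" alone predicts nothing without a time base.
# If 98% is a per-decision success rate, the expected number of failures
# over a fixed driving span depends entirely on how often you count a
# "decision" -- an arbitrary modelling choice, not a property of the car.

def expected_failures(total_seconds: float, decision_interval_s: float,
                      accuracy: float = 0.98) -> float:
    """Expected failures over total_seconds if each interval
    independently succeeds with probability `accuracy`."""
    decisions = total_seconds / decision_interval_s
    return decisions * (1 - accuracy)

# One "decision" per second, as the article assumes: 2 crashes per 100 s.
print(round(expected_failures(100, 1.0), 6))   # -> 2.0
# One "decision" per 100 ms: 20 crashes in the same 100 s of driving.
print(round(expected_failures(100, 0.1), 6))   # -> 20.0
```

Same car, same 100 seconds, a 10x difference in predicted crashes, obtained by nothing more than redefining the bookkeeping interval.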

See the disclaimer at the bottom: 'sell-side analyst', TSLA short.

~~~
fncypants
Clearly. Not only is he bad at statistics, building a straw man out of an
offhand comment from Musk, but he's motivated to trash Tesla. The short TSLA
disclaimer should be at the top, or better yet in the headline: “Guy who
profits from Tesla stock dropping says Tesla is not doing well...”

------
chrisco255
This is a poor article. It's arguing about Musk's use of "98% good" as a
metric for how far along Tesla's tech is. This was an off-hand comment,
certainly not a solid metric. The article claims a car would crash in 100
seconds at "98% good". But the truth about autonomous driving tech is it's
very reliable in 98% of driving conditions. It's the edge cases, extreme
weather conditions, etc., that need more training and development.

~~~
Traster
I agree that the 98% comment is meaningless: 98% of what? 98% of a journey is
as good as 0% if the 2% is every junction on your route. But let's not pretend
it's extreme weather conditions we're battling with. The Google cars are still
struggling to merge onto highways. We're not that close.

~~~
chrisco255
I've seen some pretty impressive footage of Cruise AI's navigation through the
crowded, narrow streets of San Francisco. I think autonomous tech will first
be geo-fenced to specific, well-tested routes. That's the move that makes the
most sense to me. It would still be incredibly useful if, say, all interstate
driving were automated.

------
Traster
I think the author is actually making a very decent point: when you're looking
at how close we are to solving a problem 100%, it's not relevant to say "we're
at 98%", because 98% of the problem could well be 1% of the work, and the last
2% the other 99%. Having said that, Musk seems to be saying they've done 98%
of the work, which frankly I would find laughable - because if that were true,
then Musk owes his shareholders a proper update on self-driving. The key for
me is the caveat - we'll have it done by EOY, but _regulators_. Let's all look
forward to Elon Musk calling the regulators pedophiles and cucks, because we
know how that story ends.

------
cashsterling
I think 99.999%+ trustworthy/reliable self-driving cars are a long, long way
away for both technical and socioeconomic reasons. I am not an autonomous
robotics expert; however, I do have some background in control systems
engineering & system verification (i.e. how do you know your system will
operate correctly).

Technical reasons:

- Neural networks are not even the right way to model decision making for
controlling a complex system in a complex environment. A NN large enough to
adequately cover the system dimensionality will be so unpredictably non-linear
as to be inherently untrustworthy (i.e. you can't predict behavior over the
entire range of possible scenarios the system may/will encounter); this
applies to both the sensing/computer-vision and the decision-making aspects of
the control problem. Actually controlling the vehicle, by comparison, is easy
and already a solved problem using good ole mechanics/dynamics and classical
control theory.

- Adequate sensors for cars to 'see' as well as humans in all conditions are
really expensive and will be for some time... and as economically tractable
sensors improve, the expense of maintaining them will still be high.

- The embedded-systems languages being used for CV and system control (mainly
C/C++) are not amenable to formal verification. And even if all embedded
systems programming were done in SPARK or Rust... there are hardware-level
bugs as well.

Socioeconomic reasons:

- Our legal system is set up to reap maximum punitive damages from car
owners/manufacturers for "their" mistakes. All a jury has to hear is "that
autonomous car did something a human wouldn't do... and caused the accident",
and the car owner (and the car co., in the event of a bug) is on the hook for
the liability... possibly massive liability.

- Electric cars and autonomous cars are probably not even the right solution
to the problems they are trying to solve ("the environment", "too much
traffic", "I don't like driving", etc.). Shuttling around one or two people in
a 3000-4000 pound car is much, much less resource- and energy-efficient than
things like electric bikes: building a car has a huge upfront
environmental/carbon impact that, imo, is not negated by the fact that it is
electric or autonomous. Quick example: for a 40k USD car, 15k-25k USD is
direct/indirect energy cost for material sourcing, manufacture, assembly, and
delivery. Even a really nice electric bike, by comparison, is easily well
under 1k.
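Taking the comment's own figures at face value (they are rough, unverified estimates, not measured data), the gap works out to roughly 15-25x before the car has driven a single mile:

```python
# Rough ratio check using the commenter's own figures (not verified data):
# embodied/upstream cost share of a 40k USD car vs a high-end e-bike.
car_embodied_low, car_embodied_high = 15_000, 25_000  # USD, commenter's estimate
ebike_total = 1_000                                   # USD, "easily well under 1k"

ratio_low = car_embodied_low / ebike_total
ratio_high = car_embodied_high / ebike_total
print(f"car embodies ~{ratio_low:.0f}x to {ratio_high:.0f}x an e-bike's entire cost")
# -> car embodies ~15x to 25x an e-bike's entire cost
```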

For this reason, I think we are going to see increased activity in cost
effective single person transport and local municipal change [not allowing
cars on some/many city streets] to help make things like bikes, trikes, and
scooters safer to use.

I could write a lot more on this... but that is all for now.

