Tesla Was Kicked Off Fatal Crash Probe by NTSB (bloomberg.com)
357 points by Element_ 10 months ago | 350 comments



Tesla has totally botched this. What Tesla has run into is the approach to safety that makes commercial air travel safe - figure out exactly what went wrong and fix it so it stays fixed. The aviation community is used to this. "Flying" magazine publishes crash reports in most issues.[1] Pilots read those. Manufacturers read those. People in aerospace read those. NASA collects near-miss reports to keep tabs on why close calls happened.

The aviation industry has experience with aircraft automation leading pilots into trouble, and so does the NTSB. Here's a former head of the NTSB commenting in 2014 on the Asiana crash at SFO:[2] “How do we design this whole human-machine system to work better so we don’t lose the professionalism in the humans who are doing this?” Crash investigators are very aware of the problems of a poor man-machine interface leading pilots into a bad situation. That is the kind of thinking needed to make these kinds of systems work.

It's not about blame. Most air crashes have multiple causes. That's because the single points of failure were fixed long ago. This is somewhat different than the traffic law enforcement approach. The NTSB's previous investigation of a Tesla collision resulted in Tesla changing their system to enforce the hands-on-wheel requirement. That may not be enough.

Playing hardball with the NTSB is not going to work. The NTSB rarely uses it, but they have broad authority in investigations. They can get subpoenas. They can inspect Tesla's facilities, hardware, and software: "An officer or employee of the National Transportation Safety Board ... may, during reasonable hours, inspect any record, process, control, or facility related to an accident investigation under this chapter." The NTSB even has the authority to publicly disclose corporate trade secrets if necessary.[3] Aviation companies routinely cooperate, knowing this, and the NTSB seldom has to compel disclosure.

Incidentally, lying to the NTSB is a federal crime. The CEO of Platinum Jet Services tried to cover up some safety violations that resulted in a crash. He became Federal Prisoner #77960-004 from 2007 to 2013.[4]

[1] https://www.flyingmag.com/tags/aftermath [2] https://www.aopa.org/news-and-media/all-news/2014/october/21... [3] https://www.law.cornell.edu/uscode/text/49/1114#b [4] http://www.cnn.com/2010/CRIME/11/16/new.jersey.charter.convi...


Yep, the NTSB is a federal agency that you do NOT want to screw around with. To me it's like Tesla wants to be an auto company, but it's some weird cargo cult version with a few parts missing


I think when you look at it from that perspective Tesla are in real trouble. Essentially what they're publicly stating is "Our car detected it was in a dangerous situation, gave warnings to the driver but took no action". That's really not good enough. I don't know of any other system in a car which is designed to allow you to take your hands off the wheel - because that seems to actively encourage you not to pay attention and to increase reaction time. I could definitely imagine the NTSB coming back from this investigation saying that Tesla's active advertising campaign suggesting that drivers should take their hands off their steering wheel is bad and played a significant role in this accident.

The question is whether repeated incidences could cause the NTSB to suggest removal of the autopilot feature entirely (in its current state).


I disagree. It's the dawn of autonomous driving; Tesla needed to get the message out that lives had been saved by that tech already, and did; etc. The Mandarins aren't moved by context, let's put it that way. They had no legal power to squelch Tesla, so after blustering as if they did, they have opted to use their more petty powers, as far as I can see. The Feds could have given some quarter, here, given the stakes - and I don't mean money. The Feds had powers enough but wanted to reach for powers over publicity that they didn't have.


I’m confused about how hands on the wheel is “enforced”? Just chiming and blinking lights? Or does the car actually shut off auto driving mode when you remove your hands?


Audible and visual alerts, followed by bringing the car to a safe stop.


Not sure why you mentioned Platinum Jet Services lying to the NTSB. Tesla did not lie. Tesla just wants to communicate publicly regardless of the NTSB schedule. Tesla should not be bound by the NTSB agreement; it is a good thing that they got out of it. The NTSB is good at providing recommendations, and they would likely be very good recommendations, in a few months when they wrap up their investigation.


I get frustrated when Tesla blames the driver for these crashes by saying that they aren't touching the wheel or are ignoring messages. It is extremely common for my car to warn me to keep my hands on the wheel WHILE I have my hands on the wheel. Even shaking it a little will sometimes not cancel out the warnings. It doesn't seem to matter if my hands are on the top or bottom of the wheel. Anything short of a death grip and constant jiggling of the wheel doesn't seem to consistently register as keeping your hands on the wheel. Anyone who has used Autopilot for a significant amount of time has been beeped at for not having their hands on the wheel even though they are. And yes, the car will make crazy maneuvers at times that no driver in their right mind would make. I still like it and am glad I opted for it, but it is far from earning its namesake.


That is very significant!

Your comment suggests that Tesla's claims about the driver having his hands off the wheel for x seconds before a crash could be wrong. If the sensors report the driver's hands are not on the wheel when they actually are on the wheel, then data logged about how long the driver's hands were off the wheel should be considered suspect.

I hope that's being investigated.

Personally, I'm having trouble believing the driver who died in the recent accident ignored the warnings for six seconds before the head-on collision, especially when he knew Autopilot didn't work well at that section of road.

I'm not familiar with how the system works. If the warning engages, is there a guarantee that the driver will have to take over shortly? Or are there scenarios when the warning turns off by itself and so the driver could have been waiting to see if the car would correct?

[Edit: lolc and Vik1ng pointed out that the warning isn't related to unsafe conditions as I implied. It's used whenever the sensors think the driver's hands are off the wheel.]


Tesla didn't claim that he didn't have his hands on the wheel, they claimed that it wasn't detected. Tesla knows how inaccurate that is. Tesla has always been shamefully misleading with all of their victim-blaming PR pieces. I get that there are plenty of "unintended acceleration" cases where the driver straight up lies about hitting the wrong pedal, but they included the information about what the driver did earlier in the trip and the time he would have had a view of the barrier solely to imply that it's his fault that Autopilot killed him. That's never an okay thing to do, yet it seems like this is Tesla's modus operandi when there's some high profile accident.

Who cares if he had 5 seconds to see the barrier if he only had 0.5 seconds to realize that the car had gone into casual murder mode right as it veered into the barrier? They make it sound like it had to have been driver error, with facts that are irrelevant to the question at hand. Your assumption that Tesla claimed he didn't have his hands on the wheel at the time of the accident is exactly what I'm talking about; Tesla's PR release is filled with weasel words to do exactly that.

I'm not even trying to suggest that Autopilot is less safe than an alert human driver, but one thing is clear, we certainly can't trust Tesla to determine how safe it is.


Who cares if he had 5 seconds to see the barrier if he only had 0.5 seconds to realize that the car had gone into casual murder mode right as it veered into the barrier?

This video is quite relevant:

https://www.youtube.com/watch?v=6QCF8tVqM3I

A fully alert driver trying to reproduce this behaviour (and succeeding). Observe how long the barrier is visible, how long it takes him to react, and how close the car comes to hitting the barrier.


This video is horrifying. I assume even the sensors on cars with automatic braking should be able to identify a barrier that large in front of the car. I am way oversimplifying this, but I'm sure there is some level of prioritisation in Tesla's Autopilot software to decide what action the car should take. Is the Autopilot software prioritising staying in lane (the car appears to stick to the left white line) over collision avoidance?


It can’t, nor can most cars with automatic braking. The Tesla will happily run headfirst into a brick wall all day long. The reason is that the driving world is chock-full of objects with zero velocity relative to terrain - the trivial cases being a rise in the road ahead, or an at-grade bridge where the travel lanes dip below grade to pass. Therefore, Autopilot and many auto-braking algorithms filter out completely static objects (see the previous story about a Tesla ramming a fire truck stopped on the highway).

Musk has commented publicly before (though somewhat obliquely) about this flaw. He indicated that the company is trying to build a map of reference data so that known static roadway features can be filtered out automatically, and real hazards can be picked out.
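
To make the filtering concrete, here is a minimal Python sketch of the kind of logic described above; the field names, threshold, and structure are illustrative assumptions, not Tesla's actual implementation.

    # Illustrative sketch (not Tesla's code): why a naive radar-based braking
    # pipeline can ignore a stopped fire truck. Radar returns carry range and
    # closing speed; anything whose implied ground speed is near zero (signs,
    # bridges, barriers, a rise in the road) gets discarded to avoid constant
    # false braking on harmless static clutter.

    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float            # distance to the detected object
        closing_speed_mps: float  # how fast we are closing on it

    def moving_obstacles(returns, ego_speed_mps, tol_mps=2.0):
        """Keep only returns whose implied ground speed is clearly non-zero.

        A stationary object closes on us at roughly our own speed, so its
        implied ground speed is ~0 and it is filtered out, even if it is a
        wall or a stopped vehicle directly in our path.
        """
        kept = []
        for r in returns:
            implied_ground_speed = ego_speed_mps - r.closing_speed_mps
            if abs(implied_ground_speed) > tol_mps:  # only "really" moving things survive
                kept.append(r)
        return kept

    # A barrier 80 m dead ahead while driving at 30 m/s closes at ~30 m/s,
    # so its implied ground speed is ~0 and it is silently dropped:
    print(moving_obstacles([RadarReturn(80.0, 30.0)], ego_speed_mps=30.0))  # -> []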


Mapping can never solve this problem. If these cars don’t yet have the ability to detect stationary physical barriers that represent a crash risk, then they are further away from being practical than I thought, and I’m extremely pessimistic. It is clearly a hard problem to solve with naive tech. Unfortunately we have a huge industry and tens of billions of dollars and politicians, corporations, governments, and media who all believe this naive tech is close to perfect. But it can’t see a wall or a fire truck in its path? If this isn’t solved then this whole house of cards will fall apart. The sooner the better, IMO. Let’s start building cars that supplement driver awareness instead of numbing it.


This would be trivial to solve with a vertically mounted LIDAR. If the horizontal distance measured level with the bumper of the car is significantly closer than the horizontal distance at a lower angle, you can classify that as a stationary object; if the distance lengthens or shortens continuously as the angle changes, it's a ramp. The classification is almost trivial (though the hardware may be expensive or difficult to implement for high speeds).
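
Roughly, the classification sketched above might look like the following; this is an illustrative toy with made-up thresholds and example returns, not a real perception stack. It treats a run of beams at different elevation angles that all land at about the same horizontal distance as a vertical surface, and a smooth spread of distances as ground or a ramp.

    # Toy classifier for a vertically scanning LIDAR. Each beam is
    # (elevation_angle_rad, range_m), negative angles pointing downward.
    # Thresholds and sample returns are invented for illustration.

    import math

    def classify_vertical_scan(beams, same_dist_tol_m=0.5, min_hits=3):
        # Project every return to a horizontal distance from the bumper.
        horiz = sorted(r * math.cos(a) for a, r in beams)
        # Find the longest run of returns piled up at ~the same distance.
        run, best_run = 1, 1
        for prev, cur in zip(horiz, horiz[1:]):
            run = run + 1 if (cur - prev) < same_dist_tol_m else 1
            best_run = max(best_run, run)
        if best_run >= min_hits:
            return "vertical obstacle"   # wall, barrier, rear of a stopped truck
        return "ground or ramp"          # distances spread out smoothly

    # Barrier ~40 m ahead: beams at several angles all return ~40 m horizontally.
    wall = [(-0.00, 40.0), (-0.02, 40.0), (-0.04, 40.1), (-0.06, 40.1)]
    # Flat road: steeper (more downward) beams hit the ground closer and closer.
    road = [(-0.02, 75.0), (-0.04, 37.0), (-0.06, 25.0), (-0.08, 19.0)]
    print(classify_vertical_scan(wall))  # vertical obstacle
    print(classify_vertical_scan(road))  # ground or ramp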


But in the path of travel??!


Not to exonerate Tesla, but cars are ALWAYS almost about to hit something. The next time you’re driving down a curvy road, pay close attention to all the obstacles that are by the side of the road and how often you are pointed right at them and how small the steering adjustments you make are and how little time there is between you almost hitting something and then not.

This is part of why trying to second guess a driver or autopilot is insane; a crashing and non-crashing car are almost identical, except for the crash.


True. But these aren't ordinary cars; they're supposed to be self-driving. A self-driving car should know its next move 10 seconds before it makes it (or something similar). There's no excuse for it hitting a barrier in zero traffic.


Tesla has likely built 80-90% of an autonomous car, enough to fool those who use it into believing it can be trusted. But as with all engineering endeavors that suffer from the Pareto principle, it's going to take far longer to get close enough that it's better than a human who's paying attention.

It's foolish to oversell the capabilities of AP through marketing, as it's fairly easy to conflate reliability with observed behavior.

As long as the system mimics enough of the capabilities of full autonomy and is oversold as such, customers will underestimate the risks and become victims of an unfinished product.


Tesla's system is lane following, with a radar that knows when the cars in front of you slow down or speed up. That's pretty much it; it's not an autonomous car in general. Mine is very reliable for cruise control adjustment, good but not awesome for lane detection (falls down in bad rain, for example; works well in clear weather). It tells you this via the display; it knows if it's working. I can't understand these fools who think it's some super advanced system. It really does little more than follow lanes and adjust speed based on detection of cars in front.


Video from 2016 showing exactly that:

https://www.tesla.com/videos/autopilot-self-driving-hardware...

If you use this to market your car without any caveats, don't be dismissive of people who might read into it that your car is capable of self-driving, when in reality it's clearly nothing of the sort.


Some Tesla owners are being dismissive of attempts to paint Tesla owners as being unaware of the actual capabilities of Autopilot, based on Tesla's marketing.

Owners see marketing occasionally. Owners see the reminder that they have to stay alert 100% of the time each time they turn on Autopilot. Owners get the alert that they need to put their hands back on the wheel. And so forth.

If you've got data about owners being confused, please share.

Meanwhile, next time an owner offers anecdata about what they personally have learned about Autopilot's capabilities, perhaps you could respond to that, instead of telling the owner that they're being dismissive.


Well, probably "fool" was too strong, given that they call it "autopilot". Once I explain to people (well, programmers) what the Tesla system is, they go "oh, that seems possible." Tesla made it out like there was some magical AI running on the car.


> Tesla claimed that the driver had his hands off earlier in the drive, not at the time of the collision.

You are wrong: "the driver’s hands were not detected on the wheel for six seconds prior to the collision."

see: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...

Perhaps you shouldn't attack Tesla for using "weasel words" in their press release if you don't take the time to read it.

Edit: You seem to have edited your post without any disclaimer. Your edits were not visible when I responded. Most people on HN will edit their comments with an "Edit: I was wrong about ____" rather than ninja editing the mistaken claim away. This helps make discussions easier to follow and limits misunderstandings.


I already corrected that, but the point still stands that their press release is intentionally misleading and Tesla knows very well that detecting hand presence on the wheel has problems. Read the press release in full and tell me that it isn't misleading.

Originally when this was posted there was concern over whether "prior to the collision" was talking about the 6 seconds before the collision or earlier in the drive. In any case, Tesla is implying that the victim wasn't driving responsibly, and there's no data released to the public that would back up such a claim.


I have read the press release. What was misleading? The only thing I see that is potentially misleading is where they quote the safety statistics for vehicles with Autopilot equipped.


The sentence is only relevant if it carries information about whether the driver's hands were in fact on the wheel; the reliability of the hand-detection system is not believed to be relevant to the crash in any way.

So, if the statement is actually a claim about the reliability of the hand-detection system when it sounds like it should be a claim about the driver's hands (the subject of the passive-voice sentence), it is misleading.


That's the point of the investigation. Many things could have happened. Maybe the driver fell asleep; maybe he tried to move the wheel and it got stuck. Maybe the car slipped on a patch of ice and the crash had nothing to do with Autopilot. We don't know yet. It takes time to interview witnesses and review multiple sources of information, and that's the NTSB's job.

Tesla seems to think they know exactly what happened after only reviewing data collected from the car. They see what they want to see and have become blind to any other possibility. I think that's terrifying. I couldn't trust an automaker with that kind of arrogance.


Given that this video exists, I'm putting my bet on the Tesla actively swerving into the barrier: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...


In that video, it looks like the driver had about 1 second to react to the Tesla's mistake, to avoid hitting the barrier.

Dumb question: if I have to constantly stay vigilant with 1-second reaction time because at any time my assisted-driving car might try to kill me (exactly how changes every time the software updates) -- isn't it less effort to just drive my car myself, so I mostly just have to worry about the drivers around me?


Not dumb at all; it is the big unspoken elephant in the room. It is clear that the current and next iterative versions of autopilot, self-driving, etc. are woefully behind the hype. While in theory a "perfect" AI driver is better, is that true in the chaos of real-world traffic? Will drivers even want to be subject to NTSB-level scrutiny when an accident happens? There are more questions than answers still.


Hell, I don't even like letting cars shift for me. /sticksnob

;-)


> ignored the warnings for six seconds

There were never warnings at that time. Tesla just worded it like that to mislead people. It only talks about warnings given earlier in the drive.


I think the wording is pretty clear, but I think people didn't take the time to fully read the press release, just skimmed it.

This seems pretty clear:

"The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision."


The warning is not a reaction to unsafe conditions. It comes on when the system detects (rightly or wrongly) that the driver doesn't hold the wheel.


It could also be that a false warning took the driver's attention off the road in the second or two they had to save their life.


While naming a product should usually be up to the company that creates said product, my opinion is that Tesla should no longer be allowed to call this "autopilot", for public safety reasons.

Autopilot is a well-known word which suggests a certain level of autonomy, and a car that might kill you if you take your attention away for 6 seconds simply does not qualify for that name. Tesla changing the name from Autopilot to something better resembling reality would send a strong message to people to be more careful about using it while their attention is elsewhere.


There is actually a semi-official categorization that Tesla seems to ignore:

https://www.wikiwand.com/en/Autonomous_car#/Levels_of_drivin...

"Pilots" are level 3 and upwards according to this classification. Level 2 systems are called assistants. If Tesla adhered to the industry agreement, they would call their system an "assistant" and not a pilot.


Ok, so if they rename it to 'lane guidance' or whatever, everything would be cool?


Yes, it would be much better. They also need to take away the in-your-face Full Self Driving marketing on their Auto Pilot page to eliminate any misleading association.


I know nothing about aviation, but isn’t “autopilot” on a commercial plane pretty much just “maintain altitude and heading”? Pilots still have to be engaged and active, right?


Don't miss the point here. Tesla doesn't require that you be a licensed air pilot before you buy the car. It doesn't matter what pilots know about autopilots, it's about what the general public believes when they hear "autopilot". I wouldn't be surprised if 50% of the world believes that an autopilot allows both pilots to leave the cockpit for five or ten minutes. So the word "autopilot" misleads them, even if a pilot would know the limitations.


And in aviation you have the added advantage that you don't just smash into a divider at 35,000 feet with one second's notice.


No, but you are required to be a licensed driver and the car warns you if you take your hands off the wheel. I don't know how you can claim that the meaning of "autopilot" isn't clear to the drivers.


Did your driver's license involve training in how to handle autopilot and its common errors and how to correct them? If not, I'm not sure how having a driver's license is relevant?


> If not, I'm not sure how having a driver's license is relevant?

The driver is licensed to operate the motor vehicle. It is their responsibility to be familiar with the safe operation of the vehicle.

> how to handle autopilot and its common errors and how to correct them?

Do you believe that anything additional was required here other than keeping the car in its lane?


The problem is that if the accident was anything like this video[1] then the driver barely had any time to actively correct the problem, a problem that a well-worn and understood system with driver training would have prepared the driver for.

Also, note that the beep is the driver taking over from autopilot.

[1] https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...


Based on that video, all that is required is that you stay in the lane. If you are paying attention, you have plenty of time to do that. If you are not paying attention and/or your hands are not on the wheel, I could see how you would struggle to react in time.

But blaming this on the use of the term "Autopilot" is idiotic and discussion centering around it is silly when there are much more important questions to answer about the design of autopilot, other driver assistance systems, and the future of self-driving cars.

99% invisible has a really interesting story about this topic: https://99percentinvisible.org/episode/children-of-the-magen...


In General Aviation at least, there are varying levels of autopilot (depending on how nice the system is you can maintain heading and altitude or fly a pre-programmed course with navigational waypoints), but all of them will fly you right into the side of a mountain or into restricted airspace if you let them. You can't just set it and stop paying attention. It seems that is what people want when they think of "autopilot" for their cars.


If that is the case, autopilot on a plane seems to be more like cruise control expanded to maintain pitch, yaw, and roll than autonomous flying.


There is tremendous variability, but this is basically correct. In addition, the sky is wide open and empty compared to even a not so busy street.

Airplane autopilots really are solving a much simpler problem than even a level 3 autonomous car has to solve.

The most advanced autopilots can automatically take off and land, but that requires substantial ground equipment at each airport to support it.


+1. Instead of developing super smart autopilots, we could just instrument the highways so they can tell the car where to go. The instrumented roads could send out warnings and give control back to the driver when there are roadworks ahead.
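
As a purely hypothetical sketch of what "instrumenting the highway" could mean in practice, a roadside beacon might broadcast the surveyed lane geometry and a work-zone flag, and the car would hand control back when the flag is set. Every field and name below is invented for illustration; this is not any real vehicle-to-infrastructure standard.

    # Hypothetical roadside-to-vehicle broadcast; all fields are invented
    # for illustration and do not reflect any real V2I protocol.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class RoadSegmentBeacon:
        segment_id: str
        lane_centerline: List[Tuple[float, float]]  # surveyed lane path, (x, y) in meters
        speed_limit_kph: float
        work_zone_ahead: bool  # roadworks flag: hand control back to the driver

    def on_beacon(beacon: RoadSegmentBeacon, autopilot_engaged: bool) -> str:
        if beacon.work_zone_ahead and autopilot_engaged:
            return "alert driver and disengage before segment " + beacon.segment_id
        return "follow surveyed centerline at <= %d km/h" % beacon.speed_limit_kph

    msg = RoadSegmentBeacon("US-101-N-412", [(0.0, 0.0), (50.0, 0.2)], 105.0, True)
    print(on_beacon(msg, autopilot_engaged=True))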


That would require a massive infrastructure investment. And seeing as we can't seem to get a good chunk of roads to even be in good repair, I don't have high expectations for the possibility of instrumented roads.


So be it. We already have extreme amounts of signage for traditional drivers; it's time we stop treating autonomous cars like second-rate vehicles and start providing them with detailed surveys of roadway boundaries and all of the signage that drivers rely on, in a manner suited to them instead of to a human.

You say that it would take a massive investment but realistically existing signage, reflectors, and road markings probably cost more than the equivalent for an autonomous vehicle would.


No more massive than it takes to maintain these roads to begin with.


If we were willing to do that, we could have had self-driving cars decades ago. They were showing off systems with magnetic lane markers on the Discovery Channel when I was a kid.

The exciting thing about self-driving cars this time was that they worked on existing roads, making them financially feasible.


GM is driving around with LIDAR to create very precise maps of specific highways and their cars use those detailed maps to navigate.


And what happens when the lanes are shifted for road construction? Killing a driver is bad enough, killing an innocent construction worker would be marginally worse.


Those maps will get outdated with construction, etc.


The key thing is that, unlike with Tesla's self-driving system, just about nothing can happen in a plane that will kill you if you don't disable the autopilot within 1-2 seconds.

Perhaps final stages of ILS approach? That'd be the exception, but pilots are trained and certified for this - and definitely not allowed to snooze off then.

(I'm not a pilot, but have spent a fair amount of time and money on various PC flight sims; edit: and used to drive a Mazda 3 with a very good radar cruise control and AEB, before ditching it for a bicycle because I was getting old and fat.)


I'm a pilot and wouldn't consider them autonomous. In the most sophisticated systems, you can plug in a fairly complex route to the destination, but there's a disconnect in automaticity when transitioning from cruise to approach. There are standard terminal arrivals, but they all assume contact with ATC for specifics, and in the case of lost communication in instrument flight you're expected to use your best judgment for ambiguous sections of the route clearance. Autopilots don't have judgment, and they can't do two-way communications either, so they're not able to accept ATC clearances directly; a pilot has to do that. So yeah, not autonomous.


Weird pro-Tesla, anti-airliner FUD. A modern airliner’s autopilot will handle everything between takeoff and landing, and can land the plane if needed. The A380 autopilot has been able to respond to TCAS (collision avoidance system) advisories automatically for almost a decade.

GA autopilots may be more limited, but that’s not what the general public is thinking of when you say autopilot.


I'd wager the general public has overestimated the capabilities of aviation autopilots for quite a long time and only recently have common autopilot systems begun to approach what the public thinks about them. For instance in 1947 a USAF C-54 is reported to have taken off, crossed the Atlantic and landed on autopilot (all under very careful supervision). Needless to say autoland on commercial airliners would not be near ubiquitous for many years following that, but I suspect the general public did not have an accurate understanding of this. Furthermore I suspect many among the general public greatly overestimate the extent to which autopilot controlled take-off exists. "Pilots just push button and take a nap" is a sentiment I've often heard expressed, perhaps only partially in jest.

This is far from a defense of Tesla. On the contrary, the public's ignorance about what capabilities are implied by 'autopilot' suggest to me that Tesla should not be using the word.


The capabilities you have with automation in modern planes are amazing.

I once had the privilege to sit in the cockpit during an entire flight. That was in the late nineties and probably would be completely out of the question nowadays.

The weather and visibility were shitty, and what was really interesting was that they initiated the landing automatically, but when the runway became visible (at about 400 meters) the first officer took over and landed the plane manually.

Two other things I learned were the sheer number of manuals they have available in that very cramped space (maybe maintained electronically nowadays), and that if you think they just key a couple of sequences into the flight management system and then play solitaire, you would be very wrong.

Both pilots were permanently active and on high alert during the entire one-hour flight. That may be a bit more relaxed on long-distance flights, but people who believe it's like driving a bus are really, really wrong.

It was an amazing and very interesting experience.


You’re correct, an airline autopilot can do all that under _continual pilot supervision_. The autopilot also cannot communicate with ATC so pilots will adjust the flight path in flight to account for ATC instructions and changing weather. It’s not hands-off.


It is "hands off", it is not "situational awareness off". And airplane autopilots do not require split-second interventions if something goes wrong, the buffer is usually in a couple of seconds range. Even then trained pilots sometimes fail to correctly take over the flying duties (see Air France Flight 447).

Tesla is claiming that if the driver's hands are not on the wheel, then they are not responsible for what their "autopilot" does.


The most advanced autopilot systems today can navigate and even land the aircraft, but the majority of systems (especially on general aviation aircraft, but even many commercial) are closer to what you describe, and can kill you if you're not paying attention.


And those autopilots can only land under close supervision by specially trained aircrew.


Perhaps the Tesla autopilot should be subject to the same regulations and restrictions regarding use as airplane autopilots.


Well, one thing to note is that what the public thinks an aircraft autopilot can do is not exactly what it does do. Hence the reason many object to Tesla's use of the name: they are exploiting the ignorance of the public with regard to the capability of the product.


Only in general aviation (Which has an appalling accident rate.)

Autopilots in commercial aircraft - the kind that you'd be flying from San Francisco to New York - are capable of far more than just dumb cruise control, including navigation and landing.

If 'maintain pitch and direction' were enough to be 'autopilot', then my SO's '97 Toyota Avalon had 'autopilot'.


This is nonsensical. What kind of autopilots do you think airplane pilots and boat captains use? That's where the name comes from, and if people think otherwise maybe they shouldn't take their cues from science fiction.

If I name something warp drive, are you then entitled to believe that you will now be able to travel faster than light? Will you blame me for naming the product incorrectly if you travel at a speed less than c?


It is irrelevant whether you think it is _reasonable_ for people to associate the word ‘autopilot’ with autonomy. Reasonable or not, people do, and that has safety consequences for everyone who shares the road with them.


> blames the driver

well, you are driving a car from a PayPal mafia founder. Why does everyone forget that Musk started by ignoring banking regulations and mostly laundering money quasi-legally, while later on holding funds from everyone under arbitrarily enforced rules and completely ignoring user complaints? (bye HN karma :)

seriously, why is it that everyone hates PayPal's ethics but loves Musk and holds him up as holy?


I think you're overestimating the amount that people hate PayPal's ethics. For instance, I don't know what you mean by ignoring banking regulations and laundering money quasi-legally. I read his biography written by Ashlee Vance and I don't think this was covered.


PayPal has changed a lot. Back when my parents sold thousands of items on eBay, PayPal would screw them over many many times, and not just with disputes - PayPal often locked funds for months without cause.

It is fairly easy to find out what PayPal used to be like when Musk ran it - just ask some older eBay sellers.


PayPal really hasn't changed that much. They still pull the same bullshit and if they went out of business tomorrow the world of ecommerce would be a better place


>PayPal often locked funds for months without cause.

Not unique to PayPal. Deposit a check of an amount that's unusual for your "profile" at Chase and it'll sit on the money for three weeks.


That's just a check clearing. Paypal would lock entire account funds, payment clearance was not an issue.


No, it's beyond check clearing. The check clears in two days, but Chase still sits on it for a while as a matter of policy.


He's driven the first major innovation in space flight since the 70s.


Except the space shuttle, international space station, mars missions, hubble telescope, and a dozen other things. But other than that, sure.


My favorite is the DC-X that SpaceX and others are recreating: https://www.youtube.com/watch?v=wv9n9Casp1o


I was aware of the DC-X. It was tested in the 90s but it never went anywhere. Why didn't someone else do what SpaceX is now doing? They were too busy selling 70s tech and cashing checks.

When I said 'first new thing in aerospace since the 70s' I meant shipped. Things that never shipped don't exist.

The Shuttle I suppose could count but it was a mixed bag and in some ways a step backward from the Saturn V.

Seems like people have flipped from mindless Musk worship to being haters all of a sudden.

Tesla may fail but they validated the market for high-end electric cars. Before Tesla the popular narrative was that EVs are impractical, slow, have poor range, and can't possibly ever compete with ICEs. People argued that modern transportation was absolutely and fundamentally inseparable from oil, even spinning this into popular peak oil doomsday narratives.

The biggest threat to Tesla today is that now that they've validated the market larger more experienced car companies are jumping on the EV bandwagon.


Yeah, the DC-X was neat and all, but that's suborbital on a stubby cone-shaped rocket. That's essentially what New Shepard accomplished, and it's not really comparable to actually recovering the first stage on a real orbital launch. As for the shuttle, well, there's a reason why Buran wound up being nothing more than a hangar queen, and it was arguably superior to the shuttle. From a practical perspective I think it's fair to say that SpaceX has had the largest impact on access to orbit since the 70s.


The Hubble is amazing, but I'm not sure how it advanced space flight. Wasn't it a shuttle payload? The ISS was Soyuz and shuttle launches as well, if I recall correctly. So, yes, there is a ton of innovation in space 'stuff', but space flight? That seems like more of a stretch to me.

Perhaps the Delta and Atlas rockets were amazing innovations. They did great jobs with the recent Mars missions. I don't recall getting particularly excited for many Mars launches. (The missions were cool, though.)

Those boosters landing together, though, made me feel like a little kid watching a shuttle launch. But those boosters, man. Maybe that's not innovation, but that landing sure felt like a big moment to me.

I think the parent poster is trying to say that, for a long time, it felt like small refinements.


Replace “space flight” with “launch vehicles” and it’s accurate. NASA stalled out in that department, between the Shuttle and SLS, due to bad management and politics.


> by saying that they aren't touching the wheel or are ignoring messages.

The funniest part of Tesla's statement is the fact that it basically renders "Autopilot" worthless. Or, at best, turns it into nothing more than an Adaptive Cruise Control system with an extremely misleading (and dangerous) name.

Tesla has been all too happy to let people believe that Autopilot is what the name implies. They only make a real effort to correct that misconception when something bad happens. Then they finally say "It's not actually autopilot, you're still supposed to keep your hands on the wheel and your eyes on the road at all times", to which we collectively shrug and say "Well then what the hell is the point of Autopilot? And why did you give it such a misleading name?"


I agree entirely. What's the point of an autopilot if I have to spend my entire drive watching it like a hawk to ensure it doesn't steer me into a wall?

You know what I really want? I want a vehicle to save me when I screw up. I love skiing, but it's a dangerous sport, and the most dangerous part is driving home. You're in the mountains in the dark, the road might be icy, and you're tired from a long day of strenuous exercise.

I want to see humans and machines working together. A car that could spot deer on or beside the road would save lives. They can be very difficult to see in the dark, and hitting one could send it through your windshield at highway speed.

Augmenting human ability has a lot of potential. Even just adding sensors that humans don't have would be huge. I bet those deer are way more obvious in infrared.


Mercedes has a night vision system like that https://youtu.be/d03SuJ0TVcY


Autopilot is a driver assistance feature, not a safety feature.


Tesla claims it is. “Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.” — https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...


That is a really good point. I hadn't thought until now about how that language makes it all so ambiguous. The implication that Autopilot is safer than you doing it yourself has to be misleading, as it implies that you shouldn't have to monitor it so closely.

Ugh.


For a non-safety feature they do scream a lot about how many lives it saved...


> What's the point of an autopilot if I have to spend my entire drive watching it like a hawk to ensure it doesn't steer me into a wall?

If you fall asleep while driving, it may prevent you from driving into a barrier or on-coming traffic. It's not meant to take over the act of driving from you, but to provide assistance.


you're still supposed to keep your hands on the wheel and your eyes on the road at all times

...and be ready to correct it should it try to steer into objects. IMHO that's even worse than plain old "manual" driving --- at least in that case, the car doesn't have a mind of its own and won't suddenly decide to steer itself. It'll keep going in a straight line even if (absolutely not recommended on a public road, but a good way of checking the suspension and steering) you take your hands off the wheel.

To me, it sounds more like driving with Tesla autopilot is like being a driving instructor for a not-too-great learner.


" Or, at best, turns it into nothing more than an Adaptive Cruise Control system"

This is literally what Tesla's autopilot system is.

It is adaptive cruise control and lane assist.

They do not claim to be self driving. It is not a level 3 system. It is level 2.

The purpose of autopilot is to have both the features of lane assist and adaptive cruise control.

What's the big deal? Why is this so surprising to you? This is what it is. Go buy a car from Google if you want a self driving level 3 car.

If you don't want a level 2 system then don't buy it.


>it is far from earning its namesake.

The FTC should require them to stop using the term Autopilot. The common understanding of that term makes it misleading.

Words matter and Tesla is trying to have the benefit of the term without the responsibility for what it implies.


Autopilot, as used by the aviation industry, still requires an attentive pilot. It is a pilot-aid to reduce workload, it does not obviate the need for a pilot prepared to maintain control of the aircraft or correct for unexpected conditions. Autopilots can and do malfunction, in this video[0] it attempts to make a >1G maneuver unprompted. You'll note the pilot in command in this video is actively scanning for traffic, he is physically positioned to take control of the aircraft, he is paying attention to instrumentation, and is actively participating on frequency. In other words despite having an autopilot: he is still piloting the aircraft.

I don't think the issue here is Tesla misusing the word. The issue is that the common (non-pilot) understanding of the term is wrong. People piloting heavy machinery have an onus to maintain their currency & proficiency, as well as be ready to correct for faults in their instrumentation and pilot aids.

Autopilot is intended to be a tool to reduce pilot workload so you can focus on other aspects of maintaining correct control of the vehicle. (Namely in an aircraft it assists you with aviation, leaving you better able to navigate & communicate.) Instead what we are seeing with these assists in cars is that people are using these pilot aids and then engaging in unrelated distractions.

[0]: https://youtu.be/QbvfkKyurJI?t=13m25s


If you engage the autopilot in a (certified) plane at cruise level and glance over a topographical map of the area, you can practically fall asleep, wake up and still be completely safe (until you run out of fuel). This argument/parallelism with aviation is null.

Airplane autopilot: Engage, divert attention for minutes, not die (consistently). Tesla autopilot: Engage, divert attention for 10 seconds, die (also consistently).

Airplane autopilot!=Tesla autopilot. Name is misused.

Also take a moment to appreciate what you're saying: "It's ok to use this name on cars, because professional pilots know you can't completely rely on plane autopilots. People should know that, and if they die it's their fault."

The video you provided shows a very unusual case. Most autopilots on small aircraft are not really running software so such "bugs" are virtually non-existent, and in large aircraft they only use tested-to-death systems and they practically never bug out.


Do most people think that autopilot is used to avoid other planes and obstacles like mountains? I always assumed autopilot was basically like cruise control for planes and just kept it level, at a fixed direction, and constant speed. Same thing as autopilot on a sailboat.


Old autopilots did exactly that, in addition to keeping a predefined climb/sink rate (long ascents/descents are really boring to do manually in small aircraft).

New autopilots in commercial airliners can do many things (follow GPS tracks or radio navigation waypoints, control engine throttle, line up and bring the airplane 50-100ft over the runway etc) but they never do collision avoidance, landings or terrain mapping. Airliners also have alarms for low altitude (most have radar altimeters) but they prompt the pilot for action, they're not supposed to avoid anything on their own. Apparently Tesla doesn't even do that.


Modern airliners can and sometimes do do automatic landings [1] [2] [3] [4]. It's done when visibility is very poor. If it is not required, most pilots prefer manual landing. It's less work. The autoland system is more complicated to set up, and is more work to monitor in order to take over if something goes wrong. But when the visibility is low enough, many airlines require their pilots to use it.

[1] https://www.quora.com/Why-cant-airplane-landings-be-conducte...

[2] https://www.quora.com/How-often-are-airliners-landed-using-a...

[3] https://www.usatoday.com/story/travel/columnist/cox/2014/02/...

[4] https://www.airlinepilotforums.com/technical/65056-autoland-...


Autopilot refers to a wide range of capabilities on boats and airplanes. Many of those require an active, attentive operator for safety.

The car is warning you if you take your hands off the steering wheel. I don't see how anyone but a moron would value their misconceptions about the term "autopilot" over the clear signals that your hands are required to be on the steering wheel.

I wish people would stop bringing up this idiotic point about the term "autopilot" and instead talk about how the different designs of autopilot can encourage or discourage attentiveness in operators.


>Autopilot, as used by the aviation industry

>I don't think the issue here is Tesla misusing the word. The issue is that the common (non-pilot) understanding of the term is wrong

In fact, because it's well-known that there's a common misunderstanding of the term (as you've acknowledged) but the company chooses to use it anyway, then that's sufficient to represent intentional misuse. They are leveraging this misunderstanding in their branding, then hiding behind the "real" meaning when it's convenient.

Companies test and invest heavily in their branding, which includes a full reasoning of the connotations associated with the words they choose. There is literally no way that Tesla is unaware of people's common misunderstanding.

So, maybe it's clearer if you look at it another way: why choose a word that could create any confusion when there are countless other choices?

>the pilot in command in this video is actively scanning for traffic, he is physically positioned to take control of the aircraft, he is paying attention to instrumentation, and is actively participating on frequency. In other words despite having an autopilot: he is still piloting the aircraft.

If a driver took a similar monitoring posture, there are any number of situations in which he/she would not have time to react to avoid an accident. There is generally a far greater margin of error and time for correction when an aircraft's autopilot fails. This is why a system that requires such monitoring in an automobile is a fundamentally flawed design. There are too many situations in which there is simply not enough time.

Because drivers are expected to a.) allow the system control of the vehicle but b.) recognize its failures and take back control to correct within milliseconds? That's super-human and, at best, adds n units to the human's reaction time--with potentially devastating consequences.

And, remember, it's beyond "environment monitoring". Drivers must now correct for when the vehicle does not recognize a hazardous situation and also respond when the vehicle itself suddenly creates a hazardous situation (like veering towards a barrier). There is no amount of "environment scanning" that can predict such a malfunction.


> Autopilot, as used by the aviation industry,

Pilots may realize that autopilot is a term that can refer to extremely simplistic systems that require constant pilot attention. The public at large that Tesla is selling to, however, equates the term with big airliner autopilots that could land the plane if needed.


> Autopilot, as used by the aviation industry

Which is a tiny tiny percentage of the general population.

As has been said, it is what the general population understands "autopilot" to mean which is the important part. The fact that the niche and general understandings are different should not be a gap that a company inserts itself into to mislead the public as to their product's capabilities.

Yes, most people don't know the nuances of what "Autopilot" means in the aviation industry, but trying to educate the general public is a losing proposition vs just telling Tesla to rename the system.


I agree, something more like Volvo's "Pilot Assist" makes sense to me.


Adaptive Cruise Control with Lane Keeping Assistance seems to be the most common industry name for it.


Adaptive Cruise Control seems even better.


ACC is a different feature that exists separately: it does not touch the steering.


Yep, this used to happen frequently on road trips. It relies on squeezing the wheel fairly firmly.

I got into the habit of periodically giving it a good squeeze while resting my hand on it.


It doesn't want a squeeze, though; it needs resistance/feedback on turning the wheel. Having your hands on the wheel isn't enough, you actually have to give it a little tug, so the alert happens pretty often unless you have a good cadence of tugging on the wheel randomly while in Autopilot.
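
In other words, the check is torque-based rather than grip-based. A minimal sketch of that assumed behaviour (an illustration of the idea, not Tesla's actual code; the threshold and samples are made up):

    # Assumed torque-based "hands on wheel" check: the system looks for small
    # counter-torque on the steering column, not for grip. A hand resting on
    # the rim produces almost no torque and so reads as "hands off".

    def hands_detected(torque_samples_nm, threshold_nm=0.3):
        """True if any recent steering-torque sample exceeds the threshold."""
        return any(abs(t) > threshold_nm for t in torque_samples_nm)

    resting_hand = [0.02, 0.05, 0.01, 0.04]  # palm resting on the rim
    light_tug    = [0.02, 0.05, 0.45, 0.03]  # one brief tug on the wheel

    print(hands_detected(resting_hand))  # False -> nag alert despite hands on wheel
    print(hands_detected(light_tug))     # True  -> alert cleared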


Hence the increasingly common tactic of Tesla owners to hang a water bottle from one side of the wheel, which provides constant torque, and fools the system into thinking you are always holding it.


That sounds almost dangerous.


Might those tugs affect its learning?


It is NOT learning.


As I understand it, other auto manufacturers provide similar capability, though they may call it "lane keeping" instead of "autopilot".

What I haven't seen discussed: Are these cars from other manufacturers having similar problems with crashing into things? Are Tesla crashes considered more newsworthy, which is why we hear more about them? Or are drivers of these other cars more attentive? Or do these other cars actually have better technology?

Just curious why the discussion is always Tesla "autopilot" vs Waymo "full autonomy" vs unaided humans, when other manufacturers are also putting level-2 "lane keeping" systems on the road.


Tesla brags, and thinks it is OK to use their customers as guinea pigs, more than other manufacturers do.

Other car manufacturers are more careful than Tesla in what they let get out of the lab. For example, they wouldn’t have to send out a firmware update that, as a new requirement, forces users to keep their hands on the steering wheel (https://thenextweb.com/artificial-intelligence/2016/09/23/te...) because they wouldn’t yet dare deploy one that doesn’t.

And I don’t think their technology is worse than Tesla’s. See for example https://www.technologyreview.com/s/520431/driverless-cars-ar... (2013).


> Just curious why the discussion is always Tesla "autopilot" vs Waymo "full autonomy" vs unaided humans, when other manufacturers are also putting level-2 "lane keeping" systems on the road.

Because, ironically, Tesla and Waymo (and Uber) have been more successful at deliberately drawing media attention. The thing is, once you draw media attention, it tends to stick around even when the story goes someplace you don't want it to.


A lot of people also paid for autopilot before it had the hands on wheel every few minutes requirement.


> I still like it and am glad I opted for it, but it is far from earning its namesake.

I trust the intelligence of drivers, but this brings up a good point. Could calling it "autopilot" be perhaps a bit much right now? I'm not claiming it's false advertising kind of bad, but I can see it being better for marketing than giving a reasonable sense of what the feature is capable of.


I would argue that the "creates false expectations that leads to fatal automobile collisions" kind of bad is potentially a more serious kind of bad than the "false advertising" kind of bad.


It is an accurate use of the term. If the system is warning you when you take your hands off the wheel, you would have to be a moron to assume that your attention is not required.


I also often find that the car doesn't detect my hands on the wheel when they are. Sometimes a gentle shake isn't even enough and giving it a firmer shake causes it to disengage.


IMO CoPilot would have been a great name that much more accurately covers what it does.


As a pilot, I'd consider that name worse. Copilots literally fly the plane, often with no real supervision. Autopilots require supervision.


> It is extremely common for my car to warn me to keep my hands on the wheel WHILE I have my hands on the wheel. Even shaking it a little will sometimes not cancel out the warnings.

This is not what I experience. Maybe the sensor in your steering wheel is faulty? For me, the warning goes away as soon as I lightly jiggle the steering wheel (I stop as soon as I encounter any resistance).


Tesla blaming the driver for not using their steering wheel correctly is effectively admitting the obvious: that their car is badly designed. If the car needs to be used a certain way to be fully functional and they can't work that into the design in a way that compels the user to use it correctly, they have failed. The reason conventional wheels are an example of great design is that the user is forced to use them. Their no-knob, no-button interior was, until now, the more obvious design fail. Some people see Tesla's badly designed, completely flat, all-digital interface as the 'future of the automobile', while in reality what they're looking at is DeLorean 2.0.


Interesting. Have you reported those issues to Tesla?


It's a widely reported issue over at /r/teslamotors. I see a thread or mention of it at least once a week.


A reddit group is not a substitute for contacting the company directly.


Really? It seems companies respond to problems that are aired in public much quicker and more thoroughly than through their official support channels.

.@brand on a public twitter with enough followers to care about usually gets a pretty quick response.

Hell, how many times have YC companies prioritized support issues because a user posted it here on HN? I see it all the time with coinbase and crew, even googlers hanging out seem to be able to get account bans reviewed relatively easily.

I would argue the /r/teslamotors sub would be BETTER than going to Tesla directly. The motorist that died in this crash had contacted Tesla multiple times regarding this issue, since the crash plenty of others have made videos of similar behavior at similar median splits across the interstate system.

If the driver had made those complaints public perhaps others would have corroborated his accounts and or recreated the issue with video proof. Certainly putting somewhat public pressure on Tesla to fix the problem quickly.


I kinda disagree with this. It's the company's job to be on top of whichever forum its users are using to discuss its products, especially looking out for security/safety issues. While it would be nice for the OP to directly contact the company, I'd place more of the weight of responsibility on the company than on the user.


Are you implying that it is likely that his hands were ON the wheel when he drove into the barrier? I don't see why he would have driven into the wall if that were the case.


Why do you get frustrated if the driver failed to maintain control of the car?

When the autopilot on an airplane crashes the plane, what do you think is the primary cause?


Hands on the wheel is also like not going above the speed limit. Yes, 10 and 2. But in reality we all bend those rules a bit.


A lot of, if not most, advanced driver trainings recommend 9/3.


So if your hands are on the wheel is the autopilot still in control and you are not? I'm curious because if their telemetry says his hands were not on the wheel but they were, then did he just drive himself into the barrier?


Tesla's autopilot is setup to check if the driver's hands are on the wheel.

If it does not detect hands on the wheel it disengages.

The argument they make is that having hands on wheel is, effectively, having the driver in the loop as a part of the safety system ready and able to take over 'instantaneously' if necessary.
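
For what it's worth, the enforcement described upthread ("audible and visual alerts, followed by bringing the car to a safe stop") amounts to a small escalation ladder. A rough sketch, with timings and wording that are illustrative guesses rather than anything from Tesla documentation:

    # Rough sketch of an escalating hands-off response; the time thresholds
    # and messages are invented for illustration, not Tesla's actual values.

    def autopilot_response(seconds_without_detected_hands):
        """Map time with no detected steering torque to an escalating action."""
        if seconds_without_detected_hands < 15:
            return "drive normally"
        if seconds_without_detected_hands < 30:
            return "visual alert: 'Hold steering wheel'"
        if seconds_without_detected_hands < 45:
            return "audible alert"
        # Beyond this the system assumes the driver is unresponsive.
        return "disengage: slow down, hazard lights on, come to a safe stop"

    for t in (5, 20, 35, 60):
        print(t, "->", autopilot_response(t))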


So his hands weren't detected on the wheel and the autopilot disengages at highway speeds? I can't really understand how this is supposed to work.


It doesn't work. That's my whole takeaway. It's just a fancy lane keeping and cruise control system.


what telemetry? Why would anyone trust this data?


Been driving one on long drives regularly the past year and I've never experienced the problems you describe before.


I’d be really concerned about the news then. Will you react quickly if the car decides to drive towards a post?


Same. My other annoyance about every single one of these threads are people making comments about how unsafe it is when you can tell they've never even driven a car equipped with it.


“We believe in transparency, so an agreement that prevents public release of information for over a year is unacceptable.”

As noted in the article, the information being released includes blaming the victim and other PR spin. This doesn't serve the public or further highway safety; it furthers Tesla's commercial goals.

If Tesla's withdrawal hampers the investigation, let's hope the agency uses its subpoena power to get the info it needs to determine the cause of the accident and what Tesla needs to do to prevent similar incidents from happening in the future.


Ironic that Tesla touts the value of being transparent. It freely cites the NHTSA report that Autopilot seems to reduce crashes by 40%. But the NHTSA is currently being sued because it won't release the data behind that finding:

http://www.safetyresearch.net/blog/articles/quality-control-...


Right. NHTSA/DOT are being sued over Tesla's refusal to provide data requested in an existing FOIA request.

So Tesla files an FOIA and claims they are the transparent ones. This is a bad attempt at PR.


> As noted in the article, the information being released includes blaming the victim and other PR spin. This doesn't serve the public or further highway safety; it furthers Tesla's commercial goals.

I disagree. People need to know that Autopilot doesn't eliminate their responsibility for safely driving the vehicle and paying attention. People need to know they HAVE to respect the Autopilot warnings. If that is happening too much in an area and becomes too cumbersome, then don't use Autopilot there. And to wait a year for that to be released would not be in the public interest. Neither would a year of unanswered BS from short interests and competitors, considering that Tesla's vehicles overall are incredibly safe.

I think NTSB's action to remove Tesla is fine. Tesla felt they had to release such information (and there's a good argument for that, IMHO), but that is against NTSB's policy as it could compromise the investigation. So be it. Removing them may be best for all.


There would be a very simple first step here: stop calling it Autopilot, and stop promoting anything self-driving anywhere near it.

To tell people that you have to pay attention 100% of the time, you don't need any of the details. You have stated that over and over again; it just doesn't do enough, because a few weeks later people have forgotten all these statements, and new customers, or people who just like the car but don't follow the company, might never even have read them.


Except it is autopilot; it performs the same function as an aircraft autopilot, no more nor less.

You could make the same silly linguistic argument about "cruise control."


The first words on the homepage for Tesla Autopilot are "full self-driving".


> To tell people that you have to pay attention 100% of the time, you don't need any of the details.

The problem with this strategy is that literally no one is clamoring for a self-driving car that requires you to have a license, both hands on the wheel, both eyes on the road, and to be alert at all times.

If you have to mention it, fine, but engineer the ways you mention it to have low penetration and low retention.


It is strange seeing people claim Tesla is just doing PR spin here. They aren't saying this is the driver's fault because the driver decided to drive into the median. They are saying it is the driver's fault because he put too much faith in the autopilot system. Tesla's entire argument is that their autopilot is not as capable as some people believe. That is a weird stance to take if their only goal is to sell more cars.


>That is a weird stance to take if their only goal is to sell more cars.

That is only to weasel out of the legal issues. On the side, they are doing marketing stuff that pushes the feature as vastly more capable. For example, Elon Musk tweeting videos that show cars traveling by themselves on unmarked roads, and declarations such as "The human behind the wheel is only there for legal reasons, the car is driving all by itself."

So the actual marketing is based on this unrealistic projection, and only these "statements", which the public will soon forget, are based on its true capability.


>That is only to weasel out of the legal issues.

Which supports my point. These responses have likely had a bigger influence from lawyers than from PR folks. The goals of those two groups are almost diametrically opposed to each other. The best thing for Tesla from a legal perspective is to downplay the capabilities of Autopilot. The best thing for Tesla from a marketing and PR perspective is to exaggerate the capabilities of Autopilot. Tesla is doing the former in these responses. They are choosing the legal response over the PR response. It should also be noted that this is probably the best move from a public safety standpoint. I don't know why people are claiming these responses are PR spin, or how these responses could possibly sell more cars.

Regarding the rest of your post, I was not talking about Tesla's general marketing. I haven't seen Musk do anything like you claim. It is certainly misleading and irresponsible if he does do that and implies that any current Tesla can "drive all by itself".


>I was not talking about Tesla's general marketing. I haven't seen Musk do anything like you claim. It is certainly misleading and irresponsible if he does do that and implies that any current Tesla can "drive all by itself"

Please take a look at the very start of the video attached here [1]. It says "The person in the driver seat is only there for legal reasons. He is not doing anything. The car is driving itself".

Now the next one [2], where Elon Musk retweeted a video of a Tesla navigating an unmarked road; there is a Reddit discussion about the same here [3].

[1] https://www.thesun.co.uk/news/2231417/tesla-shows-off-self-d...

[2] https://twitter.com/Teslarati/status/980476745106231297

[3] https://www.reddit.com/r/teslamotors/comments/88yk66/elon_ju...


That first video is not a demonstration of the current Autopilot feature. It is a demonstration of a future full self-driving feature. Those two features are purchased separately from Tesla and have their own multi-thousand-dollar price tags. I think it is reasonable to expect an owner of the car to know the difference between the two, while the general public wouldn't have any clue about the differences. It is also reasonable to expect Tesla to make the difference clearer. Their responses to this accident are certainly doing that by making it clear that the driver should be attentive.

Both videos also show attentive drivers who are able to jump in at a moment's notice. The first video is a promo video and the second is from a Tesla fan. I wouldn't be surprised if they were specifically edited to showcase the car's features. I can certainly understand how someone might view these videos and think that Autopilot is flawless, but I don't think Tesla is doing anything unethical in those videos.


I doubt it's really about legal issues. Could they just disable or scale back the autopilot? Or at least warn people to not rely on it while avoiding any commentary about the facts of the accident. That'd be more polite/deferential to the investigators and to the victim.

I wouldn't be surprised if there's some fear over allowing uncertainty to fester for a year while awaiting the results of the investigation. But I think there's a much bigger risk in being perceived as unwilling to play by the same rules as everyone else.


>Could they just disable or scale back the autopilot?

The problem with this is that Tesla clearly believes that autopilot plus an attentive driver is safer than an attentive driver alone. I think that is most likely true.

It is possible that autopilot plus an inattentive driver is less safe than an attentive driver alone. I think this is plausible, but I don't think there is any real evidence to prove this one way or another.

The question then becomes does Tesla have an obligation to save the people in the second group from themselves and in turn put the people in the first group in greater danger?


Still doesn't explain why they couldn't just release a statement warning people not to rely on the autopilot until the authorities have completed their investigation.

That avoids prejudging whether it was primarily a defect with the vehicle, or driver error, or both.

Above all just respect the process. It's important to yield to an impartial entity when people have been hurt or killed.


> They are saying it is the driver's fault because he put too much faith into the autopilot system.

Any faith the driver had in Tesla's autopilot system would be due to Tesla's marketing. If Tesla believes its self driving capabilities are not quite ready for market, maybe they shouldn't be selling it and calling it an "autopilot".


Maybe they are on some corporate autopilot, why would they steer themselves into a wall otherwise?


Their CEO has many responsibilities. He might be involved with setting up 4000 rocket launches for another company.


Is there any indication that the information that Tesla released wasn't true?

And Tesla said they'd continue to cooperate with the investigation, so why do you think a subpoena is suddenly needed?


> Is there any indication that the information that Tesla released wasn't true?

We don't know and that's exactly the reason why NTSB has their disclosure policy in place.

It's to stop biased actors from dominating or setting the narrative early, before impartial actors have had a chance to corroborate what happened.

What if the car malfunctioned and the warnings weren't sounding? What if the warnings were sounding even though Huang had his hands on the wheel (as another poster above indicated has happened to them)?

I simply can't take what Tesla says at face value given the huge vested interest they have in controlling the story here.


Have you looked at that information? Here, I'll get it for you, from https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...

> Over a year ago, our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%.

Why don't they talk only about Autosteer instead of Autosteer+AEB (and other features combined)? Why not discuss how many accidents could have been caused by Autopilot if the driver had not corrected it? Why is there no commentary on whether their active driving system would be better run as a passive driver-error-correction system, as other manufacturers do?

> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

Ahh, so a Tesla is safer than a 10-year-old beaten-down car with no airbags driven on country roads; I never expected that. A Tesla is safer than even a motorbike, what a surprise. And just having Autopilot hardware in your car (even disabled) makes you 3.7 times less likely to die. Does that say anything about Autopilot, or are they just touting non-Autopilot premium safety features?

> There are about 1.25 million automotive deaths worldwide. If the current safety level of a Tesla vehicle were to be applied, it would mean about 900,000 lives saved per year.

So motorbikes driven in the Himalayas are more deadly than a Tesla on California highways; who would have guessed. What does this say about Autopilot again?

> We expect the safety level of autonomous cars to be 10 times safer than non-autonomous cars.

Elon Musk also expected driverless cars to be everywhere in 2017. And autonomous doesn't need to mean hands-off driverless cars; they keep conflating assistive-feature benefits with their hands-off technology.


- The average US fatality is not in a ten-year-old car on a country road.

- US crash rates are lower than worldwide rates, so Tesla’s estimate of lives saved worldwide is low.

- If Autopilot is only used part of the time, that implies its safety benefit is significantly more than 3.7x. A Model S is not 3.7x better at passive safety than other cars.

Of course none of this data is very meaningful. You have to compare the rates of autopilot users to non-autopilot users of similar demographics, and have all crashes recorded, not just fatalities, to get close to a comparative samples.


> US crash rates are lower than worldwide rates, so Tesla’s estimate of lives saved worldwide is low.

Road fatalities per 100,000 inhabitants per year (2013, WHO):

    UK      2.9
    Spain   3.7
    Germany 4.3
    France  5.1
    USA    10.6
Among the rich countries, the US crash rate is one of the worst. With a similar set of vehicles, this rate varies a lot per country and per year. Hint: the problem may not be purely mechanical.

Seriously, would Tesla's autopilot lower the crash rate in the UK? In Thailand? I guess it wouldn't change much in European countries, and any good car would do the same in poor countries.


> I guess it wouldn't change much in European countries

Why not?

Via https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r..., here is another table (same countries, same order) giving road fatalities per 10⁹km in 2015:

    UK      3.6
    Spain   7.8
    Germany 4.9
    France  5.8
    USA     7.1


> Why not discuss how many accidents could have been caused by Autopilot if the driver did not correct it.

It's hard to gather statistics about things like that because it just looks like a normal drive, unless the driver records the near miss, which is unusual unless they're driving with a dashcam or have had something happen in the same place multiple times.


It's not about whether what Tesla released was true or not; it is about whether or not it is the whole story and whether or not the investigation has run its course. Tesla reporting any data before the investigation is complete - and so before all the facts are in - so long as that data paints Tesla in a good light is extremely unethical.


I'm sure what they admitted was true, but they left out information as well. They should release the entire set of logs for the crash, for transparency.


There are 19 SEASONS of Air Crash Investigation. I can already hear the stern narrator's voice of TCI - Tesla Crash Investigation - saying that Tesla interfered with the first dozen investigations... Let's hope, indeed, this one will give us a better look into the real causes of the accident, including the malfunction of the hands-off detector, which distracted us from the real causes of the other accidents.


>As noted in the article, the information being released includes blaming the victim and other PR spin. This doesn't serve the public or further highway safety; it furthers Tesla's commercial goals.

Saying clearly that abdicating your responsibility to drive the car can result (and has resulted) in deaths does serve public safety. It's critically important that people driving this section of road (and others like it) know about this as soon as possible.

Call it victim blaming if you want, and certainly there was some spin, but it is unfair to say that this has no safety element.


Right. As you mention, and as Tesla did mention in their press release yesterday, they will continue to cooperate on technical matters. That part is not optional.


They could justify that by saying: Autopilot saves lives, therefore not setting the record straight as fast as possible could lead to fewer people using Autopilot and more casualties as a result. (They could also be doing this out of pure financial motives.)


Can they actually say "Autopilot saves lives"? It's never been proven that Tesla's system is safer than human driving so I would think they would run afoul of some marketing law if they were to claim that.



The version of code tested doesn't necessarily reflect the current state. Bugs are introduced, and things that worked before stop working properly after an update.

One such example of this is at https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...

> Yep, works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH.

> It's important to remember that it's not like you got the update 5 minutes before this happened. Even worse, you may not know you got an update if you are in a multi-driver household and the other driver installed the update.


The NHTSA report makes no claims about Tesla's safety relative to human drivers, only that Teslas with TACC + Autosteer are safer than Teslas with TACC alone. The NHTSA never even looks at how often Autosteer is activated so both buckets are filled with tons of human driving miles, from the report:

> The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual Autopilot use.

Both buckets also have plenty of miles with adaptive cruise control on - the data can't be used for a human vs autopilot comparison.

And even taking those considerations into account, there is a lawsuit [1] asking for the data used to come to those findings which NHTSA and Tesla have both resisted.

Tesla has never proven that Autopilot is safer than a human driver.

[1] https://jalopnik.com/does-teslas-autosteer-really-cut-crashe...


"The agency found that Teslas without Autosteer crashed an average of 1.3 times per million miles driven, compared with 0.8 per million miles for vehicles equipped with Autosteer. That shows that just having Autosteer available on a car — not necessarily enabled at the time of crash — reduced crash rates by about 40 percent. That’s a truly staggering improvement."

That's (a) comparing Teslas to Teslas, not to all vehicles, and (b) not necessarily comparing safety, but crash rates.
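
For what it's worth, the "about 40 percent" is just arithmetic on those two quoted rates. A quick sanity check (Python, using the numbers exactly as quoted, not independently verified):

    # Crash rates per million miles, as quoted from the NHTSA report
    without_autosteer = 1.3
    with_autosteer = 0.8
    reduction = (without_autosteer - with_autosteer) / without_autosteer
    print(f"{reduction:.0%}")  # ~38%, which gets rounded up to "about 40 percent"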


The report conflated Autosteer and AEB, plus AEB was rolled out at the same time, so you can't compare.

However, if you want to compare Autosteer vs. no Autosteer, you need to compare Teslas to Teslas, and compare crash rates, not fatalities.


I think the difference between autopilot and other safety innovations is that the failure state of autopilot is rather spectacular (maybe it's closest to seatbelts/airbags in this regard). When it works, it's great. When it doesn't work and causes harm, it's easy to point to as a reason not to use it.

A lot of the other car-safety innovations usually have a failure state where they don't put you in a worse-off position than you normally would have been in had that innovation not existed (e.g., crumple zones, crash-safe interiors, highway design/signage, ABS, etc.).

Reducing the absolute # of car deaths is always a good thing, and undoubtedly # of deaths saved by Autopilot > # of deaths caused by Autopilot, but looking at that comparison makes people uneasy, especially if you can picture yourself on the RHS of that equation.

Personally I'm not sure where I stand. I do find it uncomfortable having ever-increasing levels of software complexity in my car.


The headline has been updated to say that:

> Tesla Was Kicked Off Fatal Crash Probe by NTSB

> The National Transportation Safety Board told Tesla Inc. on Wednesday that the carmaker was being removed from the investigation of a fatal accident, prior to the company announcing it had withdrawn from it, according to a person familiar with the discussion.

> NTSB Chairman Robert Sumwalt relayed the decision in a call to Tesla’s Elon Musk that was described as tense by the person because the chief executive officer was unhappy with the safety board’s action. NTSB is expected to make a formal announcement in a release later Thursday, said the person, who spoke on the condition of anonymity.


"Tesla, in a statement issued late Wednesday, suggested the company chose to leave the probe."

Technically, that's sorta true. By releasing information and opinions about the ongoing investigation, Tesla decided not to be a part of it.


> Technically, that's sorta true. By releasing information and opinions about the ongoing investigation, Tesla decided not to be a part of it.

"You can't fire me, I quit!"

Also laughable from Tesla's statement. "We are committed to transparency as a company, and NTSB was telling us we couldn't release information about AutoPilot, and that was unacceptable to us".

Huh, all those people looking for service manuals may be curious to hear about Tesla's corporate commitment to transparency.


That is, terrifyingly, the same logic my friends use with their young children.


Only in this case, it's like a parent giving a kid 'a good shaking' to 'wake them up'.

Any 'artificially intelligent' car that can't avoid running straight into something solid right in front of it is dangerously stupid. Elon's age is starting to show.


The age comment manages to be both a non sequitur and an ad hominem, so let's ignore that.

The issue at play here is far larger than a car having a crash. The issue I was referring to is the actions of Tesla as they respond to the crash. That is cultural, and it affects both the process of creating and releasing products and the process of responding to their failure. If your product can kill people - and let's be clear, cars can easily kill people, SDV or not - you need to act like professionals when something happens. Playing the blame game, in public, and violating the expectations of a body like the NTSB shouldn't garner trust, no matter who is right at a technical level.

EDIT: fixed my abysmal grammar.


Can we keep age discrimination out of this discussion please?


Yes, that was ill-considered. My apologies.

So instead, Asimov's Rule#1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."


The HN headline should be updated too.


> Tesla said that the Model X driver’s hands weren’t on the steering wheel for six seconds prior to the fatal crash

Call me a sceptic, but saying that with absolute certainty is troubling. As an engineer, knowing how a lot of software and hardware systems are actually implemented, 6 seconds is such a minuscule time frame. Data about hand position could be sitting in a write cache somewhere waiting to be flushed - inside a monitoring service, inside the OS, inside a RAID controller, inside a storage controller - or even de-prioritized by a busy IO scheduler. Then, at the moment of a crash, the data might never make it to storage for recovery.

I'm not saying that it is, but if the hand-monitoring service is glitching out, it will take a lot of time to resolve the issues completely, mostly because, just looking at the data as a remote observer (without a camera installed in the car), it would simply look like the hands were not on the wheel.

Also, it doesn't help that for Tesla this seems to be a nice way out of this kind of crash - they can just say that the driver's hands were not on the wheel and that's that. Who even has the data and can decode it? How can we make sure the data was not tampered with?

This needs to be investigated properly and not just by Tesla, they are a biased party.


> As an engineer, knowing how a lot of software and hardware systems are actually implemented, 6 seconds is such a minuscule time frame.

It's a geological amount of time to a real-time system, which anything related to controlling a car absolutely will be. "Multiple seconds of unpredictable latency are OK" is an attitude that doesn't get you far when interacting with the physical world, and that's why there is an entire discipline dedicated to dealing with it that has to use special chips (e.g. the ARM R series), special OSes (e.g. VxWorks), and special tools (special IDEs, oscilloscopes, special cameras) to rise to the occasion.

If they let a "latency doesn't matter" system near any of these data streams, that's the scandal.


> It's a geological amount of time to a real-time system, which anything related to controlling a car absolutely will be.

The logging service that writes it down isn't really critical to the operation of the car. Even the warning system has a geological amount of time before it has to notify the driver, compared to many other systems that operate in a near-self-driving car. I wouldn't be surprised if less critical components shared a non-realtime timeslot between the actual realtime tasks.


One would assume they know that this log information is critical to the operation of the whole company, and engineer the logging system with that in mind - otherwise they can hardly make any claims about what a car was or was not doing when issues like this crash occurred.
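
A minimal sketch of what "engineered with that in mind" could look like - a hypothetical append-only log that forces every hand-detection sample to stable storage as it's written, so nothing is left sitting in a write cache if power is cut. Purely illustrative; nobody outside Tesla knows how their logging actually works, and the path and field names here are made up:

    import json, os, time

    LOG_PATH = "/var/log/hands_on_wheel.jsonl"  # hypothetical location

    def log_sample(hands_detected, raw_torque):
        """Append one hand-detection sample and sync it to disk immediately."""
        record = {
            "t": time.time(),               # timestamp of the sample
            "hands": bool(hands_detected),  # the boolean the warning logic uses
            "torque": raw_torque,           # raw sensor value, kept for post-hoc analysis
        }
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()                        # push out of the process's buffer
            os.fsync(f.fileno())             # and out of the OS write cache

An fsync per sample is expensive, which is exactly the trade-off being argued above: you buy durability with latency, and a real system would more likely batch samples into a dedicated crash-survivable recorder.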


I do understand that, and that may be the case, but I don't think it will surprise anyone if there's a piece of nodejs somewhere handling something it shouldn't have, because someone just needed to ship. We don't know the quality of their code base and hardware or of all the code bases and hardware modules provided to them by 3rd parties.


I would be surprised if nodejs were anywhere near the real time, safety critical systems here. I'd also not buy a Tesla in that case.


I've seen so many pieces of safety-critical hardware BSOD at this point that I would not be at all surprised.


I don't think you know what actual safety-critical hardware is if you are running Windows or even Linux on it...


Well, obviously the maker of said hardware doesn't know what safety critical hardware means, but that doesn't mean it doesn't exist.


I've seen IV pumps, ventilators, and MRI machines BSOD.


Which almost always happens because someone messed up their C or machine code, not because of using nodejs.


> Which almost always happens because someone messed up their C or machine code,

Or hardware failure, possibly injecting faults into the system. See Page 14: https://www.hq.nasa.gov/alsj/ApolloLMRadarTND6849.pdf


What does that have to do with nodejs?


If there is a node.js component anywhere near the controls of my car, someone needs to be taken out back and taught a lesson.


Dude, this ain't a webapp. It's a car, our lives are in her hands.


> It's a geological amount of time to a real-time system, which anything related to controlling a car absolutely will be.

s/will be/should be/ -- defects in design notwithstanding. Note that the latency involved with the sensing equipment (skin conductivity? pressure sensor?) could be large enough to be significant.


You're making the assumption here that it's only recording when the hands are on the wheel. Presumably it also records when the hands are off the wheel. So if the crash happened at (to completely make up a timestamp) 13:24:01.123, and the latest log entry says at 13:24:01.108 the hands were off the wheel, that would be good reason to say that the driver did not have their hands on the wheel at the time of the crash.

Of course, I don't actually know what the logs look like, but I'd rather err on the side of assuming that the designers of a real-time system for monitoring something related to driver safety took into account the idea that, when the car is involved in a severe crash, they should have as much accurate data recorded as possible rather than leaving it sitting around for ages in unflushed caches.


A driver of a Tesla car has mentioned in this thread that the car sometimes doesn't register the hands on the wheel.


Sure, but I would expect the logs to have more information than just the boolean "are there hands?", e.g. maybe containing the actual sensor data that was used to determine if there were hands.

The car should be conservative about the threshold it uses for the sensors in determining if there are hands (because it would rather err on the side of telling someone to put their hands back on, than err on the side of thinking someone has their hands on when they don't), but engineers analyzing the logs after a crash wouldn't have to use that same threshold when trying to determine if the driver was in control of the car at the time of the crash.
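
Something like this toy sketch is what I mean (the field names and thresholds are completely made up): the live warning logic uses a conservative cutoff, but an investigator with the raw logged values isn't forced to reuse it.

    WARN_THRESHOLD = 0.5       # hypothetical: below this, nag the driver to grab the wheel
    ANALYSIS_THRESHOLD = 0.2   # hypothetical: weaker evidence an analyst might still accept

    def should_warn(raw_torque):
        # Conservative on purpose: err toward telling the driver to put hands back on
        return raw_torque < WARN_THRESHOLD

    def hands_probably_on(raw_torque):
        # Post-crash analysis can apply a looser cutoff to the same logged raw value
        return raw_torque >= ANALYSIS_THRESHOLD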


It might not be 100%, but it is probably safe to assume that their hands weren't on the wheel because they went head first into a concrete wall.


That's anecdotal information.


And Tesla is claiming otherwise. The driver in this thread likely has nothing to gain or lose by saying that, other than maybe the resale value of his car might be hurt. Tesla has a lot of reasons to be deceptive here.

Plus, it's a pretty common complaint in Tesla forums.

On the other hand, this is effectively an admission by Tesla that there are serious problems with the autopilot system. It should never guide the car into a barrier, no matter where the driver's hands are.


I believe it. Do you?


With the note that Tesla's 'hands on wheel' detection is pretty flaky.


$DEITY help us all if murderous robots can absolve themselves of responsibility after 15 hundredths of a second.


> Data about hand position could be in a write cache somewhere waiting to be flushed, inside monitoring service, inside OS, inside RAID controller, inside storage controller, or even de-prioritized by a busy IO scheduler.

You are vastly overestimating things here.

Critical functionality in a car would rarely if ever involve non-deterministic delays. Furthermore, the OS itself (if there even is one) would be a real-time OS.

But even if we assume such delays are present in a Tesla steering wheel monitoring task, absolutely none of the delays you listed would approach anything near a second...


> Critical functionality in a car would rarely if ever involve non-deterministic delays. Furthermore, the OS itself (if there even is one) would be a real-time OS.

It might not be safety-design-critical to have a record of the hand-presence sensor with high resolution, merely high enough resolution for safe operation of the vehicle.

Flushing these records to storage is an independent function for the sake of service/support/maintenance and is likely not a regulatory requirement. Though regulatory agencies are bound to be interested in the data.


"This needs to be investigated properly and not just by Tesla, they are a biased party."

That may be why they were removed; there's an inherent conflict of interest when only Tesla engineers are able to decipher Tesla's crash data.


You and the other commenter who says the wheel sensor is finicky should somehow contact the investigators (AFAIU that's the NTSB). These are really important possibilities that may otherwise never be discovered by them.


It is interesting (to me at least) to compare the "first update":

https://www.tesla.com/blog/what-we-know-about-last-weeks-acc...

with the "last update" (the one that triggered the NTSB reaction):

https://www.tesla.com/blog/update-last-week’s-accident

as it makes it easy to identify the "boilerplate" parts, but, more than that, the "first update" closes with:

"Out of respect for the privacy of our customer and his family, we do not plan to share any additional details until we conclude the investigation.

We would like to extend our deepest sympathies to the family and friends of our customer."

So, logically, either the respect for the family vanished or Tesla concluded their investigation (in three days' time).


I think, as mentioned in the article, the family started making public statements and Tesla figured "Oh, you don't want PRIVACY (read: to STFU)? Then we're going to be TRANSPARENT (read: slander your dead relative to boost our share price)."


This goes from bad to worse for Tesla. It's all related to the original PR statement, but I'm guessing that Tesla has taken further actions that angered the NTSB.

I don't understand what changed. They co-operated with the NTSB in the first Autopilot death (back in 2016) [0] with a much simpler PR statement [1] that didn't upset the NTSB.

[0]: https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF

[1]: https://www.tesla.com/blog/tragic-loss


Tesla has made three statements about this accident:

- Original: https://www.tesla.com/blog/what-we-know-about-last-weeks-acc...

- Update: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...

- Newest: http://abc7news.com/automotive/exclusive-tesla-issues-strong...

None of these address the multiple videos of people reproducing the accident at the same and different locations, and none take any responsibility for this likely bug (lane centering in non-lanes) introduced in an over-the-air update.

Even Reddit's Tesla Motors sub, which used to be a 24/7 Tesla celebration, has been pretty negative about Tesla's handling of this incident, and I'm talking about regular posters and verified Tesla owners.


Tesla's terrible response to this and the detrimental effect on its image must be pretty obvious to most of the HN crowd. I have to wonder why they persist. Is their PR spin actually having a positive effect on the public at large?


Culture (like shit) runs downhill.

Mr. Musk seems like he's becoming increasingly erratic and controlling, and there's a record of him being unable to take criticism. This is a pretty intense form of criticism.


Tesla's progress in ramping up production for the Model 3 could charitably be described as "rocky," and there's been a steady drumbeat of news stories over the last couple of months saying Tesla's financials don't look great either.

Given all that, it seems fair to assume that Musk is under a lot of pressure at the moment; and stressed-out people make bad decisions.


> increasingly erratic

citation needed


Your comment seems like trolling.


You didn't get the narcissism over the twitter spat with that urban planner?


FWIW: Generally, if you feel like that, AFAIK, you should flag it and let moderators handle it, not accuse them of trolling.


It's bizarre indeed. Have there ever been criticisms that the NTSB isn't a fair and impartial investigator? I haven't heard of any.

Now Tesla is kicked out of the investigation, so I assume any leverage they had to help with the investigation is now gone.


How could the NTSB be fair and impartial if Tesla ever had "leverage" with the investigation?

The investigation will continue on, and the NTSB will conduct it fairly; Tesla just won't be involved. I understand both sides: Tesla desperately wants to tell its side publicly, and the NTSB wants all participants to stay quiet till the investigation completes.


The NTSB can be fair and impartial with whatever information they have on hand.

Tesla just gave up their opportunity to share their own perspective on the investigation. That could have consequences.


How is their response terrible or PR spin? Was it not factual?

Why should they wait a year before making a public statement?


You can make a statement, you just can't leak information from the investigation to the public. They could have talked broadly about autopilot and what you should and shouldn't do without divulging the details of this specific case.


The fact that sensors reported the driver didn't have his hands on the wheel, and that he ignored warnings to put them back, is important, and there is no reason why this needs to stay secret for a year. It's a stark reminder to Tesla owners to follow usage instructions.

And releasing it doesn’t prejudice the investigation in any way.


1. This early in the investigation process, you can't actually put that in the proper context.

2. As the NTSB noted, part of the reason for these rules is also PR-related - they want to make sure that other parties' decision-making processes on information-sharing aren't warped by the need to get favorable information out in public faster.

3. Most importantly - showing no respect for the procedures of the agency investigating you indicates a mentality that puts you above third-party judgment.


The death of a person is not material for teaching lessons to the public. A better approach would be naming the thing honestly.


"The fact that sensors reported the driver didn’t have their hands on the wheel"

...says nothing about the hands. It only says something about the sensors, until there is conclusive evidence from a secondary source.


Drawing a conclusion about the investigation before it's finished seems like pretty obvious spin.

> TESLA: "The only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, ..."

Nobody knows what happened. Huang may have had his hands on the wheel, which the system failed to register, and the car made a turn that he wasn't able to correct in time without hitting other vehicles.


On one hand I agree.

The other hand asks: what don't we know that's causing them to act like this?

Moi? I would presume the NTSB is being manipulated somehow, in some way, to slow Tesla down and/or knock them from their perch. Look at their valuation. Certainly there must be long-time auto industry egos who find that offensive.

Something is drifting sideways, but it feels like there are other, less obvious forces and influences involved.


NTSB does an excellent job in its investigations in getting to the root cause of accidents. The only way they can do that is in a depoliticized atmosphere where they can focus on getting to the truth of something, not worrying about one of the parties to the accident spinning things for their commercial advantage. Tesla trying to corrupt an NTSB investigation like this is the worst kind of bad faith. Particularly for a technology as important as self-driving cars.


> NTSB ... The only way they can do that is in a depoliticized atmosphere where they can focus on getting to the truth of something

And, if there is evidence Tesla has been negligent, the only way they can win is by not being forced to release this evidence, perhaps by attacking the NTSB as being biased against Tesla.


I see many comments wondering why does Tesla not recognize stationary objects when it has a forward looking radar.

The radar can't distinguish between overhead signs, trees, and things on the side of the road. It only tracks moving things; otherwise there would be too many false positives and the car would keep braking every few minutes. Other Level 2 autonomous systems have the same limitation.

You might recall that a Tesla hit a stationary fire truck in January [0]. This was caused by a similar limitation in their software. The Tesla was following a car at a slow speed. When that car suddenly moved out of the way to avoid hitting the fire truck, Autopilot accelerated to match the set speed and hit the fire truck. To summarize, if a moving car suddenly moves out of the way, a Tesla will almost always hit whatever is stopped further down the lane. They even call this out in their user manual.

Tesla is hoping that it will be able to get its vision capabilities to the point where it can figure out obstructions using cameras working in conjunction with the radar. But there is no committed timeline for this.

LIDAR avoids this by using directed lasers to build a 3D outline of all the surrounding objects. A system relying on LIDAR would have challenges in rain and falling snow. However, under clear conditions (where most Tesla Autopilot deaths have occurred - hitting a semi from the side, hitting the barrier recently), LIDAR would have been able to completely avoid the accidents.

However, if and when Tesla's vision capabilities are able to detect obstructions, they might be much cheaper and more effective than LIDAR in all kinds of weather conditions. Hoping they can get there soon. And I also wish they would fix their marketing to more accurately reflect the capabilities of the driver-assist system and stop misleading the public.

[0] https://www.mercurynews.com/2018/01/22/tesla-on-autopilot-sl...
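
To make the trade-off concrete, here is a toy sketch (not any vendor's actual logic, and the threshold is made up): the radar reports range and closing speed for each return, and the cheapest way to suppress false positives from signs, bridges, and roadside clutter is to discard anything whose speed over the ground is roughly zero - which is exactly what makes a stopped fire truck invisible.

    def moving_targets(radar_returns, ego_speed_mps):
        """Toy filter: keep only returns that appear to be moving over the ground.

        radar_returns is a list of (range_m, closing_speed_mps) tuples.
        A stationary object closes on us at exactly our own speed, so its
        estimated ground speed is ~0 and it gets discarded along with signs.
        """
        STATIONARY_TOLERANCE = 1.0  # m/s, hypothetical
        kept = []
        for range_m, closing_speed in radar_returns:
            ground_speed = ego_speed_mps - closing_speed
            if abs(ground_speed) > STATIONARY_TOLERANCE:
                kept.append((range_m, closing_speed))
        return kept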


If this is true, then how can we have had collision avoidance systems [1] in cars for years that are able to automatically apply the brakes to prevent this sort of accident? And if the answer is "autopilot doesn't work that way", then why not use both systems, with the collision avoidance system taking precedence as it deems necessary?

1. https://en.wikipedia.org/wiki/Collision_avoidance_system


All the systems which work on radar have similar limitations. More details: https://www.wired.com/story/tesla-autopilot-why-crash-radar/


After reading about this for a while, just now, I'm much less comfortable with the idea of trusting an automotive autopilot system. Just imagine if your taxi driver told you "I will drive perfectly safely, except if there is a truck parked in the road. Then I will run into it at speed." It seems like such a glaring omission. If I were an engineer I don't think I would be comfortable releasing an autopilot system with this kind of safety issue.


Which is why most companies aren't willing to call these things "autopilots" - Mobileye, for example, pitches its systems as assistants which will catch some of the driver's mistakes, similar to the "collision avoidance" language. This also comes with the corollary that they often aren't willing to give these systems as much control of the car as Tesla does.


Don't many cars have simple collision prevention or auto-braking that kicks in as soon as there is a stationary obstacle in front? Why doesn't or can't Tesla use the same approach?
