
Am I extreme in feeling someone should go to jail for that? It was bad enough when they were originally advertising it, but now that they're defending it even after people died...ugh.

https://www.theverge.com/2018/3/28/17172178/tesla-model-x-cr...




Musk's statements, and Tesla's in general, have gone toward cultivating an impression that Tesla almost has self-driving. An impression that, okay, it's not perfect, but it's good enough that you can act like it is most of the time. This impression is far enough from the truth that it's fundamentally dishonest, particularly when the response from Tesla after incidents amounts to "the driver is completely at fault here, we don't need to change anything" (in other words, their marketing says it's basically self-driving, their legal defense says self-driving is only at your own risk).

In the moralistic sense, yes, Tesla needs to be reprimanded for its actions. However, lying in this manner is often well-protected by law, if you manage to include enough legalese disclaimers somewhere, and I suspect Tesla has enough lawyers to make sure that legalese is sufficient to disclaim all liability.


While I don't necessarily agree with how they've advertised it, I think they are legally safe. All Model 3s DO have the HARDWARE to support full self-driving, even if the software is not there. And regarding the software, it has been shown to be about 40% safer than humans where it is used, which is what they've claimed.


How do we know the hardware will adequately support full self-driving if it doesn't actually exist yet?

That seems overly optimistic, if not an outright fabrication.


Because humans can drive with comparable (actually worse) hardware. Superhuman reflexes mean that software with superhuman driving abilities can exist.


Humans also have human brains, which are much more important to driving than eyes.


This is the part that can be emulated in software. All you have to prove is that whatever platform/language you're using for the programming is Turing Complete, which is the case for almost all of the most popular languages.


So can I run Crysis on my TI calculator? I'm sure whatever platform/language is running on it is Turing Complete. I think you missed the point that the brain is also hardware.


Good point, I'm assuming their Nvidia GPUs have enough power/memory to do the processing they want to do.


> All Model 3s DO have the HARDWARE to support full self-driving, even if the software is not there.

Please provide evidence.

Given that the only full self-driving system out there is Waymo's, which uses completely different hardware than Tesla's, it is impossible to back your claim unless you develop a fully self-driving system on top of Tesla's hardware.

So until that is done, your claim is false, no matter how many caps you use.


Sure, they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving. (You could also argue humans use their hearing; well, the Teslas have mics if they really wanted to use that.)


> they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving

My phone also has cameras and a processor. Are you implying that my phone is as equipped to drive a car as a human?

A monkey has eyes, a brain, arms and legs. Are you implying that a monkey is as equipped to drive a car as a human?


I'd say the monkey has most of the hardware but not the software, analogously. As for the phone, right, I was assuming the cars have enough processing power and memory to do the necessary processing. But granted, they haven't demonstrated full self-driving on their current hardware, so I will have to concede that we do not know how much processing and power are needed to be truly self-driving. For all we know, that last 10% of self-driving capability may require an exponential increase in the amount of processing required.


If we ask what the reasonable expectation is after an advertisement tells a customer that the capability "exceeds human safety", I would say that the average customer thinks of a fully automated system.

This couldn't be further from the truth as automated vehicles still suffer from edge cases (remember the lethal accident involving a concrete pillar) where humans can easily make better decisions.

A system that is advertised as superior to human judgement ought to strictly improve on human performance. Nobody expects that to mean that the car drives perfectly 95% of the time but accidentally kills you in a weird situation. This 'idiot savant' characteristic of ML algorithms is what makes them still dangerous in day-to-day situations.


Yes I totally agree, I think there should be some regulation regarding this area. At least in terms of being clear when advertising. I think it's ok to deploy such a system where in some/most cases the AI will help, but it needs to be made apparent that it can and will fail in some seemingly simple cases.


That’s probably a gambit to avoid being easily sued, but it’s a really bad-faith attempt to mislead. Most people are going to read that and assume that a Tesla is self-driving, as evidenced by the multiple accidents caused by someone trusting the car too much. Until that’s real they shouldn’t be allowed to advertise something which hasn’t shipped.


So...one person died?

Do you know how many people die in normal, non-self-driving cars? It's never possible to get 100% accuracy.


I hand you a gun and say "you can point this gun at anyone, pull the trigger, and it won't harm them." You point the gun at a crowd, pull the trigger, it goes off and kills someone.

Who do you blame?

No one is saying guns, or cars, aren't dangerous. However this kind of false advertising leads people to use cars in dangerous ways while believing they are safe because they've been lied to.

(Side note, there is some fine print you never read that says the safe-gun technology only works when you point the gun at a single person and doesn't work on crowds.)


Safer-than-human hardware does not imply safe.

It means this will kill people, but fewer people than humans would, and they have actual data that backs this assertion up.

The benchmark is not an alert, cautious, and sober driver, because that's often not how actual people drive. So right now it's often safer to drive yourself; other times it really is safer to use Autopilot. The net result is that fewer people die.


But if you do happen to be an alert, cautious, and sober driver, it'd be unfortunate if Tesla's marketing led you to overly rely on Autopilot.

Ideally Tesla's marketing would say it's safer than drunk, sleepy, and reckless drivers, though it might not sell as many Autopilots.


Autopilot should not be accessible to drivers until their driving habits have been assessed and baselined. If autopilot is known to drive more safely than the individual human driver in environments X, Y, and Z then it should be made available in those environments, if not encouraged. That might not make an easy sell since a major value prop is inaccessible to many of the people who really want to use it, but it's the most reasonable, safest path.

I also imagine that cars will learn from both human and each others' driving patterns over time, which (under effective guidance) should enable an enormous improvement in a relatively short period of time.
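
To make that gating idea concrete, here is a toy sketch in Python. Every name, rate, and threshold below is hypothetical, purely to illustrate the "enable it only where it beats the driver's own baseline" rule, not anything any vendor actually ships:

    from dataclasses import dataclass

    @dataclass
    class RiskEstimate:
        crashes_per_million_miles: float
        miles_observed: float  # how much data the estimate rests on

    MIN_BASELINE_MILES = 1_000  # hypothetical: require some baseline driving data first

    def autopilot_allowed(human: RiskEstimate, autopilot: RiskEstimate) -> bool:
        """Enable autopilot in a given environment only if we have a baseline for
        this driver and the system's estimated crash rate beats that baseline."""
        if human.miles_observed < MIN_BASELINE_MILES:
            return False  # not enough data on this driver yet
        return autopilot.crashes_per_million_miles < human.crashes_per_million_miles

    # Example: a driver baselined on highways vs. an assumed autopilot estimate.
    human_highway = RiskEstimate(crashes_per_million_miles=2.1, miles_observed=4_500)
    ap_highway = RiskEstimate(crashes_per_million_miles=1.4, miles_observed=2_000_000)
    print(autopilot_allowed(human_highway, ap_highway))  # True with these made-up numbers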


Drunk, sleepy and reckless drivers come from an age bracket not normally buying premium sedans.


I can grab the data, but the surprising thing about fatal crashes is the typical circumstance. It's an experienced driver driving during the day, in good weather, on a highway, in a sedan-style car, sober. There are two ways to interpret this data. The first is to assume that crashes are semi-randomly distributed, so since this is probably the most typical driving condition, it naturally follows that that's where we'd expect to see the most fatalities.

However, I take the other interpretation. I don't think crashes are randomly distributed. The fact that that scenario is absolutely perfect is really the problem, because of humans. All a crash takes is a second of attention lapse at a really bad time. In such perfect circumstances we get bored and take road safety for granted, as opposed to driving at night or navigating around some tricky curves. And that's a perfect scenario to end up getting yourself killed in. More importantly here, that's also the absolutely perfect scenario for self-driving vehicles, which will drive under such conditions far better than humans simply because it's extremely trivial, but they will never suffer from boredom or lack of attention.


Their claim is about hardware, not software.


Just about every expert agrees, though, that their hardware is NOT capable of full self-driving without LiDAR (which is probably why they churn through people running their Autopilot program), although proving that in court is a whole thing.


That is not a distinction most people understand.


Or maybe there is no basis for making such a claim.

Or well, I just put four cameras on my car and inserted their USB wires into an orange. I declare that this is enough for FSD now. I don't have to prove anything for such an absurd statement?


And how do you know that the present hardware is enough to deliver full self-driving capability? I don't think anyone knows what precise hardware is required at this point, as it hasn't been achieved yet.

So, do you think it may be possible that people do understand the distinction and are STILL not convinced?


Well, humans have 2 cameras that they are able to rotate & a brain and that seems to be sufficient for them to drive in terms of hardware... :-)


I, and the law, would blame the person. They should have no particular reason to believe your claim and absolutely no reason to perform a potentially lethal experiment on people based on your claim.

You may be breaking the law as well, depending on the gun laws in your area, but as I understand it (IANAL) the manslaughter charge falls entirely on the shooter.


I would blame the person, because the best case scenario of shooting an allegedly harmless gun into a crowd is equivalent in effect to the worst case scenario of doing nothing in the first place.


I noticed this in the news... I can't believe someone actually tried this.

Tesla owner faces 18-month ban for leaving the driver's seat https://www.engadget.com/2018/04/29/tesla-owner-faces-road-b...


One person died from that accident. There are now at least 4 deaths where Tesla's autopilot was involved (most of the deaths in China don't get much publicity, and I wouldn't be surprised if there are more). And the statistics do not back up your claim that Tesla is safer (despite their attempts to spin it that way).


This seems to contradict your assertion: http://www.businessinsider.com/tesla-autopilot-cuts-crash-ra...


No, the NHTSA report says nothing about how Tesla's autopilot compares to human driving. Here are two comments I made last week about this:

In that study there are two buckets: total Tesla miles in TACC-enabled cars, and then, after the update, total Tesla miles in cars with TACC + Autosteer, with the rate calculated on airbag deployments. Human-driven miles are going to dominate both of those buckets, and there's a reason the NHTSA report makes zero claims about Tesla's safety relative to human drivers: it's totally outside the scope of the study. Then add in that some researchers who are skeptical of the methodology have been asking NHTSA/Tesla for the raw data and have yet to receive it.

https://news.ycombinator.com/item?id=16932350

Autosteer, however, is relatively unique to Tesla. That’s what makes singling out Autosteer as the source of a 40 percent drop so curious. Forward collision warning and automatic emergency braking were introduced just months before the introduction of Autosteer in October 2015. A previous IIHS study shows that both the collision warning and auto emergency braking can deliver a similar reduction in crashes.

https://news.ycombinator.com/item?id=16932406
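
For what it's worth, the two-bucket arithmetic behind a "40% fewer crashes" headline is trivial to reproduce; the problem is what goes into the buckets. Here's a sketch with invented numbers (the real mileage and airbag-deployment counts are not public):

    def rate_per_million_miles(events: int, miles: float) -> float:
        return events / (miles / 1_000_000)

    # Bucket 1: total miles in TACC-only cars; Bucket 2: miles after the Autosteer update.
    # Both figures below are made up purely for illustration.
    before = rate_per_million_miles(events=1_300, miles=1_000_000_000)
    after = rate_per_million_miles(events=800, miles=1_000_000_000)

    reduction = 1 - after / before
    print(f"{reduction:.0%} fewer airbag deployments per mile")  # ~38% with these numbers

    # The confound: both buckets are dominated by human-driven miles, and AEB plus
    # forward collision warning shipped around the same time, so this arithmetic alone
    # says nothing about Autosteer vs. an attentive human driver.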


I'm not sure what "safer" means. Not sure what posting your insistence in other threads has to do with this. 10x or 100x the number of deaths would be a small price to pay to get everyone in autonomous vehicles. It's air travel all over again, and there's a price to pay.


>I'm not sure what "safer" means.

You can define it however you want. By any common sense definition of safety Tesla has not proven that their autopilot is 'safer' than a human driver.

>10x or 100x the number of deaths would be small price to pay to get everyone in autonomous vehicles.

This supposes a couple things. Mainly that autonomous vehicles will become safer than human drivers, that you know roughly how many humans will have to die to achieve that and that those humans have to die to achieve it. Those are all unknown at this point and even if you disagree about the first one (which I expect you might) you still have to grant me two and three.


Ignoring Tesla, self driving cars will almost definitely be safer. People die in cars constantly. People don't have to die if companies don't rush the tech to market like Tesla plans to. To be fair, I blame the drivers, but I still think the aggressive marketing pretty much guaranteed someone would be stupid like that, so Tesla should share some of the blame as well.


I don't think we are opposed to autonomous vehicles.

Can Autopilot not run passively and prevent human errors? Why do the two options presented seem to be only "only a human behind the wheel, not assisted by even AEB" and "full autopilot with no human in the car at all"?


Many people also die when they fall down the stairs.

But if you build faulty staircases that make people slip, they can still put you in jail or sue you.

Same if you push them down the stairs.


Self-driving cars or Tesla autopilot?


> Do you know how many people die in normal, non-self-driving cars

False equivalency. You need to at least compare per mile, and ideally by driver demographic, since most Tesla drivers are in the high-income and maybe the safety-conscious bracket.
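
To sketch what a per-mile comparison even looks like (the US totals below are rough public ballpark figures; the Autopilot mileage is an assumption, not a known number):

    def fatalities_per_100m_miles(deaths: int, miles: float) -> float:
        return deaths / (miles / 100_000_000)

    # US fleet-wide, all drivers and roads: ~37k deaths over ~3.2 trillion vehicle miles.
    us_rate = fatalities_per_100m_miles(deaths=37_000, miles=3_200_000_000_000)

    # Hypothetical Autopilot bucket: 4 known deaths over an assumed 1 billion miles.
    ap_rate = fatalities_per_100m_miles(deaths=4, miles=1_000_000_000)

    print(f"US average: {us_rate:.2f} per 100M miles")  # ~1.16
    print(f"Autopilot:  {ap_rate:.2f} per 100M miles")  # 0.40, under the assumed mileage

    # Even this isn't apples to apples: Autopilot miles skew toward highways, and Tesla
    # drivers skew toward the demographics mentioned above, so both need controlling for.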


If those people die because the car fails catastrophically, and predictably, rather than the usual reasons it would be news. This is not about 100% or “the perfect is the enemy of the good” or any bullshit Utopianism. This is about a company marketing a flawed product deceptively for money, while letting useful idiots cover their asses with dreams of level 5 automation that aren’t even close to the horizon.


US advertising laws are very lax, just like data collection laws. You can get away with saying a lot of shit in the name of advertising.

Deaths caused by Uber and Tesla carry few repercussions for them other than bad PR.

The US govt celebrates removing regulations, which benefits its corporations at the cost of its citizens.


I think if anything you’re being overly conservative in thinking someone, rather than many people, needs jail time for it. I would look up the line of people who made and supported the decision for that kind of fraudulent marketing and drag them all into court.


Jail? No.

Financial ruin for the company that was willing to put that BS below its letterhead? Yeah, sure, in proportion with the harm caused. That said, human life isn't sacred. It and everything else gets traded for dollars all the time. A death or three shouldn't cripple a big company like Tesla unless they were playing so fast and loose that there are significant punitive damages.

In a large company like Tesla it shouldn't be marketing's job to restrain itself. That's not safe at that scale. There should be someone, or a group, whose job it is to prevent marketing from getting ahead of reality, just like it's security's job to spend all day playing whack-a-mole with the stupid ideas devs (especially the web ones) dream up. Efficiently mediating conflicting interests like that is what the corporate control structure is for.

People using your product in accordance with your marketing should be considered almost the same as using it in accordance with your documentation. While "people dying while using the product in accordance with TFM" is not treated as strict liability, it's pushing awfully close.

I see it as a simple case of the company owning up to its actions. It failed to manage itself properly, and marketing got ahead of reality. If you screw up and someone gets hurt, you pay up.


Are you saying that you see human life as something that can be bought and sold?

It's one thing to talk about punitive damages and liability; these are factual mechanisms of our legal system. But just because damages can be paid, and are on a regular basis, that does not imply that there is some socially acceptable, let alone codified, price for a human life. And we should hope for our own sake there never is.

I agree that marketing should not be allowed to let their imagination run wild to the detriment of the company.

In the case of the liability bit, IANAL, but that's likely to differ between industries. Some sectors like aviation are highly regulated and require certification of the aircraft, and the airplane flight manual is tied to that serial number and is expected to be correct and free of gross errors for normal operation. So liability can vary. Are you suggesting from experience that there is no liability in the case of Tesla, taking into account their industry's context? I don't know enough about their industry to judge, just looking for clarification.


"That said, human life isn't sacred. It and everything else gets trades for dollars all the time."

Wow. Uhm, slavery is illegal if you haven't heard. We made it illegal because life is sacred.


OP is probably referring to the fact that in wrongful death suits, society has put a very tangible financial number on the value of human life. This made it possible for corporations to make trade-offs between profit and liability, giving the potential that someone could get enough profit to justify risking others’ lives.

Punitive damages go part way to help prevent this, but not far enough to guarantee that it never happens.

Had the society truly believed life to be sacred, I suspect we’d have very different business practices and penalties that are not limited to financial ruin.


Well, unfortunately we also believe that corporations are sacred, so when bad things happen we shake our fists and collect a couple dollars. But the guilty corporation is never put to death. (Well, rarely ever..)


It's not that clear-cut. Yeah, killing and maiming is bad, but those things always happen (and will for the foreseeable future) at large scale, and you have to be able to have a reasonable discussion about the trade-offs. "Well, we can't guarantee we won't kill anyone in an edge case, so let's all just go home" isn't an option.

You can build a highway overpass with a center support on the median for X, and there will be a small chance of someone crashing into it and getting killed. You could design one without a support on the median, but it will cost Y (Y is substantially more than X). Now scale that decision to all overpasses and you've got a substantial body count. At the end of the day it's a trade-off between lives/injury and dollars.
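
Here's the overpass trade-off as a toy expected-cost calculation. Every number is invented; the $10M value of a statistical life is roughly the figure US DOT uses in its guidance:

    VALUE_OF_STATISTICAL_LIFE = 10_000_000  # USD, rough US DOT figure

    def expected_total_cost(build_cost: float, annual_fatality_prob: float, years: int = 50) -> float:
        """Construction cost plus expected fatality cost over the structure's life."""
        return build_cost + annual_fatality_prob * years * VALUE_OF_STATISTICAL_LIFE

    # Made-up costs and risks for the two designs.
    with_support = expected_total_cost(build_cost=2_000_000, annual_fatality_prob=0.002)
    without_support = expected_total_cost(build_cost=3_500_000, annual_fatality_prob=0.0005)

    print(f"Center support:    ${with_support:,.0f}")     # $3,000,000 and ~0.1 expected deaths
    print(f"No center support: ${without_support:,.0f}")  # $3,750,000 and ~0.025 expected deaths

    # Scale this over thousands of overpasses and the dollars-for-lives trade-off is explicit.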


Agreed that there are trade-offs made, of course. But this society spends a ton of time trying to prevent all kinds of death. That's because life is sacred.

It seems odd to argue that. Yeah of course we can't stop doing things, but it doesn't mean we don't try really hard to avoid killing people.


> try really hard to avoid killing people

Yes, and both Elon Musk as an individual and Tesla Motors as an organization agree. How they approach that idea is somewhat different from what we're used to though.

Their basic assertions are (in my words):

1. Vision and radar based technology, along with current generation GPUs and related hardware, along with sufficiently developed software, will be able to make cars 10x safer.

2. How quickly total deaths are reduced is tied directly to how quickly and widely such technology is rolled out and used.

3. Running in 'shadow' mode is a good source of data collection to inform improvements in the software.

4. Having the software/hardware actually control cars is an even better source of data collection to accelerate development.

5. There is additional, incremental risk created when the software/hardware is used in an early state.

6. This is key: the total risk over time is lessened with fast, aggressive rollouts of incomplete software and hardware, because it will allow a larger group of people to have access to more robust, safer software sooner than otherwise would be possible.

That last point is the balance: is the small additional risk Tesla is subjecting early participants to outweighed by how much more quickly the collected data will allow Tesla to produce a more complete safety solution?

We don't know for sure yet, but I think the odds are pretty good that pushing hard now will produce more total safety over time.
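
Point 6 is really a claim about cumulative deaths over time, and you can sketch it in a few lines. All the rates and timelines below are hypothetical, which is exactly where the disagreement lives:

    BASELINE_RATE = 1.2  # human fatalities per 100M miles, ballpark US figure

    def cumulative_deaths(initial_rate: float, improvement_per_year: float,
                          miles_per_year_100m: float, years: int = 10) -> float:
        """Sum deaths over the years as the system's fatality rate decays."""
        total, rate = 0.0, initial_rate
        for _ in range(years):
            total += rate * miles_per_year_100m
            rate *= 1 - improvement_per_year
        return total

    # Aggressive rollout: starts slightly worse than baseline, improves fast on more data.
    aggressive = cumulative_deaths(1.4, improvement_per_year=0.30, miles_per_year_100m=50)
    # Cautious rollout: starts only once it beats baseline, improves more slowly.
    cautious = cumulative_deaths(1.0, improvement_per_year=0.10, miles_per_year_100m=50)
    humans = BASELINE_RATE * 50 * 10  # the same miles driven entirely by people

    print(aggressive, cautious, humans)  # ~227 vs ~326 vs 600 with these assumptions

    # Whether the aggressive curve actually wins depends entirely on the assumed
    # improvement rates, which is exactly what we don't know yet.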

> life is sacred

This is my background as well, and it's an opinion I personally hold.

At the same time, larger decisions, made by society, by individuals, and by companies, must put some sort of value on life. And different values on different lives. Talking about how much a life is worth is a taboo topic, but it's something that is considered, consciously or otherwise, all day, every day, by many people, myself included.

Most every big company, Tesla Motors included, makes decisions based on these calculations all the time. Being a 'different kind of company' in many ways, Tesla makes these calculations somewhat differently.


That's a pretty cynical calculation to make. And no, we don't typically accept untested additional risk now in the name of saving untold numbers later. We test first. There's a reason why drugs are tested on animals first, then trials, then broad availability, but still with scrutiny and standards. This is a well-trod philosophical argument, but we seem to have accepted that we don't kill a few to save others. We don't fly with untested jet engines. We don't even sell cars without crashing a few to test them. The other companies involved in self-driving technology have been in testing mode. They have not skipped a step and headed straight for broad availability.

Why then does Tesla get a pass? There's no evidence it's actually safer. And there's no evidence that the company is truthful. We don't accept it when a pharmaceutical company says, "no, it's good. Trust us." That would be crazy. We should not accept Tesla's assurances with blind faith simply because they have better marketing and a questionable ethical standard.

http://driving.ca/tesla/model-s/auto-news/news/iihs-study-sh...



