
I'm one of the more vocal Tesla critics on here, but I think we can do better than this example.

The left turn arrow turned green, the car inched forward a little, and Elon immediately put on the brakes. The situation was no more dangerous than when a human driver mistakes which light is theirs and does the same thing, which happens pretty regularly in my experience.

And that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing.




> the car inched forward

It felt like more than a few inches. (I'm not splitting hairs here, I really do feel it was qualitatively different from your description, which made it sound like the car was cautiously starting to move just a bit.)

> and Elon immediately put on the brakes. The situation was no more dangerous than when a human driver mistakes which light is theirs and does the same thing, which happens pretty regularly in my experience.

When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.

> that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing

This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.


> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it.

For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to pay even less attention. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?

I really do want self driving cars to be a thing but I wish we were going about it a bit differently and I wish other people who wanted it didn't play things down as much as they do.


> For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to pay even less attention. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?

Isn't this precisely why self-driving (even if it isn't perfect) could actually make the roads safer? People, as you say, are already super distracted (partly because driving is boring).


I am one of the believers that self driving cars will one day make roads safer. I just think that the technology we have now that keeps getting called "self driving" is not there yet. If it requires the driver's full attention to keep things safe but also makes individual drivers feel like they can be a little less attentive then we don't really have a very safe situation on our hands.

When I say that the technology makes people feel like they can be less attentive I really do mean it. There was the SF tech worker who was playing candy crush or something when his Tesla smashed into a barrier on the highway. I have friends who own Teslas and talk frequently about how they like taking them on road trips because they can relax a bit more and pay less attention to the flow of traffic (it'll brake for you!!! they say). In a world where these cars have to share the road with human drivers and drive on roads that are under construction or in poor weather conditions I just don't see how we can say this is safe.


Defeat device detection: https://www.reddit.com/r/teslamotors/comments/zv5xp9/a_warni...

How attention-monitoring strikes work: https://www.youtube.com/watch?v=otknSUQ4ICY

Top comment: "The cabin camera really does feel super solid at detecting when I’m distracted. Even if I’m just like searching a song on the infotainment it will get onto me which is annoying but I completely understand and am glad that it works so well ..."


Most probably no.

First, let's assume that perfect, law-abiding, self-driving cars exist. On the one hand they would eliminate incidents caused by inattentive driving; on the other hand they would create incidents that an attentive driver with the right of way would have avoided by yielding to an inattentive driver. The change in total incidents would depend on the proportion of these events. Anecdotal evidence is anecdotal, but in my own experience the number of incidents I have avoided simply by yielding when I had the right of way is much higher than the number of incidents I have gotten into due to my own mistake.

Second, actual "self driving" cars are far from that, especially in their interaction with other drivers.

Third, there are second-order effects. E.g. a car quickly and unexpectedly maneuvering could cause another car to brake sharply, which could end up in an accident that the original car is not even involved in. As more cars behave differently from the local custom, such accidents are bound to happen.

Most probably we are going to see an increase in the number of accidents with the proliferation of semi-autonomous vehicles before that number starts to dwindle.


> giving many of those same distracted drivers a great reason to ignore

the attentiveness monitoring and strike system? If they ignore it, they will quickly exhaust their strikes and get locked out for bad behavior until they learn to take it seriously.
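To make concrete what I mean by a strike system, here's a purely hypothetical sketch of a strikes-then-lockout policy. The threshold and reset behavior are made up for illustration; this is not Tesla's actual code.

    # Hypothetical strikes-then-lockout policy; not Tesla's actual implementation.
    MAX_STRIKES = 3  # assumed threshold, for illustration only

    class AttentionMonitor:
        def __init__(self):
            self.strikes = 0
            self.locked_out = False

        def report_inattention(self):
            # Called whenever the attentiveness check flags a distracted driver.
            self.strikes += 1
            if self.strikes >= MAX_STRIKES:
                # Driver assistance stays disabled until some reset or review step.
                self.locked_out = True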


> It felt like more than a few inches.

It's hard to tell because the camera is at such a weird angle, but from what I can see the vehicle remains pretty firmly behind the stop line throughout the entire encounter. We can quibble about how fast it was accelerating, but I regularly see worse false starts at lights.

> When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.

I'm actually more worried about the human case than the human+autonomous case. In the human case it is up to the entity that made the mistake to correct their own mistake. In the autonomous vehicle case you effectively have a second set of eyes as long as the driver is paying attention (which they should be and Musk was). This is why I say that it makes a difference that this was internal testing—the driver wasn't a dumb consumer trusting the vehicle, it was the CEO of the company who knew he was using in-progress software.

> This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.

Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist. At some point they have to be tested in the real world, and they will make mistakes when they leave the controlled conditions of a closed course.


> remains pretty firmly behind the stop line throughout the entire encounter

That's not the same thing as "inched forward" is my point.

> In the human case it is up to the entity that made the mistake to correct their own mistake.

You're completely ignoring how likely these events are or how severe the errors are in the first place. You can't just count the number of correction points and measure safety solely based on that.

> Requiring that autonomous vehicles never be tested on a public road

I never said that. (How did you get from "look at how it's done in medicine" to "this should never be done"?) What I do expect is responsible testing, which implies you don't test in production until at least you yourself are damn sure that you've done everything you possibly can otherwise. Given everything in the video I see no reason to believe that was the case here.


The dash visualization clearly shows the blue line that indicates the intended driving course heading straight into the center of the intersection.

External intervention was required to stop the car from entering the center of the intersection.


> Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist.

Sure, but that's a wildly different case than "it's ready for public roads we super promise"


As I noted at the very beginning of my first comment, I am a huge critic of many things that Tesla does, and releasing their software in beta to casual drivers is something I'm strongly opposed to. All I'm saying here is that this specific critique of this specific video is misplaced.


Fair.


One day I hope the New Drug Application process can have a monitoring and supervision system as sophisticated as FSD Beta.

Just imagine: Constant 100% always-on supervision, supervision of the supervisors with 3-strikes you're out attentiveness monitoring, automatic and manual reporting of possible anomalies with full system+surroundings snapshots to inform diagnostics and development, immediate feedback of these into the simulations that validate new versions, and staged rollout that starts at smaller N (driving simulators are actually pretty good) and continues intensive monitoring up to larger N. Even Phase 3 trials only involve thousands of people, while FSD beta is driving a million miles per day with monitoring that feels more like Phase 1 or mayyybe Phase 2.

One day drug development will be this sophisticated, and it will be glorious.


> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.

This is not nuanced enough. I've been in a cab to JFK in the snow where the driver was taking a turn so fast that the car started sliding and eventually crashed into the side of the road.

There are now over 40,000 car accident fatalities in the US per year. So no, humans make mistakes that they cannot correct. https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


Er, your comment is the one lacking nuance. There was no snow here, nor did I claim accidents never happen. I was trying to get across a point about the parent's argument.


Your point boils down to a "what if" though. If it's as dangerous as you make it, then you should be able to show plenty of examples where actual harm is happening. Showcase those.


Over 700 allegedly fatal crashes attributable to FSD [1] that Tesla has officially reported to the government over an estimated 400M miles on FSD. That makes the driver roughly 150x more likely to be involved in a fatal crash than if they were driving on their own.

Note that these are based on auditable published statistics and are likely an overestimate of the risk, as we must assume the worst when doing safety-critical analysis. Tesla could improve these numbers by not deliberately suppressing incident reports and by not deliberately choosing not to investigate in order to avoid confirming fatality reports. But until they do so, we need to err on the side of caution and the consumer instead of the for-profit corporation.

[1] https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...
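For anyone who wants to check the arithmetic, here's a rough back-of-envelope version of the calculation above. The 700 reports and 400M miles come from this comment; the US baseline rate (about 1.3 fatalities per 100M vehicle-miles) is my own assumption, not a figure from the linked source.

    # Back-of-envelope check of the ratio above; the baseline rate is an assumption.
    fsd_fatal_reports = 700      # reported crashes cited above
    fsd_miles = 400e6            # estimated miles driven on FSD, also cited above

    fsd_rate = fsd_fatal_reports / fsd_miles * 100e6   # ~175 fatal reports per 100M miles
    us_baseline = 1.3            # assumed US average fatalities per 100M vehicle-miles

    print(fsd_rate / us_baseline)   # ~135, i.e. on the order of the 150x claimed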


Good, so use those as an example of how bad it is, not the car moving forward a little and the driver stopping it before anything bad happens.


> And that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing.

There is absolutely no reason you should assume "testers" know what they are doing. I have met plenty of people with decades of experience in "testing" who barely know what they are doing. Even in the case that they know what they are doing, they shouldn't be testing a *deadly* vehicle with potentially broken software on heavily populated *public* roads.


> Even in the case they know what they are doing, they shouldn't be testing a deadly vehicle with potentially broken software on heavily populated public roads.

This is another way of saying that self-driving vehicles shouldn't exist at all. At some point we have to test them on public roads, preferably before putting the software into the hands of regular users. If you ban even internal company testing, then what you're saying is that self-driving vehicles should never exist.


Not even remotely the same thing. You completely glossed over me saying potentially broken software and heavily populated. Surely there is a way to simulate a left turn signal in a more safe manner on software this early in the testing process.


All software is potentially broken. If you've only ever tested your software in a controlled driving range, then you don't know how it will behave when you take it out into the real world. If you've only ever tested it on lightly populated roads, then you don't know how it will behave when you take it onto heavily populated roads.

It's not surprising that it made a mistake during testing. That's what testing is for: rooting out mistakes.


No. Public road deployment is for validation, not testing.

Testing is for rooting out errors. Validation is for proving you achieve the desired specifications and failure rate. Validation occurs when you believe you are done or when testing becomes inadequate to discover failures. It is about “proving” the negative with respect to errors.

Broken, incomplete software where defects can be routinely discovered after light usage has no place being used by consumers on public roads.


"potentially broken software" = "all software", therefore you said self driving software should never exist and you are happy to support 40,000+ people dying in the US every year who could be saved with autonomous vehicle technology. Your lack of ethics is worrying.


Agreed, but I'd go further.

> This is another way of saying that self-driving vehicles shouldn't exist at all.

It's another way of saying that self-driving vehicles should be invented somewhere else, so that in 10 years you have to beg and plead for an overpriced second-rate implementation with a worse safety profile.


They seem to be serious about attentiveness monitoring and strikes.

No more weights: https://www.reddit.com/r/teslamotors/comments/zv5xp9/a_warni...

How strikes work: https://www.youtube.com/watch?v=otknSUQ4ICY


Except for the part where they cannot even detect trivial cases. One of the easiest possible cases, a giant teddy bear in the seat “holding” the wheel (by attaching a simple weight to the wheel), is determined to be an attentive driver [1]. A T-ball and they strike out.

The inability to robustly handle simple, obvious cases, and the failure to fail safe, is indicative of a sloppy development and validation process that is incompatible with the deployment of a safety-critical device.

[1] https://www.youtube.com/watch?v=CPMoLmQgxTw


The fervor over this is unwarranted.

When cars first debuted, they were incredibly dangerous and killed lots of people. Yet we persisted.

Airplanes, too.

Fully autonomous cars are on the streets of San Francisco. They run into trouble, yet the experiment persists.

We should allow for civil suits, especially when negligence is found, but the only way to progress forward is to make real world attempts.

The promise of autonomous driving is worth trillions of dollars for all of the opportunity it will unlock. And as macabre as it may be, there will continue to be deaths along that development path.


I wouldn't really say “we persisted” in the case of cars, as much as the auto industry threw boatloads of money at lobbying to make people shift the blame from cars to their victims. Cars didn't get safer (for the people outside them), we just started blaming the people they killed instead.


That's absolutely part of the equation.

Back when we used horses for everything, people used to walk aimlessly in streets and children used to play in them. We advocate against that now.

Society chose a new mandate. It wasn't just the auto companies - it was all of us.


If we allow criminal suits when drivers kill someone due to carelessness we should allow criminal suits when companies kill people due to carelessness. Companies shouldn’t be held to lower standards than individuals.


> The promise of autonomous driving is worth trillions of dollars for all of the opportunity it will unlock. And as macabre as it may be, there will continue to be deaths along that development path.

“We should kill people because it will unlock market opportunity”, is why we need to start charging the people in charge of these companies with murder when they inevitably do just that. I don’t care about Tesla stock, I care about not being killed by one of Musk’s “hype” projects.


Cars were not "incredibly dangerous" when they first debuted because they topped out at about 40 mph (and went considerably slower most of the time). That's pretty comparable to a horse and carriage.


Yes they were. Besides the speed aspect, there were other things too.

For example, the hand crank was known to be fatal, and it pushed the founder of Cadillac, Henry Leland, to get rid of it when his friend was killed after helping a lady on the side of the road.

https://www.studebakermuseum.org/blog/the-1912-cadillac-a-se...


Yeah, the danger posed by cars was initially limited because they had to share the road with lots of slower traffic (carts, bicycles, pedestrians etc. etc.). Only forbidding pedestrians from using the same space as cars enabled cars to become dangerous in the first place...


Also, horses were incredibly dangerous. In NYC in 1900 for example the pedestrian fatality rate from horses was higher than the NYC 2003 pedestrian fatality rate from cars [1].

Similarly in England and Wales deaths from horse-drawn vehicles were around 70 per 1 million people per year in the early 1900s. That's in the same ballpark as motor vehicle deaths in the 1980s and '90s (80-100 per 1 million people per year) [2].

(It should be noted that medical technology is better now. A lot of those 1900 fatalities would probably have been preventable if they had current medical technology).

A lot of people on HN seem to view the horse era as some sort of idyllic safe, quiet, and unobtrusive pedestrian and rider paradise. It was not. From that second article:

> It is easy to imagine that a hundred years ago, when cars were first appearing on our roads, they replaced previously peaceful, gentle and safe forms of travel. In fact, motor vehicles were welcomed as the answer to a desperate state of affairs. In 1900 it was calculated that in England and Wales there were around 100,000 horse drawn public passenger vehicles, half a million trade vehicles and about half a million private carriages. Towns in England had to cope with over 100 million tons of horse droppings a year (much of it was dumped at night in the slums) and countless gallons of urine. Men wore spats and women favoured outdoor ankle-length coats not out of a sense of fashion but because of the splash of liquified manure; and it was so noisy that straw had to be put down outside hospitals to muffle the clatter of horses’ hooves. Worst of all, with horses and carriages locked in immovable traffic jams, transport was grinding to a halt in London and other cities.

and

> Motor vehicles were welcomed because they were faster, safer, unlikely to swerve or bolt, better able brake in an emergency, and took up less room: a single large lorry could pull a load that would take several teams of horses and wagons – and do so without producing any dung. By World War One industry had become dependent on lorries, traffic cruised freely down Oxford Street and Piccadilly, specialists parked their expensive cars ouside their houses in Harley and Wimpole Street, and the lives of general practitioners were transformed. By using even the cheapest of cars doctors no longer had to wake the stable lad and harness the horse to attend a night call. Instead it was ‘one pull of the handle and they were off’. Further, general practitioners could visit nearly twice as many patients in a day than they could in the days of the horse and trap.

[1] https://legallysociable.com/2012/09/07/figures-more-deaths-p...

[2] https://www.bmj.com/rapid-response/2011/10/31/cars-and-horse...


> I am much less opposed to internal company testers who know what they're doing than I am to a public beta in the hands of people who believe Tesla's (really egregious) marketing

Which category do you think Elon Musk is more likely to belong to?



