It took a combination of problems to cause that crash. The police lieutenant who had informed the railroad of the parade in previous years had retired, and his replacement didn't do it. The police marshalling the parade let it go through red lights. They were unaware that the traffic light near the railroad crossing was tied in to the crossing gates and signals. That's done to clear traffic from the tracks when a train is approaching before the gates go down. So ignoring the traffic signal took away 10 seconds of warning time. The driver thought the police had taken care of safety issues and was looking backwards at the trailer he was pulling, not sideways along the track. People at the parade were using air horns which sounded like a train horn, so the driver didn't notice the real train horn. That's what an NTSB investigation digs up. Those are worth reading to see how to analyze a failure.
Even if Autopilot was faulty in some way, if more people are living than dying, does it matter?
Similarly, 9/11 would be a footnote compared to sketchy painkiller drug trials. The Black Lives Matter movement would focus more on sentencing disparities and for-profit prison conditions than police shootings, and so on.
The reason it matters here is that it's presumably easy for Tesla to fix any bugs that are killing people.
That is, Autopilot makes certain kinds of accidents more likely — those in which the human driver is not monitoring the vehicle carefully enough, Autopilot misses something critical, and an accident results. It is possible, though not necessarily the case, that these accidents tend to be more serious than accidents that Autopilot prevents, such as lane-changing bumps or other less-fatal accidents.
To be clear, I'm not saying that evidence has been presented that the type of accidents prevented are less important than the type facilitated. I'm just pointing out that not all accidents are equal, and that there may be correlation/causation that makes the overall calculus more complicated.
Now, someone makes a robot that has a 100% success rate, but every 100 operations it has a glitch in the software where it stabs and kills the patient instead. It has been calculated that, on average, using the robot kills only 1-2 out of 100 people going in for surgery, compared to the old method's 80 out of 100. The robot is clearly 40-80 times better than the manual procedure, so why not keep using it?
The answer seems to be: because the robot killing people is not an unavoidable outcome. It can be fixed. It's a software problem, a mistake that costs someone their life. Similarly, an auto-driving Tesla may be killing fewer people on average than manual drivers, but that doesn't mean that any software problems behind those deaths don't matter.
Step 2: Correct the bug, so the robot stops killing patients.
The Tesla Autopilot may be in a similar situation. Even if there are a number of fatality-inducing bugs, disabling it would be even worse. That said, a 40% crash reduction is not enough. I want to see 75%, 90%, and more.
Don't even speak of the bug. Just state what matters: this stuff is better than what we had before, and it can (and will) be even better. This may prevent talks about killer robots.
The biggest difference between an autopilot accident and a traditional accident is fault. Most accidents are driver error. Even if it drives accidents, overall, down (which it will), changing the responsibility for safe travel from mostly the individual to mostly the manufacturer has huge legal repercussions. Not to mention many people's discomfort with no longer being in control of their own safety, even if study after study shows that they shouldn't be.
The Therac-25 arguably saved way more people than it killed. However... it's one of the most infamous software bugs in history.
It's the same story you came up with above... except it's real, and it got pulled.
They did however improve since then.
Apart from that, more people living than dying could also imply that certain groups of people, maybe kids or bikers, could be at a higher risk of dying while the majority has a much lower risk, resulting in a net positive. So the acceptance of self-driving systems would also depend on the distribution of the risk.
If you let your friend drive you somewhere in your car because you think they're a better driver, is it their fault if they crash?
The reason could be anything - you're drunk and they're not, you're in really congested unfamiliar area near where they live, whatever. If you think they're statistically going to be a better driver it does not absolve them of liability. Someone is always to blame.
Autopilot will reduce the number of fatalities in the same way that going to the hospital when you are sick reduces the number of fatalities. However, some people who use Autopilot will die when they otherwise would not have, just as some people going to the hospital will die from infection when they otherwise would not have.
But it doesn't mean that people dying from faults "don't matter". Every HAI-caused death is a tragedy, and they certainly matter.
The issue is much less black-and-white than the GP's simplistic view.
There's no requirement that driver assist software magically become self-driving software in that situation. Put another way: it's nice if the software can avoid a problem, but not required.
The main remaining concern is that drivers not be confused about what they are required to do; there's discussion about that in this document.
This is probably why Musk pulled radar and vision processing in-house. Combining the data from both sensors is more effective, but more difficult, than combining the results from processing both sensors separately.
Other companies are being even more cautious; Ford isn't putting autonomy on the road until 2021.
In the meanwhile Tesla can sell products as Level 2, and perhaps should do so until roads are as safe as commercial aircraft.
So far NHTSA has encouraged such data sharing but they haven't outright mandated it.
ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation.
I had hoped to see more information about this specific incident. For instance, any data on whether the driver had his hands on the wheel, what steps the car had taken to prompt his attention, etc. But that doesn't seem to be included.
IIHS research shows that AEB systems meeting the commitment would reduce rear-end crashes[emphasis added] by 40 percent. IIHS estimates that by 2025 – the earliest NHTSA believes it could realistically implement a regulatory requirement for AEB – the commitment will prevent 28,000 crashes and 12,000 injuries.
Some excerpts from "Effectiveness of Forward Collision Warning Systems with and without Autonomous Emergency Braking in Reducing Police-Reported Crash Rates", January 2016:
"FCW alone and FCW with AEB reduced rear-end striking crash involvement rates by 23% and 39%, respectively. "
"Among the 15,802 injury crash involvements in these states, the percentage of injury crash involvements that were rear-end striking crashes was larger among vehicles without front crash prevention (15%) than among vehicles with FCW alone (12%) or FCW with AEB (9%). Only 4% of rear- end injury crashes involved fatalities or serious (A-level) injuries."
"Approximately 700,000 U.S. police-reported rear-end crashes in 2013 and 300,000 injuries in such crashes could have been prevented if all vehicles were equipped with FCW with AEB that performs similarly as it did for study vehicles."
Did Tesla cars have AEB prior to Autopilot installation? If not, then this suggests the 40% reduction in crashes may simply be due to the installation of an AEB system. What effect Autopilot's other features may have would remain uncertain.
If AEB did lead to a 40% reduction in crash rates for Tesla cars, as it did for other car models, I suspect that moving the dividing line by five months later wouldn't change the figures much: you would still have a lot more crashes prior to AEB and fewer after.
I don't follow the statement about sensor availability. That doesn't seem to change the fact that all the miles driven "after Autosteer" benefited from AEB, while at most a small fraction of the miles driven "before Autosteer" would have had AEB available.
Given that we know AEB systems do reduce frontal collision rates by 40% for all cars, as the NHTSA report stresses, that implies we cannot attribute the reduction in crashes "after Autosteer" to Autosteer alone.
(Which matters because articles such as the one posted are claiming a cause-and-effect relationship between the introduction of Tesla Autopilot and a reduction in crashes, but if a significant portion or even all of the reduction is due to AEB systems that other cars have also started to adopt then we're mistaking correlation for cause.)
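To make the attribution problem concrete, here is a toy back-of-the-envelope calculation. The "before" rate is derived from the 0.8-per-million figure and the reported ~40% drop; the AEB-alone effect size is borrowed from the IIHS figures quoted elsewhere in the thread, so treat all of this as an illustration, not a measurement:

```python
observed_after = 0.8                 # crashes per million miles (NHTSA figure)
observed_drop = 0.40                 # ~40% reduction reported after Autosteer
before = observed_after / (1 - observed_drop)   # implied pre-update rate

aeb_effect = 0.40                    # AEB-alone reduction seen in other fleets (IIHS)
with_aeb_only = before * (1 - aeb_effect)

# How much reduction is left over for Autosteer to explain?
residual = 1 - observed_after / with_aeb_only
print(f"before ~ {before:.2f}/M mi; AEB alone ~ {with_aeb_only:.2f}/M mi; "
      f"residual for Autosteer ~ {residual:.0%}")
```

If AEB alone accounts for a 40% reduction, the residual attributable to Autosteer in this toy model is essentially zero, which is exactly why bundling the two makes the headline claim ambiguous.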
Finally, who said the reduction in crashes is Autosteer alone? Not me. It appears to be a combination of AEB and autosteer. That's what "some of the AEB benefit is in the earlier figure" means.
(Edit: note that the above comment was edited without marking the edit. See below.)
Thanks for explaining. The NHTSA report says:
> ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation.
> Approximately one-third of the subject vehicles accumulated mileage prior to Autopilot installation.
So yes, cars which never had Autopilot installed were not counted, but they obviously did count cars from calendar year 2013 which later had Autopilot installed or else they wouldn't have said "MY 2014".
I don't think it's safe to say that the bulk of the miles driven "before Autosteer" happened between March and October 2015 on cars which had AEB systems available.
> Finally, who said the reduction in crashes is Autosteer alone? Not me. It appears to be a combination of AEB and autosteer. That's what "some of the AEB benefit is in the earlier figure" means.
It's implied by Elon Musk's tweet and all the press articles I've seen that Autopilot/Autosteer is responsible for the 40% reduction in crashes, as I mentioned in an edit to my previous comment.
If you want to debate Elon Musk's tweets and not what I'm saying, respond to Elon. If you respond to me, please respond to what I'm saying. You've totally wasted my time, and all the readers of this thread.
(Edit: Oh, and thanks for not marking your edit an edit.)
I didn't say you did. I suppose you could interpret "I don't think it's safe to say X" as referring to your speech, but I read it as "I don't think X".
> If you want to debate Elon Musk's tweets and not what I'm saying, respond to Elon. If you respond to me, please respond to what I'm saying. You've totally wasted my time, and all the readers of this thread.
Well, I never asked you to debate it. It's relevant because the TechCrunch article that we're discussing does imply that Autosteer or Autopilot is principally responsible for the 40% reduction in crashes, and in my first comment I expressed my doubts about that claim.
I would agree that "a combination of AEB and autosteer" is possibly the real reason behind the reduction in crash rate, but I would point out that while we already have strong evidence that AEB alone can lead to a large reduction in crash rate we cannot prove or disprove that Autosteer's impact is comparable.
Tesla rolled out AEB in March 2015 as a software update for all cars with the appropriate hardware. All Teslas produced starting in mid-September 2014 have the appropriate hardware, so several thousand MY 2014 Teslas have AEB as of early 2015.
I will admit that there have been a couple of times it saw the car I was about to merge into before I did. It is not clear whether I would have seen the car a moment later anyway.
We can estimate the absolute number from Tesla's press release in December stating that 1.3 billion miles have been driven on Autopilot, while the NHTSA report states the accidents they count occur 0.8 times per million miles.
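The arithmetic is straightforward; using only the two figures quoted above (1.3 billion Autopilot miles, 0.8 airbag-deployment crashes per million miles):

```python
def crash_count(total_miles, rate_per_million):
    """Expected crash count given total miles and a per-million-mile rate."""
    return total_miles / 1_000_000 * rate_per_million

# 1.3 billion Autopilot miles (Tesla press release) at 0.8 airbag-deployment
# crashes per million miles (NHTSA report figure).
print(round(crash_count(1.3e9, 0.8)))  # → 1040
```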
So in that case it's data from about 1000 crashes in the Autopilot case, and more in the non-Autopilot case. Which tells us that the statistics should be good enough (anything else would be a major scandal).
However, that is not what they are comparing! As the NHTSA report states in footnote 21, they're not actually comparing Autopilot miles to normal driving. They are comparing crash rates in vehicles before and after Autopilot was installed in those vehicles. That muddies the waters quite a bit in my opinion, especially when they don't tell us how significant this decrease is. I'm not saying it's wrong, but I'm also not very confident that it is a causal effect, as opposed to just a correlation.
Here is a PDF link to the NHTSA report, since TFA only links to a Scribd (thus unusable) version.
It just seems...odd.
A lot of people drive a lot of miles which is why there are a lot of crashes. So you can argue either side of this depending on how you want to twist the numbers.
Of course, whatever algorithm they are using won't work if the car is driving itself; you can't judge reaction time if all the driver is doing is holding the wheel.
implying that law enforcement won't work with homeland security to subpoena your data at cost under a gag order :)
[Seriously, though. I don't want to downvote everyone in a chain but there's probably a more HN/SNR-friendly way to comment on potential abuses of automaker access to social media feeds.]
immediately pull over and shut down
If your goal is to make things safer I don't think this will help.
And my other suggestion about a TomTomagotchi that you have to regularly drive around to different places to keep happy didn't go over very well, either. Maybe a petroleum company would be more receptive to that idea, though.
It's such an odd patent; perhaps it's a high-quality signal about their ability to execute on self-driving.
I think there is a good SaaS business in A/B testing robot/car software OTA.
I would imagine some metrics would be:
1. car speed
2. battery consumption per mile.
3. user comfort metrics (if it's possible to measure: maybe heart rate, body temperature, etc. I can imagine that if the robot was driving badly, it'd be visible in heart rate or something like that).
The list probably goes on. It would be interesting to see how much battery consumption changed after OTA, for example.
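A minimal sketch of what such an OTA A/B comparison might look like, using metric 2 from the list above. All telemetry values here are made up for illustration; a real pipeline would need far larger samples and proper fleet randomization:

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical Wh/mile telemetry from cars on the old vs. new firmware.
before = [285, 290, 301, 295, 288, 292]
after  = [279, 283, 286, 281, 290, 277]

t = welch_t(before, after)
print(f"mean before {mean(before):.1f}, after {mean(after):.1f}, t = {t:.2f}")
```

A positive t here suggests the update lowered consumption per mile, though with samples this small you would not draw any real conclusion.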
6.2 was announced in March, 2015.
You mean drivers like the ones behind the wheel of all the BMWs and Audis that cut me off with no turn signals, speed, make aggressive lane changes, tailgate, and generally act like giant douches because a simple traffic ticket is nothing to their pocketbook while their time == money?
Those people are going to be a hell of a lot safer behind the wheel of an autonomous (level 4) vehicle where they can be on the phone and their laptop as the vehicle obeys the speed limits and safe following distance.
Could just be the numbers of those kinds of cars in my area though.
Generally I feel that a douche can drive any type of car, and I try to drive in fear like everyone else on the road is a drunk prison escapee.
(1) provisional licence holders - new drivers (and also people who have lost their licence) have to display "P" plates for a year here (Australia).
I've seen jalopies driven like someone has been up on speed for 4 days straight.
Or the drivers in cheap sensible cars that are terrified of merging onto the freeway faster than 40 mph.
And the distracted drivers who realize they're about to miss their exit so they hammer on the brakes and put on their turn signal and completely fail to match speeds.
Ricers with aftermarket exhausts are another kind of car that usually drives way too fast and aggressively compared to how their lowered suspension actually performs.
The people in road tanks who seem to split the difference between terrified and aggressive and expect you to just get out of their way because they're bigger.
I'm probably missing entire swaths of shitty drivers that I can't think of right now because my blood sugar is a bit low.
Somehow I manage to consistently and safely undertake people who are going slow in the left lane. It's not that hard.
Finland has a system where a traffic ticket is a certain percentage of your monthly income. For example, running a red light is typically fined for 25% of your monthly income.
You can buy traction control, ABS, better suspension, steering, better tires, etc. None of that will make the idiot behind the wheel any better, although it may lead to overconfidence.
Thanks for making my point for me though.
"Results Studies 1 and 2. Our first two studies were naturalistic field studies, and examined whether upper-class individuals behave more unethically than lower-class individuals while driving. In study 1, we investigated whether upper-class drivers were more likely to cut off other vehicles at a busy four-way intersection with stop signs on all sides. As vehicles are reliable indicators of a person's social rank and wealth (15), we used observers’ codes of vehicle status (make, age, and appearance) to index drivers’ social class. Observers stood near the intersection, coded the status of approaching vehicles, and recorded whether the driver cut off other vehicles by crossing the intersection before waiting their turn, a behavior that defies the California Vehicle Code. In the present study, 12.4% of drivers cut in front of other vehicles. A binary logistic regression indicated that upper-class drivers were the most likely to cut off other vehicles at the intersection, even when controlling for time of day, driver's perceived sex and age, and amount of traffic, b = 0.36, SE b = 0.18, P < 0.05"
But given that they've shown this 40% crash rate reduction in their current demographic, it's reasonable to expect to see a similar reduction in other demographics (maybe not of the same magnitude, but at least in the same direction).
A Tesla is not a cheap car. I kind of suspect the average Tesla driver is more responsible than the average driver. It may be the case that the average Tesla driver is very well off and treats the car as an expensive toy, though, so it's tough to say without numbers. If I had to bet right now, I'd bet on the more-responsible side.
If you study rich drivers I have no doubt that you would find that they cut people off more, overtake in more dangerous situations, use their indicators less, drive over the speed limit more, proceed through amber signals later, run red lights more often, and generally show less regard for the rules than other drivers.
This behaviour will also correlate to other areas of life where richer people simply care less for the rules. They're more likely to use social contacts or money (e.g.: "lawyer up") to get themselves out of any trouble they find themselves in.
Teslas are not cheap cars. Given that money is (social evidence of) power, the people buying Teslas (and other expensive cars in general) are likely to expect other drivers to respect their status as powerful people.
I would be willing to bet money that the insurance actuaries that actually calculate risk disagree with your extremely prejudiced opinion.
You're assuming that accident rates are directly correlated with claim rates, however. A wealthy individual is less likely to claim for a minor accident, preferring to pay out of pocket for small repairs rather than take a big hit on their premiums. Someone less wealthy may not have that option.
There are other reasons why wealth may reduce your risk to insurers that are unrelated to safe driving. You're less likely to leave a car parked insecurely in a dodgy part of town, for example.
Finally, "drivers of luxury brand cars" certainly aren't the same set as "wealthy people with good credit records". Plenty of lower net worth people can still manage a lease on a BMW or Mercedes, or buy a used one.
Drivers of luxury cars cause more accidents, insurers say:
I'm still waiting to hear how your personal opinion about wealthy drivers is more accurate than auto insurance actuaries that calculate risk for a living.
FWIW, in case you were unaware, Hacker News doesn't allow commenters to downvote direct replies. So whoever might've downvoted you wasn't the parent, and someone corrected it with an upvote or undown (your comment isn't gray as of this writing).
This list is not strongly related to risk /of/ accident, but is correlated with severity (a small change is likely to have a small correlation with severity).
* over the speed limit
* overtake in (more) dangerous situations
* through amber signals later
All of these are very bad things that everyone should avoid doing, and signals should be given starting 5-10 seconds before the action.
* use indicators less
* cut people off
* run red lights
That suggests the current driver profile is possibly safer than the overall population, since it's unlikely a large percentage of young drivers own a $100k car. It seems like the effect of widespread Autopilot could be greater if it included this cohort.
In 2015 2.44 million people were injured in car accidents. That's a much more telling figure of just how bad most drivers are. Apparently our cars are just really safe.
There are some truly consistently horrible drivers. But in general most people do ok most of the time. I think the real challenge is cutting of that really bad end of the distribution, which autopilot seems to be doing.
Similarly, airplane pilots spend so much time in autopilot mode that they forget how to operate the plane in emergency situations.
20 years ago they redid the traffic pattern where I used to live into something many people found really confusing and claimed felt dangerous. A study 2 years later found that, while it was true that accidents on that part of the road had increased, the number of injuries due to accidents had plummeted. A follow-up study a few years later found that accidents had dropped back to around their original level and serious injuries were still basically at zero.
Another factor is the extent to which bad driving doesn't result in an accident, either because of infrastructure, other drivers reacting, or just a lack of cars. For example, most red light runners don't cause a crash because stoplights usually have an all-red period to compensate, there often aren't other cars to crash into, and other cars might brake to avoid crashing into the culprit.
Many crashes require two bad drivers to misbehave at about the same time. For example, I couldn't tell you how many times I've had somebody change lanes into me, but it hasn't (yet) resulted in a crash because I've seen it and gotten out of the way before they hit.
love you, internet.
I assume that, due to the higher cost of a Tesla, better-educated people are opting for them over the alternatives because of the environmental benefits.
I mean, who do you think is more likely to buy a Tesla: A 20 year old fresh college graduate, or his 'uneducated' father who has been saving and investing for 30 years and just sold his lifelong home in San Francisco? My bet is on the latter. The boomers in particular are considered the wealthiest generation ever, but not the most educated generation ever.
I'm rather fascinated by how many are questioning the education thing here though. Coming from a farming community, where every (older) farmer I know is multimillions simply by virtue of having purchased farmland when they were young, it's difficult to see how education had any impact on that wealth. Some of them do have educations, some don't. It doesn't seem to make any difference.
OP linked wealth with being better educated, and also inferred that better-educated and wealthy people would be better drivers. However, as others have pointed out, there are studies showing that wealthier individuals make worse drivers (the studies don't appear to factor in education level, and wealth is determined by brand of vehicle).
I disagreed with OPs sentiment that fewer wealthy (and therefore worse educated - in OPs opinion) drivers would result in more crashes.
Doesn't that contradict what you said before: "Are you assuming the more wealthy owners have lower levels of education?" Although I think it is quite reasonable to assume that the wealthy do have less education, statistically speaking, given the nature of wealth and the more recent focus on educating the populace.
Heck, when I graduated high school in 2000, only 64% of us did graduate. The graduation rate for high school, less than two decades later, is now over 80%. That's substantial growth over what is a fairly short period of time, all things considered. And the rate gets worse the further back you go.
> I disagreed with OPs sentiment that fewer wealthy (and therefore worse educated - in OPs opinion) drivers would result in more crashes.
That was my misunderstanding. I thought you meant that you did not agree with the sentiment that wealthy people could be less educated. Thanks for the clarification.
The imperfect, incomplete, beta, level 2 self driving cars that were supposed to be the "dangerous" area of self driving are ALREADY better than human drivers.
Can we stop the politics and deploy all the real self driving cars to the road immediately, since the government has proven that even the shitty variety is safer than humans?
Tesla's system is level 2, meaning that the driver must (as in, is supposed to) remain engaged and aware of the driving task at all times, although the car will handle common cases on its own.
I believe the plan for everybody is to go straight from level 2 to level 4, since it looks like level 3 is just not going to work.
In any case, we can't stop the politics and deploy all the real self driving cars to the road immediately, because there aren't any real self driving cars to deploy yet. The technology is advancing extremely quickly and it's getting close, but politics is not the only obstacle.
If and when we have a real, production-worthy self-driving system that's being blocked by politics for no good reason, we can revisit this. I get the sense that, at least in most places, the politics will be pretty easy once the technology gets there. Getting a lot fewer registered voters killed has a way of swaying legislators. Especially in the US, where it's a state-by-state decision, a few states will want to get a jump on it and then the rest will face immense pressure to follow.
The current Tesla autopilot is obviously working well for many people, but for myself I have to wait until level 4. Anything more hands on than that is inherently unsafe because of my habits.
Computer vision and speech recognition could be useful for proving the driver's awake, as well as driving the car itself.
In addition, not having to spend as much energy just keeping the car on the road may help to prevent getting tired. Then again, maybe it won't have any impact.
As for durations at the wheel, I did a Virginia to Wisconsin round-trip over the holidays, and have done other trips of similar lengths in the past. Of course, I'm not spending more than about three hours at the wheel at a time, since the car needs to charge and I have various physiological needs.
Modulating my speed keeps me engaged with driving and I use it as the trigger to cycle through my responsibilities. I check ahead / speed, adjust throttle, check rearview mirror, check left mirror, check right mirror, repeat. When the throttle step is gone I find myself losing situational focus very quickly. I've always thought it was because it was the only action that involved actual physical engagement; all the rest are just eye movement.
I just bought a new car that alerts about possible icy conditions when it's below 4 degrees C, but it only does so after the car has been running for several minutes. By the time it goes off I'm already driving and I mistake it for an engine warning or other catastrophic failure. Every time. I live in Canada; of course it's icy in the winter. Toddlers could have told you that. This is an absurdly dangerous thing to have the car do, because when the beep goes off all my attention is on processing the alert, not on the pedestrian I'm about to run over. I have the same basic fear about taking control of a level 2 or 3 car. All my focus will be on why the car is handing me control, not on the actual task of driving.
>The Uniform Vehicle Code states: Upon all roadways any vehicle proceeding at less than the normal speed of traffic at the time and place and under the conditions then existing shall be driven in the right-hand lane then available for traffic ...
>All states allow drivers to use the left lane (when there is more than one in the same direction) to pass. Most states restrict use of the left lane by slow-moving traffic that is not passing. The table below describes the law in effect in each state.
Or you are driving at the speed limit on a 2x2 highway used as a trucking corridor. (For example: most of I-5.)
The right lane is pretty much for trucks going 55.
The left lane is a combination of:
* Trucks going 60-65 to pass a truck in the right lane
* 'Typical' cars who are going 75ish
* Fast cars who are going 80-90
So there is a huge disparity of speed in that lane -- and that disparity seems like it must be a huge factor in many highway accidents.
It always feels really dangerous. I wonder if it is safer than just having the trucks have the same speed limit as the cars.
I thought it allowed the car to require some parameters (such as road type or weather). If my car can handle the hours-long uninterrupted highway sections of a road trip, yet still require manual driving on city streets, that is still a huge value proposition for time & safety.
In my mind, if a level 4 vehicle is unable to reach a given valid destination without assistance, that would be considered a failure.
In reality, macro-scale rerouting around obstacles that the self-driving system does not know how to overcome is not a particular challenge, GPS systems do this easily. The definition doesn't say it has to take the most efficient route. ;)
Oh, that's going to go over really well. Say you're drunk, and want to go home in your self-driving car. However, there's a blind left turn that your car knows it can't safely navigate, so it happily routes you a 15-minute drive through back roads for a trip that should take 5. People are going to love that.
(I've heard that UPS does route planning with a system that discourages left turns, even sometimes doing three rights to avoid some dangerous lefts.)
Many of those scenarios would allow some time for the driver to take over, and would most likely be able to pull over if the driver can't or won't for some reason.
However, Arizona would be a beautiful place to have fully autonomous cars.
The snow bunnies make driving there a nightmare.
The truck would know well in advance which exit it would take and could give the driver a heads-up some minutes before they need to take the wheel.
>22 The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual
Because, of course, there were no Autopilot miles in the "before" bucket, you would be skewing the analysis if you only looked at Autopilot miles in the "after" bucket.
What we know is that after the Autopilot feature was installed, accidents with airbag deployments fell by 40%. Not just during so-called "safe" Autopilot miles, but over all miles driven.
Now it's plausible that autopilot makes you a better driver both while it's on and off, but I'd say it's just exceedingly common to rear-end people due to distracted driving.
> the driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task
Moreover, if experience showed that TACC reduces safety, NHTSA and its counterparts in other countries would have noticed and put the brakes on it. If you don't think this is a reasonable assumption, so be it.
The problem here is that you're mis-specifying the null hypothesis. We have had human drivers for 100 years. New automated systems start to come into the fray. We're trying to figure out if they are safer than humans. The null in this case is that they are not. That's what we assume until proven otherwise.
No, we're not. We're trying to figure out if TACC has any impact either way. Accordingly, the null hypothesis is that they are no more and no less safe than human drivers alone. You need data to show the direction of the change. You just picked a direction (i.e., towards worse) that was more intuitive to you.
>I think it is reasonable to assume that Tesla TACC is not less safe than Tesla non-TACC, therefore the conclusion is also relevant to comparisons between Tesla level 0 (entirely human-driven, no assist) vs. Tesla level 2.
Is it really reasonable to assume it's not less safe in this case? Neither of us can say either way; it seems like it would be impossible to justify the assumption in either direction.
Yes, because this assumption is consistent with the null hypothesis. The negation of this statement is that it is strictly less safe, which is not consistent with the null hypothesis, and for which there is no evidence.
These two together lead us to your null, which is that it is as safe.
When a new unproven technology comes out does it seem correct to automatically assume that it is as safe as the current standard? Would you not expect some proof?
In the absence of any data or given priors whatsoever, we should give the three possibilities (less safe, as safe, more safe) equal weight. This means that there is a 33% chance it's less safe, which is too large to ignore. That's why we want more proof when we deploy new tech, introduce a new drug, etc.
But note that specifically in the context of TACC as it stands today, we are not in that situation. There has been plenty of data accumulated already, which constitutes the proof you are looking for (i.e., the data we do have doesn't show a decrease in safety.) That's why today it is reasonable to assume that TACC is not less safe than no TACC.
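The "doesn't show a decrease in safety" claim is a one-sided test: the null is that the post-installation crash rate is no higher than before, and we ask whether the data rejects that. A sketch of such a test with invented counts (treating crashes as Poisson in miles driven; a normal approximation stands in for a proper exact test):

```python
import math

# Hypothetical numbers, illustrative only. Null: the after-rate is not
# higher than the before-rate; a large positive z would be evidence against it.

def rate_increase_z(crashes_before, miles_before, crashes_after, miles_after):
    r_before = crashes_before / miles_before
    r_after = crashes_after / miles_after
    # pooled rate under the null of equal rates
    pooled = (crashes_before + crashes_after) / (miles_before + miles_after)
    se = math.sqrt(pooled / miles_before + pooled / miles_after)
    return (r_after - r_before) / se

z = rate_increase_z(13, 10_000_000, 8, 10_000_000)
print(round(z, 2))  # negative z: no evidence here that the rate increased
```

With these made-up inputs the z-statistic is negative, so the data is, if anything, pointing the other way; that is the sense in which accumulated data can "fail to show a decrease in safety" without proving an improvement either.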
Haha, well we could have totally sidestepped this whole thing then! Can you show me the data?
So yes, it is literally human vs. computer-driven safety.
You are bringing it up, but I don't think you are doing this for any reason other than to be contrarian.
Also, ALL cars have cruise control right now. So the 40 percent STILL applies to all these cars.
The correct way to phrase the study is: "99 percent of cars are level 1 autonomous cars. For this 99 percent, going from level 1 to level 2 would make them 40% safer."
You shouldn't assume intentions. A ton of people here are misreading this study (or should I say not reading it).
> Also, ALL cars have cruise control right now. So the 40 percent STILL applies to all these cars. The correct way to phrase the study is: "99 percent of cars are level 1 autonomous cars. For this 99 percent, going from level 1 to level 2 would make them 40% safer."
The comparison here is only on Tesla so accordingly it's only on TACC vs TACC+Autosteer. TACC is fundamentally different than the type of cruise control that is in basically all cars.
The name is good for technical people, not so much for lay people who don't know that autopilots are more like cruise controls than something that completely flies the plane (they have been getting better, though). The 40% reduction in airbag activation seems to indicate the name is good enough, even if it's probably not optimal.
Depending on how all the numbers would work out, making other drivers safer could well make you safer too. It might not, but it could.