As far back as 2016 they were claiming they had full SDC capability above human-driver safety, and their recent Model Y announcement suggests that the only thing holding it up is regulatory approval, not a failure to achieve the desired spec.
>Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.
 https://news.ycombinator.com/item?id=19397942 (linking HN because it's hard to find the text in the page with their UI)
It is obviously troubling to see self-driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans. You and I certainly don't have enough data to say one way or another on that. I would bet even Tesla doesn't have enough data to say definitively. However, writing this tech off as unsafe just because it makes what seems like an obvious mistake is a great way to slow progress, which will result in more human deaths long term.
Indeed, I also find it disingenuous to claim Autopilot is safer than a human driver even though it actually can't function at all in 95% of situations on the road. Guess saying "possibly safer than a human driver at these 3 things and only under these very specific conditions" isn't so catchy. Definitely not thousands-of-dollars catchy.
# only on specific workloads
& don't ever get distracted because it can literally kill you
I know that sometimes these lines are subjective, but acting like you can't tell okay from not-okay just because the line is gray in some cases seems like bad faith.
If you are okay with Tesla misleadingly selling a prototype feature in the name of disruption, own your argument. If this level of risk is just okay to you because of the potential long-term upsides, I disagree, but I respect your willingness to say it. Just don't make a different argument because you are afraid of the social consequences of your actual point.
Like here's a recent article swooning over it coming out sometime soon:
The article there also says that the more capable version will require new hardware, which isn't something they have been admitting very readily over the years.
As long as Tesla says that driver vigilance is required at all times while simultaneously promoting Autopilot as if it were true automation, the risk to third parties from Autopilot users who don't understand what's going on here is unacceptable.
The feature is called 'Full self driving'. Their sales reps regularly told customers to take their hands off the steering wheel of their "Autopilot". Sure, the fine print says "always pay attention", but there's an entire marketing scheme going on here which is borderline dishonest.
In fact, the not-so-fine print in the user manual says to keep your hands on the wheel, but the promotional material tells a different story, as does Musk when he takes his hands off the wheel on national television.
And where is the research to show that? To prove it with any statistical significance, they would need millions and millions of miles to compare against human drivers. This is new technology, and making false claims about its safety is dangerous.
We as a society force drug companies to rigorously show that their drugs do what they say they do and all of their side effects are properly accounted for. This should be the case here too.
Bizarrely, this report was from the NHTSA, raising the sort of regulatory capture issues that have surfaced between Boeing and the FAA.
Also, when such a study is performed, we must be careful that technologies are not conflated to claim more than can properly be claimed. The effectiveness of less powerful technologies such as lane-keeping assist and automatic emergency braking tells us nothing about the safety of full-authority automated driving.
Human drivers who are distracted do that all the time. AP is supposed to avoid that; it is supposed to be alert all the time. But when it sees a stationary object, the result of their algorithm is, "it must be a sign we can somehow go through."
We need to make it very clear that self-driving cars are better than driving drunk or driving when you haven't slept in 24+ hours, but if you are a driver who pays attention, don't use this tech.
And it's not that I don't want the tech to take over the world. I wish I could just put my kids in a self-driving car and have it take them to the school that is 4 miles from home. But we are nowhere close to that, even with me in the driver's seat, if I only have seconds to take over before I end up in a ditch or worse.
Sure, a bug in a routine that's being called is maybe easier to fix, but the real-world performance is still the important metric to track.
I am not saying we shouldn't be critical of Tesla or hold them accountable for their product. My point is simply that their product doesn't have to be infallible for it to still be an improvement over the current solution.
A self-driving Tesla is not objectively better than a regular, human-driven car. The jury's still out on whether any self-driving car is even as good as the average human driver, so if one of them has a bug that causes serious accidents in reproducible situations, that's not better than the baseline situation it's being compared to.
Unless it goes off when it's not needed and kills you.
The same cannot yet be said about "self-driving" cars.
That assumes the pacemaker works more often than it doesn't. (Which is the case now.) It's an unstated assumption that doesn't always apply when generalizing your example.
Even if the pacemaker fails 90% of the time, a 90% chance of death is better than a 100% chance without it.
Unless the person wearing the pacemaker is driving a non-self-driving car.
So it's not enough to be better than the average driver. It needs to be as good as a "good" driver or better before I would trust it.
It's unethical to leave life-and-death decisions to a black-box algorithm, when I know it was written for the primary purpose of gathering more money. Safety is a constraint, not their goal.
The power grid, railways, traffic lights, elevators, etc.: all these systems are critical and closed source, and you don't see them killing people on a wide scale, nor even on a small but regular basis.
And it may be irrational, but wrongly-activating brakes feel like less of a risk than wrongly-activating steering or accelerators.
And anyway, “lots of things are untrustworthy” is not a good argument for trusting something else.
If the car is anywhere from slightly to quite a lot safer, but accidents that result in injuries/deaths occur, then it is not ok.
Psychologically you may feel this to be "right", but I would prefer a world with fewer injuries and deaths all round. And one day the courts will too.
Imagine lung cancer, for example: if we take the average across smokers and non-smokers, the risk is about 6%, but if I decide not to smoke, my own risk is about 0.2%. So in this case I can do better at lowering my chances myself than by accepting some other solution that puts us all at 5%, which might seem okay on average, but not for me.
Also you aren't considering that other drivers can be the cause of a potential accident. You can assume that the other drivers on the road are average drivers with all the distractions that come along with that. If you make the other drivers on average slightly safer, that improves your safety even if your behavior is completely unchanged.
If you have to pay attention, then you might as well be driving.
This is basic human nature.
Anyway, I have yet to see a video of a Tesla doing emergency braking because of a highway pileup where it isn't only a couple of seconds after a human driver could already see a row of red brake lights in front.
As for the rare unpredictable stuff, I might not be able to predict, say, a ladder falling off a work truck, but I can recognize it could fall off, prepare for that type of event, probably even recognize it before the car does, and if I don’t die, I can learn from it.
I don’t really see how also having to worry about the car itself creating a rare event helps. Now I’ve got to spend additional time deciding whether to take over and if I do, the time I have to react is reduced.
Yeah, other drivers are also part of this, but even then I would like something a lot better than average. Sure, there is a chance a distracted teenager is driving somewhere around you, but it's not better if there are now 10 slightly-less-distracted teenagers around you.
Average is really not a great measure here if we are talking about self-driving cars.
All that being said, I do think self driving is what we will inevitably arrive at...we just need to have a higher degree of confidence in it before it's widely used in my opinion.
Unless something has changed, the Tesla self-driving tech intentionally ignores stationary objects.
And Tesla is claiming this is a feature, not a bug.
Am I understanding this correctly?
I don't know anyone who isn't suicidal who intentionally ignores stationary objects while driving.
Edit to add: apologies for terse response, a bit distracted at present
I fail to understand this line of reasoning. Are you saying that because humans tend to run into solid objects, it is okay if self-driving cars do it too, as long as they do it fewer times than humans?
puts a lot of arguments (and young people) to rest.
> The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans.
If Tesla crashes 1 in 100 drives, it's absolutely a death trap since that's still an obscenely bad accident rate.
See https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t... for details and verification.
>So does that mean that Autosteer actually makes crashes 59 percent more likely? Probably not. Those 5,714 vehicles represent only a small portion of Tesla's fleet, and there's no way to know if they're representative. And that's the point: it's reckless to try to draw conclusions from such flawed data. NHTSA should have either asked Tesla for more data or left that calculation out of its report entirely.
I will go back to my original statement at the start of this thread. No one in these comments has the data to say definitively whether Autopilot is safer than a human driver. I am skeptical that Tesla even has enough data for that. But I am also skeptical of people who take that unknown and the occasional anecdotal data point like the above video as proof that Autopilot is inherently less safe than humans.
We've heard that before, for the Model X.
One crash/incident doesn’t mean it’s less safe than a human driver on average, even if it’s something a human driver might have avoided.
We should expect different failure modes from a machine, but adopt it anyway if it avoids enough human failure modes to make up for it.
Source: my thermostat thought 33C was an acceptable temp once, but I still didn’t switch to full-manual HVAC control.
Like you say, this number is really valuable only for Tesla marketing.
There can be a difference between a $75k ICE driver and a $75k electric driver.
The point is, one stupid decision by a machine doesn’t prove much.
Or how about 1 stupid decision by a radiotherapy machine? Ever heard of THERAC-25?
In each case, it was just 1 stupid decision by a machine.
The entire reason Engineering as a practice is a thing is because when you implement the capacity for a stupid decision into a system that is then mass produced, dire consequences can result.
I look down on any thought process that doesn't discriminate between 0 and 1.
If the system provably worked, that decision would not have happened (0). The stupid decision happened (1), however, which means it can happen again at a poorly understood confluence of circumstances.
To err is human, and we forgive each other every day for it.
To err as a machine is a condemnation to the refuse bucket, repair shop, or back to the drawing board.
To err so egregiously as a machine to cause an operator and those around them to lose their life is willful and moral negligence on the part of the system's designer. Slack is cut when good faith is demonstrated, but liability is unambiguous. The hazard would not be there if you hadn't put it there.
Does that mean the flu vaccine should get dumped in the refuse bin?
Similarly, some other aircraft flying today/tomorrow has automation with an unknown bug/issue that will cause loss of life. Should we disable everything except the 6-pack and stick and rudder?
It would have saved the lives killed by automation, but we would have more aviation death overall.
Aviation does not have that excuse. The 737 MAX 8 system description is enumerated from the ground up. Seeing as there was so much recertification effort that didn't need to be done, it makes the failure to properly handle the MCAS implementation all the more damning.
This wasn't some subtle bug. This was an outright terrible design choice. Anyone with any experience composing complex systems out of smaller functional building blocks should have been able to look at the outputs, look at the inputs, and realize there was the potential for catastrophic malfunction.
As I've said elsewhere, automation should make flying a plane easier when functional. When non-functional, however, the pilot should still be able to salvage the plane. That requires clear communication of what the automation does, and what its failure modes are.
They need sensor fusion. The system needs to make maximum use of all the information available to it: Where is the road striping? Where are the other cars going? Where are the road signs and signals? (If there's one in your path, you certainly shouldn't drive into it!) Are there camera-visible obstructions? What were the interpretations and actions of previous Tesla trips along the same route?
In these problem cases, all data except the left and right lane striping seems to be completely ignored. There was even more information at the fatal offramp location (cross-striping over the lane separation zone), which the vehicle drove straight over. The system is not making maximum use of the information available to it, in fact it is using hardly any of it at all, and fixating on what it thinks is a single most salient piece of data.
Sensor fusion algorithms tend to behave the opposite way-- each additional piece of data informs the interpretation of all the other data. You can have very poor-quality data, but if it is even moderately over-constrained, your state estimate can be very good in spite of it. I think it would be completely reasonable to have a neural net in the loop of a sensor fusion algorithm, with fusion constraints informing the NN's interpretation, and the NN's estimates feeding back into the fusion algorithm as uncertain data.
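To make that concrete, here's a minimal sketch (my own illustration, not anything from Tesla's stack) of inverse-variance fusion of several noisy cues about the same quantity, say the car's lateral offset from lane center. The cue names and numbers are made up; the point is that several individually poor measurements still yield a tight combined estimate when the problem is over-constrained:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent noisy estimates
    of the same quantity (e.g. lateral offset from lane center, meters)."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Hypothetical cues, each individually poor: lane striping, lead-car
# track, map prior, prior-trip trace.
estimates = [0.40, 0.10, 0.20, 0.15]   # meters from lane center
variances = [0.50, 0.30, 0.40, 0.20]   # each cue is noisy on its own

fused, fused_var = fuse(estimates, variances)
print(f"fused offset = {fused:.2f} m, variance = {fused_var:.2f}")
# The fused variance (~0.08) is far smaller than any single cue's,
# which is the point: mediocre but redundant data -> a good estimate.
```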
IMO Tesla will do at least one of:
* Very expensively retract their promise of full self driving for delivered vehicles
* Completely overhaul/redesign their driving software and start again nearly from scratch
* Get into a regulatory/legal tangle with NHTSA, the courts, or the DOJ over the people their system has killed.
I notice that the temporal coherence is pretty bad-- Pedestrians pop out of recognition when they go behind trees; lane/exit boundaries wiggle all over the place and occasionally frame-pop into different configurations. A Kalman filter, for example, is a state estimator which maintains temporal coherence, and makes heavy use of previous estimates/sensor inference when computing the most updated estimate. It doesn't look to me like that kind of strategy is being used to maintain the vehicle's world model. IMO a good estimator wouldn't treat "a pedestrian popping out of existence" as the most likely estimate for any circumstance, let alone one where they were clearly present in the previous 50 frames. I don't doubt they're using KF on the vehicle's inertial movement, but based on the failures and this video, it sure doesn't look like it's using a fusion technique for the world model.
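For anyone unfamiliar, here's a toy example of what I mean by temporal coherence: a bare-bones constant-velocity Kalman filter tracking one pedestrian's position, coasting through frames where the detector returns nothing. Every parameter here is invented for illustration; it's a sketch of the technique, not Tesla's tracker:

```python
import numpy as np

dt = 0.05                           # 20 Hz frames
F = np.array([[1, dt], [0, 1]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])          # we only measure position
Q = np.eye(2) * 1e-3                # process noise
R = np.array([[0.25]])              # measurement noise (detector jitter)

x = np.array([[0.0], [1.4]])        # start: 0 m, walking 1.4 m/s
P = np.eye(2)

def step(x, P, z=None):
    # Predict forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:               # a detection is available this frame
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P                     # if z is None (occluded), just coast

# Pedestrian visible for 50 frames, then hidden behind a tree for 20.
for t in range(70):
    z = np.array([[1.4 * dt * t + np.random.randn() * 0.5]]) if t < 50 else None
    x, P = step(x, P, z)

print(f"estimated position while occluded: {x[0, 0]:.1f} m")
# The filter keeps extrapolating the track instead of concluding the
# pedestrian ceased to exist; uncertainty (P) grows until re-detection.
```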
There are left and right-looking cameras, but the FOV overlap between them is not very substantial, and there can't be stereopsis where there is no overlap. Per the Tesla website, there are three forward-looking cameras, and they each have a different FOV. The parallax baseline between them is only a few centimeters, too, so the depth sensitivity isn't going to be spectacular. It's certainly possible that there could be some narrow-baseline stereo fusion, but it could only really happen inside the narrowest field of view, where the coverage overlaps with more than one camera. That's the circumstance where having a narrow baseline would hurt the most. Based on that it doesn't really seem like the system is well set-up for stereopsis; if it's there it seems like an afterthought.
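Some rough numbers on why the baseline matters. Depth from disparity is Z = f·B/d, so the depth error per pixel of matching error grows like Z²/(f·B). The focal length and baselines below are assumptions for illustration, not Tesla specs:

```python
# Depth resolution from stereo disparity. Illustrative numbers only.
f_px = 1500.0                            # focal length in pixels (assumed)
for baseline_m in (0.05, 0.50):          # ~5 cm baseline vs. a hypothetical 50 cm rig
    for Z in (10.0, 50.0, 100.0):        # object distance in meters
        dZ = Z**2 / (f_px * baseline_m)  # depth error per pixel of disparity error
        print(f"B={baseline_m:.2f} m, Z={Z:>5.0f} m -> ~{dZ:6.1f} m error per pixel")
# With a few-centimeter baseline, the depth uncertainty at highway
# distances is tens of meters per pixel of matching error, which is
# why narrow-baseline stereo adds very little at range.
```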
I could certainly be wrong, as I don't have access to the code. Are you going by some other secondary source/information?
So it's not so clear cut as you make it out to be.
(And you know what? Even if it were legal for me to drive without those glasses, I'd still drive with them. Because ranging is important!)
Also quite interestingly he says there will be a big jump forward in quality when they switch to their own computing hardware (18m40 or so)
Musk is often overly optimistic and keeps underestimating the problem at hand. I call BS on this; it won't be ready in 5 or even 10 years. And then there is regulatory approval.
What Musk says and what Tesla delivers are two completely separate things.
A Disney park engineer once relayed to me the philosophy for designing safe attractions in the parks: "If there's a one in a million chance of it happening, it'll happen multiple times per year," given attendance numbers which are in the millions.
A self-driving car needs to handle ordinary commute circumstances with 100.0% reliability, and one-in-a-million circumstances (which statistically you will have never personally encountered) with reliability literally above 99%.
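The arithmetic behind that rule of thumb, with assumed exposure numbers:

```python
# Back-of-envelope for the "one in a million" point. Numbers are assumed.
p_rare = 1e-6                 # chance per ride (or per trip) of the rare event
annual_exposures = 20e6       # e.g. ~20 million rides/trips per year (assumed)
expected_per_year = p_rare * annual_exposures
print(f"expected occurrences per year: {expected_per_year:.0f}")   # ~20
# Same logic for driving: a failure mode that shows up once per million
# miles still shows up constantly across a large fleet.
```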
Do you find yourself paying attention to the road context the entire time? I'm curious if your mental acuity has dropped over time as the car has driven you.
It is very sad that a lot of people in this forum are hoping this never works. It is one of the most exciting advancements in technology that can benefit us all, but people here seem to be more interested in seeing Musk and Tesla fail rather than hoping they achieve this and bring the whole industry forward one more time, affecting millions of lives.
Top level OP here. I am personally rooting for self-driving to succeed and catch on. I just don't think Tesla's current strategy is likely to work, and their cavalier, unjustified overconfidence is either going to sink them or kill people, or both, neither of which is good for the future of self-driving.
NTSB analysis of the March 23 2018 collision: https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...
Tesla's statement of the March 23 2018 collision: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...
Just discussing the incident in terms of a "software bug" does a disservice to the severity of the issue.
That being said, drowsy driving is a thing, and it's very easy to fall asleep behind the wheel. The car really needs a better strategy to handle this situation.
For the entire next year, my AP 1.0 (which is non-Tesla technology -- Mobileye rocks) had no trouble doing adaptive cruise control and lane assist. Meanwhile his AP 2.0 would brake suddenly and swerve all over the place. It took a full year of OTA updates before his AP 2.0 was finally on-par with the functionality that I had the whole time. Of course, by then Tesla pulled a "we're sorry, but the princess is in another castle" and came out with AP 2.5.
Now this kind of stuff doesn't matter to me. I got tired of that company's shit and have pulled out of the Teslasphere entirely. I'm now driving a non-Tesla EV, and I'll never look back. I'm also letting my government representatives know that they should support a common EV charging standard and keep Tesla so-called "self-driving" shit off public roads.
These OTA updates are not OK for large machinery and endanger not only the Tesla driver but everyone else on the road.
Nothing warned that the behavior of Autopilot might change.
It can "tentatively assist" you in killing yourself and potentially others, but we already know that from the disaster in Mountain View, where this exact same thing happened.
The problem with behavior changes in autopilot is that the car needs to react the same way every time.
If you're going around a corner on a road which you've driven on for years without issue, and the car all of a sudden does something unexpected, the panic and over-correction reaction that everyone instinctively has tends to cause more accidents than just holding your desired course does.
It doesn't matter if it's a skid, or your Tesla trying to accelerate you into a wall at 70 miles an hour. If the car does something the driver is not expecting it to do for any reason (from fatally defective software to ice on the ground) the driver performs sub-optimally as a result.
And since this is Tesla we're talking about (who bakes in features like your car not starting before you upgrade its software, which is another "feature" about these cars that's going to get someone killed sooner or later), I'm willing to bet that the car doesn't warn you that this might occur. It just works fine for 6 months, then an update gets pushed, the car tries to put you into a wall, and you cause an accident because you're trying to stop the car from killing you and didn't check your blind spot before that.
That is a UI and UX failure of the highest magnitude, and it is completely unacceptable, no matter how well it otherwise tends to work.
That's not true. There are many situations where the AP could make a sudden turn and kill you long before you had time to react, even if you were paying attention. And in fact, the situation in the video seems to be not so far from that.
You're begging the question. The autopilot has full control authority over the steering wheel, so if it fails, nothing constrains it from making a sudden turn. If it is "just lane following" then it hasn't failed (yet).
You can test this yourself in a Tesla by engaging cruise control, then hitting a turn signal. This would normally initiate an automatic lane change, but keep your hands tightly on the wheel as if you wanted to stay in the lane you're in. The wheel will attempt to turn, fail because you're preventing it from turning, and the AP disables.
Sure, if you react fast enough.
Let's not lose the plot here. The original claim was:
"The AP can only kill you if you're distracted/asleep at the wheel and not paying attention."
And that's not true. It can kill you, quite simply, by producing the wrong control input in a situation where the available recovery time is less than your reaction time.
If you doubt this, then I challenge you to drive a car where the autopilot is under my control. (It will have to be remote control because no fucking way am I willing to be in the car with you when we do this experiment.)
The "attention checking" has a delay of a few seconds on it before it'll start warning you to grip the wheel. If your hands were on the wheel, and you were paying attention, there's no reaction time, since the erroneous control input would be overridden by your hands keeping you in your lane.
Put simply, if your grip on the wheel was loose enough to where the computer-generated move could physically move the wheel, you weren't in control of the vehicle and would be generating warnings.
>And if you need to be paying attention 100% of the time anyway, what's the point of having an autopilot?
The same reason cruise control is a thing on every modern car. It cuts down on fatigue, which in turn should improve safety and comfort. You're still required to be in control of your speed, but the vehicle manages keeping you at the set speed.
Here's autopilot following a lane straight into a concrete barrier.
You can't assume that autopilot won't screw up lane following and swerve into a large obstacle. In those situations it's not as simple as making sure the lane ahead of you is clear, you might only have a split second warning between Autopilot going into "casual murder mode" and a collision.
Granted, a ton of active research is going on trying to prevent these sorts of problems, but it definitely isn't a solved problem.
Basically, ask yourself this: how robust is this software? How are you measuring that?
I'd love for my VW to get updates OTA without taking it to the dealer. But, I don't want to receive those updates without knowledge that the update has been tested sufficiently (and given I don't trust the vendor, I'd like NTSB or similar government body to do this on my behalf).
You mean, similar to what the FAA might do if it didn't allow manufacturer self-certification instead, right?
Also, wouldn't NHTSA, which does safety regulation and standards for autos, be the natural agency, rather than NTSB, which does accident investigation?
The timing of this is precarious (given the recent 737 Max allegations)
I don't see why a more rigorous release process would slow progress down at all. All the iterations that lead to progress should be done on test vehicles, not customer vehicles.
"Move fast and break things" is a development model that should only be applied to low-importance, low-risk systems. Most software development work occurs on such systems, and I think that narrows the perspective of the software development community as a whole.
Disclaimer: I own a Model S
Of course it's also looking like they should have grounded the fleet after the first crash, given the history of the aircraft prior to its last flight.
It almost seems like a right-to-repair issue, where manufacturers are going out of their way to avoid documenting how their products actually work for fear of losing control over them or of disclosing details that a competitor or patent holder might find useful.
There definitely needs to be a strong regulatory response to that kind of behavior on the manufacturer's part when it comes to safety-critical updates, or even updates that might conceivably impact safety of life. Which, in the airplane business, is basically all of them.
It's quite clear in this case that Tesla doesn't have an organisational structure that fulfills the requirements for functional safety.
It will if they run it through regression tests, which Tesla doesn't seem to have the discipline to do.
This trope that humans are bad drivers is, in general, crap. Humans are very good drivers. The US has 7.3 deaths per billion km driven. This means if you drive 50km a day, every day, you are (essentially) guaranteed to die... after 7500 YEARS. You have less than a 1 in 10 million chance of dying on any given trip you take. That is NOT risky, and is NOT dangerous.
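The math checks out (the average trip length below is my own assumption):

```python
# Checking the parent's numbers.
deaths_per_billion_km = 7.3
km_per_day = 50
km_per_year = km_per_day * 365                          # 18,250 km
years_to_expected_death = 1e9 / deaths_per_billion_km / km_per_year
print(f"~{years_to_expected_death:,.0f} years")         # ≈ 7,500 years

# Per-trip risk, assuming an average trip of ~13 km (assumption):
trip_km = 13
p_death_per_trip = deaths_per_billion_km * trip_km / 1e9
print(f"1 in {1 / p_death_per_trip:,.0f} per trip")     # ≈ 1 in 10 million
```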
This is a meaningless measurement. Segment those km by where they occur. Most of them are either:
1. "cruise-control compatible" kilometers—e.g. freeway straightaways—where you're surrounded by cars but they're all going the same speed in a straight line, and all you need to do to be safe is to go the same speed in a straight line as well.
2. "closed-course" kilometers—e.g. most rural roads, and most suburban roads any time other than rush-hour—where the road may curve and have intersections and such, but at any given time of day, it's probable that there aren't any other cars (or even pedestrians) on the road for you to collide with, no matter how bad your driving is. (Think "roads you'd have a teenager practice driving on." These roads are good for practice because there are effectively no accidents to get into.)
3. (a smaller segment, but still relevant because of the number of freight kilometers driven here:) "empty" kilometers—this has all the properties of segment 2, but also, the road is at grade, and there's nothing abutting the road (i.e. the road isn't a street), so even if you veer off the road, you're unlikely to hit anything. (Examples: the Nevada desert; Saskatchewan; most farmland.)
People point out that the safety-per-km stats for airplanes are a nonsense measurement, because what little crashing that airplanes do tends to mostly occur during the first and last 50km of the planned flight-path—so short flights are just as dangerous as long flights.
Well, the same goes for car accidents. Subtract all the "trivial" driving that humans and AIs can both do by just doing... nothing much at all, with no obstacles/hazards to evaluate, let alone react to. The kilometers that are left (freeway merging; city driving; suburban streets during rush-hour; parking in parking lots) are a lot more crash-y, and are the place where both human and AI competence is questionable.
I try to explain this to friends who are far too optimistic on self-crashing car technology. Self-driving cars (SDC) ultimately trade one class of problems that result in death (human-attention deficit) for another class of growing issues (sensor malfunctions/incapabilities, software defects).
Ultimately SDC deaths end up as bugs/features on some random devteam's backlog, and I have no desire to have a JIRA ticket named in my honor.
In my opinion, by the time all the money and effort is spent making SDC's capable of successfully driving from point-A to point-B in the near infinite possible conditions they could encounter, it would have been 50x cheaper to simply build a fully modernized high-speed rail network over existing highways and roads.
One big difference between those classes is what you can do after an accident. With both, you can investigate why it happened and then make recommended changes to prevent it or reduce its chances in the future.
But with driver attention problems, such as drunk driving or driving while texting, it is easy for those recommendations to be ignored. We've been telling people not to drive drunk, not to text while driving, and so on for probably a century in the case of drunk driving, and for as long as texting has existed for texting...and people still do them frequently.
With a software defect, it is a lot easier to make sure that the fix actually gets deployed. Make it part of the annual registration renewal for cars that all safety updates have been applied to their self-driving systems.
Where do you get that number from?
Instead you can be the victim in a vehicular manslaughter case due to DUI/texting/etc.
As stated elsewhere on this thread, self-driving needs to be measurably better than a human. My state displays how many people have been killed thus far this year on the automatic traffic signs (used for amber alert, traffic info, etc as well). A 20% reduction in the 3000+ folks killed in 2018 would mean a whole lot for those saved.
Having commuting time available as what amounts to "free time" is an insane boost to daily life. A quick google says Americans spend 12.2 days per year in their cars. If 300 million people can save 12 days' time per year, you're freeing up ten million years of time every year.
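For anyone checking the arithmetic:

```python
# The parent's arithmetic, made explicit.
days_in_car_per_person_per_year = 12.2
people = 300e6
person_days = days_in_car_per_person_per_year * people
person_years = person_days / 365
print(f"~{person_years / 1e6:.0f} million person-years per year")   # ≈ 10 million
```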
I'm not advocating for throwing everyone in self driving cars untested and who cares how many people die, but if, on the journey to saving many millennia of time every year, a person is killed, why should the company be sued into oblivion? If we're going to sue everyone into oblivion whenever anything doesn't go quite right, why would any company ever try to take on difficult problems?
Huh? Have you never seen or heard of anyone falling, hitting their head, and causing a concussion and/or death?
I mean, just being caught up in a traffic accident with no bodily harm done to you can be a traumatic event.
So if we assume that serious injuries are 10x more common than fatalities, that raises the odds to one incident per 750 years, or 1 incident per 75 years of any accident at all. You're still talking about things that happen to people once or twice in their entire lives. Of course there will be outliers (I've been in 5 accidents myself, only 1 with injuries) but the odds of being in a crash don't seem to be worth the risk of trying to automate mousetraps... we should concentrate on replacing car-based transportation instead of trying to use magical black boxes to make it theoretically safer.
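To put those once-per-N-years figures in per-lifetime terms (keeping the assumed 10x/100x ratios, treating crashes as a Poisson process, and assuming roughly 60 years of driving at 50 km/day, which are my assumptions):

```python
import math

years_driving = 60
mean_years_between = {"fatality": 7500, "serious injury": 750, "any accident": 75}
for kind, mean_years in mean_years_between.items():
    p = 1 - math.exp(-years_driving / mean_years)   # P(at least one event)
    print(f"{kind}: ~{p * 100:.0f}% chance over a driving lifetime")
# ≈ 1% fatality, ≈ 8% serious injury, ≈ 55% any accident: roughly the
# "once or twice in an entire life" intuition for crashes.
```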
So far, neither are Teslas...And the point of this reddit thread is that any progress that Tesla does make on this front can be instantly reverted in a future update.
Humans are bad in general. At least self-driving cars will not be texting or using their phones while driving, or getting involved in road rage. This list is really long.
"Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults ages 15-44"
Also, 2.45% of deaths worldwide are from road accidents:
Progress to where it’s safer is going to be a killer and we the drivers on the road are unwilling guinea pigs to billionaires’ dreams/goals.
Granted, individual drivers are awful enough that it probably doesn't make that huge a difference in danger. But would you still feel the way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?
Teslas, on the other hand, apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers, and introducing bugs like the OP that are liable to get someone killed.
They are not the same, and comparing them only highlights the issues that Tesla has around their OTA update practice.
OTA updates only happen after confirmed by the users. Where did you hear that it happens without user intervention?
* Improved DOG MODE
* Improved SENTRY MODE
Which were the release notes for the update?
There's only so much you can learn from even the best release notes, period. The ever-so-common "bug fixes," for example, is so broad that it effectively means nothing at all. At best, it's telling the end user "this little update just changes some stuff hidden under the hood. You won't notice anything, so don't give it any thought."
You can't ever look at consent outside of the context of the best available alternative to agreeing to something.
I don't have a dog in this fight, but appeals to emotion in order to drive irrational thinking do not make for constructive debate.
In a perfect world, sure. Real world, you will never have an inherently emotional situation (road safety) where the only voices heard are those of completely detached individuals.
As humans, we have to figure out ways to connect with them, and empathize with what they're feeling. Simply dismissing their concerns as driven by emotion isn't a winning strategy.
The gp said they're a "Willing guinea pig".
The parent pointed out it isn't just their lives on the line, but others, potentially including their family.
That isn't irrational at all. Or an appeal to emotion.
We'll adapt, i.e. adjust our behavior to account for that new factor. Police, for example, have already learnt how to pretty safely stop a Tesla on Autopilot with a driver sleeping behind the wheel and not reacting to any signals (because of being dead drunk, for example).
We just accept that amateurs should be hauling around at high speeds in several thousand pounds of missile.
The most relevant question is whether Tesla AP is safer or less safe than typical amateur drivers per 1,000 vehicle miles driven. I don't know the answer to that question.
That doesn't protect me from being killed by a Tesla. I am pretty neutral on the topic but I am getting the feeling that they are in danger of pushing out half baked stuff like we tend to do in software. For most things like software this is OK but maybe not for things that are moving at high speeds.
Better yet, texting while driving increases the risk of an accident 23x.
Is that impression accurate?
Volvo's tech is last among the ones compared.
You could replay some test video frames and make sure the objects are correctly identified, but I suppose that's already what training is about...
If an issue like that resurfaces, does it mean that the original frames leading to the 2018 accident aren't part of the training (or at least frames from someone driving in this kind of scenario)?
Even if there was training based on the 2018 frames, that doesn't mean that you have verifiably fixed the problem. It's difficult to train a neural network selectively – every time you "train" the network with additional data, you are increasing the chance that you are also teaching it something you didn't intend which then can have a side-effect in some seemingly unrelated scenario.
You can see this in real life with image recognition networks. Teach them too much and they gradually become less effective at identifying anything.
This is why serious scientific training is needed to understand these complex systems when health and safety are on the line.
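A minimal sketch of the kind of per-scenario regression check that addresses this, assuming you keep a frozen test suite broken out by object class. The class names and accuracy numbers are hypothetical; this is not Tesla's process:

```python
# A frozen, class-stratified test suite evaluated before and after any
# retraining: aggregate accuracy can improve while a safety-critical
# class quietly regresses.
def check_regressions(before, after, tolerance=0.01):
    """Return classes whose accuracy dropped by more than `tolerance`."""
    return {cls: (before[cls], after[cls])
            for cls in before
            if after[cls] < before[cls] - tolerance}

# Per-class accuracies measured on the same held-out scenarios (made up):
before = {"pedestrian": 0.97, "road_barrier": 0.95, "lane_marking": 0.99}
after  = {"pedestrian": 0.98, "road_barrier": 0.81, "lane_marking": 0.99}

print(check_regressions(before, after))
# -> {'road_barrier': (0.95, 0.81)}   # this should block the release
```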
Seriously though, I wonder if that sort of physical test track will become popular. You would load your build onto an idle car, queue it up, and make sure that it didn't hit any of the silhouettes which spring up, unusual traffic and weather conditions, etc. They must already do that in some capacity, right?
Just say "the 101", it's shorter.
The entire area of safety and quality assurance for neural networks is still under active research, from multiple angles. For some examples of how chaotic neural networks can be, look up 'adversarial examples'.
It seems to me there are 3 layers (plus a baseline level 0):
- 0: Just Adaptive Cruise Control (uses radar to adjust speed up to a max). Human still steers.
- 1: Adaptive Cruise Control (uses radar to adjust speed up to a max) + Autosteer (cameras watch the lane markers and follow them). This is referred to as "Autopilot".
NOTE: This is where the accidents happen. The car isn't "driving towards a barrier"; it's following the lane markers and hits an error state. This is also NOT self-driving.
- 2: "Nav on Autopilot". This is an additional function you turn on where the car has more intelligent (use this word loosely) capabilities on highways. The car will still do everything on level 1 combined with lane changes (using cameras to detect objects and trajectories differentiating cars from trucks from pedestrians from bikes from motorcycles etc). It will still follow lane lines, but with a lot of additional information (is there an object? am I merging? am I exiting? etc)
- 3: "Full Self Driving". This is an additional package that doesn't exist to the public. Internally I'm sure they're testing the functionality but this is using all of the sensors and algorithms and likely neural networks to decide what to do. A cool point though is that all Tesla's are likely running this code in "shadow mode" where data can be collected and assumptions can be tested without endangering any actual drivers. (see here for some cool data on this: https://electrek.co/2019/03/05/tesla-autopilot-detects-stop-...). "Hey, I think the car, if full self driving SHOULD take action X. *compare to what the driver actually does and log the data" over BILLIONS of miles
So when a Tesla hits the barrier or gets in an accident, we're actually running "#1" and people start freaking out. But when we get to the capabilities of #3 a lot of the "object permanence and continuity" stuff starts to come into play.
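Here's the rough sketch of shadow mode I mentioned above. This is just my mental model of how such a comparison could work, not Tesla's actual implementation; all names are made up:

```python
from dataclasses import dataclass

# "Shadow mode": the candidate planner sees live sensor data but never
# touches the actuators; its proposal is compared against what the human
# actually did, and disagreements are logged for later analysis.
@dataclass
class Frame:
    sensors: dict        # whatever features the planner consumes
    driver_action: str   # what the human actually did: "brake", "steer_left", ...

def shadow_evaluate(candidate_policy, drive_log):
    disagreements = []
    for frame in drive_log:
        proposed = candidate_policy(frame.sensors)   # never sent to actuators
        if proposed != frame.driver_action:
            disagreements.append((frame, proposed))
    rate = len(disagreements) / max(len(drive_log), 1)
    return rate, disagreements   # in concept, aggregated fleet-wide

# A disagreement where the human braked for a barrier but the candidate
# policy proposed "continue" is exactly the event you'd want logged.
```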
- I drive a Model 3 every day on Autopilot 75% of the time
- I'm only 75% convinced full self-driving is achievable under the current Tesla software suite, but I bought the package anyway.
In theory, ANNs could have an output layer that passes data from one frame to another frame to assist things. But there's no real programming to "hardcode" something like object permanence into an ANN. You pretty much throw a bunch of data into the system and hope for the best.
Considering the path-planning requirements, I would be absolutely shocked if Autopilot wasn't building history models and estimated paths for objects around the vehicle (other cars, etc.).
I expect what happened was that they trained their NNs for improved detection in one area but unknowingly reduced it in another. Perhaps now it can detect tricycles 99% of the time, but road-barrier detection went down to only 30%. Having worked with NNs, it's very common to see gains in one domain come at the cost of reduced performance in another.
Barriers are a pretty big part of driving on roads and highways, and the only way detection would have been unknowingly reduced is if they just weren't testing the NN against data that includes them.