It may come down to a deeper analysis that looks at both root and proximate causes. If Tesla's design was faulty from a safety standpoint and substantially contributed to the accident, I would imagine they will bear some portion of the responsibility.
I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule, because they can always push the risk - sometimes unknown - to the end user).
I think an industry-wide analysis of driver-assist systems by NHTSA is likely warranted, to form a framework for liability and for how capabilities are communicated/marketed to consumers. Think EPA emissions stickers for autos, but for driver assist.
> I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule, because they can always push the risk - sometimes unknown - to the end user).
If the driver is not paying attention, that is the driver's fault. If you can't provide your attention, you shouldn't be in the driver's seat. And if you kill people, do not pass go, go directly to jail. Such are the consequences of being responsible for thousands of pounds of mobility moving at speed.
>If the driver is not paying attention, that is the driver's fault.
No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
I deliberately put both root and proximate causes in my first post because there can be multiple contributors to the accident, and each may bear some responsibility.
My main issue with the OP was that it lays the groundwork for absolving the manufacturer of any responsibility whatsoever, which I don't feel is appropriate. Society generally requires professionals (engineers, lawyers, doctors) who work in areas of public safety/public good to bear some responsibility. I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
> No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
Not necessarily. Automatic Emergency Braking systems from all automakers have serious constraints [1] and are still sold with broad legal disclaimers. They will attempt to stop you when an object is in the vehicle's path, but importantly, there are no guarantees (and this is made clear in each vehicle's owner's manual). At the time of this accident, Tesla had no safety system for detecting red lights or stop signs, and therefore no safety system to fail. (When Autopilot is active, the vehicle has a "dead man's switch" that commands you to torque the steering wheel and provide other input to confirm you're still alert and attentive, roughly every 10-30 seconds depending on a variety of factors: speed, road curvature, visibility, path-planning confidence, etc.)
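To make that "dead man's switch" idea concrete, here's a minimal sketch of how an adaptive attention-check interval could be computed. The factor weights and the 10-30 second bounds are illustrative assumptions on my part, not Tesla's actual logic:

```python
# Illustrative sketch only: how often to demand steering-wheel torque.
# Weights and clamping bounds are made up; the real policy is not public.
def attention_check_interval(speed_kph: float, curvature: float,
                             visibility: float, plan_confidence: float) -> float:
    """Return seconds between driver-attention prompts (clamped to 10-30 s)."""
    interval = 30.0                            # most relaxed case: slow, straight, clear
    interval -= 0.1 * speed_kph                # faster driving -> more frequent prompts
    interval -= 10.0 * curvature               # curvier road -> more frequent prompts
    interval -= 10.0 * (1 - visibility)        # poor visibility (0..1) -> more frequent
    interval -= 10.0 * (1 - plan_confidence)   # shaky path planning -> more frequent
    return max(10.0, min(30.0, interval))

# e.g. highway cruise in clear weather -> prompt roughly every 16-17 seconds
print(attention_check_interval(speed_kph=110, curvature=0.1,
                               visibility=0.9, plan_confidence=0.95))
```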
The idea that these safety systems are foolproof and that the manufacturer, rather than the driver, is liable is bizarre to say the least, but as I mention in another comment, it speaks to the broad lack of understanding and personal responsibility that has permeated society. People get in and drive, and assume it's someone else's problem if an adverse event occurs.
> I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
I agree with this position, and that engineering in general should be held to a high standard. But if manufacturers have built to what the industry has determined is standard practice, and regulators have signed off (NHTSA has made no attempt to instruct Tesla to disable Autopilot fleet-wide with an OTA software update), I'm unsure there's much more to do when the human pushes beyond the system's limits.
> To understand the strengths and weaknesses of these systems and how they differ, we piloted a Cadillac CT6, a Subaru Impreza, a Tesla Model S, and a Toyota Camry through four tests at FT Techno of America's Fowlerville, Michigan, proving ground. The balloon car is built like a bounce house but with the radar reflectivity of a real car, along with a five-figure price and a Volkswagen wrapper. For the tests with a moving target, a heavy-duty pickup tows the balloon car on 42-foot rails, which allow it to slide forward after impact.
> The car companies don't hide the fact that today's AEB systems have blind spots. It's all there in the owner's manuals, typically covered by both an all-encompassing legal disclaimer and explicit examples of why the systems might fail to intervene. For instance, the Camry's AEB system may not work when you're driving on a hill. It might not spot vehicles with high ground clearance or those with low rear ends. It may not work if a wiper blade blocks the camera. Toyota says the system could also fail if the vehicle is wobbling, whatever that means. It may not function when the sun shines directly on the vehicle ahead or into the camera mounted near the rearview mirror.
> There's truth in these legal warnings. AEB isn't intended to address low-visibility conditions or a car that suddenly swerves into your path. These systems do their best work preventing the kind of crashes that are easily avoided by an attentive driver.
The 2014 Mazda 3 Astina's Radar Cruise Control had clear warnings that it would not detect motorcycle riders. Yet not once did it actually fail to detect one.
I guess they just weren't confident enough, and I hadn't yet been behind the required number of motorcycles to see it miss one.
I was very impressed by that system: very simple, yet solid. By always using radar cruise control you largely remove the need to ever get into AEB territory; AEB never activated for me outside of when I deliberately tested it with cardboard boxes.
(I want to know how my stuff works - two out of three times it came to a full stop from 30 km/h just in time; the third time it put 10 cm into them. I concluded this is extremely good technology that should be made mandatory everywhere.)
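For a rough sense of why 30 km/h is "just in time" territory, here's a back-of-the-envelope stopping-distance calculation; the deceleration and latency figures are generic assumptions, not measured values for that car:

```python
# Back-of-the-envelope stopping distance for a 30 km/h cardboard-box test.
# All numbers are illustrative assumptions, not Mazda specifications.
v = 30 / 3.6              # 30 km/h in m/s (~8.3 m/s)
decel = 0.8 * 9.81        # assume the AEB pulls ~0.8 g on dry tarmac
latency = 0.3             # assume ~0.3 s of detection + brake actuation delay

braking_distance = v**2 / (2 * decel)            # ~4.4 m once full braking begins
total_distance = v * latency + braking_distance  # ~6.9 m from detection to standstill

print(f"braking: {braking_distance:.1f} m, total: {total_distance:.1f} m")
```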
I don't disagree with you and you make some good points. But I don't think it's fair to assume I'm claiming the safety systems are foolproof. I've worked on safety-critical systems in automotive, healthcare, and aerospace so I know better.
I think it may ultimately come down to the way Tesla's marketing is perceived. If it's found that a reasonable person would infer from Tesla's marketing that their system had capabilities it did not, that gets into ethical/legal trouble and speaks to what I meant about working "appropriately". But I think we agree on this based on your previous comment about creating a regulatory framework for communication.
As far as personal responsibility goes, I also agree on that point. But in an immensely complex and interconnected society, this has limits, because humans don't have the bandwidth to make risk-informed decisions about everything. As I mentioned in a separate comment, certain professions (namely engineering, law, medicine) have obligations/responsibilities to public safety (hence the term "profession," which comes from professing an oath of ethics). I think it's a bad precedent to push the responsibility away from these professions. The talking point about personal responsibility seems to only go one way, and it (unsurprisingly) is the direction that allows corporations to maximize profits while also absolving themselves of risk.
If you drive over a bridge and it collapses because of a bad design, I don't think this gets chalked up to "welp, you needed to take personal responsibility for deciding on that route". If you buy flooring for your home that makes your kids sick, I wouldn't blame you for not doing due diligence on the manufacturing or chemical process. In both cases, the end-user has a reasonable expectation of safety and the professional who designed it would usually be held responsible. Maybe, as you said, the AV world needs some more oversight and regulation to communicate those risks.