This is what I've now said three or so times in various autopilot threads: it has to be an all-or-nothing thing. Part of responsible engineering is engineering out the many and varied ways that humans can fuck it all up. Look at how UX works in software: good engineering eliminates the user's ability to do the wrong thing as much as possible.
You don't design a feature that invites misuse and then use instructions to try to prevent that misuse. That's irresponsible, bad engineering.
The hierarchy of hazard control [1] in fact puts administrative controls second from the bottom, just above personal protective equipment. Elimination, substitution and engineering controls all sit above it.
Guards on the trucks to stop cars going under them are an engineering control, and perhaps also a substitution: you go from decapitation to driving into a wall instead. It's better than no guards and just expecting drivers to be alert (that's an administrative control), but it's worse than elimination, which is what you need if you provide a system that encourages the driver to be inattentive.
User alertness is a very fucking difficult problem to solve and an extremely unreliable hazard control. Never rely on it, ever. That's what they're doing here, and it was only a matter of time before this happened. It's irresponsible engineering.
edit: My source for the above: I work in rail. We battle with driver inattention constantly because, like autopilot, the driver doesn't steer but does have to stay in control. I could write novels on the battles we've gone through just to keep drivers paying attention.
> I could write novels on the battles we've gone through just to keep drivers paying attention.
Please do, and link them here. I'd be very interested in reading about your battles, and I figure many others would too. This is where the cutting edge is today, and likely will be for years to come, so your experience is extremely valuable and has wide applicability.
Here's a comment from 5 months ago about one example - not me personally, but it's one of the major case studies in the AU rail industry - that covers exactly this topic. It also sort of morphs into a general discussion about alertness tools in physical design.
I understand your point that it has to be all-or-nothing, but if you were asked to redesign the UX to make autopilot (as it currently stands) safer, how would you change it?
[1]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_control