
This is why driving AI is 'all or nothing' for me.

Assisted systems will lead to drivers paying less attention as the systems get better.

The figures quoted by Tesla seem impressive, but you have to assume the majority of drivers are still paying attention all the time. As auto-pilots get better you'll see drivers paying attention less, and then the accident rate will go up rather than down, at least for a while until the bugs are ironed out.

Note that this could have happened with a non-electric car just as easily; it's a human-computer hybrid issue related to having to pay attention to some instrument for a long time without anything interesting happening. The longer the interval during which you don't need to act, the bigger the chance that when you do need to act you will not be in time.




This is what I've now said 3 or so times in various autopilot threads. It has to be an all or nothing thing. Part of responsible engineering is engineering out the many and varied ways that humans can fuck it all up. Look at how UX works in software. Good engineering eliminates users being able to do the wrong thing as much as possible.

You don't design a feature that invites misuse and then use instructions to try to prevent that misuse. That's irresponsible, bad engineering.

The hierarchy of hazard control [1] in fact puts administrative controls second from the bottom, just above personal protective equipment. Elimination, substitution and engineering controls all rank above it.

Guards on trucks to stop cars going under are an engineering control, and also perhaps a substitution - you go from decapitation to driving into a wall instead. That's better than no guards and just expecting drivers to be alert - an administrative control - but it's worse than elimination, which is what you need if you provide a system where the driver is encouraged to be inattentive.
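
To make the ranking concrete, here's a toy sketch in Python - my own illustration of the ordering and where the examples above land, not anything out of a standard:

    # Toy illustration of the hierarchy of hazard control, most to least effective.
    # The ordering is from the hierarchy; the examples in the comments are mine.
    HIERARCHY = [
        "elimination",     # remove the hazard entirely
        "substitution",    # swap for a lesser hazard (decapitation -> hitting a "wall")
        "engineering",     # physical controls (underride guards on trucks)
        "administrative",  # procedures and warnings ("keep your hands on the wheel")
        "ppe",             # personal protective equipment
    ]

    def rank(control: str) -> int:
        """Lower number = higher in the hierarchy = more reliable."""
        return HIERARCHY.index(control)

    # Relying on driver alertness is an administrative control: second from the bottom.
    assert rank("administrative") > rank("engineering")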

User alertness is a very fucking difficult problem to solve and an extremely unreliable hazard control. Never rely on it, ever. That's what they're doing here, and it was only a matter of time before this happened. It's irresponsible engineering.

edit: My source for the above: I work in rail. We battle with driver inattention constantly because like autopilot, you don't steer but you do have to be in control. I could write novels on the battles we've gone through just to keep drivers paying attention.

[1]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_control


> I could write novels on the battles we've gone through just to keep drivers paying attention.

Please do, and link them here. I'd be very interested in reading about your battles and I figure many others would too. This is where the cutting edge is today and likely will be for years to come so your experience is extremely valuable and has wide applicability.


Here's a comment from 5 months ago about one example - not me personally but it's one of the major case studies in the AU rail industry - that covers exactly this topic. It also sort of morphs into general discussion about alertness tools in physical design.

https://news.ycombinator.com/item?id=11017034


I understand your point that it has to be all-or-nothing, but if you were asked to redesign the UX to make autopilot (as it currently stands) safer, how would you change it?


Philip Greenspun's impressions after trying out the Model X for a weekend:

"You need to keep your hands on the steering wheel at all times during autosteering, yet not crank the wheel hard enough to generate what the car thinks is an actual steering input (thereby disconnecting autosteer). I found this to be about the same amount of effort as simply driving."

I thought that was an interesting observation.

http://blogs.harvard.edu/philg/2016/06/27/smug-rich-bastard-...


It's pretty well established (e.g. [1]) that humans have a lot of trouble paying attention when they don't need to be actively engaged most of the time, and that they also have trouble taking back control. In a consumer driving context, I have zero doubt that, as systems like these develop, people will start watching videos and reading absent draconian monitoring systems to ensure they keep their eyes on the road. I'm not sure how we get past that "uncanny valley."

[1] http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u7/...


It's worth pointing out that Prof. Missy Cummings, who authored the paper, is a former F/A-18 pilot who specializes in human-machine interaction research.

One option is for the Tesla autopilot to give an indication when it approaches "low confidence" areas, without disengaging, so the driver is not startled if they have to take back manual control.
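
Roughly what I have in mind, as a sketch - the confidence values, thresholds and state names are invented, not anything Tesla actually exposes:

    # Hypothetical graded status that warns the driver well before disengaging.
    def autopilot_status(confidence: float) -> str:
        if confidence > 0.9:
            return "NOMINAL"            # quiet, normal autosteer
        if confidence > 0.7:
            return "LOW_CONFIDENCE"     # chime + dashboard warning, still engaged
        if confidence > 0.5:
            return "HANDS_ON_REQUIRED"  # demand hands on wheel, reduce speed
        return "PREPARE_TO_TAKE_OVER"   # countdown before handing back control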


The Mercedes E-Class Drive Pilot seems to do this to some extent, for example by escalating from requiring the driver to keep one hand on the steering wheel to requiring both hands.

https://youtu.be/Pq5LDi3-ChU?t=40m5s


Yep. I saw her speak at an event a couple years ago. She's also done a fair bit of research around humans splitting their attention across multiple vehicles--mostly drones.

I agree there can be handoffs, but they mostly need to be in the vein of: "I'm slowing down and turning over control in 30 seconds because there's something I don't understand coming up."
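
Something in the spirit of this sketch (the 30-second figure and the hooks are invented, purely illustrative):

    import time

    WARNING_WINDOW_S = 30  # announce the handoff well before giving up control

    def graceful_handoff(driver_has_acknowledged, reduce_speed, stop_safely):
        """Hypothetical handoff: warn, slow down, and only stop if the driver never responds."""
        announced_at = time.monotonic()
        while time.monotonic() - announced_at < WARNING_WINDOW_S:
            reduce_speed()                  # shed speed while the system still has control
            if driver_has_acknowledged():   # e.g. hands on wheel and eyes on road
                return "driver_in_control"
            time.sleep(0.1)
        stop_safely()                       # never just drop control without warning
        return "minimum_risk_stop"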


I'm not sure you understand what some of the words you wrote mean.


I suspect an alternative input device.


In the aviation community, there are major concerns about pilots becoming over-reliant on cockpit automation instead of flying the jet.

Asiana 214 [0] is a classic example of crashing a perfectly good airliner into a seawall on landing.

In the Boeing 777, one example of the (auto)pilot interface showing safety-critical information is the stall speed indication on the cockpit display [1], warning the pilot if they are approaching that stall speed.

Hopefully Tesla will optimize the autopilot interface to minimize driver inattention, without becoming annoying.

[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214

[1] http://imgur.com/bGsFTCG


In aviation, autopilots became successful because the human-machine handoff latency that can be tolerated is relatively large --- despite how fast planes fly, the separation between them and other objects is large and there is usually still time (seconds) to react when the autopilot decides it can't do the job ( https://www.youtube.com/watch?v=8XxEFFX586k )

On the road, where relative separation is much less (and there's even been talk of how self-driving cars can reduce following distances significantly, which just scares me more), the driver might not have even a second to react when he/she needs to take over from the autopilot.
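
To put rough numbers on that (back-of-envelope only; the figures are mine and just illustrative):

    # Time budget for a takeover at highway speed.
    v = 30.0                   # m/s, roughly 108 km/h
    gap_normal  = 2.0 * v      # a 2-second following distance -> 60 m
    gap_reduced = 0.5 * v      # the kind of reduced gap proposed for platooning -> 15 m

    # If the vehicle ahead stops almost instantly (crash, debris, crossing truck),
    # the time before impact is roughly gap / speed:
    print(gap_normal / v)      # 2.0 s - already tight for a startled human
    print(gap_reduced / v)     # 0.5 s - no realistic chance to take over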


In other words:

The driver might have needed to react before the auto-pilot even realized it needed to react (let alone before the driver could humanly respond).

There are two things that I take away from this.

* The auto-pilot should probably just keep going (or bring the car to as controlled a stop as possible); see the rough sketch below.

* It should also collect more data to hopefully warn the driver more in advance.
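
A rough sketch of what I mean by the first point - the names, thresholds and hooks are hypothetical:

    # Hypothetical fallback: prefer continuing or a controlled stop over a sudden
    # handoff, and log the situation so future warnings can come earlier.
    def choose_fallback(confidence, driver_ready, can_stop_in_lane, log):
        log.append(confidence)           # data for warning drivers further in advance
        if driver_ready:
            return "hand_over"
        if confidence > 0.5:
            return "continue_slower"     # degrade gracefully instead of bailing out
        if can_stop_in_lane:
            return "controlled_stop"
        return "pull_over_and_stop"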


My conclusion is that "autopilot" is insufficient in this context, and that a fully automatic AI driver is needed.


My understanding of the Asiana crash was that the autopilot would have landed the plane fine, and that it was the humans turning it off that caused the problem.

Your point is still valid, but perhaps we are approaching a time when over-reliance is better than all but the best human pilots (Sully, perhaps).


The Asiana pilots were not able to fly a coupled (automatic) landing due to the ILS glideslope being out of service.

The pilots were under the misguided impression that the aircraft would automatically spool up the engines if the aircraft became too slow. This was a safety feature that didn't engage for an obscure technical reason. Even with a manual visual approach the pilot can still use the autothrust for landing.

A more rigorously trained pilot (eg. Capt. Sully) would have aborted the approach and performed an immediate go-around if he got below the glidepath (or too slow) below a certain altitude (eg. 400ft Above Ground Level).

The rules requiring a go-around (or missed approach) apply for a fully automated approach and landing, just as much as manually flown approach and landing.


The Air France 447 accident is a better-fitting example of the pitfalls that can arise in complex "humans-with-automation" systems.

There, automation lowered both the standard of situational awareness and fundamental stick-and-rudder skills. Then, when a quirky corner case happened, the pilots did all manner of wrong things: so much so that they amplified a condition from "mostly harmless" to fatal for all.

Vanity Fair has a nice piece on this accident that's easy to dig up. Good read.


I heard that the Airbus's unusual steering setup (separate, uncoupled sidesticks) noticeably added to the problems: one pilot pulled up as hard as he could while the other thought he was pushing down, making the confusion that much worse.


That's true, but it was well known (and trained on), so I'd categorize that domain as "how the machine responds when your hands are on the controls," which is nearly a synonym for the "stick and rudder skills" category I cited.

Sure, to nearly every pilot that behavior is wacky, but it shouldn't have been a surprise for more than an instant to pilots who were "operating as designed."

It seems there's no free lunch: when skills atrophy as a natural response to helpful automation, other skills have to advance to compensate if the goal of an ever-improving error (accident) rate is to be achieved.



Wow, that is some very damning criticism: "The distinction is that a Level 3 [Tesla] autonomous system still relinquishes the controls back to the driver in the event of extreme conditions the computer can no longer manage, which Victor (and Volvo) finds extremely dangerous."


As someone very excited about this space, I unfortunately have to agree that Tesla is playing fast and loose with autonomous safety (and more importantly, public opinion!) to be first to market. You can't be half in and half out, which is what these "assist" features are.

They're adding new features to hardware that is inadequate or improperly configured for what they're asking the car to do, and waving away all liability for stupid people's actions with disclaimers (always be ready to take over).

Whether that's right or wrong is really subjective, especially when you take natural selection into account.

Tesla's (only!) radar sensor is located at the bottom of the bumper, if I'm not mistaken. Compare this with Google's, which is located in the arguably correct position, the roof. Also compare other manufacturers' solutions that are utilizing 2-3 radar sensors, as well as sonar.

https://forums.teslamotors.com/forum/forums/model-s-will-be-...


Precisely. This half-way "self-driving, but if something goes wrong it's the human's fault" approach is a terrible idea, because the more reliable it gets (while still being a "driver assist" rather than committing to actually being a fully autonomous control system), the more likely the human is to not be paying attention when the shit hits the fan.


Apart from the software side of it, I wonder how they handle issues like sensor malfunction.

If your eyes aren't at their best, you know to go to a doctor and not to drive in the meantime. Will a car with autopilot refuse to start, or refuse to engage autopilot, if the camera/sensor/radar has an issue?
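
e.g. something like this hypothetical check (I have no idea what Tesla actually does; the sensor names are made up):

    # Hypothetical pre-engagement check: refuse to enable autopilot if any
    # required sensor reports a fault, just like you wouldn't drive with bad eyes.
    REQUIRED_SENSORS = ["front_radar", "front_camera", "ultrasonics"]

    def autopilot_allowed(sensor_status: dict) -> bool:
        return all(sensor_status.get(name) == "ok" for name in REQUIRED_SENSORS)

    # autopilot_allowed({"front_radar": "ok", "front_camera": "degraded",
    #                    "ultrasonics": "ok"})  -> False: stay in manual mode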

So yes, you are right: it's either full AI or nothing.


Toyota is taking the opposite approach to Tesla: they are introducing automated features as a backstop against human error, rather than a substitute for human attention. Your Toyota (or Lexus) won't drive itself, but it might slam on the brakes or swerve to avoid an obstacle you couldn't see.
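
That "backstop" style is roughly the inverse of Tesla's: the human drives and the computer only intervenes at the last moment. A crude sketch, with invented thresholds and hooks:

    # Crude sketch of a guardian-style backstop: the human drives, the system
    # acts only when a collision looks imminent.
    def guardian_check(distance_m, closing_speed_mps, brake, swerve, lane_clear):
        if closing_speed_mps <= 0:
            return "no_action"                   # not closing on anything
        time_to_collision = distance_m / closing_speed_mps
        if time_to_collision < 1.0 and lane_clear():
            swerve()                             # too late to brake alone
            return "intervened"
        if time_to_collision < 2.5:
            brake()                              # automatic emergency braking
            return "intervened"
        return "no_action"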

http://www.autonews.com/article/20160321/OEM11/160329991/toy...



