Hacker News

I lent out my copy, but one particularly relevant section compares results from the aviation industry's use of pure-autopilot and HUD-augmented landings. Pilots maintain their flying skills and get better overall results by staying in the loop with an augmented vision system. Good pilots often avoid full autopilot entirely, or are forced to abort only seconds from touchdown, resulting in poorer landings. Bad pilots over-rely on autopilot and are therefore under-trained to deal with exceptional situations, resulting in incidents like the infamous Air France crash.

https://youtu.be/4nDdqGUMdAY?t=183




Cool. Tesla should review this book and talk. Paraphrasing a few quotes from your link ...

"The most difficult challenging tech problem is not full autonomy, but rather the interaction between automated systems and humans"

The subtitle of the book is also notable: "Robotics and the Myths of Autonomy"

The speaker almost seems to be arguing that full autonomy is impossible. If we start up a system, then its behavior has still been kicked off and wrapped by humans. Interesting perspective.

I'd like to read the book myself after listening to a bit of this talk. Thanks!


It's not so much that full autonomy is impossible. We "fire and forget" many types of systems with no possibility of human intervention--at least not in a timely manner. What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.


> What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.

I think that depends quite a bit on how much is automated. Traditional cruise control is an automation, and I'm sure there have been accidents that resulted from it, but it automates so little that you really can't become inattentive (unless you fall asleep). Adaptive cruise control automates more, and I would guess it plays a part in a higher percentage of automation-related accidents than traditional cruise control does, because people trust the system even when it is misbehaving. Tesla's autopilot is much farther along this spectrum.


> We "fire and forget" many type of systems with no possibility of human intervention--at least not in a timely manner

Practically speaking, I agree with what you're saying. For autonomous systems as we think of them today, humans can set them and walk away.

Philosophically, if I throw a ball into space, is its continued movement autonomous? Kicking off a computer program is just throwing a more complex ball. We don't have true randomness in computers.

> What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.

I 100% agree with this. I think that's a fundamental flaw in Tesla's current plan to achieve autonomous vehicles. It's also a huge liability risk, for which they are not insured, to be dishing out 15,000 systems like this every month. At least the other consumer-available self-driving car systems require hands on the wheel. Tesla doesn't even seem to do that.


> We don't have true randomness in computers.

Yes we do.


Computers are deterministic. Maybe you can convince yourself that something in the outside world isn't, and generate numbers based off of that, like radioactive decay. Even that may be deterministic. I think it's a question for quantum physicists or philosophers.

Anyway, practically speaking, we tend to be happy with random numbers that other people would have a really difficult time predicting or reproducing.


I suppose you could declare that true randomness doesn't exist, but that doesn't make computers special...

Anyway, most of the transistors in a chip are barely kept in a range where they're deterministic. Just push some out of that range, and you get randomness that's as true as anything else.
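For everyday purposes that hardware noise is already exposed to programs. A minimal Python sketch, assuming a modern OS whose entropy pool mixes physical noise sources (interrupt timing, and on many CPUs an on-die hardware RNG):

```python
import secrets

# The OS entropy pool gathers physical noise -- essentially the
# "transistors pushed out of their deterministic range" idea, made
# available to programs through a CSPRNG.
token = secrets.token_bytes(16)  # 16 bytes of unpredictable data
print(token.hex())               # 32 hex characters, different every run
```

This doesn't settle the philosophical question, but it's exactly the "hard to copy" standard from the comment above.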


Interesting. I didn't realize we had already attempted so many different automated systems. In the talk you linked, he points out how, historically, systems are never fully automated because humans are not comfortable with that. He seems to predict that cars will not be the first systems we allow to become fully autonomous, and admits he could be wrong.

It's surprising to me that he feels texting should be permissible while driving with a driver-assist system [1]. His whole argument here doesn't add up for me. I'll have to check out his book.

It sounds like he supports driver-assist mechanisms, though maybe without Tesla's beta release schedule of OTA updates. He seems unconvinced that full autonomy will ever be achievable.

I think I still believe in Google's plan, despite the shadow he casts on full autonomy. Zero deaths and full disclosure seems better than selective reports and the possibility for more fatal accidents.

Given there are now a decent number of investors on both sides, I imagine the debate will continue as to which is the best path: testing driver-assist in consumer vehicles, or testing full autonomy with company-controlled cars.

I'd like to hear what Mindell thinks would be the best hand-off from computer to driver in a driver-assist vehicle such as Tesla's. He answers a question about that generally here [2], but he's mainly talking about airplanes, where there's a chance for a smoother transition. How does one slow down time such that a hand-off is possible in a vehicle during an adverse event the car cannot handle itself? I feel that is the most pertinent question today.

It seems like the car needs to see the adverse event coming, but since it is not prepared to handle the event itself, it is unlikely to be able to give the driver much extra time. Almost by definition of an adverse event, it would have been better if the driver had been paying complete attention the whole time.
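Rough numbers make the point. The 2-second takeover delay below is an illustrative assumption, not a figure from the talk:

```python
# Back-of-envelope: how much road a hand-off consumes at highway speed.
speed_mph = 70
speed_mps = speed_mph * 0.44704   # convert mph to metres per second
takeover_s = 2.0                  # assumed time for the driver to re-engage
distance_m = speed_mps * takeover_s
print(round(distance_m, 1))       # roughly 62.6 m travelled before the driver is back in control
```

Even a brief takeover delay eats most of a football field, which is why "the human will grab the wheel in time" is a shaky plan.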

[1] https://youtu.be/4nDdqGUMdAY?t=41m32s

[2] https://youtu.be/4nDdqGUMdAY?t=35m37s


When he talks of humans being richly involved in the environment, not sleeping in the trunk, I imagine an interface with several levels.

Example: you are driving down a highway with full monitoring by overhead cameras. No obstacles are going to jump out at you, and the position of the car is mapped precisely in real time. The human uses a joystick to select a lane position, and the brakes are automatically applied if avoidance is needed.

Now the car leaves the highway, onto an isolated road. Before leaving the known environment, the interface switches. Now the lane position becomes a trajectory projection, and the driver must observe the road. The task of safely following the trajectory is automated. The interface adds random noise to the trajectory projection, to test that the driver is paying attention. Failure to demonstrate control results in the car pulling off to a rest stop.

Finally the car goes downtown. A well known road, but with many pedestrians and bicycles. The interface is based around identifying obstacles, with a joystick for max safe speed <-> stop. Trajectory is automated, with the driver choosing turns. The car scans for pedestrians, but if the driver does not acknowledge them, the car slows since the driver is not paying attention. If the car misses a pedestrian, the driver is aware and has been conditioned and trained to react in time.

All of these interfaces would require driver attention (minimal on the highway), but would significantly reduce fatigue associated with driving.
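The noise-injection attention test could be sketched like this (all names and thresholds here are invented for illustration, not from the comment):

```python
import random

def driver_is_attentive(injected_offset, driver_correction, tolerance=0.2):
    """An attentive driver steers out the offset the display injected."""
    return abs(injected_offset + driver_correction) <= tolerance

# Inject a small random offset into the trajectory projection...
offset = random.uniform(-1.0, 1.0)
# ...an attentive driver counter-steers by roughly -offset:
print(driver_is_attentive(offset, -0.95 * offset))  # True

# A driver who ignores the projection fails the check, and the car
# would pull off to a rest stop.
print(driver_is_attentive(offset, 0.0))
```

The key design choice is that the check is continuous and implicit, rather than a periodic "touch the wheel" nag.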


> All of these interfaces would require driver attention (minimal on the highway), but would significantly reduce fatigue associated with driving.

I see now, that makes sense. Thanks!

I did a little more hunting on Mindell, and it seems his words are easily misinterpreted [1] [2].

It sounds like he is almost anti-Google in that article, though I imagine he's really trying to argue for something like you describe.

I think Google's on the right track towards such a system. Right now they're constraining themselves to having sensors inside the vehicle to see how far they can safely travel. If that does not work out, presumably they can invest in developing a living road like you describe to aid in tracking cars' movements.

If Google does start some sort of taxi network, presumably passengers will be able to enter destinations, or perhaps request to change lanes. Under Mindell's definition, perhaps that is not completely autonomous. I think it is full autonomy in the way Google thinks of it. Or maybe Mindell would call that autonomy and just admit he was wrong and that cars are the first human transporters that can be fully autonomous.

Thanks for your thoughts. I like your ideas about adding noise. You've obviously put some thought into this. I haven't seen others mention that elsewhere.

[1] https://www.reddit.com/r/tech/comments/3whcj0/googles_vision...

[2] http://www.cnet.com/news/googles-vision-of-self-driving-cars...



