Hacker News

A common phrase in aircraft cockpits nowadays is "What the heck is it doing now?" as pilots have migrated from actually flying the plane to simply being glorified systems managers.

While planes have become so, so, so much safer because of all this automation, pilots' uncertainty regarding autopilot functioning is a major concern nowadays, and the cause of several accidents.

There are very interesting HCI challenges around properly communicating to the pilot/driver "what the heck it is doing" and clearly communicating just how much control the human has or doesn't have at any given point.

This "announcement" certainly doesn't inspire confidence that they have thought this through deeply enough (they probably have, but it should be communicated as if they had). As a huge Tesla fan, I can't help but feel I need to hold my breath now and hope nothing terrible happens because of this, leading to even more regulations that set us back on the path to full car automation.




99% Invisible did a pretty good episode on that: http://99percentinvisible.org/episode/children-of-the-magent...


Yes! - <3 Roman Mars' voice.


I like 99%I's content but Mars' voice is my least favourite part of it. Sounds a bit smug and is not always clearly audible in the car. Much prefer Freakonomics.


There's an old joke about how the new planes have a pilot, for the sake of safety, and a dog, to bite the pilot if he tries to touch the controls.

It sounds like it's time to introduce the dog?


More like a dog to gently nudge the pilot if he is relying on a system that isn't working. Most issues I've read about happened when the systems either weren't engaged properly or automatically disengaged because their operating parameters weren't met anymore, suddenly forcing the pilots to take control.

Example: Air France 447, where (presumably) the airspeed sensors were blocked by ice, which led the autopilot to disengage. There is also the theory that the pilots then made some of their mistakes based on the belief that the avionics would stop them from bringing the plane into an unsafe state, and didn't realize that the system wasn't able to do so with missing information. (I hope I'm presenting this correctly, but that's what I remember reading afterwards.)

There is also the training aspect: if a system takes care of something 99.9% of the time, the pilot is less experienced in the 0.1% where it doesn't. There is a reason the safest airline pilots often fly much smaller aircraft as a hobby and build some instinct for manual flying there.


I think the main problem with aircraft is lack of redundancy. They still count on the ability of letting the pilots control the aircraft (e.g. 0.1% of the time, as you mentioned); I think they should just try to go all the way as much as possible. For example, if they had 4 redundant pitot tubes in different places and other redundant methods to measure altitude and airspeed, the AP would never have disengaged. For a large plane those sensors are basically free, and it's very easy to tell whether they're working correctly by cross-verification/calibration (i.e. maintenance/reliability is easy) -- they just don't go all the way because they're still on the paradigm that "well, if a couple of pitot tubes and/or other systems fail, we can just hand it to the pilot".


>> I think the main problem with aircraft is lack of redundancy. They still count on the ability of letting the pilots control the aircraft (e.g. 0.1% of the time, as you mentioned); I think they should just try to go all the way as much as possible.

I'm not an airline pilot, but I have the impression almost everyone grossly underestimates how often a human pilot is still required to safely fly a commercial airliner. Yes, the autopilot may be able to handle 99% of flights without intervention, or even 99.9%, but simple math shows that still leaves hundreds if not thousands of flights every day where the pilot needs to take action, and that any individual pilot will get into a situation the autopilot can't handle probably once or twice a year. Likely a lot more often.

Large parts of the world have weather conditions, diversions, or airports with spotty or unreliable ground systems; in those areas the need for a human pilot will be several times higher than once every hundred flights. There are ample examples of plane crashes that were caused by automated systems rather than pilot error (I don't remember the exact location and date, but e.g. not very long ago a plane crashed because the ILS beacon at the airport was malfunctioning). In other cases the level of automation enabled poorly trained pilots to fly the airplane (case in point: the Asiana crash at SFO), which IMO is not 'pilot error' because these people should not have been flying the plane in the first place.

All this makes me think the solution to improve airliner safety even further is not more automation, but better pilot training.

As for autonomous driving, beyond AI-assisted highway cruising I don't believe in the concept at all, and my bet is that in 25 years we'll have realized we wasted most of that time trying to build fully autonomous vehicles. Limiting the possibilities for driver error seems like a much better investment (e.g. adaptive cruise control, automatic emergency braking, etc.).


Modern aircraft have excellent redundancy. Commercial aircraft already typically have 3 pitot tubes and 3 static ports. They need unobstructed airflow and are placed accordingly. In this kind of aircraft they do cross-verify each other as well, but if they are all reading different values, there's not much to be done. Conditions on AF447 were such that they all experienced some amount of icing until the aircraft descended enough. I don't see how having one more pitot tube is the proper response.


Well, clearly altitude/velocity readings are critical to autopilot function, so some kind of redundancy should be put in place. Not necessarily more pitot tubes specifically, but some kind of solution -- heating/de-icing the tube inlets, placing them better, etc. -- guaranteeing in some way that the chance of all measurements being unavailable is astronomically low.
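The cross-verification idea can be sketched as a simple median vote across redundant sensors; the readings and the 15-knot disagreement threshold here are invented for illustration, and real air-data voting logic is far more involved:

```python
# Hedged sketch: median voting over redundant airspeed sensors.
# A median (rather than a mean) outvotes any minority of faulty probes;
# the spread check flags the case where the probes disagree too much to trust.
import statistics

def fused_airspeed(readings_kts, max_spread_kts=15.0):
    """Return (fused value, trusted?) for a set of redundant pitot readings."""
    value = statistics.median(readings_kts)
    spread = max(readings_kts) - min(readings_kts)
    return value, spread <= max_spread_kts

print(fused_airspeed([271.0, 269.5, 268.0]))  # healthy probes: trusted
print(fused_airspeed([271.0, 120.0, 118.0]))  # two iced probes: median is wrong,
                                              # but the spread check flags it
```

The second case mirrors the AF447 scenario: when a majority of probes fail the same way, voting alone gives a confidently wrong answer, which is why the disagreement flag matters as much as the fused value.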


"For example, if they had 4 redundant pitot tubes in different places and other redundant methods to measure altitude and airspeed, the AP would never have disengaged."

There's a lot to be said for that. The pilots didn't know their altitude and airspeed either. They thought they were overspeeding when they were stalling.

Some military aircraft, mostly the stealth ones with terrible aerodynamic stability, need that sensor data to stay in the air at all. If they lose the sensor data or the stability control system, the only option is to eject. So they tend to go in for more sensor redundancy. There's certainly no reason that large transport aircraft can't have more sources of basic attitude/altitude/airspeed info.


Airbus A330s (the type used by AF447) already have 3 pitot tubes, two on one side of the airplane, and one on the other.


Yes, the reconstruction was that they stalled the plane deliberately because they thought the autopilot would prevent the stall.

They needed to descend to gain airspeed and lift. But they were pulling up and literally falling out of the sky without realizing it.


That sounds like a UI issue. Like, it shouldn't be too hard to put a blinkenlight on the control that indicates it's under autopilot control (or the absence indicates it isn't).


The actual report states that there was a repeated audio cue that this was the case.


There was also a stall warning that shut off when the pilot pulled up, because it no longer had reliable info; the pilot assumed pulling up was somehow stopping the stall.


Under which circumstance does pulling up stop a stall, exactly?


It doesn't, but the alarm shut off when the pilot pulled up, causing confusion. He continued to increase the angle of attack the entire time.


Wasn't there also the problem of the other pilot pushing down in panic, the sidesticks being uncoupled so that nobody noticed, and the fact that the plane averaged the two inputs?


No need for theories; we know very well what happened on that flight, and it's basically the copilots who crashed a perfectly airworthy plane.


There is no "basically" in an aircraft accident. If you wish to make such a statement, please explain what actually happened on this flight (icing of the pitot tubes, deactivation of the normal flight control law for the fly-by-wire system, flight at night over the ocean and above a storm, a stall outside the specifications of the stall warning system). It's a matter of respect for the crew.


There's an old joke about a pilot lost in a deep fog who shouts out the window "Where am I?", hears in response "You're in an airplane!", and immediately realizes that he is over the Microsoft tech support building, because no one else could be so accurate and so useless at the same time.

That's what you're doing here. Saying "the copilots crashed it" teaches us nothing; we need to know why they crashed it, what cues they misunderstood and what skills they lacked, so we can keep it from happening again.


There is more to being a pilot than the stick-and-rudder of flying the plane. Judgment calls such as whether to attempt a landing under less-than-ideal conditions are far more important.


Direct control vs. abstract control vs. intelligent interaction has been a long-running topic in HCI. For the most part it has been evolving in funny ways (e.g. skeuomorphism used metaphors from direct control to execute abstract control).

I think the design language will evolve slowly with the users at a speed roughly related to the adoption curve.

It may tell users everything that the car is deciding to do now, and as confidence in the system increases with adoption, it will tell them less and less.

That is why the more advanced self-driving-car researchers are working on the harder problems of getting abstract and intelligent interaction with a car working. The real market problem is telling your car "take me to the store", rather than just getting it to drive down a straight lane.


Cue the James Bond movie subplot where an evil villain takes control of his Tesla/whatever autonomous car, and he has to use a combination of hacker shell commands and sheer brute strength to save both himself and the Bond girl he was seducing while the car chauffeured him back to his luxury hotel suite...


In the newer Ghost in the Shell series, an antagonist writes a virus that takes control of all the cars in the city, causing a massive physical/real-world denial of service.


https://www.youtube.com/watch?v=qKAME9fAA-4

If the bad guys had only hacked Bond in this scene, it could have been much different!


I recommend the Scorpion episode S01E20, "Crossroads", where a similar situation unfolds at the end.


There are videos on YouTube of Bosch fitting their automated driving technology into a Model S [1] and performing demonstration runs on a private road. The software shown in the video displays quite a lot of information about what the sensors consider relevant from the car's surroundings.

[1] https://youtu.be/KwD1hjlbhwU


She seems barely 49% confident. I understand she doesn't want to risk a demo failure in a moving vehicle with 2 guests, but it's a bit disturbing to see her hands constantly ready to grab the wheel.


This is so clearly a fake demonstration. Watch the way the "automation" follows the probe car. It's so clearly leading it on and the woman is just going through an act.


In another video a different pair of accompanying passengers record a similar sequence of events, so the demonstration itself is probably scripted and rehearsed. But I wouldn't rule out that there is an actual piece of software working under these (controlled) conditions.


>While planes have become so, so, so much safer because of all this automation, pilots' uncertainty regarding autopilot functioning is a major concern nowadays, and the cause of several accidents.

If the % of accidents caused by pilot interference with a working system > the % of accidents caused by system malfunction where the pilot fails to intervene, people will still be against taking control away from pilots. Even when it causes more accidents...

There's something about humans trusting humans more than machines that I don't fully understand. Systems can make mistakes, but the number of human mistakes is often so much greater that it's absurd humans are trusted at all, and yet people will side with the human over the machine.

Humans will always want human oversight - even when that oversight does more harm than good once automation reaches a certain threshold...

Special note: I'm not familiar with avionics or the data on pilot interference with the system vs. failure of the pilot to intervene. So maybe this example doesn't hold very well for aviation...


A human can make judgement calls in unexpected situations. Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).

Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over with a car following it too closely... you have a better understanding that there's likely to be an accident ahead (I've seen this happen several times). A computer most likely wouldn't predict that.

Unless ALL cars are automated, it is a must that a driver be able to take over quickly.


Strongly disagree.

> A human can make judgement calls in unexpected situations.

A properly programmed machine can behave smarter and faster, and it also knows its limitations, so it can account for them. "Judgement calls" aren't needed because the machine always exercises good judgement.

> Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).

Very much depending on programming and sensors, but from some point on, I'll always be betting on programming and sensors.

> Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over with a car following it too closely... you have a better understanding that there's likely to be an accident ahead (I've seen this happen several times). A computer most likely wouldn't predict that.

The computer will track all vehicles, their velocities and past observed movement; perceive road conditions that you can't, due to the limitations of Mark I Eyeballs; and push all that data through an inference system that is capable of staying rational all the time, without being affected by "stress" or "surprise". Solutions will be computed faster than the blink of an eye. The machine will see the phase space of the road-cars system and be able to navigate it to safety.

> Unless ALL cars are automated, it is a must that a driver be able to take over quickly.

The driver should never need to take over. A dog should be installed, trained to bite the driver if he tries to touch the controls. Better yet, remove the controls.

The machine knows its limitations. The right way to avoid an accident is not to perform stunts based on intuition, it's to keep the entire system (of cars and the road) in a stable and known state, and navigate that state to safety.

People seem to get machines really wrong. Machines today are limited in their creativity. But they are orders of magnitude better than humans in getting things right. On the road, we need more of the latter than the former.


The thing that humans can do that machines cannot is react to something that they never expected and haven't been programmed for. A human can generalise or invent a solution to a new situation that a computer simply will not be able to do.

You are correct that in the vast majority of situations a computer will outperform a human. I've avoided accidents by luck more than judgement, guessing that the car will be able to do what I'm asking (whilst the autopilot would know at all times and react in milliseconds).

All the world needs to accept is that at some point someone will die because of an autopilot error, and that's OK because it's a net lower number of deaths than the same number of humans driving.


I have yet to understand how a self-driving car deals with a cop directing traffic. Or even a construction worker holding traffic back from a reversing backhoe. But maybe I am not aware of the genius of the technology yet.


From my limited and slightly hopeful understanding, they don't try to understand much more than "something is moving toward our planned path and that is no good". As long as the car avoids running over the cop or the worker, it's OK. Every decision above that is optional.


They will (eventually have to) recognize hand signals from construction workers and police officers. With the current generation, they won't run over the person, but you should probably take over and follow their instructions.


Our best-of-the-best image recognition tech can't tell the difference between a zebra and a sofa in zebra print. I think "reading the hand signals of a policeman/workman on the road" is far, far beyond what we can currently do. Or rather -- I'm sure we can make a solution that will work right now, in perfect conditions. I'm sure it will fail in the dark/rain/snow, or if the worker is making small gestures near his hip rather than moving his hands high up in the air. There's just so much uncertainty in driving that I think for cars to be absolutely 100% automated, where you can genuinely go to sleep while the car drives, the roads would need to be 100% automated as well, with beacons everywhere. That will probably happen, but it's definitely not "3 years away".


Zebra vs. zebra-patterned sofa is the kind of problem designed to be hard for image-processing algorithms. From a practical point of view, however, you could just require that policemen / construction workers wear specific patterns on their uniforms, or even special IR-reflective buttons/threads. That would solve maybe 95% of cases, and by the time self-driving cars become the norm, we'll have figured out at least some special-purpose algorithms for this that work in the general case.


They will have devices that communicate directly with the car's systems.


It's called image processing.


As a Tesla enthusiast, would you buy a Tesla that wasn't capable of breaking the speed limit? Perhaps one that couldn't ever under any circumstances move faster than 70mph? That doesn't require fancy tech. It could be done today in an instant.

That's where I see the real sticking point for automation. Driving isn't about getting from A to B, nor is safety the top priority. If it were, there wouldn't be such a thing as a V-8 (or whatever the electric equivalent is). I find it very ironic that a performance car like the Tesla might promote "the path to full car automation". You think the gun debate is tricky? Try telling people they cannot do 51 in a 50 anymore.


> Driving isn't about getting from A to B, nor is safety the top priority.

Yeah, this is a huge problem with drivers today. 99% of driving is absolutely about getting from A to B in as safe a way as possible; anything else is reckless endangerment of the lives of others. But people grow up dreaming about fast cars and freedom and adrenaline, then they get their licenses and can't confront the boring reality. That's the very reason we need self-driving cars on public roads. Driving for fun needs to be separated out and put somewhere else.


So my friends and I riding motorcycles through the mountains on a sunny afternoon are reckless endangerment?

How about sightseeing buses? Or bicyclists riding purely to stay in shape? Or people learning how to drive? Or cops patrolling neighborhoods? Or ice cream vans? There are lots of perfectly reasonable uses of roads that have little to do with A-to-B. The elimination of "fun" in the name of safety is a very uphill battle.


I said 99%, not 100%. Most of the driving is commuting. But fair enough, there is some wiggle room there; still:

> So my friends and I riding motorcycles through the mountains on a sunny afternoon are reckless endangerment?

If you're breaking the law then yes, you're reckless, period. If not, then enjoy it all you want.

> How about sightseeing buses?

They're driving A-B-C-D in a sightseeing-optimal way within the limits of law and safety.

> Or bicyclists riding purely to stay in shape?

They aren't doing it next to cars, are they? There are dedicated areas for riding bikes for sports.

> Or cops patrolling neighborhoods?

A-to-B driving, with lots of As and Bs. Also, there are two of them, one is doing the driving, the other is doing the patrolling.

> Or ice cream vans?

A-to-B, with lots of As and Bs.

Public roads are not the place for having fun. You can enjoy driving on them however much you want as long as it doesn't affect the safety of others.


Many cyclists do ride on public roads for fitness reasons. Beginners might ride in local parks or something, but I'm pretty sure most cycling hours are spent on the roads, since people who ride more put in significantly more hours and tend to ride roads more. There's off-road cycling, but in many cases it involves cycling on unpaved public roads or other public areas not reserved for bikes.

I myself spend more hours on public roads on a bicycle than driving a car.


In my personal and not so humble opinion, you shouldn't ride on public roads for fitness reasons. If you need to get from A to B, like from home to work, and you want to do it on a bike, fine. If you want to add some fitness routine to it, it's fine too (your lungs would probably disagree) - but you have to focus primarily on staying safe and not endangering others.

But pure sports? There are parks for that.

Don't get me wrong, I'm all for reclaiming cities for pedestrian and bike travel and creating a maximally energy-efficient public transport system (ironically, the best idea would probably be self-driving, publicly owned electric cars forming a PRT network). But in the current situation one has to stay pragmatic.


There are parks? Where, exactly? Riding anywhere but on the road is dangerous. Most reasonably fit road cyclists can average 18-19 mph, and possibly much faster on slight declines.

We should be promoting cycling, not making it even more difficult. Self-driving cars can make cycling significantly safer for everyone. The most dangerous thing on the roads right now is the human-controlled car.


If I did "pure sports" at parks, many parents with children would not be so happy about that.

I love to go on long bike rides -- 4 or 10 hours long. Parks are not an option for that, while backroads are awesome. Once out of town, I usually meet a car only once in a while. And while in town and suburbia, I guess I qualify as an A-to-B commuter. Even out of town, sometimes it's sort of an A-to-B commute, because I want to go to some sightseeing spot or a fancy cafe in the woods or something. I just happen to cycle instead of drive. Or sometimes I dispatch my family in a car and go by bicycle myself. It's still A-to-B, isn't it?

Saying what is and what is not a "proper road use" is a very slippery slope. Is it OK to drive to get to a grocery store? Is it OK to drive to some spot in the middle of nowhere for sightseeing? Is it OK to drive for few hours to bbq in an unseen place? Is it OK to drive to go fishing in a nicer lake instead of the one nearby? Is it OK to drive to see friends if you can video call them instead?


You want pragmatism, but what you're giving cyclists is the exact opposite of a pragmatic situation. Cyclists are being extremely pragmatic when they cycle on regular roads...


In most places a bicycle is given specific rights to use the road.

Generally such laws include things like:

1) May use the turn lanes when turning.

2) Must ride to a specific side of the road unless: You can keep with the speed of traffic, there is not enough safe distance for a car to pass you, or it is not safe to ride on the side of the road. (These rules are generally true of any vehicle that drives slower than general traffic.)

3) Protections to the bicyclist. Either in the form of "vulnerable traffic" laws, or specific bike protection laws.

4) There are even places with laws preventing bicyclists from using the sidewalk.

5) About the only place a bicyclist isn't allowed to ride is on high speed traffic areas like a freeway/highway/etc.

6) Protections for the bicyclist against negligent drivers such as mandating safe driving distances, when to pass, and protecting against specific activities that may endanger the bicyclist.

I do agree that a bike path is certainly much safer (and perhaps more scenic) for fitness, but many people are interested in traveling to a specific location. (Ex, the beach or a park. Riding along the coastline. Etc.)

The main problem with biking on the road isn't that you are doing something wrong... but that drivers do not respect you. Tailing too close, not giving you the right of way when you have it, etc.


Breaking the law != recklessness. You are probably thinking of negligence per se, something less than recklessness.

And I wonder why you assume that a bunch of motorcycles riding through the mountains would be breaking any laws. I know a group of lesbian Harley riders who never go faster than 50 who would take issue with any assumption that riding a bike suggests illegality. For them it is about being seen and showing pride. I did a ride for burn victims a few years back. The kid riding on my bike was all smiles even though I doubt we broke 30 the entire way. He just liked the wind, a bit like my dog when she hangs her head out the window. Riding for fun does not mean riding for speed.


I didn't mean to imply that driving a motorcycle or a bike == illegal, or == driving for speed. As for breaking the law - traffic laws exist to both protect people from deadly accidents and ensure some predictability on the road, the latter of which is needed because humans suck at dealing with surprises, especially over a longer period of time.

What I mean is - going on a motorcycle trip on a scenic route? Sure, it's fine (a self-driving motorbike would probably be even safer though :)). The problem really starts when some people put fun in front of safety and e.g. start speeding.

TL;DR: have as much fun as you want, but not at the expense of safety. Want to have additional fun coming from doing dangerous things? Go do it somewhere where you endanger only yourself (and those who consciously decided to participate in such activities). Racetrack, private roads, whatever.


Lol, a self-driving motorcycle isn't anywhere near possible atm. Driving a car is to riding a motorcycle as walking is to ballet. There are all manner of strange physics (Google "countersteering"). Add to this the vast differences in threat profiles, the regular need for evasive maneuvers, serious judgment calls re: emergency braking in corners, the weight-shifting of the rider, the potential "bail out" decision, the lowside vs. highside decision (laydowns)... I haven't heard even a passing joke about an autodrive two-wheeler. Even automatic transmissions are near fantasy beyond small 2-gear scooters.

Any sudden, unexpected movement by an autodrive motorcycle would probably see the rider thrown free, or at least cause a weight shift great enough to down the entire package.


I meant it as a little joke, but since you bring it up -- no, I actually think an autodrive motorcycle is not only feasible, it's not that much harder than a self-driving car. Why?

Because all those "ballet-like" things are basic feedback-control problems, the kind you learn about in Control Theory 101 in college. You can pretty much convert the problem of steering a self-driving motorcycle into the problem of a self-driving car by adding a module that accepts car-like inputs and translates them into dynamic balance control. We've solved the basics with segways, which are smart-high-schooler-level electronics projects.

Also, generally, whatever you can do on your vehicle, a machine can do using the same control inputs, only better. Manual gear shift included.
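The "basic feedback control" claim can be illustrated with a toy balance loop: a PD controller stabilizing a linearized inverted pendulum, the same idea a segway uses. The gains, pendulum length, and dynamics here are invented for the sketch and bear no relation to a real motorcycle model:

```python
# Hedged sketch: PD feedback stabilizing a linearized inverted pendulum.
# theta is the lean angle (rad), omega its rate; all constants are toy values.
def simulate(kp=40.0, kd=8.0, dt=0.01, steps=500):
    theta, omega = 0.3, 0.0      # start with a 0.3 rad lean, at rest
    g_over_l = 9.81 / 1.0        # linearized gravity term for a 1 m pendulum
    for _ in range(steps):
        u = -kp * theta - kd * omega   # PD control torque opposing the lean
        alpha = g_over_l * theta + u   # linearized dynamics: gravity + control
        omega += alpha * dt            # semi-implicit Euler integration
        theta += omega * dt
    return theta

print(f"final lean angle: {simulate():.2e} rad")  # driven toward zero
```

The unstable gravity term is overwhelmed by the feedback torque, so the lean angle decays to zero; the hard part in a real vehicle is estimating the state and handling the nonlinear regime, not the loop itself.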


I take it you haven't ridden a motorcycle. I can ride one, at speed and around corners, without touching the controls. Remember that the rider may represent 30+% of the vehicle mass. Moving your weight around on the bike has as much control over direction as the handlebars. Unless you are going to strap the rider into a seat (i.e. make it a car), attach him to a hydraulic arm, or install a 200 lb gyro, no machine can take directional control.

This all comes to a head at the transition point of countersteering, where the controls reverse around 20ish mph. The rider's balance controls whether turning the handlebars right means right or left. If the rider isn't in sync with the autodrive, everyone is in the ditch.


I'm genuinely curious, as an occasional motorcycle enthusiast myself, whether it's physically possible for the machine to counter the movements of the rider, who represents a large portion of the vehicle mass. My gut feeling, based on the segway, is that yes, it's possible. But I'd love to hear any analyses to the negative.


It's probably possible to some extent. It's like having an anxious pillion rider on your bike. I remember when I first took my little sister along on my motorcycle: she'd lean the wrong way in the curves because she was scared the bike might fall over or something. It made controlling the bike harder, but not impossible.

(That experience taught me to always tell people to lean into the curve if they've never sat on a motorcycle before)


My guess is, you could put a 1 m vertical pole behind the driver with some mass attached to the top end of it and motors controlling it at the bottom. That should give you enough angular control to compensate for whatever the rider is doing.

Given the close interaction between the rider and the bike, I'm not sure how comfortable such a vehicle would be. It may turn out to be nausea-inducing.


I wonder if a "feet first" design would be one solution. The lower centre of gravity should be easier to stabilise. I don't know whether the riding style of an FF makes the same use of the rider shifting their weight as a regular bike does.


> > So my friends and I riding motorcycles through the mountains on a sunny afternoon are reckless endangerment?
>
> If you're breaking the law then yes, you're reckless, period. If not, then enjoy it all you want.

The speed limit (and various other laws) are very much all or nothing. Riddle me this, which of the following is safer to do on the same road with a speed limit of 50:

a) Driving a brand-new Porsche with brand-new tyres on a sunny day, doing 55

b) Driving a car from the 70s with tyres that are almost worn out (but still in the legal range), at night, in heavy rain, doing 48

now consider that a) is illegal and b) is legal.

Breaking a law should not be a binary matter; making it a 0/1 choice makes the world worse for everyone, because to be completely safe the restriction has to account for the worst case, which means the vast majority will be artificially limited and feel that the law is needlessly overreaching. This is not a good situation to be in.


We can barely handle a binary law. Humans don't have enough cognitive power to have fuzzy laws. If cars in both a) and b) were self-driving, we could consider flexible traffic laws, because the cars would be accurately aware exactly how dangerous the situation is. Humans are not smart enough.


His point was that you can't just say "they're reckless if they're breaking the law", and you certainly can't say "they're only reckless if they're breaking the law". There is a lot of nuance to it, and in some situations breaking the law is even required to prevent a catastrophe, and it's reckless not to.


Fair enough.


It's a temporary problem though. Speed limits are set to try and keep people safe, by enforcing a maximum speed. Once computers drive the cars, the speed limit can be cranked up because reaction times improve drastically and distracted drivers are nonexistent. I expect to see lots of changes in driving laws once automation takes over for exactly this reason.


"Once computers drive the cars, the speed limit can be cranked up because reaction time goes up drastically and distracted drivers are nonexistent."

That doesn't mesh with areas where robots have already taken over. The robots moving freight around shipping terminals don't move any faster than humans driving trucks. The robots I see delivering drugs in hospitals don't move faster than the nurses.

Speed limits aren't just about the cars. Noise and pedestrian safety are big factors. There is no technology yet that can allow a robot to see around blind corners. They may have a quarter-second faster reaction time, but that doesn't mean they could be trusted to move much faster towards a crosswalk that might see a bike cross at any second.
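The braking-distance arithmetic backs this up: with stopping distance d = v*t_react + v^2/(2a), a faster reaction only shaves the first term, while the v^2 braking term caps how much faster the robot can safely go. A quick sketch, with all figures assumed (8 m/s^2 braking, 1.5 s human vs 0.25 s robot reaction):

```python
def stopping_distance(v, t_react, decel):
    """Total stopping distance (m): reaction travel plus braking travel."""
    return v * t_react + v * v / (2 * decel)

# 50 km/h ~ 13.9 m/s, 80 km/h ~ 22.2 m/s (all figures assumed)
human_50 = stopping_distance(13.9, 1.5, 8.0)   # ~32.9 m
robot_50 = stopping_distance(13.9, 0.25, 8.0)  # ~15.6 m
robot_80 = stopping_distance(22.2, 0.25, 8.0)  # ~36.4 m
```

So even with a six-times-faster reaction, the robot doing 80 km/h past a blind corner already needs more room to stop than the human doing 50.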


The hospital robots seem like the closest analogue for now, as it's automation being held back by needing to coexist with inferior-performing humans. If it were only robots in the hallway, it'd likely go a lot faster, no? Just like if it were only computers driving the cars, following distances could be greatly reduced because they can react faster.


If it were only robots in the hallways, it wouldn't be a hospital. They did try giving the machines their own pathways. That's what pneumatic tubes were: a fast lane for robotic delivery. Frankly, for a hospital, pneumatic tubes would seem the far more elegant solution.

I guess it boils down to perspective. The hospital robot is in no hurry. His life is no better or worse if he is faster or slower. But a human inside a robot-driven car does care. He doesn't see the overall traffic efficiencies, and even if he did he wouldn't take much notice. All he cares about is getting to the destination asap. So it is a different speed decision than the robot.


The hospital I'm employed at utilizes both pneumatic tubes and robots. Robots roam the basement and employee only areas shuttling around food carts, linen carts, and trash. The pneumatic tube system reaches everywhere in the hospital and sends drugs, lab specimens, and basically anything else small enough to fit in the container.


I knew of a hospital when I was a kid that had raceways mounted just below the ceilings. It was like little toy trains bringing paperwork and drugs to each room. But if you watched them long enough, four-legged critters also used the raceways.


While the robot can't see around blind corners in the manner a person could, visually processing it, we could require aircraft-style transponders on all road vehicles, which would then allow your robotic car approaching a blind corner to know if there's anyone on the other side. It would also allow unambiguous communication as to how fast each vehicle is travelling, and its intentions - think in terms of the brake and indicator lights also being part of the transponder signal.


So you want to fit all humans with radio transponders? That's going to be fun to enforce.


It's more likely to be cranked down though, because that's more efficient and people might be less frustrated with slow driving if they can kill time on their phones during the drive.


The improvements in traffic flow gained through full automation may even offset the decreased speed limits.


Or full automation might increase the number of vehicles and exacerbate the problem. (On the assumption that the vastly reduced cost of fielding driveless vehicles could result in many more commercial vehicles on the road.)


But automation can also increase vehicle density: "Stop and go" traffic is way more efficient when it's not groggy humans doing the stopping and going.


I'm not sure about that. Keeping the cars closer together would suggest more abrupt acceleration and braking, so as to avoid space opening up as the car ahead accelerates from a stop. Given these accelerations will not be initiated by the driver, they will come as a surprise and thus be more jarring. I imagine that might result in autodrives that accelerate/brake more slowly, leaving greater distances between cars in stop-and-go environs. And I cannot see autodrives leaving much less room between cars while in steady motion or stopped. Humans are pretty good at tailgating.


At least in theory with proper communication channels between vehicles, vehicles could alter their speed just enough to make space for merging traffic or to allow a vehicle to turn. In that way nobody ever actually stops or starts, they just become slightly slower or slightly quicker, which should be much less jarring.


I think the key is that computer controlled vehicles could synchronize their stopping and starting. You wouldn't have to wait for the few feet moved by the car in front to bubble down the line as drivers notice the extra space. Instead, all stopped cars could coordinate to advance simultaneously.


You wouldn't even have to explicitly sync them; it would take just a few milliseconds for a self-driving car to notice another car is beginning to accelerate. Machines can keep precise control to a level simply unavailable to and unperceivable by humans.
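As a toy illustration of the gain from synchronized starts (all numbers assumed: 7 m of queue per car, 2 m/s^2 acceleration, 1.2 s human start lag vs 50 ms machine lag), here's the time for each car in a stopped queue to reach the stop line:

```python
def clear_time(index, start_delay, gap=7.0, accel=2.0):
    """Seconds until car `index` (0 = first in queue) reaches the stop line.

    Each car starts `start_delay` seconds after the one ahead of it, then
    accelerates uniformly over its `index * gap` metres of queued distance.
    """
    distance = index * gap
    return index * start_delay + (2 * distance / accel) ** 0.5

human = clear_time(9, 1.2)    # 10th car behind human-style start lag
synced = clear_time(9, 0.05)  # same car, near-simultaneous starts
```

With these assumed numbers the tenth car clears in roughly 8 s instead of roughly 19 s: the propagation delay down the queue, not the driving itself, is most of the wait.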


s/kill time/do something useful/, like reading books, working, having fun, sleeping. Time spent driving where a "self-driving" alternative is available is absolutely wasted.


Kill time? Regardless of who is driving, time spent sitting in the car is time spent away from family/pets/work/food/bathrooms and everything else important.


Don't worry, Tesla model S comes with a bathroom, a car office, a mini kitchen and a petting zoo


So Tesla is branching out into RVs? I guess that would be the logical synergy of Tesla performance cars and Tesla wall-mounted battery packs.


That won't work for all the people who'll suffer car sickness when they spend time focusing on their phone while in a vehicle. Intermittent messaging is OK for most people but have you tried reading a book as a passenger? We might find that most people have to look out the windows so often that they can't get much productive work done.


I used to have car sickness as a kid. I recall some of it being caused by vibrations made by the engine. The smell of gasoline was also vomit-inducing for me. Both of those issues are solved by electric cars.


> Speed limits are set to try and keep people safe, by enforcing a maximum speed.

That's one reason. Then there's the money printing aspect of it and the environmental aspect of it.


Don't forget the occasional "keeping cyclists and pedestrians alive" aspect.


I thought that was covered by 'keep people safe'.


Some problems are less temporary. The energy required per unit of distance goes up with speed as well, thus limiting your range. Furthermore, the amount of road noise produced rises as well, which is another important limiting factor in high-density areas.


I doubt the speed limit will increase in 25 mph residential areas.


Have you ever ridden a horse? They're mad fun to ride. That's why there's a whole industry for racing and riding them.

But they're inconvenient for getting from point A to point B. Which is why they've been replaced by cars.

I suspect the same will happen with cars. I love my fast car, but apart from that once-a-month weekend drive, I really don't get to flex its muscles.

I'd much rather relax in my car on my way to work every day. Then I could buy a more powerful car which I could take out to the race track on weekends.


I don't think inability to speed will be as much of a problem with automation. People speed regularly today because other than getting to a destination, driving is so unproductive. If you're able to do what you want while driving (read, watch videos, communicate, etc) people won't be in as much of a hurry.


Who cares about speeding when you have all the time in the world.


Agreed. Speed, particularly on sub-1-hour drives, is mostly about competitiveness. If you're no longer competing with other drivers, because you have a robot that is, what will that extra 10mph do for you?

Particularly when you can work in your car. Or watch a movie in your car. Or have sex.


Why assume that the robots won't compete? Cars today are marketed based on their potential for speed. With that out of the picture, why shouldn't we expect manufacturers to compete to have the "fastest" autodrive systems?


I want this. A rollercoaster like experience on the autobahn, nice :D


Sounds analogous to the transparency/traceability Sussman wants from AI: https://news.ycombinator.com/item?id=10388795


The "what the heck is it doing" problem may not be a problem with inexperienced pilots.


As an inexperienced pilot who has quite recently had the "what the heck is it doing" experience with my (40 yr old, analog computer) autopilot system, I can assure you that it is.


Maybe it should be no-experience pilots then :)

But joking aside, that's where these systems will eventually take us.


> pilots uncertainty regarding autopilot functioning is [...] the reason for several accidents.

And compared to that, how many accidents were prevented by the use of autopilot? How many times did the autopilot make a better decision than a human would have? You can't compare to an absolutely perfect state (zero accidents). You have to compare to the current situation.


I think in my comment I mentioned the fact that, taken as a whole, autopilot systems have made planes "so, so, so much safer."

I was only remarking on the possibility that some of the issues this automation has created in aircraft might also be seen in cars, especially with requirements like keeping your hands on the steering wheel even though the car is in control.

And, unlike your rational analysis, I don't trust politicians or the media to think about it like that. Instead, I'm sure the first accident that could even be remotely attributable to an automated system would immediately ignite a firestorm of bullshit articles and possibly damaging regulation.



