

When Will We Have Unmanned Commercial Airliners? - yread
http://spectrum.ieee.org/aerospace/aviation/when-will-we-have-unmanned-commercial-airliners/0

======
lucasjung
I fly for a living, and I currently work in developmental flight test. One of
the programs I am working on is a UAV.

It's going to be a very, very long time before we see autonomous airliners.
I'll talk about specific technical hurdles, but I think the biggest issue is
psychological: it's one thing to entrust a bunch of freight to an autonomous
vehicle, another thing entirely to entrust dozens of living, breathing human
beings to such (the article discusses this, including the concept of "shared
fate"). I am confident that autonomous airliners will only come into service
when autonomous aircraft technology reaches a point where the computers are
able to handle _every_ aspect of flight safety better than humans. Right now
they can already do _some_ of those things better than humans, and those tasks
have already largely been shed by human pilots and entrusted to their
computers. I think there will be a gradual transition as the computers are
able to take on more and more of the tasks. The article also mentions how this
process is already causing basic aviation skills to atrophy in pilots who
leave too much to the computers. I think this is a very real problem, and I
think it was a major contributor to the Air France flight 447 crash. Prudent
pilots do more manual flying than is strictly necessary because that's the
only way to maintain proficiency. If this problem becomes severe enough,
expect to see the FAA establish more granular proficiency requirements.

The author talks a lot about how much the military is using UAVs, and seems to
think that this is a good model for civilian applications. It isn't. The
military has an entirely different set of risk considerations than civil
aviation. Take the example of medevac UAVs: a medevac UAV will almost
certainly have a higher mishap rate than a manned medevac helo, which would be
completely unacceptable for civilian purposes. However, for the military that
increased mishap risk is more than offset by the risk of putting an entire
human crew into harm's way just to medevac a single wounded soldier.

Military UAVs currently in use generally have much higher mishap rates than
their manned counterparts, but the military tolerates this because aircrew
don't die in UAV mishaps, and UAVs are generally less expensive to replace
than manned vehicles. Part of the reason for this is that features designed to
prevent or mitigate mishaps cost money, and it is often cheaper to leave many
of them out and accept the higher mishap rate, especially when no human crew
is involved. However, part of it is that autonomous systems still just aren't
as good at flying safely. For the military, the benefits outweigh the costs,
but I really can't see a for-profit corporation reaching the same conclusion.

> _Northrop Grumman has built some sense-and-avoid savvy into the unmanned
> helicopters and other UAVs it's developing for the U.S. Navy._

I happen to be intimately familiar with one such system, and somewhat familiar
with another. This sentence is utter BS. I can only assume that the author was
fed a line by an NGC PR-type and took it at face value. A more honest way to
say it would be:

 _Northrop Grumman is trying to incorporate limited sense-and-avoid
capabilities into the unmanned helicopters and other UAVs it's developing for
the U.S. Navy._

Moreover, the FAA has a requirement for " _see_ and avoid," not "sense and
avoid."
The military is trying hard to sell them on the idea that it should be "sense
and avoid," of which "see and avoid" would be just a subset, but so far the
FAA has remained deeply skeptical. The FAA is right to be conservative about
this change: none of the currently proposed systems would be as effective at
collision avoidance as the Mark I Eyeball, and thus far the systems I am aware
of (to which the quote from the article was referring) are still a _very_ long
way from working properly. This seems to me like the kind of technical problem
that eventually can be overcome, but it's going to take a lot of work to make
that happen.

Because "see and avoid" is so critical to safety of flight, and because UAVs
can't currently do it, the FAA does not allow UAVs to operate in its airspace,
with some tightly restricted exceptions for military and law enforcement UAVs.
I doubt very highly that they would make similar exceptions for civilian
purposes, and even if they did, the restrictions involved are so limiting that
there aren't many viable applications.

~~~
JoeAltmaier
In just a couple of years the DARPA challenge yielded cars that can drive
themselves in complex urban environments.

I imagine that with enough smart people working on it, flying airplanes in
relatively uncluttered sky would yield results faster.

As for see-and-avoid, perhaps it works for obstacles on the ground, but other
aircraft are moving so fast, is it really feasible to eyeball them before they
are on you? Perhaps in pursuit, but at any significant angle they flash past
at hundreds of miles an hour. Only radar etc has a chance of
identifying/avoiding at those speeds. <Edit: spelling>

~~~
brlewis
Commercial airliners, last I checked, suffered one mishap per 1.5 million
flights. Human automobile drivers set a much lower standard to beat.

~~~
JoeAltmaier
Sure; but the flight environment is trivial in comparison with urban traffic.
Even so, with concerted effort the progress was phenomenal.

~~~
lucasjung
> _the flight environment is trivial in comparison with urban traffic._

Having experienced plenty of both, I can tell you that this is utterly false.

~~~
marshallp
Says the human. To a computer driving is much more difficult than flying.

You could equally say adding up big numbers is much harder for you than
walking across the street. To a computer the former is trivial, while no
walking robot has yet managed the latter.

~~~
lucasjung
As I've mentioned elsewhere, I'm directly involved in the test and evaluation
of one of the current attempts at "sense and avoid" technology. It is a _long_
way from being safe for fully autonomous use. It is my understanding that the
DARPA autonomous land vehicle contest required autonomous collision avoidance,
and that more than one of the entrants did so effectively. Their budgets were
a pittance compared to what is being spent on "sense and avoid" for UAVs, and
yet they had much greater success. That tells me that urban traffic avoidance
is easier for computers than air traffic avoidance.

I think that two big reasons for this are probably:

1: For land vehicles, simply stopping in place is almost always an effective
collision avoidance tactic (unless the other vehicle is deliberately seeking a
collision). This simple solution is not available to airplanes.

2: Tracking objects that are moving in three dimensions with a sensor that is
also moving in three dimensions is an immensely more complex problem than
tracking objects that are constrained to move on a fixed surface in two
dimensions with a sensor that is also constrained to move on a fixed surface
in two dimensions.
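
As a rough illustration of reason 2 (a hypothetical sketch in Python, assuming
constant velocities): the closest-point-of-approach geometry itself is the same
formula in 2D and 3D; what changes in the air is that every input to it is a
noisy estimate produced by a sensor that is itself translating and rotating in
three dimensions.

```python
import numpy as np

def closest_point_of_approach(p1, v1, p2, v2):
    """Time and miss distance at closest approach for two objects
    moving at constant velocity. Works unchanged for 2D or 3D vectors."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    dv2 = dv.dot(dv)
    if dv2 == 0.0:                   # same velocity: separation never changes
        return 0.0, float(np.linalg.norm(dp))
    t = max(0.0, -dp.dot(dv) / dv2)  # clamp: only future approaches matter
    return t, float(np.linalg.norm(dp + t * dv))
```

These few lines are the easy part; feeding them accurate positions and
velocities for a maneuvering contact, from a moving platform, is the problem
that "sense and avoid" programs are still wrestling with.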

~~~
JoeAltmaier
Circular reasoning? They solved the land-nav problem because it's easier. It's
easier because they solved it.

My point was, there is another wrinkle. The land-nav problem was opened up,
made a competition with a big marketing budget. It was also intractable,
unsolved, too hard. Until lots of smart people started brainstorming and
trying crazy things and cooperating.

Airplanes can change speed drastically, which is at high speeds about as
effective as stopping. And no, you don't get to say collisions are hard to
avoid because 3 dimensions are hard to calculate, making that not a solution.

I think I begin to see why the problem hasn't been solved yet.

~~~
lucasjung
It's not circular reasoning: it's a conclusion based on observation. Several
groups tried to solve each problem. All of the groups that tried to solve
problem A have thus far failed (although some have made measurable progress)
while some of the groups that tried to solve problem B succeeded despite
having considerably fewer resources at their disposal. "Hard to solve" can be a
somewhat difficult label to define, but those results are a strong indicator
that problem A is harder to solve.

By your logic, every "impossible" problem could be solved easily if DARPA
would just offer a small prize to whoever solves it. Unfortunately, the real
world
doesn't work that way. There's a reason why DARPA chooses the tasks they do
for their challenges: they spend a lot of time and effort identifying tasks
that are highly likely to be amenable to novel solutions.

> _Airplanes can change speed drastically, which is at high speeds about as
> effective as stopping._

Incorrect, on two counts. First, not all airplanes can change speed
"drastically." Second, it is not as effective at preventing a collision as
stopping. If both cars in an impending collision stop (and in many cases, even
if only one of them stops), a collision becomes _impossible_. On the other
hand, there are a lot of situations where deceleration merely delays, but does
not prevent collision. That has value, but it's not as good.
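
A back-of-the-envelope example with hypothetical numbers: two aircraft closing
head-on at the same altitude can both slow down, but neither can drop below its
minimum flying speed, so deceleration only buys time:

```python
# Two aircraft 10 nm apart, head-on. Each slows from 250 kt to a
# hypothetical 130 kt minimum (below some speed an airplane stalls;
# it cannot simply stop the way a car can).
sep_nm = 10.0
closure_before = (250 + 250) / 3600.0  # nm per second at full speed
closure_after = (130 + 130) / 3600.0   # nm per second after slowing
t_before = sep_nm / closure_before     # ~72 s to impact
t_after = sep_nm / closure_after       # ~138 s to impact
```

Slowing roughly doubles the time available, which is genuinely valuable, but
without a change of course the collision still happens; two cars that both stop
have eliminated it outright.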

> _And no, you don't get to say collisions are hard to avoid because 3
> dimensions are hard to calculate, making that not a solution._

I never said it was "not a solution," but I definitely _do_ get to say that
it's a much harder problem to solve. Here are the steps you have to perform to
avoid a collision:

1: Detect an object.

2: Track the object to determine its course and speed.

3: Compare the object's course and speed to yours to determine how likely a
collision is.

4: If the probability of a collision is unacceptably high, determine a change
of course and/or speed which will reduce the probability of collision to an
acceptable level. If the probability of collision is already acceptably low,
return to step 2.

5: Maneuver to change course and speed accordingly, then return to step 2.
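
Those steps could be sketched very loosely as follows (a hypothetical 2D toy in
Python that assumes perfect knowledge of every contact's position and velocity,
which is precisely the assumption that step 2 makes untenable in the real, 3D,
noisy-sensor case):

```python
import math

def miss_distance(own_p, own_v, c_p, c_v):
    # Steps 2-3: project both tracks forward at constant velocity and
    # find the separation at the closest point of approach.
    dp = (c_p[0] - own_p[0], c_p[1] - own_p[1])
    dv = (c_v[0] - own_v[0], c_v[1] - own_v[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    return math.hypot(dp[0] + t * dv[0], dp[1] + t * dv[1])

def choose_maneuver(own_p, own_v, contacts, threshold=1.0):
    # Step 4: if any predicted miss distance is unacceptably small, try
    # heading changes in 10-degree increments until every contact clears.
    speed = math.hypot(*own_v)
    heading = math.atan2(own_v[1], own_v[0])
    for delta_deg in range(0, 181, 10):
        for sign in (1, -1):
            h = heading + sign * math.radians(delta_deg)
            v = (speed * math.cos(h), speed * math.sin(h))
            if all(miss_distance(own_p, v, p, cv) >= threshold
                   for p, cv in contacts):
                return v  # step 5: the new velocity to fly
    return own_v  # no clearing maneuver found; a real system must escalate
```

Even this toy only searches heading changes; a real system also has to trade
off speed and altitude changes against aircraft performance limits and
everything else in the sky, all under uncertainty about where the contacts
actually are.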

If you are moving in three dimensions and the objects with which you might
collide are moving in three dimensions, step 2 is hard to do accurately.
(Unless you've studied radar tracking, you probably don't appreciate exactly
how hard, but if you're genuinely interested, Skolnik's _Radar Handbook_ is a
good place to start.) The less accurate you are at step 2, the harder steps 3,
4, and 5 become, because you have to deal with more uncertainty. Is that
_really_ where the other object is? Is that _really_ where it will be in
twenty seconds? How certain are you of that? How certain are you that there
really is something even there at all? If you're wrong in one direction,
you'll have a midair; if you're wrong in the other direction, you'll perform
some extreme maneuver for absolutely no good reason.

------
jarin
"And for nearly two decades, automatic landing systems have been able to drop
and stop a jet on the fog-shrouded deck of an aircraft carrier that's barely
twice as wide and three times as long as the jet's wingspan—and the ship is
moving. Meanwhile, the pilot sits in the cockpit, hands folded."

I know that aircraft carriers have these systems, but having been on two
deployments on an aircraft carrier I have never heard of it being used. I
think it's only for emergencies when the pilot is injured or incapacitated.

~~~
lucasjung
There are actually a few levels of automation available to hornet pilots,
ranging from (in layman's terms) "a little help staying on-speed and on
glideslope" to "fully automatic." Some pilots hardly ever use them, some
pilots use them as much as they are allowed (in order to ensure proficiency,
there are limits in place to make sure the pilots don't become overly reliant
on the automation). Most pilots are in-between, using them occasionally,
typically when they feel like they're not at their best (e.g. at the end of a
6 hr+ combat sortie into Afghanistan). Even when on full-auto, I don't know
any of them who sit "hands folded:" they're ready to take over immediately in
case something goes wrong. I've heard stories that some of the test pilots
eventually got comfortable enough to take their hands completely off the
controls
back when this stuff was first being tested, but I doubt if even they made a
habit of it.

~~~
grecy
> I don't know any of them who sit "hands folded:" they're ready to take over
> immediately in case something goes wrong.

Did you ever hear of something going wrong with the automation?

~~~
lucasjung
Occasionally it will disengage, leaving the pilot in full control. IIRC, when
in full-auto it will automatically wave off (add power and climb out to go
around and try again) when it disengages unless the pilot actively takes over;
so it wouldn't create an immediately unsafe situation, but it could result in
an unnecessary missed opportunity to land.

~~~
grecy
> when in full-auto it will automatically wave off (add power and climb out to
> go around and try again) when it disengages

Interesting. So it hasn't disengaged at all, it's just decided to abort the
current landing attempt, and go around to try again.

------
danielschonfeld
I am an airline pilot and a programmer who loves automation. While I think the
psychological aspect is easy to get the public past by offering ultra-low
fares, the technological one is quite different.

The author states that even in easy applications such as traffic watch we
still do not see UAVs, and what the author is missing is the cost! Those
military UAVs cost almost as much as a private jet to do a job that today,
using 1950s technology, can be done for the cost of a used clunker.

Also, the same inherent problems that we face in fully automating cars exist
in fully automating airplanes. Each is an individual, highly autonomous unit
with no single 'track' to follow. On top of that largely autonomous
environment you have integration problems: you would have to integrate the old
environment with the new automated one and still offer the same airspace
capacity.

As frustrating as air travel seems nowadays (and yes, it is for pilots too,
more than passengers realize), the airspace utilization achieved in places
like New York, Chicago, Los Angeles, Phoenix, and San Francisco is almost
mind-boggling considering it's done with a mosaic of 1930s-era radar equipment
that honestly can't pinpoint you to an exact location within a circle of one
mile around yourself.

~~~
edelweiss
What commercial aircraft do you fly, and for how long have you been a
commercial airline pilot?

------
vaksel
I don't think we'll ever see it.

There are too many hurdles to overcome, starting with the psychological and
liability ones facing the actual plane manufacturers.

\+ it just doesn't make sense economically. Pilots get paid fairly little now,
and as automation progresses, salaries will drop to ~$40K (so ~$80K for the
two). At that point, it's really not that much money to give passengers the
extra sense of security.

~~~
onemoreact
It's not just the pilots: a cockpit for something like a 747 is not all that
cheap to build, and it adds a fair amount of weight on every trip. If you
really need manual supervision of takeoffs and landings, you could have a set
of pilots at major airports supervising those phases with very low latency.

~~~
mseebach
Modern glass cockpits are just thin interfaces on avionics you need anyway. I
would be surprised if a cockpit-less 777 was significantly lighter than one
including it.

As for the weight of the pilots themselves - judging by the cost of excess
luggage, I'd say it's negligible for all but the smallest aircraft.

~~~
onemoreact
I would say it's not just the weight of the pilots, but also their seats, the
extra structural support and reinforced door, the need to wire control
surfaces to the front of the airplane vs. its midsection, etc. You're probably
down ~2-4 passengers.

------
tjmc
I'll believe it when someone can tell me with a straight face that the 440
passengers and 29 crew members of QF32[1] would still be alive without the
heroes they had up front that day.

[1] <http://en.wikipedia.org/wiki/Qantas_Flight_32>

~~~
pak
That was a fascinating read, although I must play devil's advocate and ask
what prevents a ground crew from taking over a commercial airliner remotely in
case of distress? None of what this flight crew did necessitated their
physical presence onboard, only their decision-making and operation of the
existing aircraft controls.

~~~
tjmc
Given that the plane was on battery power when it landed, a ground crew may
not have still had contact and control of the aircraft. In the case of the
"Gimli Glider" mentioned below, there was a total power failure due to fuel
exhaustion and IIRC the pilots had to crank down the nosewheel by hand.

------
rbanffy
Hopefully only when the last time a pilot had to intervene on a plane flying
itself is remembered only in history books.

Automation today is really good, but only until something goes wrong. Until
automation becomes absolutely perfect, I will argue it's actually starting to
become dangerous, as pilots are thrown into emergencies without quite knowing
how the plane got into them or on what flawed assumption.

~~~
weavejester
Automation doesn't have to be perfect; it just has to have better judgement
than a human pilot.

~~~
lucasjung
Automation is incapable of "judgement;" automation consistently and reliably
responds to inputs according to pre-defined instructions. That's why
automation works really well in some situations but not others. The more
complex the task, the harder it is to make a complete instruction set that will
result in a satisfactory outcome for every possible situation.

Even human pilots behave somewhat like automatons in some situations: in most
situations they follow procedures, which could be described as "responding to
inputs according to pre-defined instructions." However, they often encounter
situations not covered by the procedures, in which case they must instead
exercise their judgement.

The problem with judgement is that it is neither consistent nor reliable. Some
humans have better judgement than others. Eventually we will have automation
sophisticated enough to handle even the full complexity of aviation, at which
point automation will yield safe results more consistently and reliably than
human judgement.

~~~
_delirium
I suppose it gets into philosophical debates about AI, but I don't see any
reason in principle that we can't describe at least some kinds of computerized
systems as exercising something described as "judgment". We already have one
existence proof, the human brain, of a system that can exercise something we
call "judgment", and I don't see a strong reason to believe that it's due to
anything magical about the human brain in particular (like a soul or something
along those lines), rather than just being a complex reasoning system that's
able to balance many contextual factors.

~~~
lucasjung
Automated systems as they currently exist have fixed responses to fixed
inputs. If an automatic system encounters a set of inputs it wasn't programmed
for, it has no capacity to determine a best course of action for those inputs.
Depending on how it was programmed, it will either keep doing what it was
doing previously, or switch to a pre-programmed "contingency plan" that
hopefully will result in a tolerable outcome (but which might result in
catastrophe), or possibly execute a random set of instructions (which can
happen to poorly-designed state machines, for example).
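
A toy illustration of that failure mode (hypothetical, not modeled on any real
avionics logic): every anticipated (state, input) pair gets a fixed response,
and any input the designers never thought of falls through to a single
catch-all, whether or not that response is actually appropriate for the
situation:

```python
# Every (state, event) pair the designers anticipated maps to a fixed
# response; there is no mechanism for inventing a better one at runtime.
TRANSITIONS = {
    ("cruise", "waypoint_reached"): "fly_next_leg",
    ("cruise", "low_fuel"): "divert",
    ("approach", "glideslope_lost"): "go_around",
}

def respond(state, event):
    # Anything not in the table gets the pre-programmed contingency,
    # appropriate for that event or not.
    return TRANSITIONS.get((state, event), "contingency_hold")
```

A human in the same seat can reason about an unanticipated event; this table
can only fall back on whatever its designers guessed would be least bad.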

A human being, on the other hand, has the capacity, when faced with unexpected
or unfamiliar conditions, to exercise something we call "judgement" in an
attempt to develop an appropriate response. I'm not saying that it's
impossible for an automated system to have this capacity, I'm saying that no
current automated systems have it, and that we're nowhere near to developing
such a system any time soon.

~~~
_delirium
That's definitely the case with deployed civilian aircraft systems, but I'd be
surprised if there wasn't some unmanned system somewhere doing more complex
reasoning. There was a talk _years_ ago at IJCAI from some people from NASA
Ames on a prototype aircraft-control system they'd built that used a reasoning
system to assist with performing emergency landings in situations with no
preprogrammed contingency, by taking into account some telemetry information
(e.g. aircraft damage), map information, an aerodynamic model, and risk
models.

I do believe they were planning to deploy it as a suggestion system though,
which would suggest a course of action, and then leave it to the pilot to
implement it or not. Then the judgment gets more murky; now the system is
doing some of the judgment (evaluation of alternatives, etc.) that a human
pilot would normally do, but leaving some of the judgment (accept the
suggestion, modify it) still to the human.

edit: Here's a more recent paper than the one I'm thinking of, but must be the
same project:
[http://ti.arc.nasa.gov/m/profile/de2smith/publications/IAAI0...](http://ti.arc.nasa.gov/m/profile/de2smith/publications/IAAI09-ELP.pdf)

------
crikli
lucasjung has done such a fantastic job of debunking this idea that there's
very little to add; however, one aspect that hasn't been expanded upon is the
glacially slow pace at which the FAA approves systems for use.

Approved aviation systems are far from bleeding edge; at best the technology
in use is five years old, and it's in use only because some company (or
companies) invested the considerable resources to jump through all of the
hoops required by the FAA to get approval for commercial use.

Furthermore, military aviation is an incredibly poor analogue to commercial
(and general) aviation because it lacks the onerous regulatory oversight
brought to bear by the FAA. Military aviation is the bleeding edge and the
advancements made there trickle backwards to CA/GA at a very slow pace due to
the incredible cost of getting certificated by the FAA as well as the fact
that the FAA (correctly) places safety and reliability at the top of their
priority list.

