
An AI defeated human fighter pilots in an air combat simulator (2016) - wasi0013
https://binaryloom.com/an-ai-just-defeated-human-fighter-pilots-in-an-air-combat-simulator/
======
Nition
The article doesn't say anything about what data the AI had. Did it have full
access to the simulation, or did it have to "see" the opponent's plane, the
terrain, etc., a bit more like it would in real life?

That could make a big difference. It's easy to make a self-driving car for
instance if you can feed it the position of every other car in the world, with
their movement data, and all other obstacles.

~~~
obstinate
It seems very unlikely that they gave the AI global awareness. What would a
test like that prove? The people who are working on military software are not
idiots.

The article also refers to ALPHA being able to defeat the human pilot with
impeded sensors, which implies that it was being fed information from
simulated sensor systems.

~~~
StavrosK
It would prove that an AI could fly, maneuver, target, shoot and take down
enemy aircraft, with good enough sensors.

------
coob
The real kicker is that an AI controlled aircraft wouldn't be as limited in
manoeuvrability as one that has to sustain a human.

~~~
sneak
I feel like Terminator 2 should be required watching for these devs, no?

~~~
Jach
Not really, because if you ignore the rest of the series T2 ends with humans
winning, with the cautionary moral of 'no fate but what we make'. They need to
watch something where the machines quickly go from seeming under control to
dominating and wiping out humanity with no notion of a 'resistance'. Is there
something in the Outer Limits/Twilight Zone archives for that? The closest
thing I can think of right now is Stargate Universe's self-replicating ships,
which, unlike other replicators in the series, actually won and couldn't be
stopped, forcing the crew to hope they could make it to the next galaxy and
escape (but if the bots are already there, they're screwed).

~~~
antod
_> Not really, because if you ignore the rest of the series T2 ends with
humans winning_

Coz a pointless and massively destructive war is OK as long as we win in the
end?

~~~
Jach
Not exactly OK, but it along with most everything else instills the spirit of
muddling through to victory even with big screw ups. Better to fear extinction
than mere war.

------
vmarsy
"Just defeated", but this should have a 2016 tag.

Paper link for those interested:
[https://www.omicsgroup.org/journals/genetic-fuzzy-based-
arti...](https://www.omicsgroup.org/journals/genetic-fuzzy-based-artificial-
intelligence-for-unmanned-combat-aerialvehicle-control-in-simulated-air-
combat-missions-2167-0374-1000144.pdf)

(from a June 2016 press article:
[http://magazine.uc.edu/editors_picks/recent_features/alpha.h...](http://magazine.uc.edu/editors_picks/recent_features/alpha.html)
)

~~~
gtirloni
Previous discussion:
[https://news.ycombinator.com/item?id=11993366](https://news.ycombinator.com/item?id=11993366)

------
terravion
Love this! My least favorite part of sci-fi is that the heroes are always
buzzing around in some advanced fighter aircraft. Crap, they might as well
ride around on sci-fi horses and joust like knights. The era of the fighter
ace is as gone as the age of horse cavalry--it would be great if the Pentagon
figured this out before squandering literally $1T (not an exaggeration) on the
Joint Strike Fighter.

~~~
pavement
Okay, so aerial combat in the form of interceptor dog fights is an
anachronism. Big deal.

It's still worthwhile to develop high-performance piloted airframes, since
people are still actors in any conflict, and will need to operate within the
theater of combat.

Even if primary operations are handled by unmanned systems, redundancy and
fallback systems are needed. Not just troop transport from A to B. Well-armed
vehicles capable of deploying weapons will still need to be piloted and
maintained.

Police still use horses. The military will still find utility in piloted
warbirds.

~~~
MarkPNeyer
police still use horses, but they don't spend a trillion dollars on a new
laserhorse

~~~
stouset
Most of that money was spent before drones were considered well-tested and
viable. Would you rather we simply throw away the $1tn already spent and start
over?

~~~
mdekkers
_Would you rather we simply throw away the $1tn already spent and start over?_

[https://en.wikipedia.org/wiki/Sunk_cost#Loss_aversion_and_th...](https://en.wikipedia.org/wiki/Sunk_cost#Loss_aversion_and_the_sunk_cost_fallacy)

The sunk cost fallacy is in game theory sometimes known as the "Concorde
Fallacy",[8] referring to the fact that the British and French governments
continued to fund the joint development of Concorde even after it became
apparent that there was no longer an economic case for the aircraft. The
project was regarded privately by the British government as a "commercial
disaster" which should never have been started and was almost cancelled, but
political and legal issues had ultimately made it impossible for either
government to pull out.

~~~
stouset
This is not simply a case of the sunk cost fallacy.

Stopping the program now _would_ result in a trillion dollars having mostly
gone to waste. Most of that money went to the initial development of the
program, and a comparatively small amount is now spent on each new airframe.
Scrapping it right as the upfront costs begin to pay off as airframes are
delivered makes no fiscal sense whatsoever.

Operational F-35Bs have been delivered to the military since 2015, F-35As
since 2016. Drones capable of fulfilling the roles the F-35 will fill _do not
yet exist_. You'd be sinking a program where the overwhelming majority of
upfront R&D has been paid off in exchange for starting a new program from
scratch that may not deliver a drone for another ten, maybe fifteen years.

Yes, today we could design and build a new set of aircraft with more modern
capabilities. But by your logic, after fifteen years of R&D — just as units
begin rolling off the production line — we'd be looking to scrap the program
and start anew with whatever those fifteen years of technological advancement
might have unlocked.

You're comparing a plane that's been in development for twenty years with a
hypothetical drone with today's tech that wouldn't be ready for another
fifteen to twenty years.

------
NicoJuicy
From reading:

Amazing that the AI was running on a Raspberry Pi... I'm assuming image
processing happens at a low resolution. Also, this seems to be interesting:

"They tackled the problem using language-based control (vs. numeric based) and
using what’s called a “Genetic Fuzzy Tree” (GFT) system, a subtype of what’s
known as fuzzy logic algorithms.

States UC’s Cohen, “Genetic fuzzy systems have been shown to have high
performance, and a problem with four or five inputs can be solved handily.
However, boost that to a hundred inputs, and no computing system on planet
Earth could currently solve the processing challenge involved – unless that
challenge and all those inputs are broken down into a cascade of sub
decisions.”

That’s where the Genetic Fuzzy Tree system and Cohen and Ernest’s years’ worth
of work come in."

"The training happened on a $500 commercially available computer"

"It uses linguistic input instead of numeric input"

Can I conclude (after reading the article) that the training matches
"situations"/"variables" to words? E.g. the enemy is far/close, the friend is
behind me/in front of me/above me, and then I go up/down/do a roll.

Multiple generations later, some linguistic variables are left out, so only
the necessary ones remain.
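If that reading is right, the mechanics are easy to picture. Here is a
minimal sketch of linguistic-variable fuzzy rules in Python (this assumes
nothing about ALPHA's actual implementation; every name, threshold, and rule
below is invented for illustration):

```python
# Toy linguistic fuzzy rules: crisp sensor numbers are mapped to degrees
# of membership in words like "close"/"far", simple IF-THEN rules fire on
# those words, and the strongest action wins.

def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_range(km):
    """Map a crisp range in km to degrees of 'close' and 'far'."""
    return {"close": tri(km, -1.0, 0.0, 10.0),
            "far": tri(km, 5.0, 20.0, 100.0)}

def evade_decision(range_km, closing_speed_mps):
    """Fire two linguistic rules and return the strongest action."""
    r = fuzzify_range(range_km)
    # Crisp threshold for brevity; a real system would fuzzify this too.
    fast = 1.0 if closing_speed_mps > 200 else 0.0
    strengths = {
        "break_turn": min(r["close"], fast),   # IF close AND closing fast
        "hold_course": r["far"],               # IF far
    }
    return max(strengths, key=strengths.get)
```

A Genetic Fuzzy Tree would then cascade many such tiny rule bases, each with
only a handful of inputs feeding the next, and let a genetic algorithm tune
the membership functions and rules over generations; that's the "cascade of
sub decisions" Cohen describes.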

From the comments on Reddit (
[https://www.reddit.com/r/CredibleDefense/comments/4q8h8z/pap...](https://www.reddit.com/r/CredibleDefense/comments/4q8h8z/paper_on_ai_in_air_combat_genetic_fuzzy_based/)
)

"This is not another opinion piece vaguely arguing how AI will "one day"
replace pilots. The authors seem very cognizant of all the challenges and
complexities. They clearly limit and define the scope of their work."

"This "research paper" is a commercial. The expert fighter pilot is an
employee of the company and his job is to promote it."

Nonetheless, gonna read up on some Fuzzy Logic :p -
[https://www.tutorialspoint.com/artificial_intelligence/artif...](https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_fuzzy_logic_systems.htm)

Edit: Link to previous HN discussion --
[https://news.ycombinator.com/item?id=11993366](https://news.ycombinator.com/item?id=11993366)

~~~
AstralStorm
The main upside of this system is that we can learn from it as well. (Though
it does not explain the why well, it does explain the how well enough that we
can figure out why it takes such an approach.)

------
huangc10
#OldNews, this article is from 2016.

> What sets ALPHA apart is the genetic fuzzy tree decision system that can
> calculate an opponent’s movements or strategies 250 times faster than you
> can blink.

That's just not fair.

One more comment, can this thing (ALPHA) land an Airbus A320 on the Hudson?

~~~
phire
Water landings are a documented procedure for planes, there is a checklist
(which Sully had practiced in simulators).

Assuming an AI like this was given 100% control over a commercial plane, it
would have been programmed with such a checklist. It would have also been
tested on this ability in a simulator (just like regular pilots). The AI would
have easily worked out in an emergency that it couldn't reach the runway, or
any alternative runway and not even bothered trying.

The only question is, would the AI have decided that the Hudson was a valid
surface for a water landing attempt? And that really depends on the database
programmed into the plane. It can't really see the river, so the database
would need to have the river marked as water, at the correct elevation. It
also can't really see the bridges (maybe if it had LIDAR), so the bridges
would have to be included in the database too, so it could pick a good stretch
to try and land it. Oh and boats too... either it would have to dynamically
spot them, or get lucky.

But if the database correctly reflected reality, then it would see the
opportunity, try to execute it and potentially succeed.
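As a rough sketch of that database-driven reasoning (a toy illustration, not
real avionics logic; the glide ratio, field names, and site list are all
invented):

```python
import math

# Toy landing-site selection from an onboard database.

def reachable(site, pos, altitude_m, glide_ratio=17.0):
    """True if the site is within still-air gliding distance."""
    dist_m = math.hypot(site["x"] - pos[0], site["y"] - pos[1])
    drop_m = altitude_m - site["elevation_m"]
    return drop_m > 0 and dist_m <= drop_m * glide_ratio

def pick_site(sites, pos, altitude_m):
    """Prefer reachable runways over water; break ties by distance."""
    options = [s for s in sites if reachable(s, pos, altitude_m)]
    options.sort(key=lambda s: (s["type"] != "runway",
                                math.hypot(s["x"] - pos[0],
                                           s["y"] - pos[1])))
    return options[0] if options else None
```

With every runway outside glide range and a stretch of river marked as water
in the database, the only reachable option left is the water landing, which
is roughly the decision being described.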

~~~
autokad
AIs are still struggling with 'is that white space a tractor trailer or the
sky? Eh, just drive through it' in cars. They are immensely far off from
aerial combat and emergency maneuvers in aircraft, and the whole point of
emergencies is that they are a bit unexpected and the situations tend to be
rather specific.

I'm not saying it won't eventually happen, but no: they unequivocally cannot
fly our aircraft in takeoffs/landings, emergency situations, etc.

------
graycat
For some of that, there's some old and quite solid math -- differential game
theory. See, say, Rufus Isaacs, long a professor at Johns Hopkins, who IIRC
did his first work at a US Navy lab; see also Avner Friedman.
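For a flavor of the pursuit-evasion setting Isaacs studied, here is a toy
pure-pursuit chase integrated numerically (this is not Isaacs' closed-form
differential-game solution; the speeds, time step, and capture radius are
invented):

```python
import math

def simulate(pursuer, evader, vp=1.5, ve=1.0, dt=0.1,
             steps=1000, capture=0.5):
    """Pursuer always heads straight at an evader fleeing along +x.
    Returns the step count at capture, or None if the evader escapes."""
    px, py = pursuer
    ex, ey = evader
    for t in range(steps):
        ex += ve * dt                      # evader flees along +x
        dx, dy = ex - px, ey - py
        d = math.hypot(dx, dy)
        if d <= capture:
            return t
        px += vp * dt * dx / d             # pure pursuit: aim at evader
        py += vp * dt * dy / d
    return None
```

A faster pursuer always closes (pure pursuit closes at a rate of at least
vp - ve); a slower one here never does. Differential game theory answers such
questions exactly, including the optimal strategies for both sides.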

------
shusson
Why spend resources developing AI for a plane designed for humans? I'd
imagine that AI on cheap drones/missiles vs. traditional human fighters would
be a more interesting simulation.

~~~
rbanffy
A good reason to let it control normally manned craft is that, since it and
its peers will have a high kill rate, no human will want to fly a combat
mission against them. There are a couple trillion dollars already spent on
meatware-friendly fighters and it'd be a shame to mothball all that.

~~~
davidwihl
It would be a greater shame to send hundreds of well trained pilots in
expensive aircraft only to get heavily defeated by cheaper and more flexible
AI flown craft.

~~~
rbanffy
That's why converting those expensive aircraft into AI drones is a good idea.
An intermediate step could be a hybrid system where the pilot can engage the
AI to reduce their workload and work in tandem with the machine.

------
FrozenVoid
Current fighter planes are limited in their design to accommodate a slow-
thinking, living human being. A plane can't reach certain accelerations, and
has to provide life support, space, and computing power to the cockpit
instead of just functional components and armaments. When AI is polished
enough, fighter planes will go the way of the horse and carriage.

------
chimen
On top of that, it will make things much cheaper, cutting the need to carry
an extra ~90 kg (the pilot), a huge interface of commands that must be
operated by the pilot, and a cockpit to carry them. Not to mention human
error, and the effort invested in training pilots.

Humans will most likely be used in "decision-making" roles.

------
squarefoot
A side effect of employing AI to pilot war aircraft, like with smaller
drones, is that there won't be families grieving the loss of their kid pilot
on the attacking side, so warmongering politicians will get public approval
for campaigns a lot more easily.

------
brilliantcode
This is like that movie where US pilots have to battle a super advanced AI
stealth drone.

AI really should have no place in military but it's going to happen. Some
spineless fucks are going to spread their legs for a 5 star general to sell
their company.

The defence industry is in the business of killing humans and you play a
direct role by selling your knowledge and skills to make it more efficient.
Fuck that shit.

~~~
landryraccoon
I don't think greed or spinelessness has anything to do with it. AI is
similar to nuclear weapons: any country that doesn't have military AI will be
at a huge disadvantage against a country that does. Every country rationally
believes that AI will improve its military chances, and every country
rationally believes that every other country desires AI for its military as
well.

China and Russia will certainly build AI drones as soon as they possibly can.
The only thing that happens if western countries don't build them is that they
are at a significant disadvantage.

~~~
Jach
This is also true of chemical and biological weapons, and weaponizing outer
space, but countries have managed pretty well at not going into an arms race
in those areas; even nuclear weapons have been reasonably well kept in check.
But these mutual agreements to not pursue certain lines of military technology
(at least for offensive purposes) really need to start from the strongest
military agreeing to not do it and allow monitoring/audits to get other
countries on board. The US doesn't seem all that interested in that with AI,
or other scary technologies on the horizon. (Without even getting into the
issue that AI is trickier than something like a chemical weapon, countries
would have a field day on what actually constitutes AI vs normal digital
automation...)

~~~
landryraccoon
> This is also true of chemical and biological weapons

I disagree, actually. Nuclear weapons render chemical and biological weapons
moot, and no major superpower is willing to give up nukes. The United States
has no need for chemical and biological weapons because it's keeping its
nukes. I think AI will present a comparable advantage.

Also, AI does not (yet) create the sense of dread in the popular imagination
that nukes did during the Cold War. Incinerating an entire city in a few
seconds is terrifying on an existential level to any human. Having computers
control drones and tanks seems almost boringly inevitable by comparison. I
doubt there will be unified political will to prevent it.

Also, I think the international climate has radically changed. Agreements like
TPP, which the powers that be supported, were not able to pass. In the era of
Trump I highly doubt there is enough global trust for Russia or China to agree
not to research militarized AI. Basically I don't see a single major power
agreeing not to develop militarized AI in the near future.

