> Possibly autonomous weapons, like chemical weapons, won't be important to victory, or, like most biological weapons (AFAIK), they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash-trading computers, for example. In fact, I wonder if some of the tech for optimizing decision speed and accuracy is the same.

The only way for human adversaries to fight autonomous weapons would be with brute, lethal force (nuclear/neutron weapons). It ends poorly for all involved.



"The only way for human adversaries to fight autonomous weapons would be with brute, lethal force"

No, it's not. You could use EMP. You could use signal jamming. Neither is lethal, and both have the potential to be effective against autonomous weapons.


Pretty much everything the military uses has some measure of EMP shielding. We've known about its effects for over 50 years now.

Signal jamming is an obvious weak point, but one that disappears as autonomy increases. Distributed control would reduce the issue (as in, have a single soldier/operator manage 10-15 units). Eventually you remove human control entirely, and with it, the vulnerability.
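A rough sketch of that intermediate one-operator-to-many-units stage (all names and numbers here are hypothetical, just to illustrate the shape of it):

    # Hypothetical sketch: one operator issues only high-level tasking;
    # each unit executes locally, so a jammed link leaves units running
    # their last order instead of going dead.
    class Unit:
        def __init__(self, unit_id: str):
            self.unit_id = unit_id
            self.current_task = "patrol"      # default autonomous behavior

        def assign(self, task: str) -> None:
            self.current_task = task          # low-level execution is local

    class Operator:
        def __init__(self, units: list):
            self.units = units                # one human, 10-15 units

        def broadcast(self, task: str) -> None:
            for unit in self.units:           # best-effort; no per-unit piloting
                unit.assign(task)

    squad = [Unit(f"unit-{i}") for i in range(12)]
    Operator(squad).broadcast("hold perimeter at grid 6759 5974")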


You can also attack the infrastructure and supply lines, same as with regular fighters. A drone only has a limited amount of ammo and fuel.


Drones have been successful in refueling from tanker aircraft, specifically the "Salty Dog" X-47B Navy UAV testbed:

http://www.navytimes.com/story/military/2015/04/22/navy-nava...


>You could use EMP.

Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power.

> You could use signal jamming.

Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.
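For a sense of why: here's a toy frequency-hopping sketch (illustrative only; real kit uses crypto-grade sequence generators, not Python's random module). Both ends derive the same pseudo-random channel sequence from a shared secret, so a jammer that can't predict the hops has to flood the entire band at once.

    # Toy frequency-hopping sketch (illustrative only): both ends derive
    # the same pseudo-random hop sequence from a shared seed, so a jammer
    # that can't predict it must flood the whole band at once.
    import random

    def hop_sequence(seed: int, n_hops: int, n_channels: int = 1000) -> list:
        rng = random.Random(seed)             # real kit: crypto-grade PRNG
        return [rng.randrange(n_channels) for _ in range(n_hops)]

    SHARED_SEED = 0xC0FFEE                    # hypothetical pre-shared secret
    tx_hops = hop_sequence(SHARED_SEED, 5)
    rx_hops = hop_sequence(SHARED_SEED, 5)
    assert tx_hops == rx_hops                 # both ends stay in lockstep
    print(tx_hops)                            # channel indices, in hop order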


> Which, to my knowledge, are only currently generated using a nuclear weapon.

An EMP gun is just a directed energy weapon.[1]

Though you can generate an undirected EMP through various means, they're not as useful.[2]

1. https://en.wikipedia.org/wiki/Directed-energy_weapon

2. https://en.wikipedia.org/wiki/Electromagnetic_pulse#Non-nucl...


I understand that. My point was that there is no practical method yet to generate and appropriately direct the required EM energy at a target, except through a crude weapon like an omnidirectional nuclear device.


> "Which, to my knowledge, are only currently generated using a nuclear weapon. You might be able to create one using solid state gear with enough time, R&D, and power."

Some use a nuclear source, but not all... https://en.wikipedia.org/wiki/Directed-energy_weapon

> "Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof."

Your idea of jamming is too narrow. Think about it like this: even if it's mostly automated, these machines still get sent signals to inform them of changes to their mission. That signal can be blocked and/or modified. Even satellite links can be altered, either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating in.
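To be clear about what "modified" would entail: command uplinks are normally authenticated, so altering a message in transit means defeating a signature first. A minimal sketch with Ed25519 via the Python cryptography package (the key names are hypothetical):

    # Minimal command-authentication sketch (hypothetical names): the drone
    # only acts on messages that verify against the ground station's key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    ground_key = Ed25519PrivateKey.generate()
    drone_trusted_pubkey = ground_key.public_key()   # baked in before launch

    command = b"RETASK grid 6759 5974"
    signature = ground_key.sign(command)

    tampered = b"RETASK grid 0000 0000"              # modified in transit
    try:
        drone_trusted_pubkey.verify(signature, tampered)
    except InvalidSignature:
        print("modified command rejected")           # drone ignores it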


>Even satellite links can be altered, either you hack the satellite system or you intercept the signal at a higher altitude than the receiver is operating in.

Or, in the case of total war, you blow the freaking satellites out of space with missiles. Yes, I know space weapons systems are technically banned, but how long do you think a nation like the US, Russia, India, or China would put up with satellite-controlled autonomous drones running roughshod over their sovereign territory before they just blow the satellites out of space?


Satellites can actually be destroyed using weapons that aren't in space. Back in 1985, the US had an F-15 launch a missile that took out a satellite in orbit. China also destroyed a satellite in 2007 with a ground-launched missile.


Yes, that's certainly possible too.


This certainly brings the (Russian or Chinese?) satellite killer to mind. I wonder what its purpose is.


You energize a coil and then blow it up. Makes an EMP. No nuke needed.

http://science.howstuffworks.com/e-bomb3.htm
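Back-of-envelope version with made-up numbers: in an explosively pumped flux compression generator, total magnetic flux (B times A) is roughly conserved while the explosive collapses the coil, so the field scales up by the area ratio.

    # Illustrative flux-compression arithmetic (numbers are made up):
    # flux conservation B1*A1 ~= B2*A2 means the field grows as the
    # explosive shrinks the coil's cross-section.
    B_seed = 5.0                   # tesla, from the capacitor-charged coil
    r_before = 0.10                # metres, coil radius before detonation
    r_after = 0.01                 # metres, radius at peak compression

    area_ratio = (r_before / r_after) ** 2
    B_peak = B_seed * area_ratio   # ~500 T for a 100x area compression

    print(f"{area_ratio:.0f}x compression -> ~{B_peak:.0f} T peak field")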



>Machine intelligence frowns upon your silly attempts at jamming its uplinks. Predator drones and other autonomous, existing military kit already use high frequency satellite communications techniques that are essentially jam proof.

US military drones may be effectively jam proof, but they are still vulnerable to techniques like GPS spoofing: https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...


From the Wiki article:

American aeronautical engineers dispute this, pointing out that as is the case with the MQ-1 Predator, the MQ-9 Reaper, and the Tomahawk, "GPS is not the primary navigation sensor for the RQ-170... The vehicle gets its flight path orders from an inertial navigation system".[20] Inertial navigation continues to be used on military aircraft despite the advent of GPS because GPS signal jamming and spoofing are relatively simple operations.
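To make the distinction concrete, here's a toy dead-reckoning loop (numbers are made up): an INS integrates its own accelerometer readings, so there is no external signal to jam or spoof. The trade-off is that any sensor bias drifts into growing position error, which is why GPS is normally used to correct it.

    # Toy 1-D inertial dead reckoning (made-up numbers): position comes
    # from integrating onboard accelerometer samples twice, with no
    # external signal involved.
    dt = 0.01                                 # seconds per sample
    samples = [1.0] * 500 + [0.0] * 500       # m/s^2: 5 s burn, 5 s coast

    velocity = position = 0.0
    for a in samples:
        velocity += a * dt                    # integrate acceleration
        position += velocity * dt             # integrate velocity

    print(f"v = {velocity:.1f} m/s, x = {position:.1f} m")
    # A constant bias of 0.01 m/s^2 would already add ~0.5 m of error
    # after 10 s, growing quadratically with time.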


Actually, any use of radio frequency at all is a liability. Propagation CAN be stopped. You just haven't had access to that information, or you have, and are providing disinformation for someone.


What are the resource costs for shielding from EMP vs creating an EMP strong enough to make a difference?


If it's autonomous, it doesn't need signals to operate. As for EMPs, they don't really exist.



Even autonomous military robots need some way to receive new instructions.


Not necessarily, especially if they're cheap enough (and the beautiful thing about software is that its marginal cost is zero). Think of them like bullets or bombs. Then you've eliminated that avenue of defending against them.


The biggest weakness of drones is that they cannot make decisions themselves; they need input, communication channels.

The military advantage of putting autonomous AI on drones is so that they no longer need to communicate with home base. The purpose of the AI is to eliminate the weakness of communications being jammed. The requirement to "receive new instructions" is eliminated.


Then how do you coordinate attacks? Even elite military units, deep behind enemy lines, have the ability to receive new intel. You aren't going to build a swarm of robotic generals, each fighting their own war, with no communication between them.


You're not going to launch these things with the order to "go fight the war" and hope to update them on the specifics later.

You're going to launch them with the latest intelligence manually uploaded on board, for missions of less than 12 hours' duration. It's like firing a missile - you don't need to recall it once you've hit the red button.

So - AI 1 and 2 - drop 2x 500lb bombs on target at 6759 5974 at 03:12 hours. Go.

They complete the mission and head back. Even better, you give them 4x 500lb bombs and they figure out for themselves how many to drop to destroy the target.

Communication worries are overblown, you just have to design around them.
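A sketch of that fire-and-forget tasking model (the field names are invented for illustration): the order is an immutable record uploaded at launch, with nothing to revise mid-flight.

    # Hypothetical fire-and-forget tasking record: frozen means no field
    # can be changed after launch, mirroring "no recall after the button".
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MissionOrder:
        unit_id: str               # e.g. "AI-1"
        grid_reference: str        # target, e.g. "6759 5974"
        time_on_target: str        # e.g. "03:12"
        ordnance_lb: int           # total carried, e.g. 4 x 500 lb
        max_duration_h: int = 12   # endurance cap, then return to base

    order = MissionOrder("AI-1", "6759 5974", "03:12", ordnance_lb=2000)
    # order.grid_reference = "0000 0000"  # would raise FrozenInstanceError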


What if you want to call the mission off? Let's say the enemy gets a few key hostages, and holds them in this building. They'll be killed by their own side.


Revocable weapons are weak; irrevocable weapons are strong. It's the same logic as mutually assured destruction, and evolutionarily similar to blind rage.

FWIW I believe autonomous weapons are inevitable because drones cannot be used against technologically sophisticated enemies that can jam them. The hard requirement for continuous communication is exactly what autonomy is eliminating.


Oh well. Such is war.

You send ten more drones on different missions to some daycare or something to "punish" the enemy for breaking the Geneva Convention.

This is pretty much recorded history here. At some point, you are pulling the trigger. And yeah, you make mistakes.


The enemy didn't necessarily break the Geneva Convention.

Pulling the trigger far in advance of the resultant action increases the risk of disaster, a disaster that could've been averted based on the richer data available closer to the scheduled time.


They didn't have to. We're just going to say they did anyway because they are evil bastards (TM) and we can't possibly be anything but the good guys.

This is exactly the same scenario as a current ballistic missile launch. Those systems have no safeguards that could be intercepted and used to interfere with the weapon.


Send more drones to kill your own drones? If the drones can be fed new instructions in the field, then the enemy can feed them fake instructions to shut down.



Wouldn't it be feasible to build an autonomous weapon that doesn't target people, to fight the autonomous weapons that do target people? I would assume at that point whichever AI has better hardware and better algorithms would win, right?


Possibly? Anything past a few years out in tech is hard to predict. I wouldn't outright dismiss the idea, but it's a crapshoot what machine intelligence is/isn't going to be able to do.

I'm an educated, practical tech professional, and machine intelligence worries me more than any other technology out there (except possibly a virus paired with CRISPR-Cas9 for targeted, precise genome modification driven through a species' population).


Biological weapons using CRISPR to target genetic markers are much more worrying to me, because once released into the wild there is zero control over them.


Yee-ouch. That's a worrying thought. It's the Holocaust on steroids. I'd rather face a nuclear-armed foe than one that can eradicate my race.


I'm way more worried that society will fall apart for reasons not very related to technology.


Why would you design something with such an obvious flaw as not being useful against human soldiers?


If the defensive weapon is initially stationary it could win by being harder to detect.


Don't overlook the covert soldier, blending in with the population, taking a rifle to the people building/launching/directing those autonomous weapons and to those they care for. One guy infiltrating the homeland with a US$100 rifle and a case of ammo (about the size of a shoebox) can do enormous damage against an enemy obsessed with >$100,000 drones operated by >$10,000,000 staff & facilities.

(That is one of several sufficient reasons why many Americans are obsessed with guns & self-defense. We predict, and see, increasing "spontaneous/lone-wolf" mainland attacks.)


Elaborate?

Drone command-and-control facilities would surely be protected from a lone gunman. More to the point, how would guns & self-defense protect against a targeted agent taking down someone important (who presumably already has protection that would need to be circumvented)?

I'm failing to see the common area between targeted spec-ops style missions (and protection against those) and home/civil defense.


Actually, it's ridiculously easy to simply ship dormant AI into the country in boxes, have them establish an operational state once they're here, and have them sow the havoc you're looking to create.

Homeland C&C facilities are certainly defended from terrorist actions, but less so from 20-30 kamikaze drones launched from within the victim country.

When you fight someone, the idea is to use their strength against them - and the strength of the West is economic trade. All the security measures in the world won't stop FedEx. And if they do, well, in a way you've already won.


Drone defense is indeed a hot thing right now, but it's not fundamentally different from protecting yourself from any other new type of threat. There are measures and there are countermeasures (http://petapixel.com/2015/07/23/anti-drone-systems-are-start...). At the point of (strong, general) AI, though, all bets are off.

Warfare is becoming more and more asymmetric and nuanced, that's for sure. I'd posit that some form of media training, making one less vulnerable to, say, information warfare (https://en.wikipedia.org/wiki/Information_warfare), would do more good at home than rifles and bullets, though.


I don't disagree at all, but I'd add that at the point of "strong, general AI" everything is off the table. Never mind warfare.

At that point (and I agree it will be on us before we are ready) it's a whole new world in so many ways.


You're way overthinking my point.

A plane ticket and $500 can go a long way when your enemy's homeland is wantonly undefended by law & policy.


Terror targets are basically useless in a real conflict, though. A determined foe will simply ignore them.

My point is that for some shipping fees, you have a realistic and effective way of substantially reducing your enemy's ability to fight the war you are engaged in.

That's a real vulnerability that can be exploited.


Staff leave facilities to go home. I really don't want to write out the math on that one.


Think about the disruptions caused by Chris Dorner and Eric Frein.

In each case, a lone operator with an axe to grind caused some serious problems for law enforcement.

Any kind of external support would have made them much more destructive.


Tactically, to survive the immediate onslaught, perhaps; but strategically you don't fight autonomous weapons by attacking the weapons, but by attacking the people controlling them. One minute after the nuclear/neutron/EMP bomb has detonated, the next wave of killer robots is released from the hardened bunkers by the remote staff, and you're back where you started; it's the remote staff - and anyone/everything they care about - who must be taken down until surrender.

An "open borders" policy, tolerating & assimilating anyone who brazenly bypasses the checkpoints, is a gaping security void with a giant "STRIKE HERE" sign in flashing neon. [I don't say that to start that argument, but to point to the stark reality of the parent post's premise.]


> strategically you don't fight autonomous weapons by attacking the weapons, but by attacking the people controlling them.

They are autonomous, so human control might not be a factor.


They still need to be fed objectives/missions or something. Hopefully you are not suggesting to release robotic serial killers with no strategic purpose, are you?


> you are not suggesting to release robotic serial killers with no strategic purpose, are you?

It won't be my idea, but someone may do it. Consider someone without the resources or motivation to code the decision-making component, but who can code 'shoot every living thing' and drop the bot into enemy territory (preferably far from their own territory).

Also, to some degree the AI can generate its own objectives. And IIRC, one objective of autonomy is for the AI to be able to identify and attack unforeseen targets.


The cost of biological weapons is likely to become marginal once the know-how is public, and with things like in-home sequencers we're well on the way towards home labs being feasible and cheap. The limiting factor right now might be ordering the necessary chemicals/cultures, but those too will soon be easy to manufacture at home.


Other hunter AIs, good ol' flak cannons, something nano that just assimilates metal to replicate, hacking into their network and genociding the nation that made them... the list is long and nasty.



