I think "Responsible" raises some hard questions. Today, any weapons use by a robot might require a responsible human operator to first (remotely) pull the trigger. But what if an adversary allows their robots to fire without waiting for human permission? The first side to do so would gain a critical advantage, and we cannot have an autonomy gap.
Some have raised concerns that, in the extreme case, this could lead to a world of machines anonymously killing without attribution. If you haven't seen it, watch Stuart Russell's compelling short film "Slaughterbots".
And somehow these rules tend to restrict the things that act as force multipliers for smaller forces against larger ones, or that would make it very costly for a larger country to attack a smaller one. They protect the incumbents. It makes sense why big militaries would want rules like this: they make it much less costly for big countries to wage war against smaller ones. Russia didn't want Ukraine to have nukes. A few decades later, Russia invaded Ukraine.
It is okay for a US drone to strike a target and kill civilians. The civilians will just be defined as "enemy combatants" and the world moves on. However, if people from that same place somehow manage to kill the drone operator then that would be a terror attack.
The most concerned party at that time was the US. Having multiple nuclear armed ex-USSR states was contrary to their interests.
The better example is the US not wanting Gaddafi to have nukes and later invading Libya.
What's relevant, though, is that most soldiers obeyed the orders and committed the war crimes, even in the US military, which has a strong moral ethos (or likes to portray itself that way, which for the common soldier amounts to the same thing). But what about all the other, smaller countries?
I don't feel we've become profoundly nicer since then.
We've since developed PGMs which allow us to minimize collateral damage more than ever before. We can destroy strategic targets of military interest with pinpoint accuracy. People that commit war crimes are reported by their fellow soldiers and justice is (usually) served. Sometimes war criminals are pardoned even after being turned in (like Gallagher), but this is rare and usually results in national outrage.
Committing war crimes has effectively been the national defense policy of last resort since WW2. That's what the nukes (and formerly chemical and biological weapons) are for. The implicit threat is that the US will massacre your civilian population.
And I don't mean that as any special criticism of the US. Most countries have a similar attitude of the rules of war going out the window when things get bad enough.
I'm not sure about the "square full" bit.
So... how do you explain, to give but one example, the hundreds of thousands the US killed in Iraq?
If insurgents hide and fight among civilians, civilians are going to die in the process of destroying the insurgents. It's like cancer - you can't kill cancer without killing tons of healthy innocent cells around it.
I don't necessarily agree with the US's prolonged occupation in the middle east, but it's highly disingenuous to imply US military leaders are commanding their soldiers and pilots to intentionally kill civilians.
A landmine is indiscriminate; by that logic, anything with a triggerable fuze would be autonomous (even a snare trap would be). The difference would be if the landmine chose to explode based on some other factor.
Autonomy and intelligence are not the same thing.
Autonomy gets truly dangerous because you don't need to create death zones and no man's land. These weapons are not a less violent version of just bombing the area: they can run their own considerations and decide by themselves whom to kill, and because of that, their use outside of active front lines will be considered. With that, the people deploying them give up the decision and delegate it to an algorithm, or worse yet, a trained black box. The reality will be completely unaccountable killer bots flying over countries that can't defend themselves against foreign aggression.
This stuff has absolutely horrible potential, and I find it prudent to shame and cut out anyone with a sufficient lack of spine to work in this area. As unpopular as it is to consider the impact of the technology you are developing, it is taught in a CS degree for a reason. This work has the potential to screw over humanity for a long while to come.
There have already been cases of "autonomous" targeting, such as in 1982 when Exocet missiles fired by Argentine aircraft at a British warship selected a cargo ship instead (possibly due to countermeasures used by the warship). In this case, the weapon hit a target not intended by the firing unit. Of course, unguided weapons do this all the time...
The combination of the two is interesting. The Mark 60 CAPTOR is a naval mine that launches an acoustically guided Mark 46 torpedo.
What if a convoy passes a group of nuns? What if the nuns "fire without waiting"? Should we let the nuns have the "critical advantage" of first shot? Well, yes, it'd be absurd to do otherwise.
If some other country wants to develop auto kill bots that commit war crimes, I don't think we have an imperative to match them.
Tolerant rules of engagement are feasible because reactions are "similar" in timescale. Enemy fires on you, you fire back, etc.
But there are many developments that make that system less tenable.
Firstly, increased weapon lethality. When the first shot has a 90%+ kill probability (say, putting a Hellfire missile on a civilian vehicle), can you afford to let the enemy take the first shot? Or do your rules account for reality and back up a step? Maybe now radar detection, firing preparations, or even entering a given airspace are sufficiently provocative to justify a counterattack.
In literature and experience, this is casualty modeling moving from attrition (some constant, scaled by time) to volley (majority of casualties caused in bursts, at specific points in time).
Secondly, and this specific issue: reaction time between autonomous vs manual systems. If an opponent is using autonomous targeting and engagement, and you're using autonomous targeting and manual engagement, there may be some orders of magnitude difference in reaction time. Coupled with the first point, that may mean your entire force is wiped out before you have a chance to react.
A question for the AI researchers and engineers, on the topic of _Traceability_. As someone outside the field, this is something that has puzzled me:
(Broad assumptions) Assuming a 'neural network' is a black-box probabilistic model, how do we _prove_, in the context of _safety-critical_ systems, that it will behave in a known, deterministic, and traceable way for all known cases?
I would be curious to understand the processes/methods that people use in ADAS systems or medical imaging (and now defense applications as well) to provide 'verification' evidence for such a system. (Focusing more on verification/provability from a regulatory standpoint.)
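For what it's worth, one family of techniques used to certify properties of networks over *all* inputs in a set (rather than testing samples) is interval bound propagation: push an input box through each layer with interval arithmetic and obtain sound output bounds. A minimal sketch with a toy 2-2-1 ReLU network and hypothetical fixed weights (not any real system's):

```python
def interval_affine(lo, hi, weights, biases):
    # Propagate an input box through y = Wx + b with interval arithmetic:
    # a positive weight takes the lower bound from lo, a negative one from hi.
    out_lo, out_hi = [], []
    for row, b in zip(weights, biases):
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Toy 2-2-1 network with made-up weights, purely for illustration.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

# Claim to certify: for EVERY input in the box [0,1] x [0,1],
# the output stays inside the computed bounds.
lo, hi = interval_affine([0.0, 0.0], [1.0, 1.0], W1, b1)
lo, hi = interval_relu(lo, hi)
out_lo, out_hi = interval_affine(lo, hi, W2, b2)
# Here out_lo[0] = 0.0 and out_hi[0] = 2.0: a sound (if loose) guarantee.
```

The bounds can be loose, but they are a mathematical guarantee over the whole input set, which is closer to the kind of evidence a regulator can act on than test-set accuracy.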
Also, remember that AI includes more than NNs. You can use other models, such as Linear Regression or Random Forests, which are perfectly explainable.
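The point about simpler models being auditable can be made concrete: an ordinary least-squares line fit is fully inspectable, because the entire "model" is two coefficients whose meaning a reviewer can read directly. A closed-form sketch on toy data (no real domain assumed):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, in closed form.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Toy data generated from y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
# The recovered model is just "output rises by a per unit input, offset b" —
# every prediction is traceable by hand.
```

That transparency is exactly what a black-box NN gives up, which is why the regulatory question is harder there.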
My question was aimed more at people who are actively using NNs as a solution and _have already_ shipped products in a 'regulated' industry.
The products are obviously out there, so I'm confused about what evidence a team can provide at this point in time (with current NN implementations) to get a product using NNs (with non-deterministic uncertainty) approved by a regulatory body (automotive, medical, etc.).
Examples that jump to mind are Tesla's stereo-vision subsystems (I am assuming NNs are used), or NN-based medical imaging classifier software (NNs instead of clustering or SVM techniques).
See slide 11 for some of the technical approaches to making AI more interpretable.
Some of these schemes are motivated by rather robust theory. Gaussian Processes are a good starting point to learn more.
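Part of the appeal of Gaussian Processes in this context is calibrated uncertainty: far from the training data, the predicted variance climbs back toward the prior, which is exactly the "I don't know" signal a safety case wants. A minimal sketch with two hypothetical observations, an RBF kernel, and a closed-form 2x2 inverse (all numbers made up for illustration):

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel: similarity decays with distance.
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

# Two toy observations.
X, y = [0.0, 1.0], [0.0, 1.0]
noise = 1e-6  # small jitter for numerical stability

# 2x2 kernel matrix K + noise*I, inverted in closed form.
a = rbf(X[0], X[0]) + noise
b = rbf(X[0], X[1])
d = rbf(X[1], X[1]) + noise
det = a * d - b * b
Kinv = [[d / det, -b / det], [-b / det, a / det]]

def predict(x_star):
    k = [rbf(x_star, X[0]), rbf(x_star, X[1])]
    # w = K^{-1} k
    w = [Kinv[0][0] * k[0] + Kinv[0][1] * k[1],
         Kinv[1][0] * k[0] + Kinv[1][1] * k[1]]
    mean = w[0] * y[0] + w[1] * y[1]          # posterior mean k^T K^{-1} y
    var = rbf(x_star, x_star) - (w[0] * k[0] + w[1] * k[1])  # posterior variance
    return mean, var

m_near, v_near = predict(0.0)  # at a training point: variance near zero
m_far, v_far = predict(5.0)    # far from data: variance climbs back toward 1
```

A system built this way can be gated to defer to a human (or fail safe) whenever the predicted variance exceeds a threshold, which is a more defensible story for a regulator than a bare point estimate.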
It would (I assume) reduce instances of non-desirable behaviour. But how would it improve the evidence I am able to provide to a regulating authority?
For autonomous vehicles, the 'proof' to the regulator can be the signature of the engineer. Fortunately, some engineers have higher standards. I think in the future the training data will come from simulations that create curriculum-learning datasets, so that the noise characteristics are perfectly known. The ML algorithms can be written with dependent types, so that you can prove your code does what you think it does.
Another challenge is inductive bias, which is a lot like confirmation bias. This bias comes from choosing an ML algorithm that is sensitive to certain information and blind to other information. You need to navigate the space of all possible functions (a Hilbert space) to overcome it. Fortunately, only a small corner of this space is relevant to our universe, and Tensor Networks seem to address this problem.
It looks like a lot of work to put these pieces together but at least it looks like the problem is tractable.
You're making the bold assumption that the robots will actually work, instead of wasting ammunition on all the wrong targets, including friendlies. I'm pretty sure the first side to allow fully autonomous weapons will do so before the technology is ready (humans are never patient enough to wait until the technology is ready) and end up regretting it.
I could easily imagine the US military deploying autonomous combat bots to some middle-eastern country unaccompanied by human troops. They would care approximately zero about wasted ammo, and we already know from our drone strike policy what the military thinks about accidentally killing non-combatants. As long as there are no Americans around to get killed by rogue bots, it doesn't really matter how much collateral damage they cause, within reason.
I'm pretty skeptical the U.S. military is concerned about wasted ammunition. They waste so much on training and preparation that anything wasted in combat is minimal.
> I'm pretty sure the first side to allow fully autonomous weapons will do so before the technology is ready
It doesn't have to be autonomous to be deployed before it's ready. Iran managed to shoot down a civilian aircraft based on human error. The U.S. did the same decades earlier. Clearly there were either technology or human-process problems, and I suspect autonomous military robots will follow this trend.
They may not care about the ammunition itself so much but they care about budget and combat effectiveness. It's not uncommon to have training be missed due to ammunition budget shortfalls and a robot without ammunition isn't terribly effective.
It's not something that's unique to autonomous systems, though I suppose it might terrify some people a bit more than when it's humans doing it.