And then one day it isn't a test anymore. The real strike comes. It isn't a nuke or an invasion. There's no need. Those things are as outdated as tying sharp stones to sticks. The AI knows just where to hit, in just the right way, and like a Vulcan nerve pinch, one touch sets the whole system collapsing in on itself. It will take humans a decade to even fully understand what happened. If they have that luxury.
The atom bomb changed warfare via incomprehensible destruction. AI will change warfare via incomprehensible strategy.
Attributing magical pattern-finding, expert-beating behaviour to AI is common among those who don't know much about it or how it works, or who haven't used it to solve problems. I don't want to guess at your experience using AI tools to solve problems (maybe you have tons!), but in my own modest experience, 'AI' is not very good at the things you're painting a picture of.
AI in most contexts today is not a way to replace a scarce, expensive human expert with a cheap, multipliable mechanical expert, let alone a cheap, multipliable mechanical super-expert. It is a way to replace a scarce, expensive human expert with a cheap, multipliable mechanical 10-year-old. Anyone can get AI to do what a thousand reasonably trained 10-year-olds can do, but it cannot out-see or out-intuit most experts at most tasks, especially many-factor soft tasks, because humans can quickly sort out what to pay attention to among many diverse signal types. An expert comedian is still a lot better at looking at, say, a tag cloud and coming up with a great joke than GPT-3 is.
In some tasks, like chess or go, AI can evaluate many, many paths, store the results, and train on specific, measurable outcomes. That is what lets it outpace experts in those domains, and it may apply to many other domains in the future.
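To make that concrete, here is a minimal toy sketch of my own (not from any real system) of why fully codified games reward this approach: simulation is free, and every playout ends in a measurable outcome you can score against. Tic-tac-toe stands in for the far bigger games:

    # Toy illustration: score each move by simulating many random playouts and
    # counting wins -- possible only because the game is fully codified, cheap
    # to simulate, and ends in a measurable outcome.
    import random

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def moves(board):
        return [i for i, cell in enumerate(board) if cell == "."]

    def random_playout(board, player):
        """Play random moves to the end; return 'X', 'O', or None for a draw."""
        board = board[:]
        while True:
            w = winner(board)
            if w or not moves(board):
                return w
            board[random.choice(moves(board))] = player
            player = "O" if player == "X" else "X"

    def best_move(board, player, n_playouts=2000):
        """Rank each legal move by its win rate over many cheap simulations."""
        def score(m):
            trial = board[:]
            trial[m] = player
            opponent = "O" if player == "X" else "X"
            return sum(random_playout(trial, opponent) == player
                       for _ in range(n_playouts)) / n_playouts
        return max(moves(board), key=score)

    # X has two in a row; the playout statistics find the winning move, index 2.
    print(best_move(list("XX.OO...."), "X"))

Everything above depends on a cheap, faithful simulator and an unambiguous win/loss signal, which is exactly what the messier domains below lack.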
Is propaganda like those domains? Are exposed networks like those domains? Is trading bitcoin like those domains? I agree that it may be possible for AI to do very surprising, clever things in the future; it may even be likely that it will. But today, and for the near future, what you're discussing is very far into the realm of fun science fiction.
They just look like people who are really stupid in some ways and really good in others.
Since we cannot meaningfully codify, much less rapidly simulate, the potential actions and real consequences in this supposed global all-fronts war, we cannot train an algorithm on it.
I think AI rule of much of our top-level, super-important decision making is either a dead end that will never work, or inevitable and something we won't have any practical choice to avoid (barring, say, a real and effective world government), with little room for anything in between.
And if it pans out, I expect it to be much as you wrote: the decisions will seem nonsensical much of the time, but those who second-guess the AI will be at a disadvantage against those who do not.
It is a lot of nothing. You have to conflate science fiction with reality because reality is a non-story here.
I worry we have become dangerously complacent about nuclear war, or deluded that it is impossible. I mean, if you read anything about the early 60s, you almost have to conclude we just got lucky. So many chances for people to do something stupid.
Imagine if military brass in the early 60s had thought the Soviets had developed science-fiction "AI" the way people view "AI" today. Defcon 1, no doubt.
But for me it breaks at a crucial point: what exactly would a strike look like that is far more sophisticated than a nuke or an invasion?
I think your comment is giving AI a life of its own, whereas I believe AI is just a continuation of what we have seen for the past 70 years:
Computational power will increasingly aid existing attack vectors: the nuclear strike will be better planned, the invasion much more effective, all through better computer programs that some happen to call AI these days.
If (when) that level of AI becomes feasible, it’s going to be created in the private sector, and they’re not going to be interested in toppling nations.
Although their actions with that AI could very well lead to the same effect.
It's not that they don't know the rules, it's that they don't even know they are playing.
An AI that could do as well as you described...
On the other hand, if strong/general AI pops into existence, we're at a singularity point anyway.
> sift through vast piles of data, find previously unknown patterns, and then exploit them.
is much harder to pull off successfully than you think. Having worked in the weeds on many projects of this nature, I can say that in practice these models are wildly less useful and powerful than the people buying their output believe.
For an individual case, you can exploit this data pretty well to find out a shocking amount of personal information about a person. Doing it at scale in a way that doesn't converge into noise is much harder.
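To put rough numbers on the "converges into noise" point, here is the classic base-rate arithmetic (the 99% accuracy and 1-in-10,000 base rate are illustrative assumptions of mine, not real figures):

    # Why a pattern-finder that shines on individual cases drowns at scale.
    # All numbers below are made up for illustration.
    accuracy = 0.99            # the model is right 99% of the time
    base_rate = 1 / 10_000     # the pattern it hunts for is genuinely rare
    population = 10_000_000    # records scanned

    true_hits = population * base_rate * accuracy
    false_hits = population * (1 - base_rate) * (1 - accuracy)
    precision = true_hits / (true_hits + false_hits)

    print(f"true hits:  {true_hits:,.0f}")    # ~990
    print(f"false hits: {false_hits:,.0f}")   # ~99,990
    print(f"precision:  {precision:.1%}")     # ~1% -- almost every flag is noise

At individual scale an analyst can weed the false positives out by hand; at population scale they swamp the signal.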
I also believe this strategy is not only about conducting occasional violent hot wars but very much about conducting a non-violent, day-to-day cold war which is largely invisible to the public (both China's and the US's). China has been waging such a "cold war" in the US for over a decade through its many break-ins into US computers to steal IP, especially from high-tech and military/intelligence sources. In the last 5 years, it has become more active in international PR initiatives to promote its economic and political interests. China's AI and data-driven social-engineering techniques will surely play an increasing role in daily practices like these.
Info-based warfare isn't about killing the enemy as much as weakening them, as Sun Tzu famously wrote, and I believe that's a view that China embraces. If you're patient, you don't need physical war. You can win simply by helping your enemy become more of a dysfunctional loser.