Hacker News
Germany warns: AI arms race already underway (dw.com)
61 points by Tomte 10 days ago | hide | past | favorite | 31 comments

The interesting (and sinister) thing to consider is how wild this will look if fully built out. We're not just talking about some machine intelligence shuffling troops around in lieu of a general. The advantage of a computer is its vision: it can see the "playing field," the reality it's fighting in, much more broadly and deeply than a human can. It can hold more things in its mind at once. AI warfare isn't computers flying some drones or robotic tanks. AI warfare probably looks like a bunch of small, odd occurrences as both sides probe each other's weaknesses in economics, infrastructure and politics, circling each other like two knife fighters. A bunch of weird, disjointed bursts of propaganda crop up across various media channels, fragmented and often seemingly incoherent, but perfectly targeted. Small, isolated incidents of problems at important institutions start cropping up, shutting them down for weeks or months: Hospitals, minor government agencies, oil pipelines, food processing. Never enough to look like a coherent attack. Knife jabs and dodges, the fighters probing, learning, a/b testing.

And then one day it isn't a test anymore. The real strike comes. It isn't a nuke or an invasion. There's no need. Those things are as outdated as tying sharp stones to sticks. The AI knows just where to hit, in just the right way, and like a Vulcan death grip one pinch sets the whole system collapsing on itself. It will take humans a decade to even fully understand what happened. If they have that luxury.

The atom bomb changed warfare via incomprehensible destruction. AI will change warfare via incomprehensible strategy.


That's an interesting idea about a type of AI that we do not have yet.

Attributing magical pattern-finding expert-beating behaviour to AI is common among those who do not know much about it or how it works, or who haven't used it to solve problems. I don't want to guess at your experience using AI tools to solve problems (maybe you have tons!), but from my own modest experience, 'AI' is not very good at the things you are painting a picture about.

AI in most contexts today is not a way to replace a scarce, expensive human expert with a cheap, multipliable mechanical expert, let alone a cheap, multipliable mechanical super-expert. It is a way to replace a scarce, expensive human expert with a cheap, multipliable mechanical 10-year-old. Anyone can get AI to do what a thousand reasonably trained 10-year-olds can do, but it cannot out-see or out-intuit most experts at most tasks, especially many-factor soft tasks, because humans can quickly sort through what to pay attention to among many diverse signal types. An expert comedian is still a lot better at looking at, say, a tag cloud and coming up with a great joke than GPT-3 is.

In some tasks, like chess or go, AI can evaluate many many paths and store the results and train on the specific, measurable outcomes. This is valuable to it, and allows it to outpace experts in these domains. It may apply to many other domains in the future.

Is propaganda like those domains? Are exposed networks like those domains? Is trading bitcoin like those domains? I agree that it may be possible for AI to do very surprising, clever things in the future -- it maybe even is likely that it will. But today, and for the near future, what you're discussing is very far into the realm of fun science fiction.


Thinking of AI like this completely misunderstands complexity. AI isn’t going to be able to predict the weather two weeks from now, let alone society at the level you’re suggesting. It’s going to use the same old tools, at best significantly more efficiently.

Watching superhuman AI play games doesn’t look like this.

They just look like people that are really stupid in some ways and really good in others.


I suspect that this is because games have fixed rules: you can't use a hammer to smash the enemy's chess king. Reality is a lot more flexible.

Fixed rules are the reason the AIs do well in games. You can’t train an algorithm to think outside the box like that, because the box is how it’s trained.

Since we cannot meaningfully codify, much less rapidly simulate, the potential actions and real consequences in this supposed global all-fronts war, we cannot train an algorithm on it.


My expectation is that if it turns out to be practically possible and significantly beneficial to turn lots of strategic military and business decision making over to AI, it soon won't be a choice not to, thanks to competition.

I think AI rule over much of our top-level, super-important decision making is either a dead end that will never work, or inevitable and something we won't have the practical choice to avoid (barring, say, a real and effective world government), with little room for anything in between.

And if it pans out, I expect it to be much as you wrote: the decisions will seem nonsensical much of the time, but those who second-guess the AI will be at a disadvantage against those who do not.


This sounds a lot like the Chinese idea of unrestricted warfare [1] but conducted by AI.

[1] https://en.wikipedia.org/wiki/Unrestricted_Warfare


I would read this novel.

Instead of "AI Warfare" what if we come back to reality and say machine learning driven warfare?

It is a lot of nothing. You have to conflate science fiction with reality because reality is a non-story here.

I worry we have become dangerously complacent or deluded that nuclear war is impossible. I mean if you read anything about the early 60s you almost have to conclude we just got lucky. So many chances for people to do something stupid.

Imagine if military brass in the early 60s thought the Soviets had developed science fiction "AI" the way people view "AI" today? Defcon 1 no doubt.


This reads really nicely. It’s a nice story.

But for me it breaks at a crucial point: what exactly would a strike look like that is far more sophisticated than a nuke or an invasion?

I think your comment is giving AI a life of its own. Whereas I believe that AI is just a continuation of what we have seen for the past 70 years:

Computational power will increasingly aid existing attack vectors. The nuclear strike will be better planned. The invasion much more effective. Through better computer programs, that some happen to call AI these days.


For what it’s worth, gameplay AI in real-time strategy games beats humans by performing more actions per minute than is humanly possible. You can find Starcraft 2 AIs that perfectly control every unit, while a human can only dedicate time to controlling a few groups of units. Nothing intelligent occurs; the human loses because they simply fall behind.

If such an AI were out there, somebody would’ve already made a trillion dollars on the stock market.

If (when) that level of AI becomes feasible, it’s going to be created in the private sector, and they’re not going to be interested in toppling nations.

Although their actions with that AI could very well lead to the same effect.


Yeah, the consequence of large intelligence differentials in war is that the lower intelligence doesn't even know it is at war.

It's not that they don't know the rules, it's that they don't even know they are playing.


You know what could be actually bad?

An AI that merely thinks it could do as well as you described...


I don't think narrow AI is really capable of much of that.

On the other hand if strong/general AI pops into existence we're at a singularity point anyway.


Sometimes I feel like it's already happened.

I think fighting over territory is generally a lost cause these days, since the best resources are people. First-world nations have realized that; now they plunder people instead. Unlike extractible natural resources, which can be abundant, brain drain is close to irreversible. Syria will never regain its lost skilled people, and likewise various African countries. In that sense, the best AI war you can fight is to spread disinformation and sow distrust.

Is there a non-vacuous example of AI being used in this context? Is it matrix multiplication, linear regression, same old control theory, or deep learning?

If you want a sense of what real AI warfare will look like, look at Cambridge Analytica. Extremely primitive, but still highly effective. The key concept is the ability to sift through vast piles of data, find previously unknown patterns, and then exploit them. Humans have to make one ad for the whole nation; AI can micro-target hundreds or thousands of communities with custom propaganda, often even contradicting itself, achieving a much more powerful result. Now apply this approach to broader conflicts.

There is no evidence that Cambridge Analytica was "highly effective". How many voters did they convince to change their votes? Please provide a number backed by reliable data.

I've worked in ad tech for a while now at many different places. While the data and privacy issues that abound are mind-blowing, this process:

> sift through vast piles of data, find previously unknown patterns, and then exploit them.

is much harder to pull off successfully than you think. Having worked in the weeds on many projects of this nature, I can say that in practice these models are wildly less useful and powerful than the people buying their output believe.

For an individual case, you can exploit this data pretty well to find out a shocking amount of personal information about a person. Doing this at scale, in a way that doesn't converge into noise, is much harder.


Yeah, I think the point of China's term "Informationized Warfare" is to manipulate information flow and content to serve your interests more than your enemy's. If you can distort the perspective of an opponent country's decision makers or twist popular opinion there to foment internal dissent (as Russia famously has done in the US in the past decade w/o using any AI, so far), then you win.

I also believe this strategy is not only about conducting occasional violent hot wars but very much about conducting a non-violent day-to-day cold war which is largely invisible to the public (both China's and the US's). China has been waging such a "cold war" in the US for over a decade through its many break-ins into US computers to steal IP, especially from high-tech and military/intelligence sources. In the last five years, it has become more active in international PR initiatives to promote its economic and political interests. China's AI and data-driven social engineering techniques will surely play an increasing role in daily practices like these.

Info-based warfare isn't about killing the enemy as much as weakening them, as Sun Tzu famously wrote, and I believe that's a view that China embraces. If you're patient, you don't need physical war. You can win simply by helping your enemy become more of a dysfunctional loser.


Cambridge Analytica was largely a nothing burger [0].

[0] https://twitter.com/nickconfessore/status/131385399616835174...


Was Cambridge Analytica highly effective? My understanding was that it was only used by Cruz in the 2016 primaries. In the main election, the Trump campaign found RNC’s data to be more useful than CA’s.

When someone finally rules the world, what will they do next?

I think what tends to happen is what we saw happen in Soviet Russia and Maoist China. "They," at the top, think they know what's best for people (hoi polloi) and start "designing" for that "utopian" society, history be damned.

Make the proles vanish underground and enjoy empty beaches, clean air and unspoiled nature.

https://en.wikipedia.org/wiki/The_Penultimate_Truth


Tighten control, extract wealth and live the best life, direct humanity towards what it thinks are valuable endeavors.

solar system



It’s data on the enemy you need for warfare. Only dictators need to collect data on their own citizens to win a war.


