Pretty much the perfect use case for AI. Take a repetitive job (visually scanning images) that was probably done by dozens of analysts before, automate it, and have those analysts be 100x more productive checking the results. If I were a militant I'd be investing in circus tents or some other completely out-of-distribution type of camouflage.
The article seems to slide seamlessly from "computer vision" to "AI", which has become an extremely loaded term over the last few years. The tech could basically be: align two photos taken a day/week/month apart, take the difference, and repeat that over tens of thousands of square kilometers.
I'd call that "computer vision". If I explain that to a layperson, I might accidentally say AI, then they think ChatGPT, ..., Skynet, Terminator.
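The align-and-difference approach described above can be sketched in a few lines. This is a toy illustration with synthetic arrays, not anything the article describes; real pipelines would also need image registration, radiometric normalization, and cloud masking, all of which are hand-waved away here.

```python
import numpy as np

def changed_pixels(before: np.ndarray, after: np.ndarray,
                   threshold: int = 30) -> np.ndarray:
    """Boolean mask of pixels whose brightness changed by more than `threshold`.

    Assumes the two grayscale images are already co-registered
    (same location, same alignment, same scale).
    """
    # Cast to a signed type so the subtraction can't wrap around.
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    return diff > threshold

# Two fake 4x4 "satellite" tiles; exactly one pixel changes sharply,
# standing in for e.g. a new structure appearing between passes.
before = np.full((4, 4), 100, dtype=np.uint8)
after = before.copy()
after[2, 3] = 200

mask = changed_pixels(before, after)
print(int(mask.sum()))  # prints 1: one flagged pixel
```

Scaling this to tens of thousands of square kilometers is then mostly a tiling and bookkeeping problem, which is exactly the kind of repetitive scan that used to fall to human analysts.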
Disregarding fictional things... The idea that computer vision and LLMs are different categories of thing is false; either one being called AI is a result of marketing.
AGI is distinct from AI in that one is general and one is not. If you create an AI that can discover new mathematics but it can't write a paper about it, it's not AGI.
We have AI that can and has produced new mathematics. I don't understand what you think AI is. Machine learning enables computers to make decisions about things without explicit programming, and it is done by learning from data. It is a sort of intelligence, if not to the degree that we want (yet).
What you call AGI now is what I used to call AI, and ML was just ML and statistics to me. But now AI can be everything from hidden Markov models to the computer player in a game, as long as it makes a decision or prediction in some kind of process, no matter how dumb. I gave up arguing about it long ago.
Which makes AI analytics on vegetation indexes, thermal imaging from dwell sensors, EM overpass surveys, and foot|vehicle traffic monitoring all the more attractive.
When a battalion disappears into an outside toilet sitting at the endpoint of a line of changed vegetation that glows hot at night, chances are something's up^H^H down.
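One of the vegetation indexes mentioned above, NDVI, is simple enough to show concretely. It's computed per pixel from near-infrared and red reflectance as (NIR − Red) / (NIR + Red): values near +1 suggest dense healthy vegetation, and a sustained drop at a fixed location over repeated passes is the kind of anomaly (trampled ground, cleared brush) a monitoring system would flag. The reflectance values below are made up for illustration.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # Tiny epsilon avoids divide-by-zero over water/shadow pixels
    # where both bands can read near zero.
    return (nir - red) / (nir + red + 1e-9)

# Fabricated 2x2 reflectance tiles: top row healthy vegetation,
# bottom-left bare soil, bottom-right sparse cover.
nir = np.array([[0.50, 0.60], [0.10, 0.40]])
red = np.array([[0.10, 0.10], [0.10, 0.30]])

print(np.round(ndvi(nir, red), 2))  # [[0.67 0.71], [0.   0.14]]
```

Differencing NDVI maps between passes, rather than raw imagery, is one common way to separate "vegetation changed" from "lighting changed".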
This is a horrible idea. When presented with a potential target, you're already biased to see an excuse to take human life. What happens when a person gets burnt out?
>last year, in which Centcom experimented with an AI recommendation engine, showed such systems “frequently fell short” of humans in proposing the order of attack or the best weapon to use.
I wonder if it's stupid risky to actually train AI on actual weapon behaviours, let alone all of them to optimize coordination. That seems like the crown jewel of leaks to infer actual US capabilities. It's one thing to embed a model with info about adversary targeting, but once you include your own capabilities to engage those targets, which I presume you would have to, then those models also become a huge liability.
So when the AI picks a school for an airstrike and hundreds of kids are killed, we can all throw our hands up, say "oh, it must've been a bug or something in the code", and dust our hands off. The banality of evil, truly.
I think the parent comment's point is that even without AI, warmongers find pretexts to bomb neighborhoods, schools, and hospitals; the AI would just add another avenue of deniability.
I wonder what all the various “AI safety experts” think about this issue, and whether their concern for humanity extends much beyond LLMs hallucinating something that could possibly offend a random person. The silence is kind of deafening.
Feels like maybe the whole "AI safety" thing is about maintaining control over manufacturing consent and promulgating only the approved narrative, and not much else. When it comes to the things the regime wants (surveillance, bombing brown people) these people immediately become blind, deaf, and mute.
Depends on what the target is. I assume you'd be OK with the Ukrainians hypothetically using such a capability to beat back the Russians? Wars are horrible, but they are not necessarily always unjust or something to avoid. The Nazi death camps wouldn't have ended without a war.
I think the critical decision making is whether or not to engage in air strikes.
Once the US makes that decision, then I think, the US should use all tools at its disposal including AI, computer vision, big data, etc to ensure that it targets and destroys what it needs to. Excluding technology to make itself less competent is, I think, stupid.
I think there's an aspect of the conflict there that's being overlooked.
The level of destruction is in part a preview of future warfare engagements without Geneva convention style bans on AI use.
Too much rhetoric is treating it as a one-off, when in actuality the technology enabling it means it's more of a first-off.
We're about to watch the new generation of WWI style technology influenced warfare without ethical and human rights concerns, and it's going to happen quickly.
What Americans asked for these fancy "digital images"? I think we should go back to the good ol' days of CORONA, where REAL images were captured on REAL film, then dropped out of orbit and caught by planes. Then we can hire WAREHOUSES full of analysts and put them to work poring over every bit of film with those little magnifiers they use and then circle the Bad Guys with a big red pen