US Used AI to Help Find Middle East Targets for Airstrikes (bloomberg.com)
52 points by bluefishinit 3 months ago | 43 comments



Pretty much the perfect use case for AI. Take a repetitive job (visually scanning images) that was probably done by dozens of analysts before, automate it, and have those analysts be 100x more productive checking the results. If I were a militant I'd be investing in circus tents or some other completely out-of-distribution type of camouflage.


The article seems to seamlessly go from "computer vision" to "AI" which is an extremely loaded term from the last few years. It feels like the tech could be basically: align two photos separated by a day/week/month, do a difference. Repeat that for tens of thousands of square kilometers.
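
A minimal sketch of that kind of change detection with OpenCV, assuming the two frames are already co-registered (the filenames and thresholds here are made up):

    import cv2

    # Two grayscale satellite frames of the same area, taken days apart.
    # A real pipeline would georegister and radiometrically normalize first.
    before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
    after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)

    # Per-pixel absolute difference, thresholded to flag large changes.
    diff = cv2.absdiff(before, after)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    # Group changed pixels into blobs; keep the big ones for a human analyst.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    flagged = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    print(len(flagged), "regions changed enough to be worth a look")

Tile that over the imagery and you've covered the tens of thousands of square kilometers.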

I'd call that "computer vision". If I explain that to a layperson, I might accidentally say AI, then they think ChatGPT, ..., Skynet, Terminator.


If you use Deep Neural Nets, even for Computer Vision, it's still Deep Learning, and can be categorized as AI.


Disregarding fictional things... The idea that computer vision and LLMs are different categories of thing is false; that either one gets called "AI" is a result of marketing.


They are called AI because that is the name of the broader field that encompasses them.


They are machine learning, a so far unsuccessful attempt at AI.

They aren't AI, which is why the phrase "True AI" is so commonly used.


The term you are looking for is AGI; I've never heard of "true AI". And if machine learning failed, why are we using it so much in everyday life?


ML failed to produce AI; that doesn't make ML itself a failure.

AGI is distinct from AI in that one is general and one is not. If you create an AI that can discover new mathematics but it can't write a paper about it, it's not AGI.


We have AI that can and has produced new mathematics. I don't understand what you think AI is. Machine learning enables computers to make decisions about things without explicit programming, and it does so by learning from data. It is a sort of intelligence, if not to the degree that we want (yet).
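
Even the textbook toy version makes that point: the decision rule below is fit from labeled examples rather than written by hand (sklearn's bundled iris dataset is just a stand-in):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    # No hand-coded rules: the tree derives its decision boundaries
    # entirely from the labeled examples.
    X, y = load_iris(return_X_y=True)
    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict(X[:1]))  # a decision nobody explicitly programmed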


I agree with your distinction between AI and AGI. Hasn't ML failed to produce AGI, but been wildly successful in producing AI?


What you call AGI now is what I used to call AI, and ML was just ML and statistics to me. But now AI can be everything from hidden Markov chains to the computer player in a game, as long as it makes a decision or prediction in some kind of process, no matter how dumb. I gave up arguing about it long ago.


See, you guys are stupid, they're gonna be looking for Army guys.


I hear underground tunnels are popular.


Which makes AI analytics on vegetation indexes, thermal imaging from dwell sensors, EM overpass surveys, and foot|vehicle traffic monitoring all the more attractive.

When a battalion disappears into an outside toilet that endpoints a line of changed vegetation that glows hot at night, chances are something's up^H^H down.
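
The vegetation-index part is just band arithmetic; a sketch of NDVI over hypothetical NIR/red rasters (random arrays standing in for calibrated reflectance):

    import numpy as np

    # Hypothetical near-infrared and red reflectance bands, same shape.
    nir = np.random.rand(512, 512).astype(np.float32)
    red = np.random.rand(512, 512).astype(np.float32)

    # NDVI = (NIR - Red) / (NIR + Red): healthy vegetation scores high
    # because chlorophyll reflects NIR and absorbs red light.
    ndvi = (nir - red) / (nir + red + 1e-6)

    # Differencing NDVI between passes flags disturbed vegetation,
    # e.g. a fresh line of dying plants over a shallow tunnel:
    # change = ndvi_this_pass - ndvi_last_pass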


Because common AI attacks like data poisoning and model drift are a thing of the distant past...


This is a horrible idea. When presented with a potential target, you're already biased to see an excuse to take human life. What happens when a person gets burnt out?


What happens when a person gets burnt out and makes mistakes?

Like this? https://en.wikipedia.org/wiki/United_States_bombing_of_the_C...


> ...last year, in which Centcom experimented with an AI recommendation engine, showed such systems "frequently fell short" of humans in proposing the order of attack or the best weapon to use.

I wonder if it's stupidly risky to train AI on actual weapon behaviours, let alone all of them to optimize coordination. That seems like the crown jewel of leaks for inferring actual US capabilities. It's one thing to embed a model with info about adversary targeting, but once you include your own capabilities to engage those targets, which I presume you would have to, those models also become a huge liability.


So when the AI picks a school to be airstruck and hundreds of kids are killed, we can all throw up our hands, say "oh, it must've been a bug or something in the code," and dust our hands off. The banality of evil, truly.

DTA


Israel did 200 airstrikes on schools already. What are you doing about it now that you wouldn't be doing if it were AI?


It was AI: look up “The Gospel” targeting system.


Where in the article does it say that there are no humans in the loop?


I think the parent comment's point is that even without AI, warmongers find pretexts to bomb neighborhoods, schools, and hospitals; the AI would just add another layer of deniability.


Yup. Best part: we’ll do it without AI and blame AI for it. Hit two birds with one stone, or two schools, more like it.


I wonder what all the various “AI safety experts” think about this issue, and whether their concern for humanity extends much beyond LLMs hallucinating something that could possibly offend a random person. The silence is kind of deafening.


A lot of AI safety is about the reliability and validity of information and the undesired consequences of actions taken on that information.

Morals and ethics about the type of information and desired action are sort of inconsequential in this view.


Feels like maybe the whole “AI safety” is about maintaining control over manufacturing consent and promulgating only the approved narrative, and not much else. When it comes to the things the regime wants (surveillance, bombing brown people) these people immediately become blind, deaf, and mute.


Yudkowsky's fine with it as it won't kill everyone. Just some people.


Depends on what the target is. I assume you'd be OK with the Ukrainians hypothetically using such a capability to beat back the Russians? Wars are horrible, but they are not necessarily always unjust or something to avoid. The Nazi death camps wouldn't have ended without a war.


I think the critical decision making is whether or not to engage in air strikes.

Once the US makes that decision, then I think the US should use all tools at its disposal, including AI, computer vision, big data, etc., to ensure that it targets and destroys what it needs to. Excluding technology and making itself less competent is, I think, stupid.



I think there's an aspect of the conflict there that may be overlooked.

The level of destruction is in part a preview of future warfare engagements without Geneva Convention-style bans on AI use.

Too much rhetoric is treating it as a one-off, when in actuality the technology enabling it means it's more of a first-off.

We're about to watch the new generation of WWI style technology influenced warfare without ethical and human rights concerns, and it's going to happen quickly.


So who put the wedding album into the corpus?


Far easier to kill people in Java than in other languages.


What Americans asked for this? I know I didn't, and I'm absolutely sick of paying for it.


I did. I want our enemies to know they cannot hide forever. Yes, America has enemies.


Do you think American adversaries are not leveraging this tech?


And? Is bombing them more supposed to change that?


Well… generally bombing an entity does make it do less of whatever it was doing before. So yeah.


I heard that logic worked out quite well in places like Vietnam and Afghanistan.


And yet the same logic worked well in Europe.


What Americans asked for these fancy "digital images"? I think we should go back to the good ol' days of CORONA, where REAL images were captured on REAL film, then dropped out of orbit and caught by planes. Then we can hire WAREHOUSES full of analysts and put them to work poring over every bit of film with those little magnifiers they use, and then circle the Bad Guys with a big red pen.


You're American?



