Oh wow. This is the next generation of that Max Headroom signal intrusion.
If it can be timed carefully, it could appear as just a scene cut to another story, and then disconnect before the original signal resumes with the next story.
Some years ago (2010 +/- 5 years?) the organization that administers the Geneva Convention held a conference on whether to classify as a war crime the act of faking video of the head of a country or its military (e.g. "I direct my soldiers to surrender").
I can't find a reference to it (perhaps someone else has better Google-fu), but there was an article on it in the NYTimes when the conference was held.
> Iran’s cyber-enabled influence operations also continued to grow in sophistication in this latest phase. They better disguised their sockpuppets by renaming some and changing their profile photos to appear more authentically Israeli. Meanwhile they made use of new techniques we’ve not seen from Iranian actors, including using AI as a key component to its messaging. We assess Cotton Sandstorm disrupted streaming television services in the UAE and elsewhere in December under the guise of a persona called “For Humanity.” For Humanity published videos on Telegram showing the group hacking into three online streaming services and disrupting several news channels with a fake news broadcast featuring an apparently AI generated anchor that claimed to show images of Palestinians injured and killed from Israeli military operations (Figure 7 (same as in the guardian article)). News outlets and viewers in the UAE, Canada, and the UK reported disruptions in streaming television programming, including BBC, that matched For Humanity’s claims.
> Iran’s operations worked toward four broad objectives: destabilization, retaliation, intimidation, and undermining international support for Israel. All four of these objectives also seek to undermine Israel and its supporters’ information environments to create general confusion and lack of trust.
Undermine well-deserved international support for [to quote the US judge] plausible genocide? I'm kinda offended by the idea that it was not my own choice not to support it. Not to worry, my government fully supports all aspects of the PG, including the rejection of the Convention on the Rights of the Child.
Besides lots of good reporting, there is also hilarious nonsense:
> In April and November, Iran demonstrated repeated success in recruiting unwitting Israelis to engage in on-the-ground activities promoting its false operations. In one recent operation, “Tears of War,” Iranian operatives reportedly succeeded in convincing Israelis to hang branded Tears of War banners in Israeli neighborhoods featuring a seemingly AI-generated image of Netanyahu and calling for his removal from office
You know that feeling? You watch your own feet and hands move as you put up banners promoting some false operation you unwittingly don't support? Hate it when that happens. We should be happy they didn't have to consciously experience it.
Pretty clever how those researchers identified this among the hundreds of thousands protesting against Bibi.
Methinks Iran might be pissing in a bowl of Cheerios they're not ready to have thrown back in their faces. They're entering a deepfake arena against several states with more advanced technological and cyber-offensive capabilities than their own.
A very large chunk of the Iranian population already hates their government, not sure how some fake videos would make a difference. Maybe target the loyal somehow?
Because they didn't. The Guardian lies, as always. The text says:
>The fake news anchor introduced unverified images that claimed to show Palestinians injured and killed from Israeli military operations in Gaza.
So they are "unverified" in that someone from The Guardian hasn't personally checked that they are images from Gaza, but they were NOT AI-generated.
The only thing that seems AI-generated is the anchorman, but that is of little importance.
The article doesn't say anything other than what you're saying.
>Iranian state-backed hackers interrupted TV streaming services in the United Arab Emirates to broadcast a deepfake newsreader delivering a report on the war in Gaza, according to analysts at Microsoft.
>The tech company said a hacking operation run by the Islamic Revolutionary Guards, a key branch of the Iranian armed forces, had disrupted streaming platforms in the UAE with an AI-generated news broadcast branded “For Humanity”.
>The fake news anchor introduced unverified images that claimed to show Palestinians injured and killed from Israeli military operations in Gaza. Analysts at Microsoft said the hacking group, known as Cotton Sandstorm, published videos on the Telegram messaging platform showing it hacking into three online streaming services and disrupting news channels with the fake newscaster.
As far as verifying things goes, yes, that's how it should work. I'm no fan of the Guardian by any means, but there has been a ton of misinformation: old videos of Assad's forces bombing civilians passed off as Palestinians getting bombed, that kind of stuff. The entire conflict is an all-out propaganda war in addition to the physical fighting. Thinking that the Islamic Republic, of all regimes, wouldn't lie about the Israel-Palestine conflict is absurd. Also, as an aside, the Guardian itself is regularly accused of having an anti-Israel bias. They're not exactly known to push pro-Israel talking points, to put it mildly. It's a bit like accusing Al Jazeera of having an anti-Gaza bias.
The headline seems intended to mislead people. Most people who read it are going to interpret it as Iran having hacked a station and delivered 'fake news.' Hence the original post in the thread we're responding to. For some contrast, see an identical story subject here. [1] That story is about China's AI newscaster, and there they just used normal terms like 'AI anchor.'
Yeah, sorry, no. The Guardian is writing in a way obviously intended to make careless readers (which means most readers) think that the whole video is known to be AI-generated. No, they don't actually say that; they invite the reader to believe it while carefully not saying it.
No professional writer would do that by accident. The writing is very obviously slanted in a way meant to mislead the audience. I don't know what their motives for misleading people may be. I suspect it's mostly to sex up the story by exaggerating the sensational "AI" angle, because, hey, who cares if you escalate a conflict and get more people killed when there are clicks and views and ad impressions to be had.
But the fact that they are intentionally misleading people is evident in their own words.
As for the actual video, it's trivial to find stuff that will generate the talking head for you, and they probably did that so they wouldn't have to have any of their own people on camera. That's zero evidence in either direction about the veracity of the important parts, even though the Guardian is trying to use it to cast doubt on those parts.
The "unverified images" themselves may or may not be real, but almost certainly are NOT AI-generated. As you yourself noticed, it's still easier, even today to just use old stuff or stuff from other wars.
And honestly there's going to be more than enough real evil done in any war that you could make a video like that if you had some kind of omniscient camera.
I don't agree that this is "obviously intended" to make people think it's AI. There are lots of possibilities besides AI: it could be old footage, the parties could be misidentified, it could be from a different conflict, or it could be taken out of context. "Unverified" is a pretty standard way to characterize a video when you haven't vetted its source.
This is wildly overdrawn and lacking in substantial basis.
Though I'm sure there are many who believe that any source besides Electronic Intifada (or equivalent to it in tone and selective reporting) is "extremely pro-Israel".
Yep. Even the censoring and firing of the last of their decent cartoonists, Steve Bell, shows the Guardian can't tolerate off-message satire when doing their masters' bidding.[0] They were half-decent under Rusbridger, but how far they have fallen.
[0] https://www.belltoons.co.uk/hotoffpress
The lie is in the title: it is written in a way that makes it sound like the entire piece that was broadcast was generated by AI, which is not the case.
Exactly. We have to disregard the entire thing. If we can get more of these images from reputable sources that verify the actual story, that's a matter that should be discussed. Given, however, that what we have is propaganda in its most explicit form, we should not take it on faith that they're honest about any of it.