I've been stuck in a loop with my father for a while now where he forwards me a certain sort of propaganda. I feel like I've struggled to break through on a key point: when I say something smells like propaganda, it doesn't mean I think it is false--it means I think it's part of an organized campaign to build narratives and consensus.
In my case, the topic is highly charged. Reapproaching it with something low-stakes like this is at least worth a try.
Skilled propaganda isn't about making up stories; it's about amplifying and positioning existing truths. Nothing has to be a lie, and it affects the receiver as much as it shapes the whole information supply chain.
This piece might encourage someone else to research a different story, and thus it amplifies the goals of the original propagandist. Many people in a psyop/propaganda campaign are unaware of any sort of coercion. If the receiver recognizes it as propaganda, the effort has failed.
It is an exciting time for sociologists; the disinformation war occurring right now is global in scope. If the Cold War was WW3, we are currently in WW4.
This is what frustrates me so much about people who seem to think any article that doesn't lie is totally fine and can never be problematic. There seems to be a belief that you can't deceive unless you lie.
The very best propaganda doesn't lie; it is simply very selective about which truths it shares. By carefully choosing your facts, using loaded language, and properly framing your statements, you can make almost any argument without lying.
This resonates with me. I used to have similar arguments about forwarded messages with my late father, who was an electronics engineer and really sharp in his prime.
Intuitively it'd seem that the greater dissemination of information with social media and messaging apps would make people smarter and promote critical thinking, but in reality they seem to have the opposite effect. I still keep wondering whether it was the environment or his age that was the primary factor. Do we become more gullible and less discerning as we grow older? Perhaps I'll be much worse when I'm as old as he was.
> Intuitively it'd seem that the greater dissemination of information with social media and messaging apps would make people smarter and promote critical thinking
I naively believed this too. It's wrong. The Internet and the various apps, sites, and social media things running on it aren't an information dissemination system; they are a data dissemination system. They take bytes and get them in front of many humans. The system itself cares not one whit whether the data it transmits has any connection to reality.
The result is that the Internet functions as a mostly indiscriminate amplifier of its inputs. Sadly, it is much easier to input lies than truth. First of all, creating lies takes much less effort, since no measurement or fact checking is required. I could tell you a hundred lies about what the neighbor next door is doing in the time it takes you to go over there and ring the doorbell. Worse, my lies can be crafted to take full advantage of all of the human biases and flaws in the social media systems. I can make lies that are emotionally loaded, play into stereotypical narratives, or have evocative photos. Meanwhile, you, the foolish truth-teller, are restricted to only the narratives that actually happened and the imagery that you actually took of the event.
When you think of London in the Industrial Revolution, do you picture all of the bright shiny products of industrialization? The newly affordable textiles? Or is it more an image of a city covered in filthy coal soot? It feels like we're in the latter state right now: buried under the pollution of disinformation.
We need to start thinking in terms of "information pollution", and we need an "information environmentalism" movement if we're going to survive the Information Age.
I think my father is still sharp and have several competing quotidian explanations (projections?) of what's going on.
But, it's impossible to silence the nagging worry that there's a chemical/mental imbalance brewing, or that this will be one of those things I'll look back on some day as an early sign of something.
Could you please expand on what you mean? From the context I think you're saying it's a type of guerrilla marketing but the link doesn't seem to align with that. Nor does the article because the academics really do seem to have been falsely accused by the fake student/journalist mentioned in the article.
I was thinking the same thing: is there a word for a computer-generated person? The whole article felt off to me because of what I interpreted as a misunderstanding of what a deepfake is.
This should be the answer, even if there is a more accepted term already in use. The etymology is undeniable, and it very effectively describes the concept in question without risking confusion with adjacent concepts.
The opposite will happen. I doubt the call for precision wins out.
Reuters and other media will stretch "deepfake" to mean more things, and it will become a category of related frauds.
I have to wonder when the meaning change occurred. It's hard to tell whether they got it from an interview or whether a misunderstanding introduced it into the article.
>A generative adversarial network is the name given to dueling computer programs that run through a process of trial and error, according to Hao Li, chief executive and cofounder of Pinscreen, a startup that builds AI avatars.
>One program, the generator, sequentially fires out millions of attempts at a face; the second program, the discriminator, tries to sniff out whether the first program’s face is a fake. If the discriminator can’t tell, Li said, a deepfake is produced.
It's a good one too!
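To make that duel concrete, here's a minimal sketch in PyTorch. This is my own toy illustration of the quoted definition, not anything from the article; every layer size and hyperparameter is a placeholder, and real face generators are vastly larger.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes; real face models are far larger

# The "generator" fires out attempts at an image from random noise.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# The "discriminator" tries to sniff out whether an image is fake.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fakes = generator(torch.randn(batch, latent_dim))

    # Discriminator: call real images real (1) and generated ones fake (0).
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_images), torch.ones(batch, 1)) +
              bce(discriminator(fakes.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator into calling its fakes real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fakes), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

When the discriminator's guess settles near 50/50 on the generated images, it "can't tell", which is the point at which the quoted definition says a deepfake is produced.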
In our experience, an avatar is commonly used in relation to information-warfare operations and is comparable to a sockpuppet: a deeply engineered social identity intended to skew public opinion.
Media outlets are already using deepfake as a general term for photographic manipulation. You might recall that video of Joe Biden making lewd gestures on Trump's Twitter. That was reported as being a deepfake but was in fact made with ordinary video-editing techniques.
To expand on the parent comment's point, this is most likely using some variant of StyleGan2 which has plenty of endpoints on the Internet to make it easy for someone to just download a face for free.
However, they all have this uncanny StyleGAN glare. And there are still artifacts. Quality's improving though. Scrubbing it of EXIF data doesn't really deter detection.
Go ahead, try posting some StyleGAN avatars to Facebook or something. Your account will get flagged immediately for doing so. There's already a robust amount of interest in "deepfake" detection, and there are methods to fingerprint generated images, or to adversarially attack the generating models with data poisoning, so that their output carries a dead giveaway. Kind of like putting a dye pack in your bills.
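To make the point that the fingerprint lives in the pixels rather than the metadata, here's a toy sketch. It pulls a face from thispersondoesnotexist.com (the StyleGAN demo site mentioned elsewhere in this thread), strips the metadata, and computes a crude frequency statistic; the FFT ratio is only a stand-in I chose for illustration, and real detectors are far more sophisticated.

```python
import io

import numpy as np
import requests
from PIL import Image

# Fetch a freshly generated StyleGAN face (the site serves a new one per request).
resp = requests.get("https://thispersondoesnotexist.com/", timeout=30)
img = Image.open(io.BytesIO(resp.content)).convert("L")

# "Scrub EXIF" by re-encoding only the pixel data; no metadata survives.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("face_no_exif.png")

# Crude illustration of why that doesn't deter detection: GAN upsampling
# tends to leave periodic high-frequency artifacts, and those live in the
# pixels themselves.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(np.asarray(clean, dtype=float))))
h, w = spectrum.shape
high_band = spectrum[(3 * h) // 4:, :].mean()  # outer (high) frequencies
low_band = spectrum[h // 2 - 8:h // 2 + 8,     # center (low) frequencies
                    w // 2 - 8:w // 2 + 8].mean()
print(f"high/low spectral energy ratio: {high_band / low_band:.2e}")
```

A single ratio proves nothing by itself; real classifiers compare spectra like this against large corpora of camera photos and known GAN output.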
What is NOT helping is how media outlets drum up cheap panic over the "end of truth" or insinuate that the barrier to disinformation campaigns is dropping. Both claims can be valid, but I think we teach the wrong lesson by telling people they can't trust anything, rather than helping them establish what trust means to them in the first place.
While you're right in professional usage, over the last 24 months the term has stuck to the image- and video-manipulation aspects, to simplify the terminology for consumers and readers.
And finding a cool name for the image frames that are generated through GANs (generative adversarial networks) is a great idea...
Probably something as simple, and catchy, as deepfakes (which was ultimately linked to the initial user on GitHub, if I remember correctly).
Right – I thought from the headline they had concocted a video of the activists doing or saying something repugnant, which does seem like a problem that’s fast approaching.
This, on the other hand, is a case of a fake online profile being used to say terrible things about the activist couple. The only notable thing is the profile photo was computer generated.
Has anyone ever experimented with totally sanitizing identities when reading news? For example, if there were a browser plugin that would cleanse pictures, associations, academic credentials, and especially names from news, would that make a difference in the way that we as humans process information?
Deepfakes create an "adjacent identity" to trick people into aligning their opinions with someone who appears to be in the same group. But if we were aware of this and removed the identity in order to strictly review the information, it might change the way that information is received.
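As a minimal sketch of what the backend of such a plugin might do, here's the idea with spaCy's named-entity recognizer. The library choice, the model, and which entity types to cleanse are all my assumptions for illustration, not a real product; the sample sentence just paraphrases the accusation described in the article.

```python
import spacy

# pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Which identity markers to cleanse; this selection is an arbitrary choice.
REDACT = {"PERSON": "[PERSON]", "ORG": "[ORGANIZATION]", "GPE": "[PLACE]"}

def sanitize(text: str) -> str:
    """Replace named identities so only the bare claims remain."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in REDACT:
            out.append(text[last:ent.start_char])
            out.append(REDACT[ent.label_])
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(sanitize("Oliver Taylor accused London academic Mazen Masri "
               "of being a known terrorist sympathizer."))
# e.g. -> "[PERSON] accused [PLACE] academic [PERSON] of being a known terrorist sympathizer."
```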
This would only work if a majority did this, so it would never work, nevermind.
There are differing views on how much to showcase authors' (or reporters') names, and some organs (The Economist, Hacker News) intentionally hide or de-emphasize these.
Almost all questions of identity actually revolve around trust in one of its various guises: trust, credit, accountability, entitlement, and the making or receiving of payment. In creating or relaying (mediating) information, the persistence of authorship attribution provides a bundling handle for trust. An author might file numerous stories per year, month, week, day, etc. Assessing and assigning trust to each story is expensive. Bylines, editors, and publications accrue positive or negative associations with time. No, trust is not fully consistent or transitive, but as an efficiency heuristic, rolling up and bundling reputation offers powerful gains, and numerous checks.
Even a pseudonymous source (you're reading one now) can accrue a certain reputation.
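To make the "bundling handle" concrete, here's a toy sketch of rolling per-story outcomes up into a byline reputation. The moving-average scheme, its weight, and the neutral starting score are arbitrary illustrative choices, not a claim about how any real outlet scores trust.

```python
from collections import defaultdict

ALPHA = 0.1  # how fast new evidence moves an established reputation

# Every unknown byline starts at a neutral 0.5 (0 = distrust, 1 = trust).
reputation: dict[str, float] = defaultdict(lambda: 0.5)

def record_story(byline: str, held_up: bool) -> float:
    """Fold one verified (or debunked) story into the byline's rolled-up score."""
    outcome = 1.0 if held_up else 0.0
    reputation[byline] = (1 - ALPHA) * reputation[byline] + ALPHA * outcome
    return reputation[byline]

# Four story verdicts roll up into one number, so a reader doesn't have
# to re-vet the author from scratch every time.
for held_up in (True, True, False, True):
    score = record_story("A. Byline", held_up)
print(f"rolled-up trust in 'A. Byline': {score:.2f}")
```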
"News" is usually a hybrid of celebrity gossip, ephemeral sports/weather, PR / political narratives, and journalism. You seem to be under the impression that everyone wants to "read news" like they are investigative journalists.
Yes, the crowd on HN is more likely to do this, but I suspect it is very overwhelming for the average news reader to correlate so many data points.
> “The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.
and:
> Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”
As an amateur photographer who only* knows enough Photoshop to quickly enhance images, I find these two statements completely unconvincing. This type of he-said-she-said news article makes me uncomfortable. TTBD: Truth To Be Determined.
* Edit: I know nothing about deep fakes but I expect "fake faces" to have "hallmarks" in the face.
That is more helpful, thank you. I'm still unconvinced since the discrepancies described are easily explained with optical depth-of-field as well as resizing and sharpening algorithms; the basic photography kind of stuff I'm familiar with.
I'm also not convinced; the fake is convincing enough for most people, especially given it's unlikely a picture like that will ever be displayed at more than 100px.
The lede is buried: the "deepfake" wrote for multiple newspapers:
> In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”
So there's a real person behind the account who simply wants to stay anonymous; in this case, traditionally the newspaper bears the responsibility for the smear.
That doesn't follow. My first guess was that those articles were just part of the identity's cover and were written by a group that was running "Taylor".
The lower left (his right) section of the collar has a bit of "crunchy pixels." Otherwise, it looks pretty damn realistic.
This appears to be a "thispersondoesnotexist.com" image. Is that site considered "deepfake"?
I always assumed "deepfake" to mean altering existing images to add actual people's likenesses (like the classic "celebrity porn" images). In particular, applied to video.
This type of thing is likely to become de rigueur, for astroturf and scam accounts. Disturbing, but not particularly newsworthy.
Before, people would just scrape images from stock photo sites, or even from some poor sap's Facebook photos. Doing it this way simply makes it less likely to be uncovered.
> This type of thing is likely to become de rigueur, for astroturf and scam accounts. Disturbing, but not particularly newsworthy.
It's newsworthy if the fake account's posts are used to assassinate the character of real people and the fake persona is pointed out for being fake:
> Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.
I am not sure this is a new frontier at all. This is a case of no one being who they say they are on the internet. Frankly, given the mood in the US these days, it seems wise not to share your opinion too openly ( or leave your contact information ).
Since it is not a deepfake ( it may very well be an enhanced pic for all we know ), the question becomes: what is the purpose behind this author's post? Did they just learn about deepfakes and want to capitalize on it? Did they want to propagate a specific point ( and if so, which one )?
I miss the days when I took articles at face value ( ignorance is bliss ). Now I can't help myself and analyze former bastions like Reuters. It is sad to me.
Deep fakes are going to start becoming part of our news, our entertainment, and our justice systems. "Fake news" a few years ago was just cherry-picked data and skewed graphs; now the probability of any piece of media being completely fabricated is high enough that we have to consciously consider it.
This looks like any other sockpuppet account, but the benefit of using a generated person is that they won't accidentally cross paths with the real one and get outed.
This is not an innovation, just a marginally more effective approach to obfuscating your trail.
Imagine what might happen if someone were, say, to carefully craft an online persona (or several) over the course of a decade or more, complete with substantial recognition on numerous online platforms, news and academic citations, interviews, and public campaigns.
All based on created or assumed identifiers and imagery.
I am of course referring to myself. Though there may be others.
There is nothing "intelligent" about any of this. These are dumb algorithms doing what they're told. That said, there is everything "artificial" about this. Perhaps we can shorten "AI" to merely "A?"
I typed this already on the duplicate submission but here it goes.
Anytime I see things about GANs, deepfakes, and bots, it reminds me of the video game Deus Ex: Human Revolution (2011).
The game is set in the not so distant future and revolves around technology, ethics and humanity.
It touches on various concepts like AI, biomedical advancements, hacking, weaponry, and civil rights.
The particularly profound bit that I relate to situations like this is around a TV news broadcaster called Eliza Cassan.
---Spoilers Ahead!---
Eliza works for one of the most powerful media and news agencies, called Picus.
She is a celebrity, and with her ability to get the latest news and coverage, she is able to direct the conversation on the various political and ethical issues I mentioned earlier.
As you progress through the game and break into the Picus News HQ, you come to realise that no employees have met or dealt with Eliza Cassan in person.
This is because she is revealed to be an AI-powered hologram composed from what I would assume are desirable human aesthetics.
It is not established who controls Eliza; you can tell she has a degree of autonomy but is ultimately controlled to operate for the benefit of her operators.
Her AI is also intrusive and appears to be able to mine data from around the world (cameras, hacked devices), allowing her unprecedented global access, which she uses to analyse and direct conversation for her viewers.
A thoroughly good game that gives both techno-optimists and techno-pessimists a taste of what our future "may" be like.
S1mone is about an actress who quits a movie and is replaced by a CGI puppet, not completely dissimilar from situations in the production of Back to the Future 2 and All The Money in the World. The main character of S1mone also conceals the fact that she isn't real.
Imagine a video showing the leader of a country discussing a pre-emptive nuclear strike on another nuclear-armed country, only the content of the video is not real but an ultra-realistic deepfake.
Not sure how accessible the Israeli tool used in the article for detecting manipulation is.
There is an urgent need gap for accessible tools for detecting deepfakes[1] (sorry for the plug; it's my problem-validation platform. If your startup has such a tool or you know of any open-source tool, feel free to comment there).
On one hand - I have a very small hope that in the future all video evidence will be treated as potentially fake and unreliable. On the other - it will be a horror show with real criminals. Our future is a bleak one.
I really don't see why you hope for that since, as you point out, once deepfakes are good enough (which might already be the case) we can't even trust our own eyes anymore.
Plato's cave allegory is going to be a very real, practical concern for people in a few decades. There won't be any record, any evidence that couldn't trivially be falsified at a large scale. You can have AIs generate serious-looking theses written by serious-looking people with serious-looking credentials from a serious-looking university in a real-looking city, and all of it be faked at scale. Terrifying indeed.
There are going to be a lot of people who share that hope, given the rapidly shifting window of permissible thought and deed - combined with technology's unblinking eye. I always thought that bit rot would act in such a manner, because it is getting harder every year to faithfully preserve internet events... but this might do the job sooner.
Exactly. To clarify, my first sentence was about a state where this so-called "deep fake" tech is already present and in use, but courts and judges haven't caught up yet and are treating any photo/video evidence as real by default.
I'm positive this photo is computer generated. Those GAN "photos" all have a certain look: the background contains no concrete objects, even out-of-focus ones, but is usually not a solid-color backdrop. It's usually something that looks like a normal background until you look closer and realize it doesn't make sense. There are also problems with the details of this face, if you look closely enough. This one's rightmost tooth has an odd blurred boundary with the one next to it. The shirt also looks odd.
Is he also plainly missing the left ear? It looks like it's covered by the face, but I doubt it would be entirely invisible. One thing that usually works well to detect (women's) GAN-generated faces is earrings: they're usually different from one side to the other.
Somewhat off topic / tangent: How long until courtrooms no longer accept video footage as evidence due to the prevalence of fake videos? i.e. police body cams, store security camera, videos of powerful people engaged in inappropriate activity, etc. What implications might this have?
Courtrooms will always accept video so long as the proper foundation can be provided and the authenticity can be proved. Even the best edits and deepfakes are rather simple to expose. If anything, as the techniques become more advanced, we would see a rise in forensic media analysts (I made that phrase up) who would assess the reliability of the media. Many people are prone to lying when it benefits them, but we are never going to stop letting people testify in court because of that. The same goes for video and other editable media.
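One small, concrete piece of that evidentiary foundation is integrity checking: a cryptographic hash recorded when footage is ingested proves the file hasn't been altered since, though it says nothing about whether the original capture was genuine. A minimal sketch (the filename is hypothetical):

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a media file, streamed so large videos fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded in the chain-of-custody log when footage is ingested,
# then re-checked when the file is offered as evidence.
print(fingerprint("bodycam_footage.mp4"))  # hypothetical filename
```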
If this is not taken from https://thispersondoesnotexist.com/ then it's a really big deal; that's what would be interesting, if you could prove it either way.
One assumes that this has been going on since print media has existed; it's just that the barriers to entry are getting lower (easier to concoct fake personalities, easier to get stories spread by paying for clicks). See also for example https://www.thedailybeast.com/right-wing-media-outlets-duped...
I would be stunned if there weren't thousands or perhaps millions of "people" on Twitter that are just mercenary bots farming timeline history and meow-meow beans, waiting to be activated. It's just too easy NOT to do, right?
> Deepfakes like Taylor are dangerous because they can help build “a totally untraceable identity,” said Dan Brahmy, whose Israel-based startup Cyabra specializes in detecting such images.
Dangerous for the government, good for the citizens.
On the internet, by random people? That really isn't new, and was possible without deep fakes, you can use an anime avatar on Twitter if you want to, or just stay an egg.
In newspapers? There's a trivial solution: check that your authors actually exist and are who they say they are.
[1]: http://paulgraham.com/submarine.html