Regardless, the AP news article linked under the "methods" page provides some useful reading on how to detect these faces, for anyone interested.
I imagine this kind of stuff will trick a lot of people in practice.
But in passing, accompanying a news article, a tweet, or an Instagram post, are you paying as much attention? Those are the scenarios where the potential for harm is much bigger.
That 100% will gradually come down as the tech improves. And I'd guess the tech is already good enough that most people won't be able to improve on 50% success at first glance -- I don't think my instinct would noticeably improve with practice.
Apart from happiness (smiling), all of the deepfakes showed a blunted affect. Genuine humans tend to have quite expressive faces, and many of the fakes looked like NPCs from an Elder Scrolls game.
These lead me to believe that in a situation where deepfakes might actually matter, e.g. security video presented as evidence in court, it would be possible for even a human expert to start picking up the deepfake artifacts/signatures.
Face-on, ears seem to become smudges.
If you know how to look for imperfections and quirks it's easier, but ain't nobody doing it for just an image without "this might be AI" context.
... reeled off a list of digital tells that he believes show the Jones photo was created by a computer program, including
- inconsistencies around Jones’ eyes,
- the ethereal glow around her hair and
- smudge marks on her left cheek
But that being said, all the pictures were insanely convincing and I picked fakes only because I knew I had to pick one and not because I knew one was fake.
I'm playing "can you tell which picture has a non blurry background and has no artifacts?"
edit: My first mistake was thinking a piece of fabric on a human was unusually warped.
So next time you're on a video call with someone and you're unsure if they're human or not, ask them to draw a letter on their face or have them dress like a pirate ;-)
Is that a thing now? My cursory search for "deepfake video call" gave me https://www.youtube.com/watch?v=wYSmp-nrJ7M
but other than that, there are just YouTubers goofing around with the tech. Do you know of a "good quality" deepfake video call that can fool us the way whichfaceisreal sometimes does?
Maybe there are more instances, but this made the local news.
I only get localized results on the phone, so here is a German link. Use deepl or Google translate: https://www.tagesschau.de/investigativ/rbb/deep-fake-klitsch...
Btw, Firefox has a first-party, on-device translator, so that works nicely.
Maybe we will see this in the future for CEO scams. Though in that case, a good UI that clearly indicates the victim is being called by an external user "Mr. Big CEO <email@example.com>" might already be helpful.
To me, looking at the background is kind of cheating as a way to suss out facial features; after all, we are trying to figure out if the face is real, not the background.
I look at the hair. In real images you can see its fine threaded structure; in fake ones it's rather blurry and inconsistent.
That said, even without looking deeply for weird smooshy patterns, inconsistent curves, lack of symmetry, or nonsense clothing, the biggest giveaway is that most AIs are pretty bad at realistic lighting. I got most of these at a glance because it’s a very pronounced difference.
The problem is there's too much variety in the backgrounds of the training set. They don't follow a pattern the way growing a human does.
I'm sure this fools a majority of people, contrary to the comments here. Obviously, with detailed analysis, you can probably spot the difference, but in day-to-day activity, and without knowing that one picture is fake, it will fool even more.
I wouldn't assume the avatars are the real people either, but yeah...
Backgrounds should be generated by a different model with the face pasted in; now that would be a real challenge! Models that fix eyes already exist.
If the head is rotated slightly, the faces, especially the cheeks, get slightly distorted in the artificial images.
That, or the backgrounds have the weird discombobulated shapes and structures that only vaguely resemble real things, which I’ve also noticed in other AI generation tools.
Either way, it still fools me sometimes and it’s pretty remarkable how quickly this has all been happening.
I think a similar test that is not asking for a direct comparison but just "is this image real?" would be much harder, since there is no better "safe" choice to fall back on.
A few of them do have some artifacts on the face that give it away, but this is very impressive.
Edit: Doing it a couple more times, you can tell pretty much instantly.
Use a normal pointer.
But I believe I am somewhat face-blind. I have never understood how people were able to describe faces to the cops to make those mockups of criminal suspects. I also struggle to recognize faces sometimes, including celebrities and new dating partners. At a past job, I remember thinking two of my coworkers were the same coworker until I saw them at the same lunch outing and it suddenly clicked. I recently got confused by two characters in an action movie with less than a dozen characters total, and realized shortly after that they had different ethnicities.
Biggest issue seems to be a number of images of people consuming their deformed selves:
It's hard to decide whether these are impressive without knowing whether each face is just a real face with some minor adjustments.
Example: https://imgur.com/a/eK0jMZx. I can look at it after getting used to it, but at first glance I have to look away.
Other giveaways I haven’t seen mentioned in the discussion:
- vague earrings: fake.
- coherent details in glasses reflections: real.
- second person in the picture has good details: probably real. Second person has bad details: too easy, fake.
- gratuitous wisps of disconnected hair: fake.
- actual clearly coherent, finely detailed design on glasses frames or clothing: real.
I didn't get a single one wrong, and am now playing with the rule that I have to decide within a few seconds; still getting them all right.
Still, they're pretty good. If one of the CG images came up by itself in the course of other business, I wouldn't bat an eyelid.
Not the face itself, but what's around it: background, other objects, etc.
Check for weird-looking "something's not right" objects/backgrounds and you'll get most of it fine.
Look really carefully at a small area of skin. See if wrinkles, pores, hairs, and minor skin imperfections are present. See if they make sense in the context of the rest of the face.
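The "fine skin texture" check above corresponds roughly to measuring high-frequency detail in a small patch. As a minimal sketch (not anything the quiz site actually does), the variance of a discrete Laplacian is a standard sharpness proxy: noisy, detailed texture scores high, while the overly smooth skin that generated faces often produce scores low. The patch data here is synthetic for illustration; in practice you would crop a cheek or forehead region from the image.

```python
import numpy as np

def laplacian_variance(patch):
    """Variance of a 4-neighbor discrete Laplacian; higher = more fine detail."""
    p = patch.astype(float)
    lap = (-4 * p[1:-1, 1:-1]
           + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
# Noisy, detailed "skin" patch vs. a nearly flat, over-smooth one.
textured = rng.integers(0, 256, (64, 64))
smooth = np.full((64, 64), 128) + rng.integers(0, 4, (64, 64))

print(laplacian_variance(textured) > laplacian_variance(smooth))  # True
```

This is only a crude heuristic: heavy JPEG compression or background bokeh in a real photo can score just as low, which matches the comments here about bad-quality photographs producing artifacts of their own.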
AI vs. real can become somewhat easy to identify over multiple repetitions. AI vs. real, real vs. real, and AI vs. AI are all scenarios that should be included to increase the difficulty, imo.
On this selection at least, those with blurred/unicolor backgrounds are fakes, and true pictures regularly have interesting things in the background.
Not hard to change, but it does tell me that the website is probably honest with its data so far.
On a bigger screen, I would say that in the AI ones, the fake hair is "subtly messy with imperfections" - it's a bit like a weave or rug in places, not correctly modelling strands.
There are a lot of tells. Glasses make the edges of the eyes look strange. Around the ears, hats, and sometimes the backgrounds, the blurs are wrong or corrupt.
However, if I didn't know one of the two was generated, I wouldn't look for anything and would probably just assume it's real, unless there was really obvious corruption on the face.
- artifacts in backgrounds;
- weird patterns in clothes;
- clothes that are very ill-fitting.
However, since diffusion models are all the rage now, I think we would perform significantly worse with landscapes or images of fruits and animals, especially if the task is "distinguish between the real and fake art".
However, the state of the art of image generation has moved on - I suspect a 2022 version of this would be substantially harder.
Depressing. They're both photos. A photo (of 'reality') is, at its very best, already just a representation of the subject. Both are (technically) fake, aren't they?
I'm really not sure what point you are making.
If you say so, I guess.
If it has blurry and screwed up bokeh or random patterns, it's probably the fake one. If there's something incredibly detailed, but blurred, it's probably real.
Every single fake image has poorly rendered ears, which makes perfect sense, as the contouring would be hard for an AI to get right.
Maybe a face-centric ML isn't so good at backgrounds. Shocker!
Tried 3, got all correct; no point wasting my time any further. Same issues as always with these photos.
I got one wrong because it was a bad-quality photograph, which created artifacts of its own.
Perhaps that is the reason people are doing badly in these tests.