Which face is real? (whichfaceisreal.com)
235 points by GamerUncle on Nov 14, 2022 | 181 comments



I'll simply observe that it is easy to tell a fake face when presented with an either/or choice and when specifically asked to. Most of the time we aren't looking as closely, so while I see some commenters being very happy about their accomplishments, I don't personally see a reason to rejoice.

Regardless, the AP news article[1] linked under the "methods" page provides some useful reading on how to detect these faces, for anyone interested.

[1] https://apnews.com/article/ap-top-news-artificial-intelligen...


My personal observation is that these generators fail miserably when generating low-detail parts and hair. In many of these pictures you don't have to look at the face at all; just look at the background, and the one with heavy artifacting will be fake. In "enterprise"-style pictures one can look at the hair and find heavy artifacting there.


The problem is that most people don't look closely at hair and backgrounds when they see a random photo somewhere; they look at the face!

I imagine this kind of stuff will trick a lot of people in practice.


Sure, I completely agree on this one - AI generated faces have been pretty decent for years now. The quality is currently in some weird space: with careful preselection they can fool an unsuspecting reader passing by, and yet at the same time a reader looking for fakes will detect them (given high enough size/quality) with high confidence.


For me the eyes looked really off in all the fakes; the pupils seemed like the wrong size for the level of light, and it looked like the shape of the eye didn't fit into the bone structure of the face.


This is a very important distinction. With deliberate attention, you can indeed catch many fakes in this kind of scenario (to me it seems the background is often a giveaway, but you do need to focus on it).

But in passing, accompanying a news article, a tweet or an instagram post, are you paying as much attention? Those are the scenarios where the potential for harm is much bigger.


Yeah, I had exactly the same reaction. When I take the time to scan for artifacts, I get close to 100%, but when I try to do it quickly, I get close to 50%.

That 100% will gradually come down as the tech improves. And I'd guess the tech is already good enough that most people won't be able to improve on 50% success at first glance -- I don't think my instinct would noticeably improve with practice.


I think that's true to an extent, but serious errors such as the woman with what looked like a horn growing out of her cheek to match a partially-occluded earring seem to happen often enough that they would call attention to people being fake.

Apart from happiness (smiling), all of the deep fakes showed a blunted affect. Genuine humans tend to have quite expressive faces, and many of the fakes looked like NPCs from an Elder Scrolls game.

These lead me to believe that in a situation where deep fakes might matter, e.g. security video presented as evidence in court, it would be possible for even a human expert to start picking up on the deepfake artifacts/signatures.


Exactly my thought - I scored 5/6 choosing very quickly the one which seemed more imperfect at a glance, but I'm no Reddit-expert photoshop-identifier or whatever. They all looked real, I'd have assumed any of them were without the 'one of these is fake' context.


I got 5/5 correct just by looking for weird artefacts in the hair and background. Just looking at the eyes alone was much harder (I sometimes couldn't tell).


Ears and neckline are also good indicators.

Face-on, the ears seem to become smudges.


You can also look for artifacts but yeah, it could have both faces generated by AI and in many cases you wouldn't be able to tell.

If you know how to look for imperfections and quirks it's easier, but ain't nobody doing it for just an image without a "this might be AI" context


Here's the important paragraph (I made it bullet points):

... reeled off a list of digital tells that he believes show the Jones photo was created by a computer program, including

- inconsistencies around Jones’ eyes,

- the ethereal glow around her hair and

- smudge marks on her left cheek


For me I didn't even look at the face. The obvious giveaway that made me spot all of them was, first of all, the unnatural bokeh that you see in all AI images, which doesn't look like anything a camera would produce. And the second thing is clothing that folds in strange ways.


After 5 minutes, I got tired and started seeing some of the same pictures again. 100% right all the time. For me, the trick is to assess the background, ear shape, synthetic textile (if any), and skin conditions.


+1 to the background trick. Faces were pretty convincing, but the generated backgrounds all seemed to have a generic filler appearance.


For me, the backgrounds were interesting for their _more salient_ features rather than their ambience, e.g. unexpected smears of color, textures that vaguely looked like real things at a glance (like fabric or nature) but wouldn't stand up to scrutiny. They reminded me of the typical "mistakes" that you see when playing around with image generators.


My heuristic in picking the fake one was to examine the fabric - the fake ones are all wearing what seems to be Dan Flashes.


Same, but I focused on their expressions. Faces with a "we are taking a picture of me" expression were fakes.

But that being said, all the pictures were insanely convincing and I picked fakes only because I knew I had to pick one and not because I knew one was fake.


Two more potential things to check: digital artefacts on teeth, and between hair and background. Now they know what to improve... The next round will be more difficult...


For me it's the eyes. A good number of AI generated photos I have seen have weird pupils, or both eyes are not aligned to where they are looking.


And teeth. Definitely a few pictures where the AI gets the angle of certain teeth wrong from how they would look naturally.


I've used the lighting on the hair. Most of the time the specular highlight doesn't match the rest of the face.


Outside of any obvious anomalies, a telling sign seems to be that the AI generates eyes symmetric about the center of the image.


yeah. but if this were randomized or corrected, it would be A LOT harder for me personally.


In almost all cases, the generated picture had an uneven distribution of light reflecting from the eyes.


I also noticed the AI often got facial hair slightly wrong.


Skin is glossier in these AI photos than it should be.


On a streak so far.

I'm playing "can you tell which picture has a non blurry background and has no artifacts?"

edit: My first mistake came when I thought a piece of fabric on a person was unusually warped.


This. The generator is really bad at compositing people into the image. So while from the actual face it's sometimes hard to tell, backdrop and foreground items (like a mic or toy) are a giveaway. So is face paint or unusual props (a fake mustache or carnival costume). Especially since a lot of images from the real-humans dataset seem to contain these.

So next time you're on a video call with someone and you're unsure if they're human or not, ask them to draw a letter on their face or have them dress like a pirate ;-)


>So next time you're on a video call with someone and you're unsure if they're human or not, ask them to draw a letter on their face or have them dress like a pirate ;-)

Is that a thing now? My cursory search for "deepfake video call" gave me https://www.youtube.com/watch?v=wYSmp-nrJ7M but other than that, there are just YouTubers goofing around with the tech. Do you know of a "good quality" deepfake video call that can fool us like whichfaceisreal does sometimes?


Some German politicians thought they had video calls with Klitschko. But it was a deep fake.

Maybe there are more instances, but this made the local news.

I only get localized results on the phone, so here is a German link. Use deepl or Google translate: https://www.tagesschau.de/investigativ/rbb/deep-fake-klitsch...


ah.... so it "has" already caused some problems. in that case it does look like an interesting idea....

Btw, Firefox has a first-party translator that runs on-device, so that works nicely.


Yeah. In that case I think the intention was to have them look bad. And since the faked person is a celebrity, enough data was available to produce a fake of suitable quality.

Maybe we will see this in the future for CEO scams. Though in that case maybe a good UI that clearly indicates that the victim is called by an external user "Mr. Big CEO <hackerperson@totally-not-s.us>" might already be helpful.


My game is looking at the eyes. The model seems to have a tendency to make faces with pretty much mirrored eye shapes and sockets, pupils free of imperfections or just plain circles, or even both pupils identical to each other. Tilt is also a huge issue: most of the AI images have the face looking directly into the lens and nearly perpendicular to the aperture, while real humans are off-center in more ways than one in all of these aspects and more, as well as having numerous orientations.

To me, looking at the background is kind of cheating at sussing out the facial features; after all, we are trying to figure out whether the face is real, not the background.


>My game is looking at the eyes

I look at the hair. In real images you can see its fine threaded structure; in fake ones it's rather blurry and inconsistent.


Looking at the ears has worked for me 100% of time.


True, in most cases backgrounds with a lot of elements (e.g. a house in the background) were part of real pictures.


Typically in AI generated images another giveaway is that when the head covers the entire height of the picture, the background may randomly change between the left and right side in implausible ways.


Outside of obvious facial artifacts, the background was the next giveaway. I got it correct every time.


Reflections (highlights) in the eyes being different, artifacts and so on. But that said, if I weren't looking for fakes I'd probably accept them as real enough.


As soon as I started paying attention to the faces I lost my streak; they are basically perfectly realistic. But the backgrounds are a dead giveaway.


Same here. Just looked at the background and got all (10/10) right.


so next: generate green backgrounds and use the same background in both pictures


It told me that the following is the real face:

https://www.whichfaceisreal.com/realimages/02794.jpeg


So the entire website is just for training


This is absolutely what’s happening.


That doesn't look like anything to me.


looks real to me


Reality has much higher polygon count, and much better texture mapping.


For all the people boasting about how easily they can detect it: yes, if you look hard at possible artifacts (especially in teeth/ears) you can, but sometimes it's not that easy, and I'm pretty sure it would fool most of the population, especially without a real reference image on the side. Photoshopped images can also be spotted easily by keen eyes, but they still do their job, which is deceiving the majority.

Edit: typos


This is a bit like chess puzzles. When you know there’s some winning tactic, you’ll sit and look for it until you find it. But in most real positions in actual games you don’t know and sometimes have to trust your gut as to whether to spend time on the details. If you know one picture is fake you’ll find it. If it’s just a social media avatar, you’ll assume it’s a real person.

That said, even without looking deeply for weird smooshy patterns, inconsistent curves, lack of symmetry or nonsense clothing, the biggest giveaway is that most AIs are pretty bad at realistic lighting. I got most of these at a glance because it's a very pronounced difference.


I've spent a lot of time playing with AI image gen and I had to think really hard about most of them. I can confidently say I would be fooled by nearly all of them if I wasn't on the lookout.


As an avatar on twitter or wherever 100% would trick me, if I even clicked the image to take a closer look I wouldn’t know if it’s the compression by the social network or the image being generated…


Also the model is trained on faces, not backgrounds. Pretty soon we’re going to see entire 3D scenes generated and rendered photorealistically through a camera model.


I don't think that's true. If they masked out the backgrounds on training, how would the model be able to synthesize a background at all?

The problem is there's too much variety in the backgrounds of the training set. They don't follow a pattern the way a growing human does.


I find the result impressive.

I'm sure this fools a majority of people, contrary to the comments here. Obviously, with detailed analysis, you can probably spot the difference, but in day-to-day activity, and without knowing that one picture is fake, it would fool even more people.


In this context you are looking for fakes, but on a website with salesperson avatars, e.g. in a chat, I wouldn't search for fakes.

I wouldn't assume the avatars are the actual people either, but I'd still assume they're people...


Eyes and backgrounds. Easy peasy, 10/10.

Backgrounds should be generated by a different model and face should be pasted in, now that would be a real challenge! Models that fix eyes already exist.


Or just use real background images and composite an AI face on top. The question is which face is real, so using the background is kinda "cheating" imo. Using a real photo for the background would eliminate that way to cheat.
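
That compositing step would only be a few lines. Here's a rough sketch of the idea, assuming you already have a generated face and a real background photo on disk (the file names, sizes and offsets below are made up), using a feathered paste in PIL:

    # Hypothetical files: a generated face crop and a real photo to paste it into.
    from PIL import Image, ImageDraw, ImageFilter

    background = Image.open("real_background.jpg").convert("RGB")
    face = Image.open("generated_face.png").convert("RGB").resize((512, 512))

    # Soft elliptical alpha mask so the paste edge doesn't give itself away.
    mask = Image.new("L", face.size, 0)
    ImageDraw.Draw(mask).ellipse((40, 20, 472, 500), fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(25))

    # Drop the fake face into the real scene at a chosen offset.
    background.paste(face, (150, 80), mask)
    background.save("composite.jpg")

Of course a naive paste like this has its own tells (lighting and grain mismatch at the seam), so the background trick would probably just get replaced by a compositing-detection trick.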


I also got 10/10, but by looking at ears and facial hair. There's a long way to go for a test like this. That said, if I didn't know I was looking for artifacts, all of those pictures would be passable at a glance.


There are almost always these strangely colored, flare-like artifacts in the background or on the skin. Looking for those, I also easily scored 10/10.


Ears and teeth artifacts, if present.

If the head is rotated slightly, the faces, especially the cheeks, get slightly distorted in the artificial images.


The backgrounds are a dead giveaway for me most of the time. Granted, I’m a professional photographer and spend a lot of time looking at photos taken with various lenses, so have become pretty familiar with depth of field and all that jazz.

That, or the backgrounds have the weird discombobulated shapes and structures that only vaguely resemble real things, which I’ve also noticed in other AI generation tools.

Either way, it still fools me sometimes and it’s pretty remarkable how quickly this has all been happening.


As a photographer, what started throwing me off was back focus issues in the synthetic images. I assumed that, if anything, the GAN would generate an image that was uniformly sharp, but I kept seeing images where the focus was just past the subject's eyes, more around the ears and hairline. Just like a real autofocus system might lock onto shirt fabric or something.


After doing 30 I was able to differentiate very quickly; it's surprising how easy it is to detect these. You can tell by abnormalities in ears, and AI probably won't show you hands because it struggles a lot. The backgrounds often look correct but don't make architectural sense. I also noticed that if I don't look at the person in the eyes it sometimes is a tell; I'm not sure why though.


It's the background. The faces look half decent but all the AI backgrounds are fucked in some way. After a few misses getting my bearings I started getting nearly 100% success rate, and within a second and a half in most cases.


I found that in direct comparison, the background often was enough to tell the difference - but that was mostly because one of the images had a detailed background with text or architecture, which I know the AI would struggle with.

I think a similar test that is not asking for a direct comparison but just "is this image real?" would be much harder, since there is no better "safe" choice to fall back on.


My detection ratio was 100% and I didn't pay attention to anything in particular, it just clicked. I don't know what gave it away. I suspect that is because I looked through so many pics on thispersondoesnotexist.com that my brain's own neural network learned how to detect them (which is still a black box to the consciousness).


It's the background.


I got the first 3 wrong, then I started looking at the necks and the background and got all of them right (although not always 100% certain I was going to get it).

A few of them do have some artifacts on the face that give it away, but this is very impressive.


I only got one wrong and it's because I clicked through a little quick. To me it's almost immediately apparent by looking at the skin texture / how the hair looks. The AI-generated skin is particularly "wavy" and doesn't look like normal skin.

Edit: Doing it a couple more times, you can tell pretty much instantly.


Teeth also give it away.


The pointer shouldn't turn into a magnifying glass when I hover over a picture. That signals that I can zoom in and look at details more closely.

Use a normal pointer.


I'm getting them all correct - just look at the hair, that's the simplest way to tell IMO. The edges are blurred and weird looking.


In contrast with everyone else, I struggled a lot with this when just looking at the faces. I made twenty attempts and got ten successes and ten failures. After reading other comments, when attempting again by looking at the backgrounds, I tried ten more times and went nine and one.

But I believe I am somewhat face-blind. I have never understood how people were able to describe faces to the cops to make those mockups of criminal suspects. I also struggle to recognize faces sometimes, including celebrities and new dating partners. At a past job, I remember thinking two of my coworkers were the same coworker until I saw them at the same lunch outing and it suddenly clicked. I recently got confused by two characters in an action movie with less than a dozen characters total, and realized shortly after that they had different ethnicities.


Similar here. Although my inability to distinguish faces is only mild. But for the longest time I thought Donna Noble[1] and Sarah Jane Smith[2] were the same character. They still look the same to me, modulo the wrinkles.

[1]: <https://en.wikipedia.org/wiki/Donna_Noble>

[2]: <https://en.wikipedia.org/wiki/Sarah_Jane_Smith>


It asks to click on the person who is "real", which gets a bit strange when "real" seems to be someone in green contacts and a wig for cosplay: https://www.whichfaceisreal.com/realimages/12481.jpeg

Biggest issue seems to be a number of images of people consuming their deformed selves: https://www.whichfaceisreal.com/fakeimages/image-2019-02-18_... https://www.whichfaceisreal.com/fakeimages/image-2019-02-17_...


Well, even cosplayers are real people...


They're a real person doing their best to look like a fake person.


The teeth were a big giveaway for me. The gap between the central incisors should roughly line up with the nose, but in the fakes it is almost always noticeably offset.


I knew Tom Cruise wasn't real.


Don't play this game! All you're doing is creating free training data for the next generation of adversarial face generation.


I'd love to see what images from the training set look most similar to a given generated face.

It's hard to decide whether these are impressive without knowing whether each face is just a real face with some minor adjustments.


You can take a photo not in the training set and usually find a close match. So in a sense almost any photo matches an AI generated one with "minor adjustments".
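
If anyone wants to eyeball that, here's a rough sketch, assuming the training images sit in a local folder (the paths below are placeholders, and a generic ImageNet embedding is only a loose notion of "most similar"): embed everything with an off-the-shelf ResNet and take the cosine nearest neighbour.

    import torch
    from pathlib import Path
    from PIL import Image
    from torchvision import models, transforms

    # Off-the-shelf feature extractor; drop the classification head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()
    model.eval()

    prep = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path):
        return model(prep(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

    # "training_faces/" and "generated.jpeg" are placeholder paths.
    train_paths = sorted(Path("training_faces").glob("*.jpeg"))
    train_embs = torch.stack([embed(p) for p in train_paths])
    query = embed("generated.jpeg")

    sims = torch.nn.functional.cosine_similarity(train_embs, query.unsqueeze(0))
    best = int(sims.argmax())
    print("closest training image:", train_paths[best], float(sims[best]))

It's a crude comparison, but usually enough to get a feel for whether a generated face is basically a memorised training image or a blend of many.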


I got 18 in a row before I missed. There's something around the corner of the eyes that's weird, but I'll be damned if I could figure out how exactly to articulate it.


On a slightly related note, whenever I see a generated face with other faces in the background, and those faces are warped in strange ways, I get a very unpleasant sensation, like a chill going up my spine. Does anyone else get this?

Example: https://imgur.com/a/eK0jMZx. I can look at it after getting used to it, but at first glance I have to look away.


Yes, it actually makes me feel slightly sick.


100% right for 10 minutes on an iPhone (zooming in as needed).

Other giveaways I haven’t seen mentioned in the discussion: vague earrings (fake). Coherent details in glasses reflections (real). If second person in picture has good details, probably real. Second person has bad details, too easy, fake. Gratuitous wisps of disconnected hair, fake. Actual clearly coherent finely detailed design on glasses frames or clothing, real.


This game seems quite easy. When I know one of them is computer generated and one not, it's easy to pick the real one. Like a multiple choice question is much easier than otherwise.

I didn't get a single one wrong, and am now playing with the rule that I have to decide within a few seconds; still all right.

Still, they're pretty good. If one of the CG images came up by itself in the course of other business I wouldn't bat an eyelid.


I always find these "which is real" comparisons interesting because there is always some type of distortion around the borders of the face, like the AI has a good idea what a face looks like but things get fuzzy when it tries to create the stray hairs a person always has sticking out.


You only need to look for pimples or blemishes on the skin. That's how you know which is the real face.


Yep. Everyone is focused on the background, but you don’t need that—all the fake faces look too perfect.


It's usually the glitchy artifacts which give the clues.

Not the face itself, but what's around it: the background, other objects, etc.

Check for weird-looking "something's not right" objects/backgrounds and you'll get most of it fine.


I got five in a row without looking for glitches or background objects, then stopped.

Look really carefully at a small area of skin. See if wrinkles, pores, hairs, and minor skin imperfections are present. See if they make sense in the context of the rest of the face.


I agree with another commenter that "Which face is real?" is somewhat easy to determine. In this scenario, it's A or B. You already know one face is fake, and one is real. It would be substantially more challenging if the question was rather "Can you spot all the AI generated faces?" and it turns out 40% of the time there is no AI generated face at all.

AI vs. Real can become somewhat easy to identify over multiple repetitions; AI vs. Real, Real vs. Real, and AI vs. AI are all scenarios that should be included to increase the difficulty, imo.


Same as the other commenters, I got the first couple wrong, but then quickly realised what I was looking for. You can see artefacts in the skin of many of the faces, and often the ears were the giveaway.


Where is the proof? How can we trust that this website is honest about which faces are real and which are fake? This might be some grad student's psychology experiment, or some artist's comment on our understanding of reality. If you can fake a face, you can fake a website. If I wanted to sell a database of "real faces" I might just generate them myself using AI and sell them to researchers as real, forever polluting such tests. That would certainly clear up any copyright issues.


I can get 80% 'real' without even looking at the visages, which are indistinguishable to me, but by focusing on the background.

On this selection at least, those with a blurred/unicolor background are fakes, and true pictures regularly have interesting things in the background.

Not hard to change, but it does tell me that the website is probably honest with its data so far.


Or they have fooled you by offering a simplistic red herring.


If the trick were that both were generated, shouldn’t we occasionally get cases where both are clearly fake?


I’m viewing this on a mobile device so I can’t zoom in too efficiently to do all the subtle detail stuff. What I caught onto was the real photos were subtly messy with imperfections. The AI ones have this idealized look to them that has a slight airbrushed effect in aggregate. If people’s faces had blemishes or imperfect skin it’s more likely real. Somehow those imperfections get chopped in AI, probably because they’re so idiosyncratic they don’t survive the AI transformations which look at features en masse.


Even a person wearing glasses messes with the A.I.; look at the image on the left:

https://www.whichfaceisreal.com/results.php?r=1&p=1&i1=image...


And the person on the right here looks airbrushed though. So this might be a closer call with another image.


> What I caught onto was the real photos were subtly messy with imperfections.

On a bigger screen, I would say that in the AI ones, the fake hair is "subtly messy with imperfections" - it's a bit like a weave or rug in places, not correctly modelling strands.


Software is good at making faces now, but by concentrating on the periphery (background, ears, earrings) it's still easy to spot a fake. Also, computers don't know how hands look, at all.


Just tried it on desktop. Way more noisy artifacts inside the faces with bigger resolution. At the scale shown on mobile they are barely noticeable (for me).


Perfect score, but it’s such an impressive technology. Traditional graphics with triangles and ray tracing are fake at a glance, but here you need attention to detail and a bit of wit.


I got the first 5 wrong because I thought I was meant to pick the computer, not the real person. After that I did ~20, got them all correct.

There are a lot of tells. Glasses make the edge of the eyes look strange. Around the ears, hats, and sometimes the backgrounds, the blurs are wrong or corrupt.

However, if I didn't know one of the two was generated, I wouldn't look for anything and would probably just assume it's real, unless there was really obvious corruption on the face.


The faces look pretty good for the most part, but there a few things that usually give away the generated photos:

- artifacts in backgrounds;

- weird patterns in clothes;

- clothes that are very ill-fitting.


I feel bad for the people where I thought 'this one is too ugly/asymmetrical to be real', and then they turn out to be the real ones...


Real people have imperfections. If it looks like an airbrushed photo, it was probably generated by an AI trained on airbrushed photos.


Since this uses StyleGAN, it's relatively easy to tell when an image is fake or real, since the network seems to have trouble with backgrounds and faces that are directly adjacent to the main face.

However, since diffusion models are all the rage now, I think we would perform significantly worse with landscapes or images of fruits and animals, especially if the task is "distinguish between the real and fake art".


I found the professional headshots hardest to tell, since real ones purposely bokeh-blur the background, which gives the glowy edges around the hair.


This site was put up in 2019, presumably with images from 2015-2019 era algorithms. This was early in the viability of these image generation techniques, so the author's work is kind of prescient.

However, the state of the art of image generation has moved on; I suspect a 2022 version of this would be substantially harder.


At first I was tricked a handful of times, but I trained myself in what to look for. At first uneven blemishes proved a useful heuristic, but then when I looked deeper I found the edges and backgrounds were even more effective. The fakes somehow feel like they are in this... Oily world of illusions.


My hypothesis was "children are always real". Got about 15 right before a fake child appeared.


Any picture with more than one person is real. If there is a second partial face, or a shoulder or hair, or any other sign of another human, then it's the real one. They need to clean up their data, unless they're testing for people figuring that out.


Is this a serious topic worthy of serious responses from high ranking HN readers?

Depressing. They're both photos. A photo (of 'reality') is, at its very best, already just a representation of the subject. Both are (technically) fake, aren't they?


There's a rather important distinction between an image of reality and a purely synthetic image.

I'm really not sure what point you are making.


A photograph of reality can and often does look 'unreal', or odd, or fake. So many, many aspects cause this: lighting, expressions caught mid-point, even the colour. My point is, the invitation is neither a robust nor an intriguing one. Nevertheless, it is a success for other reasons.


Some of these are just straight up insane fever dreams if you evaluate the entire photo instead of just the face. After easily getting 80%+ correct, I had to stop. It wasn't from boredom, but getting creeped out by how grotesque some of these fakes were.


human trains GAN model --> GAN model trains human


Apparently this face is real: <https://www.whichfaceisreal.com/realimages/59264.jpeg>.

If you say so, I guess.


Got about 10 misses in a row, never once had the correct one (and I was trying)...


Eerie. It seems like most people here could tell, but with my morning vision and on my phone screen I did terribly. Not sure if it’s because I can’t see as clearly as usual or I have some deficiency in identifying faces.


Easiest thing for me right now is the background.

If it has blurry and screwed up bokeh or random patterns, it's probably the fake one. If there's something incredibly detailed, but blurred, it's probably real.


Eyes, teeth, and ears, that's my endgame… plenty of anomalies there. It took me about one per second to get 20 right in a row, then I stopped playing. I figured I was training the system… you're welcome!


I played about 10 of them and got them all right. It’s very impressive, but I just happened to know that ears and teeth are particularly problematic for these generative models (for now).


Showing in green whether you were correct or not is a really good UI move


This was surprisingly easy when looking out for artifacts. StyleGAN2 significantly reduces artifacts; I'd be very interested to see StyleGAN2 on this website as well!


The biggest tell for me after ears, hair and backgrounds was hard-lighting. None of these generation models 'learn' shadows with point-like lighting.


I wonder if it is possible for a generated face to match a real face. A person with a real face could then be detected by an algorithm as unreal and be unable to access facilities.


Dead simple if you just focus on the ears.

Every single fake image has poorly rendered ears, which makes perfect sense as the contouring would be hard to get right for an AI


And the teeth


I think if the background was removed it would be harder.


Faces are great, but the backgrounds are wrong, also the clothing, sometimes the ears, and there are strange circles here and there. I get ~19/20 correct.


shadows also help in identifying the real person.


Oddly enough I was rocking 100% accuracy by ignoring the face and looking at the background.

Maybe a face-centric ML isn't so good at backgrounds. Shocker!


After a while I could determine the fakes by carefully scrutinizing the ears. On first impression I can't tell them apart though


20/20 chosen correctly. Yes I think I can. The AI images are very impressive, but there are always artifacts that give it away.


Seems like I have been looking and playing around with too many of these AI-Generated imagery! I got the first 15-odd all correct.


So far it struggles with ears, neck, and hair


Amazing! I can only guess the fake ones because I know how AI / GANs work and I know the weaknesses.


The real image loads faster for me than the fake one, so I just click the picture that loads first and it's correct.


The blending between background and hair is the real giveaway. Some of the faces are really quite convincing.


Earrings are often a tell — many generated images have mismatched earrings, or just one ear with an earring.


All the fake ones I saw have the same "jpeg"-like artifacts in the face, and the hair always looks strange.


Just look at outlines, especially the ear area or earrings on women; AI can't make real ears yet. The face is already OK (though AI can't really do skin imperfections), but ears are a giveaway.

I tried 3, got all correct, no point in wasting my time anymore; same issues as always with these photos.


There is at least one person with five incisors between their canines in the real face camp.


Fake images seem to always carry artifacts on them. I just looked for them and got 100% right.


Interesting, but after doing about 50 I found I was getting the correct answer 100% of the time


Too easy! StyleGAN always puts a characteristic weird blob somewhere on the edge of the image.


Never got one wrong, pretty obvious to anyone with eyes. What's the point of the site?


I mean you can just about tell when you know for sure that one is fake, but this is scary.


I went thru 12 I think. One miss. Irregular pupils were what I keyed on.


I suggest developing an AI that can tell which one is a robot and which is a human


Isn't that what we are training when we complete tasks on these sites?!
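
Pretty much, and you don't strictly need the crowd for a baseline. A minimal sketch of such a real-vs-fake detector, assuming you've saved labelled crops into faces/real and faces/fake (hypothetical folders) and just fine-tune an off-the-shelf ResNet:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Hypothetical layout: faces/real/*.jpg and faces/fake/*.jpg
    prep = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    data = datasets.ImageFolder("faces", transform=prep)
    loader = DataLoader(data, batch_size=32, shuffle=True)

    # Pretrained ResNet-18 with a fresh 2-way head (real vs. fake).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

Detectors along these lines tend to do well against the generator they were trained on and degrade on new ones, which is the usual cat-and-mouse problem.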


It seems like the real face is always the less "perfect" one


Muriel Bowser is in there.


All the images with people in the background have real faces so far


Look not at the face but around the ears and the backgrounds.


New captcha challenge?


StyleGAN artifacts are too easy to spot. I was 20 for 20.


Already very outdated


The background is enough to get me 9/10 of these.


Getting all of them correct, I got bored and stopped.


AI still doesn't get the eyes right^^. I got 8/10 right; if it's not the eyes, it's patterns in the hair that recur.


Have we trained an AI to tell the difference?


are we just training someone's model?


I really wish it tracked your accuracy


I did like 20 and missed 1.


pretty easy. mostly there are artifacts in the background/hairs


Glad it's too easy.


I got 9/10 correct.


The necks give it away.


trivially easy if you look at the artefacts around the faces.


True, I got 9 out of 10 correct, just looking at the hairtifacts. However, I might be fooled by this if I wasn't primed to look for which one is fake and which one isn't.

I got one wrong because it was bad quality photography, which created artifacts of its own.


9/10, it's amazing


We rarely see real faces any more in photographs or videos. On YouTube, Biden looks 60 and without wrinkles; everyone uses tons of filters.

Perhaps that is the reason people are doing badly in these tests.


Not fun, super easy



