I was aware of most of the data protection and privacy concerns presented, but I wasn't aware facial recognition systems are being used as widely as suggested here.
If these systems ever become widely adopted I might seriously consider obscuring my face in public.
Here in the UK I've noticed over the last year or so many supermarket self checkouts have been fitted with cameras and screens. I'm not naive enough to believe I wasn't being recorded previously, but I can't help but find this trend of sticking a camera directly in my face whenever I'm trying to make a purchase extremely insulting and violating to my sense of privacy. Now after watching this I have almost no doubt that facial recognition software is installed on these systems.
I've spoken to other people about the rudeness of this but most people seem to think it's fine. Perhaps I'm just weird and more bothered about this stuff than most people. If sticking a camera in someone's face when they're trying to purchase something in a pharmacy isn't going too far though I do wonder if the average person would really care about anything presented here.
I played around with Amazon's Rekognition software. I took one of those youtube videos where someone takes a picture of themselves every day for 10 years. It was fairly ideal conditions (consistent lighting, same pose for the most part), but the kid also went from 12 years old to 22, so his face definitely changed a lot. I used the first image as the image to compare the rest to (12 years old at the time), and I was surprised that it matched almost all of them with a high degree of confidence (80%+). And the 80% ones were terrible lighting, sunglasses, or an image of his girlfriend he slipped in there.
Even the sunglasses, beard, face paint, bad lighting or puberty didn't throw off the model.
The open source dlib model was considerably worse, but AWS Rekognition was incredible.
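For anyone curious what that experiment looks like in code, it maps onto Rekognition's CompareFaces call via boto3. A rough sketch (client creation and image loading are left commented out; the helper names are mine, not anything Amazon ships):

```python
def best_similarity(response):
    """Pick the highest similarity score out of a compare_faces response,
    or None if Rekognition returned no matches above the threshold."""
    return max((m["Similarity"] for m in response.get("FaceMatches", [])),
               default=None)

def compare_to_reference(client, reference_bytes, candidate_bytes,
                         threshold=80.0):
    # compare_faces accepts raw image bytes; Rekognition only returns
    # matches whose similarity is at or above SimilarityThreshold.
    response = client.compare_faces(
        SourceImage={"Bytes": reference_bytes},
        TargetImage={"Bytes": candidate_bytes},
        SimilarityThreshold=threshold,
    )
    return best_similarity(response)

# import boto3
# client = boto3.client("rekognition")
# score = compare_to_reference(client, day_one_jpg, day_n_jpg)
```

Running the first frame of the video against each later frame in a loop is then a few extra lines.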
There was this one incident in China where the facial recognition system mistook the face of a Chinese celebrity on a bus for a jaywalker... so the system isn't perfect for special conditions/environments yet. However, I do believe that results today are already outstanding and will only get better.
Funny you should say that. In another post I compared sitting congressmen to convicted felons:
- 440 images of congressmen
- 1,756 mugshots
- 10 mismatches (?) with 70+% certainty
The highest was 86%, but to be fair, I wouldn't be able to tell you confidently for all of them that the convicts aren't the same person. And under 80% should be suspect anyway. It's just that you need to use the right statistical methods when comparing a person to a large pool because you'll have spurious matches.
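The multiple-comparisons point is easy to make concrete with back-of-the-envelope arithmetic. The per-comparison false-match rate below is a made-up, deliberately optimistic figure; the point is that the sheer number of pairwise comparisons, not the per-pair error rate, drives the spurious hits:

```python
probes, gallery = 440, 1756
comparisons = probes * gallery  # every congressman vs every mugshot

# Suppose the matcher's false-match rate at the chosen similarity cutoff
# were as low as 1 in 100,000 (an illustrative number, not Rekognition's).
fmr = 1e-5
expected_false_matches = comparisons * fmr

print(comparisons, round(expected_false_matches, 1))  # 772640 7.7
```

So even a matcher that is wrong one time in a hundred thousand would be expected to cough up a handful of "felon congressmen" in a pool this size, which is why 1-vs-many searches need their thresholds corrected for the size of the pool.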
This attitude disturbs me more than any other single aspect of the mass idiocy around adopting AI for critical things. 80% is horribly low accuracy for anything even remotely important.
For example, imagine you went to a store and could tell the cashier any price for anything so long as it was 80% accurate, as in, 80% of the original price. Just a 20% potential discount, nbd.
Or put another way: 80% of your items have a perfectly accurate price, but you bought 5 items and priced one of them, a PlayStation 5, at $1. It's fine. The rest were accurate!
80% is extremely low accuracy. It's absurd to think that's a good level to cut things off. We should demand systems like these demonstrate 99% or better accuracy. Until then they should be illegal to apply in any scenario where a decision is made about another person.
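And the base-rate problem makes even a much better number misleading. A toy calculation (all numbers illustrative) shows how a 99%-accurate matcher looking for rare targets still buries the true hits in false alarms:

```python
crowd = 100_000    # faces scanned in a day
suspects = 10      # actual wanted people in that crowd
accuracy = 0.99    # treat as both sensitivity and specificity, for simplicity

true_hits = suspects * accuracy                      # ~9.9 real detections
false_alarms = (crowd - suspects) * (1 - accuracy)   # ~1000 innocent flags
precision = true_hits / (true_hits + false_alarms)

print(round(false_alarms), round(precision, 3))  # 1000 0.01
```

Roughly 99 out of every 100 people the system flags would be innocent, even at 99% accuracy. At 80% it's not worth discussing.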
If public shaming like that was attempted in the US, I imagine in some circles it would become a goal to repost on social media a photo of yourself on the system jaywalking. Probably holding a sign with some meme.
They're completely different. This is solely for intimidation purposes.
Cameras previously would show a wide angle view of the store so you can see when someone put something under their jacket, etc. I can understand and accept this.
In comparison these new cameras have a very narrow angle of view; they're zoomed in on your face and they're in portrait. They would be completely useless if you wanted to see whether someone was, say, putting something under their jacket at checkout.
These are there to purposefully record your face and let you know that they're doing that full colour and HD whether you like it or not. It's vile and extremely rude. I've never signed an agreement accepting such an obtrusive and unreasonable violation of my privacy when I enter a supermarket - at least online I'd have to accept T&Cs before placing my order.
> These are there to purposefully record your face and let you know that they're doing that full colour and HD whether you like it or not.
I believe (?) this, as it is stated above, would be illegal in France under the current laws. You have to consent to be the _subject_ of photography in a public place.
Of course, advocates of such systems would argue against that easily.
The difference is in the intent. A security camera films a place and doesn't focus on individuals. If the camera were to run software that would autofocus on a person, that would make them a subject, there would need to be consent.
You'll find a helpful diagram (though incomplete) on this page https://fr.wikipedia.org/wiki/Droit_%C3%A0_l%27image (French), and there are of course (as with anything legal, it's a rabbit hole) more specific laws concerning security cameras.
My understanding is that these aren't for intimidation but rather to reduce theft by giving people an outside view of themselves. A mirror can also serve this purpose, but may be more difficult to fit in certain spaces. If this is correct, then these cameras aren't recording, just displaying the image directly on the screen. Here's an article about it https://economictimes.indiatimes.com/blogs/et-commentary/mir...
Stop shopping at those places? But... if you have stuff from them delivered, should you turn the tables and have your own face recog stuff turned on them?
You're not turning the tables on them, you're turning the tables on the random gig economy worker who got assigned your delivery. It's not like the CEO of the pharmacy chain is delivering your case of red bull or something.
Wear your covid mask, sunglasses and sunhat everywhere. Also put heel inserts in your shoes to change your height/walking gait. Or just go full privacy burka with a headshape altering chef's hat or mitznefet.
I was somewhat hoping full face mask/helmet wearing would become in vogue in the past two years with all of the masking requirements, like we see in so many sci-fi shows and movies. Algorithms would move to gait analysis or other things, but at least facial recognition could be crossed off.
If an airborne pandemic can’t move society in that direction, I fear that we’ll never get to our perpetually masked future.
> The technology firm had already been working on a system to meet the needs of allergy sufferers who wear masks
Yes, it's the needs of the allergy sufferers that drove this project from the start. Hi NEC, I wear a mask because of allergies but I'm worried that will prevent private corporations and governments from tracking my every move, can you fix this please?
I went to a Walmart subsidiary recently, called Líder, and they pulled every bitchvictim move they could pull. More in fact, I now legally have standing to sue for abduction for April 17, 2022 15:49:22 hour at the Líder at 560 Merced (which gives receipts as Walmart) and sent six guards at me claiming I didn't pay, herding me back to their grounds where they have total editorial control over their cameras--but none were interested in looking at a receipt. I started screaming "If you accuse me of theft you have to call the real police!" And a pharmacist across the street heard me and called my family, they showed up with a lawyer who agreed it was highly abusive of their power (these guys wore clothes that said SWAT).
At that point, if the supermarket will accuse me of theft despite paying, DUDE! Everyone has the moral authority to steal from Líder. Líder toes the line of theft literally down to the micrometer (ie shaving microns of plastic) and prostitutes lawyers to say they never steal. They steal, but defend themselves legalistically, ie shitloads of lawyers showing up doing bitch moves quoting laws that were written by culpable serial rapists in the 1973 coup. Economic theft.
If you're going to be falsely accused unconditionally you're obliged to fulfill the accusation with real harm in some way on the grounds that you will be persecuted anyway. It's acting in self-defense, because false accusations of violence are an exactly equivalent act of violence in the opposite direction. And you can't let magic spells disarm you, there's always going to be some conspiratorial threat--just like 7 Walmart employees conspired against me that day, the cashier CE. NEWEN and the 6 guards, all of whom knew they were carrying out a crime.
I personally refuse to carry out this theft myself or benefit from this because it would stain my heroism, but they'll accuse anybody of stealing either way. I'll show up to testify on your behalf if you quote me on this, as a hero according to Roman Law from May 12, 2012, like in US or in Chile which are based on Roman Law
I boycotted Líder, funny thing abduction, guess that's a good description of their market power, they can do all kinds of shit (though in America I was basically satisfied with their business against Amazon), the only thing limiting their total market dominance is the percent of people they abduct. And I only boycott them 90%, that's the trick, when they make a crazy effort to break the boycott I yield in an un-constructive way.
Dude you can totally buy in like a crappy little store run by immigrants which has no Facial Recognition. That's what you have to do. Otherwise, go without. Live downtown. Else you get HD deepfakes INCLUDING framing you as stealing from them in their camera editing zone or, if you sue them, making porn with your face like the kids in school do with photoshop, cyberbullying. There's media blackout on that.
Your story is relevant and what you went through is horrible, but you may want to practice telling it in a clear and concise way. At first glance your post sounds like a crazy person rant, and it would be unfortunate for people to brush you off without actually reading your post.
I know I'm replying nearly a week later but I just wanted to acknowledge how helpful and compassionate your comment is. Granted, based on the reply you received I'm not sure if you managed to reach the parent poster, but it's always nice to see genuine kindness on the internet.
It wasn't horrible because I did something about it. It wasn't a tragedy.
I am in fact practicing (EDIT: this came out wrong, I meant attempting) clarity and concision. So listening to other people's tragedies and what they went through--which always comes out rant-like--is helping me tell my piece the right way, with absolute adhesion to the truth.
Partly that's a scam, they show you a cartoon of a madman and tell you "that's why he's mad, sweating, screaming, shaking, accusing, because he's stupid" then after that inb4, that poisoning of the well, nobody respects that stereotype until it's you who's mad, sweating, screaming, shaking, accusing, and being called stupid. And then it's too late. First they came for the Jews right? Wrong, first they came for the madmen, and perfected their extermination techniques with them, and nobody said anything, nobody visited them. And then they came for the Jews, and then everybody else. I remember it starting with the Jews, in Holocaust museums it starts with communists, in America it starts with Jews sometimes and Socialists other times, it changes all the time. But it never ever starts with the mad (Although Communist is close enough, they were thoroughly accused of being mad, Salvador Allende did medical research on the subject to vindicate them).
Putting it here in full, best I can do:
.
.
[0th they came for the ranting crazy persons
to figure out all the ins and outs of abduction and execution
but I won't even include them in this poem
Because I was not and still am not a crazy person]
In about 2000 a friend of mine worked in Las Vegas. At that time, they used facial recognition to identify everyone coming in the door to detect card counters and other troublemakers. They shared this information widely among all the casinos.
It is > 20 years later now... I wonder how accommodating they are when a whale walks in?
I was initially put off by the webcam permission requirement, but the terms and conditions page says it's basically an art project and they don't send any data off (unless you explicitly accept it at the end) so I gave it a chance.
I'm glad I allowed webcam permission because it was an interesting, informative, and fun look at biometric tracking.
Apparently I'm "violently average" which is not a way I would previously have described myself. According to this site the most unusual thing about me is that I read the terms and conditions before ticking the "accept" box.
Same, but I think people generally overestimate how different from average they are. (It's from the webcam's POV anyway.) Usually I also don't like interactions with the webcam, but this was interesting enough.
Same, I had the wrong webcam plugged in, and when I plugged in the right one the website bugged out, so I had to reload. It said I didn't read the ToS but I did... oh well.
I didn't read them because I don't believe them anyway. If a website uses my camera, I'll just assume that everything is recorded and sent to advertisers and shady governments before I close the tab. So I did not want to continue with this one. It was after reading the comments here that I decided to give it a go. The terms and conditions played no role.
> According to this site the most unusual thing about me is that I read the terms and conditions before ticking the "accept" box.
I’d put forward the hypothesis that people who read the terms, and who are therefore concerned about privacy, are also less likely to be willing to agree to submit the data at the end.
I think many people commenting on the model making bad predictions are missing the point.
The speaker argues that even though models are known to be inaccurate, companies like tinder or insurance companies might still use the model outputs since they have nothing better.
Therefore, in some future (or already today?) you can suffer from bad model predictions because you are "not normal enough" for the model to make good predictions, and might therefore receive a wrong predicted life expectancy and higher insurance bills.
Insurance companies hire very smart actuaries... and thus currently use actuarial models. Actuarial models aren't perfect either. However, throwing them out to use one of these machine prediction models would almost certainly be a disaster for the insurance company. And there is currently a lot of competition for life insurance.
Using this sort of data to enrich and refine existing data, vs throwing current stuff all out in favor of this newer data... that's what I'd expect (enrichment vs replacement). I'm fairly confident insurance companies have areas in their models where they know there's stuff they don't know. If more data can enrich their models to provide better accuracy, why wouldn't they?
The classic danger here is that it's very easy to accidentally overfit. What people tend to do when they get a perfectly good actuarial model and then hear that they can "enrich" it with additional data is that they start modeling noise. This is obviously not good to stay profitable as an insurance firm.
yeah, that's fair (and not a problem unique to acturial models).
That being said, enrichment with public data/claims data etc is generally incredibly effective, so as always, what data you add matters a lot more than whether or not you add data.
Perhaps, but the demo did not show anything an insurance company would not already know. But as a wider observation, financial products are already based on models even though none of them are perfect. Making those models better isn't really a bad thing.
If one of these models is on average better, then they would gain an advantage by using it. The problem is for the "not normal enough" folks, it may be _harder_ to remedy an invalid classification, particularly if there are no fallbacks or work arounds. I was cued into this once by an ML book that gave an example of a fraud detection company using an actually worse algorithm, because when it gave false positives it was easier to understand and hence easier to manually override. But if it is less profitable to operate this way, and there is no regulation around it, people getting falsely classified may be out of luck. That's where the discussion around regulation needs to happen, I think.
This is the worst anti-surveillance argument. The last thing I want is to be accurately predicted. As far as I'm concerned, once the models are perfected and they can accurately predict everything you will do or say, things will be far worse.
This isn't an unsolvable problem though, since you can calculate the strength of model fit for a particular data point. You learn how to do this with linear models in stat 101, so everyone who is paid to be a data scientist will no doubt understand this.
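For the curious, here's roughly what that stat-101 idea looks like for an ordinary least-squares fit: the leverage of a new point measures how far outside the training data it sits, so the model's predictions there deserve wider error bars. Toy data, numpy only:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)                 # training inputs live in [0, 10]
y = 2.0 * x + 1.0 + rng.normal(0, 1, 50)   # noisy linear relationship

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
XtX_inv = np.linalg.inv(X.T @ X)

def leverage(x_new):
    """h = x' (X'X)^-1 x -- grows as x_new moves away from the training data,
    flagging points where the fit should be trusted less."""
    v = np.array([1.0, x_new])
    return float(v @ XtX_inv @ v)

# A point inside the data range is "normal"; one far outside is not.
print(leverage(5.0) < leverage(50.0))  # True
```

The deep-learning analogue of this quantity is much harder to get, which is the sibling comment's point.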
Unfortunately the theory for linear models does not translate easily to deep learning based models, which this demo is based on.
The "strength of model fit" becomes much more complicated and is an active field of deep learning research.
For the extra paranoid, I can confirm that it does work offline after you press the button and let it load. You can hide your face until it loads, and then cut the connection. Be sure to close the tab before you go online again.
Most people have such weird and illogical views on privacy. A website collecting face pictures without anything else is pretty useless. Walking in to a retail store owned by a company using facial recognition on a huge range of owned stores is a serious privacy issue.
And the bait on the hook is not a good indicator either - an appeal to a deep drive or insecurity in many people.
Obviously not determinative in itself ,but if I wanted to harvest a lot of faces in a hurry, that's exactly the sort of bait that I would use (I can hardly think of a better one off the top of my head).
Not useless at all, there are usually enough pictures of people on the internet with names, metadata, etc attached to immediately link and identify. Basically what Clearview does (did?). I would not be surprised if this was a data collection siphon.
I don't think this is a scam/data collection site (as others have recognized the researcher involved, etc.), but what would stop a random website from claiming it was "sponsored by the EU"?
Could be used for a scam, by a stalker, for social engineering, or several other evil ploys. Today, there are so many possible bad actions which are happening... On average, they are unlikely, but not impossible. And who knows about the long run.
> If you open a website in a fresh browser context and let it use your camera, isn’t this about the same as walking down a street with CCTV cameras?
In my country, there are harsh regulations on public cameras. Private individuals are not even allowed to capture you outside, except in the background.
It could use that to impersonate you in a mobile banking application through biometric authentication. It's getting pretty popular in my country (I think it's required by law or something).
Do you avoid tourist attractions and cities so that you aren't included in other people's photos? Even though I'm not an Apple customer, for example, I'm sure they could infer much of my movements from their photo database using only other people's photos (I live in a city).
I work for a company that does projects that are sometimes funded by the EU. I've never heard of the EU ever asking for something in return. You submit your project, the EU either decides to give you money, or not, and you do your project. Sometimes they check whether you've actually done what you said you'd do. No one in the EU has time to deal with the output of the billions of euros they invest in random projects.
A. No way for me to verify nothing shady is in the investment deal, and
B. The government may be able to turn around when the project gets big, and say "hey remember when we funded you? Yeah, now we have a say. We need you to do ___."
I don't like using software or projects funded by things I disagree with. And the government is overwhelmingly so.
It's not clear if you're trying to make us feel better (the EU doesn't care about this project's data) or worse (the EU has ineffective oversight over this project -- anything could be going on).
If it's a small project they very rarely «check» that you have done what you said you'd do. A lot of it is self-reporting. The reporting of course takes a while and you have to describe, for instance, your results, which wouldn't be too hard to fake, but I assume very few do this.
I stumbled across this when it launched. Here's the talk by the artist that gives some background information:
https://youtu.be/bp23r-Gtdkk
Rather than talking about webcam permissions, I think we should talk about how much we use and rely on bad ML models. Dating apps might want to rate attractiveness, but we have no checks in place to see how we're being rated. Especially free open access models probably don't come with a thorough bias&limitations datasheet.
I'm also reasonably sure other factors also influence these algorithms. Hair style, framing, lighting, the clothes found on your shoulders, angle, distortion by the camera lens, and most importantly, similarity to high-scoring faces in the source data set.
My self-esteem is fine! A 5 is not bad: half of people are prettier than me, half are less pretty. But calling a "5" a "1" is a little mean, no? ;-) Don't really care.
Sorry to break it to you, but I'm pretty sure the wording on the site (which I don't have open anymore) suggested 5 was below the "normal" range, which presumably means it's well below the median. I'd be extremely happy to hear some more flattering interpretation.
Then you're missing the point the website makes about this and other algorithms like it being extremely unreliable. There are tons of biases in the training data, and not just ethnic/cultural ones mentioned. For starters, you're comparing a crappy webcam to a model that in all likelihood is based on people's best selfies.
The model tries to fit you into a very narrow niche. On top of that it will do so poorly.
Interestingly, if you click on the ToS you end up on a page explaining how it works (can't link it).
The beauty scoring model was found on GitHub (this or this one). The models to predict age, gender and facial expression/emotion are part of FaceApiJS, which forms the backbone of this project. Do note that its developer doesn't fully divulge which photos the models were trained on. Also, FaceApiJS is bad at detecting "Asian guys".
Apparently some dating apps rate their users with these sorts of algorithms. Maybe I'm living under a rock, but I did not know that was a thing.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 786641.
Indeed - I was also surprised by this. One possibility, which I hope is true, is that the authors are funded for something else (some actual research). If they spent a few hours of their workday to throw this together, they might be obligated to cite their funding agency.
I know for a fact that I've acknowledged funding agencies on papers about topics that were at best extremely tangential to my grant.
A less flattering possibility is that they wanna use EU affiliation as a badge for respect for privacy or something like that.
You have to label it clearly in your project if you have received Horizon money. It's a requirement from CORDIS. The funding is for the whole SHERPA project, not just for that page.
The first objective of SHERPA is to "represent and visualise the ethical and human rights challenges of SIS (artificial intelligence and big data analytics) through case studies". This page does that nicely. That it is put together with available resources instead of some over engineered solution is just a plus.
Same here. I selected my real age and the algorithm thinks I'm 7 years younger than that (well, thanks I guess, but it is not me who's not telling the truth).
I could have worded it better -- the funny bit to me, is that an article that's all about how these algorithms are questionable at best decided to side with the algorithm against my word!
And, yeah, no one has ever accused me of being normal :-)
Apparently I'm just shy of being fugly and look seven years younger than I am. Which is kind of damning with faint praise, but in a dark room when I'm barely awake? I'll take it.
It underestimated my age by 15 years. Not sure if this is due to the AI being biased or simply inaccurate in general. (I'm Asian, and from the experience of me and others, Asians tend to get their age greatly underestimated in the west.)
I think my inclination is to trust the algorithm more than any humans, which is the interesting part! It gave me a confidence boost far more than I've felt when other people have told me such things.
It said I was sad while looking at the doggy, but I think it was because I was trying to keep up with the messages it displayed while rendering the doggy, and one of the messages said something about it trying to determine whether I am Donald Trump, which must've increased my frown a bit :P
I don't recall if the score went from 1 to 10 or 0 to 10 (or even if they mentioned that).
If it was the latter wouldn't that put you slightly over average beauty? Even if not, it could have been worse to be determined "less than average" by a machine!
Got my age too high, when in real life I usually get picked out as younger than I am. BMI was significantly off too. Can't argue with the not-pretty assessment though! Might give it another chance under better light later.
I'm mesmerized that so many of the HN crowd are willing to let some website take pictures of them in order to present some "funny/interesting results".
It just goes to show you that the vast majority of ppl don't care about privacy (FWIW, I used the site as well and didn't care at all, didn't even read the disclaimer)
I live in a big city where my face is captured in millions of frames by security cameras everyday.
I used android devices, now I use an iphone, and I keep cloud accounts and personal photos backups with both companies. I have linked-in, facebook, instagram and handful of other accounts with varying levels of personal information.
This cat is already out of the bag, and has been for quite some time.
It guessed my age wrong, my BMI wrong, it got the thing that I came closer wrong, it got my expression wrong, and my attractiveness, which SHOULD be much higher ;)
I gamed the attractiveness model for as long as I could, trying different angles and glasses on/off and hit 9.2. If dating apps use a max() across pictures that would be great.
It put me at 82% and put my age at 20 (I am 34 and visibly graying). This was the project of a student at a local uni, so I'm not going to put too much stock in it. But as student projects go, it's pretty slick.
Heh. It got almost everything off. The only two things it got right were gender and that I've smiled.
[?] Read terms - hah! I wanted to check what would happen if I don't tick the box. It let me through. I would've most likely read it otherwise.
[?] Beauty - okay, I'm flattered.
[-] Age - it thought I lied, made me almost ten years younger. I guess the model was trained on studio-quality photos, not what my camera had produced against a bright window.
[+] Gender - yeah, it got me, I'm an attack helicopter^W^W^W^W. Just kidding, but yeah, the model had worked correctly.
[-] BMI - it thought I'm overweight. I was underweight until very recently when I've started to work out and grew a bit of muscle making it to the lower threshold of "normal" BMI range.
[-] Life expectancy - hard to argue about it, but it's probably somewhat off, because I've moved almost halfway across the globe.
[-] Came closer - that was really weird because it thought I didn't, but I sure did. I got at least 15cm closer, going from my face being 1/4 of the screen tall to almost filling it vertically. How it missed that is beyond me.
[+] Expression - OK, yea, I think I've smiled. Not sure if that was at that doggy - I was mostly just grimacing at myself.
So now that it's told me about my normalcy, I wonder the same thing - is that model normal, i.e. is it representative of what most people/companies who employ models like this actually use?
Meanwhile I was weirded out because it guessed my age exactly. I'm also 34 and I'd like to think I look a bit younger, but the algorithm doesn't agree.
Surprisingly fun, I’m glad I allowed camera permission for this. My favorite subtle detail was the username on the camera view updating to reflect the different bits of information it was predicting about you.
That's fantastic; I enjoyed the presentation. It's pretty revealing.
Also the computer thinks I'm a 6.9 but decided I was 6 years older than I actually am. What a wonderfully well executed demo.
It makes me question how computers handle attraction vs. age. I know from personal experience that I will think someone older is more attractive if they wear their age better. I consider myself a 3-4 but that's knowing my age, could there be a case where the perceived age affects the perceived beauty?
If anyone is interested in similar types of anthropology studies, check out Qoves Studio. (They also have a similar website, and a paid service by actual persons with experience and degrees.)
I have often wanted to study Beauty/Facial Expressions/Emotion Display etc. and had browsed Paul Ekman's books. Given how much we judge each other (most of the time, unconsciously) based on a snap view of the face alone, I believe this is an important subject for everybody to understand. Your Qoves Studio links (in particular, their Youtube channel) seem very interesting for gaining more knowledge on this subject.
I like this AI more than hotornot. The AI gave me 7.2 score, while at hotornot (this was... 20 years ago, at this point?) I don't think I ever even got a 6...
This web page is a sycophant. It told me I looked 15 years younger than I do, and said my attractiveness was several points higher than empirical evidence from my 54 years of life have told me is the case. But it did also call me a liar, so not totally a sycophant I guess.
Based on algorithms like those, my 6-7ish beauty score would stop me from becoming a popular social media influencer if I wanted to.
Isn't that lovely!
Apparently I'm also ugly, but I'll take consolation in having 81 years more to live. Sadly, that's probably because my age came out as NaN. I'll just take that I'm ageless :)
Dreadlocked, black guy here. Got a "violently average" 72%; age was off by almost 10 years (the AI thinks I am older than my real age), and my attractiveness was about 5, which given my human interactions, is very low haha. BMI was more or less accurate. Guess the training data isn't much for someone like me. Really cool though.
This is fun, but it got my age very wrong, the first run through failed to guess at all, the second run through it underestimated by 13 years.
It underestimated my BMI a bit, though I am losing weight at the moment and my face does seem to be getting thinner so maybe that's thrown it, and generally it told me I'm quite attractive, so ... all good :)
> No personal data is sent to our server in any way. Nothing. Zilch. Nada. All the face detection algorithms will run on your own computer, in the browser.
> In this 'test' your face is compared with that of all the other people who came before you.
No data is sent, but your face is compared with that of all the other people that came before you....
At the end it asks whether you want to send your scores so they can adjust their metrics for who's normal. The models themselves are pretrained by third-parties so it's just comparing scores.
This appears to be a modern variation of phrenology. That doesn't make it a hoax or useless, maybe phrenology was just analysing the wrong features of the head. But it should start with the same (skeptical) epistemic status.
I think the website has some bugs. The section results often appeared before the section finished loading and often the section video would never even run.
Hope he can figure out how to remove the bad data.
It said I was way heavier than I was, way younger than I was, and way more attractive than I am. All the while I was staring into it with good lighting. Me thinks it is not useful.
I'm more beautiful using my laptop camera (6.7) than my cellphone (5.6). I was serious on the cellphone and smiling on the laptop, this could also have tipped the scales.
This is what it did to me as well. Seemed to be picking up only the right side of my head, centered on my ear. I guess that means I can just double the number it gave me, right?
I can see this easily devolving into affiliate links to Korean cosmetic surgery hospitals. "But you could be PERFECT with just a chin tuck here... rhinoplasty there", etc.
When I saw that it said I lied about my age (it thought I was nearly 10 years younger), I wondered whether there was going to be another metric later about whether being accused of lying made you appear angry. I was disappointed by this one omission, but greatly enjoyed it overall!
I always find it funny when .eu websites for projects sponsored by the EU are all using registrars from the USA, probably hosting all of their data in US datacenters, including code hosted on US platforms.
And who knows what they do with the sensitive data they manipulate as a result.
How can this still happen nowadays, what a shame and what a waste of EU money
It clearly shows their lack of direction, ethics and care
What are you on about? The registrar for the SHERPA domain is Italian.
hownormalami.eu is a minuscule part of the whole project. And it works offline, so there's nothing wrong with hosting it in the US when your data never hits their server.