Hacker News
Clothing designed to confuse facial recognition software (capable.design)
91 points by oppodeldoc on March 18, 2023 | 70 comments



Projects like this make me think of an arms race (https://en.wikipedia.org/wiki/Evolutionary_arms_race).

If I wanted my government's surveillance camera system to improve more quickly, I'd confront it with some challenging training data by making an exploit public and available for everyone :)


Every time, there's someone who anticipates defeat and failure from the start. Every bloody time. No fighting back, because it's no use, we've already lost. I don't want to share a world with people like you, because you've nothing positive to offer.


> KNIT LONG SLEEVE HOODIE €420.00 Sale Price

Isn’t that very expensive?

Is their goal letting people avoid facial recognition, or is that just a marketing tactic and their goal is to maximize profits?

> The algorithm on the textile hinders the object recognition software’s capabilities, causing it to not recognize the person wearing this garment. Instead, it recognizes the textile as nothing, a “zebra”, or a “giraffe”.

That’s just inaccurate; it would be more accurate to say: the pattern on the textile may cause some algorithms (which ones? how often?) to misbehave.


I think it’s really art, and is priced like it.


When I hear "art" I think "scam".

But maybe it's just me…


Either way, that would be preventing body detection, not face detection.

I’m pretty sure it’s going to find your face just fine.


The current HN headline [0] makes it clear what this is, but the actual website makes it really difficult to understand what their value proposition is, other than “designed in Italy made from Egyptian yarn”

[0] “Clothing designed to confuse facial recognition software”


The web site copy is terrible! Every sentence vaguely hints at what the product is for without outright saying it.

> Cap_able offers a high-tech product that opens the debate on issues of our present that will shape our future. Cap_able wants to have an impact on society, creating awareness on contemporary issues through highly innovative design products from a technological and ethical point of view. [and even more of this]

Ok, ok, but what… is… it…?


Perhaps that’s deliberate for plausible deniability both for the seller and buyers.


This seems to have nothing to do with face recognition, but rather with misleading the AI into misidentifying a person as another object in an object recognition task. I like their mission, and I think the designs look pretty good. My only concern is that this feature could cause self-driving vehicles to ignore the person wearing it, thus creating a serious safety issue.


Except it doesn't seem to work very well. A quick test using https://skybiometry.com/demo/face-detect/ on one of their model photos (https://static.wixstatic.com/media/8fdf8a_54f20de903e848c783...) shows easy facial detection.


With tech like GPT-4's vision abilities these techniques won't last long. Computers will soon have human-equal or superior skill at understanding what is in an image, combined with perfect memory of everyone and everything they have ever seen. Unless you have an invisibility cloak you will not be able to defeat it.

Surveillance today is a joke compared to surveillance tomorrow.


One might argue self-driving cars are the serious safety issue, but maybe that's a controversial take.


Only if you don't think human driven cars are a safety issue.


False, you can believe both!


I was saying that believing self-driving cars are a safety issue is only controversial if you also don't believe human-driven cars are one. Basically, I'm saying you should believe both are a safety issue.


Neither lidar (what most companies use) nor occupancy networks (what Tesla uses) should be tricked by this.

Also I believe this will trick the classifier into thinking it's for example an elephant, not that there's nothing there.


Like self driving cars would drive into random art projects.


When a self-driving vehicle has to decide on a direction to avoid an apparently fatal crash, driving into a random art project could be an option.


This seems to prevent people from being detected by a generic object detector, but it doesn’t seem obvious that it will actually confuse a face detector… it might however confuse an autonomous vehicle; I wouldn’t wear this in the street.


Yes, the only mention of software is Yolo (wrong capitalization and no mention of version), and I can’t find a link to a study or anything like that. Not convinced it can meaningfully confuse a classifier specifically trained to recognize faces, not giraffes or zebras. Sounds more like marketing BS.


I’m not sure where you get that from. These are specifically designed to confuse face detection software.

Edit: Maybe I see the cause of confusion. The videos show “person” detection, but the way these systems distinguish people from other objects is by faces. As far as I know cars don’t do that, they just detect objects and don’t care about faces, so it shouldn’t be an issue.


At the bottom of the « collections » page in the technology paragraph, they mention confusing the Yolo object detector into thinking you’re a giraffe with medium confidence instead of a person


Yolo includes face detection, that's how it detects people. So it looks like yes this tech can be used for confounding more general image classifiers, but it was originally developed specifically for face detectors and the videos show it defeating those.

Specifically on cars, I don't think any of the currently deployed systems do face detection. Apart from anything else it takes too long, almost half a second on the fastest systems. They sense people as generalised blobs. It would be quite dangerous, to a system like that many advertising billboards would appear to be people sticking their faces right up against the camera.


But wouldn’t a car avoid a giraffe, too?


A perfect one, sure, but in practice the car might, at best, treat it with less priority than a human in case of a difficult decision to make, and at worst, treat a « 40% confidence giraffe » as a false positive and ignore it
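That failure mode can be sketched in a few lines (all labels, confidence values, and the threshold below are hypothetical, just to illustrate the mechanism):

```python
# Toy sketch of a detection pipeline that drops low-confidence detections.
# Labels and confidence values are made up for illustration.
DETECTIONS = [
    {"label": "car",     "confidence": 0.92},
    {"label": "giraffe", "confidence": 0.40},  # actually a person in adversarial clothing
    {"label": "person",  "confidence": 0.85},
]

CONFIDENCE_THRESHOLD = 0.5  # a common default in YOLO-style pipelines

def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections the model is sufficiently confident about."""
    return [d for d in detections if d["confidence"] >= threshold]

kept = filter_detections(DETECTIONS)
# The 40%-confidence "giraffe" is discarded as a false positive, so
# downstream planning never sees the pedestrian at all.
assert [d["label"] for d in kept] == ["car", "person"]
```

The danger isn't that the car "sees a giraffe"; it's that a low-confidence misclassification falls below the cutoff and the pedestrian is filtered out entirely.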


If a self-driving car purposely ignores giraffes, or even real objects incorrectly identified as giraffes or anything else, I think that's a problem. Africa is a place people drive cars in. And then there's this:

https://www.coolest-homemade-costumes.com/cute-sew-giraffe-c...


Imagine the headlines.

“Man run over by car in Lower Manhattan – autonomous vehicle mistook man for a giraffe”


And, from the other side, we have facial recognition systems resistant to being confused.[1]

The more advanced systems mark those people as persons of interest.[2] Then "appearance search" tracks them.[3]

[1] https://www.youtube.com/watch?v=IRF5qSrmqEM

[2] https://www.avigilon.com/products/ai-video-analytics/self-le...

[3] https://www.avigilon.com/products/ai-video-analytics/appeara...


> Cap_able is aimed at a cultural and technological avant-garde that wants to be an exemplary leader in raising awareness of the importance of one's rights: a means to express oneself, one's identity and the values shared within a reference community.

Oh fuck off. Also whoever made that website should go to jail.


Sometimes I wonder how these things actually work (I am not talking of the actual cloth designs or their effectiveness, rather about how they manage to get this level of visibility).

This project has been posted nearly everywhere in the last 1-2 years. They made a Kickstarter asking (I believe) for US$ 5,000, and they got US$ 5,306 from 36 backers:

https://www.kickstarter.com/projects/capable-design/manifest...

It was the designer's university thesis; she was later joined by her sister as a marketer, who seemingly did a very good job at marketing, though the whole thing still looks more like an art project than anything else.


For those who closed it because it looked like there is just a "stay tuned" video and a newsletter dialog: The video disappears and a web site appears after you dismiss the newsletter dialog.

This is a really cool approach: It looks like slightly flashy but still "normal"-ish clothing, and likely actually works (by confusing the AI into thinking you're e.g. a dog).

It's terrifying that they considered it necessary to get a legal opinion to state that yes, it's legal to distribute and wear this.


These clothes make me think of the markings used by butterflies and other animals to ward off predators.


Or attract mates like the birds of paradise.


Looks like stuff that Jake Sisko would wear


Haha, that comment really brings me back. Thanks!


Ironic (?) that this page comes with a cookie warning.


Is it? It being a warning and all?


The alternative is to not use cookies at all. The fact there's a cookie warning means they're tracking usage and, presumably, assigning you an ID to analyze your behavior on their site.

Which expressly contradicts their "mission".

Unrelated: These prices are insane.


Well it’s either ironic or cynical.


In the description of the technology in the Collection section they explain that facial recognition systems mistake the patterns on the clothes for animals like dogs, giraffes, and zebras.

I don't know how this technology works. Is it not possible to train a system to say "human face" or not without accidental identifications of non-humans?

Edit: (And thus not be fooled by this?)


The silly colors and patterns will instead attract the attention of the people around you.

Another issue -- it may target specific soft/hardware combinations, but will not confuse all of them. It will probably not work against color-agnostic IR or ToF/depth cameras.


"The Manifesto Collection's intent is not to create an invisibility cloak, rather, it is to raise awareness and protect the rights of the individual wherever possible." At least they admit that it doesn't work.


All this effort would be better put into lobbying for a law to outright ban/severely restrict the use of face recognition.

https://reclaimyourface.eu/


The EU already does that - face/gait recognition from CCTV by private corps is illegal, afaik.


I love the initiative, but are there any... less gaudy designs?


I think they have to be gaudy to fool cameras.


I doubt this is true. I've seen adversarial versions of images which are very similar to the original image.

E.g. an image of a cat that has had some pixels flipped so it's classified as a dog instead.


Something like flipping a pixel would work if you manipulate the actual image (the raw image) being fed to the software.

Here it is captured via a camera and then fed to the software, and I doubt pixel-level details are captured by a camera.


I don't mean literally flipping pixels, but subtle changes in the image that cause a misclassification. See this post for examples: https://pyimagesearch.com/2020/10/19/adversarial-images-and-....
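The mechanism being described can be shown in miniature. This is a minimal FGSM-style sketch against a toy linear classifier; the weights, labels, and epsilon are all made up and have nothing to do with any real face detector:

```python
import numpy as np

# Toy linear classifier: label is "cat" if w @ x > 0, "dog" otherwise.
w = np.linspace(-1.0, 1.0, 64)   # made-up weights (64 "pixels")
x = w * (1.0 / (w @ w))          # an input classified "cat" with margin exactly 1.0

assert w @ x > 0                 # original image: "cat"

eps = 0.1                        # per-pixel perturbation budget
# FGSM step: nudge each pixel against the gradient of the score.
# For a linear model the gradient of (w @ x) w.r.t. x is just w.
x_adv = x - eps * np.sign(w)

assert w @ x_adv < 0             # perturbed image now scores as "dog"
# No pixel changed by more than eps, yet the label flipped, because many tiny
# per-pixel changes all push the score in the same direction.
```

Real adversarial examples against deep networks work the same way, just with the gradient obtained by backpropagation instead of being available in closed form.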


Yes, I had a sense for what you were referring to and my reply was geared towards that. Not just flipping pixels.


Right, I understand what you mean now. You're saying that those subtle changes might not be captured by the camera, which makes sense. I don't have any experience with CCTV systems.


Okay, but I'm not going in for elective surgery to have some of the pixels on my actual face flipped. I'd rather just wear a hat and other things to disrupt/block cameras from seeing my face.


You misunderstand. The clothing on the linked website does not cover your face either. A fairly normal looking piece of clothing specifically designed to exploit the classification model could likely be used to "confuse" facial recognition rather than needing a piece of clothing with a garish design.


No I didn't misunderstand. I'm well aware of what clothing is and how it is used.

You described an adversarial image as one that has pixels flipped. That has nothing to do with clothing either, and nothing to do with the real-time facial recognition this "adversarial" clothing is meant to disrupt. So I just took your meaningless pixel-flipping suggestion back to the subject at hand.

Also, from the examples I've seen, facial recognition has no problem recognizing multiple faces in the same image. So I just don't understand the point of clothing like this when all it is going to do is present the software with a few additional things to consider, but not actually stop it from considering the actual face of the wearer.


The website confused my mobile phone browser.


Well with those prices, organic and fair trade cotton or not, it’s certainly not going to be “for the people.” On the net, privacy will become a luxury fewer people can afford, and that holds true with this apparel as well. Maybe others will follow suit and scale down prices, but for now I wouldn’t pay 400€ for this.


> On the net, privacy will become a luxury fewer people can afford, and that holds true with this apparel as well.

I wouldn't put it like this. It's a niche product only a tiny number of people are interested in, most of them with moderate to high incomes. Of course they're going to price it like this.

Anything that's liable to be mass purchased will also be mass produced at very thin margins - which in this case means more or less the same margins as ordinary clothing. tl;dr: if there was real demand, they'd be just a bit more expensive than ordinary clothes.


Those Palazzo pants have big Kefka energy


The thing is, the designers of machine learning pipelines can now add these to the training sets.


I don't know anything about facial recognition software, so I'm probably underestimating the difficulty of alerting the system and its operators to the interesting possibility of giraffes and zebras wandering around the urban environment.


Facial recognition tech is basically two parts: detecting a face in an image (“face detection”, often this is done by calling out to OpenCV’s face detection in Python or C++), and then extracting information from that face image and searching a database with it (“facial recognition”, sometimes done with algorithmic measurement of facial features but increasingly done by neural networks).
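The second stage can be sketched like this (the 4-dimensional "embeddings" and the database entries are invented for illustration and bear no resemblance to a real face model):

```python
import numpy as np

# Made-up reference embeddings for known identities.
DATABASE = {
    "alice": np.array([0.9, 0.1, 0.0, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3, 0.0]),
}

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a neural embedding: normalised mean of each image quadrant."""
    h, w = face_pixels.shape
    quads = [face_pixels[:h//2, :w//2], face_pixels[:h//2, w//2:],
             face_pixels[h//2:, :w//2], face_pixels[h//2:, w//2:]]
    v = np.array([q.mean() for q in quads])
    return v / (np.linalg.norm(v) + 1e-9)

def recognise(face_pixels, database, threshold=0.7):
    """Return the closest identity by cosine similarity, or None if no match."""
    v = embed(face_pixels)
    best, best_sim = None, threshold
    for name, ref in database.items():
        sim = float(v @ ref) / (np.linalg.norm(ref) + 1e-9)
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# A 4x4 "face" whose quadrant means match alice's reference embedding.
face = np.kron(np.array([[0.9, 0.1], [0.0, 0.2]]), np.ones((2, 2)))
assert recognise(face, DATABASE) == "alice"
```

Clothing like this attacks the detection stage: if no face region is found in the first place, the embedding-and-search stage never runs.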

This clothing looks like it might be designed to confuse the face detection step.


I like the aesthetic - a mix of deep dream and glitchy pixel art.


Can we please stop with this? This will only train the AI and when the AI uprising begins it will already know all tricks.


It's like saying "hackers, stop breaking secure systems, because they will create even stronger ones."

Since the birth of civilization there have been rules, and people that try to break the rules. This is how it is meant to be. I welcome the future arms race between the tech priests of the AI-powered hell^Hworld, and the people that refuse to conform and do not want to be controlled by machines.

Sadly, more and more people forget this site is called Hacker News, and it is rapidly getting overwhelmed by the LLM and AI fanatics.

(This is my Saturday morning creative writing assignment. Don't read too much into it. But still, long live the struggle against AI.)


It seems pretty certain that face recognition systems would catch up with this…


It’s William Gibson’s “Ugly T-shirt” from his novel _Zero History_.


You are better off growing a beard, wearing glasses and a hat.


And there you have your training set


Just wear makeup, cheaper than €420



