If I wanted my government's surveillance camera system to improve more quickly, I'd confront it with some challenging training data by making an exploit public and available to everyone :)
Every time, there's someone who anticipates defeat and failure from the start. Every bloody time. No fighting back, because it's no use, we've already lost. I don't want to share a world with people like you, because you have nothing positive to offer.
Is their goal letting people avoid facial recognition, or is that just a marketing tactic and their goal is to maximize profits?
> The algorithm on the textile hinders the object recognition software’s capabilities, causing it to not recognize the person wearing this garment. Instead, it recognizes the textile as nothing, a “zebra”, or a “giraffe”.
That’s just inaccurate; it would be more accurate to say: the pattern on the textile may cause some algorithms (which ones? how often?) to misbehave.
The current HN headline [0] makes it clear what this is, but the actual website makes it really difficult to understand what their value proposition is, other than “designed in Italy, made from Egyptian yarn”
[0] “Clothing designed to confuse facial recognition software”
The web site copy is terrible! Every sentence vaguely hints at what the product is for without outright saying it.
> Cap_able offers a high-tech product that opens the debate on issues of our present that will shape our future. Cap_able wants to have an impact on society, creating awareness on contemporary issues through highly innovative design products from a technological and ethical point of view. [and even more of this]
This seems to have nothing to do with face recognition; rather, it misleads the AI into misidentifying a person as some other object in an object recognition task. I like their mission, and I think the designs look pretty good. My only concern is that this feature could cause self-driving vehicles to ignore the person wearing it, creating a serious safety issue.
With tech like GPT-4's vision abilities, these techniques won't last long. Computers will soon match or exceed human skill at understanding what is in an image, combined with perfect memory of everyone and everything they have ever seen. Unless you have an invisibility cloak, you will not be able to defeat it.
Surveillance today is a joke compared to surveillance tomorrow.
I was saying that believing self-driving cars are a safety issue is only controversial if you don't also believe human-driven cars are a safety issue. Basically, I'm saying you should consider both a safety issue.
This seems to prevent people from being detected by a generic object detector, but it isn't obvious that it will actually confuse a face detector. It might, however, confuse an autonomous vehicle; I wouldn't wear this in the street.
Yes, the only mention of software is Yolo (wrong capitalization, and no mention of which version), and I can't find a link to a study or anything like that. Not convinced it can meaningfully confuse a classifier specifically trained to recognize faces rather than giraffes or zebras. Sounds more like marketing BS.
I’m not sure where you get that from. These are specifically designed to confuse face detection software.
Edit: Maybe I see the cause of confusion. The videos show “person” detection, but the way these systems distinguish people from other objects is by faces. As far as I know cars don’t do that, they just detect objects and don’t care about faces, so it shouldn’t be an issue.
At the bottom of the “Collections” page, in the technology paragraph, they mention confusing the Yolo object detector into thinking you're a giraffe (with medium confidence) instead of a person
Yolo includes face detection; that's how it detects people. So yes, it looks like this tech can be used to confound more general image classifiers, but it was originally developed specifically for face detectors, and the videos show it defeating those.
Specifically on cars: I don't think any of the currently deployed systems do face detection. Apart from anything else, it takes too long, almost half a second on the fastest systems. They sense people as generalised blobs. Face detection would also be quite dangerous: to a system like that, many advertising billboards would appear to be people sticking their faces right up against the camera.
A perfect one, sure, but in practice the car might, at best, treat it with less priority than a human when a difficult decision has to be made, and at worst treat a “40% confidence giraffe” as a false positive and ignore it
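The failure mode described above can be sketched as a confidence-threshold filter in a detector's post-processing. The class names and the 0.5 cut-off here are illustrative assumptions, not any real vendor's values:

```python
# Minimal sketch: a planner that only reacts to detections above a
# confidence threshold. A hypothetical 0.5 cut-off, for illustration.
CONFIDENCE_THRESHOLD = 0.5

def relevant_obstacles(detections):
    """Keep only the detections the planner should react to."""
    return [d for d in detections
            if d["confidence"] >= CONFIDENCE_THRESHOLD]

# A pedestrian whose adversarial clothing makes the model output
# "giraffe, 0.40" silently disappears from the obstacle list:
frame = [
    {"label": "car", "confidence": 0.92},
    {"label": "giraffe", "confidence": 0.40},  # actually a person
]
print(relevant_obstacles(frame))  # only the car survives the filter
```

The point is that the danger isn't the wrong label per se; it's that a low-confidence wrong label can fall below whatever threshold separates "obstacle" from "noise".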
If a self-driving car purposely ignores giraffes, or even real objects incorrectly identified as giraffes or anything else, I think that's a problem. Africa is a place people drive cars in. And then there's this:
> Cap_able is aimed at a cultural and technological avant-garde that wants to be an exemplary leader in raising awareness of the importance of one's rights: a means to express oneself, one's identity and the values shared within a reference community.
Oh fuck off. Also whoever made that website should go to jail.
Sometimes I wonder how these things actually work (I am not talking about the actual clothing designs or their effectiveness, but about how they manage to get this level of visibility).
This project has been posted nearly everywhere in the last 1-2 years. They made a Kickstarter asking (I believe) for US$ 5,000, and they got US$ 5,306 from 36 backers.
It was the university thesis of the designer, who was later joined by her sister as a marketer. The sister seemingly did a very good job at marketing, though the whole thing still looks more like an art project than anything else.
For those who closed it because it looked like there is just a "stay tuned" video and a newsletter dialog: The video disappears and a web site appears after you dismiss the newsletter dialog.
This is a really cool approach: It looks like slightly flashy but still "normal"-ish clothing, and likely actually works (by confusing the AI into thinking you're e.g. a dog).
It's terrifying that they considered it necessary to get a legal opinion to state that yes, it's legal to distribute and wear this.
The alternative is to not use cookies at all. The fact there's a cookie warning means they're tracking usage and, presumably, assigning you an ID to analyze your behavior on their site.
Which directly contradicts their "mission".
In the description of the technology in the Collection section they explain that facial recognition systems mistake the patterns on the clothes for animals like dogs, giraffes, and zebras.
I don't know how this technology works. Is it not possible to train a system to say "human face" or not without accidental identifications of non-humans?
The silly colors and patterns will instead attract the attention of people around.
Another issue -- it may target specific software/hardware combinations, but it will not confuse all of them. It will probably not work against color-agnostic IR or ToF/depth cameras.
"The Manifesto Collection's intent is not to create an invisibility cloak, rather, it is to raise awareness and protect the rights of the individual wherever possible."
At least they admit that it doesn't work.
Right, I understand what you mean now. You're saying that those subtle changes might not be captured by the camera, which makes sense. I don't have any experience with CCTV systems.
okay, but i'm not going in for an elective surgery to have some of the pixels on my actual face flipped. i'd rather just wear a hat and other things to disrupt/block cameras from seeing my face
You misunderstand. The clothing on the linked website does not cover your face either. A fairly normal looking piece of clothing specifically designed to exploit the classification model could likely be used to "confuse" facial recognition rather than needing a piece of clothing with a garish design.
No I didn't misunderstand. I'm well aware of what clothing is and how it is used.
You described an adversarial image as one that has pixels flipped. That has nothing to do with clothing, and nothing to do with the real-time facial recognition this "adversarial" clothing is meant to disrupt. So I just took your meaningless pixel-flipping suggestion back to the subject at hand.
Also, from the examples I've seen, facial recognition has no problem recognizing multiple faces in the same image. So I just don't understand the point of clothing like this, when all it is going to do is present the software with a few additional things to consider, but not actually stop it from considering the face of the wearer.
Well with those prices, organic and fair trade cotton or not, it’s certainly not going to be “for the people.” On the net, privacy will become a luxury fewer people can afford, and that holds true with this apparel as well. Maybe others will follow suit and scale down prices, but for now I wouldn’t pay 400€ for this.
> On the net, privacy will become a luxury fewer people can afford, and that holds true with this apparel as well.
I wouldn't put it like this. It's a niche product only a tiny number of people are interested in, most of them with moderate to high incomes. Of course they're going to price it like this.
Anything that's liable to be mass purchased will also be mass produced at very thin margins - which in this case means more or less the same margins as ordinary clothing. tl;dr: if there was real demand, they'd be just a bit more expensive than ordinary clothes.
I don't know anything about facial recognition software, so I'm probably underestimating the difficulty of alerting the system and its operators to the interesting possibility of giraffes and zebras wandering around the urban environment.
Facial recognition tech is basically two parts: detecting a face in an image (“face detection”, often this is done by calling out to OpenCV’s face detection in Python or C++), and then extracting information from that face image and searching a database with it (“facial recognition”, sometimes done with algorithmic measurement of facial features but increasingly done by neural networks).
This clothing looks like it might be designed to confuse the face detection step.
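That two-stage split can be sketched as below. The stub functions stand in for real libraries (stage 1 might be OpenCV's `cv2.CascadeClassifier.detectMultiScale`, stage 2 a neural-network embedding lookup); all names and data here are illustrative assumptions:

```python
# Sketch of the two-stage pipeline: detection first, recognition second.
# Stubs stand in for real detectors/recognizers; names are hypothetical.

def detect_faces(image):
    """Stage 1 ("face detection"): find face bounding boxes in an image.
    Real systems would run OpenCV or a neural detector here."""
    return image.get("faces", [])

def recognize(face_box, database):
    """Stage 2 ("facial recognition"): match a detected face against a
    database. Real systems compare extracted feature vectors."""
    return database.get(face_box)

database = {"box_1": "Alice"}  # known faces (illustrative)
image = {"faces": ["box_1"]}   # a frame with one detected face

# Adversarial clothing targets stage 1: if no face is detected,
# stage 2 never runs, so the wearer is never looked up at all.
names = [recognize(box, database) for box in detect_faces(image)]
print(names)
```

If stage 1 returns nothing (an empty `faces` list), the list comprehension produces an empty result and the database is never queried, which is exactly the effect such clothing would aim for.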
It's like saying "hackers, stop breaking secure systems, because they will create even stronger ones."
Since the birth of civilization there have been rules, and people that try to break the rules. This is how it is meant to be. I welcome the future arms race between the tech priests of the AI-powered hell^Hworld, and the people that refuse to conform and do not want to be controlled by machines.
Sadly, more and more people forget this site is called Hacker News, and it is rapidly getting overwhelmed by the LLM and AI fanatics.
(This is my Saturday morning creative writing assignment. Don't read too much into it. But still, long live the struggle against AI.)