(As an aside I got a kick out of reading "some kind of hypebeast Supreme x MIT collab")
AFAIK anything made by the federal government is expressly public domain in the US, but I dunno about state governments. Unless, of course, the state buys the design from a private company, at which point my legal knowledge ends.
It's also then super easy to argue that the individual wearing the shirt is likely trying to subvert monitoring. In practice this kind of thing will likely make you a more prominent target for monitoring, along the lines of "what do you have to hide?".
Not that I agree at all with large-scale monitoring or think anyone should prove that they don't have something to hide. Only that it paints the target on your back.
The operative point here is not 'a shirt', but a visual pattern that tricks deep learning-style classifiers into wildly misidentifying something. There's no 'very easy' way to counteract that other than retraining on a new dataset or switching entirely away from a deep learning system.
This particular paper is based around attacking YOLOv2.
Uhhhh... why not? You can put them on hats, backpacks, arm patches, or a lot of things. I get that they are suggesting it would be uncomfortable to have a stiff shirt, but there are easy solutions here.
I'm not trying to undermine the research here (because it is good research) but I think the reporting could be a little better.
As for the research, I wish they had compared it against more accurate models. That would greatly help a reader understand the limitations of the work. YOLO and Faster R-CNN are great for "real-time" use but don't have the greatest accuracy; they trade accuracy for speed (more accurate models are pretty slow). While I do think YOLO is closer to what would be used in a real-life setting, it would be great to know how the design fares against more accurate models (this wouldn't require significantly more work either, since you're just testing against pretrained models).

If the researchers stumble across this comment, I would love to know whether you actually did this and what the results were (or, if you see this comment, what happens when you try it against a more accurate model). (I also want to say to the researchers that I like this work and would love to see more.)
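To be clear about how little extra work this is: the evaluation is just one harness looped over detectors behind a common callable interface. A minimal sketch, assuming you wrap each pretrained model (YOLO, Faster R-CNN, etc.) yourself; the "frames" and both detectors below are dummy stand-ins I made up so the snippet runs standalone:

```python
import random

def person_detection_rate(detector, frames):
    """Fraction of frames in which the detector reports a 'person'."""
    hits = sum(1 for frame in frames if detector(frame))
    return hits / len(frames)

# Stand-in "frames": in real use these would be video frames of the
# patch-wearing subject. Here each frame is just a random difficulty score.
random.seed(0)
frames = [random.random() for _ in range(1000)]

# Stand-in detectors: a real harness would wrap pretrained models behind
# this same callable interface. These thresholds are purely illustrative.
fast_detector = lambda frame: frame > 0.6      # hypothetical fast/less accurate
accurate_detector = lambda frame: frame > 0.3  # hypothetical slow/more accurate

rates = {
    "fast": person_detection_rate(fast_detector, frames),
    "accurate": person_detection_rate(accurate_detector, frames),
}
```

The point is only that once the wrappers exist, adding a model to the comparison is one more entry in the dict.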
I can't really see a way for AI cameras to get around properly applied facepaint, especially varieties that are IR absorbent or reflective. I hold the human brain in very high regard when it comes to pattern/symbol/shape recognition, and if facepainting techniques are good enough to trick human visual processing, they're going to be good enough to fool any existing AI. For an example of what I mean by proper technique, refer to this video: https://youtu.be/YpzUr3twW4Q
The trick is in getting enough people to adopt such a strategy that you can't be identified through simple exclusion. I think the idea of camo/other facepaint isn't so foreign and unappealing as to never come into common fashion.
In video people move, and 3D information can be recovered unless their faces are painted with something like Black 2.0. At which point why not just wear a mask?
A lot of the masks people wear in China are referred to as privacy masks (though this seems more an auxiliary usage -- especially in HK -- where the primary use is filtering air). So I'd say there's already evidence of such styles becoming fashionable.
Make it illegal to use facepaint.
How do you distinguish this from makeup?
Yes you can, that's part of the appeal of applying machine learning to security. They don't rely on things like signatures or existing heuristics to identify things as malicious.
Think of it like your body. It learns to identify viruses. Does that mean you're immune from novel viruses or new strains of the flu?
I don't think this is a meaningful distinction. Who cares whether the new heuristic is being added by a machine or a human?
You still need to keep feeding the neural network data to learn from, and it will still choke when it sees novel data that doesn't align with the heuristics it developed.
That's the entire reason adversarial AI works. The reason the Trippy T-shirt makes you invisible to some current AI systems is that it exploits the heuristics they've built, using data that these systems are unfamiliar with and haven't learned to process yet. If it were possible to build an AI system that could defend against novel attacks, the Trippy T-shirt wouldn't be able to fool them.
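To make the "exploits the heuristics they've built" point concrete, here's a toy FGSM-style attack (not the paper's patch method) against a tiny logistic-regression classifier. Everything here is made up for illustration: the model learns a clean decision boundary from training data, and a small input nudge in the direction of increasing loss flips its prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs: a toy stand-in for "training data".
X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train a tiny logistic-regression "detector" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int(x @ w + b > 0)

# Take a correctly classified class-1 point and apply an FGSM-style step:
# move each coordinate in the sign of the loss gradient w.r.t. the input.
x0 = np.array([2.0, 2.0])
p0 = 1 / (1 + np.exp(-(x0 @ w + b)))
grad_x = (p0 - 1.0) * w   # d(loss)/dx for true label y=1

x_adv, eps_used = None, None
for eps in [0.1, 0.25, 0.5, 1.0, 2.0, 5.0]:
    cand = x0 + eps * np.sign(grad_x)
    if predict(cand) == 0:
        x_adv, eps_used = cand, eps
        break
```

The model's learned heuristic (the weight vector) is exactly what tells the attacker which direction to push; a system that had never seen inputs off its training manifold has no defense against that.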
> Machine learning only learns how to categorize things into predetermined categories.
This is just one type of machine learning called classification, there are others like regression and clustering which can be combined to create more robust models. Look at the technology behind Cylance's product which identifies files as malicious or not pre-execution. They are not just using classification.
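As a generic illustration of combining those techniques (a toy, nothing to do with Cylance's actual product): cluster first, then classify, on data where a plain linear classifier fails. All numpy, all made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR-style data: four blobs, diagonal pairs share a label, so no single
# linear boundary separates the two classes.
centers = np.array([[0, 0], [3, 3], [0, 3], [3, 0]])
X = np.vstack([c + rng.normal(0, 0.4, (100, 2)) for c in centers])
y = np.repeat([0, 0, 1, 1], 100)

# --- plain linear classifier (logistic regression by gradient descent) ---
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)
linear_acc = np.mean((X @ w + b > 0) == y)

# --- cluster first, then classify ---
# Farthest-point init plus a few Lloyd iterations: a tiny k-means.
cent = X[[0]]
for _ in range(3):
    d = np.min(np.linalg.norm(X[:, None] - cent[None], axis=2), axis=1)
    cent = np.vstack([cent, X[np.argmax(d)]])
for _ in range(10):
    assign = np.argmin(np.linalg.norm(X[:, None] - cent[None], axis=2), axis=1)
    cent = np.array([X[assign == k].mean(axis=0) for k in range(4)])

# Give each cluster the majority label of its members: the clustering
# stage feeds a trivially simple classification stage.
cluster_label = np.array([np.round(y[assign == k].mean()) for k in range(4)])
combo_acc = np.mean(cluster_label[assign] == y)
```

The combined pipeline recovers structure the linear model can't represent, which is the general point about stacking different kinds of learners.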
We need better deployed testing suites that can test an adversarial model against many popular classifiers, not just 2.
Even so, the paper itself shows that their T-shirt doesn't make the wearer undetectable, only partially undetectable. A security system won't ignore you just because it only saw you 10% of the time you were present (unless it's an Uber self-driving car).
It will always be difficult to sustainably defeat recognition algorithms and I expect this to be an arms race along the same lines as other counter-surveillance techniques.
Gibson's suggestion that deeply coded and secret exceptions to mass surveillance might be used to protect state actors seems to me a plausible and concerning aspect of these developments.
Ultimately if a system is designed to only look at faces then this method would likely not be effective.
The explanation given was that one server per person would invalidate some portion of the overall profile, so the identity would be misclassified (for all main characters).
Without getting into a debate about expectations of privacy on public roads vs. building a perpetual government database that effectively tracks where every car is at all times of day, another application of this tech would be a bumper decal.
I think most reasonable people would agree obscuring the license plate on a public road is not the solution (well, with the exception of Florida Man who racked up a $1MM fine when he was finally caught doing that through toll booths for a year), but a decal like this wouldn't interfere with any officer's human duties.