Attacks against machine learning – an overview (elie.net)
328 points by ebursztein 41 days ago | 64 comments



It'd be great if there were a service that you could sign up for which would "deceive" Facebook, Twitter, and other social media websites by producing false information about you. For example, if I don't want FB to know what movies I'm interested in, how about liking "random" movie pages on FB? If I don't want FB to know about my political orientation, how about running with the hare and hunting with the hounds?


This rests on much the same foundation as disinformation campaigns like strongman-regime propaganda or what is currently termed "fake news". When credibility itself is attacked, anything and nothing are equally valid. The problem with poisoning the well of your own personal data is that it also makes it easier to indict you on false pretenses.


I guess if enough people do it, it removes the ability to indict anyone.


Or make it so anyone can be hunted for no reason at all


indict you for what?


Whatever they decide to, I believe, is the GP's point. When nothing is true, everything can be "true".


However, if people are already doing something that is deemed illegal or not allowed by a given jurisdiction/site, making the pool of people to theoretically go after/ban an order of magnitude (or more) larger seems ideal.


Not a happy answer, but such a service would only raise the noise floor; it would not have much of an effect unless it was massively adopted, to the point that the original signal was insignificant compared to the noise.


Even then it’s questionable if it would be effective. Over time even a faint behavioral signature will become transparent, because things that deviate from the true behavioral signature do so in random ways, which could essentially “cancel” out if your model for quantifying behavioral characteristics is well-specified. Meanwhile the “true” behaviors would “add” over time.

It would become like any other signal jamming arms race, whether it’s radar or social behaviors, and your model of generating random noise has to get more sophisticated as the other party’s anti-jamming techniques get more sophisticated.

I took a class with Scott Aaronson once where he mentioned the idea that the natural enemy of machine learning is cryptography.

So if you know the anti-jammers are using ever greater machine learning techniques, rather than trying to one-up them with adversarial learning, I suspect the best jamming would be cryptography.

Like, extensions to Facebook that essentially encode text with PGP or something, send it via Messenger, and allow decoding on the other side.

Then an interesting idea for machine learning would be how to make an autoencoder that accepts encrypted text, transforms it into human understandable text that would fool a machine learning algorithm designed to flag encrypted text, and can decode from natural language back to the encrypted data on the other end.
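A much dumber, non-ML stand-in for that idea (my own sketch, nothing to do with any real extension): map each ciphertext byte onto a fixed 256-word list so the payload round-trips losslessly. It obviously wouldn't fool a serious classifier, which is exactly where the learned encoder/decoder described above would come in.

    # Placeholder vocabulary; a real one would use 256 distinct common words.
    WORDS = ["word%d" % i for i in range(256)]
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def bytes_to_text(ciphertext):
        # Map each ciphertext byte to a word, producing vaguely text-shaped output.
        return " ".join(WORDS[b] for b in ciphertext)

    def text_to_bytes(text):
        # Invert the mapping on the receiving side.
        return bytes(INDEX[w] for w in text.split())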


> I took a class with Scott Aaronson once where he mentioned the idea that the natural enemy of machine learning is cryptography.

He cites Rivest for that one: https://people.csail.mit.edu/rivest/pubs/Riv91.pdf


AdNauseum[1] is that "service", but Google blocked it from the Chrome Web Store for obvious reasons.

[1]: https://adnauseam.io/


One of the many reasons I switched over to Firefox. Additionally, AdNauseum is an amazing product name.


Another reason: Stylus (FF-only) vs. Stylish (which works on both FF and Chrome, but collects your data).


*AdNauseam


I suggested this to an EFF lawyer at Defcon and they hated the idea. Perhaps I worded it wrong; it makes sense to me. Signal-to-noise ratio. You should have a legal right to submit false information if you feel a service might harm you at some point.


Interesting, what did they hate about it?


Not OP but if I had to guess, once you have a 'legal right' to submit false information it paves the way for a lot of unwanted behavior and would help the spread of disinformation. If a service is harming you and you're thinking about ways to legally address the issue, why not just go after the service itself and come up with legal repercussions / regulations for their actions?

Your question led me to this NPR article that briefly talks about the legality of lying on the Internet that's worth mentioning (https://www.npr.org/sections/thetwo-way/2011/11/15/142356399...). It seems that when you agree to the Terms of Service with services like Facebook, you agree to not spread misinformation or misrepresent yourself (https://www.facebook.com/communitystandards/integrity_authen...).


In my opinion, it isn't even information in the first place, so it can't be categorized into true or false information buckets. There are way too many conclusions being drawn from the stuff that gets tossed through the digital ether.


Apple could file a patent infringement suit if you tried to automate it.

https://www.digitaltrends.com/apple/apple-gets-privacy-prote...



I use this nifty browser plug-in: https://noiszy.com/

"It visits and navigates around websites, from within your browser, leaving misleading digital footprints around the internet. Noiszy only visits a list of sites that you approve, and only works when you turn it on. Run Noiszy in the background while you're working, or start Noiszy when you're not using your browser, and it sends meaningless data to these sites for as long as you let it run."


In order to wash out the signal, all the service would need to do is 'like everything'. In addition to masking your interests, it would also grind their algorithms to a halt if enough people did that. A lot of these algorithms gain performance from the sparsity of the data, so if everything became connected, their performance would suffer.

Anyone know how to get, or compile, a list of everything likable on Facebook?


Until you get into problems (legal or personal, doesn't matter) for "liking" stuff related to child porn, terrorist propaganda or, I don't know, scientology, without even knowing about it, because it was done on your behalf by this "like automaton".


Exactly. In France, people have been convicted because they "liked" illegal opinions. As if the fact that such a thing as an illegal opinion even exists weren't enough of a problem, the courts have decided that the semantics of a "like" is "I make this opinion mine".


Can you please share some links on this? All I could find was a similar case in Thailand: https://www.theguardian.com/world/2015/dec/10/thai-man-arres...


I found this one, but it's in French, obviously: http://www.leparisien.fr/rozay-en-brie-77540/rozay-en-brie-c...

My own (approximate) translation of parts of the text:

"Sur Facebook, le trentenaire avait apposé un «J’aime» sur une image d’un combattant de Daesh brandissant la tête décapitée d’une femme. Il a été condamné à trois mois de prison avec sursis." --> "On Facebook, the man in his thirties had clicked "like" on a picture of an ISIS fighter holding the head of a beheaded woman. He was given a 3-month suspended prison sentence".

"«Quand on met J’aime, c’est que l’on considère que ce n’est pas choquant ou que l’on adhère», considère pour sa part Jean-Baptiste Bougerol, le substitut du procureur de la République." --> ""When you click "like" on something, you consider it's not shocking or you agree with it"", said the prosecutor".


anything likable on Facebook shouldn't be illegal, or illegal to like


If you like everything, or if you say "my birthday is the 32nd of February", the algorithm can detect you're trying to defeat the system, and ignore you: you become a known unknown.

But if you start to like random things, or if you say "my birthdate is the 2nd of March" while it's not, you become an unknown unknown, and the algorithm must start to reason with your wrong data.


Facebook uses an enumerable ID for everything public, so to get a list you just have to enumerate their public pages.
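A very rough sketch of that enumeration (my own guess; it assumes you have a valid access token and that bare numeric IDs still resolve via graph.facebook.com, which Facebook has been steadily locking down and rate limiting, so treat it as illustrative only):

    import requests

    ACCESS_TOKEN = "..."  # placeholder; requires a real app/user token

    def fetch_public_object(object_id):
        # Ask the Graph API whether this numeric ID resolves to a public object.
        resp = requests.get(
            "https://graph.facebook.com/%d" % object_id,
            params={"access_token": ACCESS_TOKEN},
        )
        return resp.json() if resp.status_code == 200 else None

    public_objects = []
    for object_id in range(1, 10000):          # naive scan over a small ID range
        obj = fetch_public_object(object_id)
        if obj is not None:
            public_objects.append(obj)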


I actually had the strategy on fb to just like very random stuff like movies, groups etc. I’m not sure whether it has helped but it makes me feel better


> I actually had the strategy on fb to just like very random stuff like movies, groups etc. I’m not sure whether it has helped but it makes me feel better

I've done similar, and afterwards nearly all advertising categorizations of me eventually dropped off my profile (after a period where they were schizophrenic and contradictory). I can't be certain I caused that, because this was contemporaneous with Zuck's congressional testimony and the run up to the GDPR (both of which probably motivated many changes).

But it would make sense that mountains of bad data would make it hard for them to confidently place me in advertising demographic and interest categories, due to all the contradictions.


How about you don’t sign up for Facebook if you don’t want them to know anything about you? I don’t really see the point of this deception.


Because facebook tracks you, even if you do not have a facebook account.

https://www.theverge.com/2018/4/11/17225482/facebook-shadow-...


Because the word "tracking" can include request monitoring for DDoS mitigation and scraping detection, or storing information purposefully uploaded by other people (e.g. photos and contact list), saying anything about "tracking" isn't very meaningful.

Think of how many people are being involuntarily "tracked" by Dropbox because others are backing up photos in which they appear, or emails they sent, without their consent. For better headlines, we could call this information "Dropbox Shadow Dossiers".


The problem is laziness. “Call your Congressperson” is hard. Waxing lyrical about political distinction is easy. Imagining technical solutions to political problems is easier.


I'd say that there are two related problems, one political and one technical, and that there is no reason not to address both issues.

* Political problem: It is legal and acceptable to track people on the internet to an extreme degree. Political solution: Call your Congressperson, donate to the EFF, reframe the issue as corporate stalking, etc.

* Technical problem: It is possible to track people on the internet to an extreme degree. Technical solution: Restrict the ability to collect data by using an adblocker; poison existing databases with plausible but false data.


Is it possible to block Facebook domains via the hosts file or a Chrome extension?
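Something along these lines in /etc/hosts, maybe? (Untested; hosts files can't wildcard subdomains, and Facebook serves content from many more domains than these, so an extension like uBlock Origin with domain-level filter rules is probably more practical.)

    0.0.0.0 facebook.com
    0.0.0.0 www.facebook.com
    0.0.0.0 graph.facebook.com
    0.0.0.0 connect.facebook.net
    0.0.0.0 staticxx.facebook.com
    0.0.0.0 fbcdn.net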



Most of your friends, family, coworkers, etc. have your e-mail or your phone number (or both). Some (most) of them use Facebook, and a significant portion of them share that information with Facebook, so even if you never subscribe to the service, they know a few things about you (your name, phone number, e-mail address, and some of your acquaintances), and they know they don't know anything else about you (which is some sort of information too).


In the late 90s-early 00s, at a corporate job, several of us used a script to have the browser load a random page every few minutes. (I didn't write the script... I think the pages were just random links from search results based on random words.) Anyway, the thousands of pages loaded masked the NSFW pages. We figured we'd have plausible deniability. "You say I visited Xxx.com? I don't know... I think my computer has a virus or something. It's always loading up random stuff. Let me see the log... Yep... Just as I thought... It says I visited 24,239 pages on Tuesday. Heck, that's not even possible!"


It seems unlikely that this would be 100% effective though. I am pretty sure that companies like this also use data from your immediate social sphere to make pretty relevant assumptions about you. For instance, when I buy a product, some of my friends will see that same product promoted to them. A friend once looked something up on Facebook while we were in the movie theater (before the movie started ;)), and sure enough I was getting ads for that same event as soon as I got home and looked at my phone. Of course, on a more massive scale, this could theoretically work.


Could you just give fake info? Or else use multiple fake profiles?

I don't know how you can deceive Facebook without also deceiving your friends and contacts though.


Does the API support creating old posts and backdating them?

FB is really good at hiding old activity and being utterly worthless at searching your feed, so if you can just put all the fake stuff in the past you'd be fine with your real friends.


> Does the API support creating old posts and backdating them?

> FB is really good at hiding old activity and being utterly worthless at searching your feed, so if you can just put all the fake stuff in the past you'd be fine with your real friends.

They allow you to backdate, but if your goal is to avoid annoying your friends, you could use the privacy settings for a similar effect. Just post your garbage as visible to "only me," let it age for a week or two until the algorithm ignores it, then make it "public," "friends only," or whatever you want.


What's funny is how bad most of these services are at identifying people. They seem to think I'm a southeast Asian female (white dude here) because I happen to read a particular article, buy a particular item (probably as a gift), etc. Turn on, tune in, drop out...



I just watched a presentation about using deep learning to detect cheaters in Counter-Strike: Global Offensive (https://youtu.be/ObhK8lUfIlc), and the question he didn't seem to have an answer for was data poisoning -- what if the cheaters all volunteer to be on the anti-cheater jury? Of course they are cross-checking juror reliability ratings and such, but it's definitely a treadmill.


Whenever you're crowdsourcing, bad actors are a possibility. You'd usually track agreement to root out both the bad and incompetent actors, but what you're saying would essentially amount to a 51%-attack. That is, with enough bad actors working together, consensus stops being trustworthy.

I see two ways to address this (there are probably more, this is just me thinking out loud):

1. Increase the size of the pool of total reviewers so that a 51%-attack becomes infeasible. Incentives can be offered to the rest of the community to get them to participate. (This is similar to what Bitcoin tries to do, with the added obstacle of actor anonymity. In an anonymous system, one bad actor can trivially simulate an arbitrary number of actors. Bitcoin tries to solve this by increasing the operating cost for each perceived actor. Counter-Strike can be seen as having a fixed lump operating cost: the purchase price of the game plus the time investment to accrue enough XP to qualify for the cheater jury.)

2. Create an additional set of people you trust unconditionally. (These can be people you train and pay a wage.) This means you can spot-check anyone, and a consensus between bad actors is an investigative clue (to find more bad actors) rather than a hindrance.
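To make the agreement-tracking idea concrete, a rough sketch (my own, not anything Valve actually does): score each juror by how often their past verdicts matched the eventual consensus, then require a reliability-weighted majority instead of a raw vote count.

    def reliability(juror_history):
        # juror_history: juror_id -> list of (their_verdict, consensus_verdict) pairs
        scores = {}
        for juror, votes in juror_history.items():
            agreements = sum(1 for mine, consensus in votes if mine == consensus)
            scores[juror] = agreements / len(votes) if votes else 0.5
        return scores

    def weighted_verdict(case_votes, scores, threshold=0.6):
        # case_votes: juror_id -> True ("cheating") / False ("clean")
        total = sum(scores.get(j, 0.5) for j in case_votes)
        guilty = sum(scores.get(j, 0.5) for j, vote in case_votes.items() if vote)
        return total > 0 and (guilty / total) >= threshold

A coordinated bloc can still game this by voting with the consensus on easy cases to build up reliability, which is why the trusted spot-check pool in point 2 matters.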


Thankfully, the majority of CS:GO players (and Overwatch reviewers) are not cheaters.


This is just like all the bad actors buying up all the fast computers with Bitcoin.


Very related: about a year ago, I wrote about weaknesses of neural networks specifically: https://matt.life/papers/security_privacy_neural_networks.pd...

With powerful machine learning systems, we need to think about security a little differently. See especially section 4.8, about function approximation:

> Given a task for which no discrete algorithm is known, there is a good chance a neural network can at least approximate it. The extreme value of neural networks is their ability, in many cases, to act as an unknown function that can map inputs to outputs with good enough generalization, almost as if the actual function were known. This makes any system that relies on the difficulty of implementing an unknown function vulnerable to the malignant use of neural networks.


Very well said; I'd never thought about it in this way. It also nullifies a lot of proprietary risk-scoring models, like credit scores. I wonder what research is being done around this for automated trading systems? I can see an "attacker" that creates models whose only purpose is to force another financial institution into unprofitable trades, based on reverse-engineering the other trader's models. Eventually, if not already happening, trading becomes machines attacking other machines.


>Eventually, if not already happening, trading becomes machines attacking other machines.

Welcome to high frequency trading. You’re a bit late to the party though (around 15 years).


I believe you are wrong, if I am to believe my trader friend, to whom I showed this thread and who answered:

“Trivially true in the 'of course does behaviour of others matter' sense and in the 'could my actions influence others'. Not necessarily operational though.

Fine line from there to 'spoofing' (== placing trades solely with the intent of engaging others to trade at a price level) -- which is VERY EXPLICITLY not allowed and for which you can get fined and go to jail. Recall the case of that poor SOB out of London who was made a poster boy for the flash crash?”

It seems there is regulation against this.


Could you place a 'backdoor' in these neural networks? Releasing a publicly beneficial AI while allowing it to be activated like a Manchurian candidate.


That's basically what adversarial inputs are, except that those are unintentional. A true backdoor would probably look more like an ML system trained to give a nefarious output for the "backdoor" input (but this is like poisoning), and then whatever program relies on the output of the ML system would handle that accordingly, not unlike backdoors in conventional software. The output of a neural network is only as useful or effective as the software (or person) that makes decisions based on it.
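A minimal sketch of that poisoning-style backdoor (my own toy example, in the spirit of the "BadNets" line of work; it assumes image arrays with values in [0, 1]): stamp a small trigger patch onto a fraction of the training images and relabel them to an attacker-chosen class. A model trained on this data behaves normally until the trigger shows up at inference time.

    import numpy as np

    def poison(images, labels, target_class, fraction=0.05, seed=0):
        # images: (N, H, W) or (N, H, W, C) array in [0, 1]; labels: (N,) int array
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
        images[idx, -4:, -4:] = 1.0     # bright 4x4 corner patch acts as the trigger
        labels[idx] = target_class      # backdoor: the trigger forces this label
        return images, labels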


> Model stealing techniques, which are used to “steal” (i.e., duplicate) models or recover training data membership via blackbox probing. This can be used, for example, to steal stock market prediction models

I would like to hear stories about such attacks on stock market models.


There are not any, since there are no public stock market models worth copying, and no stock market model takes external input. But if they did (you could give a time-series to a model in the cloud, and it would give you predictions) then it would be possible.

Copying models is a problem for cloud-hosted pay-per-prediction image classification, not for constantly retrained stock market models that don't take external input.


I thought it would be about observing the behaviour of a system that is trading on the market. The input to the system would consist, for example, of other people's trades.


It is referring to https://arxiv.org/abs/1609.02943

What you are referring to is possible, but it is not "copying" per se, just trying to infer what the system is doing (inverse RL) and then exploit that / trick it into making mistakes. If you are not doing HFT, it is very difficult to distinguish bots from humans, so you'd have a hard time even finding a target.
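For flavor, the core of that extraction attack is tiny. A toy version (my own sketch, with `victim_predict` standing in for whatever pay-per-prediction API you would be querying):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def steal_model(victim_predict, n_queries=5000, n_features=20, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(n_queries, n_features))   # synthetic query points
        y = victim_predict(X)                          # class labels returned by the black box
        surrogate = LogisticRegression(max_iter=1000).fit(X, y)
        return surrogate                               # local copy; no further API calls needed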


One simple way to minimize the impact of these attacks is our work called Pixel Deflection (CVPR 2018 Spotlight). Here is a short (4-minute) video introduction to the idea: https://youtu.be/VgjOXJ9QKWo
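For anyone who doesn't want to watch the video, the core local-swap step is roughly this (my own minimal rendering, not the authors' code; the full method also weights pixel selection by a class activation map and follows up with wavelet denoising):

    import numpy as np

    def pixel_deflection(img, n_deflections=200, window=10, seed=0):
        # img: (H, W, C) array; replace random pixels with randomly chosen nearby pixels
        rng = np.random.default_rng(seed)
        out = img.copy()
        h, w = img.shape[:2]
        for _ in range(n_deflections):
            r, c = int(rng.integers(h)), int(rng.integers(w))
            rr = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
            cc = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
            out[r, c] = img[rr, cc]        # deflect: overwrite with a neighbor's value
        return out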


Interesting post. To complement it, here is a piece about adversarial examples in medicine: https://medium.com/fitchain/attacking-deep-learning-models-3...


Is the thesis there that Big Pharma would pollute the data to sell more cancer cures to people with moles?


A couple of requests in case any of you find yourselves writing something similar...

Please don't title your article one thing (ML) and then in the first sentence set the context to something else (AI).

Please lead with a short paragraph stating what you did, in what context, and for what purpose, instead of trying to grab the whole pie and implying that your experience and worldview are commonly shared by everyone else.



