Thanks for striking down the bad guys, Facebook, our ever vigilant guardian of personal information.
It's not even that though. If someone is reselling your company's data to a third party without authorization, you put both the company and its key people on the persona non grata list. Large, well-run corporations, and a lot of smaller companies, keep blacklists like that and they use them.
No point in sharing dinner.
If you've decided to share your info with Facebook, well, that's on you. But Facebook isn't going to share it.
There can't be a single repository of online data, because that by design gives the repository owner too much power and control. Blaming the users for falling for it is like saying we can't have regulation because markets can regulate themselves.
After the last 3 years, anyone who defends Facebook comes across as disingenuous, naive or, for lack of a better word, a shill in my book.
Saying Facebook wants to protect your personal information is the most hypocritical statement I've heard yet.
So much for protecting your personal info. That is why such a stink is being made about this story.
Or are they reformed now?
And politics aside, this is analogous to Craigslist vs 3Taps, Radpad and so on.
But does that mean one should always put in a </s> tag? It may ruin the joke.
As an example, this was one (or two) of my highest-voted HN comments ever:
If I'd added a '/s' tag to those comments, it would have spoiled it, wouldn't it? Part of what made it funny was that I played it straight, and people believed I was sincere almost to the end - or beyond.
But now I fear I have led us far off topic. Back to our regularly scheduled discussion of Facebook, watchful defender of our privacy.
> Cambridge Analytica used its own database and voter information collected from Facebook and news publishers in its effort to help elect Donald Trump, despite a claim by a top campaign official who has downplayed the company’s role in the election. ... In another case, in the late stages of the November election, Schweickert said the company acquired data on voters who voted early – data it collected from local counties and states – and linked the information to individual Facebook profiles.
> "we were able to form a model to predict the personality of every single adult in the United States of America."
Edit: also this article is a good read about how they used Facebook likes to build up profiles. Also contains a summary of the video: https://motherboard.vice.com/en_us/article/mg9vvn/how-our-li...
The thing they bring to the table, however, is the massive amount of personalized data. That is the issue here, because that data can be abused by nefarious actors if it falls into the wrong hands. What usually takes a much more personal touch (going out to voters and talking to them, or having teams on the ground that talk to voters and tell you before your speech what they care about and why) can be done en masse by aggregating private data that users "agree to" obliviously, or never consent to at all. That is the issue at hand; it's not as if Cambridge Analytica somehow warped the brains of people in Wisconsin to mush and marched them to the polls.
Machine Learning models are tools for computing specific predictions. They are not thoughts, have no concepts, etc.
The model isn't what it is used for. A model isn't what it describes. The ability of an ML system to predict a person's actions isn't a reasoning process. A rock rolling down a hill predicts the path water will take down the same hill; the rock isn't thinking about it.
Describing ML models as if they were cognitive gives people the impression that computers possess the relevant concepts and understanding, which is why it seems scary. It's just current through a wire to a fancier dial.
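To make the "current through a wire" point concrete, here is what a trait "prediction" actually is mechanically: a weighted sum. Everything below (the weights, the feature names, the trait itself) is invented for illustration; there is no understanding anywhere in it.

```python
# A "personality model" in the purely mechanical sense: fixed weights
# applied to observed counts. The weights and feature names here are
# made up -- the point is the arithmetic, not the psychology.

def predict_trait(likes: dict[str, int]) -> float:
    """Score one hypothetical trait as a plain weighted sum of like counts."""
    weights = {"hiking_pages": 0.4, "poetry_pages": 0.7, "tv_pages": -0.2}
    return sum(weights.get(topic, 0.0) * count for topic, count in likes.items())

score = predict_trait({"hiking_pages": 3, "poetry_pages": 1, "tv_pages": 5})
```

Nothing in that function "knows" what hiking or a personality is; it multiplies and adds, exactly like the rock rolling down the hill.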
That would be dystopian if it weren't so ridiculous. They can do nothing of the sort: predict the personality of any adult in the USA. At best they might be able to make predictions on a population basis, but anything more precise than that is out of the question. Behaviour prediction on an individual scale is pure, unadulterated fantasy.
This is just typical overselling of a service by people who can use maths to obfuscate the fact that they're making it up as they go along, targeted at people who wouldn't understand the maths if a five-year-old explained it to them.
The access they have to online data sources would absolutely be enough to start correlating things to build an individual profile. Say you can link someone's FB, Twitter, (HN?!) account posts, likes, shares, to credit card data, maybe even search keywords from shady advertisers, to vehicle registration records, tax records, census data, etc.
Do you really think this profiling is that far-fetched that they need to BS everyone? It doesn't have to be perfect to be many times more effective than untargeted ads...
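Mechanically, that kind of cross-source profiling is just a join on whatever identifiers the datasets share. A toy sketch under obviously simplified assumptions (all sources keyed by a clean email address; real linkage is fuzzier, using names, addresses, device IDs, and so on; every name and field below is invented):

```python
# Three toy "data sources", each keyed by the same identifier.
social = {"a@example.com": {"likes": ["hiking", "crosswords"]}}
purchases = {"a@example.com": {"card_spend_monthly": 1800}}
voter_file = {"a@example.com": {"registered": True, "county": "Dane"}}

def build_profile(email: str) -> dict:
    """Merge whatever each source holds for this identifier into one record."""
    profile: dict = {"email": email}
    for source in (social, purchases, voter_file):
        profile.update(source.get(email, {}))
    return profile

profile = build_profile("a@example.com")
```

The profile doesn't need to be perfect, as the parent says; even a noisy merged record beats untargeted advertising.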
How do I know they weren't the reason Cruz lost? How do I know that they didn't lose the popular vote for Trump? Take their word for it?
If they even exist in the future, I guess...
Incidentally, the psychologist mentioned in the FB statement appears to have changed his surname to Spectre.
I could build a personality model using ML/DL anytime as well; it's a piece of cake. How good the model is is another question. I'd be surprised if it were significantly better than random noise, given how pathetic the "science" of psychology is.
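To illustrate how cheap "building a model" is: logistic regression fitted by hand on synthetic data, in a couple dozen lines. The data, labels, and whatever accuracy it reaches are fabricated for the sketch; fitting is the trivial part, and whether the model measures anything real is the actual hard part.

```python
import math
import random

random.seed(0)

# Synthetic, meaningless "feature -> trait" data: two noisy clusters.
data = [(random.gauss(0, 1), 0) for _ in range(50)] + \
       [(random.gauss(1, 1), 1) for _ in range(50)]

# Plain stochastic gradient descent on the log-loss of a 1-feature
# logistic model p = sigmoid(w*x + b).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# Training-set accuracy -- on noise this overlapping, it cannot be high.
accuracy = sum((1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
               for x, y in data) / len(data)
```

The clusters overlap by construction, so the fitted model tops out well short of perfect no matter how long it trains, which is the parent's point about noise.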
It looks to me that CA may have gathered and shared FB data, and FB has suddenly realized that they have a GDPR violation on their hands.
IANAL, but I have seen articles suggesting that people will need to re-consent to the use of their data. I could see that being a problem in this case, where the consent was never given in the first place, and FB would have to be able to report on how that data was being used.
Specifically, it looks likely that CA's data violates GDPR Article 9 section 1 - https://gdpr-info.eu/art-9-gdpr/ - completely.
GDPR violations for a company of Facebook's size would be substantial. In Facebook's case, if the fines were deemed to be "aggravated", the absolute maximum fine would be 4% of FB's total worldwide annual turnover, which at Facebook's scale would run well into the billions of dollars.
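For reference, the upper tier of GDPR administrative fines (Art. 83(5)) is "up to EUR 20 million, or 4% of total worldwide annual turnover of the preceding financial year, whichever is higher". The turnover figure below is a rough placeholder, not Facebook's actual number:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine cap (Art. 83(5)):
    the greater of EUR 20M or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Roughly Facebook-scale turnover (placeholder figure, ~EUR 35B):
fine_cap = max_gdpr_fine(35_000_000_000.0)
```

Note the "whichever is higher" clause: for a small firm the EUR 20M floor dominates, while for a Facebook-scale firm the 4% term does.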
In this case the user has apparently given some level of consent to Cambridge Analytica. I don't know whether that would make them a data controller in their own right or whether they would still be treated as a data processor. If the former, the user would have to engage directly with CA for the right to erasure, or FB would need to invoke its T&Cs. If the latter, it comes down to FB's T&Cs, and FB would have to inform CA when a user invokes the right to erasure.
GDPR will be a minefield for consumers and organisations for at least another two years, until we have some case law that backs it all up.
However, if hypothetically FB sold CA some data in 2015 without its users' consent, then it is CA that has the problem now, as it has no consent for the data, not FB.
Make your final posts informing your friends how to contact you by email. Follow up with a reminder or two over the coming weeks and delete it.
It will be the best feeling you ever had from doing something with your facebook account.
I second this, deleting my Facebook account was the best decision I made in 2017
So basically FB's problem is that Kogan passed the data to third parties without FB's knowledge and, I guess, without FB being in on the deal. Because FB's whole business model is to hoover up its users' data and sell it to "third parties".
Third parties who may then do with it whatever they like, without users having any control over it. You know- like Kogan just did.
FB is trying to pretend they're the responsible party here: "protecting people's information" is what they do, they say. Well, no, it isn't. Trading people's information is at the heart of everything they do. And this is just one more example of why that is so harmful.
FB monetizes data by letting people use it for ad targeting, not by selling it.
>In late 2015, the turkers began reporting that the Global Science Research survey had abruptly shut down. The Guardian had published a report that exposed exactly who the turkers were working for. Their data was being collected by Aleksandr Kogan, a young lecturer at Cambridge University. Kogan founded Global Science Research in 2014, after the university’s psychology department refused to allow him to use its own pool of data for commercial purposes. The data collection that Kogan undertook independent of the university was done on behalf of a military contractor called Strategic Communication Laboratories, or SCL. The company’s election division claims to use “data-driven messaging” as part of “delivering electoral success.”
>Shortly after The Guardian published its 2015 article, Facebook contacted Global Science Research and requested that it delete the data it had taken from Facebook users. Facebook’s policies give Facebook the right to delete data gathered by any app deemed to be “negatively impacting the Platform.” The company believes that Kogan and SCL complied with the request, which was made during the Republican primary, before Cambridge Analytica switched over from Ted Cruz’s campaign to Donald Trump’s. It remains unclear what was ultimately done with the Facebook data, or whether any models or algorithms derived from it wound up being used by the Trump campaign.
>In public, Facebook continues to maintain that whatever happened during the run-up to the election was business as usual. “Our investigation to date has not uncovered anything that suggests wrongdoing,” a Facebook spokesperson told The Intercept.
>Facebook appears not to have considered Global Science Research’s data collection to have been a serious ethical lapse. Joseph Chancellor, Kogan’s main collaborator on the SCL project and a former co-owner of Global Science Research, is now employed by Facebook Research. “The work that he did previously has no bearing on the work that he does at Facebook,” a Facebook spokesperson told The Intercept.
There's no way this will have any impact until there are criminal charges against the people involved. In this case, because it's the US, there's not going to be anything criminal. Facebook, you need to sue companies that do this into oblivion. You are rich and can stand up.
In Europe there's GDPR; the US has no defense. Remember, the Chinese and Russian governments (probably) were the ones that hacked the company that held all the security-clearance applications. Basically, everyone but average Americans has access to US government employees' private information. There must be some blackmailing going on.
Though as the article points out: "There are still questions about how the EU will enforce these actions against U.S. and other multinational companies [...]"
Even though many US-based companies are attempting to comply with GDPR, European companies which use their services aren't prepared to take the risk of being in breach of GDPR.
It's also against the guidelines. https://news.ycombinator.com/newsguidelines.html
Even if they deleted the data, they could have retained information generated using that data which would allow them to effectively abuse the original data for ad targeting - which they did years ago. FB had opportunity to know this long before now.
It's bewildering that they knew this huge amount of sensitive data got into the hands of unauthorized third parties and were willing to treat assurances as a sufficient remedy. At the very least, all of their ad targeting should have been carefully vetted, but it seems ridiculous to let these third parties continue operating on the service when they had already demonstrated a willingness to blatantly violate the FB ToS and use unethical tactics for ad targeting.
In practice FB users aren't informed enough to know what this means or care about it, but missteps like this really demolish any argument that FB cares about user privacy or ToS enforcement. They had a huge amount of time to realize this was happening and take action on it. At this point it seems unlikely that CA and SCL are the only companies doing things like this - it's not exactly a secret that these techniques are effective. If they wanted to make it clear to third parties that this wouldn't be tolerated, they should have cracked down years ago.
I guess there would still be an advantage to targeting ads even more directly (using RNC voter data combined with the FB profiles).
What exactly should they have done? It’s data, once someone has it, game over. It’s like trying to unring a bell.
I see no evidence that they have made any efforts to prevent a recurrence.
Nothing. Just like the adtech industry in general. So much of the spam and fraud could be stopped if there was even minimal regulation and actual consequences, but there isn't, so these actions are usually little more than PR, especially once all the damage is already done.
I also don’t understand why Russia gets all the news, when this foreign election tampering was so much more effective and done in the open.
[edit: Downvotes. Wow. My comment is certainly on topic, so I guess “Foreign corporations shouldn’t participate in our elections” is controversial(?)]
There’s a lot of shady shit going on, and PACs are basically legalized bribery. But it’s important to be a bit nuanced when you want to effectively argue against these practices.
He publicly asked for Russia’s help with the election...
That doesn’t sound like it would hold up in court.
"On 8 February 2018 Mr Matheson implied that Cambridge Analytica "gathers data from users on Facebook." Cambridge Analytica does not gather such data."
Quoted from 'Letter from Alexander Nix, CEO, Cambridge Analytica, to the Chair of the Committee, 23 February 2018' (PDF), linked from this page: https://www.parliament.uk/business/committees/committees-a-z...
— So it would seem that they've been caught with their pants on fire then?
It is alleged that Banks colluded with other pro-Brexit campaigns to spend a lot of money with a CA subsidiary. By spreading it around the groups, they would have been able to breach the election spending limits. It is also alleged that the money itself came from outside the UK (which is also illegal).
It's a twisted web but here is an entry point with Carole Cadwalladr of the Guardian. https://www.theguardian.com/technology/2017/may/07/the-great...
I can't imagine what kind of coalition he could assemble. I don't know if it would win, but if he takes whatever position his data tells him, the only obvious thing that would sink him would be his lack of authenticity. Imagine a candidate that always had an appealing take to a slim to large majority on whatever issue (for people that don't know too much about the issue).
In the 2016 race, HRC was perceived as inauthentic and lost. DJT was perceived as authentic by a substantial fraction of the right (not the media class, though that has changed with the direction of the wind) and inauthentic by the left.
This is all hypothetical of course, he has to be one of the most unlikeable people in ages, and the reception to his “I identify with you, middle Americans” tour really solidified that feeling.
"Python for machine learning with SciPy stack and scikit-learn. Applied knowledge of SQL"
For #1: PII could be embedded using an iframe and a URL. You could even pass data (such as templates) in with URL params.
For #2: FB would expose endpoints that allow actions (such as send this email to the user). They could make it as generic as they needed, up to running arbitrary code on a VM, minus networking calls.
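A minimal sketch of idea #1 above: the third party never touches PII; it only composes an iframe URL carrying a template id and display parameters, and the platform fills in the user data server-side inside its own origin. The endpoint and parameter names here are invented, not any real Facebook API.

```python
from urllib.parse import urlencode

def build_iframe_src(template_id: str, campaign: str) -> str:
    """Compose the iframe src; no user data ever appears in the URL."""
    params = urlencode({"template": template_id, "campaign": campaign})
    return f"https://platform.example/render?{params}"

src = build_iframe_src("welcome_v2", "spring_sale")
# The advertiser embeds <iframe src="..."> and the platform renders the
# personalized content itself, so PII stays inside the platform's origin.
```

The same shape works for #2: the generic endpoints accept actions and parameters, never raw user records, so the personalization happens on the platform's side of the boundary.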
Maybe what's needed is a PCI-style compliance standard or a HIPAA-style act for general user data?
A lot of the adoption pain reveals just how much we built businesses that couldn't care less about what individuals would like done with their data over time. If GDPR had been alive early in the web, I think we'd see different and more human business models, and technology to support them.
I guess Facebook is mad that Aleksandr Kogan profited from selling Facebook user info without Facebook taking a cut.
If such a discussion is not expected on HN, I am probably in the wrong place.
> cause for Brexit.
> Such extreme groups should never be allowed.
I surmise "extreme" in this context merely means "conservative" or "isolationist".
Voting for Brexit is hardly extreme or fringe. So much so, in fact, that a majority of UK voters did so.
It's not a coincidence that Google ditched 'don't be evil', and doing that doesn't indicate a twist to the far right or anything of the sort: It's just the reality that being a wildly profitable advertising firm can require a lot of moral/ethical flexibility that leads to outcomes like what we have here with CA & SCL.
The significance of CA's politics is specifically that they gave their services for free to a political campaign that aligned with their goals, which is already of questionable legality - and in this case, likely expected and possibly already received regulatory kickbacks in exchange. But that doesn't really matter in the context of FB deciding to enforce rules.
EDIT: To clarify, Trump's campaign did pay CA but sources have claimed that they received a deep discount.
A completely hands-off approach is also corrosive and destructive. They're stuck trying to thread the needle, to balance civic responsibility while avoiding being an overbearing gatekeeper. I don't think it's possible to pull it off in such a way that they are not overstepping bounds in one direction or another, but as long as the pros outweigh the cons of whatever approach they take, it'd still be infinitely better than doing nothing.