Stay in the organization and work to turn it away from casual misuse of personal information. Prevent an Orwellian future of machine-learning assisted, personally targeted messaging preying upon our fears and insecurities. Stand up and speak out against the performance of unethical psychological experiments on unwitting participants.
This is one of the important moral issues of our time. To stay on the sidelines is unacceptable.
Nobody appeared to be “casually misusing” data—I think the problem is that they’re largely just engineers, particularly young ones, naïvely considering only the engineering side of things. All the data queries go through the robust privacy-checking system, so everything is good, right?
In a case like this, they didn’t consider the optics of what happens when someone scrapes the public (at the time) profiles of Facebook users and uses that information for nefarious deeds. What happens when users are angry not because their private data was “breached”—a technical problem with an engineering solution—but because they didn’t realise how much they’d already shared publicly (even if you explicitly told them) and how it could be used to influence them en masse?
Case in point: one of the most common policy violations is prefilling the user message on posts made via the API. It is forbidden. But the field is right there for you to abuse and put whatever you want into it. Sure, there are some automated enforcement algorithms, and policy employees look at things when complaints go up, but if the policy says you can't do it, why on earth does the code allow it?
OK, I know the pat answer is that apps are allowed to prompt the user earlier in the workflow for the message, and then use that value when calling the API. That is true but weak (what would it hurt to eliminate that loophole, vs. the benefit of no longer having to detect and take enforcement action on an impossible action?) -- the point remains: if they really cared about their vaunted policy and protecting the user, they would put more controls directly into the code behind the API to disallow prohibited actions.
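To make the point concrete, the kind of API-level control argued for above could be as simple as rejecting the prohibited field at the endpoint instead of policing it after the fact. Here's a minimal sketch; the handler and exception names are hypothetical and not Facebook's actual API:

```python
# Hypothetical server-side enforcement for a share/post endpoint.
# Instead of accepting a prefilled message and relying on after-the-fact
# policy enforcement, the API itself refuses the prohibited field.

class PrefillError(Exception):
    """Raised when a client tries to supply the user's message for them."""

def handle_share_post(params: dict) -> dict:
    # The user-visible message should come only from what the user typed
    # in the platform's own share dialog, never from the API call itself.
    if params.get("message"):
        raise PrefillError(
            "Prefilling the user message is prohibited; "
            "omit 'message' and let the user write their own."
        )
    # ... create the post from the remaining, allowed fields ...
    allowed = {k: v for k, v in params.items() if k != "message"}
    return {"status": "created", "fields": allowed}
```

With a check like this in the request path, the "impossible action" simply fails at call time, and there is nothing left for enforcement teams to detect.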
These are things where smart engineers can make a difference. Spend some time on the FB Developer Community Group and you will see the flood of questions from developers who are completely ignorant of the policy, even on basic things like "don't use an account with a name other than your own" aka, there are no business or developer accounts. Many of them willfully ignore policy and just do what the code allows them to do. A lot of good could be done by FB devs taking more accountability for how the platform is abused.
Case in point, Cambridge Analytica used ill-gotten data from 50 million people to craft extremely effective political ads. And since user engagement with those ads was very high… Facebook's algorithm made it cheaper for them to buy even more ads.
I think there is enough information available for Facebook employees to be faced with a decision, after which they are morally culpable for the growing net-negative effect that Facebook has on society.
I'm not a Facebook engineer--and I'm probably not smart enough to be one--so I can't really say how I would act if faced with an ethical decision to provide for my family or take a stand. However, I think anyone who has been employed by Facebook is capable enough to be able to immediately find comparable employment.
Similarly, I think there were lots of well-meaning people involved in Big Tobacco, who didn't realize they were contributing to the deaths of millions of people. I imagine there was a similar inflection point for them, as well.
(Please note, I do not think Facebook is as damaging to the world as Big Tobacco. I also don't think that individual contributors are as culpable as leadership. I am not comparing the degree of moral evil, but am comparing the complicity of individual contributors.)
I absolutely agree with you that this is a moral decision for the employees. At a former company I pushed to improve our user privacy and decrease our storage of unused personally identifying information.
I left that company when they neutered my project to only affect the UI...
We aren't soldiers following orders, we are humans that can reflect on our actions.
That said, I had the savings to be unemployed for a while, not everyone does.
Is this the only option?
Why can't it (not necessarily Facebook) instead be "machine-learning assisted, personally targeted messaging to help support your long-term goals"?
>This is one of the important moral issues of our time.
No, it's not. Even if it were (and it's not), I'm not sure it would crack the top 100. For example, did you know there are people without access to clean water? There are civil wars? State-run gulags? Did you know man-made global climate change is a thing? How about that we're going through an unprecedented ecological collapse? All non-issues. The big moral problem of our time is a social media company that wants to sell you shit.
Two of the issues you mentioned, state run gulags and anthropogenic climate change, are issues really only solvable at the federal level. Facebook's and Cambridge Analytica's ability to influence an election can have a profound effect on those kinds of issues. I mean, we now have a climate change denier in the White House who is dismantling the EPA. If propaganda spreading through Facebook created that, could that not also be partly responsible for our inability to do something about climate change?
That's just one example, but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.
No. OP called out Facebook, not Cambridge Analytica. OP attempted to shame Facebook employees not Cambridge Analytica employees. Facebook is here to sell targeted ads.
>but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.
I stand by it. This smells like a big nothing burger. I'm not even sure what the news here is. Candy Crush probably has info on hundreds of millions of Facebook users. No outrage there.
It isn't even novel that Facebook was used for political targeting, as the Obama and Romney campaigns, and more broadly the DNC and RNC, did the exact same thing. I just assumed this was all part of that vaunted digital strategy all the news outlets were blaring about every time one party won an election. It may be a coincidence that this is a problem because Trump used this method for voter outreach. Maybe.
Maybe it's the potential Russian meddling that's the new news here? But then that's not really what the news outlets are focusing on. It's all about how Cambridge Analytica created 'psychological profiles' on voters... which sounds more like a query that was run against the dataset.