The problem is the following:
 CA wrote a trojan FB app to derive psychographic data on FB users. This let them determine, for example, how susceptible you were to misleading or fake news.
 They then used FB targeting to target these specific people at scale, pushing extreme fake news such as "Obama moving troops to Texas to ensure 3rd term". This is military grade psyops applied at unprecedented scale.
 These people, as they were susceptible to manipulation, would then convert at unusually high rates. He says 5% or higher conversion. Conversion was measured as an action like donating money or signing up for a mailing list.
In this way, the entire democratic process was corrupted. The issue is not that there were dirty tricks in the 2016 election. The issue here is that the existence of FB's app platform allows the detailed psychological profiling of millions of people on scale, and then allows them to be targeted at scale using that profiling. This is a clear and present danger to democracy.
Even with these changes, a rogue app such as CA's would be allowed on the FB platform. FB has no visibility into how app data can be used offline.
Let's call this what it is: Propaganda. Same as it always was, just with more input tailoring & curating it.
The biggest problem I still see here is that people expect "truth" from random Facebook posts. It's a website where basically anybody can upload anything. It's 4chan with a few more rules.
Democracy works best with an informed voter base, but misinformation has been there from the beginning. I don't see how this "corrupted the democratic process" any differently than in the past where propaganda was being pushed through other common media outlets, and making money off catering to their audience's outrage and gullibilities.
Judgment of information is something where voting citizens need to be personally vigilant, no matter their sources. That vigilance includes recognizing echo chambers and lack of exposure to a breadth of ideas.
I don't quite understand. This reads really negatively to me, like you want to proactively judge what others privately engage in so you can "re-educate" them in case you disagree, and lack of access to that is a problem? That's a horribly authoritarian view, compared to simply broad exposure to shine light on the multiple sides of issues, to promote more informed judgment (which AI-driven content "optimization" specifically works against, as its own broader problem).
> We can reject the lunatic town crier, but it is much harder when he knows you deeply and is whispering in your ear.
Of course, when somebody with deep knowledge of you is trying to convince you of something, the answer on the receiving side is again vigilance and exposure on your part. We all have family members in this exact role, trying to convince us all of their viewpoint, with intimate knowledge of us. What's the difference if it's a 3rd party? What if it's you promoting your ideas, and how would you want any "safety" mechanisms affecting you?
The responsibility always falls on the person receiving information, not on policing others' receipt of information. The latter ends up with people arrested for googling pressure cookers.
Healthy discourse cannot occur in the dark. How can I discuss an article that I don't know even exists?
The overarching social problem with social media platforms is the pigeonholing. Freely sharing and discussing is quickly segmented away. The marketing & political money is made on outrage and tribalism, and these amplify differences to the point of segmenting others off if there's any dissonance at all.
Less categorized places of discussion, where members share a broader forum or space, must be more civilized by nature. You have less cherry picking of engagement. You end up exposed to (and exposing others to your) offensive differences, and need to deal with that exposure. We get that outside of social media circles, and it's overall a more healthy environment. Consequently, more and more people are recognizing that social media is not a place to anchor their trust, information, and time, which is a positive change.
> Healthy discourse cannot occur in the dark. How can I discuss an article that I don't know even exists?
Hyper targeted informational warfare inflames tribalism. You suggest we "mingle more" about 20 miles away at the park. It might be "healthy", but no one will give a crap if the park is empty.
Blaming individuals for group manipulation is not the answer.
(As an aside, I have zero problem with using social media for communication with actual friends, relatives, and activities you're a part of. It's all the extra crap they shovel on to chase that unbounded revenue growth that ultimately feeds the problem.)
Regarding the group manipulation, individuals are always at least legally held to their own actions even under manipulative circumstances (distinguished from coerced ones).
Group manipulation only lasts for so long as people don't recognize what's going on and how it's negatively affecting them, which is much harder to keep under wraps these days. There are movements against using Facebook now, which is the proper response to seeing how manipulative and literally unhealthy its ecosystem has become; whereas I don't consider it a reasonable response to call for wide censorship and scanning of personal information and "private" exchanges that happen there, in a big ol' ball of establishing precedent. If manipulators broke laws by posting or accessing stuff, it's a matter of jurisdiction as to who penalizes them, which is always a problem online, but that legal process seems to be properly progressing against CA.
Your statements indicate that you place the entire onus of responsibility on end users. I hold some fault toward users, but include fault in the providers.
Generally, the world isn't black and white.
Why do you believe users deserve all of the responsibility? What about in countries where the entirety of information in and out is filtered?
This style of propaganda may amount to a form of coerced information.
These are also private entities. They can legally and morally censor without recourse.
All in all, I find the trivialization of those possibilities disturbing.
You're injecting an opposite extreme, implying that I don't think propaganda should be combated at all. But the first line of combat is broader communication, so usurpation for propaganda isn't the only content that flows, and multiple points of view are freely shared. With respect to free speech, legal action against propagandists should generally be reserved for origination of falsehoods and incitement to violence, which tends to already exist in our legal frameworks.
But if there is a one-to-many pattern of information sharing (e.g. an organized attempt to share content) to unrelated parties, then that content should be vetted for veracity. Either FB does it, or they surface it for others to flag it.
This is extremely toxic for freedom of speech and information sharing and is a veiled gateway to censorship. It subjects veracity to the approval of an unknown group with unknown motives than you're assuming are righteous.
There are notably verifiable "fake news" stories that we can all logically assume are fake, but the matter of veracity is ultimately defined in opinion. Even if that opinion is your opinion on what news source to trust.
The only people that can truly verify information are those that are present when it's generated and we rely on those people to be truthful. You trust your source of news when you determine something is fact. Bob's Blog reporting "Obama sends troops to Texas to ensure 3rd term" will receive a notably different reception than NPR reporting the same thing. But ultimately that difference in perception is your opinion of what constitutes valid information. Unless you're in Texas, do you really know for sure?
If an article is 90% correct, but 10% unknown or possibly speculative, who makes the call on the validity of the news and story? What if that 10% of the information dramatically affects the context of the other 90%?
If a police investigation determines that a police officer lawfully killed someone in the line of duty, but someone disputes that with another story, who do we trust? Should one of the stories be suppressed because it can't be validated? Personally, I prefer to reserve the decision for what I consider valid for myself, not the overlords of the platform I use to consume the content.
Progress cannot happen in society without good & bad ideas freely propagating, and individual decisions to buy into them or not. Every significant cultural movement stems from a counter-cultural uprising, and these sorts of things would be swept up in the "vetting".
Again, it's 4chan with more rules. Anybody can post anything. It's not a place of fact or truth. It's of people sharing their lives, thoughts, hobbies, opinions, notices, likes & dislikes, etc. Some extremist $SIDE-wing Facebook channel is simply posting such things. It's not an official channel for trustworthy news, it's users being social, whether that user is Aunt Flo or CNN.
The core problem is that people think it's a "trustworthy" information platform (and Zuckerberg wants that trust for more customer buy-in). It's not. And it won't be, unless you remove the core personal family & friends social aspect of it.
Those people who will respond to your fake news in a way favorable to you are still out there. It just means that instead of being able to target a group of say, 10 000 people that will give you a 5% conversion rate, you might have to pay to target 1 000 000 people with a 0.05% conversion rate.
This raises an interesting question: would we be better off if instead of restricting data we make it more widely available?
As suggested in the second paragraph above, restricting the data doesn't stop those with enough money from influencing susceptible people at scale. It just makes it more expensive, so only the very wealthy can do it.
If we make the data more open, we make the playing field more level, perhaps giving smaller, less well financed groups a chance to compete with the billionaire backed groups and causes.
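The arithmetic in the two comments above checks out: the 5% and 0.05% conversion rates on the two audience sizes yield the same 500 conversions, so only the spend differs. A quick sketch (the cost-per-impression figure is a made-up illustration, not from the thread):

```python
# Targeting math from the comments above: restricting profile data doesn't
# change how many susceptible people exist, only the cost of reaching them.

def conversions(audience, rate_pct):
    """Expected conversions for an audience at a given percentage rate."""
    return audience * rate_pct / 100

profiled = conversions(10_000, 5)          # narrow, psychographically targeted
unprofiled = conversions(1_000_000, 0.05)  # broad, untargeted blast

# Same 500 conversions either way.
assert round(profiled) == round(unprofiled) == 500

# What changes is spend: at a hypothetical $0.01 per impression, the
# untargeted campaign costs 100x as much for the same result.
cpi = 0.01
print(f"targeted spend:   ${10_000 * cpi:,.0f}")
print(f"untargeted spend: ${1_000_000 * cpi:,.0f}")
```

Which is exactly the point: restricting the data prices out everyone except the very wealthy, rather than preventing the tactic.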
The problem is the press will scream bloody murder at any attempt to rein in their right to publish, perhaps rightfully so. IANAL.
The only thing that seems different is that it was the 'other guys' who were playing psyops better this time around, rather than the incumbent government.
Of course, doing so would mean fewer clicks. The more extreme the info, the greater the clicks.
This is at the heart of Zuck’s dilemma - to curb this problem in a meaningful way means reduced revenue.
Detecting and preventing brigading needs to be more sophisticated than just looking at IP addresses.
Need to determine if groups of users are acting in concert across multiple posts and comments, regardless of IP.
Should that be prevented? It's not really distinguishable from the scenario you presented.
Independent actors are far less correlated than centrally orchestrated groups.
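That correlation point suggests one concrete detection approach: compare which threads accounts show up in, regardless of IP. A minimal sketch with invented account names and a simple Jaccard-overlap threshold (real systems would also weight timing, ordering, and content similarity):

```python
from itertools import combinations

# Which threads each account participated in (toy data).
activity = {
    "acct_a": {"post1", "post2", "post3", "post4"},
    "acct_b": {"post1", "post2", "post3", "post4"},  # moves in lockstep with acct_a
    "acct_c": {"post1", "post2", "post3"},           # also highly correlated
    "acct_d": {"post7"},                             # independent actor
}

def jaccard(s1, s2):
    """Overlap between two activity sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(s1 & s2) / len(s1 | s2)

def correlated_pairs(activity, threshold=0.7):
    """Flag account pairs whose thread participation overlaps suspiciously."""
    return [
        (a, b)
        for a, b in combinations(sorted(activity), 2)
        if jaccard(activity[a], activity[b]) >= threshold
    ]

print(correlated_pairs(activity))
# → [('acct_a', 'acct_b'), ('acct_a', 'acct_c'), ('acct_b', 'acct_c')]
```

The independent actor never gets flagged, while the coordinated cluster does, without ever looking at an IP address.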
In an alarming revelation, he said that recent investigations into data privacy have revealed malicious actors cycling through hundreds of thousands of IP addresses in order to search for users by their phone numbers and scrape their public profile information.
Until now, users have had to opt out of making their profiles searchable by phone number. Most, Zuckerberg said, never opted out.
Though the CEO accepted blame for all of these data privacy and trust issues, saying, "It was my mistake," he also often put the onus on Facebook users to know better.
He mentioned, for instance, that the only information that bad actors would be able to scrape using a phone number was information that was public on Facebook user profiles.
Of the researcher who built the data-scraping app for Cambridge Analytica, Zuckerberg said, "Yes, he broke the policy, he broke people's expectations, but also, people chose to share that data with him."
And yet it was Zuckerberg and the company he built that made people's data privacy settings so open by default, and made it difficult to find, understand, and adjust those settings."
[ Poll: Is "open by default" congruent with "privacy by design"? ]
And part of the propaganda, even today, is the US right trying to equate that somehow to Obama. They are relying on wan, sad whataboutism: "When Obama used social media it was ok", "If Hillary had done this it would be ok".
No. The Trump campaign spent $100M on facebook ads. If they are so proud of those ads, let's see them. Release them. Let's see what they were telling people, and who they were targeting.
For normal political advertising on TV or even direct mail, it's hard to keep what you're doing a secret. For online targeted advertising, there is no such constraint.
We need to figure out a way to make all political advertising, and who it's targeted to, publicly disclosed. Sunshine helps.
And it's hard to spend $100M on facebook ads with some outrageous content without anyone noticing. Screenshots will be posted, shared, etc.
So, I don't expect anything that wasn't said by Trump himself during his stump speeches and pre-election rallies. "Build Teh Wall", et al.
It's a nag to turn on facial recognition. Feels like really bad form to be asking for such intrusive extra info with what they're going through right now.
But it's probably a good thing considering Facebook also apparently shares the content of those conversations.
These additional API restrictions may be closing the door after the horses have bolted, but they will also restrict more scraping and data mining. However, I'm sure the value of Facebook data that companies have already collected just shot up significantly...
I also think that the step of planting an alert on affected users' News Feeds is a good one, and something that I didn't expect Facebook would go for. Curious to see what the report says when that feature goes live.
(shamelessly stolen from reddit)
People who specialise in data (like people who code professionally) are precisely those who would have all those questions on the top of their mind and know that old, graph-based, editable datasets that need to be matched with another dataset are bad.
Edit: I was a data scientist for Facebook and I can personally attest that most of those are genuinely hard, especially those that I intentionally overlooked.
I also wish I could client-side encrypt all of my content, share the keys with my friends who I want to have the ability to view my content, and somehow have this all be friction-less and transparent from my and contacts' perspective.
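That wish maps to a standard envelope pattern: each post gets its own content key, and that key is wrapped separately for every friend. A minimal sketch using the `cryptography` package's Fernet (a real design would wrap with each friend's *public* key, e.g. an X25519 sealed box; symmetric friend keys stand in here to keep the sketch short):

```python
from cryptography.fernet import Fernet

# Each friend holds a long-term key (stand-in for a real public/private keypair).
friend_keys = {name: Fernet.generate_key() for name in ("alice", "bob")}

def share_post(plaintext: bytes, recipients: dict) -> dict:
    """Encrypt a post once, then wrap its content key for each recipient."""
    content_key = Fernet.generate_key()
    return {
        "ciphertext": Fernet(content_key).encrypt(plaintext),
        "wrapped_keys": {
            name: Fernet(k).encrypt(content_key) for name, k in recipients.items()
        },
    }

def read_post(envelope: dict, name: str, my_key: bytes) -> bytes:
    """Unwrap my copy of the content key, then decrypt the shared post."""
    content_key = Fernet(my_key).decrypt(envelope["wrapped_keys"][name])
    return Fernet(content_key).decrypt(envelope["ciphertext"])

env = share_post(b"only my friends can read this", friend_keys)
assert read_post(env, "bob", friend_keys["bob"]) == b"only my friends can read this"
```

The hard part, as the comment says, isn't the crypto; it's key distribution and making this frictionless for non-technical contacts.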
Companies are relentlessly profit-optimizing entities, and they cannot forego the extra revenue stream. Employees with ethics are like tissue holding back a tidal wave.
The only way to avoid having them exploit the data is to deny them the data in the first place.
> and they cannot forego the extra revenue stream.
Yes. They CAN. And if the incentives were in the right place they would. For instance, Coke would probably sell more fizzy sugar water if it still used cocaine as an ingredient, but it doesn't, because it's illegal.
Honestly, they'd take your money and still treat you as a product and sell you. Their entire corporate culture is built around users being the product, and small cash payments won't change that.
Nothing short of firing the entirety of Facebook's leadership and a good fraction of its other employees will change how it views its users.
You make that sound like a fact...can you please provide sources!
I am asking this because it doesn't align with my personal assessment of people who I know that work at Facebook or Google...like not at all!
> Every business has its founding DNA. Real corporate change is rare, especially when the same leaders remain in charge. In Facebook’s case, we are not speaking of a few missteps here and there, the misbehavior of a few aberrant employees. The problems are central and structural, the predicted consequences of its business model. From the day it first sought revenue, Facebook prioritized growth over any other possible goal, maximizing the harvest of data and human attention. Its promises to investors have demanded an ever-improving ability to spy on and manipulate large populations of people. Facebook, at its core, is a surveillance machine, and to expect that to change is misplaced optimism.
This observation would also likely apply to a hypothetical Facebook that offered user subscriptions:
> One thing I’ve observed with Google over the years is that it is institutionally so used to its ‘customers’ actually being its products that when it gets into businesses where it actually has customers it really has little sense of how to deal with them.
Google's "customer service" is automated and unstaffed. When they're used to doing it that way, why reduce margins to staff a call center for the paying customers? Treat the paying customers and ad-watching users alike. Likewise, when you've built a mechanism to monetize user data, and you're used to running your business off it, why shut it all off for the few that pay you? It's "leaving money on the table." Keep it on, but maybe tone it down a little, and make even more money. To resist these temptations requires a strong culture that Facebook obviously does not have.
Easy to do in theory, hard to do in practice.
It's a rather obscure term to use outside a nautical context, so I agree it's a bad name if targeting the general public. It would be a fine name if its target audience was only sailors.
"Hey non-technical friend, instead of Facebook you should use Secure Scuttlebutt; it's so much better!"
FB can limit what it wants, but someone will eventually find the means to buy the necessary data and build what CA did going forward. IMO 2020 campaign costs will run to a billion dollars for any candidate that competes, so just buy the data you need.
The amount of info that you used to be able to pull from Facebook's API was incredible, and most people didn't realize it. Even information as bland as friends and friends-of-friends is enough to build a useful social graph around a person. (Years ago I did just this, and it was amazing how the graph clustered all my different social groups)
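The clustering effect described in that parenthetical falls out of even minimal friend-list data. A stdlib-only sketch with invented names, grouping a scraped friend graph into social circles via connected components (the crudest possible clustering; community-detection algorithms do far better on real graphs):

```python
from collections import defaultdict, deque

# Toy friend pairs of the kind the old Graph API handed out (names invented).
edges = [
    ("ann", "bob"), ("bob", "cal"),                   # one social circle
    ("dee", "eli"), ("eli", "fay"), ("fay", "dee"),   # another
]

# Build an undirected adjacency map.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def social_groups(graph):
    """Connected components, found by BFS from each unvisited node."""
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        queue, group = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node in group:
                continue
            group.add(node)
            queue.extend(graph[node] - group)
        seen |= group
        groups.append(group)
    return groups

print(social_groups(graph))  # two clusters: {ann, bob, cal} and {dee, eli, fay}
```

Even this bland data partitions people into their real-world circles, which is exactly why friends-of-friends access was so much more revealing than it looked.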
Hm. This seems like an interesting tidbit, I would love to know more. It seems to imply that many profiles have already been scraped in this way. A phone number is a really strong cross-domain identifier as we use it across a bunch of different online services. Collate your Facebook scrape with a couple data brokers and you've got a real strong profile of someone.
Before that, for years, they knew exactly what they were doing when they gave apps access to their users' friends. And it must've been obvious to them throughout how such abuse could be executed. Yet they didn't try very hard at all to prevent it.
My goodness, this is the same issue, not a new one. Facebook is behaving in a trustworthy manner right now, over-communicating the scope and details of this issue, yet here we are to attack them anew over it.
They are just telling people what they think they want to hear given how they have been caught out.
Wait a second, let's back up. They didn't get "caught" doing anything. They purposely, intentionally, and with everyone's explicit knowledge shared friend list data with an app developer.
This is just a basic fact. No one, not even my mother, can realistically claim they did not know and understand Facebook was sharing graph data with app developers in 2012, or 2014. Not only was it crystal clear when you installed an app, even if you didn't, anyone that was on Facebook was inundated by messages from Farmville, and many other apps, letting them know what their friends were doing.
Later, by 2014, Facebook decided they needed to be more restrictive with this data. They shut down app developers' access to the social graph, essentially killing Facebook Platform, which everyone expected to be a primary driver of future revenues. They shut down Graph Search, an extremely useful tool, because it made it too easy to collect personal data.
But we need to be clear that Facebook was not "caught" doing anything at all. They did exactly what they said they would do, which was plain to everyone, even my technophobic mom.
Separately, in 2014, an app developer shared personal data with Cambridge Analytica. Facebook contacted both parties and requested that they certify they deleted the data, which they did.
The only reason people are upset now, is because:
a) politics is involved, and
b) they are retroactively applying current best practices with personal data, which were not common in 2014 and before.
The incredible part about all of this is that so many other social networks (and other companies) continue to collect the exact same data, and many of them share it publicly. Almost all Twitter users have their friend list open to the public, for all to see, along with all of their tweets, because that's what the platform encourages. No one would say Twitter has been "caught" doing this.
In fact, Facebook has been extremely up-front about the situation. They fixed the situation 4 years before it came to light. They have announced important and strong changes to further protect data in the future. They have publicly and widely announced their detailed findings in this case, and they have promised investigations of similar unauthorized usages of personal data that may have occurred with other app developers.
I mean, what more do you really want them to do?
They are a net-negative for society.
Anybody with more insight care to comment?
It basically says they are going to do the minimal effort required to protect privacy while avoiding regulation.
As I posted elsewhere already: GDPR is more than just some rules around what you need to ask users for permission for. It's much bigger and very specific to the EU legal apparatus.
It hence makes no sense to ship globally. What you want is for the underlying privacy controls to be available for everyone...which according to Reuters is what FB is doing!
Events API: Until today, people could grant an app permission to get information about events they host or attend, including private events. This made it easy to add Facebook Events to calendar, ticketing or other apps. But Facebook Events have information about other people’s attendance as well as posts on the event wall, so it’s important that we ensure apps use their access appropriately. Starting today, apps using the API will no longer be able to access the guest list or posts on the event wall. And in the future, only apps we approve that agree to strict requirements will be allowed to use the Events API.
Groups API: Currently apps need the permission of a group admin or member to access group content for closed groups, and the permission of an admin for secret groups. These apps help admins do things like easily post and respond to content in their groups. However, there is information about people and conversations in groups that we want to make sure is better protected. Going forward, all third-party apps using the Groups API will need approval from Facebook and an admin to ensure they benefit the group. Apps will no longer be able to access the member list of a group. And we’re also removing personal information, such as names and profile photos, attached to posts or comments that approved apps can access.
Pages API: Until today, any app could use the Pages API to read posts or comments from any Page. This let developers create tools for Page owners to help them do things like schedule posts and reply to comments or messages. But it also let apps access more data than necessary. We want to make sure Page information is only available to apps providing useful services to our community. So starting today, all future access to the Pages API will need to be approved by Facebook.
Facebook Login: Two weeks ago we announced important changes to Facebook Login. Starting today, Facebook will need to approve all apps that request access to information such as check-ins, likes, photos, posts, videos, events and groups. We started approving these permissions in 2014, but now we’re tightening our review process — requiring these apps to agree to strict requirements before they can access this data. We will also no longer allow apps to ask for access to personal information such as religious or political views, relationship status and details, custom friends lists, education and work history, fitness activity, book reading activity, music listening activity, news reading, video watch activity, and games activity. In the next week, we will remove a developer’s ability to request data people shared with them if it appears they have not used the app in the last 3 months.
Instagram Platform API: We’re making the recently announced deprecation of the Instagram Platform API effective today. You can find more information here.
Search and Account Recovery: Until today, people could enter another person’s phone number or email address into Facebook search to help find them. This has been especially useful for finding your friends in languages which take more effort to type out a full name, or where many people have the same name. In Bangladesh, for example, this feature makes up 7% of all searches. However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.
Call and Text History: Call and text history is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This means we can surface the people you most frequently connect with at the top of your contact list. We’ve reviewed this feature to confirm that Facebook does not collect the content of messages — and will delete all logs older than one year. In the future, the client will only upload to our servers the information needed to offer this feature — not broader data such as the time of calls.
Data Providers and Partner Categories: Last week we announced our plans to shut down Partner Categories, a product that lets third-party data providers offer their targeting directly on Facebook.
App Controls: Finally, starting on Monday, April 9, we’ll show people a link at the top of their News Feed so they can see what apps they use — and the information they have shared with those apps. People will also be able to remove apps that they no longer want. As part of this process we will also tell people if their information may have been improperly shared with Cambridge Analytica.
In total, we believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica.
Overall, we believe these changes will better protect people’s information while still enabling developers to create useful experiences. We know we have more work to do — and we’ll keep you updated as we make more changes. You can find more details on the platform changes in our Facebook Developer Blog.
Out of curiosity I just re-authorised them for access and was shown the following message:
"We're sad to announce that due to dwindling traffic, expensive hosting costs, and new limitations of the Facebook API, we've decided to close down Heyevent. We're sad to have to do this, but we unfortunatelu we see no other option. Since the launch of Heyevent, Facebook themselves has added more event recommendation. They're not as good as Heyevent's recommendations, yet. Hopefully they'll get better. Thanks for using the service!"
I found it somewhat useful in the past to keep using facebook to a minimum, so I wish them the best.
There must be some other areas of data-based services that experience similar difficulties, but I can't think of any right now.
Does this mean we won’t be able to show FB events and rsvps in our app?
I mean to say, if, for example, my mom was one of the facebook users who had their information taken/used by CA, what should she expect?
> In total, we believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica.
See https://www.theguardian.com/uk-news/2018/mar/21/facebook-row... and the testimony Christopher Wylie gave before the UK House of Commons Select Committee. He's the CA whistleblower. It's a very illuminating watch - https://www.youtube.com/watch?v=X5g6IJm7YJQ
So to recap:
 Kogan writes a FB app for CA to create psychographic profiles of users. This data allows people to be targeted by how naturally susceptible they are to rumors and fake news.
 Kogan then accepts a position at St. Petersburg State University (so essentially Russian money) and moves there
 Russians subsequently magically gain a new superpower they didn't have before - the ability to target users on FB who are susceptible to manipulation towards extreme viewpoints.
They are not helping their image - the latest is a rejection of GDPR for non-European customers. Seriously? Way to demonstrate commitment to user data protection, FB.
This level of arrogance is a precursor to strict regulations. They are practically asking for it at this point.
Those are indeed noble purposes that social media can serve. But if they were Facebook's true goals, we would not be here.
The ideal competitor and successor to Facebook would be a platform that actually puts such goals first.
To do so, however, it cannot be just another data-hoarder, like Google Plus.
If we have learned anything over the last decade, it is that advertising and data-collection models are incompatible with a trustworthy social media network.
When a company fails, as Facebook has, it is natural for the government to demand that it fix itself or face regulation.
If today's privacy scandals lead us merely to install Facebook as a regulated monopolist, insulated from competition, we will have failed completely.
The world does not need an established church of social media."
Tim Wu, law professor at Columbia University
- He starts by attacking Facebook for pursuing growth
> Facebook prioritized growth over any other possible goal
- but, when offering a solution, he wants exactly the same thing for its proposed alternative:
> the real challenge is gaining a critical mass of users.
I’m not sure changing the name but not the presumed initial values will help.
But then he writes:
> for which users would pay a small fee
Anyone old enough to remember the early days of Facebook will recall how Mark Z. had to defend against that idea: being locked in to a paid service was the worst that could happen. Tim Wu knows that. Actually, everyone who is not on Facebook should know that too, because that rumour became such a problem for the company that the rebuttal ended up occupying the most prominent location on the service: the login page.
> It's free and always will be.
The hard part is coming up with a business model that doesn't rely on advertising and is actually going to get any traction. Especially since people by and large like the model. If Facebook can simply lock down their APIs and handle state/nefarious actors better I think you will find the public moving past the current situation.
Of course, it's possible that this may be a sufficiently non-trivial problem where the best answer anyone's found to date is a centralized business. But hey, we won't know until we try! Also, email as a federated system and its history doesn't count, because that runs directly against the basic thesis that nobody has seriously tried.
The other problem is that social media isn’t novel or interesting in 2018 so I don’t see people rushing to a new platform that replicates the same old functionality.
Workplace (née Facebook at Work) is now considering payment options, too. That could actually bring quite a bit, especially if inter-company communications are worked out.
The reason that ads remain on Facebook is that the senior team believes they can make the ads genuinely improve the quality of the experience. I personally block a lot of ads on Facebook (about 80-90% of what I see) and actively manage my Ad preferences, and I get genuinely useful ads -- often new offers from competing businesses, which allow me to monitor them. I can imagine why many people don't see it that way, but I would love to know where the limits of that model are. More generally, I would encourage people to treat targeting (on Facebook and elsewhere) as something they need to be proactive about, and to trust at least some platforms to use that information to improve their experience. That way, we could learn more about how to connect brands and customers better.
I'm also not super bothered by ads and fully aware that the data I give consumer internet companies will be used as they (or their future acquirers) see fit.
I don't agree with the sentiment that if we could only switch business models from advertising to subscription then all of our data will remain private. Apple's business model is not based on ads and I trust them as little as I trust Google. I know that's not a popular opinion here but I think it's prudent. Once I push information from myself to a company, I have no confidence that that information will be used as advertised.
"While we think FB's high advertising performance speaks to the value users get out of the ads served, general consumer dislike towards advertising and increased data scrutiny could cause more users to opt-out of sharing data with FB," he said.
But it's fair to say that Facebook should be less trusted today than it was nine years ago.
Just like we should all be doing our part to detangle our lives from Facebook's web, app developers owe it to users to reduce their reliance on Facebook Login.
It's one thing to offer Facebook Login as an alternative way to easily create an account, but to straight up not offer any other way to log in to an app or game is just lazy on the developer's part, and speaks to the way Facebook has lulled us all into complacency."
Soon after, Tinder users started noting on Twitter that they had been kicked off the dating app and couldn't log back on, as those who used Facebook Login were caught in an infinite loop that appears to be related to an unknown bug.
Since you need a Facebook account to log into Tinder, this bug has potentially affected Tinder's entire user base.
Tinder has responded in a tweet, "A technical issue is preventing users from logging into Tinder. We apologize for the inconvenience and are working to have everyone swiping again soon."
Overall, Facebook says 87 million of its users were affected -- with nearly 82 per cent of them believed to be located in the United States.
Canada's acting minister for democratic institutions has also said he'd be open to strengthening federal privacy laws, which don't currently apply to political parties."
Zuck: Yeah so if you ever need info about anyone at Harvard
Zuck: Just ask
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[friend]: What? How'd you manage that one?
Zuck: People just submitted it.
Zuck: I don't know why.
Zuck: They "trust me"
Zuck: Dumb fucks
People don't identify with the photo-ops-in-Iowa Zuck. I think they prefer the "you have part of my attention; you have the minimum amount" version.
He's going to be vilified either way.
So what happened here? Facebook users who took a 'personality quiz' allowed the 'app' to access their information.
It is extremely depressing that 87 million people are idiots.
There are intrusive apps all over facebook and this is being released with a narrative that supports 'election interference'.
That's how a few hundred thousand people taking a "personality quiz" gets turned into data on millions of users.
And the term "personality quiz" is used loosely; an example of a "personality quiz" can be "Which Game of Thrones character are you?" This isn't rigorous psychometrics.
Social networks were extrapolated from friend data. Cambridge Analytica was able to use social connections to profile people and the people they know.
That was the extent of it.
This isn't "propaganda put out by intelligence agencies", and there are not "87 million idiots" who "took a 'personality quiz'" allowing the "app" to access their information.
But he already explained that to you quite clearly, something you should have already known if you were following the real news, and you still don't get it.
But since you choose to subscribe to the conspiracy theory that this is just all deep state propaganda, and everyone whose privacy was compromised was an idiot who asked for it by doing something foolish and deserves what they got, then there's no use in discussing it with you.
Because you're their ideal target and they've already successfully targeted you and influenced your mind, even though you didn't take a personality quiz yourself.