The interesting question here is whether Facebook is somehow accidentally amplifying it. Certainly it is not in Facebook's interest to allow this kind of data harvesting. If it hurts you to think that Facebook somehow isn't maximally evil, at least consider that this is data that could be only Facebook's. Allowing somebody else to harvest it is money straight out of Facebook's pocket.
So, given FB should not be complicit, what mistake could they be making to allow the system to be gamed like this? The obvious guess is a feedback loop in the ranking algorithm: it values comments very highly as a signal of good engagement, but they weren't prepared for "content" this good at eliciting low-effort comments while appealing to such a broad demographic. As long as one of these reaches critical mass, it'll be shown to tens or hundreds of millions of people, any engagement feeding even more engagement.
Is there anything less obvious?
> Certainly it is not in Facebook's interest to allow this kind of data harvesting
Mark Zuckerberg, per leaked documents, is a strong proponent of "engagement" at nearly any cost. I don't think calling what Facebook does "accidental" is appropriate anymore. Their desire for engagement trumps the health of their network. I'll dig through my favorited submissions for the WSJ article.
That was easy: https://archive.md/GQFLq
People are rarely motivated by evil, but they are motivated by opportunity, the outcome of which can be perceived as pure evil by the people it affects most.
I remember I couldn't browse IG for more than a couple of minutes on my phone because there's an ad for every 5 pictures, but I can browse it mindlessly on my PC with an adblocker installed, because the experience is so clean.
Maybe they knew all along. :p
It would be so, so simple to stop these. Just de-prioritize posts with too many replies, or replies from people you don't know, or.. anything. The virality of these things sticks out like a sore thumb. Facebook is choosing to not stop them.
Then again as we learned recently Facebook is choosing not to stop all sorts of things on their platform. https://news.ycombinator.com/item?id=28512121
They do, at large scales. Also some of them could pass for password recovery questions.
My feed has become littered with these things because my friends are replying to them. Which encourages me (or, rather, the hypothetical FB user) to reply to my friend(s).
There are probably other clever ways to skeeve out the other four digits, and I can't say for certain whether some digits of those other four are situationally impossible. Some combinations likely are, like 0911 and 0000.
I see a lot of "viral" posts - some like those mentioned in the article, but also a ton of odd woodworking, cooking, and "resin art" videos. The videos are quite repetitive and not really interesting so I wonder if they are maybe hidden ads, but they are not marked as such, and it is not clear what they are selling. (Well maybe they are trying to sell resin, which is really expensive.)
Anyway, it seems like there are different kinds of posts on FB. Some stay close to their point of origin and are only rarely shown to people who haven't liked the page or aren't friends. Other posts, if somebody comments on or interacts with them in any way, get shown to their friends and friends-of-friends.
After running a charitable cause / political FB page for a while, I'm convinced that internally there are actually different categories of posts - ones that are shown to followers, and ones that are allowed to float or go viral. I really wonder what the mechanism is to get into the floating category. It doesn't seem to be based on quality, nor on money spent. Maybe it is some interaction metric that somebody learned to game?
As someone who got caught up in some of those videos when I was in complete "mindlessly browse facebook" mode, my guess would be they are optimized for "engagement", nothing more, nothing less. They are just interesting enough that you want to know how the end result looks while harmless enough to appeal to a maximally broad audience.
Disclaimer, am infrequent FB user and this may have been around for longer than I realize.
Facebook has roughly 3Bn active users per month. Let that sink in for a moment: that's more than a third of the world.
I'm pretty sure Facebook (as in Mark, or the employees) doesn't have the slightest idea of what is going on on Facebook.
Someone somewhere found a way to exploit what FB's engagement metrics do. Is it 'accidental' that FB amplifies things if their system is designed to do exactly what it does when gamed?
FB sees about 5 billion items posted daily, or about 58,000 per second. Most of those simply die unread. Our Internet has become a write-once, read-never medium...
If an item is to be picked up by amplification algorithms, it needs some indicia of relevance or significance. Broadly, that comes from one of three properties:
- Content itself. Keywords, hashtags, URLs, other profiles linked.
- Social graph. Followers and readers of the submitting account, and their own followers.
- Engagements and interactions. Any likes, comments, or re-shares, with their own attributes as well (content, social graph, engagement).
Given that signal for a naked submission is so thin, any indication of additional relevance is likely glommed on to, and a set of rapid initial engagements might be a sign of high-value content ... or of a bad-faith mutual-admiration-society cabal (MASC). Even on sites such as HN, a little early engagement on a submission goes a long way.
Note that a group of friends engaging with one another's content will look a lot like a MASC, though in most cases the significant distinction will be posting volume. It's rare for a person to consistently post more than 10--30 items a day, and much above that tends to get seen as annoyingly verbose by others. Promotion and amplification accounts can post many tens, hundreds, or thousands of items, hoping one will take off. Their goal isn't engagement but manipulation; they have cheap content-creation processes (stock bits, redistributed content from other sites, randomly-generated crud) and can afford to be profligate. Until the system actively penalises accounts whose submissions see no significant uptake, that's going to be the case.
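A toy sketch of how such a ranker might weigh those three signal classes, plus the uptake penalty suggested above. All weights, field names, and numbers here are my own invention for illustration, not anything Facebook has published:

```python
def amplification_score(item):
    """Toy relevance score for a fresh post. Content and social-graph
    signal is thin, so early engagement dominates -- which is exactly
    what a MASC exploits. All weights are invented for illustration."""
    content = 0.1 * len(item.get("hashtags", []))
    social = 0.2 * item.get("follower_count", 0) ** 0.5
    engagement = 1.0 * item.get("early_comments", 0)
    # Hypothetical fix: penalise accounts that flood the system
    # with posts nobody engages with.
    dud_ratio = item.get("posts_without_uptake", 0) / max(item.get("posts_total", 1), 1)
    penalty = 1.0 - dud_ratio
    return (content + social + engagement) * penalty

organic = {"hashtags": ["#diy"], "follower_count": 400,
           "early_comments": 30, "posts_without_uptake": 2, "posts_total": 20}
spam_farm = {"hashtags": ["#diy"], "follower_count": 400,
             "early_comments": 30, "posts_without_uptake": 900, "posts_total": 1000}
print(amplification_score(organic) > amplification_score(spam_farm))  # True
```

With identical engagement, the volume penalty alone separates the organic friend group from the thousand-posts-a-day promotion account.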
Why this is happening is anyone's guess, though in general, cultivating a captive audience has value, whether for conventional advertising or propagandistic purposes. It may be that the accounts are intended for direct use, or will be farmed off to other buyers or uses later.
Gaining profiling information on follower demographics and influence points would also be part of this.
Given political cycles, that this is preparatory for the US 2020 campaign seems a plausible explanation.
That's worth a lot if FB have a shared agenda.
How so? They benefit directly from it without having to do any of the hard work and they can put the blame on someone else if the whole thing blows up. Seems like the perfect crime.
In that metaphor, Facebook owns the soil and they sell the harvest by charging for promoted posts and other forms of engagement purchasing. If this engagement truly drives election outcomes the way the post hypothesizes, the demand side will come back every time there's an election anywhere. Let's hope Zuck and co understand how closely this piece of their revenue is tied to democratic engagement ;)
I really don't see much actual downside for FB in allowing this kind of data collection. The people (like myself) who are disgusted with FB and think FB is a profoundly negative force that is strangling local journalism, toxifying the discourse, and stoking rage are already against FB. But most people just don't care (and may actively appreciate those effects) and won't change their behavior in response to FB allowing another likely election-outcome-altering disinformation campaign to do recon work. And FB knows that. As long as Zuck feels safe from regulation, he's not going to stop, address, or reduce any non-CSAM thing that would threaten growth or time-in-app.
Fecebutt was built on content like this. On LiveJournal before it, this kind of personality-quiz stuff was also rampant, but personality-quiz apps and this kind of meme-question stuff dominated Fecebutt from the time it launched the "Facebook Platform" in 02007 for many years, maybe even until they banned personality-quiz apps last year.
I think the real issue here is that it's impossible to tell the benign from the malignant. Is that cute mom blog going to start hawking ivermectin? What is my comment revealing about me that I don't authorize? There's no Better Business Bureau for Facebook pages. Maybe there should be.
Any “rating” organization whose value proposition is evaluating others and does not employ vast legions of laborers to continually test and retest their subjects is obviously a scam.
Not to mention the obvious conflict of interests when an organization gets paid by the test subjects to grade the test subjects.
Just a guess, though. I don't actually FB.
Facebook is a weird place if you don't have an account. And of course, whenever you scroll down for some details (and hopefully a link to that business' proper website) you get slammed in the face with a dialogue box that asks you to create an account or login, so my engagement with Facebook usually ends there.
It's grown to be a more common way to share something with a friend. Fewer clicks than sharing in private messages, and also more inviting to conversation, which would be shared not only between the two of you but also open to other friends. Mostly those @mentions and conversations are very casual, like "me: @friendName -- @friendName: OMG this is so @frank -- frank: I'm in this picture and I don't like it". All of which makes perfect sense for mindless positive chit-chat.
So if one of your friends leaves the one-millionth comment, you can end up seeing the post and your friend's comment in your feed. So while no one can read all the comments, your friends are likely to see yours.
(Perhaps once upon a time I'll end up also finally figuring out how the hell comments on Tumblr work. Not that I'll have practical use for that anymore.)
It works in a similar way here. The post has 145 comments (right now) and yet I'm adding another one.
Almost always, when I find myself in Reddit comments because something's interesting, I wish it were HN, and I'm shocked a lot of the time that people have time for such drivel. Duolingo too. Yes. Me too. Haha. Nice work buddy. Looks good. No. This is lame. Lol. Why? What's the point?
But then, we can take this further, there's more thought in our comments, and on HN in general, but ultimately not any 'point' either. (Obviously there's the occasional hugely noteworthy comment on HN, but then I'm sure there is on Reddit too, albeit fewer pro rata.)
One group who would benefit from detailed life style profiles are life insurance companies. More detail is better for setting accurate premiums while remaining competitive with other life insurance companies.
Edit: I almost forgot to mention a really popular one I've seen a lot of lately: "Have you ever had a DUI? I'll wait." It's unbelievable to me that people would answer this question, but it's definitely something insurance companies would like to know, because their records don't go that far back into the paper age. A lot of people answer something like "No, but I should have."
Perhaps Facebook are partly encouraging the clickbait weirdness to give the continued impression to those that are still using it, that there is something to use.
Over the last 4 years I found Facebook an entirely useless vehicle for customer engagement. Instagram was marginally better, but after the change that got rid of time-based sorting, it became useless as well.
It wouldn't surprise me if Facebook has become a ghost town, inhabited only by those who have become addicted to the posts that feign engagement i.e. somebody is asking ME a question! I am important, I want to be heard. It might just be a complicated scheme to keep the share price high. It doesn't necessarily have to be something nefarious.
At some point I stopped commenting or Liking any post which already has more than ~10 Likes or comments. In some sense it feels really strange to me that people bother to engage with content where their engagement is essentially invisible within the crowd.
It works. I can't even remember what normal Facebook looks like. I've lived like this for the last 5 years.
Hyper dystopian take: Gathering data to be able to create real-seeming narratives for fictional profiles to push political agendas.
That's literally a security question for a bank password reset.
I've had a couple of points in the past where I've broken my graph hard enough for The Algorhythm to break down, and it's kinda interesting. A combination of 'we don't know you well enough, we'll throw this synchronized knitting competition at you' and 'huh, you've reached the end of your endless feed... press refresh... please?'
Personally, I cannot imagine a better way to build an in-depth profile of millions of voters.
- What's your mother's maiden name?
- What street were you born on?
- What was your first car?
- What's your childhood best friend's name?
I'm thinking something like, "Add up all the individual numbers of your SSN and figure out what Founding Father you are!" Use some statistics to ensure that lots of people get good ones.
"Tag your mother if you love her", "tag your childhood best friend", etc.
I know that siding with FB is one of those topics that's very controversial on HN, and I'm not making excuses for these companies' practices. My point is that our kids will live in a different age than the one we lived in; educational systems should keep up with these challenges and find innovative ways to prepare people to manage that efficiently.
"But why is that-"
"Because OTHER people are doing it!"
Social networks tolerate fake traffic because it increases their perceived value. The real crime is the fictitious usage and engagement metrics they use to set ad pricing.
> target misleading messages based on material stolen in the Russian hack of Democratic National Committee servers
Facebook is a publicly traded company. It may have novel negative externalities, but we're largely comfortable with its impact: maximizing shareholder value. When Facebook manipulates us, we feel we can hold it accountable as an institution. We're wrong, but we're complacent. With these mystery accounts, we're out of our depth. We do not know "cui bono," and that's eerie.
Secondly, this is almost like the email phishing paradox: to an educated user it seems like the number of people who respond with relevant information would be extremely low, but if the attempt costs you basically nothing and you get something useful 1% of the time, you're still winning.
"My perfect fall day is my memory of Aunt June when we lived in Connecticut in the 70s, before she passed away." In itself something like that doesn't seem useful, but there's a good amount of information in there if someone can correlate it with other details about your life.
Additionally, since your profile, if public (still the default, I believe), is made available when you comment, I'd guess there's follow-up scraping going on to collect more details that are then used in conjunction with whatever your response was.
"How old were you when you got your first job" actually does seem like a good security question for many people. You're unlikely to forget it, after a while not many other people will know it, and it's a single number (whole number, most likely) so it's easy for a computer to parse (and hard for you to mess up by leaving out / adding too much detail).
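To sketch why a single-number answer is so harvestable, here's roughly what a scraper's parsing step could look like. The regex, the age range, and the sample replies are all my own assumptions for illustration:

```python
import re

def extract_first_job_age(comment):
    """Pull a plausible age out of a free-text reply to
    'How old were you when you got your first job?'.
    Invented heuristic: first 1-2 digit number in the 10-30 range."""
    for match in re.findall(r"\b(\d{1,2})\b", comment):
        age = int(match)
        if 10 <= age <= 30:
            return age
    return None

replies = ["I was 14, paper route!", "16 - bagging groceries",
           "Never had one lol"]
print([extract_first_job_age(r) for r in replies])  # [14, 16, None]
```

Free-text answers to most questions need fuzzy matching, but a question engineered to elicit a bare number parses almost perfectly at scale.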
Depending on what you respond to on social media, and depending on if you've got any accounts that use this as a security question, you might want to go back and force the account to use a new question ;)
And honestly I bet age of first job probably is a decent indicator of certain economic factors.
And for all time, everyone else is like: WTF, why did you even answer the question if you don't know? It's not like your friend asked you in person. And that is the story of 80% of Q&As on every product. SIDE RANT OVER!
They’re deliberately made to look like personal appeals to the individual specifically, and I don’t blame people for not understanding that it’s disgusting growth/engagement hacking.
I'm usually reasonably good at imagining the mechanisms behind stupid engagement-hacking phenomena like that, but the question of why people answer questions on Amazon product pages with "I don't know" has stumped me for years.
But, in this case, the product is the 'network' rather than individual accounts. Something that appears this 'organic' and 'homegrown' is a very valuable tool for a widespread disinfo campaign.
Or, it could simply be the magician gang that makes viral posts of gross food.
> Without naming the state you are from, what is it famous for?
Hard to tell if that's intentional data gathering or just someone innocently copying a common data-gathering question format from Facebook though.
This also goes for "the first letter of your last name plus the date of your mother's birthday are your <pop culture tag>".
Oh and all the cutesy little image processing tools like "what would you look like older/younger/as a different gender/if you were a cartoon" are there to train facial recognition algorithms.
Yet even supposedly sophisticated people fall for these.
If not, this just reads like a recipe for paranoia. What we've seen of ML really isn't that good: my phone has full access to anything I've ever typed, and its predictive keyboard rarely predicts what I'm going to say.
Doesn't even matter if there's any truth behind the science. If the personality profiling works, they can apply their MODEL from the 270k responses they DID get, against every single other user's public profile information, and get a high-likelihood result.
Does that mean they can tell exactly what MY personality type is? (as much as that may or may not be a valid "thing") - - I doubt it. But can they locate a region on a map where around 180,000 of 200,000 residents fit into a particular personality type, and then target ads to that region with a degree of better accuracy than "random"? Absolutely.
Index that against voting records (which is what the fake "election audits" are for; why else do you think they started with Arizona, which blew in an unexpected direction for them? Also, Kobach's efforts in 2017 were probably actually a data-collection operation for the GOP), and you have a good map for how to target campaign spending in the next election. I also assume Census data is abused this way, by whichever party is in power at the time.
"dont ever engage with anyone or anything for any reason whatsoever" is fine and dandy.
And it's like a Zombie Apocalypse: you can protect yourself from being bitten. But you can't protect yourself from the consequences of everyone else being bitten and turning into zombies and coming after you in such great numbers, that you don't have enough ammunition stockpiled for all the headshots you'll need. (That's my analogy for "people infected with racist (and etc) disinformation are gonna vote a certain way, whether you take in that disinformation or not.")
On Facebook. Yeah probably a good idea and not really paranoid considering Fuckerberg's history.
> What we've seen of ML really isn't that good
And, after Brexit and Trump we've seen the difference a few percentage points can make. Add another point every couple years as the tech improves and you're mad not to be paranoid.
"What state are you from" seems like pretty innocuous information. I'll readily mention that I'm from Seattle if it's relevant to the conversation or asked directly, and I tend to think of myself as being on the more paranoid / pro-privacy side of things.
All the big players already have my address because I gave it to them, and a stalker should be able to work that information out pretty easily because I post in the Seattle sub-reddit and otherwise engage a lot with Seattle topics.
That's how they get you. It's a trap!
This is the way statistics work.
The threat is an aggregate threat. What happens when 150k people respond to a post.
Yes, teach your kid to be careful with their data.
But also, how many outfits are actually doing this, or is this currently a theoretic concern? If people are doing this, what are their goals? Are they meeting them?
Are there less obvious examples of the same thing?
I'm sure you can keep going with more points.
Tell me what kind of car you plan to buy next!
I know I don't go back through someone's post history before voting on their comments and I don't really care about their aggregate karma values.
The very secretive spam filter has cut-outs for 'high value' accounts - this isn't really documented formally, but it's pretty obvious that posting limits are essentially nonexistent for 1M+ users, either by design or because they're well known to the mods...
The value of the high-karma accounts is that they're much more likely to be accepted for moderator applications. Get enough mods on a default sub, and you basically control the universe. That's very difficult, so the much easier way is to create legit-looking fringe subreddits with names like "newstoday" or "politicallyuncensorednews". Get enough of your smurf accounts to upvote those, and you can get to rising for /all. Get enough real bites and you might even get it to the front page.
I haven't really looked into this stuff for a few years, because it's frankly depressing. So my understanding will be a little off what the most recent networks are doing.
Unfortunately, I don't have 1M+ users; so I doubt this message will get out or make any difference.
My assumption is troll farms are buying accounts with karma to do an end-run around such a system. It wouldn't be hard on Reddit/HN/other vote-based social media to apply extra scrutiny, even automated, to brand-new accounts. Using established accounts makes astroturf detection harder. Now every account is potentially an astroturfer.
Some fun oriented subs will however happily ban spam bots that just automate imgur reposts if you point one out early enough.
This is what the profile of a shadowbanned user looks like for everyone but the user:
This was before Facebook started showing the history of the page's title changes over time.
As students who were broke and had lots of time on their hands, they'd basically post variations of cultural references, leading questions, polls, memes, etc. all day.
Anything that would drive the subscriber count up, which is the most important metric for how much they can charge for a page.
I'm guessing the posts that get 1.4 million comments are just that, but on steroids.
The government would literally have to pull wires out of walls to stop Facebook and whatever inevitably follows it.
The ads seem much worse on Scary Mom. What am I missing?
Like "Your stripper name is: Your Favorite Color + Name of Your First Dog!". Never fails to get tons of responses.
Actually it would have, in 2019. 66 + 1953 = 2019, subtract your age, and you get your year of birth.
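The trick generalizes: any "magic number" post of the form "add X to the year Y" reveals your birth year only when X + Y equals the year the post circulates, since age + birth year = current year (off by one before your birthday). A quick sanity check (function name and framing are mine):

```python
def magic_number_works(offset, anchor_year, current_year):
    # "Add <offset> to <anchor_year>, subtract your age" posts only
    # reveal your birth year in the year they were written for.
    return offset + anchor_year == current_year

print(magic_number_works(66, 1953, 2019))  # True
print(magic_number_works(66, 1953, 2021))  # False
```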
Slowly Facebook is learning that it needs to show me pictures of my friends' kids, even of friends I rarely interact with. Mostly because I've blocked everything else the algorithm might show me.
You seem to be misunderstanding Facebook's goals. They don't care about showing you pictures of your friends' kids. They are using your friends' content to dangle a carrot in front of you, to keep you clicking on what they want you to click on. If you've stopped clicking on ads and promoted posts, it's because they haven't yet learned what they need to show you to keep you clicking, but they will! They're dedicating all of their engineering efforts to keeping people clicking.
Every 20th post or so, someone would be saying something like "Stop it you idiots, this stupid copy-paste doesn't do anything, you can't declare your rights like this." Then a bunch more copy-paste comments would appear before the next person telling the idiots to stop.
I don't mean that as a snarky dismissal, but a sincere question. I know that plenty of people, especially among digital-natives, have instant negative reactions to being reminded of data collection. But the type of user answering "what was your first car" or "what do you call your grandchildren" do not strike me as having much overlap with the groups that are cynical about social media platforms.
Another thing to do is to tell people "they're trying to use this information to get into your bank account". That will make them stop really fast. But as far as scalability goes, I don't think there's a way.
If we all agree (and I mean, a large number of people) to fully quit Facebook; or at the very least - report and block every single one of these sites and postings, I think that would be a great deterrent.
Wait, how do they know this is a multi-billion dollar industry in the first place?
The most likely possibility seems that we have algorithms fooling algorithms with no humans in the loop. Sure, there might not be enough "real" (i.e. a real human purchasing a real product) revenue sloshing around here to make the whole effort worthwhile. However, there might be a poorly configured dashboard somewhere that makes it appear as if that's the case... Meanwhile FB laughs all the way to the bank.
> Generally speaking, people should become more and more wary of memes.
I suppose memes which explicitly attack other memes had to emerge at some point.
The content is overall wholesome and useful, but I'm assuming most members (both contributors and passive viewers/clickers) don't realize that they're lining the owners' pockets with their clickthroughs, along with whatever personal data is being collected through Facebook.
Whatever Cambridge Analytica actually DID, it was probably 90% snake-oil; (leveraging on the "AI" hype in the industry).
I've noticed wannabe influencers on Instagram including questions and polls with every one of their Stories. They're doing it to "juice the algorithm" by getting responses. That in turn theoretically gets them featured on the Explore Page or whatever. YouTubers do the same things, ending each video with a CTA question you should answer in the comments.
The Facebook question pages that boomers answer seem to just be doing the same thing, attracting comments and interactions and thus boosting the page.
The bigger question I have is why Facebook thinks I would be interested in seeing in my timeline that my 68-year-old aunt has answered "Freddie Mercury" to some question about the best musical act she's seen live.
Also, I've strongly indicated that I don't want to see this content by clicking the "X" every time I see one of these "question reply" posts. That probably hides that Page forever. But the next meme-question-page reply by my next boomer relative will pop up tomorrow. Facebook isn't correlating the fact that I want to see nothing from any of these clickbait pages.
It'd be pretty interesting to collect and play around with all the same data this affiliate network is purported to be collecting.
It's pretty straightforward and easy to do. Just promote one post to a certain targeted audience (say, women only), then run another campaign targeting a specific region or age group; now you have another dimension of data. Rinse and repeat a few times and you'll have a decent data set that you can cohort out and show targeted ads to on other ad networks. Running tens of thousands of these campaigns will net you a lot of very useful data. Since you're paying the platform to promote things, they don't care; it's still revenue to them.
The folks behind some of these things seem to be doing pretty well at it too; at least, their social profiles show a very luxe life.
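A minimal sketch of the rinse-and-repeat idea: each promoted-post campaign with a different targeting filter acts as one attribute label on everyone who engages, and merging the engager lists recovers multi-dimensional profiles. User IDs, attribute names, and values below are invented for illustration:

```python
from collections import defaultdict

# Each campaign targets one criterion; the platform reports who engaged.
campaigns = {
    ("gender", "female"):  {"u1", "u2", "u5"},
    ("region", "midwest"): {"u2", "u3", "u5"},
    ("age", "55+"):        {"u2", "u4"},
}

# Merge: anyone who engaged with a campaign inherits its targeting
# criterion as a known attribute.
profiles = defaultdict(dict)
for (attribute, value), engagers in campaigns.items():
    for user in engagers:
        profiles[user][attribute] = value

# u2 engaged with all three campaigns, so we now "know" three things
# about them without ever asking.
print(profiles["u2"])  # {'gender': 'female', 'region': 'midwest', 'age': '55+'}
```

Each additional campaign adds one dimension per engager, which is why running tens of thousands of them compounds into a very rich data set.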
I wonder if marketing folks will even notice. Like Google Analytics, which is disproportionately blocked by smarter and more technical people. Marketers cheerfully ignore that, though. Will they even know that they're missing our data?
Is the Venn diagram of FB enthusiastic data-donaters and people who don't block GA just a circle? If so, are public policies and corporate marketing strategies going to be designed to cater to them and not us?
They could probably guess your preference, but prediction is way harder, as shown by the way so few people saw Clinton's loss coming in 2016.
A similar thing happened with Clinton in 2016 and with Brexit. Political polling has stopped working. It's not clear whether the polling organisations are simply asking the wrong people, or the wrong questions, or that people are no longer as fixed to their opinions as they used to be, or would prefer to hide their preferences.
It would seem to me that a lot of the stranger activity on social media is an attempt to fix that problem.
To some extent, it seemed like the 2016 results were a response by people who wanted to resist lazy categorization. Granted, it's not hard to be subtler than "deplorables", but I expect some amount of people not wanting to be strategized with.
It's like we're caught watching a tornado hit a garbage pile ... while the 'exit' sign is clear for all of us to follow if we want.
I think the answer to all questions Facebook-related is 'delete' / 'exit' / 'log off' and then to go ahead to Spotify and listen to some Ahmad Jamal from 40 years ago to put it in context.
Change starts with you. This is simple.
I did this 2 months ago. It's worked great so far. Most people have moved on to IG/WhatsApp already anyway. I know those are also FB-owned, but do this as a protest.
It's a totally sleazy company, but it actually provides a valuable service at the same time.
Why would I not just pay Facebook directly for such targeting?
This is how politics works now. It's not (just) Russia, or China, it's every political activism group or lobby that wants to achieve anything. Welcome to the new age.
Previous means of influencing politics involved NGO's and political parties actively working different demographics in order to get them to vote in the organization's interest. These organizations may loosely be considered managerial bureaucracies, whether they are labor unions, or activist NGO's, or political parties. Even large scale media campaigns conducted via mass psychology are essentially managerial or bureaucratic in nature, using mass organizations at a large scale.
The new means of manufacturing consent are not of this character. Rather than acting on mass groups through mass organizations, they directly target individuals and niche groups by leveraging algorithms and digital means. It's a different paradigm: mass vs niche, mass media vs targeted media, mass psychology vs individual psychological profiling, large bureaucratic organizations vs smaller technologically enabled teams, the management of people vs the management of algorithms.
It's two different approaches to power, and hence, two different elite groups. And when you have two different elite groups, you have conflict. It's a new world, a revolution in the making.
That was the entire value-add of Cambridge Analytica, whose Facebook-API data-gathering loophole has now been replaced by just engaging suckers via the platform itself and a tiny bit of NLP/sentiment analysis.
The most obvious one: with the security question type stuff, you can take over other people's accounts.
But collecting data about people is also useful if you're trying to spread an agenda. You can determine what types of messages resonate with an audience. You can group those people and target them separately -- not through ads, but through special interest accounts/groups. You can recruit people to amplify your message. You can even get people to act in real life, i.e. https://en.wikipedia.org/wiki/Internet_Research_Agency#Ralli...
If you can identify, categorize, and influence the loudest voices, you can influence public discussion and opinion.
AFAIK you absolutely can target (or could target) groups of people based on very specific criteria.
Targeted advertising lets you run a campaign that never could be run before... one where you appear to be something different to different people. If you were trying to seize control of three different warring groups, you could advertise to A, "We'll kill B and C!", to B, "We'll kill A and C!", and to C, "We'll kill A and B!", which you couldn't do in a stump speech without people figuring out the ruse.
By building a detailed psychological profile of individuals, you can build a model that allows you to tie their responses to these questions with the political messages they're susceptible to. Cambridge Analytica paid a few hundred thousand people to do a quiz where they shared their Facebook likes and answered questions about their personality. CA then used that to build a model that showed "People who live in Slidell, Louisiana and like Dodge Ram trucks will be most receptive to messages about illegal immigration and are generally supportive of state violence". Then they can run that ad to everyone in Slidell who likes Dodge Ram pickups.
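A minimal sketch of the kind of model described above, assuming the simplest possible approach (logistic regression over binary "likes" features). The feature names and training rows are invented; a real operation would have hundreds of thousands of labeled quiz-takers:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical binary features: does the profile like this page category?
FEATURES = ["likes_dodge_ram", "likes_hunting", "likes_yoga", "likes_nascar"]

# Toy training data: feature vectors from quiz-takers, label = 1 if they
# responded positively to the test message theme.
X = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

def receptiveness(likes: dict) -> float:
    """Predicted probability this profile responds to the message theme."""
    row = [[int(likes.get(f, False)) for f in FEATURES]]
    return model.predict_proba(row)[0][1]

truck_fan = {"likes_dodge_ram": True, "likes_nascar": True}
yoga_fan = {"likes_yoga": True}
print(receptiveness(truck_fan) > receptiveness(yoga_fan))  # True
```

Once trained on the paid quiz-takers, the model is applied to everyone whose likes you can see, which is what made the Facebook-API loophole so valuable: a few hundred thousand labels generalized to tens of millions of profiles.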
It seems the crux of the issue here is that people are being fooled into supplying data about themselves in a non-consensual manner.
The former has been happening in politics forever, and imo the latter has been every tech company's MO for the last decade.
Tools are not immoral.
Immoral uses of tools are immoral.
FB's targeting tools go to some lengths to only allow you to target by demographics on FB and associated properties. These questions, plus profile views, allow you to extract the information for external use, e.g. selling a list of <fb id, first car, birth year, favorite color, pet name, etc> tuples.
are now in the hands of companies like Facebook which by their own admission are driven all but exclusively by the necessary (sic) pursuit of growth via "engagement" at any cost.
This system is not just capable of, but biased towards the amplification of exactly that content which maximizes limbic system engagement, i.e., triggers the sub-cognitive emotional brain.
I.e. that content which enrages, titillates, and otherwise triggers reward centers.
Let's look at that again, stripped to the core.
Our society's primary mechanism for interpersonal communication,
is social networks, which by their own description depend on the purest possible amplification of that content which triggers us the most,
regardless of all other factors, including truthfulness, social benefit, coherency, utility to the commons, you name it, call it anything you like. The good.
Naturally one can individually work hard to use these systems for the good.
But the systems themselves have zero incentive to amplify you when you do; and every incentive to amplify the shit-posting trolls being paid by our enemies foreign and domestic.
That's not hyperbole; that's a simple statement of fact.
The end of this road, like the climate change that goes unaddressed in part because of these very mechanisms, is approaching more rapidly than we think.
A cognitive error to remember: we extrapolate linearly, and have no native ability to extrapolate exponential outcomes.
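The extrapolation gap above is easy to make concrete. A quick illustration (the starting values here are arbitrary): a linear trend and a daily-doubling trend that start identically are indistinguishable at first, then diverge by orders of magnitude.

```python
def linear(start, step, days):
    """Linear growth: add `step` per day."""
    return start + step * days

def exponential(start, days, doubling_time=1):
    """Exponential growth: double every `doubling_time` days."""
    return start * 2 ** (days / doubling_time)

# Both start at 100; the linear trend adds 100 per day.
for d in (1, 7, 30):
    print(d, linear(100, 100, d), exponential(100, d))
# Day 1:  200 vs 200        -- identical, no alarm raised.
# Day 7:  800 vs 12,800     -- already 16x apart.
# Day 30: 3,100 vs ~107 billion.
```

Our intuition is calibrated on the day-1 and day-7 views; viral amplification operates on the day-30 curve.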
The clearly visible end game in the US for unchecked perpetuation of bad-faith, high-performing "engagement" on Facebook and its properties, in particular,
is civil war.
Maybe it's cool for a while; maybe it's hot only in moments; but the edge of the cliff is visible and the intense push by bad actors of all kinds to push us off it is palpable.
And the mechanism remains Facebook, steered as it is by an amoral culture which sprang from and is perpetuated by a literally emotionally truncated high-functioning sociopath.
If you think that's wrong, the onus is on you to document how the company's behavior internal and external is distinguishable from one in which that is a precise definition.
I've said it here many times, I'll say it again:
If you work for them, it's time to quit.
If you do business with them, it's time to quit.
If you think you can't because <reason>, you're wrong.
The damage to yourself, and to our society, is profound and un-ending.
We remain at memetic war and it is reifying into a culture war on the edge of becoming simply a war.
Don't be part of the problem.
Get off now, and work to get others off, and work to unmake this.
Seems to avoid most of the garbage on Facebook but I can still use it to contact people or hit the marketplace.
I'm not sure that's less harmful to society than no policy. It may be causing more people to use it.