Hacker News
Facebook Decided Which Users Are Interested in Nazis – and Let Ads Target Them (latimes.com)
135 points by mnm1 28 days ago | 180 comments

Worth noting: the article's headline is one of those tricky situations where the summary isn't wrong, but should probably include more information.

FB allows advertisers to target specific topics, and they've been blacklisting objectionable categories. But the blacklisting appears to be manual, so while "nazi" isn't a micro-targeting category, things like Josef Mengele and a white supremacist punk band are.

Manually keeping up with and out-thinking objectionable content keywords is a perpetual arms race. If FB wants to win it that way, they'll have to invest pretty hard in that space if they don't want a story like this every quarter.
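To illustrate the arms race: a naive keyword blocklist (a hypothetical sketch, not Facebook's actual system) only catches the exact terms someone thought to list, so proper nouns like Josef Mengele or obscure band names slip straight through until a human adds them:

```python
# Hypothetical sketch of manual keyword blocklisting. The blocked-term
# set only ever contains what moderators have already thought of.
BLOCKED_TERMS = {"nazi"}

def allowed_targeting(term: str) -> bool:
    """Exact-match check against the manual blocklist."""
    return term.lower() not in BLOCKED_TERMS

print(allowed_targeting("nazi"))           # False: caught
print(allowed_targeting("Josef Mengele"))  # True: slips through
```

Every new euphemism or proper noun forces another round of manual additions, which is exactly the perpetual arms race described above.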

Why not just ask people for $1 per year to use FB, and then have non-targeted ads to cover the rest of the loss in revenue?

Because an insanely high percentage of people will refuse to pay $1/year. Also, because $1 does not come close to covering the cost difference; targeted ads typically bring in 2x or more revenue per impression and the average revenue per user per year in the US is around $25.

I don't know about that. The latest estimate I heard was that you would have to pay the average American $1k not to use Facebook for a year.

That doesn't mean they would or can spend $1000 on FB, but with the proliferation of subscriptions on app stores that are $10+ a month, I bet FB could get $10 a month from a large chunk of users and start to move towards other revenue models.

Ah but that is while it is free and presumably everyone else is using it. If it costs a dollar people would have to enter payment information all of a sudden, so it is putting a decision point in front of people where they have to expend effort and money (albeit a small amount of both) to keep using this thing.

If their friends seem hesitant about it maybe they won't sign up to pay for it.

This isn't to say they could make as much money with such a model, but if they were more concerned with the downsides of surveillance ad targeting capitalism as their revenue stream, they could certainly explore shifting to be a different sort of company - one with different, and maybe even fewer, revenue streams.

Funny thing, that: why do we never hear about a company that wants to stay the same size or shrink? What if Facebook decided they wanted to be half the size they are now expense-wise, got out of the ads business, and encouraged people to use their platform for fewer things?

> Funny thing that, why do we never hear about a company that wants to stay the same size or shrink?

I guess it's a rhetorical question, but if you went public it would scare away all the investors, right? Who would invest in a company that does not plan to grow, and might even shrink?

Private companies sometimes do that, though.

> targeted ads typically bring in 2x or more revenue per impression



A study quotes 2.68x earnings in 2009, but in my experience it's even higher these days.

Does that mean advertisers pay more for them or that they are actually more effective (or both, or other?)

Their operating costs would be massively lower if they didn't have to collect every inch of mouse movement etc. for targeting advertisements. So even with lower income, they might still be as profitable or more profitable.

Facebook made $48B in revenue last year. That's around $38/user (with much more coming from people in North America). Even $1/mo doesn't come close to covering it.

$1 is vastly different from free for a variety of psychological reasons. People will also be less likely to want to pay and then still see ads on every other post; you'd have to provide more value. Also, the fees from credit card companies are so high that micropayments don't make sense (yearly subscription charge + per-transaction fixed fee + per-transaction percentage charge).
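A rough sketch of that fee math, assuming typical card-processing rates (roughly $0.30 fixed plus ~2.9% per transaction; actual rates vary by processor):

```python
# Why a $1/yr charge doesn't survive card fees: the fixed per-transaction
# fee dominates at small amounts. Rates below are assumed typical values.
FIXED_FEE = 0.30     # fixed fee per transaction, in dollars
PERCENT_FEE = 0.029  # percentage fee per transaction

def net_revenue(price: float) -> float:
    """Revenue left after card fees for a single charge."""
    return price - (FIXED_FEE + PERCENT_FEE * price)

for price in (1.00, 12.00, 120.00):
    fee_share = 1 - net_revenue(price) / price
    print(f"${price:>6.2f} charge -> keeps ${net_revenue(price):.2f} "
          f"({fee_share:.0%} lost to fees)")
```

At $1, roughly a third of the payment goes to the processor; the fee share only becomes tolerable at prices well above micropayment territory.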

Because that wouldn't cover the loss in revenue. $1 per year and non-targeted ads doesn't equate to $50B+ in revenue growing 20-40% per year. Also, advertisers come to FB because they can be ROI positive with the targeted ads. Most advertisers don't want to run non-targeted ads because they would be ROI negative.

Zuckerberg would become Steve Case wealthy.

Facebook at $X/mo is AOL. AOL was a successful but fairly predictable business, whereas Facebook offers a promise of growth that implies more return.

Most people in the world have no credit card. I didn't have one till I visited the West.

It's a 'bad headline' if it will be commonly understood as something not representative of the content.

It's done on purpose by all publications, because the nuanced reality of most situations isn't inflammatory enough to drive clicks and create commotion.

Given this situation, i.e. that it's really hard to track down and manage the colloquial lingo used around the world for various things ... the headline is unfair.

"Facebook fails to stop advertisers from targeting some extremist memes"

"Facebook unable to tamp down extremists shifting lingo"

"Nazis by another name: advertisers target extremists using shifting terminology on Facebook"

There is definitely a more responsible headline here, and surely the editorial staff are capable of writing one if they wanted to.

Surely the staff at many publications are wary of this as well; it's one of the ugly pressures of business reality that 'someone' is enforcing.

As for FB ... this has to be a hard, whack-a-mole kind of thing. Sometimes I'm sympathetic to them; other times I think $100B and some of the best AI folks in the world should be able to mostly figure this out.

Much like rampant fake goods on Amazon ... Bezos can land a rocket by itself, but can't get counterfeit goods off of Amazon? ...

Not that I'd defend either company, but to be fair, landing a rocket is applied engineering and math. Invest enough to build it right, and it will land. Making subjective calls about "good vs bad" content or products, where what's considered good or bad may ebb and flow with current political/cultural whims, is not so straightforward. While "are nazis bad?" is pretty black-and-white, there are a lot of gray area topics. Not sure if adding magical AI fairies would help.

I disagree with both points.

"Counterfeit goods" are 'black and white'. Not 'maybe good or bad'. They're either counterfeit or they are not. Maybe tricky to track down in some cases, but ultimately - objective.

Yes, stopping it would probably be a fairly manual process involving people and overhead, but it's achievable.

Bezos spends $1B a year on his rockets - you don't think he can tamp down counterfeit goods for that amount?

I'm always shocked at people defending Bezos' flagrant hustling of counterfeit goods. He can stop most of it, he knows about it, therefore he implicitly chooses not to.


'Nazis = bad'? That's easy.

But this is not the subject at hand. You're missing the issue that the headline is misrepresenting the subject! Facebook has banned ads for Nazis, because that's easy.

We're talking about ads for 'Skinhead Bands' - or other ill defined groups. Well, Skinhead is a complicated subgenre, it's not black and white. But ok, ban that in ads. What about 'Reggae Skinheads'? 'Black Metal' - wait, is that a Nazi thing in Europe, or just basically 'metal'?

And what about terms in France? Ukraine? Algeria? Every culture has a broad set of terminology, sub groups and political dynamics that make this thing fairly difficult to interpret.

More important is the underlying fact that it's subjective. Nobody can really satisfy all voices because we all have different notions of what 'wrong/evil/bad/offensive' is.

Just like there is effectively no way to ban 'hate speech' on FB because there's no globally accepted definition of what that is, kind of the same applies to ads.

They can stop Nazis, but maybe not some weird 'bad' skinhead subgenres that popup now and again.

Counterfeit goods are a good example of where it's not black and white. Say Louis Vuitton runs a factory, and its workers sneak back into the factory, make a couple of handbags, and then sell them themselves. The product is literally identical to what Louis Vuitton sells, but isn't sold by Louis Vuitton; I would say that is still a genuine "Louis Vuitton handbag". On a second level, whether counterfeiting is bad is also not black and white: counterfeiting makes it harder for companies to sell their products, but it also reduces prices for consumers by increasing competition.

Can Facebook tell the difference between interest in studying history and interest in repeating it? Interest in a skinhead band seems to only have one interpretation, but couldn’t Goebbels and Himmler search queries just be to learn and see who is discussing them? I know I’ve spent a lot of time reading about WW2. I’m not trying to be obtuse, just ignorant to what degree Facebook can target different intents behind a query or interest.

Not all skinhead bands are nazis. It might be seen as a minor point but I am worried how quickly we make sweeping appraisals based on partial information.

Early skinhead culture and music overlapped heavily with Jamaican rude boy culture.

There's even a group that is (was?) dedicated to skinheads fighting racism. https://en.wikipedia.org/wiki/Skinheads_Against_Racial_Preju...

I didn’t know that. You’re right; I made an assumption without researching the band or the genre.

In fairness, the interest of Facebook is to target ads, so I don't think they really care. An impression on a user with Himmler interest is an impression on a user with Himmler interest; a click is a click.

Now the folks monitoring at DHS? Yeah, I think they probably go to great lengths to try to differentiate and segment the population of people who look at extremist material of any kind.

Actually being interested in skinhead bands might also have a legitimate scientific intention when studying political extremism. Intention and interest are very difficult to distinguish.

This is a great point. How many high school kids doing school history reports are getting funneled into nazi indoctrination?

And you've described why I use DuckDuckGo when I want to learn about something like that. I have no idea how Google is interpreting my intent, if it is at all, and how it might come back to bite me in the future.

Do you think being a skinhead is a requirement for listening to a skinhead band?

I think it's absolutely a datapoint considered by any organization that is trying to build a comprehensive profile of citizens to catch people like this before they hurt anyone.


My understanding is that they rely on the reporting system because their whole curation model is built around assessing individual posts rather than patterns of activity.

I went through a phase where I listened to a lot of Hitler speeches on YouTube, just because I was historically interested in what the man actually said and how he said it. He's a fascinating person to study - also from a psychological point of view - precisely because of the horrible things that happened. Anyway, after that, YouTube kept recommending me nazi speeches for at least half a year. I remember how on some days I opened YouTube and Hitler kept reappearing on my front page. Now, I'm not a sensitive person so I don't mind, it's actually kind of hilarious. But after months of this, it irked me a bit.

Indeed, someone may have a very strong interest in Hitler and Nazis and neo-nazi culture without being one - e.g. a professor or researcher of authoritarian history.

I read a bit of history. I was once derided by a self-important house guest for having a copy of The Rise and Fall of the Third Reich with its visible swastika on my bookshelf.

She was wearing a T-shirt bearing the face of noted mass-murderer Che Guevara.

What's the expected fix here?

I was unfamiliar with the bands and most of the people in those targeting lists.

The way I see it, these are the ways you can handle this:

1) Facebook builds this data the hard way. They staff a team of experts on "undesirables", who research and implement custom blocklists at facebook's scale. Insanely cash and time intensive, to say nothing of the "who decides what's undesirable" problem.

2) Spread cost and effort by amassing a central repository of known baddies, to which all the orgs contribute and share access. The government does something like this with hashes of sex trafficking imagery, so that eng teams can filter against a blacklist. But I think this topic is FAR more nuanced and less binary than "does this picture contain illegal pornography or nah". Who maintains this list of undesirables? You're at "social credit score" in a hurry.

3) Algos. You let software extrapolate commonalities from known-bad actors – school shooters, confirmed Russian propaganda branches, etc. – and let the machine learn their language and flag accordingly. This is going to be coarse and stupid in the way ML always is, and local business owners with names like Heinrich are gonna get their livelihoods smashed accidentally here and there. Not great.

4) What Simulacra said – you just turn the whole targeting infra off. Facebook stops making money. This is great, I'd love to see it as regulation, but it's a big stretch, and very lofty when phrased like this.

5) Some kind of adtech equivalent of finance's KYC (Know Your Customer) regulation. Tie ad buys to confirmable, prosecutable identities, and rather than filtering before launch, aggressively follow up after launch. You run an ad campaign for nazis? Cool, your LLC and its primary stakeholders are permabanned. Facebook has already tried light versions of this, but it was lip service.

IMO 4 and 5 are the places to spend effort. I think we need to start having conversations that do away with the idea that humans are autonomous and impervious to influence, and start having the discussion in a new context: when and how are you allowed to manipulate the minds of citizens at scale, and what kind of paper trail does it leave?
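For what it's worth, the shared-repository idea in option 2 reduces to exact-hash matching, which is easy to sketch and also shows its limits. This is a hypothetical illustration; real systems of this kind use perceptual hashes to also catch near-duplicates:

```python
# Sketch of a shared blocklist of content digests (option 2 above).
# The digest set is hypothetical; in practice it would be maintained
# and distributed by a central clearinghouse.
import hashlib

shared_blocklist = {
    hashlib.sha256(b"known bad payload").hexdigest(),
}

def is_blocked(content: bytes) -> bool:
    """Exact-match filtering: cheap, but trivially evaded by any edit."""
    return hashlib.sha256(content).hexdigest() in shared_blocklist

print(is_blocked(b"known bad payload"))   # exact copy -> True
print(is_blocked(b"known bad payload!"))  # one-byte change -> False
```

This works for literal re-uploads of known material, which is precisely why it maps so poorly onto the fuzzier question of "undesirable" targeting terms.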

I don't understand why we don't let such platforms adopt a more laissez faire approach to such situations. There's an inordinate amount of pressure to curb free speech these days which seems very un-American.

I was in this camp for a while, but the current political reality has shifted me towards viewing the normalization and acceptance of such hate speech as a negative to society. When I was growing up, nazis and white supremacy were always framed in a negative light to make it clear they were wrong (this matters most in the developing years, under 14 or so, when children don't have a developed moral compass). Nowadays the extreme right is treated as just another opinion you can have; normal adults recognize the danger associated with unrestrained nationalism and hate speech, but young people are unexposed to, or unfamiliar with, what it can lead to.

This stuff is dangerous; it can warp the way you view the world, and any information glorifying it should always be accompanied with explanations and warnings as to the hate it is imbued with... It is un-American to hate another person because of their origin or their religion, and it's also un-American to squash an open discussion on that topic, but that discussion needs to happen between mature adults in a setting that makes it clear how unacceptable it is to lean on racist tropes.

Can we be clear on this, please? It's inhuman to hate another for their origin or religion - not just un-American. This problem isn't about whether or not you're a good American, it's about whether or not you're a good human.

Hating people from outside your tribe is as human as any behaviour gets. It's a behaviour we have to work so hard to suppress precisely because it's so natural.

It's a throwback to our feral ancestry. I'd contend that the urge to overcome our animal nature is the bedrock of what makes us human.

I absolutely agree with this while acknowledging that in some circles being inhuman carries less weight than being un-American. I'm currently in Canada anyways[1] so... good luck down there folks!

[1] Expat, still a US citizen.

An easy way to encourage unthinking hate towards a group is to falsely tag them with the term "nazi". Be careful here.

Nazis and white supremacists are still shown in a negative light. In fact, I'd say it's even more negative now, with people openly suggesting violence against them ("punch a nazi"...). I think the problem with all of this, and one that we're experiencing now, is the widening of the definition of nazi and white supremacist. That's where the danger lies and that's where free speech is most valuable. Once you say it's ok to censor nazis, the game changes to redefining your opponents as nazis.

A lot of people will say people who wear MAGA hats are white supremacists for instance. It's a slippery slope, and a dangerous one.

I agree.

I think the only lasting solution to this is to raise a populace smart enough to be impervious to the radical left and right, both fringes being the domain of sloppy thinking and emotionally driven agendas. And in a way that keeps the flywheel of education spinning. We do a bad job of that today, I think.

But that takes time, strength, money, and – most critically – unified vision that I'm not sure America has right now.

So how do you implement education reform that takes 50 years, when nobody even agrees that it's needed, and kids are getting killed and radicalized today?

Do you allow it to continue in defense of the underlying principle of truly free speech? Maybe. To abandon that principle is a terrifying slippery slope.

This is just one of the ways where tech and culture vastly outpace science and regulation.

I have no answers.

I generally agree with you, and I'm trying to parse through this myself. I think one difference in the modern era is that the Internet and social media have blurred the lines between broadcast speech and individual speech. Historically, we recoil at any restrictions on individual speech. They exist (can't yell FIRE in a crowded theater), but they are generally considered an unfortunate necessity. However, most people agree that broadcast speech should be censored.

Now we're in an era where speech can start out as individual, and then be broadcast. Or perhaps it's better considered a false dichotomy in the first place. Either way, we definitely haven't figured out how to handle this as a society.

Social media is in a way a hack, similar to money and property law, where you can carry or own more than you naturally could without losing it or having to leave it behind; only this time it's local space hooking into global space and vice versa.

Smashing natural boundaries like that creates anomalies that we have to learn to deal with. I think it's also something that will enable us to spread to other planets sooner or later, hopefully.

Hacking natural boundaries and overcoming the self-regulating mechanisms that would otherwise take place could be painful in a way and bite us back. But just think about the global exchange of knowledge, people getting interested in things they never thought they'd have access to under pre-internet circumstances. A hell of a ride indeed.

Great comment, thank you.

Letting the government use its monopoly on force to stop speech is un-American. Holding Facebook accountable for the voices it chooses to amplify and profit from is the market/society at work. Free speech was never something that makes distributors not responsible morally or ethically for what they distribute.

Separately, hate speech crosses over into a form of speech many societies, America included, have decided can cause direct harm and needs special treatment (including legal restrictions). People can disagree about that, but should probably get that disagreement sorted before diving into the additional questions of speech versus distribution.

Uhh, that's what we have been doing the whole time. That's the status quo. We're talking about changing it because it's caused massive societal problems.

In the specific context of Facebook ad targeting, reports and investigations concluded that it was used as a propaganda channel in the US by foreign national agents attempting to tip the scales in the US Presidential election in 2016. "Why is that a bad thing" or "Why is that Facebook's problem" is a reasonable question, but it won't be seen as popular in the US if Facebook accepts those questions publicly as its business attitude.

The US has always had effective limits on broadcast speech, a history of bans on pornography and "communist" literature, as well as a number of quasi-self-imposed rules (Hays Code, MPAA ratings, Comics Code, "color bar" on radio and live music, etc). Effectively you could have very free speech so long as it was small-scale, but anything sufficiently controversial or offensive on a mass scale attracted attention.

Un-American maybe, but they’re global. Here in Europe, their behaviour is highly controversial and I believe that regulation would be popular.

Really? I've witnessed zero efforts on behalf of the government to curb free speech. But if you're talking about private speech that happens on privately owned websites, then that's flat out what is allowed by the constitution.

Seems to me that the issue is actually with private ownership being able to censor. Which can only be addressed with, guess what, government intervention.

"Inordinate" is in the eye of the beholder. Free speech is curbed all the time in the interests of maintaining smooth operation of society, and as even the most intelligent among us tend to forget, free speech is not guaranteed on platforms like Facebook. There are quite a few laws on the books about hate speech, and how that extends to, say, Facebook ads, is what's being discussed here.

"Free speech is curbed all the time"

No, it really is not.

"There are quite a few laws on the books about hate speech"

In the USA I believe there are zero. [1]

There's a misconception that free speech only exists, somewhat arbitrarily, as a Bill of Rights guarantee. In fact, we actually value free speech even when granted by a private entity.

1: https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...

...6) Full transparency at scale. When someone buys an advert in a newspaper or TV broadcast, it is visible to anyone watching.

In contrast, thousands of adverts can run in FB environment without anyone but the target able to see -- completely under the radar.

Start with #5 (KYC), and add a site where EVERY advert of every type is available for public inspection, along with its (verified) originator info and targeting parameters.

This would allow all kinds of scrutiny by journalistic and public interest groups (e.g., researchers tracking hate groups, etc.).

FB is making a bit of a start at this by publishing some ads, but until they get to full transparency, it can't be trusted.

The full transparency would also be a huge benefit to researchers.

I'd also be happy to see it required as a regulation for all players, not only FB.

I'd argue the rule should simply be that, the moment money changes hands, you can either tell me who paid you for a service, or you're legally liable for performing that service yourself. KYC for ads would then just be a natural consequence of this.

I would expect some combination of (1) and (3).

Facebook already necessarily employs a small army of moderators to remove illegal and "undesirable" material; they must have supervisors who set the overall policy direction and deal with new, emergent problems.

Companies that use social media on a large scale, and those companies that run social networks disguised as videogames, employ "community managers", whose job it is to understand and communicate with the community, including keeping abreast of disruptions. It doesn't seem that Facebook itself has many of these.

Facebook should get itself some "machine anthropologists", to study the ant farm. They can then get a sense of these problems before they get in the press, and definitely before they get to the Parliamentary committees. And feed the existing algorithms.

> Facebook should get itself some "machine anthropologists", to study the ant farm.

This is such a cool job title, I love it. I've long dreamed of doing this job at Twitter. There are so many blatantly patterned spam attempts and whatnot. I would love to work on analyzing, for example, the patterns in follower graphs surrounding templated bitcoin scam tweets.

The only viable solution is #4. So long as Facebook can make money by targeting specific groups or behavioral features, their algorithms and advertisers will find facsimiles for protected groups, the vulnerable, children, or Nazis, since this optimizes revenue and engagement in dramatic ways. Even the most well-intentioned actor -- and Facebook is very far from that -- could not win this self-imposed game of whack-a-mole.

One alternative to shutting off targeting altogether would be switching from a blacklist to a whitelist approach, where regulators provide the set of features or groups that are allowed to be targeted.
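The whitelist idea is trivial to sketch: anything not explicitly approved is untargetable by default, so new euphemisms never need to be discovered and blocked. The category names below are hypothetical:

```python
# Sketch of whitelist-based targeting: a regulator-approved set of
# interest categories, with everything else rejected by default.
# Category names are made up for illustration.
ALLOWED_CATEGORIES = {"cooking", "soccer", "gardening", "jazz"}

def targetable(requested: set[str]) -> set[str]:
    """Keep only categories that appear on the approved list."""
    return requested & ALLOWED_CATEGORIES

# An unknown extremist dog-whistle term is dropped without anyone
# having to recognize it first.
print(targetable({"cooking", "obscure-extremist-band"}))  # {'cooking'}
```

The trade-off is the mirror image of the blacklist's: nothing bad slips through, but every legitimate new category needs explicit approval before it can be used.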

Can we just have a social network that doesn't target anyone, for anything, and just lets us communicate, and build relationships without trying to manipulate us or violate our privacy?

Sadly we already have such a thing. It's the Internet itself.

It's quaint how in movies people easily identify with the underdog/resistance, e.g. a "Neo" in the "Matrix", and would see themselves taking the same direction if a similar scenario would ever play out.

Yet here we are, and at the first hint of inconvenience it's blue pills all around.

If it helps the understanding, it's worth remembering that life outside the Matrix was a hardscrabble garbage existence, so much so that a character was willing to kill to get back in.

Any social network that has people posting any personal data at all, will be mined. For one purpose or another. (And probably for one purpose AND the other.) Almost the only way to stop it would be to make it explicitly illegal, and even then they would still allow mining for certain purposes. There's just no way to get away from it at this point. (Other than just not using social networks at all I suppose?)

Putting personal data out there means... putting personal data out there; advertisers mining that data is a hard problem to solve. So can we re-orient the need for a solution to the injection of advertising into the platform? Let's solve this problem one step at a time: first stop advertisers from leveraging the platform to distribute advertisements, while only allowing them to mine the data to support ad campaigns. Once that's done we can look at salves for the data mining of social media, though I can't see that being possible without really resilient privacy enforcement.

Yes, but you would have to pay for it. And convince your friends to also. Actually maybe the subscription model would let you pay for their subscriptions as well.

It's funny because the idea of paying for it reads like a non-starter. Maybe it is. Crazy times when we'd trade away so much for so little vs just paying some nominal amount.

But I sometimes wonder if it's just as much a convenience thing as it is the actual money. That is, if the sign up process was just as easy with payment as it is now, would a lot more people go for it than assumed (at some price point)?

In any case, I don't think that pay or have your privacy co-opted are really the only paths to a viable social network business model.

Signing up or putting in financial information is not the hard part; the hard part is the anxiety of constantly having a stream of money leaving your account every month, bit by bit. We try to avoid this as much as humanly possible.

>the anxiety of constantly having a stream of money leaving your account every month bit by bit

I don't know that this is true writ large. Monthly budgeting is, essentially, how most people operate, whether for expenses of necessity or choice. That is, much of our financial lives are oriented around (largely monthly) cash-flow management.

So, we prioritize what's necessary or important to us, then add it to the mix. I don't know why a subscription to a service like FB would be any different than, say, Netflix (if the price were "right").

OTOH, there's an ease in just slipping into something with an e-mail address. So, I believe the friction of going from no payment to any payment is the much bigger leap to overcome (again, provided the price in question is reasonable).

But, there is also now the notion to overcome that some things (like a social network) "should" be free. But, that's another topic.

Netflix fits what i'm saying exactly.

1- many people on here have said they don't want to keep track of any more streaming/subscription services beyond netflix (showing even netflix is too much)

2- netflix makes it as easy as possible to sign up and let other people join via family profiles because they know this is a true problem.

The reason people budget in the first place is precisely because they have an anxiety at the back of their mind about where their money is going.

Email never stopped existing.

I don't understand why this isn't the top comment.

Because email does not do as good a job as social media for most people. If it were so great, how was there sufficient demand for social networking companies to grow large? Even in the 1990s people used Usenet, because social discovery and public conversation over email is cumbersome.

Real life?

Ha! Touche, and a very good point.

Phone, email, texting, real life activities? Where did the obsession come from with having to be in real time constant contact with a ton of people who we didn't care about staying in contact with before? Other than a common place to share photos, which existed prior, why?

We had that. It was called LiveJournal. And everyone stopped using it.

Wasn't that exodus precipitated by the same sort of "undesirables" reputation? Though I'd much rather have a thousand Harry Potter erotic fan fiction authors than a handful of neo-nazis sharing my website.

.. because it was bought by a Russian parent company who brought in restrictive policies. And, obviously, surveillance.

Don't these exist already? [1] Other than that, I can relate to the other comments of just doing away with social networks on the web and replacing it with real-life communication via talking, phone calls, emails, or text messaging. As I get older I realize how much I miss the traditional ways of communicating without being inundated and distracted by constant notifications from present day social networks.

[1] https://diasporafoundation.org/

I deleted my fb+twitter several months ago. It was lonely at first. I realized I had let social media 'automate' my social life away. Since then I've been trying to foster actual friendships through the 'normal channels' like talking, hanging out (when I can make the time) and sms. It feels a bit Luddite in a way, and I feel less like I have 600 friends, and rather like I have 6 friends.

It hasn't been some revelatory experience like some would suggest; social media does make things easier, and life without it feels quieter. But like anything else, it's a trade-off. The biggest positive I've noticed is that I find myself less dopamine-addled. I no longer waste hours on my phone scrolling in order to relax. It took some adjustment time, but I find it easier to get my dopamine fix from more productive hobbies.

I quit FB and Twitter nearly 4 years ago. Now I waste that time on WhatsApp, reddit and 4chan. I figured the problem was more with me than with the media I was consuming.

That would, at the very least, mean that users would pay a fee to use their product. As of now, as a free product, that is the only way to monetize. The internet loves its free, open products but at the same time, ironically, loves to build products whose founders are billionaires, and even raises them to near-hero status. It culminates in the users having a pretty terrible experience in the way of manipulation and privacy.

Commercial radio and broadcast television worked without fine grained metrics on the users. There's nothing special about the internet that makes invasive tracking a necessity.


How are they doing now?

It's a cat's-outta-the-bag situation. What DOES make invasive tracking a necessity is that it works so much better than the free models that came before it. If you're not doing it, your competition is, and they're getting one up on you.

I don't think we have a free model that monetizes more efficiently than invasive tracking, so the only option you're left with is a not-free model. And for consumer-social, "not free" is a small niche (eharmony comes to mind as making it work, but even they bought all the free dating companies because the free ones were eating the market...)

Really though? Advertising supported multi-billion dollar industries for decades without mining our personal info. Just because they can doesn't mean they have to. If companies want to advertise to Facebook's hundreds of millions of users, they will do so without targeting if that's their only option.

Why can't ads (again) be targeted to content and not users? If I'm reading an article on a Tesla I might be interested in what VW is offering instead - and might not be interested in that headset I looked at a week ago.
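To make the distinction concrete, here's a toy sketch of contextual (content-based) matching: the ad is chosen by overlap between the page's text and an ad's keywords, and no user profile is involved anywhere. All names here (`match_score`, `pick_ad`, the sample ads) are invented for illustration; this isn't how any real ad server works.

```python
def keywords(text):
    """Lowercase word set of a text snippet."""
    return set(text.lower().split())

def match_score(page_text, ad_keywords):
    """Count how many of the ad's keywords appear in the page content."""
    return len(keywords(page_text) & {k.lower() for k in ad_keywords})

def pick_ad(page_text, ads):
    """Pick the ad whose keywords best match the page - the reader is never consulted."""
    return max(ads, key=lambda ad: match_score(page_text, ad["keywords"]))

ads = [
    {"name": "VW ID.4", "keywords": ["electric", "car", "tesla"]},
    {"name": "Headset", "keywords": ["audio", "gaming"]},
]
page = "A review of the new Tesla electric car"
print(pick_ad(page, ads)["name"])  # the car ad wins, because of the page, not the user
```

The point of the sketch is that everything the matcher needs is already on the page, so there's nothing to track or retain about the person reading it.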

And, if your friends are anything like mine, only a small fraction would be willing to pay, making it a pretty lousy social network.

"At the very least"? No, I think the third option is an advertisement model that simply shows users ads without abusing their data for 'optimization'.

Or they'd have to run the software on their own machines.

How about Mastodon? [1] It's free, open source, and federated like email.

If you're hoping for a commercial, centralized social network, then you're always going to have to deal with perverse incentives.

[1] https://joinmastodon.org/

It can still be mined, and the mining is the crux of the problem. If you can mine the data, then you can use it to target ads, use it for law enforcement purposes, use it to stifle dissent, etc etc etc. You could also use the mined data for any number of good things. It really is just up to the miner.

Point is, it's the mining that lays the foundation for the dangers, and there is little that prevents facebook, or twitter, or whatever entity that wants to mine mastodon instances from doing so.

With Mastodon you can run your own server, so you can make it invite-only and disallow data miners from joining.

The more secure and exclusive your server the less utility it provides to the network at large. Great for privacy, useless if you wish to connect with a broader community. Also assumes that both internal and external security are solid. They're probably not, and covert mining isn't that hard.

wish to connect with a broader community

This is no longer in the realm of technical problems. If you want to connect with a "broader community" (i.e. people you don't know) then you don't get privacy. To say you do want privacy while also getting to connect with strangers is to insist upon the trustworthiness of strangers.

Yes, I'm inclined to agree. Seems like this is the only way to instil awareness in people what the tradeoffs are.

Even in a decentralized/federated network there will always be people trying to mine data for the purposes of advertising and political campaigning. It's human nature to try to take advantage of others for personal and financial gain.

“It's human nature to try to take advantage of others for personal and financial gain.”

Is it? Or is it the perverse financial incentive structure that leads humans to put financial gain over all other priorities?

Would humans still make decisions to take advantage if the incentive structures were built differently?

> Would humans still make decisions to take advantage if the incentive structures were built differently?

Human history proves that yes, they still would.

That seems way too general of a conclusion to be adopting as a premise. At the very least it ignores the distribution of behavior in a population.

The network effect is strong, and if your family is on Facebook it can seem rude to post about your life elsewhere where they lack access to it. I'd love to see a government mandate forcing cross-pollination of these networks (perhaps using Diaspora's node approach) that would allow smaller hosts to compete with the big guys by letting people pull in feeds from other networks. Basically we need XMPP for social media, but none of the social media companies have any motivation to implement it, so ideally the government steps in at some point.


How does any of that stop the data mining?

It's the data mining and the ad targeting that people want to stop. And the FB competitors could be mining data just as easily as FB can.

It doesn't directly stop the data mining, but it makes that mining less profitable for the social network, which reduces the host's incentive to allow it.

If people can jump ship they'll slowly migrate to ad free platforms, then we can attack the issue of data-mining with regulations at the government level to force a cessation of private data collection.

>If people can jump ship they'll slowly migrate to ad free platforms...

Here's the problem: ad-free does not mean mining-free. These other platforms could still be mining your data with you none the wiser. In fact, it's a virtual certainty that they would, at minimum, for law enforcement purposes. (Which would entail stifling dissent in certain nations.)

One of the only ways to cut down on mining is to make it explicitly illegal. And again, even then there will be exceptions carved out by Congress. But that would at least provide most people a bit of privacy in their personal data, just as HIPAA provides us a bit of privacy in our health data.

Again, I absolutely don't disagree with you. I'm just putting on my project management hat and saying "We've got two problems, one is hard and will fix everything, the other is easy and will fix 95% of the stuff, let's do the easy one first" I'm all for GDPR like protections (but ideally a bit better thought out) in the US, but given the entrenchment of these social media networks I think it'd be hard to pull off politically.

Zuckerberg has yet to try to mobilize Facebook for a political end; when he tries it I hope it backfires and he ends up shooting himself in the foot, but it may work. Attempting to outlaw data mining and resale may unleash a torrent of angry voters that rally against anyone associated with pushing it forward and get them voted out of office - or cow them into submission. Either way I think the best first step is to force the platform open and erode the network effect that Facebook currently controls, then let them lose power and enact stricter privacy laws when the public is ready for them.

If I were King of America though, I'd totally just go your route.

How is that supposed to impress stakeholders?


Nothing like this can exist without some money for the owners to maintain and create it. Nothing's free. Why can't people understand this?

OP didn't say it had to be free. You could also have a social network that just displayed ads without any kind of targeting and user-tracking whatsoever.

... it's just that in the marketplace of ad channels for advertisers to invest their budget in, that network loses out to networks with targeted ads.

Who's to say that if the general public were presented with two free (to the user) social networks, one that targets ads and one that doesn't, people would opt to use the targeted one over the non-targeted one?

My snarky answer is "Google vs. DDG's usage numbers," but a more serious answer is that yes, that'd be a possibility (but I'd sadly put it on the low end of probable outcomes given what we know about both the stickiness of existing social networks vs. newcomers and how much concern for privacy users actually tend to demonstrate).

Fair! :)

Available evidence suggests that enough people want to be on whatever the biggest platform is for that to give it a decisive advantage over the competition.

Who would pay for that though? And why shouldn't it manipulate? I'm fine with manipulating people. Nudging is okay in my book if you're benevolent and aiming to make people's life better. It's just shit when all you want is to make them buy some product to fill the void you've helped create.

>And why shouldn't it manipulate?

Why should it? And why not have a social network that just serves up ads that aren't based on tracking/targeting users?

Sure, that's possible, and won't make as much money. Now you're talking about ethical business practices. Good luck!

If it makes enough money for the company to sustain itself, then I feel like that's good enough for what OP is looking for. Not everything needs to be about continued growth of profits.

It's idealistic at best. Many people say this until they see the money come in and are pressured to make more. You just can't grow into such a monstrosity without stepping on morals along the way. For a social network to even work, it needs to be gigantic, or why would anyone use it?

I think a lot of the public is getting fed up with morals being stepped on. The demand for a not-immoral social network is huge.

Btw, smaller social networks are great. Early days of reddit were amazing, and it didn’t get better when it became mainstream

I think the general public hasn't got a clue, or doesn't care if they do. We're vocal here, because we understand what is going on. Smaller social networks are great, for niche areas. Facebook is a glorified address book, which you need EVERYONE in to be effective.

Because manipulation is great. You can help people by manipulating them, you can measurably improve their lives. That's a big reason for me. That's like "why should we heal the sick?" You can use a scalpel for good and you can use it to hurt people. There's no use in just saying "we shouldn't use scalpels, ever" to me.

Ads that are completely random won't pay the bills, and ads that are content-based (without looking at the user) might, but will also target the user, they just won't follow them around. If I want to reach people interested in the Nazis, I'll just put my ads on pages dealing with Nazi stuff, because that's what people interested in Nazis tend to look at and other people don't.

>Because manipulation is great. You can help people by manipulating them, you can measurably improve their lives. That's a big reason for me. That's like "why should we heal the sick?"

Are you really comparing targeted ad manipulation to medical professionals healing people? And yes, it can help improve lives, but there's also a flipside to what it can do (read: Cambridge Analytica, Russia, etc.). There are negative connotations to manipulation that you need to keep in mind.

>Ads that are completely random won't pay the bills...

It's worked for TV and radio for decades. They're still somewhat targeted but in a far, far less invasive manner, and they're not individualized.

I would've replied sooner, but the silly "oh no, you've posted twice in one hour and somebody didn't like what you said, go cool off for 12 hours" function didn't allow it. Talk about manipulating users' behavior.

> Are you really comparing targeted ad manipulation to medical professionals healing people?

No, I'm comparing beneficial targeted manipulation (notice the lack of the word "ad") to healing people. You CAN use a knife to murder somebody, but you CAN also use it to perform life-saving surgery on them. Advocating for banning all knives because you don't want people to murder seems weird to me.

Facebook literally tested what they can do with targeted manipulation [1], and they found that they can do both good and bad with it. It freaked the public out, and so do knives, and I get that. The problem isn't that you can use it only for evil; the problem is that FB & their customers don't want people in a good place mentally and emotionally, because then you're less likely to buy things you don't need or spend hours on their site. And that's what was suggested: have a social network that isn't run from the Bay Area trying to maximize profit extraction, but one that's user-focused. You know, where the user is the customer, not the ad agency.

> There are negative connotations to manipulation that you need to keep in mind.

As there are to using a knife on people. I'm saying "let's ban the bad part and use the good part". Facebook and Google probably know more about their regular users than those users' family, psychiatrists, psychologists and even the users themselves. Are you aware of the potential good they could do with that? Yeah, we won't solve mental health issues by manipulating people, but we'll make things a whole lot better, and we can measure that.

> It's worked for TV and radio for decades.

No, it really didn't, because they barely used random ads. Most shows' demographics were very specific, and they didn't run car ads during the Sunday-morning cartoon hour. Hell, they created show concepts to target demographics so they could then sell ads reaching those demographics.

I suppose suggesting to use technology for good doesn't go over well with this crowd. More and more I feel that we're letting the wrong people design, build and steer the technology that will shape the future.

[1] https://www.theguardian.com/technology/2014/jul/02/facebook-...

> Nudging is okay in my book if you're benevolent and aiming to make people's life better.

"Better" according to whom?

Are there really different ideas about this? I mean, there might be some grey area, sure, but in general? Is there an argument for example that depression is great and shouldn't be touched?

Am I the only one who doesn't mind the targeted ads on FB? Even from a site I visited recently, it doesn't really bother me. I actually like seeing products I'm interested in as opposed to generic targeting like you have on TV. I've discovered some pretty compelling products this way.

I feel the same, but at the same time I understand that not everyone feels that way, and that there should be easy ways for those people to not have to be subject to it.

Sadly any time this kind of thing gets discussed, the nuance gets thrown out and the discussion becomes either "ban all targeting" or "don't regulate anything", both of which are horrible ideas IMO.

That doesn't really seem to address the issue at hand, which concerns allowing people to advertise to Nazis.

At least I hope it doesn't.

Just stop using Facebook. Stop putting the onus on Facebook to fix all of this. We just need to stop using it. There are alternatives out there; start using them.

These articles are always going to come up. We're going to act surprised for 5 minutes, and continue to feed the machine.

After seeing these comments, I think we have it all completely wrong (that is, the mental model of those of us who wish FB stopped existing). Don't ask people to stop using Facebook. Encourage them to use it more.

We should instead call out the people who fund Facebook as sponsoring child abuse. [1]

And those who work at Facebook as inciting pogroms. [2]

And finally, those who defend Facebook as "dumb fucks" because, well, that's what they are anyway according to Mark Z.

At the same time, don't ask anyone to stop using Facebook. In fact, they should use Facebook so much that they bring down Facebook's servers. [3] Encourage the low ARPU "deadbeats" to keep using Facebook and its network as much as possible.

Just call out everyone who is giving them money [4].

Let's see how long FB operates after that.

[1] https://www.npr.org/2019/02/21/696430478/advocates-ask-ftc-t...

[2] https://newrepublic.com/article/147486/facebook-genocide-pro...

[3] https://www.jbaynews.com/whatsapp-crashes-almost-worldwide-o...

[4] https://m.signalvnoise.com/become-a-facebook-free-business/

I just don't understand why Facebook doesn't provide a subscription option. I would be willing to pay $4.99/month or perhaps a bit more in exchange for no ads and no tracking.

What is so wrong about being able to pay for things?

It blocks you from seeing the future. If they get money and no data, they're getting paid for who they are today with less insight into who they need to be tomorrow to keep getting paid.

The idea that data is "just sold to businesses" and can be substituted for its sale-value equivalent in cash is wrong, IMO. Serious insight, product directions, election swinging power, really big shit – are all emergent properties of data in aggregate like this that are somewhat unknowable until you actually get the data and see things unfold.

So what they're actually keeping, by choosing data over cash, is priceless long-term optionality.

It won't be a $100B+ company. And that's important to them.

If you let people pay for an ad free service, you just removed the most valuable users (target demographic) from your advertising pool.

The problem with algorithms and such that show you what you're interested in is that sometimes they work.

The math and computers don't care, we have to care unless we want to facilitate just about anything.

Couldn't agree more, and the real problem is that Facebook (would be the same for any other company with a similar ad targeting model) is now in the position of arbiter of what is acceptable and what is not. Perhaps a punk white-supremacist band is off limits, but what about joining "maga" with "Infowars" with "Insane Clown Posse"? You'll likely be targeting many of the same people anyway.

So let's step away from the "Nazis" part for a second, and rewrite the headline: "Facebook Decided Which Users Are Interested in ____________ -- and Let Ads Target Them Directly".

Isn't this how it is supposed to work?

So let's go back to the "Nazis" part of this. Yes, Nazis, Neo-Nazis, white supremacists and racists of all stripes are repellent to most of us. Are we agreed on that? Good.

How is Facebook supposed to differentiate between somebody who is an admirer of Nazi icons, and somebody simply doing research on them?

Is somebody supposed to be curating a list of what's "acceptable" for people to like and not like? Who's in charge of that list? What happens when those people leave and are replaced by different people?

What happens when something deeply embarrassing to Mark Zuckerberg or Facebook takes place and starts getting attention? Should that be on the list? And if somebody places it on the list, what recourse does anybody outside of that company have to remove it from that list?

Popular, happy speech isn't the speech that needs to be protected. Everybody nods when this point is raised, then we end up having this same stupid conversation again in a couple of weeks and people act like this time something is new and different and in just this case perhaps some light, smiley-faced censorship is necessary... but of course it surely won't get out of hand.


How is Facebook supposed to differentiate between somebody who is an admirer of Nazi icons, and somebody simply doing research on them?

The degree, frequency, and enthusiasm with which they share content are good clues, among other factors.

There are a lot of people interested in these keywords because they're horrified by the Nazi phenomenon, and so wish to understand it, to help ensure that it doesn't happen again. I recently made several such searches after rereading Winds of War and War and Remembrance. Would it be better if I had found nothing?

George Santayana had a better take: "Those who cannot remember the past are condemned to repeat it." When someone asks "what could go wrong" it's better to have an answer.

Let's say they perfect a method of only targeting actual Nazi sympathizers. Personally I have a few things that I'd like to show those people, without insult or invective. Like photos of dead family members. It seems to me at least as important to be able to send a targeted message to Nazis as to cat fanciers.

Amen. Sunlight is the best disinfectant.

Yet we are repeatedly bombarded by stories of this nature and voices telling us that well in this case, maybe we shouldn't have any sunlight because the subject is just too horrible to behold.

A censored public is an ignorant public.

Google (youtube) is having a similar problem with pedophiles taking advantage of comment keywording to make it possible for people to find things that ostensibly should be banned. https://techcrunch.com/2019/02/18/youtube-under-fire-for-rec...

This sounds like a non-issue.

> Facebook allowed The Times to target ads to users Facebook has determined are interested in Goebbels, the Third Reich’s chief propagandist, Himmler, the architect of the Holocaust and leader of the SS, and Mengele, the infamous concentration camp doctor who performed human experiments on prisoners. Each category included hundreds of thousands of users.

If I make a documentary about Goebbels' life, I shouldn't be able to advertise it? How about even selling it? I think this is crossing the line into fake outrage.

This. I think I have searched for Nazi-related stuff a lot in the past when watching documentaries or reading WWII books. I think this goes without saying, but I'm NOT a sympathizer of Nazi ideology. I just enjoy learning.

By this rationale, I should be in a hate crime watchlist already.

I was literally about to make the same comment. It's guilt by association. Researching these topics does not mean you are a Nazi. In this case, Facebook did nothing wrong. One can have an interest in history without being a Nazi. One can have a history in Nazis without being one.

This is unbelievably poor reporting, the headline does not reflect the actual content and the wrong conclusions are drawn. LA Times, in particular Sam Dean, should be ashamed.

Shit, I guess I'm a Nazi because I like history.

Edit: Just checked my bookshelf and I might also be a genghis khan sympathizer.

Facebook/YouTube's approach to fixing these issues is reactive instead of proactive. Would an open-sourced list of unequivocally objectionable topics work to help companies QA their algos? I'm up for building one.
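A rough sketch of what such a QA check could look like: normalize each auto-generated targeting category and flag any that contain a term from the shared blocklist, for human review. The list contents and function names below are hypothetical, and substring matching like this is exactly the kind of naive approach that misses variants - which is the argument for maintaining the list openly.

```python
import re

# Illustrative entries only; a real list would be far larger and community-maintained.
BLOCKLIST = {"nazi", "national socialist", "white supremacist", "third reich"}

def normalize(name):
    """Lowercase and collapse punctuation/whitespace so spelling variants still match."""
    return re.sub(r"[^a-z0-9 ]+", " ", name.lower()).strip()

def flag_categories(categories, blocklist=BLOCKLIST):
    """Return the categories containing any blocklisted term, for human review."""
    return [c for c in categories
            if any(term in normalize(c) for term in blocklist)]

cats = ["National Socialist black metal", "History documentaries", "Neo-Nazi punk"]
print(flag_categories(cats))  # flags the first and third, leaves the history category alone
```

Note that this flags for review rather than auto-banning, since (as the rest of this thread points out) "interested in WWII history" and "Nazi sympathizer" look identical at the keyword level.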

I imagine if your list is missing anything at all (and it will) then you’ll get slammed in the press for missing it. Skinhead bands are by no means obvious to me, but even if they would have been in your list, you’ll miss something inevitably.

"Deradicalization" is a form of "Advertising" which doesn't seem to cause so much mainstream consternation

I encounter and consider this when creating ads for my black metal band. Avoiding supporters of nazi metal is such a common concern that I didn't even think of it as odd, just business as usual. I actually appreciate that I'm able to explicitly exclude audiences, as it limits the likelihood that I'll deal with a deluge of hate mail.

Incidentally, Facebook seems to have already reacted to this article by removing "national socialist black metal" from its interest targeting options.

NSBM's a difficult problem. For one thing, where do you draw the line? For every band that are out-and-proud unequivocal neo-nazis preaching race war, there's another who insists they're apolitical as musicians, another who just wants to "explore themes of our national history" (yeah, pull the other one...) and probably half a dozen who just flirt with the imagery because it sells records.

For another thing, this stuff is interesting - there's no two ways around that. Musically, it's almost entirely straight-up bad (when Varg Vikernes is the best a movement has to offer, you know there's a quality problem), but the cultural mechanisms that made it and the social history that feeds it are, speaking with cold clinical detachment, really very interesting.

> Musically, it's almost entirely straight-up bad (when Varg Vikernes is the best a movement has to offer, you know there's a quality problem)

That's a very precarious judgement call, unless you mean the severely limited production value, which has become a hallmark of black metal by itself.

No, I mean half of them flat-out don't know how to handle their instruments properly, and even the ones who do are still using them to make art that is lazy, adolescent and derivative.

The production-value stuff I totally understand and wholly dig, and that's not why I lack respect for their music.

> half of them flat-out don't know how to handle their instruments properly

I'd venture to say that's in the eye of the beholder. A highly skilled, say, progressive rock guitarist could reasonably claim all of them don't know how to handle their instruments.

> lazy, adolescent and derivative.

I wouldn't discount any argument that would claim this is true for all metal. In a way, that's part of its appeal.

    > In a way, that's part 
    > of its appeal. 
You're not wrong - and this is no small part of why it's a tough problem.

But I think those three are a bit of a "pick any two" situation.

It’s a tough question for sure and I don’t think there’s a clear answer. We can start by refusing to support the worst of it, calling out the friends and artists who do, and making it seem less normal and acceptable than it really is. Black metal has become such a safe space for it. Even the term “NSBM” helps whitewash it!

Edit: I'm not trying to insult or shame you for using "NSBM." Everyone says it, it's totally normal at this point, and that's the problem.

I have never looked at the term that way, but what you say is completely fair.

I seem to recall a time when a lot of that scene rejected the label and tried to claim it was "just black metal", but now that I think about it, I suspect they've collectively owned it these days.

I don't know what else to call it that doesn't either minimise it or need a dozen paragraphs' worth of explanation, though...

I think it's best to just call it what it is: nazi black metal or racist black metal. Anything else dresses it up. Even fully spelled out, "National Socialist Black Metal" has always struck me as far too sophisticated a label when you consider the content and the people making it. Most of them aren't threatening soldiers writing political treatises, they're sad kids LARPing as Nazis.

    > sad kids LARPing as Nazis.
Spot on. A description I am undoubtedly going to steal when the opportunity arises, so thanks! :)

Please do!

Nazi imagery and information is still prevalent, such as the History Channel's many Nazi shows, or Nazi Mega Weapons on PBS, or Science/Discovery, which also sell many ads on Nazi-related shows. They cover the atrocities but also the many successes of the regime. I'm not surprised this programmatically translates into Facebook as well, i.e. Nazi information is not equivalent to skinheads. It's a tough problem to solve programmatically.

How come ads are allowed to evade public scrutiny when they're shown online?

It's amazing to me how attitudes have shifted so critically.

In the 70s, 'Hogan's Heroes' was a hit TV show. It portrayed the Nazis as bumbling idiots, but still it was a topic that was featured prominently in the show. I bet today people would be fearful to say they watched such a show.

The same thing is true of the 'Dukes of Hazzard'. Imagine it: A tv show where two wild young men drove a car that had a Confederate flag on the hood, and the horn played 'Dixie'. (Even though the show famously portrayed African Americans only in a positive light.) Today, people are ashamed to admit they watched the show, had t-shirts, etc. Yet the show was wildly popular back then. (And race relations seemed like they were better at the time, TBH.)

It's good to make progress in calling out evil, but things feel a little odd in this area.

Lots of historians are interested in Nazis. As are many people that follow politics. Who cares?

This story immediately becomes more interesting if an advertiser figures out a way to exfiltrate Facebook's graph of who it thinks Nazis are.

... but in general, the ad networks are architected to make that kind of exfiltration as difficult as possible, since it violates the privacy constraints users assume.

The libertarian answer to this would seem to then be "Excellent; now anti-fascist organizations have an optimized channel to get their message to those people."

The libertarian answer is "to let the market decide" and walk with their pocketbook / talk with their wallet. That is all fine in theory but the theory doesn't address what happens when there is only a single dominant player. For example, if I "walk away with my wallet" on my cable tv provider, i'm done -- there is only one provider.

Cable TV isn't a good example because "nothing" is a strong competitor and really much better than cable TV.

But lately companies like VISA and MC seem to be abusing their semi-monopolies. Try living without a card or bank account.

>That is all fine in theory but the theory doesn't address what happens when there is only a single dominant player.

The market can decide that only one player is sufficient to its needs.

A single player that is strong enough can decide that the market will not have more than one player.

Facebook and your cable company are similar in that they both offer a compelling (for some people, at least), yet unessential product.

in these situations, you vote with your wallet by no longer using the service. there are issues with the libertarian "vote with your wallet" theory, but this is not one of them.

How does Libertarian "vote with your wallet" theory deal with essential products?

there might be some diehard libertarians that disagree with me, but I think it's pretty obvious that the theory can't work when a company has a monopoly on a truly essential product.

I live in a place (Canada) where even some non-essential products are monopolized not by private companies, but by the government. In fact, the only monopolies of which I am aware are created and mandated by government.

Also: now we know who the Nazis are.

“Know” is probably giving their targeting and classification a bit too much credit.

No, now you know who are interested in the Nazis, which includes no shortage of kids writing history reports for school.

They'll take anyone's money. I got a sponsored ad (meaning Facebook got paid) from a notorious antivax organization, these people: https://www.skepticalraptor.com/skepticalraptorblog.php/phys...

...and? Don't they have categories for every political leaning?

People generally have a problem with political leanings that are explicitly predicated on abuse of others.
