FB allows advertisers to target specific topics, and they've been blacklisting objectionable categories. But the blacklisting appears to be manual, so while "nazi" isn't a micro-targeting category, things like Josef Mengele and a white supremacist punk band are.
Manually keeping up with and out-thinking objectionable content keywords is a perpetual arms race. If FB wants to win it that way, they'll have to invest heavily in that space, or they'll see a story like this every quarter.
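The manual-blocklist arms race described above can be sketched in a few lines. This is a toy illustration, not Facebook's actual system, and the category names and blocked terms are hypothetical:

```python
# Naive keyword blocklist: blocking the obvious term misses the
# proxy interests that point at the same audience, so a human has
# to keep adding new terms by hand forever.
BLOCKED = {"nazi", "white supremacy"}

def is_allowed(category: str) -> bool:
    c = category.lower()
    return not any(term in c for term in BLOCKED)

print(is_allowed("Nazi Party"))     # False: the obvious term is caught
print(is_allowed("Josef Mengele"))  # True: the proxy sails through
```

The asymmetry is the whole problem: defenders must enumerate every euphemism, while the other side only needs one term that isn't on the list yet.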
That doesn't mean they would or could spend $1000 on FB, but with the proliferation of subscriptions on app stores that are $10+ a month, I bet FB could get $10 a month from a large chunk of users and start to move towards other revenue models.
If their friends seem hesitant about it maybe they won't sign up to pay for it.
Funny thing, that: why do we never hear about a company that wants to stay the same size or shrink? What if Facebook decided they wanted to be half the size they are now expense-wise, get out of the ads business, and encourage people to use their platform for fewer things?
I guess it's a rhetorical question, but for a public company it would scare away all the investors, right? Who would invest in a company that does not plan to grow, and might even shrink?
Private companies sometimes do that, though.
Study quotes 2.68x earnings in 2009 but in my experience it’s even higher these days.
Facebook at $X/mo is AOL. AOL was a successful but fairly predictable business. Facebook, by contrast, offers a promise of growth, and growth promises greater returns.
It's done on purpose by all publications, because the nuanced reality of most situations isn't inflammatory enough to drive clicks and create commotion.
Given this situation, where it's really hard to track down and manage the colloquial lingo used around the world for various things, the headline is unfair.
"Facebook fails to stop advertisers from targeting some extremist memes"
"Facebook unable to tamp down extremists shifting lingo"
"Nazis by another name: advertisers target extremists using shifting terminology on Facebook"
There is definitely a more responsible headline here, and surely the editorial staff are capable of that if they wanted to.
Surely the staff at many publications are wary of this as well, it's one of the ugly pressures of business reality that 'someone' is enforcing.
As for FB ... this has to be hard, a whack-a-mole kind of thing. Sometimes I'm sympathetic to them; other times I think $100B and some of the best AI folks in the world should be able to mostly figure this out.
Much like rampant fake goods on Amazon ... Bezos can land a rocket by itself, but can't get counterfeit goods off of Amazon? ...
"Counterfeit goods" are 'black and white'. Not 'maybe good or bad'. They're either counterfeit or they are not. Maybe tricky to track down in some cases, but ultimately - objective.
Yes, stopping it would probably be a fairly manual process involving people and overhead, but it's achievable.
Bezos spends $1B a year on his rockets - you don't think he can tamp down counterfeit goods for that amount?
I'm always shocked at people defending Bezos's flagrant hustling of counterfeit goods. He can stop most of it, he knows about it, therefore he implicitly chooses not to.
'Nazis = bad'? That's easy.
But this is not the subject at hand. You're missing the issue that the headline is misrepresenting the subject! Facebook has banned ads for Nazis, because that's easy.
We're talking about ads for 'Skinhead Bands' - or other ill-defined groups. Well, skinhead is a complicated subgenre; it's not black and white. But OK, ban that in ads. What about 'Reggae Skinheads'? 'Black Metal' - wait, is that a Nazi thing in Europe, or just basically 'metal'?
And what about terms in France? Ukraine? Algeria? Every culture has a broad set of terminology, sub groups and political dynamics that make this thing fairly difficult to interpret.
More important is the underlying fact that it's subjective.
Nobody can really satisfy all voices because we all have different notions of what 'wrong/evil/bad/offensive' is.
Just like there is effectively no way to ban 'hate speech' on FB because there's no globally accepted definition of what that is, kind of the same applies to ads.
They can stop Nazis, but maybe not some weird 'bad' skinhead subgenres that pop up now and again.
Early skinhead culture and music overlapped heavily with Jamaican rude boy culture.
Now the folks monitoring at DHS? Yeah, I think they probably go to great lengths to try to differentiate and segment the population of people who look at extremist material of any kind.
She was wearing a T-shirt bearing the face of noted mass-murderer Che Guevara.
I was unfamiliar with the bands and most of the people in those targeting lists.
The way I see it, these are the ways you can handle this:
1) Facebook builds this data the hard way. They staff a team of experts on "undesirables", who research and implement custom blocklists at Facebook's scale. Insanely cash- and time-intensive, to say nothing of the "who decides what's undesirable" problem.
2) Spread cost and effort by amassing a central repository of known baddies, and all the orgs contribute and share access. The government does something like this with hashes of sex trafficking imagery, so that eng teams can filter against a blacklist. I think this topic is FAR more nuanced and less binary than "does this picture contain illegal pornography or nah". Who maintains this list of undesirables? You're at "social credit score" in a hurry.
3) Algos. You let software extrapolate commonalities from known-bad actors – school shooters, confirmed Russian propaganda branches, etc. – and let the machine learn their language and flag accordingly. This is going to be coarse and stupid in the way ML always is, and local business owners with names like Heinrich are gonna get their livelihoods smashed accidentally here and there. Not great.
4) What Simulacra said – you just turn the whole targeting infra off. Facebook stops making money. This is great, I'd love to see it as regulation, but it's a big stretch, and very lofty when phrased like this.
5) Some kind of adtech equivalent of finance's KYC (Know Your Customer) regulation. Tie ad buys to confirmable, prosecutable identities, and rather than filtering before launch, aggressively follow up after launch. You run an ad campaign for nazis? Cool, your LLC and its primary stakeholders are permabanned. Facebook has already tried light versions of this, but it was lip service.
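Option 2, the shared-hash approach, is worth seeing in miniature because it shows exactly why it works for binary "illegal or not" content and breaks down for fuzzy interest categories. A minimal sketch (real systems use perceptual hashes so near-duplicates still match; plain SHA-256 here is just to show the flow, and the sample content is made up):

```python
import hashlib

# Central repository of known-bad content hashes, shared across orgs.
KNOWN_BAD_HASHES: set[str] = set()

def register_bad(content: bytes) -> None:
    """An org contributes a hash of confirmed-bad content."""
    KNOWN_BAD_HASHES.add(hashlib.sha256(content).hexdigest())

def is_flagged(content: bytes) -> bool:
    """Any participating platform checks uploads against the shared list."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES

register_bad(b"bytes of a known-bad image")
print(is_flagged(b"bytes of a known-bad image"))  # True
print(is_flagged(b"anything else"))               # False
```

The lookup is trivially objective, which is the point: it only works when "bad" is a yes/no property of a specific artifact. There is no equivalent hash for "this interest category is a dog whistle", which is where the "who maintains the list" problem takes over.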
IMO 4 and 5 are the places to spend effort. I think we need to start having conversations that do away with the idea that humans are autonomous and impervious to influence, and start having the discussion in a new context: when and how are you allowed to manipulate the minds of citizens at scale, and what kind of paper trail does it leave?
This stuff is dangerous; it can warp the way you view the world, and any information glorifying it should always be accompanied by explanations and warnings about the hate it is imbued with... It is un-American to hate another person because of their origin or their religion, and it's also un-American to squash an open discussion on that topic, but that discussion needs to happen between mature adults in a setting that makes it clear how unacceptable it is to lean on racist tropes.
 Expat, still a US citizen.
A lot of people will say people who wear MAGA hats are white supremacists for instance. It's a slippery slope, and a dangerous one.
I think the only lasting solution to this is to raise a populace smart enough to be impervious to the radical left and right, both fringes being the domain of sloppy thinking and emotionally driven agendas. And in a way that keeps the flywheel of education spinning. We do a bad job of that today, I think.
But that takes time, strength, money, and – most critically – unified vision that I'm not sure America has right now.
So how do you implement education reform that takes 50 years, when nobody even agrees that it's needed, and kids are getting killed and radicalized today?
Do you allow it to continue in defense of the underlying principle of truly free speech? Maybe. To abandon that principle is a terrifying slippery slope.
This is just one of the ways where tech and culture vastly outpace science and regulation.
I have no answers.
Now we're in an era where speech can start out as individual, and then be broadcast. Or perhaps it's better considered a false dichotomy in the first place. Either way, we definitely haven't figured out how to handle this as a society.
Smashing natural boundaries like that creates anomalies that we have to learn to deal with. I think it's also something that will enable us to spread to other planets sooner or later, hopefully.
Hacking natural boundaries and overcoming the self-regulating mechanisms that would otherwise take effect could be painful in a way and bite us back. But just think about the global exchange of knowledge: people getting interested in things they never thought they'd have access to under pre-internet circumstances. A hell of a ride indeed.
Separately, hate speech crosses over into a form of speech many societies, America included, have decided can cause direct harm and needs special treatment (including legal restrictions). People can disagree about that, but should probably get that disagreement sorted before diving into the additional questions of speech versus distribution.
Seems to me that the issue is actually with private ownership being able to censor. Which can only be addressed with, guess what, government intervention.
No, it really is not.
"There are quite a few laws on the books about hate speech"
In the USA I believe there are zero. 
There's a misconception that free speech exists only somewhat arbitrarily as a Bill of Rights guarantee. In fact, we actually value free speech even when it's granted by a private entity.
In contrast, thousands of adverts can run in FB environment without anyone but the target able to see -- completely under the radar.
Starting with #5 KYC, and adding a site where EVERY advert of every type is available for public inspection, along with its (verified) originator info and targeting parameters.
This would allow all kinds of scrutiny by journalistic and public interest groups (e.g., researchers tracking hate groups, etc.).
FB is making a bit of a start at this by publishing some ads, but until they get to full transparency, it can't be trusted.
The full transparency would also be a huge benefit to researchers.
I'd also be happy to see it required as a regulation for all players, not only FB.
Facebook already necessarily employs a small army of moderators to remove illegal and "undesirable" material; they must have supervisors who set the overall policy direction and deal with new, emergent problems.
Companies that use social media on a large scale, and those companies that run social networks disguised as videogames, employ "community managers", whose job it is to understand and communicate with the community, including keeping abreast of disruptions. It doesn't seem that Facebook itself has many of these.
Facebook should get itself some "machine anthropologists", to study the ant farm. They can then get a sense of these problems before they get in the press, and definitely before they get to the Parliamentary committees. And feed the existing algorithms.
This is such a cool job title, I love it. I've long dreamed of doing this job at Twitter. There are so many blatantly patterned spam attempts and whatnot. I would love to work on analyzing, for example, the patterns in follower graphs surrounding templated bitcoin scam tweets.
One alternative to shutting off targeting altogether would be switching from a blacklist to a whitelist approach, where regulators provide the set of features or groups that are allowed to be targeted.
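The whitelist idea inverts the default: instead of enumerating what's forbidden, an approved set enumerates what may be targeted, and every novel category is rejected until someone affirmatively allows it. A minimal sketch, with entirely hypothetical category names:

```python
# Regulator- or platform-approved targeting categories.
ALLOWED_INTERESTS = {"cooking", "gardening", "soccer", "jazz"}

def validate_targeting(requested: set[str]) -> set[str]:
    # Anything not explicitly on the whitelist is silently dropped,
    # so newly coined dog-whistle categories never make it through.
    return requested & ALLOWED_INTERESTS

print(validate_targeting({"cooking", "national socialist black metal"}))
# {'cooking'}
```

The trade-off mirrors the blacklist's: a blacklist fails open (new bad terms get through), while a whitelist fails closed (new legitimate niches are untargetable until approved).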
It's quaint how in movies people easily identify with the underdog/resistance, e.g. a "Neo" in the "Matrix", and would see themselves taking the same direction if a similar scenario would ever play out.
Yet here we are, and at the first hint of inconvenience it's blue pills all around.
But I sometimes wonder if it's just as much a convenience thing as it is the actual money. That is, if the sign up process was just as easy with payment as it is now, would a lot more people go for it than assumed (at some price point)?
In any case, I don't think that pay or have your privacy co-opted are really the only paths to a viable social network business model.
I don't know that this is true writ large. Monthly budgeting is, essentially, how most people operate, whether for expenses of necessity or choice. That is, much of our financial lives are oriented around (largely monthly) cash-flow management.
So, we prioritize what's necessary or important to us, then add it to the mix. I don't know why a subscription to a service like FB would be any different than, say, Netflix (if the price were "right").
OTOH, there's an ease in just slipping into something with an e-mail address. So, I believe the friction of going from no payment to any payment is the much bigger leap to overcome (again, provided the price in question is reasonable).
But, there is also now the notion to overcome that some things (like a social network) "should" be free. But, that's another topic.
1- Many people on here have said they don't want to keep track of any more streaming/subscription services beyond Netflix (showing even Netflix is too much).
2- Netflix makes it as easy as possible to sign up and lets other people join via family profiles because they know this is a real problem.
The reason people budget in the first place is precisely because they have an anxiety at the back of their mind about where their money is going.
It hasn't been some revelatory experience like some would suggest; social media does make things easier, and life without it feels quieter. But like anything else, it's a trade-off. The biggest positive I've noticed is that I find myself less dopamine-addled. I no longer waste hours on my phone scrolling in order to relax. It took some adjustment time, but I find it easier to get my dopamine fix from more productive hobbies.
How are they doing now?
It's a cat's-outta-the-bag situation. What DOES make invasive tracking a necessity is that it works so much better than the free models that came before it. If you're not doing it, your competition is, and they're getting one up on you.
I don't think we have a free model that monetizes more efficiently than invasive tracking, so the only option you're left with is a not-free model. And for consumer-social, "not free" is a small niche (eharmony comes to mind as making it work, but even they bought all the free dating companies because the free ones were eating the market...)
If you're hoping for a commercial, centralized social network, then you're always going to have to deal with perverse incentives.
Point is, it's the mining that lays the foundation for the dangers, and there is little that prevents facebook, or twitter, or whatever entity that wants to mine mastodon instances from doing so.
This is no longer in the realm of technical problems. If you want to connect with a "broader community" (i.e. people you don't know) then you don't get privacy. To say you do want privacy while also getting to connect with strangers is to insist upon the trustworthiness of strangers.
Is it? Or is it the perverse financial incentive structure that leads humans to put financial gain over all other priorities?
Would humans still make decisions to take advantage if the incentive structures were built differently?
Human history proves that yes, they still would.
How does any of that stop the data mining?
It's the data mining and the ad targeting that people want to stop. And the FB competitors could be mining data just as easily as FB can.
If people can jump ship they'll slowly migrate to ad free platforms, then we can attack the issue of data-mining with regulations at the government level to force a cessation of private data collection.
Here's the problem, ad free, does not mean mining free. These other platforms could still be mining your data with you none the wiser. In fact, it's a virtual certainty that they would, at minimum, for law enforcement purposes. (Which would entail stifling dissent in certain nations.)
One of the only ways to cut down on mining is to make it explicitly illegal. And again, even then there will be exceptions carved out by Congress. But that would at least provide most people a bit of privacy in their personal data, just as HIPAA provides us a bit of privacy in our health data.
Zuckerberg has yet to try to mobilize Facebook for a political end; when he tries it, I hope it backfires and he ends up shooting himself in the foot, but it may work. Making data mining and resale a political issue may unleash a torrent of angry voters who rally against anyone associated with pushing it forward and get them voted out of office, or cow them into submission. Either way, I think the best first step is to force the platform open and erode the network effect that Facebook currently controls, then let them lose power and enact stricter privacy laws when the public is ready for them.
If I were King of America though, I'd totally just go your route.
Why should it? And why not have a social network that just serves up ads that aren't based on tracking/targeting users?
Btw, smaller social networks are great. Early days of reddit were amazing, and it didn’t get better when it became mainstream
Ads that are completely random won't pay the bills, and ads that are content-based (without looking at the user) might, but will also target the user, they just won't follow them around. If I want to reach people interested in the Nazis, I'll just put my ads on pages dealing with Nazi stuff, because that's what people interested in Nazis tend to look at and other people don't.
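The contextual approach described above (matching ads to the page rather than to a profile of the viewer) is simple enough to sketch. Page paths, topics, and ad names here are all invented for illustration:

```python
# Contextual ad placement: selection is driven only by what page is
# being viewed, never by who is viewing it.
PAGE_TOPICS = {
    "/articles/vintage-guitars": "music gear",
    "/articles/sourdough-basics": "baking",
}
ADS_BY_TOPIC = {
    "music gear": ["Tube amp sale"],
    "baking": ["Dutch oven discount"],
}

def ads_for_page(url: str) -> list[str]:
    # No user data involved; the page's topic is the only input.
    return ADS_BY_TOPIC.get(PAGE_TOPICS.get(url, ""), [])

print(ads_for_page("/articles/sourdough-basics"))  # ['Dutch oven discount']
```

This is exactly the trade-off in the comment: the ad still reaches an interested audience (because interested people visit topical pages), but nothing follows the user from page to page.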
Are you really comparing targeted ad manipulation to medical professionals healing people? And yes, it can help improve lives, but there's also a flipside to what it can do (read: Cambridge Analytica, Russia, etc.). There are negative connotations to manipulation that you need to keep in mind.
>Ads that are completely random won't pay the bills...
It's worked for TV and radio for decades. They're still somewhat targeted but in a far, far less invasive manner, and they're not individualized.
> Are you really comparing targeted ad manipulation to medical professionals healing people?
No, I'm comparing beneficial targeted manipulation (notice the lack of the word "ad") to healing people. You CAN use a knife to murder somebody, but you CAN also use it to perform life-saving surgery on them. Advocating for banning all knives because you don't want people to murder seems weird to me.
Facebook literally tested what they can do with targeted manipulation, and they found that they can do both good and bad with it. It freaked the public out, and so do knives, and I get that. The problem isn't that you can use it only for evil; the problem is that FB & their customers don't want people in a good place mentally and emotionally, because then you're less likely to buy things you don't need or spend hours on their site. And that's what was suggested: have a social network that isn't run from the Bay Area trying to maximize profit extraction, but one that's user-focused. You know, where the user is the customer, not the ad agency.
> There are negative connotations to manipulation that you need to keep in mind.
As there are to using a knife on people. I'm saying "let's ban the bad part and use the good part". Facebook and Google probably know more about their regular users than those users' families, psychiatrists, psychologists, and even the users themselves. Are you aware of the potential good they could do with that? Yeah, we won't solve mental health issues by manipulating people, but we'll make them a whole lot better, and we can measure that.
> It's worked for TV and radio for decades.
No, it really didn't, because they barely used random ads. Most shows' demographics were very specific, and they didn't run car ads during the Sunday-morning cartoon hour. Hell, they created show concepts to target demographics so they could then sell ads reaching those demographics.
I suppose suggesting to use technology for good doesn't go over well with this crowd. More and more I feel that we're letting the wrong people design, build and steer the technology that will shape the future.
"Better" according to whom?
Sadly any time this kind of thing gets discussed, the nuance gets thrown out and the discussion becomes either "ban all targeting" or "don't regulate anything", both of which are horrible ideas IMO.
At least I hope it doesn't.
These articles are always going to come up. We're going to act surprised for 5 minutes, and continue to feed the machine.
We should instead call out the people who fund Facebook as sponsoring child abuse. 
And those who work at Facebook as inciting pogroms. 
And finally, those who defend Facebook as "dumb fucks" because, well, that's what they are anyway according to Mark Z.
At the same time, don't ask anyone to stop using Facebook. In fact, they should use Facebook so much that they bring down Facebook's servers. Encourage the low-ARPU "deadbeats" to keep using Facebook and its network as much as possible.
Just call out everyone who is giving them money.
Let's see how long FB operates after that.
What is so wrong about being able to pay for things?
The idea that data is "just sold to businesses" and can be substituted for its sale-value equivalent in cash is wrong, IMO. Serious insight, product directions, election swinging power, really big shit – are all emergent properties of data in aggregate like this that are somewhat unknowable until you actually get the data and see things unfold.
So what they're actually keeping, by choosing data over cash, is priceless long-term optionality.
The math and computers don't care; we have to care, unless we want to facilitate just about anything.
Isn't this how it is supposed to work?
So let's go back to the "Nazis" part of this. Yes, Nazis, Neo-Nazis, white supremacists and racists of all stripes are repellent to most of us. Are we agreed on that? Good.
How is Facebook supposed to differentiate between somebody who is an admirer of Nazi icons, and somebody simply doing research on them?
Is somebody supposed to be curating a list of what's "acceptable" for people to like and not like? Who's in charge of that list? What happens when those people leave and are replaced by different people?
What happens when something deeply embarrassing to Mark Zuckerberg or Facebook takes place and starts getting attention? Should that be on the list? And if somebody places it on the list, what recourse does anybody outside of that company have to remove it from that list?
Popular, happy speech isn't the speech that needs to be protected. Everybody nods when this point is raised, then we end up having this same stupid conversation again in a couple of weeks and people act like this time something is new and different and in just this case perhaps some light, smiley-faced censorship is necessary... but of course it surely won't get out of hand.
The degree, frequency, and enthusiasm with which they share content are good clues, among other factors.
George Santayana had a better take: "Those who cannot remember the past are condemned to repeat it." When someone asks "what could go wrong" it's better to have an answer.
Let's say they perfect a method of only targeting actual Nazi sympathizers. Personally I have a few things that I'd like to show those people, without insult or invective. Like photos of dead family members. It seems to me at least as important to be able to send a targeted message to Nazis as to cat fanciers.
Yet we are repeatedly bombarded by stories of this nature and voices telling us that well in this case, maybe we shouldn't have any sunlight because the subject is just too horrible to behold.
A censored public is an ignorant public.
> Facebook allowed The Times to target ads to users Facebook has determined are interested in Goebbels, the Third Reich’s chief propagandist, Himmler, the architect of the Holocaust and leader of the SS, and Mengele, the infamous concentration camp doctor who performed human experiments on prisoners. Each category included hundreds of thousands of users.
If I make a documentary about Goebbels' life, I shouldn't be able to advertise it? How about even selling it? I think this is crossing the line into fake outrage.
By this rationale, I should be in a hate crime watchlist already.
This is unbelievably poor reporting; the headline does not reflect the actual content, and the wrong conclusions are drawn. The LA Times, and in particular Sam Dean, should be ashamed.
Edit: Just checked my bookshelf, and I might also be a Genghis Khan sympathizer.
Incidentally, Facebook seems to have already reacted to this article by removing "national socialist black metal" from its interest targeting options.
For another thing, this stuff is interesting; there's no two ways around that. Musically, it's almost entirely straight-up bad (when Varg Vikernes is the best a movement has to offer, you know there's a quality problem), but the cultural mechanisms that made it and the social history that feeds it are, speaking with cold clinicism, really very interesting.
That's a very precarious judgement call, unless you mean the severely limited production value, which has become a hallmark of black metal by itself.
The production-value stuff I totally understand and wholly dig, and that's not why I lack respect for their music.
I'd venture to say that's in the eye of the beholder. A highly skilled, say, progressive rock guitarist could reasonably claim all of them don't know how to handle their instruments.
> lazy, adolescent and derivative.
I wouldn't discount any argument that would claim this is true for all metal. In a way, that's part of its appeal.
> In a way, that's part of its appeal.
But I think those three are a bit of a "pick any two" situation.
Edit: I'm not trying to insult or shame you for using "NSBM." Everyone says it, it's totally normal at this point, and that's the problem.
I seem to recall a time when a lot of that scene rejected the label and tried to claim it was "just black metal", but now that I think about it, I suspect they've collectively owned it these days.
I don't know what else to call it that doesn't either minimise it or need a dozen paragraphs' worth of explanation, though...
> sad kids LARPing as Nazis.
In the late '60s and early '70s, 'Hogan's Heroes' was a hit TV show. It portrayed the Nazis as bumbling idiots, but still, Nazis were featured prominently in the show. I bet today people would be fearful to say they watched such a show.
The same thing is true of 'The Dukes of Hazzard'. Imagine it: a TV show where two wild young men drove a car that had a Confederate flag on the hood and a horn that played 'Dixie'. (Even though the show famously portrayed African Americans only in a positive light.) Today, people are ashamed to admit they watched the show, had T-shirts, etc. Yet the show was wildly popular back then. (And race relations seemed better at the time, TBH.)
It's good to make progress in calling out evil, but things feel a little odd in this area.
... but in general, the ad networks are architected to make that kind of exfiltration as difficult as possible, since it violates the privacy constraints users assume.
But lately companies like VISA and MC seem to be abusing their semi-monopolies. Try living without a card or bank account.
The market can decide that only one player is sufficient to its needs.
In these situations, you vote with your wallet by no longer using the service. There are issues with the libertarian "vote with your wallet" theory, but this is not one of them.