Clicking through to read that article, it's clear that the categories were created by an algorithm rather than a human, and when reported, were removed without fuss. The way the main article phrases it is highly misleading.
The click-through article also says that while "Jew Haters" was a group that could be selected, it was too small to buy a targeted Facebook ad for by itself. They added a few more 'Jew-hating' categories and a few 'Hitler' categories, and the target demographic was still too small. So they ended up adding the category for a small extremist political party in Germany. At that point, the target demographic was big enough to allow them to buy a $30 ad.
Painting this as "Facebook is facilitating the rise of antisemitism just for a buck" is extraordinarily disingenuous.
I think that's the point:
a) Automatic categorization means offensive categories will be created
b) It takes a complaint to have them removed.
> The click-through article also says that while "Jew Haters" was a group that could be selected, it was too small to buy a targeted Facebook ad for by itself. They added a few more 'Jew-hating' categories, and a few 'Hitler' categories, and the target demographic was still too small.
It is unclear to me why this particular category being too small to buy ads against by itself means it isn't a problem. The fact remains that you can target ads based on it, even if you currently also need to include other groups.
> The Daily Beast, which briefly ran its own ad campaign to test the company's tools, says Twitter's platform shows 18.6 million accounts "likely" to engage with the word "Nazi," while another 14.5 million users might be drawn in by the N-word. For Twitter, the process seems entirely automated and there appear to be no safeguards in place — The Daily Beast tried a number of different hateful words and phrases and none were blacklisted by Twitter's tools.
Keep in mind that "engage with" includes everyone who takes offense at those words.
> Automatic categorization means offensive categories will be created
This is like saying that an automatic printing press means offensive literature will be created. And rope causes lynchings.
Someone said this in another thread -- "algorithms" are the new "chemicals".
The most absurd part of this isn't that people are astounded that generic tools can be used for specific evils, which has been happening for many years. It's that people seem to want corporations like Facebook to be in charge of making political decisions.
Yet transfer the same issue to this context and it suddenly becomes a free speech thing.
I mean, seriously, if you ran a forum and someone opened an antisemitic thread on a minor board, which one of your other users pointed out and you then removed, do you really think it would be fair to call you antisemitic? After all, your forum allowed the antisemitic thread to exist in the first place and didn't auto-filter it out, and apparently there's no leniency even though you removed it as soon as you knew.
I am unconvinced by a partial recreation of the actual situation by ProPublica.