> Ross Baird, president of the venture capital firm Village Capital, noted that when ProPublica tried last month to buy targeted ads for “Jew haters” on Facebook, the platform did not question whether this was a bad idea — it asked the buyers how they would like to pay.

Clicking through to read that article[1], it's clear that the categories were created by an algorithm rather than a human, and when reported, were removed without fuss. The way the main article phrases it is highly misleading.

The click-through article also says that while "Jew Haters" was a category that could be selected, its audience was too small to buy a targeted Facebook ad against on its own. They added a few more 'Jew-hating' categories and a few 'Hitler' categories, and the target demographic was still too small. So they ended up adding the category for a small extremist political party in Germany; at that point the audience was big enough to let them buy a $30 ad.
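
Mechanically, the gating described there is just an audience-size floor: the ad tool doesn't evaluate what the categories mean, only whether their combined reach clears a minimum. A rough sketch of that behaviour, where every name, every number, and the threshold itself are made up for illustration and are not Facebook's actual API:

    MIN_AUDIENCE = 1_000  # assumed platform minimum, purely illustrative

    audience_sizes = {       # estimated reach per targeting category
        "category_a": 200,   # too small to buy against on its own
        "category_b": 150,
        "category_c": 900,   # bundling this one clears the floor
    }

    def can_buy_ad(selected):
        # Real audiences overlap, so summing is a simplification.
        reach = sum(audience_sizes[c] for c in selected)
        return reach >= MIN_AUDIENCE

    can_buy_ad(["category_a"])                              # False
    can_buy_ad(["category_a", "category_b", "category_c"])  # True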

Painting this as "Facebook is facilitating the rise of antisemitism just for a buck" is extraordinarily disingenuous.

[1] https://www.propublica.org/article/facebook-enabled-advertis...




> the categories were created by an algorithm rather than a human, and when reported, were removed without fuss.

I think that's the point:

a) Automatic categorization means offensive categories will be created

b) It takes a complaint to have them removed.
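
In other words, a pipeline like the one ProPublica describes might look like the sketch below (the field handling and the 10-user cutoff are my assumptions, not Facebook's actual system): every sufficiently common user-entered phrase becomes a targetable category with no content check (point a), and removal only happens reactively (point b).

    from collections import Counter

    # Point (a): any phrase enough users typed into a free-text profile
    # field becomes a targetable category; nothing inspects its content.
    def build_categories(profile_fields, min_users=10):
        counts = Counter(f.strip().lower() for f in profile_fields)
        return {phrase for phrase, n in counts.items() if n >= min_users}

    # Point (b): removal is reactive, driven by complaints after the fact.
    def handle_complaint(categories, reported_phrase):
        categories.discard(reported_phrase)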

> The click-through article also says that while "Jew Haters" was a category that could be selected, its audience was too small to buy a targeted Facebook ad against on its own. They added a few more 'Jew-hating' categories and a few 'Hitler' categories, and the target demographic was still too small.

It is unclear to me why the fact that this particular category's audience was too small to buy against on its own means it isn't a problem. The fact is you can target ads based on it, even if you currently also need to bundle in other categories.

Additionally:

> The Daily Beast, which briefly ran its own ad campaign to test the company’s tools, says Twitter’s platform shows 18.6 million accounts “likely” to engage with the word “Nazi,” while another 14.5 million users might be drawn in by the N-word. For Twitter, the process seems entirely automated and there appear to be no safeguards in place — The Daily Beast tried a number of different hateful words and phrases and none were blacklisted by Twitter’s tools.

https://www.theverge.com/2017/9/15/16316352/google-twitter-a...
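
The safeguard The Daily Beast found absent would, in its simplest form, be a keyword screen run before the tool quotes an audience size. A minimal sketch, assuming a maintained blocklist (the names here are illustrative, not Twitter's actual tooling):

    BLOCKED_TERMS = {"nazi"}  # stand-in for a maintained hate-speech lexicon

    def screen_keyword(keyword):
        # Reject blocklisted terms before ever estimating their reach.
        if keyword.lower() in BLOCKED_TERMS:
            raise ValueError(f"keyword {keyword!r} is unavailable for ad targeting")
        return keyword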


> The Daily Beast, which briefly ran its own ad campaign to test the company’s tools, says Twitter’s platform shows 18.6 million accounts “likely” to engage with the word “Nazi,” while another 14.5 million users might be drawn in by the N-word.

Keep in mind that "engage with" includes everyone who takes offense to them.

> Automatic categorization means offensive categories will be created

This is like saying that an automatic printing press means offensive literature will be created, or that rope causes lynchings.

Someone said this in another thread -- "algorithms" are the new "chemicals".

The most absurd part of this isn't that people are astounded that generic tools can be used for specific evils; that has been happening for many years. It's that people seem to want corporations like Facebook to be in charge of making political decisions.


It's interesting. On the "Our minds can be addicted" story the HN consensus seems to be 'yes and it is a problem'. See https://news.ycombinator.com/item?id=15421704

Yet transfer the same issue to this context and it suddenly becomes a free speech thing.


Because they're two sides of the same coin. The problem isn't which decision Facebook makes. It's that Facebook, like governments, should not be in charge of this.


So who should be in charge of it?


Nobody is supposed to be in charge of it. A central planning commission is not a thing that should exist.


I guess the point is that Facebook has a responsibility to stop this proactively, not just make money off it until someone complains.


The point is that this is an issue of "slipped through the cracks", not "Facebook facilitating anti-semitism for a dollar". I'm no fan of Facebook and they've done sleazy stuff in the past, but this bit of reporting in the NYT is bad journalism.

I mean, seriously, if you ran a forum and someone opened an antisemitic thread in a minor board on it, which one of your other users pointed out and you then removed, do you really think it would be fair to call you antisemitic? After all, your forum allowed the antisemitic thread to exist in the first place and didn't auto-filter it out, and apparently there's no leniency even though you removed it as soon as you knew.


What if you took the data from Cambridge Analytica and applied it to Facebook segmentation, as was apparently done in the last election?[0]

I am unconvinced by ProPublica's partial recreation of the actual situation.

[0] https://www.theguardian.com/technology/2017/oct/01/cambridge...



