Linda Yaccarino: no single authentic user saw this content alongside IBM's ads (twitter.com/lindayax)
37 points by throw310822 on Nov 21, 2023 | 68 comments



I can understand very well how other companies want to steer clear of this whole burning pile of garbage, even beyond the issue with this single ad. We terminated our efforts on that site because it's not a place we want to be associated with. The promoted content and the mentally unstable man-child owner are a risk to our reputation, and we don't want to be seen anywhere near it. It's simply not worth it anymore. There are alternatives that don't suck quite as hard as "the social network formerly known as twitter".


That leaves an obvious question: why not?

Is it just because the tweets in question weren’t very popular? In Media Matters’ screenshot, the ad-adjacent tweets only seem to have a few hundred views, so perhaps it was pure chance that they didn’t appear next to those specific ads for anyone else, though that seems unlikely.

Or is it because there’s an algorithm that tries to steer ads away from appearing next to potentially-objectionable content? But then why didn't it work for Media Matters’ test account? Does it have a threshold where, if there’s so much potentially-objectionable content that it can’t find any valid placements, it loosens the rules (roughly the fallback sketched at the end of this comment)? If so, how often does that happen? Or was there some other kind of failure?

In any case, I’m not feeling the transparency here.

And of course it’s hard to know what to conclude from data that’s apparently only about three(?) individual tweets, rather than a representative sample of similar tweets.
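
To make my speculation about the algorithm concrete, here's the kind of fallback logic I'm imagining. Pure guesswork: every name and threshold below is invented, none of it comes from X.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        risk_score: float  # 0.0 = clearly safe, 1.0 = clearly objectionable

    def place_ads(posts, ads, max_risk=0.2, fallback_risk=0.8):
        """Pair each ad with a low-risk post; if there aren't enough
        low-risk slots, loosen the threshold (the hypothesized failure)."""
        safe = [p for p in posts if p.risk_score <= max_risk]
        if len(safe) < len(ads):
            # Not enough "brand safe" slots: relax the rule instead of
            # serving fewer ads. This is the loosening I'm asking about.
            safe = [p for p in posts if p.risk_score <= fallback_risk]
        return list(zip(ads, safe))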


Or they could have made a test account and visited “problematic” sites on purpose to fake the story for political purposes.

Occam’s razor is still undefeated there. What surprises me is how many folks think these institutions are saints who cannot tell a lie, a la George Washington, rather than just another political apparatus.


> Data wins over manipulation or allegations.

Sure, but with the data firehose priced prohibitively high, how can this be independently verified? Oh yeah, let's just trust the CEO's words on X.


It's not really the point. Twitter made "brand safety" representations to these brands that their ads had a low chance of appearing next to neo-nazi content. It turns out it's easy to make exactly that happen.

In any case, she's also admitting that IBM, Comcast, Oracle, Apple, etc. are paying for ads that not a single authentic user has seen. That doesn't make Twitter look good either. How many other useless ads are they paying for?


"she's also admitting that IBM, Comcast, Oracle, Apple, etc. are paying for ads that not a single authentic user has seen."

No, she's saying what she said:

"saw IBM’s, Comcast’s, or Oracle’s ads next to the content in Media Matters’ article. Only 2 users saw Apple’s ad next to the content, at least one of which was Media Matters. "


You pay for ad impressions. Twitter calls it "Reach":

https://business.twitter.com/en/help/campaign-setup/create-a...
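
For what it's worth, impression pricing is normally quoted as CPM, cost per thousand impressions. A minimal illustration with made-up numbers (not X's actual rates):

    impressions = 250_000  # times the ad was rendered, human viewer or not
    cpm = 6.50             # hypothetical price per 1,000 impressions
    cost = impressions / 1000 * cpm
    print(f"${cost:,.2f}") # $1,625.00, billed whether or not anyone "authentic" saw it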


That isn't what was said at all. The statement was that these ads didn't appear next to "the content in Media Matters’ article", which might just mean that the content in the article was not that popular, or that other advertisers' ads were displayed next to that content.


If you have to guess what it "might" mean then nothing has been said.

Her claim is Media Matters "manipulated" Twitter. The reality is they just used it, found neo-nazi content, and saw ads next to that content.

Why would any brand want to advertise on Twitter when that outcome is so easy to achieve?


What even is an "authentic user"?


My understanding of their claim is that Media Matters created an account which they then used to exclusively follow and like all sorts of unwanted content, to achieve their goal of seeing ads alongside that unwanted content. So they (X) see this as a pure lab experiment misrepresented as typical behavior of the system.


A user that seems to have a one-to-one relationship to a human who wants to use Twitter in the ordinary way. There are many other kinds: Test users created by Twitter staff or other companies, dormant users, spambots, and so on and so forth.

If you want to make good decisions about what to serve, it helps to analyse the traffic logs, separate the authentic users from the rest, do lots of analysis on authentic users only, and then make implementation decisions based on the analysed behaviour of the authentic users.
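
A rough sketch of that kind of pipeline; the field names and the is_authentic heuristic here are illustrative guesses, not Twitter's actual signals:

    from collections import Counter

    def is_authentic(user):
        # Illustrative heuristic: exclude known test users, spambots and
        # long-dormant accounts before doing any behavioural analysis.
        return not (user.get("is_test") or user.get("is_spambot")
                    or user.get("days_dormant", 0) > 365)

    def top_pages(traffic_log, users, n=10):
        authentic = {uid for uid, u in users.items() if is_authentic(u)}
        views = Counter(event["page"] for event in traffic_log
                        if event["user_id"] in authentic)
        return views.most_common(n)  # base serving decisions on this, not raw traffic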


Hm. Now I wonder whether they consider accounts like bigbenclock or darthputinkgb to be authentic.


Any user that does things that Musk likes and does not tell the media about Nazis on the site.


> Only 2 users saw Apple’s ad next to the content, at least one of which was Media Matters.

So did Media Matters hack your server and inject the ads? I don't see why they or the other person would not be "authentic" users otherwise.


X/Twitter say that Media Matters followed both “nazis” (or whatever) and also major brands, and their feed did indeed show major-brand ads next to nazi tweets. Apparently no real people subscribe to both nazis and major brands, they say, so “no harm no foul.”

(This neglects the fact that Musk publicly gives props to antisemitic memes while repeatedly saying, naw, don’t you believe your lying eyes. Everyone saw those tweets, and the advertising pauses accelerated after the last time he did it, undermining his narrative.)


Rather, the question is: how unlucky must you be if the offending pairing got just two views, and one of the two came from the organisation that spends its time monitoring content on social media platforms?

Unless Media Matters has ways of making the occurrence far more probable. Which, I guess, is what happened. I wonder, for example, if they spent a lot of time browsing nazi content, then Apple content, and then reloaded the page until the two appeared next to each other?


Which would never occur in RealLife™ as no Tru Apple fanboi would follow The Daily Stormer?


But it would mean that the nazi Apple fanboy is seeing exactly what they want to see. The problem lies with the user, not with the platform.


That doesn't matter -- the issue here isn't that "innocent" people might see Apple advertisements, or nazi posts. The issue here is that Apple doesn't want their ads _next to_ nazi posts, no matter what the user is interested in. That's why they pulled the ads, not because they were worried that their targeting might offend one of the nazis.


For me, it is like saying that Apple doesn't want me to open one tab of my browser on their homepage and another on nazi content while using a MacBook. It might be a problem if someone takes a screenshot and circulates it under the caption "Apple allows nazi content to appear on their laptops and alongside their homepage", but it's a fake problem; Apple really has nothing to do with it.

My feeling is that the issue has been shifted from the legitimate worry that "ads might appear alongside content that offends the viewer" to the made up one "ads might appear alongside content that offends us but that absolutely no one will see except those who are fine with it".


One of the problems is the user.

Another problem is that Apple is effectively paying people to say extremist things via Twitter ad revenue shares.

And, apparently in order to enhance this funding, nazi followers just need to follow a few big brands alongside their extremist content.

A third problem is that this makes Apple look bad and damages their brand.


> The problem lies with the user, not with the platform.

The platform promises "brand safety" but isn't delivering it. That's why advertisers have withdrawn their ads.

The owner of the platform very publicly cozying up to antisemitic comments isn't a good look either. Brands don't want to be associated with that. Brands don't like brand damage.

Tesla investors don't like the brand damage:

https://electrek.co/2023/11/20/tesla-investors-turn-tsla-boa...

Musk has chosen these outcomes. He has no one but himself to blame.


> Neither Paxton nor X argues that Media Matters was falsely claiming to see ads on pro-Nazi content. In fact, the suit confirms that the screenshots the organization posted are real. But it alleges the organization “manipulated” the service to make X serve the offending ads. “Media Matters has manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content, leaving the false impression that these pairings are anything but what they actually are: manufactured, inorganic, and extraordinarily rare.” The alleged manipulation involved creating an account that exclusively followed a combination of major brands and extremist content, then “endlessly scrolling and refreshing its unrepresentative, hand-selected feed” until it saw a confluence of the two.


Funny, exactly what I wrote one minute ago as pure speculation:

> I wonder, for example, if they spend a lot of time browsing nazi content, and then Apple content, and then reload the page until the two appear next to each other?


Her statement is crafted in a way that I feel makes it meaningless. She refers to “the content in the MediaMatters article”. The statistic could literally be about how many people saw the ads next to the exact tweets in the article.

This is problematic because advertisers care about being next to any hateful content. She makes no claims about how many people saw ads next to Nazi content or racist content or homophobic content. Neither does the lawsuit. My bet is that she’s unable to make a more general statement because it would make X look bad. They clearly are able to mine the data, just not make truly exonerating statements from it.


Maybe the best analogy would be speed running a video game and then suing the developer because you didn't get the "estimated playtime" listed on the box. You're abusing the product; it will not perform to spec.


Just to correct the analogy: it’d be the speedrunner describing what they did in an article, and the developer suing them.


So your spec sucks...

If the requirement is that your ads should never be shown next to content you don't like, but it happens at least twice, you have a buggy mess of unfinished software.

Oh wait, that's the game industry right now...


Social media is a really poor spec, to be fair.


> "Stand with X."

Now that is just sad.


Why does every other social media platform get a pass on this stuff? This is so blatantly a political move it’s sickening.

I just went to Instagram, typed in “shoplifting” and saw dozens of extremely racist comments, followed by advertisements. I can’t imagine the stuff written on Facebook being much better, and don’t get me started on TikTok.


Report it. And it will go through a process that has been vetted and refined over the years.

That is being staffed appropriately.

That has dedicated, senior, responsible leadership.

That has been audited by the Media Rating Council.

Musk dismantled all of this when he took over.


I’ve been seeing those videos for YEARS on my feed, and nothing has changed.

Go type “shoplifting” into IG search right now and you’ll see comments with thousands of likes, some calling for genocide, among other things.


You're missing the point. It's not about individual incidents.

It's about you having a process in place that companies, regulators etc can trust.

And for many reasons all of the trust in X is gone, e.g. after pulling out of audits:

https://digiday.com/marketing/brand-safety-concerns-mount-as...


A process which clearly does not work. No reason to trust Instagram either.


Advertisers, regulators and independent auditors disagree.


Other social media doesn't get a pass. Facebook faced a similar boycott in 2020, and Youtube was hit with two in 2017 and 2019.

I'm sure there are earlier examples, but it gets harder to search as you go back, because the typical advertising-related boycott used to be that consumers would boycott companies who advertised in ways they didn't like. (Which, of course, is why companies have strict brand safety rules in the first place.)


What’s sickening? Musk has provocatively pushed the envelope, again and again, and has courted hate speech. Which isn’t illegal, mind you, but thoroughly unpalatable to companies with decent reputations who cater to most Americans. X is still full of ads, from companies that cater to rightwing audiences.

And remember, this whole lawsuit is to distract from Musk’s outrageous antisemitic tweet, the real reason many of these companies pulled out in the first place: he accused Jewish people of hating white people and of precipitating The Great Replacement against them, real Nazi shit.



It’s very obvious that Elon and co would rather us concentrate on a blog post by someone else than his own words, words which are the real cause of this controversy. To that end, and since I think the actual cause is too often elided in these discussions:

[in response to a challenge to those saying ‘Hitler was right’ to justify saying so, a twitter account replies] “ Okay. Jewish [communities] have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them. I’m deeply disinterested in giving the tiniest shit now about western Jewish populations coming to the disturbing realization that those hordes of minorities that support flooding their country don’t exactly like them too much. You want truth said to your face, there it is.”

Musk replied: “You have said the actual truth.”

I think it’s clear why Yaccarino and Musk want everyone focused on how many impressions a small subset of the antisemitic content on X got, and not on the breathtaking antisemitism (and racism, although clearly “hordes of minorities” doesn’t raise as many eyebrows because nobody seems to care about that one) that Musk himself endorses (and has not apologized for or retracted since).


I don’t understand what you’re saying, but there’s really no good reason to paste another person’s fucked up racial delusions. The current brand of antisemitism is absolute bullshit and has no place here or anywhere, even being pasted. It’s not only completely stupid to believe this stuff, but the simple act of spreading it makes it worse. It’s dangerous stupidity.


I think this attitude is the reason that Elon is going to get away with agreeing with the statement in public. Most stories tiptoe around how extreme the statement is, and as a result he is now successfully deflecting on the issue.

The quoted tweet remains endorsed by the wealthiest and most powerful person on the planet. Hard to read? Disturbing? Yes, of course it is.


Some context: I grew up in Germany, have Jewish friends, and have visited Nazi concentration camps. I see why you feel that posting such hate speech is hurtful and gives hateful people a platform. At the same time, closing our eyes to terrible things makes those terrible things a lot more likely to keep existing. Here in Germany we show our schoolchildren, at age 16 or so, documentaries about Nazi concentration camps and even have them visit the camps. It's not meant as a platform to spread Nazi ideology, but rather as a way to face the reality of what happens when you let fascism dominate politics. In this situation we have Elon Musk endorsing unquestionably antisemitic content. Hiding what he endorsed makes it a lot easier to ignore the gravity of the situation and to move the topic away from it, as has been done here. I personally believe that in this situation the importance of precisely characterizing the kind of hate speech Elon Musk endorses outweighs the risk of spreading Nazi ideology. So I commend OP for posting it here verbatim.


Elon is struggling with the consequences of his desire to become Kanye.


Linda Yaccarino: Yes, we serve ads to bots and make you pay for it.


Did she say that? I didn't notice the bit after "and".

Of course Twitter serves ads to bots. Distinguishing tests from other bots is difficult, and a site as big as Twitter will be probed by very many third-party automatic tests. Suppose you run an advertising agency that buys ads on Twitter, perhaps via an intermediary. Why wouldn't you run a regular automatic test, e.g. to measure what percentage of the queries for foobar point to pages that serve your foobar campaign? If you do that, you've created a bot that needs to be served the same ads as typical humans.
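
For example, a hypothetical verification bot might look like this. The search URL and campaign marker are made up, and a real check would more likely go through a measurement vendor:

    import requests

    QUERIES = ["foobar", "foobar shoes", "buy foobar"]

    def campaign_hit_rate(marker="ads.example.com/foobar-campaign"):
        # Fetch one search results page per query and check whether our
        # campaign's creative is referenced in the served HTML.
        hits = sum(
            marker in requests.get("https://twitter.com/search",
                                   params={"q": q}, timeout=10).text
            for q in QUERIES)
        return 100 * hits / len(QUERIES)  # percentage of queries serving us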


This is going to be a weird thing to say, but: what happens if I'm a genuine nazi user (following lots of nazi content creators) who is also interested in computers? Eventually I would see Apple ads next to nazi posts, right? Anyone who's been online for more than a couple of seconds knows these kinds of people exist.

To say that Media Matters manipulated the algorithm because they followed nazi accounts and Apple is a weird argument to make. Twitter is more than capable of knowing which posts are antisemitic/nazi, and it should be trivial to simply not show sponsored content around those posts. The only side effect would be that nazis on the platform won't be seeing any ads.


> Twitter is more than capable of knowing which posts are antisemitic/nazi and it should be trivial to simply not show sponsored content around these posts

Apparently Twitter's system works by having tags associated with each user.

Some of those tags are added by automated systems, others manually. Your tags affect your reach, what content you see, what ads get shown alongside it, etc.

From all accounts, Musk has reset this system back to zero, messed with the algorithm so tags are less effective, and dismantled the safety team. So nothing is really working. And because it's been reset, all banned users are allowed back in.
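
A loose sketch of that tag-based model; the tag names and the rule are invented, not Twitter's actual implementation:

    PREMIUM_EXCLUDED_TAGS = {"hateful", "nsfw", "spam"}

    def ads_for(user_tags, premium_ads, remnant_ads):
        # Premium brand ads go only to users whose tag set carries no
        # exclusions; everyone else falls back to the remnant pool.
        if user_tags & PREMIUM_EXCLUDED_TAGS:
            return remnant_ads
        return premium_ads

    # If the tags are reset to zero, as suggested above, every user looks
    # clean: ads_for(set(), premium, remnant) serves premium to everyone.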


> Eventually they would see Apple ads next to Nazi posts. Right?

The claim is that X promised Apple (& others) that their ads would not be shown next to such content - regardless of whether that content had been served by algorithmic coincidence or because the user followed exclusively toxic accounts.

> The only side effect of this would be that Nazis in the platform won't be seeing any ads.

I imagine the no-toxic-content feature is available to select brands. Otherwise, you're right - Nazis get adblock for free. They likely see ads from a different pool, like one of those thousands of NFT AI ICO scheme ads that come with community note warnings. Point is, they were never meant to be Apple ads.


Does it make any sense, though? For a nazi user, seeing ads alongside nazi content presumably feels natural, and a non-nazi user would never see the combination.


But through Twitter/X's ad revenue sharing, money you spend on Apple products goes into marketing on Twitter/X, and finances Nazis.


I'm not a user of X/Twitter, so genuine question: how does it work? Users get money for viewing ads? For ads shown alongside their posts?


You pay X $8/month and then you are eligible for a payout based on some metric of how much your tweets are interacted with and the ads that were shown near them.

https://finance.yahoo.com/news/self-proclaimed-misogynist-an...


> Eventually they would see Apple ads next to Nazi posts. Right? Anyone that's been online for more than a couple seconds knows these kind of people exist.

Why would they have to show them ads at all? Sure, platforms won’t catch coded messages, but the examples from Media Matters weren’t coded and should be easily detected. The request from advertisers isn’t “only show our content next to nazi posts for users who are nazis”. It is “don’t show our content next to nazi posts”. The second isn’t any harder; it is in fact easier to accomplish.
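
The point in code form: the blanket rule needs only a content classifier, while the per-user variant needs that classifier plus a model of the viewer. All names here are illustrative:

    def show_ad_blanket(post):
        # "Don't show our content next to nazi posts": content signal only.
        return not post["flagged_extremist"]

    def show_ad_per_user(post, viewer):
        # "...except for users who are nazis": content signal AND a viewer
        # model, strictly more work for a strictly worse outcome.
        return (not post["flagged_extremist"]) or viewer["is_sympathizer"]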


I love the term “nazi content creator”.


Well I suppose that’s good news for marketing people who prefer their ads not to be seen by people


Media Matters staff aren't authentic users? Is Linda trying to imply that they violated the terms of service?

If a user can "force a scenario resulting in 13 times the number of ads," and those ads end up alongside pro-Nazi content, then the platform has a problem. The user isn't at fault here for the experience provided by Twitter.


I think she is suggesting that the user profiles they curated to maximize the chance of this happening were not the result of authentic usage patterns.


I'm confused how simply using Twitter and following Nazi-leaning accounts is an inauthentic usage pattern.

Nazism is bad but being a Nazi supporter or sympathizer doesn't make someone's usage inauthentic. At worst it makes their usage harmful.


Afaik the Media Matters employees were not genuinely interested in consuming nazi content.

I think their point was that none of the real nazis saw these ads next to nazi content?


If they want to state that no nazis see advertisements on their platform, they are free to make that claim. But I know there are nazis on that platform and I know they get served ads. What else needs to be known?


> Afaik the media matters employees were not genuinely interested in consuming Nazi content.

I'm quite certain they were genuinely interested in seeing what happens when they consumed Nazi content.


If they weren’t interested in the content itself but were instead interested in studying or manipulating the system providing the content, I can see why the behavior in question would be deemed inauthentic. However, I can’t really back it up beyond that, as I’m not them, sorry.


I think you're probably right, but...

Twitter's implementation of authentic-user detection must be expected to try to separate normal users from e.g. test accounts created by client authors. And even if the Media Matters staff were genuinely interested in seeing what happens, I think it's quite likely that their click pattern resembled a test account more than Joe Normal User (intense activity immediately after signup, for example). If that's the case, then Twitter's authentic-user detection would classify the MM user as "likely test account" or something like that, and the authentic-user data shown on Twitter's dashboards would exclude the MM user.
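
For instance, a toy heuristic along those lines (entirely illustrative; Twitter's real classifier is unknown):

    from datetime import datetime, timezone

    def looks_like_test_account(created_at, action_count, follow_count):
        # created_at must be timezone-aware for the subtraction below.
        age_hours = (datetime.now(timezone.utc)
                     - created_at).total_seconds() / 3600
        actions_per_hour = action_count / max(age_hours, 1)
        # Intense activity right after signup plus a tiny, hand-picked
        # follow list is exactly the pattern described above.
        return (age_hours < 24 * 7 and actions_per_hour > 50
                and follow_count < 40)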


> I'm confused how simply using Twitter and following Nazi-leaning accounts is an inauthentic usage pattern.

Lol. Indeed - if anything it's only becoming more representative every day.


That's like commenting on a crash test with "Not a single real human was sitting in that car".


X seems to acknowledge that following Nazis and brands will lead to brand ads next to Nazi content if you scroll enough.

This is from the lawsuit:

"Media Matters executed this plot in multiple steps, as X’s internal investigations have revealed. First, Media Matters accessed accounts that had been active for at least 30 days, bypassing X’s ad filter for new users. Media Matters then exclusively followed a small subset of users consisting entirely of accounts in one of two categories: those known to produce extreme, fringe content, and accounts owned by X’s big-name advertisers. The end result was a feed precision-designed by Media Matters for a single purpose: to produce side-by-side ad/content placements that it could screenshot in an effort to alienate advertisers."



