They curated feeds specifically trying to find offensive content and then refreshed their feeds until they could find an ad placed near offensive content.
You could pull this trick on any tech company: Google, Facebook, etc. But Media Matters hates Twitter/X and Musk, so they're focused there.
People forget that Media Matters is a political organization, created by a Clinton loyalist, that works closely with a number of super PACs also created by David Brock. They target their political enemies and that's it.
Anyway... these advertisers will be back. This weekend's fiasco about OpenAI proved that there is a need for X -- none of the breaking news was coming from Threads or some random Mastodon server.
Just a reminder they tried the same political hit on Elon at this exact time last year. This is pure party politics. They see him as a threat to their power.
This is what they were created to do. Their founder has created several Super PACs and has been an advisor for the Clintons. Their largest donor was George Soros. They are political in nature and always have been.
That's fine, but people should know what Media Matters is and not confuse them for some non-partisan non-profit.
Then there is this gem, also a stunt pulled by their founder:
> And a nonprofit group founded by the Democratic activist David Brock, which people familiar with the arrangements say secretly spent $200,000 on an unsuccessful effort to bring forward accusations of sexual misconduct against Mr. Trump before Election Day, is considering creating a fund to encourage victims to bring forward similar claims against Republican politicians.
What would you propose if you were tasked with testing whether the ads are placed next to hateful content or not? Please explain how you would test that, keeping in mind that the test is whether there is ANY such placement at all.
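To make the question concrete, here is roughly the shape of such an existence test, as a hedged sketch only: fetch_timeline and is_hateful are stand-ins I made up (a real run would need a logged-in browser session and human review), not any actual X API, and the sample data is hard-coded.

```
# Sketch of an "existence" test: does ANY ad ever render directly adjacent
# to hateful content? Everything here is illustrative stand-in code.

def fetch_timeline(refresh_number):
    """Stand-in for one refresh of a curated feed: an ordered list of items,
    each either a post or an ad. Hard-coded sample data for the sketch."""
    if refresh_number < 3:
        return [{"type": "post", "flagged": False},
                {"type": "ad", "brand": "ExampleBrand"}]
    # Eventually a refresh lands an ad right under a flagged post.
    return [{"type": "post", "flagged": True},
            {"type": "ad", "brand": "ExampleBrand"}]

def is_hateful(item):
    """Stand-in for human review of the post content."""
    return item.get("type") == "post" and item.get("flagged", False)

def find_adjacent_placement(max_refreshes=500):
    """Refresh repeatedly; return the first ad rendered directly below a
    flagged post. One hit answers the existence question -- it says nothing
    about how often this happens for a typical user."""
    for n in range(max_refreshes):
        items = fetch_timeline(n)
        for above, below in zip(items, items[1:]):
            if below.get("type") == "ad" and is_hateful(above):
                return {"refresh": n, "ad": below}
    return None

print(find_adjacent_placement())  # e.g. {'refresh': 3, 'ad': {...}}
```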
I've never understood the "it's not a problem because it's also a problem elsewhere" mindset.
And I'm skeptical of your implication that there is just as much awful content on platforms that at least try to police it as there is on the one that doesn't.
Because it's an isolated demand for rigor and gives away the real game which is "take out the target".
If flipping the partisan lens helps, consider the Hillary Clinton 2016 emails thing. Did anyone really care about the emails? Did we have immaculate email security from 2017-2020?
This seems like a really myopic view, focused on ideas like blame and purity.
Or, another way, this seems like a blame the messenger strategy. Sure, great, Media Matters is not a perfect arbiter of whatever. But how does that make it not a problem that Ford's ads are showing next to overt Nazi content? Am I (or is Ford) supposed to handwave it away because the same thing could happen on another platform?
Henry Ford has been dead for 76 years and the company is not currently run by vocal antisemites, so I don't quite get the point you're trying to make. Presumably you're saying it's not a big deal and they don't really care?
Recognizing an isolated demand for rigor isn't necessarily a demand for purity.
It's specifically noticing when nobody cares at all about a topic in all contexts except when it gores a particular ox.
In both this example and the emails thing, you never hear anybody talking about it except some very particular cases where they're clearly motivated to tar a particular target.
"We all have to consider this topic to be very important, specifically and only when party X does it, and talking about anything else is whataboutism!"
But Twitter did serve those ads adjacent to the offensive content. No one seems to be claiming that Media Matters photoshopped the ads. The question boils down to how often it happens. Is Apple OK with half their ads being next to racist crap? A quarter? 10%? 1%? 0.1%?
Media Matters demonstrated that it does happen more than 0%. Looks like it's up to Twitter's advertisers to try the experiment for themselves and see if it breaks their own comfort threshold.
I mean, beyond that, the ads don't literally need to appear next to the racist tweets for advertisers to consider the entire platform poisoned by Elon's behavior.
This is one of those nerd snipe "but you didn't say simon says" defenses that no one in the real world actually gives a shit about. Elon has shown his face, advertisers don't want their brands associated with that. It's that simple.
I agree with all that. I doubt Apple, IBM, etc. saw the Media Matters report and took it at face value, then redirected their advertising budgets without checking into it on their own. In fact, I'd say it's a given that they had marketing people doing their own reload-and-count exercises. And when they did, they came up with a number that was greater than their comfort level.
All platforms will have some awfulness. There's no way around it. If Twitter had a trillion good posts and 1 bad one, no one would bat an eye. It's a rounding error; things happen. But advertisers have at least an informal idea of how much badness is too much, and Twitter crossed that line. That's the end of the story. Apple and IBM et al. ran the numbers and decided that Twitter's content is too tainted for them to want to associate with it.
And because free market, they decided not to any more.
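For what it's worth, here's a minimal sketch of what such a reload-and-count exercise might look like on the advertiser's side. The sample counts and the 0.1% threshold are invented for illustration; nothing here reflects any advertiser's actual numbers or process.

```
# Minimal sketch of a "reload-and-count" check: tally how many of your own
# ad impressions rendered next to flagged posts, then compare the observed
# rate to an internal comfort threshold. All numbers are invented.

def adjacency_rate(total_ads_seen, ads_next_to_flagged):
    # Observed share of our ad impressions that rendered next to flagged posts.
    return ads_next_to_flagged / total_ads_seen if total_ads_seen else 0.0

# Illustrative numbers only: 10,000 of our impressions sampled, 3 flagged.
rate = adjacency_rate(10_000, 3)
comfort_threshold = 0.001  # 0.1% -- purely hypothetical internal limit

print(f"observed adjacency rate: {rate:.4%}")  # 0.0300%
print("over threshold" if rate > comfort_threshold else "under threshold")
```

Different advertisers will plug in different thresholds; the point is that it's their own internal judgment call, not something Media Matters decides for them.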
Yes, they "demonstrated" it with this tactic described below:
- *Media Matters’ Research Tactics*:
- Created an alternate account to manipulate public and advertiser perception.
- Curated posts and ads on the timeline to misinform about ad placement.
- Contrived experiences are not platform-specific and could be replicated elsewhere.
- *Ad Serving Instances*:
- After curation, they refreshed timelines to find rare instances of ad placement.
- Logs showed 13 times more ads served to their account than to the median X user.
- *Ad Impressions Data*:
- On the day in question, fewer than 50 of X's 5.5 billion ad impressions were served against the content in the Media Matters article (see the back-of-the-envelope after this list).
- *Specific Brand Exposure*:
- One brand had an ad run adjacent to a post twice, seen by only two users, including the article's author.
- Another brand had two ads run adjacent to two posts three times, seen only by the article's author.
- *Content Policy Evaluation*:
- The article highlighted nine posts believed to be inappropriate for X.
- Only one of the nine posts violated content policies.
- Action was taken on the violating post under the "Freedom of Speech, Not Reach" enforcement approach.
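Taking those figures at face value, the back-of-the-envelope works out like this (a rough sketch; the variable names are mine, and the 50 is treated as an upper bound since X only reported "fewer than 50"):

```
# Quick arithmetic on the impression figures X cited above, taken at face value.
impressions_total = 5_500_000_000   # ad impressions on X that day
impressions_flagged = 50            # upper bound on impressions served against the cited posts

share = impressions_flagged / impressions_total
print(f"{share:.2e}")   # 9.09e-09 -> roughly 1 in 110 million impressions
print(f"{share:.7%}")   # about 0.0000009% of that day's impressions
```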
It's unimaginable that any advertisers took their word for it and dropped their Twitter ad budget simply because Media Matters said so. At the most, Media Matters might have called their attention to it (if it wasn't already on the advertisers' radar, which it almost certainly was).
>They curated feeds specifically trying to find offensive content and then refreshed their feeds until they could find an ad placed near offensive content.
You agree, then, that Twitter is doing the thing they said it did. What's your point?
> People forget that Media Matters is a political organization, created by a Clinton loyalist, that works closely with a number of super PACs also created by David Brock. They target their political enemies and that's it.
Can you complete that thought? I don't understand who the political enemies here are. Are you implying that Musk is basically "the other side", the "political enemy" here, i.e. a Republican? Why is pointing out that literal Nazis do their thing on Twitter political in this case?
Facebook allows far worse, especially if it isn't in English.
Late edit: I meant "far worse" in this context as far worse than merely supporting neo-Nazis, not far worse than Twitter/X. Both have lots of nastiness.
Facebook notoriously doesn't employ native speakers for all the languages the interface supports, so posts in languages they don't staff for have zero oversight.
In my sibling comment, I do point out some sources for English language nastiness as well.
They may not explicitly embrace it, but one could argue their erratic enforcement of the vague "community standards" is itself insufficient, if not simple lip service.
It's a mixed bag. Their official stance is to take down hateful content, but they don't hold themselves to their own standards very well.
The short version is they have a set of "community standards" (borrowed from, but different to, the legal concept of the same term) that are erratically enforced.
>We examined the verified accounts of Lucas Gage, E. Michael Jones, Stew Peters, Andrew Torba, and Way of the World — all of which have at least 50,000 followers and regularly use X to engage in antisemitism. Among the ads appearing on these accounts included those for MLB, the NFL, and the Pittsburgh Steelers. As verified accounts with such large followings, these figures could theoretically receive revenue from those ads under the social platform’s revenue sharing program. (At least one of the accounts has received money through the program.)
They are a political organization and make no secret of that.
>Media Matters for America is a web-based, not-for-profit, 501 (c)(3) progressive research and information center dedicated to comprehensively monitoring, analyzing, and correcting conservative misinformation in the U.S. media.
Everything you claim is a nefarious conspiracy is on their website.
If you think there's no evidence Twitter was cozying up to neo-Nazis, that would make it a strange coincidence that Musk himself turned out to be an anti-Semite...
Did you read what you posted? All they did was look for neo-Nazi pages and hit "refresh" a few times. That's exactly what Media Matters said in the first place.
Anybody remember when all the worst people on twitter were so excited to be able to say the N word after Musk bought it, so much so that people were asking if they could yet?
Musk has the right to hold whatever views he wants. He has the right to enforce said views via whatever mechanisms in Twitter, because it's his. Advertisers in turn have the right to leave if they don't want to be associated with Twitter anymore because of that.
He can sue whoever he wants, his cases will be dead on arrival.
A stated reason for his buying it in the first place was that he felt there was a liberal bent to its moderation and he wanted to change that. What he failed to recognize, or refused to comprehend, is that the views of bigots are in fact often filtered from social media not because of a liberal bias in the programmers, moderators, or even leadership, but because bigotry is fucking disgusting, and all social media besides Mastodon is beholden to advertisers for the lion's share of its revenue, and advertisers (usually) don't want to be associated with bigotry. And just like any product, when you want to make a social media site that's like other social media but conservative (read: bigoted and unmoderated), you can still get advertisers, but they're InfoWars-type adverts: the gun-nut coffee people, dick pills, mobile games, and financial scams like reverse mortgages, all of which are worth significantly less money.
So? Musk claims to be a free speech absolutist. Is this not free speech?
It's only defamation if it's not true. It's completely obvious that the site is absolutely loaded with Nazis and trolls. Seems straightforwardly true to me. Media Matters would not have had to try very hard to cram a feed with that stuff. I don't engage with crap like this and my feed is still loaded with it.
That and Elon has been "reply guying" such material for quite some time. He either agrees with them, isn't reading the stuff he promotes, or is actually that tone deaf.
Musk also claims to be pro-free-enterprise. Doesn't that mean it's fine for companies to decide not to advertise on X if they dislike the content? These are private companies, not governments. They can advertise anywhere they want.
> Musk also claims to be pro-free-enterprise. Doesn't that mean it's fine for companies to decide not to advertise on X if they dislike the content? These are private companies, not governments. They can advertise anywhere they want.
Yeah, the sentiments I'm seeing in this comments section ("they'll be back" / "he should charge even more when they return") are funny in that they're totally disconnected from the current reality of digital advertising: i.e., it's a free market, and ad spend budgets are finite.
If people don't want the space, they won't buy it. If the space is seen as less valuable, the people who do buy it will pay less. Supply and demand, baby!
>It's completely obvious that the site is absolutely loaded with Nazis and trolls.
The Twitter feed is fitted to the consumer. Mine is mostly filled with pro-Israel stuff and crypto bros turned AI evangelists. The only antisemitism I have seen was retweeted by a couple of Jewish intellectuals I follow, and the majority of the antisemitism since Oct 7 comes from Muslim or left-wing accounts. The amount from the proper Nazis is an order of magnitude lower.
It's interesting that people who don't use the site at all have such strong opinions on it.
I have an account from 2008. I use it daily. Because I don't follow bigots and Nazis, and don't interact with their content, I don't see bigoted/Nazi stuff EVER.
I primarily see a feed of tech/AI/DevOps stuff, mixed with pro-Israel posts and Cheech and Chong gummy ads.
The idea that the typical Twitter user following Taylor Swift and Travis Scott and the Washington Post is going to see Apple ads next to Nazi content in their feed is blatantly artificial and silly.
> They curated feeds specifically trying to find offensive content and then refreshed their feeds until they could find an ad placed near offensive content.
I don't understand what's disingenuous about this? They demonstrated that these ads can be served alongside Nazi posts, didn't they?