Clearly the people running these sites are true scumbags. However, the article doesn't mention that the only reason these sites making false claims have any value is that Google directs traffic to them! If somebody slanders me on a site nobody reads, why would I care? I only care if Google slanders me through its search results. That's what you're really paying for when you pay to get the slander removed: removal from the search results.
And yet nobody talks about the responsibility search engines have to prune obvious scams, viruses, and the like from their index. Google is not a neutral actor in this, it's Google's algorithm that's getting exploited, so Google has both a moral responsibility and the actual capability to fix this. Legal action against these slander sites is important too, but more of a secondary concern.
At the end it does say you can contact Google to have these kinds of results removed, which seems to work well.
I don't disagree that they should probably be more proactive, but it's tricky because "X is a horrible person" can be a legitimate article too; banning domains is a game of whack-a-mole, and of course you'll get complaints about "censorship".
Google has filters. Google filters for "quality", for relevance, and for quite a few other things: a filter against spam, a filter against SEO tactics that were once common, etc.
Filtering against slander seems reasonably simple and not that different from what they already do. They're already playing whack-a-mole against a whole variety of things.
But yeah, shitty websites suing their way onto Google search results with "free speech" complaints may be one thing that's making this hard, but I don't know enough about that situation to say anything with certainty.
This may be controversial, but I don't agree. Any legitimate item can be used for bad things if someone wants.
At the risk of making a bad analogy, a car can be used by someone to go on a rampage and mow down hundreds of people. If that happened you can be sure that someone, somewhere would suggest that the auto manufacturers have a responsibility to stop this kind of thing from happening again by making a "kill switch" available to police that can be used to stop the vehicle remotely.
Another bad analogy would be using a hammer to kill someone.
Clearly there is a fuzzy line there somewhere. I am not saying that companies have no responsibility in keeping their products from being used in bad ways by bad people, but I do think it is important not to only look at the bad thing and say that companies need to stop that bad thing from happening at all costs.
I don't think Google is quite as neutral here. Google suggests adding "cheater" to searches of victims' names and ranks those websites highly.
What if the hammer's manual included a section saying "try hitting someone with it"?
Now, Google's recommendation clearly came from seeing those words appear together, but to me Google is somewhere between completely neutral like grep, a hammer, or a car and fully editorialized like a blog or newspaper.
A PR statement addressing it would be a start. Probably also a token gesture, such as issuing a hammer recall. Sure, it costs money, but this is PR 101.
This will always remain an incredibly difficult problem to solve. The fact that people say Google should fix it shows how much we like to rely on one or two large companies to solve everything; it doesn't even enter our imagination that if Google's power were distributed over 30-50 companies, the same issue might not arise in the first place.
Take this thought further (this is going to suck and most will hate it, and I won't blame anyone who does, because I haven't completely finished this idea in my own head either)... what if everyone got slandered somewhere online? In other words: I can't change the way people treat me, but I can influence how I react. If everyone is slandered, then nobody is slandered, because the narrative becomes: the web is shit, so you can't trust what people write online anyway.
A lot of the problems would go away if we came to the conclusion that what happens online really is just a fake world of people sharing unfinished thoughts (that constantly evolve), rather than treating everything we publish as a public statement (because it sometimes even has our real name attached, acting almost like a signature on a contract).
> This will always remain an incredibly difficult problem to solve. The fact that people say Google should fix it shows how much we like to rely on one or two large companies to solve everything; it doesn't even enter our imagination that if Google's power were distributed over 30-50 companies, the same issue might not arise in the first place.
-- The issue doesn't seem unsolvable at all. Google can investigate the network of slander websites and reputation-management consultants the same way these news reporters did. Use some AI to find the poor-quality sites devoted to this stuff. Google already devotes a lot of resources to similar things. Google doesn't have to be certain a site is garbage to delist it, just gather enough red flags.
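To make the "enough red flags" idea concrete, here is a minimal sketch of threshold-based scoring. Everything in it (the rule names, the site fields, the weights, the threshold) is hypothetical and invented for illustration; it says nothing about how Google actually works, only that a site can be queued for review without anyone being certain it's garbage.

    # Hypothetical red-flag scoring: fields and weights are made up for illustration.
    # The idea is just accumulation of signals followed by human review.
    RED_FLAG_RULES = [
        ("charges a fee for removal",          lambda s: s["mentions_paid_removal"], 3),
        ("allows anonymous, unverified posts", lambda s: s["allows_anonymous_posts"], 2),
        ("links to a 'reputation' service",    lambda s: s["links_to_reputation_service"], 3),
        ("mostly copies other slander sites",  lambda s: s["duplicate_content_ratio"] > 0.5, 2),
    ]

    def red_flag_score(site: dict) -> int:
        """Sum the weights of every rule that fires for this site."""
        return sum(w for _, pred, w in RED_FLAG_RULES if pred(site))

    def should_review_for_delisting(site: dict, threshold: int = 5) -> bool:
        """Queue a site for human review once enough red flags accumulate."""
        return red_flag_score(site) >= threshold

    # Fabricated example values, for illustration only:
    example = {
        "mentions_paid_removal": True,
        "allows_anonymous_posts": True,
        "links_to_reputation_service": True,
        "duplicate_content_ratio": 0.7,
    }
    print(should_review_for_delisting(example))  # True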
-- But conversely, if you had 30 dispersed search sites, it seems likely this stuff would become utterly intractable, an endless game of whack-a-mole. So it seems like you're using kind of garbage reasoning primarily to take a shot at Google. Not that I like Google or whatever, but it's an illustration of a common HN prejudice.
"If everyone is slandered then nobody is slandered because the narrative becomes the Web is shit so you can't anyway trust what people write online."
Basically how everyone looks at news / journalism on tv and the net then?
Looking at the 'news feed' on Yahoo is not much different from the old Enquirer fake-news tabloids at the grocery stores of times past.. yet people sometimes cling to "this is the way it is.. I saw it on the web" - even though I think deep down they know they're only getting half the story, and not even half the truth, from any organization online.
So it's entertainment to shame the 'others' and some people take it pretty hard when the 'others' shame their allies.. yep, not hard to imagine at this point sadly.
I long for a browser extension that follows my choice of editors to remove all huffposts and many individual authors / 'journalists' / editors from portals / socials / etc - even then the truth will not be complete and much of what is not true will still entertain/influence/stir up anger, etc.
Once we get more deepfakes across the net, I think people will finally start to see it for what it is - one big Enquirer trying to get eyeballs and clicks, and just about as trustworthy.
Although - some people have started to say things like "if it wasn't true, Facebook would remove it - or put a notice on it - and they haven't, so it's probably true" - similar with Google, I guess - ugh.
If Google / FB continue to censor, fix, and filter, it could have a similar effect in the opposite direction. OK, we need a button on Google to show unfiltered results.
It's true that once a business is set up to do this it will want to perpetuate itself, but it obviously only got set up in the first place because this was obvious and easy to do, and therefore really cheap. If it were far more difficult, it would become expensive, because difficult things must be managed with work, which costs money.
In short I am not sure you are correct that there will be uncountable adversaries trying to do this.
Do you think that Google has being a motherfucking sorcerer as a job qualification? Just sentiment analysis would not be sufficient and epistemological truth detection is beyond what is possible. "Have you resurrected Abraham Lincoln? No? You bastard!" No sane moral system declares not doing the impossible a sin.
Their interests are already aligned against it, no need for farcical allocation of responsibility.
There is no such thing as "natural" search results; what you see is the product of complex, interacting algorithmic logic that Google implements to try to give you "good" results.
When Google for instance penalizes a site for being slow; or for being spammy; or being identified by the algorithm as likely to be disliked by most users, and not be what they are looking for; or for having spyware; or for seeming to use "black hat" SEO with irrelevant keywords...
...are these actions "censorship"? What kinds of shaping of google results are censorship and what kinds aren't?
The results are inherently shaped, they only exist because of algorithms that implement choices, there are no "natural" unshaped results.
Let me get this straight: someone posts slanderous lies about another individual in order to defame them and perhaps profit from the defamation; google is exploited to drive the slander to the top of the search results; and you think any attempt to remove the slander would constitute censorship and compromise the purity of the information you have access to? Really?
The idea that Google should act as judge of what is slander and what is a true allegation of wrongdoing is what raises the hackles of anyone who still thinks free speech has value. The parent comment said nothing about the slander being proven as such in court; Google is simply assumed to know what is slander and what is not.
Which of course it doesn't: a state-of-the-art AI's language comprehension is still extremely rudimentary, and it would be prone to being gamed by powerful malicious actors even if it were human-level.
Exactly. An MSM outlet whose own content is dubious and which is no stranger to exercising its own self-entitled "opinions". These opinions could reasonably be construed as slander in a world where an individual could stand up against them in court without going broke.
But let's play along and assume that everybody else's speech is the problem. Citizens don't need MSM-fitted, woke muzzles.
The co-opting of internet providers to mete out draconian terms of service that strip the individual of their freedom to express ideas needs to be criminalized.
I'd rather push back against the idea that a search engine that only ever promised to report popularity should now be expected to report accuracy or cease operating.
I’m kind of surprised at the absence of Scientology from the article. They pioneered slander sites in the early millennium in order to attack their critics, and they had excellent SEO-fu that would usually make the slander sites rank higher than the target’s own site. Because they have branched out into other businesses to bring in money, upon seeing the title of this article I half-expected them to be offering services to third parties in this vein, too.
Using SEO tactics to launch an automated slander campaign against ordinary people, who likely have very little other web presence, so that the slanderous material will be among the top hits, and then charging them money to have it removed.
Horrible, considering how many recruiters will lazily just Google a candidate's name and blindly trust the results.
This is one of those instances where it is fortunate I have a common name. The top results if you google my name are for some 18th-century Scottish architect. Hell, there's even another software engineer with the same name in the UK who always appears above me in search results. He has made a name for himself in the RoR community, apparently - I get emails from recruiters who think I'm him (looking for a RoR expert) all the time.
I had the thought that if something like this happened to me I might consider changing my last name to "Smith" or "Jones". Maybe it would be fun if we all did that ;-)
This, along with many of the excesses of cancel-culture, goes away if recruiters/employers stop acting like twelve-year-olds who just heard a piece of juicy gossip. What do we think when someone in their personal life cuts off a friend with the reasoning "I don't mind this person, but what will my other friends think of me??"
It's actually a positive. The commoditization and proliferation of content-free and evidence-free complaints against people will lead inexorably to distrust of all such things, leading to the birth of a future "evidence-based" moral culture that will wipe the slanderous, coercive, blackmail-esque "fake accusation" intelligence-industrial complex from the face of the earth, break it into ten thousand pieces and scatter it to the winds upon the waste. The trend is already in progress, and much of the blackmail networks are already undone. The pendulum will swing back. We're just in the lowest, and fastest, point of its arc right now.
Used to think this but now I’m not so sure. If you aren’t already a skeptic, I can’t imagine what else you need. People used to pay the National Enquirer for fake news. People want to believe horrible things about other people and will not be encumbered by facts or rationality.
When will The New York Times investigate one of the largest slander laundering operations ever created in the United States: The New York Times?
The New York Times => Wikipedia References NYT => Google EAT score adjusts ranking according to what the NYT/Wiki said. Once "The Network" decides to slander someone there is 0 recourse as all the other sites de-rank or disconnect the individual.
To get an extreme idea of how carefully in lockstep the "Network" acts to collude against actors it doesn't like, look what was done to erase "covfefe" off the internet:
It seems like the value proposition of this sort of extortion would rapidly fall if you just made a GPT-3 bot that slandered people, using a name generator to iterate over the fewer than 1 billion unique names that exist in the world. Publish and spam social media, mangle search results, and then we'll be back to where we were in the '90s, with no one believing the contents of search-engine results.
Neal Stephenson wrote about something like this in “Fall” - mass slander bots. Ultimate goal was destroying the credibility of the internet. In his world, it worked, and people started to watch only curated feeds. But it didn’t help much..
You need more than GPT3-generated text. As the article notes, slander sites often feature photos of specific individuals, taken from their social media profiles or other sources, and sometimes cropped in some way to make the target look even more ridiculous.
This is, of course, not new - it's been going on for at least a decade. One of the networks of sites like this even funded a lawsuit against one of the big revenge porn site networks that was carefully written to be entirely porn-specific and avoid any arguments that touched the shared parts of their business model - presumably because revenge porn was so evil it was bringing unwanted attention and they wanted it gone before someone took action that might affect them.
I remember noticing this same thing back then with those sites as well: "Most sidebar ads are programmatic. That means they are served up by an ad network with no involvement by the people who run a site, and they change every time you visit. That wasn’t the case here. The RepZe ads were permanent fixtures, written into the websites’ coding." It was really obvious that the adverts for removal services on that network of sites weren't standard programmatically-selected ads, that they must have some kind of business arrangement. On that network, they also seemed to be the only genuine ads - meaning that the removal fees were presumably their sole source of income.
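For what it's worth, that observation is easy to reproduce. Here's a rough sketch (the domains and the requests/BeautifulSoup approach are my assumptions, not what the reporters actually used): fetch the page's raw HTML a few times and see whether the removal-service link is present, and identical, on every fetch. Programmatic ads are injected by ad-network scripts and rotate; a hard-coded "ad" does not.

    # Sketch: a link that appears in the static HTML on every fetch is hard-coded,
    # not served by an ad network. The domains below are placeholders.
    import requests
    from bs4 import BeautifulSoup

    def static_ad_links(page_url: str, ad_domain: str) -> list[str]:
        """Return links to ad_domain found in the page's raw (unscripted) HTML."""
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return sorted(a["href"] for a in soup.find_all("a", href=True)
                      if ad_domain in a["href"])

    # fetches = [static_ad_links("https://example-slander-site.invalid", "repze.example")
    #            for _ in range(3)]
    # print(bool(fetches[0]) and all(f == fetches[0] for f in fetches))  # same every time?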
This makes my blood boil. The author is obviously technically savvy, but think how damaging this can be for someone (even someone technically inclined) in this day and age.
It would be a real shame if an organised group of people started posting programmatically to these slander sites with material from https://thispersondoesnotexist.com/ and randomly generated names. Storing all these pictures will cost something, after all.
On second thought, we could re-use the names and some of the text already on these slander sites in the newly generated posts (with the generated pictures), to give the website administrators a hard time figuring out which posts are about real people and which are duplicates. Bonus: Google will index all these "John Does" and "Jane Does", blurring the real person/victim into a ton of results with different pictures, so anyone looking at the results will think the site is total garbage (which it is).
> Mr. Sullivan told us that copying content was a great way to lure people to his sites. (He said he didn’t feel bad about spreading unverified slander. “Teach children not to talk to strangers, then teach them not to believe what they read on the internet,” he said.)
The fact that his business relies on people not being able to teach themselves not to believe what they read on the internet proves that he doesn't actually believe that this is effective mitigation. We aren't going to see a cultural shift in timescales shorter than a decade.
>We aren't going to see a cultural shift in timescales shorter than a decade.
Why not? We already did. 2000 was still in the era of "never give out accurate personal information online." 2005 was fully in the MySpace/Facebook social media era.
Facebook captured not just internet powerusers, but an entirely new (and eventually, a substantially larger) audience. I would very much doubt there's a larger audience currently offline out there today.
This is a good reason to keep off social media as much as possible, because anything can be doctored and turned against you. Starting with your real name, anything you say, any opinion, photos of you - all of it can be used against you, not only if it's public but also if the attacker can get into your private circle (e.g. hack the account of a person who is in it and vacuum up all the info about you).
The only downside is if an attacker's post is the first result on a google search of your name, which can easily happen for an unusual name having no other internet presence. Some people figure it is good to have a lot of other results under their name, results they have seeded themselves.
> In certain circumstances, Google will remove harmful content from individuals’ search results, including links to “sites with exploitative removal practices.” If a site charges to remove posts, you can ask Google not to list it....
I eventually found the Google form. I submitted a claim to have one URL removed. “Your email has been sent to our team,” Google told me.
Three days later, I received an email from Google saying the URL would be removed from my search results.
Useful information, though one would hope to never need to use it!
A side point: these sites are referred to as “clunky and text-heavy”. Text heavy is what I want; generally a site with mostly images is something I’ll close.
The NYT article itself did not have an obnoxious level of photographs, for that matter.
- Took everyone's money with bank bailouts and inflated stock prices
- Dump problems on everyone else. Debt, price inflation, job loss
- H1B visas only to win foreign contracts from hostile nations.
Now, not only do they hire no one, their product keeps people from getting any job.
Their recent activism doubled the murder rate. That's more people killed than by bin Laden, who we launched a two-decade world war against. One day the doors of those luxury buses will open, and charging out will be nothing but monsters, ones of their own creation. Funny.
Please stop taking HN threads into flamewar and stop using HN for ideological battle. It's not what this site is for and it makes threads tedious, repetitive, and nasty.
Doing good journalism is hard. Yes, the NYT sucks, but they do good reporting too. And this article is very good, so if you want to complain about how bad the NYT is maybe reserve that criticism for those times when a bad article from the NYT hits the front page.
I have the same reasoning. The NY Times has done some awful things, and they have some biases that are offensive to me.
But I am a paid subscriber! (As I am with the WSJ). Why? It's a much better source, even with the faults, than all that "free" journalism that the Hacker News people seem to like so much.
I'd not like to see this response go unanswered; I'm usually the first to attack these organisations for the damage they do, and I long ago abandoned reading them, having written them off.
But I can respect honest acknowledgement of the bad while arguing the good side is worth staying for. You'll (obvs) have a better perspective as a subscriber, and given you can approach the thing with nuance, I'm willing to go with your take on it.
I basically agree with you. But the fact that the NYT is "bad for America" doesn't mean this article exploring the slander industry is bad or wrong, only a bit ironic.
You are right, the article is not bad just because it came from the NYT. There is still the occasional useful piece, though always and only on safe or inconsequential matters. But the bad far outweighs any good. At any rate, irony is precisely what I pointed out.
Nah, it's always good to be reminded of how bad the NYT is.
NYT isn’t guilty of making mistakes, they are guilty of spreading lies.
To frame their deception as an honest mistake is deceptive in and of itself.
I wish these kinds of vitriolic posts cited some specific instances and sourced their claims. Otherwise I'm left to Google "NYT bad" to try to corroborate any of it.
Glenn Greenwald talks about this on his Substack a lot, too, and the malpractice of the larger media as a whole. This recent and highly topical Twitter thread is one of many examples.
News is ultimately someone's opinion that something is 'news' in the first place. I went to McDonald's earlier - is that news? I guarantee you vice.com could come up with some kind of news story about bored 30-somethings with nothing better to do during the pandemic than go to McDonald's as a way to take a break from the internet. Does that make it news, though, just because a news site says it's news?
something can be news somewhere [0], and not news somewhere else [1].
News is news wherever it is. No one credible talked about the non-story you mentioned.
Trend stories (like the McDonald's one you mentioned) are considered "soft news" and are filler.
Meanwhile, the piece by the person we are talking about was published on the editorial page. That's not "discretion about what news to publish"; that is literally "this is opinion only".
So what you're saying is that it's utterly essential for the posters complaining about The NY Times to source their claims? Why don't they do it, then?
Oh no. I was just making a random philosophical point. Nothing really to read into. Just that it would be difficult to find examples of criticism of organization X published by organization Y, if X and Y are partners.
It’s common knowledge amongst most educated people. Sorry if you had to find out by googling. Probably a sickening feeling if you’ve been a subscriber or passed along any of their retracted news as fact. I too used to trust the nyt, but “show me incentive and I’ll show you the outcome.”
Please don't respond to a bad comment with another bad comment. That only makes this place even worse, and the site guidelines explicitly ask you not to: "Don't feed egregious comments by replying; flag them instead."
Actually, the responses are what does the real damage. If people simply do as that guideline asks, these fires would die out rather than spread.
Also, your comment broke this guideline too: "Please don't sneer, including at the rest of the community." Can you please not? It's its own variety of tedious and nasty.
Edit: unfortunately your account has broken the site guidelines a ton already, and rather shockingly. That's not cool, and we already asked you once not to do it (https://news.ycombinator.com/item?id=26652797). I've banned this account until we get some indication that you want to use HN as intended. If you do, you're welcome to let us know at hn@ycombinator.com.
Project Veritas? Really? These are the guys who have cut together several "Homer Badman" style videos of their political opponents and tried to pass them off as news. They also tried to plant literal fake news and were caught[1] by a traditional media organization. I'll be surprised if this lawsuit results in anything.
If you look closer at O'Keefe and his organization, you'll find that since the ham-fisted attempts to slander ACORN backfired, they've grown more savvy; they now release more unedited footage and have won more lawsuits than they've lost.
Your complaints regarding Veritas hardly compare to the wrongdoings of the New York Times. Just recently you had the NYT reporting that Trump supporters had murdered a police officer with a fire extinguisher, then retracting that story and claiming he died from exposure to bear spray only for the truth to finally come out that he had died from natural causes the day after the alleged murder.
As for the edited-videos complaint, all media outlets take this approach, with Veritas being one of the few to release their videos in unedited form.
They are no longer an independent news organization and instead are a trafficker and sycophant for one set of views masquerading as independent journalism.
"Forced to walk it back" is an oddly condemnatory way to describe admitting to a mistake. Granted the mistake has already been made, surely it's a good thing to own up to it?
No. The pattern of slandering an individual, waiting for the story to pass, then apologising later once the damage is done and a litigious response is coming to bear (calling it a "mistake" or whatever excuse is necessary) is a tactic used by scum and defended by the same.