Hacker News
Canadians have a ‘right to be forgotten’ on Google, Federal Court rules (theglobeandmail.com)
206 points by voisin 11 months ago | 149 comments




Politicians pass a law saying Google must artificially suppress unflattering news about politicians, and the entire citizenry is cheering it on. Wild. Thank god for the first amendment.


Under Canadian law, any 'news' outlet that receives money from a non-Canadian source is restricted. So, for example, if you receive money from Tim Hortons to advertise their new donut, you may no longer report on politics; you are 'influenced by the Americans.' Such outlets can then only report that 'so-and-so said something about politics.'


If someone has the same name as someone who committed an offence, that's a problem, right?

This permanent memory doesn't extend only to yourself; it covers OTHER PEOPLE who have the same name as you. You are lumped in with them, for better or worse.

I could see it being necessary for some people to literally change their name just so they wouldn't turn up in the same search results as the other person.


Sounds like a "management too fucking stupid to live" problem, if the idea that one Bob Smith committing bestiality doesn't mean all Bob Smiths fuck animals is a foreign concept to them. Or that two given Bob Smiths are different people.


In Europe this has been used by a lot of shady characters, including a drug-dealing, diplomatic-visa-buying individual in my country, to remove all mentions of whatever they can from Google. If I search for certain shady politicians and businessmen in my country on Google vs. DuckDuckGo, the difference is stark.


I would rather that right to be forgotten exist for that individual than not exist for anyone.


So you want Google to lie about reality and punish people who state the truth.

I honestly can’t understand why.


Because people shouldn't be judged on only the things that make it into search results.

People are complex and are not their mistakes. Once you have served a punishment, you should be free to live on, not be forever branded a criminal.


Ahh yes... because Google would never do that before!


I think this comes down to failure modes. No system is going to be perfect (you'd need an infinite amount of resources), so you need to design failure modes into it. The same is true of laws. In fact, I'd say this is a big reason "laws are meant to be broken": the ecosystem is always changing, and intent and implementation don't always align. Sometimes laws that seem very good are incredibly stupid in certain situations. This is why we should never rule by the letter of the law alone. The famous jurist William Blackstone gave us the namesake "Blackstone's ratio": in short, "It is better that ten guilty men go free than that one innocent man be deprived of his freedom." Blackstone influenced many of the early founders of countries in the Americas.

So we have to ask ourselves: is this a thing that is beneficial to normal people? Is their right to be forgotten and their privacy important? You know people will always abuse this system, but is it better that it exists than if it didn't? You will __always__ be able to point to abusers with __any__ system, so pointing at abusers by itself isn't an argument against something. It needs more context. If the system is only used by abusers, then that's a problem. But terrorists and pedos use encryption (the usual fingers pointed), and so do normal people, for whom it is highly valuable in their daily lives. I don't know if removing your identity from Google is as important as that, but we should make sure we include more context than saying "people abuse it." That's a statement that will always be true.


I would think that the idea here is that courts aren't currently intentionally giving out "and everyone remembers what they did forever" as part of most criminal sentences; that this type of indefinite scarlet lettering even after serving out a sentence was never an intent of criminal sentencing, but only became one as a byproduct of the judicial process due to court cases being public for auditability + the Internet durably remembering and indexing everything forever; and that if courts want to make an indefinite scarlet letter a part of (some) criminal sentences, then they should be doing so explicitly, as with "sex offender registries" and the like, so that that power can be applied only where democratically favored via legislation.


> this type of indefinite scarlet lettering even after serving out a sentence was never an intent of criminal sentencing

Most people used to live in small towns and die within 100 miles of where they were born. If you got convicted of something, everybody knew.


Most people, yes. But criminals in particular used to escape this bubble of knowledge about who they were, by simply leaving town and starting over somewhere else. The possibility of doing this is one of the implicit bedrock assumptions of the design of most legal systems — but, due to everyone being able to look up "who you are and what you've done" everywhere you go, it's currently a broken assumption.


You could just change your name, which seems like less of a burden than moving away from everything you've ever known.

If you want to replicate the way it used to be then instead of trying to censor the world, just have the government give you a new ID and social security number when you change your name without publishing any record associating the new one with the old one.


Doesn't work when private companies keep facial-recognition databases and use them to auto-tag new photos of you with your old name.


Facial recognition vendors: Our systems have 99.97% accuracy! Actual meaning of this: In a nation of 300M, there are 90,000 false positives for every true match.
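The base-rate arithmetic behind this claim can be sketched directly (treating "99.97% accuracy" as a 0.03% pairwise false-match rate is the parent's assumption, not a vendor figure):

```python
# If "99.97% accuracy" were a per-comparison false-match rate, then comparing
# one probe photo against every face in a 300M-person database would produce:
population = 300_000_000
pairwise_false_match_rate = 1 - 0.9997   # i.e. 0.03% per comparison (assumed)

expected_false_positives = population * pairwise_false_match_rate
print(round(expected_false_positives))   # 90000 false positives per search
```

This is the classic base-rate problem: a tiny per-comparison error rate still multiplies into large absolute numbers against a large gallery, though whether vendors' quoted accuracy is actually pairwise is disputed.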


This is wrong and not how the numbers are computed. It's not pairwise. Pairwise error rates are more like 1 in 10 billion.

NIST did a review in 2019 (published 2020) with datasets containing 12 million people. For searching a given photo against the database, solutions might, say, fail to pull anyone 3% of the time and pull the wrong person 0.5% of the time.

Thus, you get about 1 false positive for every ~190 true matches. It was also under pessimistic conditions: a big fraction of the searches were for individuals not enrolled in the dataset.
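A quick sketch of how those rates combine into the ~190:1 figure (the 3% and 0.5% rates are the illustrative values from this comment, not exact report numbers):

```python
# For searches where the person IS enrolled in the gallery:
false_negative_rate = 0.03    # search fails to return the enrolled identity
false_positive_rate = 0.005   # search returns some wrong identity instead

true_match_rate = 1 - false_negative_rate            # 97% of searches succeed
matches_per_false_positive = true_match_rate / false_positive_rate
print(round(matches_per_false_positive))             # ~194 true matches per FP
```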

https://github.com/usnistgov/frvt/blob/nist-pages/reports/1N...

edit: I see there are newer versions of the data since I bookmarked this, but I have not reviewed them. A glance shows they format the data differently.

Still, things have likely improved somewhat since then.


> This is wrong and not how the numbers are computed. It's not pairwise.

I thought that actually was the pairwise number for some algorithms in use. They're designed to produce a list of suspects to investigate, and then if you get a dozen hits against your surveillance camera photo from your database of a few thousand local mugshots, that's what you're after.

> NIST did a review in 2019 (published 2020) with datasets containing 12 million people. For searching a given photo against the database, solutions might, say, fail to pull anyone 3% of the time and pull the wrong person 0.5% of the time.

Doing the numbers this way implies that the error rate would be ~2800% higher if the database contained the entire population, even under these conditions.
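The "~2800%" figure appears to come from extrapolating the 12M-person test gallery linearly up to a full national population; a rough sketch, where the 350M population figure is my assumption and linear scaling is itself a simplifying assumption:

```python
# Linear extrapolation from the NIST test gallery to a full population:
test_gallery_size = 12_000_000
national_population = 350_000_000   # assumed round figure

scale_factor = national_population / test_gallery_size   # ~29x more people
percent_increase = (scale_factor - 1) * 100
print(f"~{percent_increase:.0f}% higher")
```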

> It was also under pessimistic conditions: a big fraction of the searches were for individual not enrolled in the dataset.

It was also under optimistic conditions: They had a database of profile photos taken under controlled conditions with an attendant present. This is obviously not available for most people. It's using the most sophisticated algorithms in existence under laboratory conditions rather than describing what happens in practice in most cases.

But even supposing that this is a problem, wouldn't it still be better to ban facial recognition databases than news reporting?


> I thought that actually was the pairwise number for some algorithms in use. They're designed to produce a list of suspects to investigate, and then if you get a dozen hits against your surveillance camera photo from your database of a few thousand local mugshots, that's what you're after.

You can certainly accept a whole lot more false positives in exchange for a lower false negative rate depending upon application; that's what figure 1 shows.

> Doing the numbers this way implies that the error rate would be ~2800% higher if the database contained the entire population, even under these conditions.

Disagree. Error rates with 1 million people aren't anywhere near 1/12th; they're perhaps half. It's nowhere near a linear relationship.

Also, if you had everyone in the database, you wouldn't have lots of people presenting who aren't in the database, which is where most of the false matches appeared.

> But even supposing that this is a problem, wouldn't it still be better to ban facial recognition databases than news reporting?

Don't take my disagreement on the magnitude of the threat of facial recognition as supporting something else.


> Disagree. Error rates with 1 million people aren't anywhere near 1/12th; they're perhaps half. It's nowhere near a linear relationship.

Is this only because of the trade off between false positives and false negatives? To avoid raising the false positive rate excessively you could accept a higher false negative rate, but how much does one rate change if you hold the other constant?

> Also, if you had everyone in the database, you wouldn't have lots of people presenting who aren't in the database, which is where most of the false matches presented.

This is just a facet of measuring false positives like that: If you're not in the database, the algorithm may be confident that someone who isn't you, is you. If you get added, you may look slightly more like yourself than the other person does and you may end up at rank 1, and then it isn't calling this other person a false positive anymore even though it does consider them as looking enough like you to exceed the threshold for returning a match. But as long as the database isn't fully comprehensive, that means the false positive rate for someone who isn't in the database is going to get higher the more people who are in it, and then anyone would just claim that they aren't in the database.

As would be the case for someone who just changed their name.

Unless they add themselves to the database under their new name, in which case the opposite happens: The database has Bernie Madoff with your mugshot from 2009, you change your name and go to the Department of Privacy to have your picture taken as Altria Academi, and your recent picture looks more like you do now than your old mugshot so Altria Academi comes up as rank 1 and Bernie Madoff comes up as rank 2 and you've got yourself a false negative.
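A toy sketch of the rank-1 displacement effect described above, assuming a gallery search that returns every identity scoring above a threshold, ordered by similarity (all names and scores here are invented for illustration):

```python
def search(gallery_scores, threshold=0.6):
    """Return above-threshold identities, best match (rank 1) first."""
    hits = [(name, score) for name, score in gallery_scores.items()
            if score >= threshold]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Similarity of a recent probe photo against each enrolled entry:
before = {"Bernie Madoff": 0.72}                           # old mugshot only
after = {"Bernie Madoff": 0.72, "Altria Academi": 0.91}    # new ID enrolled

print(search(before)[0][0])   # rank 1: "Bernie Madoff"
print(search(after)[0][0])    # rank 1: "Altria Academi"; old name drops to rank 2
```

Note that the old identity still exceeds the threshold in the second search; it just stops being counted as the top match.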


I've been trying to follow this case, and I'd love for someone with an actual legal background to correct me if I'm wrong. But as far as I can tell from reading the rulings and PIPEDA, all the courts have technically said so far is that commercial search engines are not blanket-exempt from PIPEDA, and that they are not exempt from PIPEDA in the context of the original complaint (the actual guy complaining that his name is returning untrue and damaging results on Google).

Google has been trying to argue that the Privacy Commissioner needs to consider the Charter implications of PIPEDA applying (and allowing a right to be forgotten), while the Commissioner and the courts have been stating that they will first rule on whether Google's search results are in scope of PIPEDA before considering whether delisting would unduly limit Charter rights.

Finally, as I understand it, PIPEDA effectively grants the Privacy Commissioner investigatory powers, and enforcement and remedies are supposed to come from the courts (i.e., another lawsuit). That said, the Privacy Commissioner has published a draft position on online reputation saying it does believe delisting/deindexing can sometimes be an appropriate remedy, so the obvious follow-up step would be for the Privacy Commissioner/original complainant to proceed.

In other words, even if Google fails to win an appeal on this set of rulings, there will almost certainly be another round of court cases on whether delisting ("right to be forgotten") is actually Charter-compliant.


This is definitely going to be used in ways legislators haven't(?) intended.


Legislators intend to use it to remove incriminating news about themselves during the election cycle.


Canada gets removed from Google Maps.


'Right to be forgotten' is a kind of absurd law. If you did something notable that is mentioned by the press, then published on press webpages, then indexed by search engines, you could ask the search engine to hide that information, even though the original information is still publicly available (at least in libraries archiving the paper press).


If you do something notable that is in the public interest the Right to be Forgotten does not apply. In any jurisdiction I'm aware of where comparable laws are implemented, they cover only personal information.


Now do America. I know - a lot of Silicon Valley business models are built on scraping and utilizing and selling personal data - but people should have control over that data and they should have the right to say no.


What about when it's news articles? Should people have a right to control those that might contain their name?


Sometimes yes, sometimes no. You could be arrested tomorrow in a case of mistaken identity. Every time somebody searches for “Kalium”, the first result might be details about how you were arrested for some terrible crime, and the follow-up a few days later about how charges were dropped would rank way lower because that never made the front page.


Yes, that definitely could happen. The question is if I have the right to personally control what random people see if they look up public information about me.

I'm not particularly comfortable with the obvious extensions of this. What if I'm a local businessperson doing unpopular things? What if I'm running for office? Where does it start and stop?


So who decides if an individual person has the right to be forgotten?


I think the problem is not so much the news article, but that it pops up front and centre when doing a trivial search. Also, considering the low levels of literacy, most people won't even bother reading the article, just the (often clickbait) snippet. I'm not blaming the newspaper; I'm saying search results should be held to higher standards.


I think the First Amendment would block anything similar in the US.


Not an American, but I think your first amendment only applies to the government. Most sites do moderate, which would run afoul of your interpretation.


Sure. What we're talking about here is government action enforcing a putative "right to be forgotten".


The courts and the government are independent and different things. In this instance the government is not involved. A citizen is taking Google to court.


Yeah, no. Courts are part of the government. Third branch, you know.


I don't have an opinion on whether there should be a right to be forgotten or not, but it doesn't do the thing you described. We need GDPR or a similar protection. And to do that, it needs to be regulatory legislation, not court rulings.


Another right that requires someone else to do something for you rather than refrain from doing something to you. I guess the whole “force is only justified against force” thing is not believed by most Canadians.



Privacy is ingrained into Canada's laws and can't be changed willy-nilly. It'll be interesting to see how we deal with it and the 'blabbermouth' nature of tech.


This is great Canada, now do the CRA please.


Canada Revenue Agency.


Expand?


More like the right to suppress history.


Absolutely, and they should have the right to be reinvented for public occasions to cheer and honor them the way it is meant to be.


I can’t read the article (it’s paywalled), but what if this is used to cancel someone and erase all their information as if they never existed on the internet? Just because of wrong thinking?


Perhaps it's worth noting the following in the article (before rushing to comment here based on the title alone):

"The case began with a complaint to the federal Privacy Commissioner in 2017 from a man whose name and details are kept confidential in the ruling. The man said outdated and inaccurate information about him in newspaper articles found on the internet was leading to great personal harm, including physical assault, employment discrimination, severe social stigma and persistent fear. He wanted the information to be delisted – made unsearchable, unless someone knew the website urls featuring his name."


While targeting search may be more expedient, I would expect the proper target to be the publisher or maintainer of the inaccurate newspaper articles. If articles make false claims, that can fall under defamation. A reasonable newspaper will include corrections (preferably inline).


Quite understandable.


It really depends. Some people would consider it “outdated and inaccurate” to report they were arrested for some crime - even if that’s true. Should such articles be scrubbed from Google if they are causing harm? Removed from newspaper archives too?


In my own opinion, if you’re arrested, have paid the standard debt to society that we all (in theory) have agreed on, and gone back into society seeking a job… forcing people to constantly deal with their old (non-violent) crimes seems like extra-judicial punishment for very little benefit.

Even worse to leave that punishment to a private company.


Do you think a world where it is illegal for a victim to tell other people that someone victimized them is a just world?


Hmm, so is that just for crimes, or can anyone get anything bad they’ve done removed from the internet?


GP comment was specifically about information about crimes. Not sure what precedent is set by the decision mentioned in the article.


Just seems awfully hard to draw a line.

So if the victim of the crime published an essay about what happened, are they allowed to tell their own story and have it listed in Google?

What if it’s a story about conduct that could be criminal but for which you were never charged?


Fortunately, in the US it’s fairly easy to draw a line: There’s no First Amendment exception for “I’d rather you didn’t remind people of that.”


I wouldn’t be so certain of how the first amendment applies to tech platforms (esp as interpreted by current SCOTUS)

This case could go either way https://arstechnica.com/tech-policy/2023/09/scotus-to-review...


Where do you draw the line?

My SIL, divorced with a young daughter, started dating a man. My wife (her sister) googled him and discovered he was a convicted child molester.

What if I was hiring an accountant? Should I be able to google him and find out he had been convicted of embezzlement?


Yeah, I feel you. Clearly in that case, she dodged a bullet.

But you hear about people having to explain drug possession charges, or even robbery charges from their past when they were a different person. We pretend like we actually want the judicial system to at least sometimes “fix” people but we make it next to impossible to do that. We assign a “price” to pay back to society, and the whole point is that price is supposed to be concrete and understandable.

The line is hard to draw between crimes where “society agrees you’ve ruined your life” (child molestation) and “society wants you to get better”. (drug possession, I hope?)

Maybe I’m really advocating for “public awareness” to be a part of the sentencing procedure? (Allowing crimes without that punishment minimum to be forgotten on public platforms like Google.)

We already have laws/precedents that make the distinction between degrees of homicide, for example. So the fuzzy-line problem is already kind of solved in that sense? We’ve been doing that sort of difficult sorting for other punishments for many centuries. We just assume that EVERY crime is worth remembering at the moment.


The line is surely at "inaccurate", which is mentioned up-thread and in the case. Neither of your scenarios describes an inaccurate datum, just an old (but not outdated) one.


I was responding to the poster who claimed that this is an "extra-judicial punishment" after the person has "paid the standard debt to society". If someone was convicted in a court of law, the information isn't inaccurate.

Addressing your point, if the newspaper article is inaccurate, why not have the article removed or corrected? Instead, it is the opposite. The man says the articles can stay up. Canadian defamation law is pretty clear. If the article is false, he can sue to have it removed. I'm tending to believe delisting from Google is the man's plan because he can't get the articles removed.

https://en.wikipedia.org/wiki/Canadian_defamation_law

FTA: "The man said outdated and inaccurate information about him in newspaper articles found on the internet was leading to great personal harm, including physical assault, employment discrimination, severe social stigma and persistent fear." and "He wanted the information to be delisted – made unsearchable, unless someone knew the website urls featuring his name."


Hmm is this really about “privacy” then? Seems like privacy would mostly be about things that are accurate. We already have laws about libel and defamation.

Who decides what’s inaccurate or outdated? I think the subject of a negative article may often feel it’s inaccurate even if it’s generally correct.


Aren’t there sex offender public registries that are legal extensions of the conviction?


> if you’re arrested, have paid the standard debt to society [..]

It's a little careless to equate arrest with conviction.


Sorry, you’re right, that’s the wrong word.

I mean “tried and convicted”.


You would've had Epstein's history erased after his first stint in jail.


That would bring us back to parity with pre-Internet society, assuming said info could still be accessed through court and library archives.


Even if the reason you're being physically assaulted is a crime you actually did commit, yes, information about it shouldn't still be easy to reach on the internet by the time of your release from prison.

Adding mob justice to the formal legal system doesn't make it better.

> Removed from newspaper archives too?

No, but most people don't look in the archives so that doesn't matter.


How would you craft a law that applies to some search engines but not others? Market share?


I assume from the question that you think I think newspaper archives means online archives.

I don't, they can be offline, just as they were prior to the internet.

If you meant anything else: I wouldn't distinguish between any search engines, large or small… but I'm also not a lawyer or a lawmaker, so I have a paper-thin awareness of the consequences of my suggestion that even ChatGPT can probably poke holes in if only it weren't so sycophantic.


This is why in most of Europe, full names are not printed in the press, the last name is abbreviated, and photos are blurred.


Until conviction.


Arrested or convicted? Though to be arrested in many eyes means guilty.


Instead of "some people", why not offer a real-world example of this assertion?


I’m so sure because I’ve worked for news websites and personally had this happen. I’d rather not antagonize them by linking the article. But in some cases we’ve added editor's notes or addendums to provide additional context or updates. We always correct factual errors. But I don’t think we’ve ever taken an article down because someone didn’t like it. Sorry you’ll have to take my word for it.

It’s common enough though. Here’s an old Poynter article https://www.poynter.org/reporting-editing/2010/5-ways-news-o... and there are tons of companies offering to help you get articles removed from Google.


Rather than speculate or reinvent from first principles, we could look at existing “right to be forgotten” laws like GDPR:

https://gdpr.eu/right-to-be-forgotten/

Not sure what the Canadian basis is, but in general these laws recognize that it’s a tradeoff where the public has some interest in most information, and the individual may have an interest in it being forgotten. Criminal records would be a clear grey area where one could make the case for a public interest particularly in the short term, which is less strong over time. But inaccurate reporting is clearly much less eligible for the public interest.

I think phrasing it as a right perhaps sets the wrong framing, as it’s quite conditional. (It certainly seems to trigger many Americans.)


A "right" to destroy other people's memories, one that unironically goes by the shockingly Orwellian newspeak "right to be forgotten", is one of the more horrifying recent things to see both spread and actually get embraced by so-called defenders of the people. There is no parallel in history. People absolutely can be naturally forgotten, but the principle that there is a right to make others forget leads down a very, very ugly path. What people think about us and remember about us is NOT our data; it's theirs, even if it's about us. If someone is spreading something factually and materially wrong, there is already a myriad of tools to correct that.

Edit: I think honesty and HN spirit compels me to steelman against my own argument as well. An argument can be made that there was a time when it was possible for someone willing to take some serious risk, cost, and effort to start anew from even a really bad past. Step back a century or two or earlier, and identities and persistent records could be pretty scattered. Somebody who did something pretty bad (but maybe not truly notorious) and served any time but was determined to reform could leave it all behind and travel a thousand miles away with a new name and build a new life (or die trying). The space for that has undoubtedly shrunken in some respects, and looks like it may continue to do so. I can accept that there can be reasonable differences of opinion on that.

But against that is what it means to force it to happen. What such powerful legal tools will do when, with total inevitability, they are most effectively wielded by the most powerful, as such tools always are. The people who already face the least accountability in many ways also tend to be the ones most able to make use of anything that may give them even a colorable case against critics and detractors. Some people truly reform, but of course some do not. And of course there are the principles involved. Is this something to use force over? Because make no mistake, that's what it means to make something a right backed by law. That's what the very nature of "law" is: formalized opinions backed by power. The push for this new "right" feels different and more dangerous to me than mere basic privacy, or worries about government (or even megacorp) surveillance.


Despite the hand-wavy arguments and use of misinterpreted shock phrases like "Orwellian", a person's memories (and biochemical retention mechanisms) are not the same as a computer's. At a bare minimum, the former has a singular lifetime and there can be only one instance - zero copies.


OK, so you're fine with being able to forcefully destroy people's diaries? Photographs? What about those with memory problems who depend on such external memories?

>the former has a singular lifetime and there can only one instance - zero copies.

That's curious, I could have sworn humans had developed ways to share their memories with each other through things like "words" or "pictures"? Things proven to allow someone to share their memories with millions and pass them down such that we still know them even thousands of years later. So you're "only" talking about censoring/destroying conversation, phone calls, letters, messages, newspapers, paintings and such?

There is no shock phrasing here, despite you wanting to deflect. To forget is to lose memories. A right to have others forget necessarily implies actively forcing them to destroy their own memories. That's what it means, stripped of the misdirection. It's evil. I'm not sold that "do it on a computer" always is a magic wand that means now it's fine to use government force with it.


> So you're "only" talking about censoring/destroying conversation, phone calls, letters, messages, newspapers, paintings and such?

"Destroying a person's data" and "destroying Google's copies of that data" are two different things.

It's quite possible that Aunt Sally could keep her conversations, poems, paintings, macaroni pictures, or what have you, without allowing Google to index those things or make its own copies.


>"Destroying a person's data" and "destroying Google's copies of that data" are two different things.

Why? How many people may they share copies with before it's no longer allowed in your opinion? What is your basis for calculating that?

>It's quite possible that Aunt Sally could keep her conversations, poems, paintings, macaroni pictures, or what have you, without allowing Google to index those things or make its own copies.

Of course she can choose not to allow Google to index those things or make its own copies, but why should she be forced not to? Or Google forced not to? What about her circle of friends, and how big can that circle be? What if she runs her own little blog, wiki, forum, newsgroup, or the like? What about if it's a newspaper, can they index it or make copies? A single journalist? Researcher? What does "newspaper", "journalist", or "researcher" even mean as a matter of law? Are only certain elites allowed now? And on and on.

You shouldn't argue for a big new expansion of government force restricting information, over and above defamation law, without really being able to think all this through IMO. Details really matter. Imagine the absolute worst populist wannabe dictators, not merely at national scale but at the local small-town scale that doesn't get much attention but has real power to affect people's lives. What are they going to do with this? Does it matter if they'd lose a 6-figure lawsuit in the end if no one they would go after can afford to fight it? That's not a theoretical threat. Anti-SLAPP laws help a lot by making things much cheaper thanks to a fairly straightforward (under defamation law) low-pass filter: a judge can determine pretty easily whether there is a colorable case or not. As the linked article says, "right to be forgotten" claims are complex and case-by-case, i.e., expensive. And we know that "expensive lawsuit" means "bullies will use this for illegitimate ends", so we should be very cautious about opening up mass-applicable, complex new forms of action without really thinking them through.


> Why? How many people may they share copies with before it's no longer allowed in your opinion?

Zero. Google is allowed to share zero copies.

That's the whole point, right?

> but why should she be forced not to?

She isn't "forced not to". Google is forced not to.

I'm not sure why you can't see the difference here.

> What about her circle of friends, and how big can that circle be?

Google isn't "her circle of friends".

> What if she runs her own little blog, wiki, forum, newsgroup, or the like?

Google isn't allowed to index it. And?


> She isn't "forced not to". Google is forced not to.

> I'm not sure why you can't see the difference here.

Indeed. This is a poor analogy, but it seems kind of like the difference between

1. Aunt Becky is allowed to keep a diary with lies about you

2. Aunt Becky is allowed to show those lies about you to Susan

3. Aunt Becky is allowed to take out a newspaper ad sharing those lies about you, in every newspaper in the world, for the rest of time

There's a gap between 1 and 2 where you can make an argument. There's a HUGE gap between 2 and 3.


> There's a HUGE gap between 2 and 3.

Definitely. With 1 or 2, the lies aren't in a searchable database accessible to (e.g.) potential employers, or any rando stalker with an internet connection. That's a very important difference.


>> What if she runs her own little blog, wiki, forum, newsgroup, or the like?

>Google isn't allowed to index it.

I assume you mean Google can't index the forbidden names on her blog. Or is her entire blog now unfindable because she included a forbidden name? How does Google know whether the name, of which multiple named people may exist, is the forbidden one? What of the rights of same-named people to be found?


That's Google's problem.

Note that they already have a well-established takedown mechanism for copyright violations.


There's this thing - it's pretty new, you might not have heard about it - called "writing".


> there is a right to have others forget

This is different from what the article (and ruling) are talking about. This is not about going into people's memories and deleting them, neuralyzer-style. It's not even about getting the page taken down. It's about no longer smearing that person's name by advertising the publication.

> There is no parallel in history

Yes there is. If someone smears your name on billboards you're allowed to sue. Yesterday I read a story of a comedian who was getting YouTube videos about his plagiarism taken down. If you publish stuff about a corporation that is bad for them, you might be sued to take it down. Plenty of examples.


>If someone smears your name on billboards you're allowed to sue

Sure, but you're not going to get anywhere with your suit unless it's defamation. "Smearing" your name with something you actually did is (and/or should be) protected speech. The article does not describe the confidential plaintiff winning a defamation lawsuit and forcing the material to be taken down from its source. And there would be no need for a new "right to be forgotten" for that; that's just bog-standard, century-old defamation law. All search engines and regular sites for that matter, anything that hosts third-party-generated content, have (and are required by law to have) contacts and processes in place for taking down actually illegal material. The entire debate around this new thing is about getting stuff hidden away that you merely don't like but that is entirely true.

>Yesterday I read a story of a comedian who was getting YouTube videos about his plagiarism taken down

Which sounds horrible to me and is exactly the problem! If he plagiarized, why should he be able to get those videos taken down?


Feel free to chat with a lawyer, but truth can be defamatory; it's the intent (or carelessness) in causing harm to another that decides whether it's defamation or not.


>Feel free to chat with a lawyer, but truth can be defamatory

Not in the US it can't: truth is an absolute defense against defamation. And sure, lots of countries don't respect free speech or support robust criticism very well, but I think those that don't are wrong. Hence why I included "(and/or should be)": the ones that don't still should, IMO.


One of the challenges on HN is that the United States pretty much singularly has a very strong right to free speech as a part of law. The right to be forgotten is a very direct and unarguable violation of this. It isn't even debatable, and thus we are left debating free speech issues on which Europeans and Americans have very strong differences (in my opinion, anyone under an authoritarian government doesn't get to participate in the debate for censorship.)

I don't know what is worse: that, for example, a Norwegian neo-Nazi mass murderer who will only serve 20 years in prison could erase the stories about him and that event ever occurring, or that someone who did something really dumb in college could usurp and override everyone else's right to free speech.

The right to be forgotten rests on the premise that someone can be "reformed" and that their right to erase the past outweighs the rights of everyone else. The basic premise is that someone did something really bad or embarrassing and they are too lazy to change their name (I have friends who have done that, due to an embarrassing newspaper article appearing on Google.)

I think we are headed toward a bifurcated interest. The reason that countries outside of the US have been able to compel American tech companies to do their bidding, to date, is because these companies are monopolies that generate massive amounts of cashflow doing things which are not impacted by that law (right to be forgotten does almost 0 to Google's income.)

How is this going to work with LLMs? Keyword censorship can be inserted at the input and output levels, but you can make the LLM bypass that. These models will not be re-trained on demand. The open models cannot be recalled. I suspect a lot of countries are about to be excluded from the next wave of innovation.
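The input/output keyword censorship mentioned above can be sketched in a few lines. This is purely illustrative (the blocklist and `filter_text` function are made up, not any real vendor's system), but it shows why the approach is brittle: matching is literal, so a trivial obfuscation slips past it.

```python
# Hypothetical sketch of keyword censorship wrapped around an LLM's
# input/output. Not a real system; for illustration only.
BLOCKLIST = {"forbidden name"}  # delisted terms (assumed)

def filter_text(text: str) -> str:
    """Redact any text containing a blocklisted term (case-insensitive)."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[REDACTED]"
    return text
```

An exact mention like "Tell me about Forbidden Name" gets redacted, but a prompt spelled "F0rbidden Name" passes straight through, which is the bypass problem the comment describes.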


The First Amendment is a fundamental of US society. It was the source of two of the "Four Freedoms" during WW2.

Countries that infringe on the First Amendment rights of Americans are properly termed "enemies". Countries that are currently under the US nuclear umbrella should understand the cost of that protection.


Why not look what happened elsewhere where the right to be forgotten exists? Actual implementations might not turn out to be so dire.


And do the "myriad tools" to combat disinformation stack up in this brave new era of LLMs dedicated to firehosing that disinformation far faster than you can combat it? Are you willing to put your reputation on the line to "steelman" your conviction? Do you get this worked up about simple link rot?


Search engines are factual; why should someone be allowed to override the facts?

I'd be very happy if this were about correcting facts: about forcing the takedown of inaccurate content from websites. Google would then update its index to reflect the improved accuracy.

Fix the problem, not hide it.


"I would be much happier if we did a thing that can only intermittently be accomplished by legal means and can be nullified at will by creating a new instance of a discardable entity, rather than doing a thing addressed to an identifiable and durable entity from which to request it."


Unfortunately, by the time you have found this, lies about you will have spread to many sites, often outside the court's jurisdiction. How do you get all those sites taken down?

Thus you have to make the indexer stop indexing you.


Do you really have to get those sites taken down if they're not indexed by any major search engines? Very few people will be able to find them.


So in order to prevent hypothetical misinformation, it is imperative to maintain the right to demand that accurate information not be spread?


It is not hypothetical: there are cases in Europe where the first entry in Google for a person is an accusation against them that was proved false.


And the solution to that is to remove the factual information about them from search results?


What else can you do? Those sites are not under the court's jurisdiction, possibly being in Russia, China, or the USA, which do not honour requests from other countries, either through spite or because their laws differ.


Nothing hypothetical about it, it's fairly common for even big news publishers like CNN to put out a headline that is so misleading that it's effectively a lie, and then either never follow up with a correction if the story turns out to be false or put out a much less publicized and still misleading "correction" such that the correction is buried in a few lines in the middle somewhere.

If big publishers are doing this often on well known topics, it stands to reason that the much larger number of little publishers are also doing it with at least similar frequency for less well known topics.


Yes, hence the existence of slander and libel laws.


How do you sue many sites in twenty different countries?


That's not at all how libel works, at least in the US.


How do you “fix” a search engine creating top results for your accusation of a crime while burying the acquittal? Unless you’re famous, your acquittal will basically never outrank your accusations in a search engine, and the accusations will follow you.

Hence the ruling here.


> Search engines are factual

Only in the sense that they index what other sites say, which might or might not be factual or honest.


Search engines also curate nowadays. Search for something vaguely related to suicide, and the thing you're looking for will not appear among the first few search results.


> Search engines also curate ~~nowadays~~ and always have.

It's impossible not to. Even merely putting one specific search result above another because it's "more relevant" is the very act of curation, because you are inherently defining "relevance."


Hi Google, please forget me?

Sure Dave, please wash your hands for a valid finger print and move in close so I can scan your retina. Closer Dave...



[flagged]


An alternative, much simpler explanation is that people downvote comments which they don't think are fitting for HN or don't provoke good discussion. I don't care about the ruling, but I do think some of the comments here are more fitting for Reddit than HN. In general though, I don't get taking issue with comment votes, it's just random internet points on a screen.


I'm not sure that stands up. 'Votes' are always positive feedback to the one on the receiving end. Assuming HN (or Reddit, or whatever) successfully filters bot activity, they provide indication that there are people in the vicinity of the comment and therefore it is likely that the comment is being read. Which is the reason why one comes to a forum and not to a private journal. As there is no formal analytics system, getting feedback (button presses, replies, etc.) is the only way to know you are contributing something that a larger community is taking in. And such feedback is what encourages contributing more.

If there were something the community didn't like, naturally they would hide the fact that a community exists. No votes, no replies, no nothing. When one feels like they are simply writing in a private journal, they will grow bored of their activity pretty quickly. As the old adage goes: don't feed the trolls.


> In general though, I don't get taking issue with comment votes, it's just random internet points on a screen.

Most of the time, I want to be able to easily read each and every comment that was posted, regardless of what some other random people may think about those comments.

I find it annoying how the votes and flagging can affect the appearance/visibility of a comment here.

Downvoted comments are rendered in dimmer text, which makes the comments harder to read.

I also have to log in so the "showdead" setting takes effect, just so I can see comments that were flagged and hidden.

I wish I could use this site with all of the voting/flagging/content-hiding "functionality" disabled, ideally without having to log in and otherwise overriding the default behaviour.


Cue all the Canadian politicians asking to be forgotten for inviting and celebrating an actual Nazi in Parliament this week.

I'm wary of the "right to be forgotten". In this particular case, the complainant (allegedly) had false information posted about them that was detrimental. This can certainly happen. But why aren't the outlets who posted that false information on the hook?

There is a balance between people having a permanent mark against them for doing something stupid and the public having a right to know when, say, someone actually commits a crime.

There are PR firms that specialize in "reputation repair", where wealthy people will pay a bunch of money to have anything disparaging about them removed from the Internet or just buried. This can take the form of takedowns but can be way more insidious, e.g. buying a local newspaper simply to bury a story.


Tech companies that make money off of hoarding your data have clear incentives to push the idea of security over privacy.


Citizen: "I was cancelled!"

Government: "Apparently you've forgotten exercising your right to be forgotten. Fret not! We have remembered on your behalf. Some day you'll thank us. We're waiting."


I don't think people should have a right to be forgotten, just like they shouldn't have the right to be killed by someone or the right to be sold into slavery. I think people should rather own their mistakes, and should behave so as to be remembered fondly rather than disgracefully. Forgiving someone should happen on the terms of the person doing the forgiving.


You seem to be assuming that just because someone wants to be "forgotten" they've done something wrong.

There is a vast number of situations where someone might want to be forgotten because someone else has done / is doing something wrong towards them.


Then they should be made whole by that perpetrator. If that action can just be memory holed for a victim's sake then you have motivated the perpetrator to lean on them so their wrongdoing can be forgotten.


Your classification of situations by 'vast number' is wrong. Most erasures are to hide guilt.


So someone that is falsely accused of a rape or murder, that is widely reported, shouldn't have any recourse after the fact to have those stories removed from searches? Often news organizations will only report on the accusation and rarely update when they are exonerated.


Google doesn’t produce the content, sites do, the issue is with those sites surely?


One of those things is a reachable legal entity. The other is a pop-up content farm whose next instance is a few hours of work away. Google is the logical one to ask.


A library is a reachable entity. Do we ask them to purge their catalog of metadata that indicates a particular source contains particular data about someone? The premise is absurd.


>Google doesn’t produce the content, sites do, the issue is with those sites surely?

Google decides what someone will see when they Google your name; they might decide that the articles where you were accused of CP have much higher priority than the single article a month later where you were found not guilty.


That can be extremely difficult to do when false information is spread widely. Say a story is published by the Associated Press and then is republished on hundreds of other websites.


This doesn't really address the core of the problem, which is why people often wish to delist: defamation and intentionally inaccurate or misleading information. Anyone can go and make a website right now that claims someone else is a child predator or a terrorist or what have you. Mix the fake information in with some real facts and plausible glue and you have your standard fake-news-level content. Should there be no way to get rid of such content aimed at you online?

As a tangential side note, why shouldn't people have the right to be killed? Assisted suicide is legal in many countries for people who are e.g. battling a painful disease with no hope for a cure or improvement with modern medicine.


Does Canada not have laws against defamation? In the UK and US you could make a website and write that someone is a child predator, but you would probably be sued for libel.


Why the right to be forgotten exists in general: Let's say there was a widely publicised murder in your neighborhood. You get picked up by the police, your mugshot and name get plastered all over the newspapers. It's a mistake. They catch the real perp a few days later and you go free. Now every time someone searches your name they are greeted with your mugshot. You don't get jobs, you don't get to rent an apartment and so on. It's not fair, it doesn't serve society in any way, shape, or form. You should be able to get that information removed.

The same goes, of course, for any drunken party pictures and the likes, which should have no bearing on your life decades later.


>Let's say there was a widely publicised murder in your neighborhood. You get picked up by the police, your mugshot and name get plastered all over the newspapers. It's a mistake. They catch the real perp a few days later and you go free. Now every time someone searches your name they are greeted with your mugshot. You don't get jobs, you don't get to rent an apartment and so on. It's not fair, it doesn't serve society in any way, shape, or form. You should be able to get that information removed.

Rather than gaslighting the world, shouldn't you just prevent the problem? That is, just ban posting that sort of information before a conviction.


That is the case in a lot of places for arrests, but wrongful convictions are a thing too, and those get reported. Plus, there are situations where someone posted a presumably funny picture of someone else, which you should absolutely have the right to make unfindable. It has an impact on your life for no good reason.


> ban posting that sort of information before a conviction.

A society that does not make arrest information public is one where it is very easy for someone to "disappear." That's the whole point.


If the government is trying to disappear people, then having in place the mechanisms for a right to be forgotten seems like it would only help them disappear people more thoroughly.


And when that "right to be forgotten" is being exercised to expunge demonstrably false information that, because the internet is generally a swamp of copy-paste today, will be replicated endlessly to that person's (not even a notable person's) detriment? Or because, say, they have been the victim of a crime perpetrated by a popular person and that popular person's stans won't leave them alone?

I recognize the potential problems, but there are absolutely good and just reasons for this in the main.


>And when that "right to be forgotten" is being exercised to expunge demonstrably false information that, because the internet is generally a swamp of copy-paste today, will be replicated endlessly to that person--not even a notable person's--detriment?

Does that mean if there's unflattering information about you on the internet (e.g. a disorderly conduct video), you can get it scrubbed by posting some "demonstrably false information" about yourself (e.g. that you're a child predator and war criminal), thereby causing your whole identity to be delisted?


This sort of gray area is one for a court to figure out. If it were me? "Court records and reporting on governmental proceedings, including civil and criminal proceedings, shall not be included unless a court determines there is no prevailing public interest."

Barring that? I think erring on the side of the vastly more numerous set of decent people who may be affected by parrot farms is probably wise. (And I can't speak to Canadian jurisprudence, but it's often the case that courts make a broader decision in the American system and constrain it down over time.)


> This sort of gray area is one for a court to figure out. If it were me? "Court records and reporting on governmental proceedings, including civil and criminal proceedings, shall not be included unless a court determines there is no prevailing public interest."

Okay but the hypothetical above doesn't include any court proceedings or any other government records. It's just a video of the person doing something illegal. It doesn't even have to be limited to videos/photos, it could also be verbal accusations (eg. of sexual harassment).

>Barring that? I think erring on the side of the vastly more numerous set of decent people who may be affected by parrot farms is probably wise.

I don't get it, are there websites out there dedicated to posting false information about random people?


> I don't get it, are there websites out there dedicated to posting false information about random people?

Yes. And also prurient and invasive things that may be true (which is why we have laws against things like revenge porn, which Google also removes from its SERPs in some jurisdictions).


>is being exercised to expunge demonstrably false information that

We already have defamation law for that; we've had it for literally centuries. If you can prove in court that something is defamation, you can then get court orders around it, including having it taken down from the actual source sites as well as search engines. No new "right to be forgotten" law is needed for that, and that's not what the push has been about. It's about people complaining about stuff that they say is "outdated" or "harmful" but not actually defamatory; you'll notice they don't actually go after any of the places that host the content, even when those are in their own jurisdiction. It's the truth, just an unpleasant truth they don't like showing up.

The extent to which government force should be used to wipe someone's slate cleaner is certainly debatable. Arguments can be made that at one point even a nasty criminal who wanted to turn over a new leaf could travel to a frontier or the like where nobody would know them or find out about them and start anew, and that was a good thing (vs criminals who did that and just did more crimes). And that maybe that should be something society backs. And then there are arguments against it.

But that's a very different debate then if the information is simply false.


The internet is global, yet you can't extradite people abroad to answer libel charges. Cyberbullying does not have borders.

Allowing someone to delist information via the single most important bottleneck, the search engine, is a very reasonable solution.


> I don't think people should have right to be forgotten, just like they shouldn't have the right to be killed by someone or right to be sold to slavery.

I don't see your logic at all. The premise is something which the person is seeking, and your examples are things the subject would not want (to be killed or sold into slavery). I understand the rest of what you're saying, though.


I meant that these things are illegal even if the victim agrees with them.


This seems like a poor solution to the death of journalism.

There aren't enough details to go on in the story, but I'm going to make an assumption that this person was charged with a crime, stories were written about the charges, but when the charges were dropped or he was acquitted there was no followup story written and no update made to the original story.

Journalism in Canada is dying. Most of the newspapers in Canada are owned by one company and newsrooms have been consolidated and shrunk. I used to read my city's two major newspapers - they'd report a different subset of what was happening, and where they reported on the same thing the stories and takes would be different. Both were biased, but biased in different directions, so I felt that I was getting a semi-balanced view by reading both. Now both papers are owned by the same company, have the same reporters, same editors, report on the same things, and run identical stories. There's no balanced view and not enough reporters to follow up on every little thing that gets reported.

Ideally searching for this person's name would show the newest stories first ("person acquitted of crime", "charges dropped", etc), but if those stories never get written they will never show up in a search engine. And very few people will dig into CanLII to find the outcome, if it even makes it far enough to get into CanLII.

I don't know what the solution is. Maybe the media shouldn't name names until someone is convicted? But that would likely have unintended consequences.


I'm getting my news from a couple journals of Glacier Media group, that gives me a pretty good idea of the best deals at Cambodian Tyre, and how many cats got strained during the last marmot festival.



