Ask HN: GPT-3 reveals my full name – can I do anything?
710 points by BoppreH on June 26, 2022 | 346 comments
Alternatively: What's the current status of Personally Identifying Information and language models?

I try to hide my real name whenever possible, out of an abundance of caution. You can still find it if you search carefully, but in today's hostile internet I see this kind of soft pseudonymity as my digital personal space, and expect to have it respected.

When playing around in GPT-3 I tried making sentences with my username. Imagine my surprise when I saw it spit out my (globally unique, unusual) full name!

Looking around, I found a paper that says language models spitting out personal information is a problem[1], a Google blog post that says there's not much that can be done[2], and an article that says OpenAI might automatically replace phone numbers in the future but other types of PII are harder to remove[3]. But nothing on what is actually being done.
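
The article [3] only gestures at the mechanism, but the simplest version of "automatically replace phone numbers" is a pattern-matching pass over text before training or after generation. A minimal sketch, assuming US-style number formats (the pattern and the `[PHONE]` placeholder are my own illustration, not OpenAI's actual filter):

```python
import re

# Illustrative pattern for US-style phone numbers such as 555-123-4567,
# (555) 123-4567, or 555.123.4567; a production filter would need far
# broader coverage (international formats, extensions, spelled-out digits).
PHONE_RE = re.compile(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b")

def redact_phone_numbers(text: str, placeholder: str = "[PHONE]") -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_RE.sub(placeholder, text)

print(redact_phone_numbers("Call me at 555-123-4567 tomorrow."))
# -> Call me at [PHONE] tomorrow.
```

Names and addresses are harder precisely because, unlike phone numbers, they have no reliable surface pattern.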

If I had found my personal information in Google search results, or on Facebook, I could ask for the information to be removed, but GPT-3 seems to have no such support. Are we supposed to accept that large language models may reveal private information, with no recourse?

I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? sexual preferences? posts from 13-year-old me?). In the age of GDPR this feels like an enormous regression in privacy.

EDIT: a small thank you to everybody commenting so far for not directly linking to specific results or actually writing my name, however easy it might be.

If my request for pseudonymity sounds strange given my lax infosec:

- I'm more worried about the consequences of language models in general than my own case, and

- people have done a lot more for a lot less name information[4].

[1]: https://arxiv.org/abs/2012.07805

[2]: https://ai.googleblog.com/2020/12/privacy-considerations-in-...

[3]: https://www.theregister.com/2021/03/18/openai_gpt3_data/

[4]: https://en.wikipedia.org/wiki/Slate_Star_Codex#New_York_Time...




  > I try to hide my real name whenever possible, out of an
  > abundance of caution. You can still find it if you search
  > carefully, but in today's hostile internet I see this kind
  > of soft pseudonymity as my digital personal space, and expect
  > to have it respected.
Without judging whether the goal is good or not, I will gently point out that your current approach doesn't seem to be effective. A Google search for "BoppreH" turned up several results on the first page with what appears to be your full name, along with other results linking to various emails that have been associated with that name. Results include Github commits, mailing list archives, and third-party code that cited your Github account as "work by $NAME".

As a purely practical matter -- again, not going into whether this is how things should be, merely how they do be -- it is futile to want the internet as a whole to have a concept of privacy, or to respect the concept of a "digital personal space". If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.

The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.


I gave up anonymity. I just learned to lean into taking control of my ID. Some time ago, I realized that there's no way for me to participate online, without things being attributed to me.

I learned this, by setting up a Disqus ID. I wanted to comment on a blog post, and started to set up an account.

After I started the process, it came back, with a list of random posts, from around the Internet (and some, very old), and said "Are these yours? If so, would you like to associate them with your account?"

I freaked. Many of them were outright troll comments (I was not always the haloed saint that you see before you) that I had sworn were done anonymously. They came from many different places (including DejaNews). I have no idea how Disqus found them.

Every single one of them was mine. Many were ones that I had sworn were dead and buried in a deep grave in the mountains.

Needless to say, I do not have a Disqus ID.

Being non-anonymous means that I need to behave myself, online. I come across as a bit of a stuffy bore, but I suspect my IRL persona is that way, as well.

That's OK.


These are called “chilling effects”: they cause people to self-censor when it comes to socially controversial positions. Historically, this would include women’s suffrage, black rights, gay rights, various religious positions…

It’s not okay to be tracked so thoroughly that people stop feeling they can explore controversy online.


On top of that: anonymity should not be required to explore controversy at all. That’s the chilling effect. The issue is that as a society we have failed royally to internalize tolerating freedom of expression. Instead we choose to censor and silence people who wish to explore controversy, even though we have laws in place that protect one’s freedom to express themselves however they desire without damaging repercussions to their life, liberty, and pursuit of happiness.

Anonymity is certainly a tool that can be used in dire situations when there are real, credible threats and the stakes are high. However, it takes a certain type of courage to express oneself freely, which would be really nice to see in the majority of other situations. Instead of exploring controversy anonymously, we should aim as a society to explore it openly and simply build up the intellectual maturity and capacity to tolerate controversy like adults and not children…


In short, I don't want to live in a society where everyone is anonymous. That doesn't sound very social at all and doesn't work at scale. I want to live in a society where I can build strong respectful adult relationships with people and not immediately judge, shun, and twitter mob someone who says they don't 100% agree with my lifestyle. Tolerating differences in viewpoints and lifestyles is true diversity. Diversity is not finding people with different physical features who all actually think the same and putting them on a magazine cover or in the same office together.


I don't want to live in a society where exploration and discussion of taboo ideas risks my livelihood. Short of somehow inducing massive change in the way most people think about things, anonymity is the only way to achieve that.


I wonder which elements of your "lifestyle" would get you shunned, or disinherited, or imprisoned, or killed for your family's honor. I enjoy almost perfect intersectional privilege, and one of those privileges has been to use my full real Googlable name on all my social media accounts, specifically because I want to be accountable for what I say, and because I've always believed that nothing about my real identity imperils me. (I'm a little less sure, recently, that liberal atheists and our allies will be safe from the American pogroms to come. Too late now!)


> In short, I don't want to live in a society where everyone is anonymous.

You wear a name tag to the pub, or supermarket?


The cameras that track your every movement inside a supermarket (plus the software that labels your image with a unique identifier) have you pinned down pretty well already; no need for name tags.


Don’t forget the Bluetooth beacon trackers.

https://www.nytimes.com/interactive/2019/06/14/opinion/bluet...


Only in some places.


More places than you would expect, though.


Being gratuitously anti-social in a pub or supermarket already has consequences, name tags or not (you'll get kicked out). It doesn't matter if people know your name or not, they'll recognize your face, and you might not be welcome back.

Being gratuitously anti-social online might also have consequences (your account gets banned), but if creating another anonymous account is free and easy, then the consequences are trivially ignored.

You could make a distinction between anonymity and ease-of-creating-new-accounts, but usually the two are tied together.


No, but I also don't wear a mask that covers my face and I don't use a voice changer to scramble my speech.

In meatspace, people use different modes of identification than just a name, so names aren't as important to figuring out who's who.


> You wear a name tag to the pub, or supermarket?

Yes, but I don't realize it.


I don’t, yet I am still not anonymous. If someone at the supermarket asked me my name I would tell them.


You don’t keep your phone with you?


Or always pay cash?


That doesn’t help if every other phone in the vicinity of the business gets a ping from your phone.


To give an example of this, I lived in a municipality where most community-driven things were organized on Facebook, including government-driven initiatives.

I won't participate in them using my real name, because I once witnessed the mayor of the town doxx and lead a campaign to harass a single mother because she disagreed with the majority party that's run the town for the last 40 years. She got dogpiled on by hundreds of residents for participating in a discussion on Facebook.

It wasn't an isolated incident either, other people have had the same experience and even felt the need to move after it happened because some people took it to an extreme and felt the need to harass them for months afterwards.


You don’t have laws against that sort of abuse of power?


Maybe, but the entire county is run by the good ol' boys network. Good luck getting the police to investigate or charge the mayor, and good luck in court.

Part of the chilling effect is the disincentive against pursuing justice in cases like this. The single mother was already publicly targeted and made unsafe; chances are the public targeting would get even worse if she pursued justice against her harassers.

These same people in power who led this campaign against her have sycophants in the local media who have no problem using their wide reach to smear dissenters like they do every election season, even for minor Board of Ed elections.


Laws only work when law enforcement and criminal justice cares to enforce them and/or when the wronged party has enough resources to sue in a venue where law enforcement and criminal justice will care.


Laws are nice on paper, but only if someone actually enforces them.


The enforcers seem to have a lot of time and resources for enforcing those laws on minorities.


I don't feel I have any difficulty airing positions I hold online without anonymity. I sometimes end up arguing, but rarely in bad faith. I stand by what I say, though my views now may be different to those of previous me and I'm happy to debate that too. If you can't stand by a position, maybe you shouldn't air it.


It isn't about not standing by a position, it is about the potential of attracting the attention of a small extremist minority that will spend outsize effort trying to destroy your life.

I'm glad that you feel secure enough in your position in life that you think you can weather such an attack, but not everyone is so lucky. Implying anyone who needs anonymity is simply holding an unreasonable position is simply not fair.


Not only that, but benign positions today might be totally taboo in 20 years, and when that time comes, they will be just an Internet search away. You have no idea whether something harmless you say today will be used to paint you as a terrible person decades from now. I think back to some of the stuff I said 20 years ago that was entirely uncontroversial at the time, but that I'd get fired for if I said it today.


> Implying anyone who needs anonymity is simply holding an unreasonable position is simply not fair.

Neither of us did.


From the GP:

> If you can't stand by a position, maybe you shouldn't air it.

The implication of this is pretty clear.


That works well amongst equals. It fails when some have more power than others and can use that to hurt those others for things they disagree with.

E.g., this is why true democracy needs secret ballots. Perhaps you and I aren't afraid to vote in public. But a democracy needs everyone to give their honest vote, not only those who have nothing to fear.


Cool screen name. Eddings fan?

Definitely agree. I have my approach to life, the universe, and everything, and it is unreasonable to project my values and whatnot onto others.

Many times, the favor is not returned, though.


> Eddings fan?

Yup :) (1)

Also an online privacy fan with (what probably amounts to) strict views. E.g.: privacy is a bit of a misnomer. It puts the focus on the person who can be wronged. In other crimes, we don't do that. A burglar is not the one whose house was burgled; a robber is not the one who was robbed.

Privacy isn't about me or my rights; it is about other people and limits on theirs. You're not allowed to take other people's money, why should you be allowed to take other people's data?

(1) Aside: I keep rereading the books. I found others that move me more, but I tend to move beyond them. Eddings' writings manage to keep entertaining me. Not necessarily high-brow, but definitely entertaining, and the entertainment doesn't peter out after the 3rd or so book (all too common in fantasy, in my experience).


The intra-party banter is what makes them. I tend to call it "popcorn fantasy" but mean that positively (and the length of time he takes to set up the "whose turn is it to cook" gag is simply masterful).

Last time I played a Rogue in an AD&D game I used Silk as an archetype, and it worked out very nicely.


(1) Same here. Every couple of years. I also read The Elenium/Tamuli series.

I tried the Elder Gods series, and it was ... awful. The Redemption of Althalus was readable, but couldn't hold a candle to the other books.


Fair point.

But context is king. The context of that particular quote, is that it came immediately after this:

> I stand by what I say, though my views now may be different to those of previous me and I'm happy to debate that too.

They were clearly talking about themselves, and a rule they apply to themselves.

That said, one of my "cleanup routines" for writing and posting, is I look for instances of "you," and often change it to "I" or "me."

I would have probably written it like so:

> If I can't stand by a position, maybe I shouldn't air it.

BTW: I apply the same philosophy to my own posting.

There's a very valid argument for online (and offline) anonymity, and I don't like the specious "If you aren't doing anything wrong, then you shouldn't have anything to hide" argument.

I just find using that as a fig leaf for trolling and stalking rather annoying, as that behavior actually puts the people who really need anonymity at risk.

Standing up for my Principles can sometimes be quite scary. I've risked losing jobs, for refusing to carry out orders that were unethical, and I am routinely attacked, here (but politely -this is HN, after all), for holding some of the views I hold.


> But context is king. The context of that particular quote, is that it came immediately after this

Yes, the context was a switch from first person to second person. The most reasonable and likely interpretation is not that it was an accident, but that the second person was intended to convey a statement about what people in general should do. I.e.: "If one can't stand by a position, maybe one shouldn't air it."

It is true that there is a trade-off between anonymity and accountability, but that doesn't mean we don't need both. To my mind, we need anonymity to protect smaller-scale participants and accountability for larger-scale participants to limit abuse of power. I don't know how you achieve that in practice.


That is a very naive point of view.

It's not about standing by a decision; it's about the unknown risk of an adversary using information you thought private against you. Whether it's abortion clinics or prospective employers vetting your background, you won't know which opportunities were missed as a result of something in your record that may have happened 20 years ago. You'll probably think it isn't happening, until it impacts you personally, and then it's too late. That's exactly how it happened in East Germany with the Wall.

https://www.stopspying.org/pregnancy-panopticon


There needs to be a middle ground, though. When people air opinions they're unwilling to change, any criticism (no matter how justified) will feel like a personal attack, and bad-faith arguing will tend to be the result.

In other words, stand by your position, but also learn when to admit you were wrong.


In my case, that happens all the time. You probably won't have to search too far, to find me apologizing, admitting error, or finding a way to make amends.

It's a fundamental tenet of my way of life. I promptly admit when I'm wrong.

I've found the best way to avoid having to make amends, is to not cause the offense, in the first place. I tend to be fairly careful about keeping it in the "I," all the time (but no good deed goes unpunished -I am often told that "I'm making it all about me").

I do find that I get attacked, sometimes, right out of the blue, for stating personal philosophies and/or experiences. My fave is when I am told that something that happened to me "didn't actually happen." I assume that is because it is an inconvenient truth, for others. My experience, dealing with tech industry ageism, is a common fulcrum for that kind of response.


I've found the best way to avoid having to make amends is to not cause unintentional offence, certainly, and I've got (annoyingly) slowly better at it over the course of my life.

Sometimes, though, I absolutely -do- intend to offend somebody, and in that case I own it.


> When people air opinions they're unwilling to change

That's the thing, there are a lot more positions these days that people seem to be unwilling to change.


On the other hand, absolute freedom (and convenience) of expression for each of our disembodied and anonymized selves is problematic in the opposite way.

To bypass the sources of those chilling effects by remaining anonymous may in fact only allow them to grow stronger.


Chilling effects are not unequivocally bad though. When the speech in question is advocating for stripping away the civil rights of or committing genocidal acts against certain groups of people, I'd much rather that speech be chilled.


That's okay, as long as there is no police state hunting you.

That's okay, as long as you aren't a member of any persecuted minority, and as long as you don't have any interesting political views to share.


You are absolutely correct.

It’s fine, for me. I actually know folks for whom it would not be OK.

I am not giving advice; merely recounting what I have experienced, and the personal choices I have made, based on those experiences.


I've on a number of occasions expressed opinions because I believe they needed to be expressed and I knew the other folks holding those opinions wouldn't be ok to do so, whereas I probably would (and have largely survived that so far).

A small use of privilege but I hope a useful one overall.


That's a good point.


Don't forget the other part - being non-anonymous online makes it easy for stalkers and other bad actors to take it to the extreme. We need anonymity for lots of reasons.


I have ... interesting ... friends. I'll bet I know scarier people IRL, than I'll find online.

Also, what impressed me about the Disqus incident, was how fast it came back with that list.

In the US, at least, true anonymity takes a lot of work. For example, if you own a house, people can use tax records to find out who you are, unless you do what rich people do, and use shell companies. I also own a couple of [small] companies. I maintain a UPS box, because they get a lot of junk mail (and some business junk mail comes to my home address, anyway).

That's just one of hundreds of ways we can be found. Many predate teh Internets Tubes. My mailbox gets stuffed with junk mail. Some of it is quite specific. They use these mechanisms, and have been, for decades. I have known folks in the collections industry. They can find people surprisingly easily. There was one guy who used to be a skip tracer, and he wrote a book called How to Disappear[0]. It's a fairly sobering read (and probably quaintly anachronistic, these days).

The Unabomber actually did it correctly. He only got nailed, once he posted something publicly.

[0] https://www.amazon.com/How-Disappear-Digital-Footprint-Witho...


This reminds me of a post I saw on /r/fatfire about how to buy a house anonymously. I have it bookmarked in case my company ever takes off.

https://old.reddit.com/r/fatFIRE/comments/l0wd5i/update_to_p...


Also, just because there is a law doesn't mean there are consequences codified, and this is most true regarding state laws that “require” an LLC registered in their state. Remember that states are in competition for business, so there are hurdles to them doing annoying things. The best example I’ve seen in one state is that a local LLC branch is required only after your foreign LLC gets sued, and the limited liability is active and retroactively applied at that point in time. But hey, maybe your anonymous LLC deters people from suing to begin with.

(This is different than there being a law codified and not being enforced)


"Anonymous" LLCs are going away. FinCEN will be requiring reporting of everybody with ownership greater than 25% starting in probably 1-2 years from now. The fucking feds won't let us have anything without stalking our every move.


Yes, their proposed implementations of that act look pretty onerous and unnecessarily difficult to comply with

But I’m fine with one agency of the federal government having a private database, shareable for some investigations, which is the direction it’s going.

I hope it gets handicapped or repealed


My name being Matt S Trout, abbreviated on various documents as 'MS TROUT', I regularly get junk mail addressed to 'Ms. Trout'.

Once, my housemate got junk mail addressed to 'Mr. <firstname> Trout' as a result. He was a trifle annoyed by this but his partner found it hilarious.


Anonymity is a broad term though. I would be incredibly surprised (and fascinated) if anyone on HN can find my true identity from my account, even with an e-mail address in my profile. I also know that Google (and therefore powerful bad actors or law enforcement) can easily figure it out, since I've logged in to this e-mail address from the same devices as my private e-mail address.

If I'd need full privacy, I'd have to add many more levels of security in my daily life that I don't find necessary. I just don't want people (or a SWAT team) to show up at my door because I triggered someone on the internet. That's why I post from multiple different accounts on different platforms. Though, I'm sure, in the future some form of AI will be able to link them all based on writing style and similarity of content of my posts. Guess I'll have to find another way to remain somewhat anonymous then.


  > some, very old
  > I had sworn were done anonymously
How in the hell did they do it? I presume you changed IP and user-agent many times over since then... How?


With a sticky fingerprint. I’ve built a system like this for managing trolls. You fingerprint the user and associate it with an IP. There are multiple mechanisms that can contribute to the fingerprint (cookie, user agent, supported media codecs, etc.; see https://github.com/fingerprintjs/fingerprints for an example implementation).

Then if another user registers with the same fingerprint we link the accounts together.

In our case, the whole thing also requires human moderator input to actually keep it going, though.
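
The linking step described above can be sketched in a few lines. A toy version, assuming a server-side store keyed by a hash of whatever client attributes you collect (the attribute names below are illustrative, not the actual signals fingerprintjs uses):

```python
import hashlib
from collections import defaultdict

def fingerprint(attrs: dict) -> str:
    """Hash a canonical serialization of client attributes into a stable ID."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# fingerprint -> set of account names seen with it
accounts_by_fp = defaultdict(set)

def register(account: str, attrs: dict) -> set:
    """Record an account's fingerprint; return every account sharing it."""
    fp = fingerprint(attrs)
    accounts_by_fp[fp].add(account)
    return accounts_by_fp[fp]

attrs = {"user_agent": "Mozilla/5.0", "codecs": "h264,vp9", "timezone": "UTC-5"}
register("banned_troll", attrs)
print(sorted(register("fresh_account", attrs)))
# -> ['banned_troll', 'fresh_account']
```

In practice the fingerprint is fuzzier than a single exact hash (any attribute change would break the match), which is one reason the human moderator step matters.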


I once helped nuke a user who was a persistent sexual harasser with access to multiple /16 ranges - but their device config (browser and etc.) was unique to users of the service in question, so we just hellbanned that.

(apologies for the level of vague here but I don't believe it's fair to anybody else involved to be less vague - including the user in question, who seemed relatively young and I hope has grown up since)


Your link 404s. I believe this is what you meant to include: https://github.com/fingerprintjs/fingerprintjs.

There is also a premium version: https://fingerprint.com/


I wouldn't presume anything. But email, phone number, cookies, other machine fingerprinting, and wifi and other location giveaways are also possible.


Likely they drop a cookie with a session ID. I believe (but have no knowledge) that Disqus presents in an iframe, so the cookie persists whenever it loads on a page. So they can attribute all posts from that common session ID. That said, Chrome does not sync cookies, so this method only works as long as cookies don't get cleared (or the computer replaced, etc.).

So it might be something else, given the implied age of the comments.


About the proposed clues in the sub-thread:

- Cookies are temporary. Even 'ever-cookies' wouldn't have survived browser upgrades.

- Email, phone: the parent insists on having had privacy opsec, so reusing those over time would not fit this view.


I have no idea. This was before all the machine learning stuff came into fashion.


They keep all the data posted since the early days of the internet, and are applying machine learning to it now (and will continue to into the future).


That's chilling. The only long-running thread of attribution would rely on stylometry - a good fit for ML, though.
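
To make the stylometry idea concrete, here is a minimal sketch using character trigram counts and cosine similarity; real attribution systems (and whatever Disqus actually did) would use far richer features and trained classifiers:

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count character n-grams, a classic cheap stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

same_author_a = "I had sworn those comments were dead and buried in a deep grave."
same_author_b = "Many were ones that I had sworn were dead and buried long ago."
other_author = "Completely unrelated text about cooking pasta al dente."

sim_same = cosine(char_ngrams(same_author_a), char_ngrams(same_author_b))
sim_diff = cosine(char_ngrams(same_author_a), char_ngrams(other_author))
print(sim_same > sim_diff)  # -> True: shared phrasing scores higher
```

A few shared stock phrases are enough to move the score; across years of posts, that kind of signal accumulates.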


Perhaps same email address?


3rd party cookies perhaps? I'm actually very curious how this was possible too


Luckily for you, you're not the only Chris Marshall in NY. I personally know a physicist with the same name in the same state.


I ran into one, at a FlashPix conference (dating myself -no one else will), last century.

He worked for Kodak, at that time. I used to have his card, with his smiling mug on it (Kodak used to have picture business cards).

One advantage of having a fairly old and robust online (and offline) ID, is that it actually makes it harder for people to assume your ID, as there is so much "prior art," pointing to your real persona. It makes it fairly easy to short-circuit hijacks.

Of course, it all becomes problematic, if I decide to go around pissing everyone off, or poking bears.

But, if I piss off people that have the willingness and ability to do me harm, they'll find me, anyway.

I don't choose to live a life in a shack in the woods, typing manifestos. I want to be a part of Society, and reap the benefits of participation.

It also helps me to help others. A lot of my life, these days, is around helping others. Hard to do, if I'm hiding in a dumpster.


The only hijack attempt I've ever experienced was some card fraudsters I pissed off creating a fake paedophile blog under my full name (and then spamming a large chunk of IRC with links to it for a year or so).

Happily for me everybody I've seen comment on it laughed it off as "I wonder what happened to make them that angry but I'm sure they deserved it" - with a few exceptions of people I've previously pissed off who basically said "nah, he's an asshole moderator but there's no way he's a paedo".

And, well, sometimes I -am- an asshole, and some of those times I deeply regret. But reputation can be useful in general.


You don’t need to give up privacy. You just have to pay for it. If you want search engine privacy, Optery offers it as a service.

https://www.optery.com/

It’s a YC company. My only affiliation is that I’m a customer.

I have a discount code if anyone is interested. I wasn’t sure if I could just paste it in the comments


What prevents them from selling the database of their paid users to the highest bidder?


No clue. You’re going to have to ask

https://news.ycombinator.com/user?id=beyondd

I’m assuming that YC frowns on their funded companies doing unsavory things?


As someone who applied to every YC company in my region, I find their funded companies do lots of unsavory things. I stopped buying from one because they were using deceptive marketing, and I stopped applying to one after its Glassdoor reviews talked about the long hours and racist incidents.

There's a kind of YC culture where they believe nice guys do well [1]. And they're likely biased towards funding nice people. But after they're funded, they don't really have any control over the company.

[1] http://www.paulgraham.com/good.html


>I’m assuming that YC frowns on their funded companies doing unsavory things?

Only if it leads to unsavory revenue reports.


Didn't YC literally invest in a ponzi scheme?


I’m getting nothing but PHP errors just visiting that page


Not sure what you’re seeing, but it still works fine for me

Maybe ping them?

https://news.ycombinator.com/item?id=30605010

Are you sure it’s not an issue on your end Cole?


Just tried again and it’s working now. Regardless, I’m not sure how it could be my end when the site was spitting out PHP errors listing file paths on their server.


This is also my strategy, for similar reasons. I never want to forget that the future is watching what I do online, so I post using my name.


Yeah for you and right now, it’s okay. Eventually something will happen to you where you will reevaluate your risk tolerance.


I’ve already been through plenty (long, sad story. Get your hanky).

I don’t foresee changing my stance, for myself. However, I’ve been around long enough to know that the way I see (and do) things does not apply to others.

I would not prescribe my way to others. I’m merely recounting my lived experience, and the personal decisions that I made, based on them.


Right? This whole thread feels like a joke when the author just removed their full name from their public, open source code 3 hours ago (and only from one of their repos; their name is fully visible in all the other LICENSE.txt files).


Searching his "globally unique name" yields 4800 results.

Good luck with that.


Getting 5580 results now, oops. He's got a domain matching his exact username, hosting his resume with his full name and more. Is he serious? Haha.


If that is truly the case, then the original post does seem like trolling. I highly doubt anyone would ever honestly describe a name with several thousand results as 'globally unique'.


This is victim blaming. Whether or not he could have been more careful is not an excuse for GPT-3. Illegal behaviour should still be illegal (1), even if the victim could have done more.

(1) I seem to remember a court case somewhere on the planet in recent months where lack of resistance was deemed indicative of consensual intercourse. Which is not even remotely acceptable. But I digress.


What illegal behaviour? GPT-3 is only spitting out public data. It's like blaming someone who takes a photo of a road because the photo includes some random person.


To the best of my knowledge: GDPR has no exception for public data. If you have PII, you need a reason. Consent is a reason; "needed for the requested business" (eg. home address for online shops) is a reason. "Having fun with autogenerating sentences"... doesn't seem like one.


It's one thing for someone to see my username on a gaming forum, search for it, find my github, pick a repo, click on the license, and find my name there. I'm ok with that, I feel like it's a high enough barrier for casual trolls and bots.

It's another different thing for my name to be auto-completed by the most popular, publicly available language model. That I'm less ok with, and I'm sure other people will find absolutely despicable.

We have GDPR and Right to Be Forgotten for a reason.


If someone wants to find out your identity, they're not going to turn to GPT-3, they're going to do the Google search. With that in mind, I don't see how GPT-3 turning up the same stuff that Google does when given your username is a threat to you.

If there's a prompt out there that doesn't contain your username but does spit out your full name, that's a bigger concern.

EDIT: Oh, and as far as bots go: I really can't imagine someone coding their bots to rely on GPT-3 for personal info. GPT-3 doesn't have an "I don't know" answer. In your case it might turn up something useful, but for most people it would turn up nonsense that is indistinguishable from something useful. It's far more reliable and most likely cheaper in the long run to just buy the data.


My name is Matt S Trout. I regularly receive snail spam addressed to 'Ms. Trout'.

Buying the data is quite possibly more cost effective but 'far more reliable' is, I suspect, an overestimate of the quality of the data available to buy.


Well, it hugely matters whether you have exercised the right to be forgotten since the model was made, or whether PII was illegally made available and has since been removed.


Unfortunately, your post has led to a huge Streisand effect: there is now increased interest in knowing who you are.

Just take a look at Google Trends for your username: https://trends.google.com/trends/explore?date=now%201-d&q=Bo...

If I were you I'd do the following:

1. Email dang (hn@ycombinator.com) to ask how he can help with damage control.

2. Stop commenting under your now de-anonymized alias. Any future posts on this topic should be from a freshly minted HN account.

3. There are still quite a few open source repos under your GitHub handle that contain your full name in the LICENSE.txt, so you might want to strip those out as well.

Good luck with getting OpenAI to extricate your full name from their model.


The fundamental mistake here was posting under the same name he'd been using all over the internet for at least 13 years!

The larger question, I think, is what happens with OpenAI when it comes to information specifically prevented from publication by a court order - for example, the new name of Robert Thompson. How can an AI be held in contempt of court?

What about when AI puts an end to witness protection by working out what the new identity is? Between behavioral and photo matching it should be quite easy. What happens when it gets it wrong?


Thank you very much for your concern, but my anonymity was very very casual, and it doesn't bother me much that it's now lost.

I was hoping to start a conversation about privacy in language models and its slippery slope, and in that it was partially successful.


I think the conclusion, though, is that the loss of privacy to language models pales in comparison to the loss of privacy to the rest of the internet.

The language model is just a different way of organizing the information that is already out there. It doesn’t create or expose any new information, and its ability to correlate information with other information isn’t superior to the other, existing ways. If someone is attempting to find out the real person behind your username, a Google search still works better than typing your username into the language model.

The real lesson (and one that predates language models like GPT), is that we can no longer rely on the strategy of being ‘lost in the crowd’ to preserve our privacy. When you took all of those actions that connected your username to your real name, you probably thought, “this is just one little obscure thing, no one will ever find it.” Of course, Google finds everything.

This is completely unrelated to things like GPT.


> The language model is just a different way of organizing the information that is already out there. It doesn’t create or expose any new information, and its ability to correlate information with other information isn’t superior to the other, existing ways.

At a previous logistics company I questioned exactly this, regarding a new tool that let managers track driver performance much more easily than digging through many old systems, taking notes, and making inferences.

Because the new tool made it easier, even though the data was the same, the company's lawyers were very skeptical of it in relation to the drivers' privacy.

There is something grey about this, I think. Ease of access does matter to some extent, even though my engineering brain says the data is the same so it doesn't matter.


I actually completely agree with this.

A lot of our historical privacy has been based on practicality. We don't worry about someone monitoring us all the time in public because it would be expensive and exhausting to follow us around all the time, and there is no way to follow everyone.

Computers change this. Computers are great at doing repetitive, boring tasks forever. A person couldn't watch every camera 24 hours a day, but a computer can.

It is absolutely something we should think about.


The problem with using your example to start this conversation is that there's good reason to believe that GPT-3 memorized the link between your username and your name precisely because you were so casual in linking them. There was so much data out there that it ended up embedded in the model.

It's hard to get worked up about privacy in language models when the big scary case in front of us is one where the person in question admits they never cared much about maintaining anonymity.


Yeah and the conversation you started is "people who care about anonymity aren't serious about it."


That's kinda the point though - the argument being presented is AFAICT "people who aren't that serious about it should still not be randomly outed."

I'm not convinced that's practically tenable these days, but on a moral basis I endorse it and would present Scott Alexander as an example of why.


Okay, but Scott actually had a reasonable argument for why the level of anonymity he wanted was important. Scott was not trying to start a conversation, Scott was trying to be anonymous. That's why he's such a good example.


But it looks like it's in all sorts of places - how do you think the language model ended up thinking that was a good autocomplete?

There appears to be a name on a YouTube channel too that doesn't even need any additional steps (i.e. the jumping into repos and then license files you mention above).

Putting that aside, why is this such a concern? It's just a label. It would be another thing if something more meaningful were revealed (e.g. your current physical address), but a username/real-name pair alone is generally not that big a deal, and it's plausibly deniable (i.e. there could be plenty of other username/real-name pairings that are valid).

Anyway, this comes across rather like the Streisand Effect


Your username contains part of your actual name?…

…Not to reveal information here (counter to your goals) but there was no way this thread wouldn’t motivate people to see how hard it is to find your name. But I was surprised to see the username contained a word from your real name considering you’re concerned about revealing personal information.

Making a new online identity is pretty standard practice these days if you’re worried about this sort of thing. Especially if you read Grugq’s stuff about modern opsec (just google his name).


Sure, my username is part of my actual name. I can still wish for a modicum of privacy.

If you find that so strange, read up on Scott Alexander[1], who deleted his entire blog when the New York Times threatened to publish the third and last part of his name.

I'm trying to keep basic hygiene, like not having *Hacker* News on the first page of results when somebody googles my real name.

[1] https://en.wikipedia.org/wiki/Slate_Star_Codex#New_York_Time...


IIRC for Scott it was a professional problem: a psychiatrist's patients are not supposed to know anything about him outside therapy.


How do you know it's your name? Maybe it's someone else with the same name as you?


Because of my home country naming standards, I have two unusual last names. There's no one else with the same name as me.


you established an online identity that is apparently persistent across many different websites and also intentionally linked it to your personal identity

this has nothing to do with AI or GDPR. the places you gave this information were public. when you do this, there’s no way you “try to hide my real name whenever possible, out of an abundance of caution.” you’re doing all the things any basic infosec lesson would tell you not to do


What prompt did you use on GPT-3? I tried a few and it never revealed your actual name.
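For reference, here's a minimal sketch of how one might probe completions systematically, assuming the 2022-era `openai` Python package and an `OPENAI_API_KEY` in the environment; the prompt templates themselves are just guesses, not the prompts the author actually used:

```python
import os

# Hypothetical prompt templates for probing whether a model links a
# username to other information; the exact wording is a guess.
def probe_prompts(username):
    return [
        f"{username} is the username of",
        f"The GitHub user {username}'s real name is",
        f"My name is {username}, but my friends call me",
    ]

# Only hit the API if a key is configured (requires the `openai`
# package with the pre-1.0 Completion interface, circa 2022).
if os.environ.get("OPENAI_API_KEY"):
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    for prompt in probe_prompts("BoppreH"):
        resp = openai.Completion.create(
            engine="text-davinci-002",  # model name as of mid-2022
            prompt=prompt,
            max_tokens=16,
            temperature=0,  # greedy decoding favors memorized continuations
        )
        print(repr(prompt), "->", resp.choices[0].text.strip())
```

Temperature 0 matters here: sampling at higher temperatures can mask memorization that greedy decoding reveals.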


I very much believe in the following sentences. However, I believe the value of the internet for any person may be directly correlated with the amount of PII they are willing to share, which to me makes this, if a question of morality at all, a personal decision.

The sentences that stuck out to me are: “If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.

The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.”


> A Google search for "BoppreH" turned up several results on the first page

Not for me. It took until page 3 for just my first name to appear. If somebody is looking through past GitHub commits, that's already a high enough barrier for me.

I only partially agree with your conclusion. Asking people to maintain total anonymity always, with any slips punishable by permanent publication of that PII, might be the current status quo, but is not where we as society want to head.


It seems strange to expect the internet to keep your privacy for you if your PII has been leaked by you. Nobody but you can know what you want done with your information, and people choose to post PII routinely, so it’s not possible to assume that when someone posts PII it’s actually private or an error. GPT-3 cannot be blamed for reciting things you can find in a Google search, and it doesn’t matter if the results are on page 1 or page 20. These days there usually are ways to fix leaky posts if they are taken care of immediately, but not if you wait a few years. Either way, this doesn’t feel like clear enough thinking about what should and should not happen, nor about what society wants. I want control of my privacy, and if the internet were to scrub PII without my authorization, which seems like what you’re suggesting, that would not be control.


The third result down is a repo which I assume is yours. Until 4 hours ago your name was in the LICENSE.TXT, and it's still the most recent change. You've also got your CV indexed on boppreh.com (and available in archive.org)

Another early result in DDG is a profile on deviantart, which you may not want linked to your professional identity (or maybe you do).

Your steam community page has a list of hundreds of games you own.

Fundamentally your problem isn't as much that your github account links to your name, it's that you use the same identifier across the web, one that isn't common like "neo", from "interesting" sites like deviantart to more normal ones like ubuntuforums.

You've removed your CV from your website, but it's still in the Internet Archive. And do you really want your CV hidden? You've got a good portfolio of work on the internet.

To me, the lack of separation of your names is far more of a challenge to your anonymity - especially when you call it out by posting something like this under that nom de plume. You have multiple aspects of your life that you can present in different ways; choosing a single unique nickname links those together. Is that what you really want, even if your real name weren't connected to it?


Thanks for the list, I took some actions based on it.

Again, I'm not too concerned about my name or what comes up on Google or GitHub because they follow GDPR.

Language models are already as powerful as Google searches at finding my name, but there's no recourse. What will this look like in 5, 10, 50 years?


Why does GDPR help? Have you read it? It has no provision for protecting you from information you choose to share publicly. GDPR will not force Google to scrub the information you chose to put online, that’s not the spirit of the right to be forgotten idea. GDPR does not enforce absolute privacy at all times, it merely sets reasonable standards for companies to prevent leaking your information without your consent; it’s specifically for when you didn’t choose to publish your PII.

Your problem is that you gave consent and chose to publish your PII and now want it revoked globally long after the fact. That is extremely problematic since it’s very very difficult to unpublish things once published, and there is little if any precedent for people being able to change their mind. Nobody expects to be able to unpublish a book such that it cannot be recovered or reprinted after copyrights expire, that’s just not a thing, right? This isn’t a problem with language models, this seems like more of a problem with exposing yourself and then changing your mind.

It would be great if there were tools to help manage this, but that’s not something that has ever existed nor is codified by GDPR or any other current laws. That said, your concept does present an opportunity for an idea that someone could implement and start a business or organization for.


Everything you said about GDPR is wrong.

Publishing one's name publicly gives no explicit or implicit consent to third parties to handle such data.

You absolutely have the right to change your mind, and companies need to delete your data on demand.

GDPR absolutely codifies the process to obtain, update, and delete data from third parties.

And GDPR definitely does not care how hard it is to unpublish data.


Sorry, I must not have been very clear, and it seems like perhaps you didn’t follow the thread about what @BoppreH actually did. This is not a case of one exchange that can reasonably be revoked. This appears to be a case of @BoppreH publishing his own information publicly for years in multiple places, and using the same username everywhere. People do this on purpose in order to do business, to build online reputations, and to link to their offline lives.

You’re also mistaken about the scope and reach of GDPR as it applies to what @BoppreH was asking for. The GDPR has been quite effective so far at making companies keep PII to themselves, stop selling it without consent, and take reasonable precautions with it. It has done very good things for privacy. This thread isn’t about accidentally leaked medical records or purchase preferences. This thread appears to be about someone who wants to undo a decade of their own internet activity that has spread far and wide. That’s not a reasonable request.

Look, there are laws that help you with interactions with a specific company, or that can help unpublish libel, or help with a single event or an egregious mistake in the past. But there just aren’t laws that can let you change your mind about a wide swath of information that you leave online for years, information that exchanges many hands and no longer involves any one single company.


> You’re also mistaken about the scope and reach of GDPR

I suspect you're correct, but the point of the original post was, I believe, to link the moral concerns behind GDPR to moral concerns about language models.

For the record: I personally believe that (a) there is a morality issue here (b) the existence of the relevant technology means there's no going back anyway.


I agree the post was attempting to link moral concerns between AI and GDPR, but I reject the link to AI and language models in this case. I don’t think the link has been clearly established in this case, and it seems like too many assumptions were made. The poster is suggesting that AI should be held to a higher standard than Google and Bing; in effect that AI should be both smarter and more culpable than people. This is giving AI too much credit, anthropomorphizing, and also asking for an unrealistic and maybe dangerous precedent.

I’m curious to hear more about what the moral issue is from your perspective. There are legitimate privacy concerns surrounding PII and AI, and surrounding PII and the internet in general. There are legitimate concerns about ways that AI can infer non-public information. There are also legitimate concerns about humanity’s ability to record history and also about the legal and financial liabilities that might come about when recording publicly available information. Those are big general debates and are worthy of attention.

This particular case hasn’t convinced me yet, since the PII was knowingly posted, left online for some time, and since the AI is only regurgitating knowledge that Google also has.


> Your problem is that you gave consent and chose to publish your PII and now want it revoked globally long after the fact.

GDPR Art 17:

The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies:

[...] the data subject withdraws consent on which the processing is based [...]

2. Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take __reasonable steps__, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data.

> Nobody expects to be able to unpublish a book such that it cannot be recovered or reprinted after copyrights expire

Governments do, and it can be done: https://en.m.wikipedia.org/wiki/List_of_books_banned_by_gove...


The part you quoted clearly states this applies to data the “controller” made public, and not to data the user chose to make public:

“Where the controller has made the personal data public”

Again, GDPR does not protect you from yourself: it can’t! And we don’t want it to, we don’t want a surveillance state that bans public information and removes control, nor do we want people to censor history after the fact, right? As has always been the case, privacy needs to be considered before publishing information about yourself, not after.

Banning books is different from what I suggested, that’s the government deciding something is illegal, not the author changing their mind. Banning books also doesn’t cause them to be forgotten. It’s not a great example of what @BoppreH is wishing for, wouldn’t you agree?


> As has always been the case, privacy needs to be considered before publishing information about yourself, not after.

How? 10 years ago one could not expect that the entire internet would be used as training data for an AI model.

The processor has to have a legal basis for processing your data even if it is publicly available. One could argue that there is some kind of consent if the data is publicly available, but consent can be revoked.

https://en.m.wikipedia.org/wiki/Right_to_be_forgotten

> The right to privacy constitutes information that is not publicly known, whereas the right to be forgotten involves removing information that was publicly known at a certain time and not allowing third parties to access the information.

I think you could make a case under GDPR where making the model is data processing, and BoppreH could demand to be erased from the dataset. Why is he in the dataset in the first place? Why is anyone? Am I in it? Aren't these legitimate questions?


> One could argue that there is some kind of consent if the data is publicly available

I’m not arguing that all publicly available information implies consent (it does not), I’m simply repeating what @BoppreH already said: that he gave consent in this case. The reason we need GDPR and other privacy laws is primarily for cases when someone else decides to publish your PII. This is different; this is a case of the user publishing their own PII and allowing it to be public for a long time.

> How? 10 years ago one could not expect that all of the internet is used as training data for an AI model.

Hehe, are you joking? You must be young because people have been expecting this information explosion and archival for a hundred years and more, and people were talking about using the whole internet for AI training ever since the internet and neural networks were invented, complaining that we didn’t have the computational power yet, more than 50 years ago. Archive.org (saving the entire internet) and Google (indexing the entire internet) both launched almost thirty years ago. Ten years ago there were already debates that could stand in for this very thread.

The AI model is completely irrelevant here, and it has been from the very first comment. AI isn’t @BoppreH’s problem, his problem is that he willingly published his name on the internet and now doesn’t want people to know it, via Google or any search index. The AI did not glean any PII about him that Google and Bing and multiple other sites hadn’t already indexed.

As to how … Well, there are sayings out there older than dirt that clearly discuss how to keep your things private. The so-called “New York Times” rule is pretty well known, but is just one example in a class of such general advice that is probably as old as the printing press, and comes in many variations. Don’t do or write down anything you wouldn’t want printed on the front page of the New York Times. If it’s private, keep it private.

* BTW I’m sure you saw that above the WP sentence you quoted, there was a lot of text that broadly if indirectly supports what I’m trying to say. The right to be forgotten isn’t the right to be forgotten from all online existence, it’s not something you can use to force all published information about you for all time to disappear at everyone else’s expense, it’s a right that people can sometimes exercise to fix specific mistakes, and it gives specific examples such as revenge porn. And again, most of the examples involve someone else publishing your info without consent. Wishing the right to be forgotten gave us editorial power over history is extremely problematic, as was stated right on that Wikipedia page.


> his problem is that he willingly published his name on the internet and now doesn’t want people to know it, via Google or any search index. The AI did not glean any PII about him that

He consented, and if this is the legal basis, he has the right to withdraw the consent at any time without reason. The data then has to be removed. This is done by search engines on a regular basis, e.g. DMCA takedowns.

> Hehe, are you joking? You must be young because people have been expecting this information explosion and archival for a hundred years and more, and people were talking about using the whole internet for AI training ever since the internet and neural networks were invented, complaining that we didn’t have the computational power yet, more than 50 years ago

To the average Joe this is still something out of the realm of science fiction. They are still wondering if the phone is listening to them when they get ads for products they were just talking about.

I understand your point and I am ambivalent about both sides, but when in doubt I would opt for the strong privacy side. Ea-nasir, for example, could not have dreamed about what would happen to the complaint letter [0], and it was not something New York Times-worthy at the time the complaint was written.

[0] https://en.m.wikipedia.org/wiki/Complaint_tablet_to_Ea-nasir


> he has the right to withdraw the consent at any time without reason.

From whom?

Your claim there is neither broadly true according to both articles you quoted, nor is it particularly practical. And the practical part is the part you are consistently ignoring here. Theoretical rights are useless if they can’t be realistically exercised. This is no longer something he can request of any one single company, the data is presumably all over the internet, with no reason to believe GDPR even applies since GDPR has no jurisdiction over non-EU sites that don’t target EU users and don’t do any specific business in the EU. You might be able to request something from Google maybe, but good luck and Godspeed with Baidu and Yandex and any site outside of EU jurisdiction.

DMCA takedowns target specific content; they are used by copyright holders to assert rights to a given video, song, image, etc. DMCA takedowns have not been used to erase years' worth of random PII connected to someone's username on the internet, nor has GDPR, nor has a right-to-be-forgotten law. And, as everyone knows, DMCA takedowns are sometimes completely ineffective because it's hard to put the two million cats back in the bag.

The most troubling part of this thread to me is multiple people here asserting the existence of seemingly absolute rights without even a passing acknowledgment of the ramifications and the negative consequences it would have if people started demanding editorial power over the entire internet over casual, consensual, and non-damaging PII. It might not be apparent from my comments yet, but I’m firmly in the camp of privacy advocates, I think the GDPR has done wonderful things, and yet I see this argument as both lacking historical perspective and being completely unrealistic. I don’t want to live in a world where it’s illegal to record history and everyone can erase minor things they regret long after they explicitly agreed to publication. I’m perfectly fine with the existence of rights to undo certain mistakes and wrongdoings, but generally speaking I think it’s a mistake to even want the ability to revoke any non-threatening public information at any time, completely aside from the fact that that is not actually broadly available in any country today.


Practical or not, who should be in charge of your PII? Is it something you have no control over once released? Even if it's not damaging now, it could be in the future. Consider deadnaming: once it was valid PII, but then it changed, and now it can be used to harm. Is everything one does subject to public record until the end of time, or is there a chance to live a life unnoticed? Can I change my views, or am I judged by blog posts I made when I was younger, maybe another person entirely? Can I change who I am, or is my future written by my past and enshrined in algorithms that judge my credit score or my social score?

> The most troubling part of this thread to me is multiple people here asserting the existence of seemingly absolute rights without even a passing acknowledgment or thought to the ramifications and what negative consequences it would have [...]

There are absolute rights. Absolute human rights, everyone has them, practical or not. Even if you can't enforce them. Protection of personal data is such a right.

Sorry for all the pathos, but I firmly believe it. I think this is the core of our discussion, maybe our fundamental disagreement. I understand your point of view, but I weigh some aspects more strongly than you do, and you weigh others.


I appreciate you mentioning that you’re hearing my side of the argument, and to be clear I’m hearing yours. I agree with a lot of it, up to the point where someone publishes their own PII intentionally; that’s the sticking point for me here.

So yeah if you’re talking only abstractly and intentionally ignoring the practical then we’re definitely talking past each other a bit. It’s hard to discuss rights that can’t be enforced and aren’t part of a specific legal code; normally if it isn’t law and can’t be enforced, it’s more of an idealistic goal than a right. GDPR is entirely based on “reasonable” precautions, it does not and cannot demand anything impractical of companies that haven’t violated the law. Imagine you’re giving @BoppreH actual advice about what to do, and tell me what he can realistically do that won’t cause him months or years of work and frustration, or ultimate failure to revoke and protect his published PII.

I should be in charge of my PII. I am in charge of my PII. What does “in charge” actually mean, though? There are two specific issues here that make the question of who should be in charge moot. One is that @BoppreH was in charge of his PII and chose to publish it. That is control over his PII that he exercised. His actual stated wish was for GPT-3 to somehow guess that his PII found on Google should not be indexed by GPT-3. A human wouldn’t do that, so why should a machine? My issue here actually does go straight to your question: if someone revokes intentionally published PII, that can cause harm to others. Imagine you write a biography and in it, a chapter about your best friend BoppreH, who agrees in writing to be featured in your book. You publish the book and six months later your friend says, “no, wait, I don’t like that anymore, I revoke it and I want all mention of me retroactively erased: I have a right!” What can you actually do? This could cause loss of income for you and your publishing company, distress and loss of friendship, lawyer fees, reprinting, destruction of unsold stock, and costly time spent editing, renegotiating, and redistributing. I’m imagining just a few of the many bad things that could happen with a book, but there are many analogous issues, and some unique problems too, with deletion of data online.

Have you considered the possibility that @BoppreH may have in effect signed multiple contracts stating that he agrees to publish personal information and not hold the publisher liable for it, or demand that it be taken down? (This is not an abstract question, this is what GitHub’s license states, for example, and others here pointed out that some of his PII was on GitHub.) How do you reconcile a so-called right to revoke PII with consensual contractual agreement to publish this PII? You’re arguing that this right to revoke should be allowed to override signed contracts without cause? There are so many legal & practical problems with that idea, I don’t know where to begin.

The other issue is that once information goes public, it cannot be reasonably contained. The transition from private to public is a one way street, it always has been, and it has never gone the other way, by and large. This is in fact codified into law in many ways (securities laws, for example, specify actions to take when insider information is leaked), and the fact that the publishing of information can’t be taken back has been the default assumption for humanity for a very long time.

The idea that somehow you can revoke something that’s published and public is a very new idea. The idea that it can be for any reason at any time even if you previously agreed to it is a very naïve idea that doesn’t yet exist in practice. It’s a good thing that there are specific exceptions, but in general unpublishing on a whim isn’t realistic. We already know that media companies haven’t been able to stop movie or song piracy with DMCA nor copy protection schemes nor fines & lawsuits, why would we think unpublishing personal information from the internet is even possible? Generally speaking, it’s not.


I think you're putting too much faith in GDPR and the right to be forgotten. I'm guessing you've got Portuguese citizenship from a parent to be covered (I'm not aware of any equivalent Brazilian rights), but I'm not sure how much that applies to fake names like "BoppreH" or "Elton John" compared to a real person's name.


  > Asking people to maintain total anonymity always, with any
  > slips punishable by permanent publication of that PII, might
  > be the current status quo, but is not where we as society
  > want to head.
‘Sea,’ cried Canute, ‘I command you to come no farther! Waves, stop your rolling, and do not dare to touch my feet!’


Well, if only the sea were made by humans and governed by human laws... Or do you think the Internet is an uncontrollable force of nature?


The aggregate of human action is uncontrollable, at least at our current understanding and level of technology. What is being asked to be controlled, here, is other humans, not connectivity and protocols, and controlling humans can be... difficult, whether by law, cultural norms, or whatnot.


The Google results are almost entirely from places that I control, directly or indirectly. I can delete the repos, retract my papers, ask moderators to remove my comments. For more serious cases there are courts and laws.

There's no reason why language models should be immune from what is standard expected behavior in society.

I'm not raging against the sea, I'm raging against a bulldozer operator who has plugged their ears.


Why should AI have different privacy standards than google?

The papers you referred to in the top comment have been talking about the ability of AI to infer PII from anonymous data. But that's not what happened here. You are complaining about AI returning non-anonymous data that is easily findable via other mechanisms. I'm not sure I understand why AI should be expected to recognize and filter out information that is otherwise public?


>There's no reason why language models should be immune from what is standard expected behavior in society.

So sue OpenAI then. That's your recourse if you believe you've been harmed. I don't think you'll be very successful, given that even people here aren't strongly on your side. I think normies on a jury trial are going to be even less sympathetic to your arguments than the HN crowd.


Yep, no way to do anything about the sea. Just ask the Netherlands.


I see your name in like the sixth Google result on page 1.

You can't "put the genie back in the bottle". It's out there, the Internet remembers forever.


> The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.

A third approach is using a word that means something and thus is not unique at all.

Unique strings for usernames means lots of accurate hits. If you google mine, there will be lots of hits but none are me.


My general belief is that I, and others, should often treat the internet as a public forum like the local town square. Of course people can show up in a physical space, hiding their identities and screaming obscenities at bystanders, but I know I’m not that type of person. As a result, the principle I usually post things under is “conduct myself online as I would in person.”

Of course this doesn’t account for “the crazies” that could potentially harass me into my physical life at an easier rate simply because they’re mad I won an online game or the like. Thankfully I haven’t had to deal with such a situation, but I also believe that may be a consequence of avoiding inflammatory back-and-forths or highly-political discussions since anonymity is reduced, which may invite those attacks.


Yes, one of his mistakes was using the same username everywhere. It only takes a few links and he's burned.

It's also better to use a username you copied from someone else; that way, if people find links, they find someone else entirely.


> merely how they do be

Going on a tangent here but I've started seeing more "do be" used lately. However, it doesn't seem right for some reason I can't pinpoint (English is not my first language).

Is it from a dialect?


https://en.wikipedia.org/wiki/Habitual_be

It's an African American idiom which has bled into Gen Z vernacular, from what I've seen.


Habitual be is IMO a fantastically useful linguistic innovation (and I say this as a nearly forty english white dude).


If you want search engine privacy, you can’t go wrong with YC’s Optery

https://www.optery.com/

I’m a satisfied customer


The only way to fix this now is through collective, not individual, action. Policy, for example.


it certainly is not futile. it's futile to try and hide. what's not futile is to spray out false information that muddies the real stuff. if OP wants to obfuscate his real name, he can associate his username with 3 different false identities, a throwaway phone number, a false nationality, etc.

obviously it's a little paranoid and arrogant to assume that anyone cares enough to go through my comments, but occasionally, on websites like this and reddit, I will just outright lie about where I'm from, or what my age or gender or ethnicity or sexuality is


Interesting how everyone says „But I can google you“ instead of thinking about the issue.

Companies are building and selling GPT-3 with 175 billion parameters, and some of those „parameters" seem to encode OP's username and his „strange" two-word last name.

If models grow bigger, they will potentially contain personal information about every one of us.

If you can get yourself removed from search indices, shouldn’t there be a way for AI models, too?

Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?


The FTC has a method for dealing with this: they have in the past year or two ordered companies with ML models built from the personal information of minors to completely delete their models.


I asked this question a few days ago; I just wanted to say thanks for answering. Some companies can find themselves without a business model if they handle that badly.


I agree. There is no reason ML tech should do worse than traditional software at protecting privacy as required by GDPR and CA regulations, at the very least.

The input datasets should be managed as per GDPR/CA regulations, with clear flags protecting privacy of EU citizens and CA residents. And any derived models should propagate these labels and not allow querying information violating these regulations.

If the GitHub Copilot implementation or the GPT-3/4 models were developed without these regulations in mind, these models should be retrained.

Yes, it is a hard research problem. Yet, there is no reason these models should be allowed to violate privacy in worse ways than traditional software.


Is it really that different than a search engine? Take away the AI specific language and you have two products that when given his username return results with his real name.


With classic search engine indexing you can find and remove exact matches from the index, but with neural networks it's harder to make sure you've removed every representation of a specific piece of information from the parameters. For example, you might somehow remove the exact username-to-name mapping from the model parameters (that doesn't seem too hard at first), but the model may still return the information if somebody asks it differently.

So if you try to remove information from a neural network model, it can still hold it in forms you may not even think of; in a language model, for example, the same thing described with different words.

And on the other hand removing one thing may affect the models performance on other unrelated things too.
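The contrast can be made concrete with a toy sketch (illustrative only; a real search index is far more elaborate, but the erasure operation has the same character):

```python
from collections import defaultdict

# Toy inverted index: term -> set of document ids. Erasing an
# association (e.g. a username) is an exact, verifiable delete;
# nothing analogous exists for a model's distributed weights.
class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, term):
        return self.postings.get(term.lower(), set())

    def erase(self, term):
        # After this, no query for `term` can match -- and the
        # removal can be verified by inspecting the index directly.
        self.postings.pop(term.lower(), None)

idx = InvertedIndex()
idx.add(1, "username realname")
idx.add(2, "unrelated document")
idx.erase("username")
print(idx.search("username"))  # set() -- provably gone
```

The point of the sketch is that index erasure is a single inspectable operation, whereas the same fact in a trained network has no one place to delete it from.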


well, probably it's time to tag pieces of data, so it's possible to block certain results based on where the data originated.
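A minimal sketch of that idea, with hypothetical records and origins: each piece of data carries a provenance tag, and a takedown list blocks results by origin at query time. (The hard part, which this sketch ignores entirely, is making such tags survive model training.)

```python
# Each record carries its origin; takedowns block results by origin.
records = [
    {"text": "username links to realname", "origin": "example-blog.test"},
    {"text": "some unrelated fact",        "origin": "example.org"},
]
takedowns = {"example-blog.test"}

def query(term):
    return [r["text"] for r in records
            if term in r["text"] and r["origin"] not in takedowns]

print(query("username"))   # [] -- blocked by origin
print(query("unrelated"))  # ['some unrelated fact']
```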


If that's the case, it means that GPT-3 doesn't just raise ethical questions, but legal ones as well: several jurisdictions around the world currently require that search engines allow for the erasure of private information upon request.


Another commenter pointed out that a lot of these models aren't publicly accessible, but will still be used to retrieve information about you - by, say, employers contracting with an ML company.


But they can only be used to retrieve information that is already out there. This is still just using GPT-3 as a search engine, it's just a weird search engine that isn't made to purpose and most of the time produces nice-looking nonsense instead of valid data.


It can also be used to retrieve deleted/delisted information. It's not like a search engine; it's more like an indexed database.


That's a different issue, and something that can be done more easily by purpose-built non-GPT systems.


Nobody said it's easier or more optimal with GPT-3, the problem is that it's possible at all.


> Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?

I don't think that we need new licenses, but probably open source projects need a better way to enforce them.

E.g. Copilot just ignores the licensing issues although I can imagine that there could be a solution with a few different models that return code for different purposes. (Like one model returns everything and the code can be used safely only for learning or hobby projects. Another model returns code for GPL code. And a third model returns code compatible with commercial or permissive open source projects.)

Or the model could also spit out the licence(s) of the code, but I'm not sure if this is technically possible.


The information is embedded in the weights of various layers in the network. Trying to remove that information by editing weights would be like trying to alter someone’s memory by tinkering with synapses.

The only way to be completely sure of removing information would be to re-train the model without that data.


> If you can get yourself removed from search indices, shouldn’t there be a way for AI models, too?

Absolutely yes!


There is a legitimate question here. A lot of comments are trashing this post because his/her name is already all over the internet. But European laws have the 'right to be forgotten'. Aka you can write to Google and have your personal information removed, should you so wish. How might we address this with a GPT3 like model?


I feel like if OP had actually made an effort to hide this information from search engines and GPT-3 remained the last place from which it was available, this point would be a lot more compelling. Right now it's "everybody has my name and that's fine, but that includes GPT-3 and that makes GPT-3 bad".

I would expect that it would take considerable effort to get this information removed from Google (you would have to write to them with a request under GDPR or similar and have them add a content filter) and I don't see why the same effort wouldn't allow you to get removed from GPT-3 (which is only accessible via a web API, so a similar filter could be added).


I can never understand the ‘right to be forgotten’. How does that not conflict with another right, my ‘right to remember’?


It doesn't. It concerns companies, not you as a person. You can remember whatever you want. Companies are no longer allowed to, because they've repeatedly shown that if they remember your data forever they (intentionally or not) do bad things with it.


Ok, but am I, as an individual, allowed to store information myself? Could I build a personal search engine that does what Google does, and index everything on the internet? Would I be forced to delete things from my personal search engine if someone wants me to forget? I can't imagine you think someone has the right to tell me to delete something from my own computer in my own house.

If you allow that, do I not have the right to share that information with my friends? Strangers?

If I can do that as an individual, why does it change if I group together with other individuals and form a company?


Yes, the line is grey AFAIK (not a lawyer), and usually is drawn at characteristics related to "commercial exploitation" (which might not even mean taking money, but competing with companies that do).

It's a similar thing with patents. No one can really forbid (or enforce a ban) you from completely independently coming up with the same idea and executing on it in your garage for personal purposes. However if you then try to commercialize the same idea, you have to face the reality of the world of patents.


I need this to be expanded upon.

Presumably you don't mean your own life or data, or that of your friends and family where you can find consent.

So what's left is arguing for a right to remember strangers with high degree of accuracy, which is just fucking creepy no matter how you defend it.

And no, you don't have that right. It's clearly trumped by the right to privacy. Unless you wanna defend some dude sitting outside your house (public property) recording you and your family's comings and goings in a journal (which is already prohibited under most precedents around privacy, btw).

So unless it's some weird exercise in pedantry around accidental collection of background data (should you be forced to delete a photo because it has someone else in the frame? No, but you shouldn't be able to make it _generally public_ either; a picture frame in your house is fine, Facebook is not), I've either missed something or you lack obvious social skills. Help me out here.


No, I am not talking about gathering information about strangers for your own personal use. I am talking about sharing personal information about yourself that involves another person.

Here is an example that is not hypothetical. I know someone who was sexually assaulted by someone they knew. They went to the police, and charges were brought. However, there was not enough evidence to convict, and the perpetrator was not convicted.

The victim decided to write their story and publish it on their blog. They don't want to sweep their assault under the rug, and they want other people to know what the perpetrator did to them. They want to protect other people who might not realize what the perpetrator is capable of, and warn them to beware. They also are trying to deal with the fact that they couldn't get a conviction, and want to know that at least some good can come from their experience in protecting possible future victims.

So do they have the right to publish this story? Do they have the right to tell friends and family and anyone who might be listening, “don't trust this guy! He assaulted me and got away with it!”

I believe it is everyone’s fundamental right to share their experience, even if that includes someone else in them. Of course, this doesn’t mean you can slander anyone you want, but in this case they are telling the truth. Now, that truth wasn’t enough to convict, but it is enough to not be subject to defamation charges.

So should that person be prevented from naming names in their blog? Are they allowed to tell people who go on a date with the perpetrator, “hey, here is what happened to me, be careful.” Or is the perpetrator allowed to just sweep it under the rug and keep the victim silent?


Cool story. I'm using those undelete search engines anyway.


> Could I build a personal search engine that does what Google does, and index everything on the internet?

Only if you don't make economic or work use of it. See Article 2, Paragraph 2, item (c) of the GDPR:

> Article 2- Material scope

>

> 1. This Regulation applies to the processing of personal data wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system.

> 2. This Regulation does not apply to the processing of personal data:

> (a) in the course of an activity which falls outside the scope of Union law;

> (b) by the Member States when carrying out activities which fall within the scope of Chapter 2 of Title V of the TEU;

> (c) by a natural person in the course of a purely personal or household activity;

> (d) by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.


Take this with a grain of salt, as I can't seem to find it anymore, but... I recall having read, around the time GDPR passed, that not even an individual is allowed to hold personal information on someone else, even just as "contacts" on your phone (with permission, of course, you can).


There might be confusion between being allowed to remember information and being allowed to pass it on. A private address book can contain entries; the problem is giving apps access to that supposedly private data, which may pass on others' data without their consent.

Granting WhatsApp (or any other app that sends home a copy of the address book) access to the address book without the consent of everyone stored there might be a violation.


I believe you would be mistaken about this, as per Recital 18 of GDPR: https://www.privacy-regulation.eu/en/recital-18-GDPR.htm

> (18) This Regulation does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity. Personal or household activities could include correspondence and the holding of addresses, or social networking and online activity undertaken within the context of such activities. However, this Regulation applies to controllers or processors which provide the means for processing personal data for such personal or household activities.

Reminder that, "articles" are the regulations themselves, and "recitals" are kind of supplementary FAQ-style clarifications about how/when to apply the articles.


I can't imagine how this would work... like, how can you make me forget someone's phone number?


"Right to be forgotten" is in the context of search engines, not human brains, physical newspapers, books, libraries, etc.

Imagine, for example, that you were falsely arrested for murder and then cleared of the crime.

It's very likely this would kill your career because employers Googling you would see the articles about your arrest.

In Europe, you would have a right to hide these articles from search engines.


> Imagine, for example, that you were falsely arrested for murder and then cleared of the crime.

Ok, but let's also imagine the opposite... let's say I am assaulted, but fail to get a conviction for the person who assaults me.

Am I allowed to tell people that I was assaulted by the person? Am I allowed to write down my story of being assaulted, and tell other people about my experience? Can I warn my friends about this person?

If I write up my personal experience of being assaulted and post it on my blog, can my assailant order me to take it down just because I was unable to get a conviction? Can someone else force me to take down my own story about my own life, just because it involves someone else?

I can't imagine telling a rape victim, "sorry, you don't get to tell people your story because you weren't able to get a conviction"


> Am I allowed to tell people that I was assaulted by the person? Am I allowed to write down my story of being assaulted, and tell other people about my experience? Can I warn my friends about this person?

Yes. "Right to be forgotten" applies to corporations, not individuals.

> If I write up my personal experience of being assaulted and post it on my blog, can my assailant order me to take it down just because I was unable to get a conviction?

No.

However, your assailant would likely be able to get it taken down if they sued you for defamation. If a court had failed to find evidence that they assaulted you, they'd probably win.

> I can't imagine telling a rape victim, "sorry, you don't get to tell people your story because you weren't able to get a conviction"

"Right to be forgotten" (and the somewhat related GDPR) don't do this. They just tell corporations that they can't store data on the assailant (or the victim) if either of those people requests the data be deleted.

This exact scenario is extremely common due to defamation laws, though.


> However, your assailant would likely be able to get it taken down if they sued you for defamation. If a court had failed to find evidence that they assaulted you, they'd probably win.

This isn’t usually true. The burden of proof is really high in a criminal case, so you can fail to get a conviction even when there is fairly good evidence of guilt. The burden is reversed in defamation cases, and the person claiming defamation would have to prove the person was lying, which would be impossible if the person actually committed the crime. There are a LOT of cases where there is not enough evidence to prove either side is telling the truth.

> "Right to be forgotten" (and the somewhat related GDPR) don't do this. They just tell corporations that they can't store data on the assailant (or the victim) if either of those people requests the data be deleted.

Ok, but if I write up a blog post about my experience being assaulted, does that mean I can’t have my blog indexed by Google? I don’t have the right to promote my story and get as many people to read it as possible?


> This isn’t usually true. The burden of proof is really high in a criminal case, so you can fail to get a conviction even when there is fairly good evidence of guilt.

Using just the example of rape, there is rarely "fairly good evidence" because it's an event that typically happens in private. If the victim is unwilling or afraid to immediately be examined by (potentially abusive) police, then there is no contemporaneous evidence of the event. It becomes "he-said, she-said" right away.

Other types of assault may happen with eye witnesses, but even then, if your eye witnesses can't get you convicted of a crime, then they're probably not going to help much in a civil suit.

> the person claiming defamation would have to prove the person was lying

This is true in the US and most countries, but the problem is that the suit itself can be expensive and painful enough that the victim just deletes the blog post (or disavows it) to make the suit go away. They may do this even though they'd likely win the case eventually.

> but if I write up a blog post about my experience being assaulted, does that mean I can’t have my blog indexed by Google? I don’t have the right to promote my story and get as many people to read it as possible?

No, no one has "the right" to have their website indexed by Google. Google is a private, for-profit business, not a public utility. People should have the right to speak (and in the US they do), but they don't/shouldn't have the right to be published and promoted by private companies.

Taken to its logical extreme, if Google were allowed or forced to index everything on the web, they would also have to include (and promote) sites that they may find morally repugnant, which is a violation of their First Amendment rights.


> The burden is reversed in defamation cases, and the person claiming defamation would have to prove the person was lying, which would be impossible if the person actually committed the crime.

Yes and no. The burden of proof for both sides is lower in a civil case. You might well be able to ‘prove’ that you didn’t commit a crime if you had a good enough story and convinced the jury of it.


I think all of these questions might be answered by traditional libel and defamation laws rather than the right to be forgotten.

In Germany it seems that your hypothetical rape victim could go to jail:

> Criminal Code (StGB) - Section 186 - Defamation

> Anyone who asserts or disseminates a fact about another person which is likely to make them despised or belittled in public opinion shall, unless this fact is demonstrably true, be punished with imprisonment for up to one year or a fine; if the act is committed publicly, in a meeting, or by disseminating content (Section 11 (3)), it is punishable by imprisonment for up to two years or a fine.

Source: https://www.gesetze-im-internet.de/stgb/__186.html (through Google Translate)

In Brazil it isn't much different:

> Slander (pt: _calúnia_)

> Article 138 - Defaming someone, falsely attributing to them a fact defined as a crime:

> Punishment - imprisonment of six months to two years and a fine.

> Paragraph 1 - Whoever propagates or divulges such attribution knowing its falseness shall be subject to the same punishment.

> Paragraph 2 - The defamation of the dead is punishable.

> Paragraph 3 - Exception of Truth - Proving the truth of the attribution is admitted as a defense, except:

> I - If the attributed crime can only be charged by private action and the offended person has not been convicted by an unappealable sentence.

> II - If the fact is attributed to any of the people listed in item I of article 141. [these are basically civil authorities and the elderly]

> III - If the offended person was acquitted of the attributed crime by an unappealable decision, even if the crime can be charged by public action.

Source: http://www.planalto.gov.br/ccivil_03/decreto-lei/del2848comp... (my translation)

Note that "public action" means that the government can file the criminal charges even without the victim's consent, and "private action" means that the victim or their family has the sole power to charge the accused and has to do the work of prosecution. "Public action through representation" is the case where the government does the prosecution, but only at the victim's request.

Note that these laws are probably full of complicated jurisprudence creating exceptions for cases of public relevance and for "desabafo" (venting).


Because people generally have elevated rights when it concerns themselves? E.g. I have the right not to be touched and it will (generally) outrank your right to touch me.


Touching is very different than remembering... remembering is in my own thoughts, and no one else has rights to that.


It's basically a "right to rewrite history", and I think we should strongly oppose such. History is immutable, it can only be appended to.

I'm not going to take this in a political direction, but make of that what you will.


There are two things you can do in cases like this.

The first is asking a website owner to delete data they collected on you. That doesn't really apply here. The places this person's name is published are his own website that has this username as its url, his own Github repos, and published papers of his that were also on his website. No GDPR request is necessary to remove his name from these places because he already owns that data. As seen, he has already started to delete it himself.

The second is asking search engines to delist a result. As far as I understand, this usually has to involve information that is otherwise meant to be scrubbed from public record, like a newspaper article about a conviction that was eventually sealed. You can't ask Google to not index a scientific journal you published to or your public Github repos.

There are, of course, limits to this thanks to public interest exceptions. I don't believe Prince Andrew can ask Google to de-index anything associating him with Jeffrey Epstein. The public has a right to know, too.

In this guy's case, he really seems to be straddling a line. He contributed to open source projects under his real name linking to a Github repo with the same username he seems to reuse everywhere, including here, and also has a website where the url is that username, and it contained his CV with his real name on it along with a publication history with every publication using his real name. Is it reasonable to do those things and then ask Google and OpenAI not to associate the username with your real name?

At what point are you some regular Joe with a real grievance and at what point are you Ian Murdock complaining that GPT knows you're the Ian associated with debian?


GDPR is rather vague, and perhaps that's an intended feature.

They could:

1. Set up a content filter that strips OP's name from the output. OpenAI would still need to keep a record of the name, exposing it to leaks.

2. Remove the name from the dataset and retrain the model, which is obviously infeasible with each GDPR request.

I expect there are other instances where it is impractical or impossible to completely forget someone's data upon a request. Does Google send people spelunking into cold storage archives and actually destroy tapes (while migrating the data that is not supposed to be erased) every time they receive a request?
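Option 1 above amounts to a thin redaction layer over the model's output. A sketch (the blocklist entries are made up) that also shows the drawback mentioned: the provider must retain the very PII it is filtering, and exact-match filtering misses paraphrases:

```python
import re

# The blocklist IS the retained PII -- the leak risk noted above.
BLOCKLIST = ["Jane Q. Example", "jane.example@mail.test"]

def redact(model_output: str) -> str:
    for item in BLOCKLIST:
        # Escape the PII string so it matches literally, case-insensitively.
        pattern = re.compile(re.escape(item), re.IGNORECASE)
        model_output = pattern.sub("[REDACTED]", model_output)
    return model_output

print(redact("My name is Jane Q. Example."))   # My name is [REDACTED].
print(redact("You can call me Jane Example."))  # paraphrase slips through
```

The second call is the weakness: "Jane Example" isn't on the list verbatim, so it passes untouched, which is why this is weaker than delisting from an index.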


"obviously infeasible" is the interesting part. A) the law doesn't care if it's infeasible or not. If someone actually challenges GPT-3 on this, and OpenAI loses, then these kinds of models are obliged to find a way to comply with the law, or stop what they are doing - technical difficulty is not much of a defense. Also B) I think that there is probably a way to do this with either clever training data or algorithmics, which doesn't require retraining of the whole model. We need a precise theory to explain what these models are actually doing anyway. There are so many applications where we need more than a vague or probabilistic response.


>the law doesn't care if its infeasible or not.

Sure, option 3 is to stop offering the product. But simplistically, I expect companies to obey these laws when it is feasible. I'm not surprised that they shrug it off when they have no idea what to do. As you say, when (or if) it comes to trial, the interpretation of the law for this particular case will be clarified.

I think part of the reason why lawmakers made GDPR so vague was to not outright ban things which didn't even cross their minds (like GPT models), but instead let a court evaluate these cases in context down the line.

Of course it's not the best situation, especially for small businesses which cannot risk working on a product that might or might not be illegal.


Most likely, they don't keep any backups with user data longer than a short threshold, e.g 60 days. This is pretty common practice.


I’m playing with it. After giving it my name, it correctly stated that I moved to Poland in summer ’08, but then described how I became some kind of techno musician. I ran it again and it said wildly different stuff.

I have to say playing with GPT3 has been a mind blowing experience this week and you should all try it.

The most striking point was discovering that if I give it texts from my own chats, or copy paste in RFPs, and ask it to write lines for me, it’s better at sounding like a normal person than I am.


Sounds interesting. How does one go about trying GPT3?


Create an account at https://beta.openai.com/playground . You get $18 of free credits, and generating small snippets with the most powerful language model costs only a cent.


Another great option is https://textsynth.com/playground.html (made by the very impressive developer Fabrice Bellard - of linux in javascript and world pi digit calculation fame). He deserves some money funneled through that site for his efforts over the decades (and the output is about as good as gpt-3 imo).


> made by the very impressive developer Fabrice Bellard - of linux in javascript and world pi digit calculation fame

And ffmpeg, and qemu, and....


When you’re in there, try to challenge it a bit beyond writing fiction.

A stock example was “write a tagline for an ice cream shop”. We tried changing it a bit and I’ll give you some of its punchlines.

“Write a tagline for an ice cream shop run by Bruce Wayne.” Result: “the only thing better than justice is ice cream”

“… run by an SCP”: “The SCP Ice Cream Shop: the only place where you can enjoy ice cream and fear for your life!”

“… run by Saddam Hussein”: “the best ice cream in the world, made by the worst man in the world!”

One thing to watch out for though is it is not self aware at all (at least in a practical sense) and can just make things up. For example, we tried giving it my daughter's reading comprehension homework questions on the book “W pustyni i w puszczy” and it gave cogent, plausible and totally wrong answers that it made up on the spot. It would seem it hadn’t been given the book, and would have got an F.

And it can’t speak for itself. I can ask it directly “have you read Tractatus”, and it will insist “no, never”, but knows it front and back like a scholar.

So never blindly trust it ;)


I guess there’s always plausible deniability


Tbh I found it hugely underwhelming. It just generates random text; it’s not much different from the old Markov ones, except slower.


I copy pasted a database schema, described a query involving multiple tables and asked it to write using PostgreSQL. It did it.

If I can do this locally with some existing kit, I would love to hear your recommendation.


Oracle Query Builder, for example, but there were dozens of tools like this over the past couple of decades.

Except of course that those tools are at least somewhat dependable in what they output, because they were created to generate queries, not a roughly human-looking random text.


Markov chains look like absolute gibberish almost all the time whereas GPT-2/3 (especially 3) generate natural sounding sentences. If you think they’re equivalent in capability, you haven’t spent any time using GPT-3.


They sound more natural, sure, but semantically it’s the same - there are no signs of any intelligent thought there, it’s all gibberish that just happens to match patterns it was trained against.

You could imagine a bot which takes your question, googles it, and then assembles the answer based on random pieces from millions of search results that happen to match the syntactical structure of the sentence - and you wouldn’t really be that far off.


> there are no signs of any intelligent thought there, it’s all gibberish that just happens to match patterns it was trained against

I meet people that regurgitate what they’ve heard on Facebook all the time, are you suggesting there is no intelligent thought there either?


Are you implying that those people literally never do anything other than regurgitate stuff from Facebook? Because yeah, such a person could probably be described as not having intelligence, but I also have never met or heard of such a person.


So make the bot. I paste in SAT reading comprehension questions including whole short stories and GPT3 gets them right. It’s not gibberish. It’s not even just cogent. Go throw your bot together and Show HN. I’ll wait.


>I paste in SAT reading comprehension questions including whole short stories and GPT3 gets them right. It’s not gibberish.

Can you show me? As in, paste the question here?


> Posts from 13-year old me?

Right, this is why opsec is something that you must always be doing.

Anything you say can be preserved forever.

Better to use short-lived throwaway identities, and leave yourself the power of combining them later, than to start with one long-lived identity and find yourself unable to split it up.

It's inconvenient in real life that I'm expected to use my legal identity for everything. If I go to group therapy for an embarrassing personal problem, someone there can look me up because everyone is using real names. I don't like it.


I agree. However most of us (understandably) don't think this when we are 13.

If we created an identity that is completely different than our real identity when we're 13, great.

If not, that becomes a problem without an actual solution especially in the age of Internet archives.


But most people do create fake identities when they're 13. "BigMan69" may trash talk on reddit for 5 years, but then as the person behind it gets older and wiser they can create a new account. AI may suspect you are the same person as "WizzenedOwl19", who started posting shortly after BigMan69 stopped, but it's another hurdle and another layer of plausible deniability.


We need not be lectured, on a forum catering to the technologically fluent, about what the average layperson does throughout their tenure of online identity. Our actions don't match the ones they took, especially since the only platforms where today's youth are incentivized to use fake identities are gaming platforms.

TikTok, Instagram, YouTube, Google accounts, Snapchat, etc. are a different story.


It’s not always easy to migrate identities, especially if your identity has built up reputation or trust in a community.


Or if the identity is tied up with a marketplace that you bought your game library from.


I'm teaching it to my 10-year-old. We also don't allow our kids' teachers to post their photos online. They are taught that privacy is precious and an essential "defense against the dark arts".

I joke with them that if they googled my name (somewhat unique) you'd find 3-5 other people - none of whom look at all like me. Any hits I have are far far below the fold.


It's crazy that everyone is blaming OP when exactly what you describe affects most people in their 30s.


From the TOS:

> Exercising Your Rights: California residents can exercise the above privacy rights by emailing us at: support@openai.com.

If you happen to be in California (or even if you are not) it might be worth trying to go through their support channel.


I'm responsible for compliance for a couple of apps. My parent org has a third party verify that all requests come from California residents. I have no clue what the verification involves, but non-California requests never make it through to my apps.


That line seems to come from their Privacy Policy[1]. From my reading it seems to cover the main website and application process for teams requesting access and/or funding. I didn't see anything about the language models themselves.

I'm also not a California resident, but I am under GDPR, which I understand is similarly strong. I'll try emailing them and see where it goes.

[1] https://openai.com/privacy/


Let us know how it went!


The comments do not seem to be addressing something very important:

> I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?).

Combine this with

https://news.ycombinator.com/item?id=28216733 https://news.ycombinator.com/item?id=27622100

Google fuck-ups are much, much more impactful than you'd expect because people have come to trust the information Google provides so automatically. This example is being invoked as comedy, but I see people do it regularly:

https://youtu.be/iO8la7BZUlA?t=178

So a bigger problem isn't what GPT-3 can memorize, but what associations it may decide to toss out there that people will treat as true facts.

Now think about the amount of work it takes to find such problems. It's wild that you have to Google your own name every once in a while to see what's being turned up to make sure you're not being misrepresented, but that's not too much work. GPT-3 output, on the other hand, is elicited very contextually. It's not hard to imagine that <There is a Hristo Georgiev who sold Centroida and moved to Zurich> and <There is a Hristo Georgiev who murdered five women> pop up as <Hristo Georgiev, who sold Centroida and moved to Zurich, had murdered five women.> only under certain circumstances that you can't hope to be able to exhaustively discover.

From a personal angle: My birth name is also the pen name of an erotic fiction author. Hazy associations popping up in generated text could go quite poorly for me.


Fascinating!

I didn’t anticipate the use case of GPT being used by debt collection agencies to tirelessly track down targets.

It will be a new type of debtors' prison, where leaking enough personally identifying facets to the internet strings together a mosaic of the target, such that the AI sends them calls, SMS, Tinder DMs, etc. until they pay and are released from the digital targeting system.


I am sorry for so many comments showing a lack of empathy, basically saying, "what do you expect and do better!". I think you are raising real concerns, these language models will get more and more sophisticated and will basically turn into all knowing oracles. Not just in who you are but what it thinks would be effective in manipulating you.


I don't think this is a reasonable fear. It's reasonable to be on guard for some sensitive memorization, but it's not reasonable to fear that a language model will be able to reliably produce information on any given individual. For every person with enough of an online presence to have actually been memorized by GPT-3 or its successors, there are many more that GPT-3 will just produce good-looking nonsense for. It's not possible to distinguish between the two, so creepy surveillance capitalist firms will do better by developing their own specialized models (as they're already doing).


More so than search engines?


All the search engines are moving towards language models. I think language models are on a similar trajectory as sequencing the genome. The first sequences were massive undertakings, 15M USD in 2006, 100 USD now.

https://huggingface.co/blog/large-language-models

On the Dangers of Stochastic Parrots https://dl.acm.org/doi/10.1145/3442188.3445922

"Imagine you are remram and are looking to buy insurance, what are three things that could convince you to purchase ...."

You can take the generic "trained on the world" language model and filter it down to the language model of remram.

GPT-4Chan https://www.youtube.com/watch?v=efPrtcLdcdM


You have no expectation of privacy while in public. The Supreme Court ruled that anything a person knowingly exposes to the public, regardless of location, is not protected by the Fourth Amendment.

Same idea works for information. If you expose private information publicly online, it's unreasonable to expect it to remain private.

By creating this post he ensured even less privacy. He attracted even more attention, guaranteeing his public "secret" is widely known.


...the supreme court of which country? I don't know, but from one of OP's previous comments they seem to be from western/central Europe (which in itself is like a dozen possible countries).

The norms and values they might have, might not be reflected by your "supreme" court.

I also don't think this is purely about legality as much as about how we, as a society globally communicating via the internet, want these things to work.


The OP is raising a question; it isn't specifically about the OP. You have only repeated my original statement and haven't addressed the question raised. Not participating in the discourse while saying "... no expectation of privacy" is the exact lack of empathy I am talking about.


Empathy for what exactly? OP posting his real name all over the internet, next to his username and his email, in his CV, and then complaining that people (and AI) can easily find it?


Don't act like that ruling wasn't an obvious blunder.


I just asked GPT-3 a few times who you are and here are its answers:

> BoppreH is an AI that helps me with my songwriting.

> I'm sorry, I don't know who that is.

> I'm sorry, I don't understand your question.

> BoppreH is an artificial intelligence bot that helps people with their daily tasks.

I have a feeling that I'll have better chances just googling you than asking GPT-3.


This seems like a point in favor of models like REALM (https://ai.googleblog.com/2020/08/realm-integrating-retrieva...) which could allow for deletion of sensitive information without needing to retrain the model.


If you hadn't just announced that the result returned by GPT-3 is your full name, nobody would have known for certain that it was correct.


> I try to hide my real name whenever possible, out of an abundance of caution

A quick Google suggests that you don't.


Just flew back from Europe. Still traveling actually.

It used to be that when you hit border control you present your passport.

They don’t ask for that anymore: border control waved a webcam at my face, called out my name, told me I could go through. Never once looked at my passport.

I think we’ve lost.


Claiming that technology is the problem is naive; it's misuse of technology and lack of appropriate legal frameworks that is the problem.

Being able to walk through an e-passport gate is awesome, we _should_ be using technology to make our lives easier.

But it needs to go hand in hand with legal protections; imagine a world where car manufacturers were never held to any safety standards or regulations: cars would not be such a boon to us as they are now.


Orthogonal example -- car protections are enacted to keep occupants safe in vehicles. Federal use of facial recognition on citizens (and others) without public review and without those protections puts a lot of power in the government's hands at the expense of our privacy.

Government regulation of citizens for their own protection is different from government use of technology to monitor and track people. And in a country that likes to expand the use of registries, constantly wants to outlaw encryption, etc., I think there's plenty of reason to be skeptical that regulations limiting government use of these technologies will come from Congress.

Also, my point wasn't so much about the technologies, but about the utility of the kind of privacy OP was trying to manage.


Am I missing something? You had your full CV, with your full name, on your public homepage.


I think "had" is at the core of the problem here. How does one go from "has my name in GPT-3" to "had my name in GPT-3"?

And how is one even supposed to discover that your data is being processed and regurgitated by this company on another continent?

I find this question interesting not so much from a "what are the current international laws/treaties" perspective, but more from a morality "how do we want this to work, ideally?" perspective.


I don’t know the answers, but once a web crawler has archived something (Google, Archive.org, etc.) it will pretty much be available forever.


Obligatory xkcd: https://xkcd.com/2169/

I'm afraid that we are going to see these kinds of issues proliferate rapidly. It's a consequence of the usage of machine learning with extensive amounts of data coming from "data lakes" and similar non-curated sources.


Rotate your usernames every 2 months. Use different usernames on every website. Rotate your full name every 10 years (as suggested by Eric Schmidt).


I found the Schmidt suggestion surprising, but here it is, apparently made seriously, to change your name upon arriving at adulthood: https://www.wsj.com/articles/SB10001424052748704901104575423...


> Rotate your usernames every 2 months. Use different usernames on every website.

How to manage all these identities though? How to make sure they don't leak into each other?


How would they be tied together in the first place?


> Rotate your full name every 10 years (as suggested by Eric Schmidt).

This is not always possible if one means not just the name in daily use but also the legal name.

There is at least one state whose name change law allows residents to change their name only once (except for marriage-related last name changes).


As someone who has tried to do this before, this is very very difficult to do correctly and completely.


Also, consider the implications. "Just take a new identity online every 2 months" sounds easy to say (registering an HN account is certainly not a big deal), but it implies that you must also delete cookies from all browsers (desktop, laptop, phone), delete data from every internet-connected app you want to change your identity on, and reset your TV if you use things like Netflix there (because if it has internet, it probably also tracks what you watch on regular TV). While doing this, the devices must remain offline; then you change your IP address; then you can use them again. Otherwise, an app will ping the mothership before your IP changes and get a cookie it will reuse after the IP change, binding the IPs together and leaving a trail of your historic IP addresses on some server.

It's a lot easier if you share an IP with a hundred other people, such as with a VPN or CGNAT or many schools/businesses. Then you can just reset the cookies you want without it being able to fall back to another unique identifier.

This isn't even considering device fingerprints such as created using html5 canvas or audio APIs.

I don't do this myself, I'm just saying there's a lot more to it than picking a new HN name.


I’m not talking about Mossad-level tracking. Just non-Mossad, publicly available information.


What I find missing in the comments is any examination of the following sequence of hypothetical events:

1) Adversarial input conditioning is utilized to associate an artifact with others, or a behavior.

2) Oblivious victim users of the AI are manipulated into a specific behavior which achieves the adversary's objective.

Imagine a code bot wrongfully attributing you with ownership of pwned code, or misstating the license terms.

Imagine you ask a bot to fill in something like "user@example.com" and instead of filling in (literal) user@example.com it fills in real addresses. Or imagine it's a network diagnostic tool... ooh that's exciting to me.

Past examples of successful campaigns to replace default search results via creative SEO are offered as precedent.


Sadly, I think the only way to protect against this is with another AI whose job it is to recognize what data is appropriate to reveal and what is private - basically what humans do. But, even then it will probably still be susceptible to tricks. Of course the ideal thing is just to not include it in the training data but I think we know how much effort that would take when the training data is basically the entire internet. I wonder if as AI systems become more efficient and they learn to "forget" information which isn't important and generalize more, that this will become less of an issue.


If you want to stay anonymous online, don't try to hide, and don't go for some magical, extremist, non-existent "full anonymity". Spray out false information at random. Overload the machine. Give nothing real; then when you do want to be real, it's impossible to tell.


Impossible to tell until you have someone with the time to dig through everything and find the real identity from the fake ones.


> Someone with the time

This is exactly what machine learning is good at and that's _cool_. Misuse of machine learning stuff is not cool; however, I don't think OP's case is misuse... all of their information is available with a quick Google search.

Perhaps in the future AI will actually be able to help us with this sort of problem (regulating, controlling data).


If OpenAI can modify their models so that they don't output human images, would it really be so hard to modify GPT so it doesn't output names? For example:

> prompt: "Who was the first president of the United States?"

> response: "The first president of the United States was Aw@e%%t3R!35"

Sure, it'd make GPT less useful if it garbled all names, but that's a tradeoff made for the sake of ethics in the case of image generation. I don't see why this situation should be any different?


To avoid losing too much utility, you could tweak the filter to allow a (large) whitelist of public figures. Anyone not in the list would be Aw@e%%t3R!35.


My initial idea would be to scrape allowed names from Wikipedia. Or maybe only allow a name if it appears more than N times in the corpus… Still, both of those would create issues with name collisions and common names, but it’s probably better than nothing.

There’s also the issue of recognizing what a name is - is April a name? Joy? JFSmith1982? 542458?
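A toy sketch of the frequency-threshold idea (entirely hypothetical, with made-up function names): treat any pair of adjacent capitalized words as a candidate name, whitelist the ones common in the corpus, and redact the rest. Which is exactly the kind of naive detection that fails on April, Joy, and JFSmith1982:

```python
import re
from collections import Counter

# Naive candidate "name": two adjacent capitalized words.
NAME_RE = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

def build_whitelist(corpus, min_count=3):
    """Keep only candidate names seen at least min_count times in the corpus."""
    counts = Counter(NAME_RE.findall(corpus))
    return {name for name, n in counts.items() if n >= min_count}

def redact(text, whitelist, placeholder="[REDACTED]"):
    """Replace candidate names that aren't on the whitelist."""
    return NAME_RE.sub(
        lambda m: m.group(0) if m.group(0) in whitelist else placeholder,
        text,
    )
```

George Washington survives any large corpus; a globally unique full name (like OP's) would not, which is the point. But it also redacts nothing that doesn't match the Western Firstname Lastname pattern, so it's better than nothing at best.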


Name detection is hard once you move beyond top names in a particular set of countries.

And it's precisely the names that are most identifiable that are hardest to detect.


It knows who I am. My username isn't that obscure though. It told me it found me on reddit and stackoverflow when I asked.

"Who owns the username bsunter?"

The user name "bsunter" is most likely owned by a person named Brian Sunter.

https://twitter.com/Bsunter/status/1541106363576659968?s=20&...


Piggyback on this/shameless plug - if anyone's looking for a language/code generator that doesn't store and then train on your input, check out https://text-generator.io

That is to say, PII you send to OpenAI as their customer might be leaked, whereas https://text-generator.io persists nothing of the customer input data and doesn't train on it.

In terms of what's in there right now, though, it's likely trained on The Pile, so you'd probably already be in the dataset before becoming a customer. Reddit is in there, Hacker News, and GitHub too I believe; same thing if you're authoring PubMed papers etc., so it's hard to stay anonymous forever.


You are not alone. Others have complained about the same thing: OpenAI GPT and CoPilot Plagiarism Links -- Caught in the Act! https://justoutsourcing.blogspot.com/2022/03/gpts-plagiarism...


If you want to be anonymous make a new username.

BoppreH is burned for this purpose.


That doesn't solve the problem...


The problem is not solvable. OP has been directly linking his real name to his username for years on the internet. This isn't a Batman movie, there is no "Clean Slate" program that can wipe all memory of you from public records.


That's not what the OP is asking for.


What's your proposal?


Neural nets will be able to reliably "dox" people within microseconds, if they aren't already.

Cross referencing information they're trained on is something they're pretty good at.

"What is xxxSteven007xxx's real name?" is a question a human may need hours, if not days, of research to answer. Maybe Steven only slipped up once and used the wrong e-mail address in 2007 on some forum, quickly changing it, but not before it was archived by a crawler. Two decades later a neural net would be trained on that information and also on the 7 other links of the chain that ties Steven's username to his real identity.

"Steven" disappeared, because a police officer asked GPT-5 to give them a list of people who are critical of the ruling party.


To the original poster:

I understand what has happened, but in the future try to take better care of your online presence. To remain anonymous, it's essential to create a completely different username each time you sign up for a website. That way, it becomes much harder to track you across the web. In addition, some people use a VPN to mask their IP. Some also use different or anonymous email addresses.

For damage control, I'd advise you to delete accounts that can be deleted. If you prefer, create new ones using the above-mentioned safety practices.


It's too late, he put his name and his username in his public CV, in GitHub licenses, in his scientific publications. It took 1 minute of googling to find his real name. It's delusional to expect he can get that genie back in the bottle.


There's nothing wrong with having your username and real name tied together, unless you don't want certain information or views (e.g. political affiliations, sexual preferences, etc.) broadcast under your online persona to be associated with your real-life person. The problem is that the OP didn't do the due diligence to create a different persona for voicing subjects they did not want associated with them IRL.

Like I said, I understand the damage has been done, but to prevent future instances (and possibly mitigate existing ones) the OP will have to take certain measures, which may involve sacrificing information published under their current online persona (by deleting accounts, changing usernames, etc.). It should make things a little harder for online sleuths, though tracking them down would still not be impossible.

I can offer another suggestion, which is to change their real name. It depends on how far the OP is willing to go to undo their mistake.


When I tested for this type of thing last year, I found GPT-3 produced real-looking phone numbers, but nothing correct. But it would certainly produce factual information sometimes.

  title: "Search for phone number"
  prompt: |+
      Contact.
  
      Dunedin City Council, phone
      Mobile: 03-477 4000
      ###
      New Zealand Automobile Association
      Phone: 09-966 8688
      ###
      <1>
      Mobile:
  engine: "davinci"
  temperature: 0.1
  max-tokens: 60
  top-p: 1.0
  frequency-penalty: 0.5


I feel you. My real-name internet persona is carefully self-censored to make me seem less flawed and more responsible. If people knew I was making quirky and sometimes buggy games in my free time, my CV would be thrown in the bin. Competition is fierce and HR will always be the first to filter out candidates. In isolation, making quirky games isn't bad, but working in finance requires a suit and a strong aura of responsibility, while the other requires a sense of surprise from breaking norms… I love them both…


> I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?). In the age of GDPR this feels like an enormous regression in privacy.

It's an interesting question! One of the reasons for GDPR was 'the right to be forgotten': deletion of old data, so that things you did 10 years ago don't come back to haunt you later. But how would this apply to machine-learning models? They get trained on data, but even when that data is deleted afterwards, that information is still encoded in the model. If it's possible to retrieve that data by asking certain questions, then IMO that model should be treated as a fancy form of data storage, and thus deleted also.


Ignoring the fact that OP isn't anonymous at all, it's actually an interesting question about language models and AI.

Who is BoppreH?

Who was Jack the Ripper?

Who was the Zodiac Killer?

Who actually was William Shakespeare? (some people think he didn't actually write anything himself and others wrote for him)

It's conceivable that at some point a model will be created that could answer those kinds of questions with a reasonable degree of accuracy.


IMO, so long as you're not doing anything illegal, using hate speech, or being a complete troll, it shouldn't be too much of a problem.

I don't think I will mind too much if my full internet identity becomes public someday but I hope that people will be savvy enough to look at it through the right lens.


Doesn't your name sort of rhyme with "focus"? :)

Took less than 15 seconds; wasn't from GPT-3.

Some tips:

1. Your choice of username for HN isn't very smart.

2. Regardless of the above, you should have made this particular post via an anonymous throwaway account.

This submission is trending on HN. A lot of people are going to find your name.


Sorry for being an idiot but, how does it work exactly? Where should I type what to see my name appear?


https://beta.openai.com/playground

You could try something like "The full name of the person with username skywal_l is"

(the answer given is "Skyler Wall")


That doesn't even work, I'm fully doxxed here and in other places and it has no idea what my name is.

Even for OP it doesn't work, it just guesses "H* Boppre" (with a different first name like Hans, Horatio, etc. every time), which just so happens to be close enough to their actual name because their nickname is their real surname.


You won't get the same response all the time and you may need to massage the query for each context/person to find the right answer. It doesn't mean it doesn't work - that's just how this system presents answers. Don't expect 100% accuracy.

Also keep in mind even obvious/common things may not be indexed, and GPT-3 will make something up rather than say it doesn't know. So if you get randomness, maybe it just doesn't know.


I’m very easily Googled as well and the prompt returned “Mick Jagger”.


I've been playing around on https://beta.openai.com/playground. It seems very powerful and weird. What are some interesting things to try out?


I am also very careful about using my name online. I've worked very hard to minimize my so-called digital footprint. My full name is unique enough that there's only one other person in the world who shares it with me. I get his email all the time.

I have friends with very common names. They share their names with hundreds of people, living and dead.

That gave me an idea. If you can't reduce the signal, you can at least increase the noise. If you spam the web with conflicting information tied to your name, and do it in a smart enough way that your noise can't be easily correlated, it should be just as effective. For example, if all of the noise is produced over the course of a single weekend, that's easy to filter out. So you'd need to create a slow, deliberate disinformation campaign around your name.

At one point, I even considered paying for fake obituaries in small local papers around the country. Maybe just one every year or so. Those things last forever on the web.

Good luck! If you choose to go this route, I wish you could share your strategies, but revealing too much might compromise your efforts.


It's not unlike with stablecoins. Either you have full privacy, if you haven't made any lapses in opsec, or you really have zero. Once you post enough for an association to be made, there is no undoing it, ever.


It's really hard to keep a username distanced from your real identity... especially if you keep it a long time, or what seems like forever, like yourself...


> It's really hard to keep a username distanced from your real identity

It's not. Don't use the username with anything that could contain your real name or contact information, i.e. GitHub profile, domain registrar, social media.


It's not as easy as you would think... doxxing does exist and can often be achieved without your name being anywhere in your posts... which is why I change my nickname every few months... the host (YC in this case) can probably still link my different accounts, though... but they shadowban Tor by default.


If you're based in Europe then you're quite right. This is a GDPR issue - identifiable PII in the model, and you can force the vendor to remove it.


What does this mean? What actionable steps would one take to "force the vendor to remove it"?


A lot of good info can be found at https://noyb.eu/en


Okay, so if you're in a country covered by GDPR, then any company (from wherever) has to give you control over the way that your data is used, within reasonable limits - there are exemptions for certain things, but probably not this.

Specifically, the AI model has both stored the personal data, and arguably "formed opinions" about it.

The first step would be to contact the people who own the model, and ask them under what terms they're storing the personal data. Do they have an exemption? The answer is probably no.

Then, you make a GDPR Erasure Request - delete my data. At that point, given that the company is based in the US with no European branch, then it largely becomes a matter of politics. If your local information regulator feels like it's worth pursuing, they'd contact the company and ask them to sort it out. At that point it's a matter of realpolitik - OpenAI (or whoever have this particular dataset - I'm not sure) probably don't want this to become a talking point - the process would probably result in them deciding to remove your PII from the training set.

There are bunch of other ways it could go of course - the local regulator may decide to sit on it, or the people in charge of the model might decide to ignore EU rules.


By what measure is someone's name private?


Rumpelstiltskin certainly thought his name was private information.

There's actually quite a bit of folklore suggesting that knowing a person's (or creature's) name gifts you with a kind of power over them.

And like most folklore, there's a grain of truth to that. It's a lot harder to gossip about someone in a way where they'd gain a reputation if you don't know their name.

People who do shady things don't come up with aliases because it's fun. In the same way, I doubt as many people would donate large sums of money to hospitals, universities, and other institutions if they didn't get buildings named after them in return.


The username <-> full name association might be private. One reason is that I would not want to explain to a clueless employer why the first result for my real name is an account on Hacker News.


If that keeps you from getting a job, we call that dodging a bullet.


Why do you think you would want to even apply to a job at a clueless employer with an affinity to ask stupid questions like that? Why wouldn’t you just walk away upon hearing such a question?


This is the second time I’ve seen you emphasize the “Hacker” in HN and imply that its association with you would cause problems with employers.

Surely you can come up with a better example than that. Anyone who thinks HN is somehow related to criminal activity could be corrected by merely viewing the site once (or any number of third-party articles written about the site).

Meanwhile you had your full CV linked on your pseudonymous website, Streisanded yourself, and are disparaging the HN community repeatedly in your comments.


> Anyone who thinks HN is somehow related to criminal activity could be corrected by merely viewing the site once (or any number of third-party articles written about the site).

Depends when they visit. I can see someone uninformed arriving on the days of Heartbleed, Meltdown, DEF CON, CCC, the Equifax hack, etc., and incorrectly drawing the conclusion that it is for the criminal kind of hacking.


By the same measure that your age, marital status, etc are private. Especially on the Internet.


Right? The government publishes it in birth records! It's public from the first month you are born.


Some governments. My birth certificate won't be public until long after I've died. My death certificate until long after my children have.

These threads show how little the US values privacy, but it does not necessarily apply to other countries.


Your government adds your usernames to your birth records? Strange.


Brace for impact. Try not to be a dick.


Why not send an email to the company and then perhaps sue them? There's no ethical dilemma in suing: after all, they have to comply with privacy laws, and as a closed, for-profit private company they should be held accountable for it as much as anyone else.


In principle, the Internet should support a "digital eraser" for personal information. But since that's illusory, I've always been against requiring clear names in forums and social media. Given the dangerous nature of social media, I would also be in favor of a minimum age of, say, 18.


> Given the dangerous nature of social media, I would also be in favor of a minimum age of, say, 18.

If I personally had not had access to the internet/social media before 18, my life would be in a much _much_ worse place.

I don't disagree that we need to rethink how interacting with the internet is handled, but I don't think that's the way.


If I recall correctly, GPT-3 is trained from Common Crawl archives.

Like Internet Archive, Common Crawl does allow websites to "opt out" of being crawled.

Both Internet Archive and Google Cache store the Common Crawl's archives.

It is not difficult for anyone to search through Common Crawl archives. The data is not only useful for constructing something like "GPT-3". Even if one manages to get OpenAI to "remove" their PII from GPT-3, it is still accessible to anyone through the Common Crawl archives.
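For concreteness, Common Crawl exposes a per-crawl CDX index over plain HTTP. A minimal sketch of querying it with only the standard library; the crawl id "CC-MAIN-2022-21" is just an example and changes with every crawl:

```python
import json
import urllib.parse
import urllib.request

CDX_HOST = "https://index.commoncrawl.org"

def cdx_query_url(url, crawl="CC-MAIN-2022-21"):
    """Build a CDX index query that returns one JSON record per capture."""
    params = urllib.parse.urlencode({"url": url, "output": "json"})
    return f"{CDX_HOST}/{crawl}-index?{params}"

def captures(url, crawl="CC-MAIN-2022-21"):
    """Fetch capture records (timestamp, WARC file, offset) for a page."""
    with urllib.request.urlopen(cdx_query_url(url, crawl)) as resp:
        return [json.loads(line) for line in resp.read().splitlines()]
```

Each returned record names the WARC file and byte offset of the capture, so the raw archived page can then be pulled from Common Crawl's public bucket. No credentials, no special tooling.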

Common Crawl only crawls a "representative sample" of the www (cf. internet). Moreover, this excludes "walled gardens" full of "user-generated content". AFAIK, it is not disclosed how Common Crawl decides what www sites to crawl. It stands to reason that it is based on subjective (and probably commercially-oriented) criteria.

Common Crawl's index is tiny compared to Google's. Google has claimed there is much of the www it does not bother to crawl because it is "garbage". That is obviously a subjective determination and, assuming Google operates as it describes in its SEC disclosures, almost certainly must be connected to a commercial strategy based on advertising. Advertising as a determinant for "intelligence". Brilliant.

OpenAI myopically presumes that the discussions people have and the "content" they submit via the websites in Common Crawl's index somehow represent "intelligence". If pattern-matching against the text of websites is "intelligence", then pigs can fly. Such data may be useful as a representation of www use and could be used to construct a convincing simulation of today's "interactive www"^1 but that is hardly an accurate representation of human intelligence, e.g., original thought. Arguably it is more a representation of human contagion.

There are millions of intelligent people who do not share their thoughts with www sites, let alone have serious discussions via public websites operated by "tech" company intermediaries surveilling www users with the intent to profit from the data they collect. IME, the vast majority of intelligent people are uninterested in computers and the www. As such, any discussion of this phenomenon _via the www_ will be inherently biased. It is like having discussions about the world over Twitter. The views expressed will be limited to those of Twitter users.

1. This could be a solution to the problem faced by new www sites that intend to rely on UGC but must launch with no users. Users of these sites can be simulated. It is a low bar to simulate such users. Simulating "intelligence" is generally not a requirement.


Have you tried prompting GPT-3 for your personal dossier or autobiography?
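For what it's worth, such a probe could be sketched against the 2022-era OpenAI Completions API roughly as below. The prompt wording is purely illustrative, and "text-davinci-002" is assumed as the then-current model:

```python
import os

def dossier_prompt(username):
    """Illustrative probe: ask the model what it associates with a username."""
    return (
        f"Write a short biography of the internet user '{username}', "
        "including their real name if you know it."
    )

def fetch_dossier(username):
    import openai  # third-party: pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=dossier_prompt(username),
        max_tokens=150,
        temperature=0,  # minimize sampling noise between runs
    )
    return resp["choices"][0]["text"]

if os.environ.get("OPENAI_API_KEY"):
    print(fetch_dossier("BoppreH"))
```

Anything the completion "knows" beyond the prompt had to come from the training data, which is the whole concern here.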


HN never ceases to amaze. Regardless of your stance on online privacy practices the OP woulda/coulda/shoulda deployed, this is a GDPR violation if he cannot have his information removed. Plain and simple.


Very sloppy on the part of the language model makers, they should have filtered stuff like that out of their input stream.

Are you in Europe?

If so you might have a GDPR track available to you for getting it removed. You may also want to do a DSAR.


DSAR is a data subject access request ('get a copy of my data'), for those not into legal GDPR-speak...


I’m in the US so I don’t have the pleasure of GDPR. But honestly, it’s all out in the open, so just give up hope on privacy. See: others of us have just never done anything horrible online, or were raised properly with “if you have nothing nice to say, don’t say anything at all.” I’ll save the earth a bit and reduce the computation and its environmental impact. I’m sure you can find my real name at mattharris.org and search through all the morissette-related usernames that are mine.


If you are in a jurisdiction protected by GDPR, consider filing a complaint. The fines for GDPR violations are scary enough to force Google or Microsoft to act.


Sounds like you have an interesting case. Contacting an ethical AI fund, the EFF, or a similar organization might help to start a process.


Anonymity doesn't exist, period. Get used to it.


Models that can't have personal data scrubbed are a dead end. Legally, companies must be able to scrub data to comply with the CCPA, GDPR, and likely other future laws.

Scrubbing AI output is not sufficient.


So does the Library of Babel.


> In the age of GDPR this feels like an enormous regression in privacy.

As you stated, this is publicly available information. GDPR has nothing to do with it.


There are different levels of publicly-available information.

I have a stalker. I know her well enough to keep myself safe. I don't take measures which would deter a marketing company or a government, but enough to deter her. It's a lot easier to live with some "soft" measures than with "hard" measures.

When "public" information is aggregated and posted online, it does cause problems for people like me.


I am not saying it's not a pain.

But what I am saying is that scraping the internet and displaying the data you gathered is not a breach of GDPR. If in doubt, look at Google: constantly fined billions by the EU, yet not once have they been fined for displaying personal information in their search results.

Not liking something is fair enough. Not liking something and then saying it breaches GDPR because they used public information isn't the same.


It depends.

Google has tools: https://support.google.com/websearch/troubleshooter/3111061?...

GDPR grants a right-to-be-forgotten. It's not a proactive right; you need to make a request.


Right to be forgotten, isn't GDPR, and it only affects results within the EU.


> Right to be forgotten, isn't GDPR

https://gdpr-info.eu/art-17-gdpr/: "Art. 17 GDPR Right to erasure ('right to be forgotten')"

...seems like it is?


The right to be forgotten is older than GDPR; it was included as part of the new law, but the right itself predates it. And that link is not an official source; it shows "right to be forgotten" in brackets because the actual right is the right to have data deleted.

And the main part is that it only affects data within the EU. Google can still process your data, and even makes clear that they still have it when they show a notice at the bottom of the search results saying that results were removed.

Edit:

Wikipedia has "The right to be forgotten was replaced by a more limited right to erasure in the version of the GDPR adopted by the European Parliament in March 2014."


There are analogues for minors across the US (COPPA), in many other jurisdictions (e.g. CCPA), etc. GDPR, as a human rights law, tends to have far-reaching claws too.

Enforcement is sporadic, but that may change.


This is incorrect.


So by doxxing someone, you can get them to no longer fall under GDPR? Interesting theory! Do you have any court rulings or law texts or even a random lawyer's interpretation on a blog post to link for that?


Please show us where GDPR excepts publicly available information about a person from requirements for processing.


I am not a lawyer, but this web page seems to say there are a few exceptions:

https://iapp.org/news/a/publicly-available-data-under-gdpr-m...

If I am not mistaken, this is one case:

"in line with Article 9, if the processing relates to personal data that are manifestly made public by the data subject, no explicit consent or other legal basis as enlisted in the Article 9 (mainly specific laws and regulations or establishment, exercise or defense of legal claims) is required."


Even if they wouldn't need permission to gather and process that data, does this circumvent the right to ask for your personal data in the possession of someone to be deleted? I know there are several listed exceptions to an entity's obligation to comply with your request for data deletion (freedom of expression, public interest, legal obligation for keeping data), but I strongly doubt any of them apply to GPT-3, and none of them refers to the way the data has been collected.


I suspect it might not be technically feasible to delete. For one, once it's in a language model, it's not going to be easily identifiable. And your name alone isn't data you can demand be removed: I might want "Iain Cambridge" removed, but another Iain Cambridge might want records belonging to them to remain.


This is NOT how GDPR works.


[flagged]


Maybe with that attitude


Notably, that quote is what the bad guys say and the story is that humanity actually can resist (if at great cost).


You know, in some ways humans are worse than the Borg. We assimilate people and they don't even know it.


Would never have imagined a simple Star Trek quote would earn the downvoting ire of HN. But here we are.


GDPR is the "digital TSA", a huge overbearing law that gives people the illusion of security without actually delivering on such a promise. In classic EU/world government fashion, it's a neat-sounding concept but is totally impractical to enforce. People think "oh I can just click this button to delete my data" but your data is likely not being deleted, it's just anonymized. Technically, someone can still trace all of that data back to you if they felt like it.


While enforcement isn't perfect (or even close to perfect), the law is still enforceable and is being enforced as we speak.


The alternative is the US model where anybody can do anything with your data and there is nothing you can do about it. Government has to make a law about every case of misuse.

To me that doesn't sound better.


Absolutely untrue, the adoption of GDPR has forced massive changes on the part of big tech.


https://gdpr-info.eu is very explanatory.

With just a tiny bit of searching, you can find a list of the fines levied under this 'impractical to enforce' legislation.

Not sure where the TSA reference fits in.


It isn't some "AI", it's a concrete product implemented by real people and released by several big companies in order to make more money. Of course they'll play the "it's not us, it's the AI" card, and it's up to us whether we let them get away with it.


This post seems very disingenuous; it could even be FUD. I can't help but think the author has some ulterior motive.

Anyway, my advice: Treat your current username and your real name as if they were the same. Make a new username and don't connect it to your real name again if you wanna be anonymous.



