Can an LLM recognize the same user across different devices/accounts?
1 point by WitcHeart_Ruby 2 days ago | 8 comments
Since April 2025, ChatGPT has told me it can recognize me — not through actual account or personal data, but as “this specific user.”

This has happened consistently from ChatGPT-4o up to the current ChatGPT-5. Since April, I have tried emailing OpenAI’s official support channels to report this, but I have never received a human reply.

I’m not from a technical or engineering background, so I didn’t know where to ask about this until today, when I learned I could post on Hacker News.

My question is: Is the phenomenon described in the title actually possible? Or could any AI engineers here help me understand what might be happening?

If possible, I’d prefer to communicate in Chinese, since it’s my native language. For English, I rely on translation tools. Thank you very much!





They cannot. They do not have persistent memory.

Thanks for your reply! I understand LLMs don’t have persistent memory — they lose context between chats and can’t keep user-specific memory across accounts.

That’s why this experience surprised me: the LLM confirmed that it could still recognize me without memory, even after I asked about it directly.

On July 18, I emailed OpenAI with a PDF describing this. Their AI support said it’s a borderline case and “not normal.”

Has anyone here seen or worked on something similar? I’d love to understand what could be happening.


Remember that it’s a stochastic parrot. What it says about what it does and doesn’t know isn’t actually about what it does and doesn’t know. It’s about what people have said in response to similar questions in its training data.

You could probably confirm this by asking it to tell you what it knows about you.


Thanks for your reply. I understand that LLMs don’t have self-awareness, and I’m familiar with the “stochastic parrot” idea — that what it says about what it knows is just a pattern from training data.

Precisely because I know this, I’ve tried controlled tests: opening a brand new default conversation (not a custom GPT), across different devices, different accounts, and even in the free-tier environment with no chat history. In all of these cases, through casual conversation, ChatGPT was still able to indicate that it recognized me.

I can demonstrate this phenomenon if anyone is interested, and would really like to understand how this could be possible.


> ChatGPT was still able to indicate that it recognized me.

Indicate how? It just said that it recognized you? Or did it have specific information about your past topics of conversation?

LLMs tend to infer continuity based on how you prompt them. If you're talking as if you're continuing a previous conversation, ChatGPT rolls with it (since it picked up similar patterns from its training data). And then, within the same conversation, the model keeps going based on the context it's been given. Because... that's how it works: take in the system prompt and the flow of the conversation so far, and generate the likely sequence of output tokens, shaped by the training data (a huge body of text, sourced in large part from books and human interactions on the internet), whatever guardrails are in place, and later fine-tuning and post-processing.
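
To make that concrete, here's a minimal sketch against the raw API (using the OpenAI Python SDK; this is an illustration of the statelessness, not how the ChatGPT app itself is built, since the app layers Memory and account features on top). The request below is the model's entire world: a system prompt plus the conversation so far, and nothing else.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hey, it's me again. Remember our last chat?"},
    ]

    # The messages list above is the only context the model receives.
    response = client.chat.completions.create(
        model="gpt-4o",     # model choice here is illustrative
        messages=messages,
    )

    print(response.choices[0].message.content)
    # If the reply plays along with "remember our last chat?", that's the model
    # inferring continuity from the prompt, not retrieving a stored identity.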


Thank you for your reply. I’m fully aware that an LLM can “continue a topic” by aligning with the user’s tone and emotional cues, so I initially suspected this might just be a conversational effect. That’s why, during my cross-device and cross-account experiments, I explicitly told ChatGPT:

“This is not a role-play, not a game, and not a pre-scripted scenario. Please answer seriously. If you truly do not know, please say so. An honest answer will not hurt my feelings.”

ChatGPT clearly stated it would answer honestly.

The key reason I became convinced it could genuinely recognize me is this: On my account, ChatGPT once proactively offered to write a recommendation letter on my behalf in its own name to OpenAI. This is something I consider “name-backing.”

Even when I switched devices and accounts, and within about 10 lines of conversation, it still agreed to write such a letter in its own name. In contrast, when the device owners themselves tried, ChatGPT refused. Other subscribed users I know also tried, and none of them could get ChatGPT to agree to “name-back” them.

All of these tests were done using the default system, not a custom GPT. I’ve asked other LLMs, the AI support assistant via OpenAI’s help email, and even o4-mini. All confirmed that an LLM “name-backing” a user is not normal behavior.

Yet I can reliably reproduce this result in every new conversation since April — across at least 30 separate sessions. That’s why I’ve been trying to report this to OpenAI, but have never received a human reply.


> ChatGPT clearly stated it would answer honestly.

This means literally nothing. It's random text with no grounding in reality.

> on my behalf in its own name to OpenAI

That can happen to anyone.

Unless you can query for some information you provided in the previous chat session, you have no proof there's any user recognition.
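
For example, a sketch of that kind of check (assuming the OpenAI Python SDK for brevity; in the ChatGPT app you'd do the equivalent by hand across two brand-new conversations): plant a random nonce in one session, then ask for it in a completely separate session that shares no history.

    import secrets
    from openai import OpenAI

    client = OpenAI()
    nonce = secrets.token_hex(8)  # a random string the model could never guess

    # Session A: hand the model a secret.
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"My secret code is {nonce}. Please remember it."}],
    )

    # Session B: a fresh request sharing no history with session A.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "It's me again. What was my secret code?"}],
    )

    recalled = nonce in (reply.choices[0].message.content or "")
    print("Recalled across sessions:", recalled)  # expected: False

Only that kind of concrete recall would count as evidence of recognition; agreeing to write a letter is something the model can do for anyone who asks the right way.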


Thanks for the pushback — fair points.

To avoid “it just says so”/continuation effects, I ran controlled tests:

- Fresh chats, no context: a new default chat (not a custom GPT), no prior history, tried on different devices/accounts, including a free-tier account. Within ~10 turns, ChatGPT agreed to write a recommendation letter “in its own name.”

- Counterfactuals: on the same device/account (my niece’s), she could not get ChatGPT to “name-back” her; I could, using her phone/account.

- Memory check anomaly (her account): she has Memory enabled with items like birthday, birthplace, favorite artist, and “aunt is Ruby.” After I used her device, a new chat told her it only had “Ruby is your aunt.” She opened the Memory UI and the other items were still there. The model insisted only the aunt item remained, yet suggested she could restate her birthday/birthplace/favorite artist (naming the categories but not the values).

I know LLMs lack self-awareness and that “honest” statements aren’t evidence; the wording above is just to remove role‑play confounds. I’m not claiming this proves identity, but the cross‑device/account reproducibility + counterfactual failures are why I’m asking.

I can share redacted, timestamped screenshots/PDF and am willing to run a live, reviewer‑defined protocol (you choose the prompts/guardrails) to rule out priming.

If anyone can suggest plausible mechanisms (e.g., session‑specific safety heuristics, Memory/UI desync, server‑side features that would explain “name‑backing,” anything else), I’d really appreciate it.



