I've been using ChatGPT fairly regularly for about a year. Mostly as an editor/brainstorming-partner/copy-reviewer.
Lots of things have changed in that year, but the things that haven't are:
* So, so many em-dashes. All over the place. (I've tried various ways to get it to stop; none of them have worked long term.)
* Random emojis.
* Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
* Weird adjectives it gets stuck on like "deep experience".
* Randomly bolded words.
Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT. But apart from that, it's wild to me that a $500bn company hasn't managed to fix those persistent challenges over the course of a year.
You can customize it to get rid of all that. I set it to the "Robot" personality and added a custom instruction: "No fluff and politeness. Be short and get straight to the point. Don't overuse bold font for emphasis."
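If you're on the API rather than the app, the same idea can be approximated with a system message. This is just a minimal sketch using the OpenAI Python SDK; the model name and the exact instruction wording are placeholders, not anything official:

```python
# Minimal sketch: suppressing the "fluff" via a system message.
# Assumes the OpenAI Python SDK (v1+); model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "No fluff and politeness. Be short and get straight to the point. "
    "Don't overuse bold font for emphasis. No emojis, no opening affirmations."
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; use whatever model you actually have access to
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Review this paragraph for clarity: ..."},
    ],
)
print(response.choices[0].message.content)
```

No promises it sticks over a long conversation, which is exactly the complaint upthread.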
> Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
What a great point! I know it's basically a meme to point this out - even South Park has mocked it - but I just cannot stand it.
In all seriousness it’s so annoying. It is a tool, not my friend, and considering we are already coming from a place of skepticism with many of the responses, buttering me up does not do anything but make me even more skeptical and trust it less. I don’t want to be told how smart I am or how much a machine “empathizes” with my problem. I want it to give me a solution that I can easily verify, that’s it.
Stop wasting my tokens and time with fake friendship!
Drives me nuts too. All the stuff like "OK, let me do..." or "I agree...". Stop talking like a person.
I want the star trek experience. The computer just says "working" and then gives you the answer without any chit-chat. And it doesn't refer to itself as if it's a person.
What we have now is HAL 9000 before it went insane.
This comes across as an unnecessary oversimplification in service of handwaving away a valid concern about AI and its already-observed, expanding impact on our society. At the very least you should explain what you mean exactly.
Alcoholism can also be a symptom of a larger issue. Should we not at least discuss alcohol’s effects and what access looks like when deciding the solution?
> Stop wasting my tokens and time with fake friendship!
They could hide it so that it doesn't annoy you, but I think it's not a waste of tokens. It's there so the tokens that follow are more likely to align with what you asked for. It's harder for it to then say "This is a lot of work, we'll just do a placeholder for now" or give otherwise "lazy" responses, or to continue saying a wrong thing that you've corrected it about.
I bet it also makes it more likely to gaslight you when you're asking for something it's just not capable of, though.
Obviously nothing solid to back this up, but I kind of feel like I was seeing emojis all over GitHub READMEs on JS projects for quite a while before AI picked it up. I feel like it may have been something that bled over from Twitch streaming communities.
Agree, this stuff was trending up very fast before AI.
Could be my own changing perspective, but what I think is interesting is how the signal it sends keeps changing. At first, emoji-heavy was actually kind of positive: maybe the project doesn't need a webpage, but you took some time and interest in your README.md. Then it was negative: having emojis became a strong indicator that the whole README was going to be very low information density, more emotive than referential[1] (which is fine for bloggery but not for technical writing).
Now there's no signal, but you also can't say it's exactly neutral. Emojis in docs will alienate some readers, maybe due to association with commercial stuff and marketing where it's pretty normalized. But skipping emojis alienates other readers, who might be smart and serious, but nevertheless are the type that would prefer WATCHME.youtube instead of README.md. There's probably something about all this that's related to "costly signaling"[2].
There’s a pattern to emoji use in docs, especially when combined with one or more other common LLM-generated documentation patterns, that makes it plainly obvious that you’re about to read slop.
Even when I create the first draft of a project’s README with an LLM, part of the final pass is removing those slop-associated patterns to clarify to the reader that they’re not reading unfiltered LLM output.
It drives me crazy. It happens with Claude models too. I even added an instruction to avoid them in CLAUDE.md, and the miserable thing still does it from time to time.
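For what it's worth, the CLAUDE.md block I mean looks roughly like this; the exact wording is my own and, as noted, compliance is best-effort at best:

```markdown
## Writing style
- Do not use emojis anywhere: code, comments, commit messages, or docs.
- Do not bold random words for emphasis.
- No opening affirmations ("Great idea!"); start with the answer.
```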
I don't think this is true. The LLMs use this construction noticeably more frequently than normal people, and I too feel the annoyance when they do, but if you look around I think you'll find it's pretty common in many registers of human natural english.
Yes, this is absolutely part of it, and I think an underappreciated harm of LLMs is the homogeneity. Even to the extent that their writing style is adequate, it is homogeneous in a way that quickly becomes grating when you encounter LLM-generated text several times a day. That said, I think it's fair to judge LLM writing style not to be adequate for most purposes, partly because a decent human writer does a better job of consciously keeping their prose interesting by varying their wording and so forth.
Not sure what the downvotes are for -- it's trivial to find examples of this construction from before 2023, or even decades ago. I'm not disagreeing that LLMs overuse this construction (tbh it was already something of a "writing smell" for me before LLMs started doing it, because it's often a sign of a weakly motivated argument).
Absolutely this. I feel like I'm having an immune response to my own language. These patterns irk me in a weird way. Lack of variance is jarring perhaps? Everyone sounding more robotic than usual? Mode-collapse of normal language.
Or... how can you detect the use of Claude models in a writeup? Look for the word "comprehensive", especially if it's used multiple times throughout the article.
I notice this less with GPT-5 and GPT-5-Codex, but it has a new problem: it'll write a sentence that mostly makes sense but has one or two strange word choices that nobody would use in that situation. It tends to use a lot of very dense jargon that makes it hard to read, spitting out references to various algorithms and concepts in places where they don't actually make sense. Also, yesterday Codex refused a task from me because it would be too much work, which I thought was pretty ridiculous - it wasn't actually that much work, a couple hundred lines max.
> refused a task from me because it would be too much work
Was this after many iterations? Try letting it get some "sleep". Hear me out...
I haven't used Codex, so maybe this isn't relevant, but with Claude I always notice a slow degradation in quality, refusals, and "<implementation here>" placeholders as iterations pile up within the same context window. One time, after making a mistake, it apologized and said something like "that's what I get for writing code at 2am". Statistically, this makes sense: in the training data, long conversations between developers run into the night, they get tired, and their code gets sparser and crappier.
So, I told it "Ok, let's get some sleep and do this tomorrow.", then in the very next message (since the LLM has no concept of time), "Good morning! Let's do this!" and bam, it output a completely functional, giant block of code.
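Purely illustrative, but in API terms the trick is nothing more than two extra turns appended to the same conversation. A sketch with the Anthropic Python SDK; the model name, the invented assistant turn, and the prompt wording are all placeholders:

```python
# Sketch of the "sleep" trick: append a good-night/good-morning exchange
# to a long, degraded conversation and continue from there.
# Assumes the Anthropic Python SDK; model name and turns are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = [
    # ... the long, degraded conversation so far ...
    {"role": "user", "content": "Ok, let's get some sleep and do this tomorrow."},
    {"role": "assistant", "content": "Sounds good, talk tomorrow."},
    {"role": "user", "content": "Good morning! Let's do this!"},
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4096,
    messages=history,
)
print(response.content[0].text)
```

Whether this actually works or I just got lucky, I can't say; it fits the training-data story above, at least.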
Also, I'm pretty sure it is a feature: the general population wants pleasant interactions with their ChatGPT, and OpenAI's user feedback research will have told them this helps.
I know some non-developer type people who mostly talk to ChatGPT about stuff like
- how to cope with the sadness of losing their cat
- ranting about the annoying habits of their friends
- finding all the nice places to eat in a city
etc.
They do not want that "robot" personality and they are the majority.
I also recall reading a while back that it's also a dopamine trigger. If you make people feel better using your app, they keep coming back for another fix. At least until they realize the hollow nature of the affirmations and start getting negative feelings about it. Such a fine line.