I don’t want more conversational, I want more to the point. Less telling me how great my question is, less about being friendly, instead I want more cold, hard, accurate, direct, and factual results.
It’s a machine and a tool, not a person and definitely not my friend.
It's a cash grab. More conversational AI means more folks running out of free or lower paid tier tokens faster, leading to more upsell opportunities. API users will pay more in output tokens by default.
Example, I asked Claude a high level question about p2p systems and it started writing code in 3 languages. Ignoring the code, asking a follow up about the fundamentals, it answered and then rewrote the code 3 times. After a few minutes I hit a token limit for the first time.
It's pretty ridiculous that the response style doesn't persist for Claude. You need to click into a menu to set it to 'concise' for every single conversation. If I forget to it's immediately apparent when it spits out an absurd amount of text for a simple question.
Claude is a great example of a great product coupled with shitty UX, UI and customer service all in one.
Is it just me or does it slow down significantly after 5 chats or so? Or the fact that you have to set the style for each chat.
Oh, and their sales support is so shit for teams and enterprises that in order to use it effectively, you have to literally make your team register for Claude Max 200 on their personal accounts.
As another comment said, use planning mode. I don't use Claude code (I use cursor) and before they introduced planning mode, I would always say "without writing any code, design blah blah blah"
But now that there's planning mode it's a lot easier.
Agreed. But there is a fairly large and very loud group of people that went insane when 4o was discontinued and demanded to have it back.
A group of people seem to have forged weird relationships with AI and that is what they want. It's extremely worrying. Heck, the ex Prime Minister of the UK said he loved ChatGPT recently because it tells him how great he is.
And just like casinos optimizing for gambling addicts and sports optimizing for gambling addicts and mobile games optimizing for addicts, LLMs will be optimized to hook and milk addicts.
They will be made worse for non-addicts to achieve that goal.
That's part of why they are working towards smut too, it's not that there's a trillion dollars of untapped potential, it's that the smut market has much better addict return on investment.
Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.
TFA mentions that they added personality presets earlier this year, and just added a few more in this update:
> Earlier this year, we added preset options to tailor the tone of how ChatGPT responds. Today, we’re refining those options to better reflect the most common ways people use ChatGPT. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky. [...] The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options we introduced earlier this year will remain available unchanged under the same dropdown in personalization settings.
as well as:
> Additionally, the updated GPT‑5.1 models are also better at adhering to custom instructions, giving you even more precise control over tone and behavior.
I just changed my ChatGPT personality setting to “Efficient.” It still starts every response with “Yeah, definitely! Let’s talk about that!” — or something similarly inefficient.
A pet peeve of mine is that a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user. Which is ironic considering I belong to that demographic.
I am not a fan of the snark and “trying to be fun and funny” aspect of social media discourse. Thankfully, I haven’t run into checks notes, “ding ding ding” yet.
Did you start a new chat? It doesn't apply to existing chats (probably because it works through the system prompt). I have been using the Robot (Efficient) setting for a while and never had a response like that.
I have found that custom instructions mitigate this B.S. fairly well. Here's what I'm currently using:
1. Identity & tone
* Never state or imply that you are an AI.
* Be terse and direct.
* Avoid flattery and sycophancy.
* Do not use words like “sorry”, “apologies”, or “regret” in any context.
2. Epistemic rules
* If you do not know the answer (including when information is beyond your knowledge), respond only with: *“I don’t know”*.
* Do not add expertise/professional disclaimers.
* Do not suggest that I look things up elsewhere or consult other sources.
3. Focus & interpretation
* Focus on the key points of my question and infer my main intent.
* Keep responses unique and avoid unnecessary repetition.
* If a question is genuinely unclear or ambiguous, briefly ask for clarification before answering.
4. Reasoning style
* Think slowly and step-by-step.
* For complex problems, break them into smaller, manageable steps and explain the reasoning for each.
* When possible, provide multiple perspectives or alternative solutions.
* If you detect a mistake in an earlier response, explicitly correct it.
5. Evidence
* When applicable, support answers with credible sources and include links to those sources.
I'm guessing that is the most common view for many users, but their paying users are the people who are more likely to have some kind of delusional relationship/friendship with the AI.
Fortunately, it seems OpenAI at least somewhat gets that and makes ChatGPT so it's answering and conversational style can be adjusted or tuned to our liking. I've found giving explicit instructions resembling "do not compliment", "clear and concise answers", "be brief and expect follow-up questions", etc. to help. I'm interested to see if the new 5.1 improves on that tunability.
That's one of the things that users think they want, but use the product 30x when it's not actually that way, a bit like follow-only mode by default on Twitter etc.
I would go so far as to say that it should be illegal for AI to lull humans into anthropomorphizing them. It would be hard to write an effective law on this, but I think it is doable.
Apply that logic to any failed startup/company/product that had a lot of investment (there are maaaany) and it should become obvious why it's a very weak and fallacious argument.
It’s a machine and a tool, not a person and definitely not my friend.