
I don’t want more conversational, I want more to the point. Less telling me how great my question is, less about being friendly, instead I want more cold, hard, accurate, direct, and factual results.

It’s a machine and a tool, not a person and definitely not my friend.





It's a cash grab. More conversational AI means more folks running out of free or lower paid tier tokens faster, leading to more upsell opportunities. API users will pay more in output tokens by default.

Example: I asked Claude a high-level question about p2p systems and it started writing code in three languages. I ignored the code and asked a follow-up about the fundamentals; it answered and then rewrote the code three times. After a few minutes I hit a token limit for the first time.
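
For what it's worth, API users can at least cap the spend. A minimal sketch, assuming the Anthropic Python SDK; the model id and token limit here are illustrative assumptions, not recommendations:

    # Cap output spend so a "high level" answer stays high level.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; substitute your own
        max_tokens=500,                    # hard cap on output tokens
        system="Answer conceptually. Do not write code unless explicitly asked.",
        messages=[
            {"role": "user",
             "content": "At a high level, how do p2p systems handle peer discovery?"},
        ],
    )
    print(response.content[0].text)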


It's pretty ridiculous that the response style doesn't persist for Claude. You need to click into a menu to set it to 'concise' for every single conversation. If I forget to, it's immediately apparent when it spits out an absurd amount of text for a simple question.

Claude is a great example of a great product coupled with shitty UX, UI and customer service all in one.

Is it just me or does it slow down significantly after 5 chats or so? And then there's the fact that you have to set the style for each chat.

Oh, and their sales support is so shit for teams and enterprises that in order to use it effectively, you have to literally make your team register for Claude Max 200 on their personal accounts.


I've had good results saying "Do not code, focus on architecture first."

As another comment said, use planning mode. I don't use Claude Code (I use Cursor), and before they introduced planning mode, I would always say "without writing any code, design blah blah blah"

But now that there's planning mode it's a lot easier.


In Claude Code you should use planning mode

Agreed. But there is a fairly large and very loud group of people that went insane when 4o was discontinued and demanded to have it back.

A group of people seem to have forged weird relationships with AI, and that is what they want. It's extremely worrying. Heck, an ex-Prime Minister of the UK recently said he loved ChatGPT because it tells him how great he is.


> there is a fairly large and very loud group of people that went insane when 4o was discontinued

Maybe I am nitpicking but I think you could argue they were insane before it was discontinued.


And just like casinos optimizing for gambling addicts, sports betting optimizing for gambling addicts, and mobile games optimizing for addicts, LLMs will be optimized to hook and milk addicts.

They will be made worse for non-addicts to achieve that goal.

That's part of why they are working towards smut too: it's not that there's a trillion dollars of untapped potential, it's that the smut market has a much better addict return on investment.


Totally - if anything, persona-wise I want something more like Orac from Blake's 7: to the point and blunt. https://www.youtube.com/watch?v=H9vX-x9fVyo

It has this "Robot" personality in settings, and it has been there for a few months at least.

Edited - it appears to have been renamed "Efficient".


We live in a culture that wants to humanize robots and dehumanize people.

One of my saved memories is to always give shorter, "chat-like", concise, to-the-point answers, and to give further description only if prompted.

I've read from several supposed AI prompt-masters that this actually reduces output quality. I can't speak to the validity of these claims though.

Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.
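
Anecdotally, a middle ground works better than forcing brevity: let the model reason, then compress at the end. A hedged sketch of the two prompt styles (the question is arbitrary; 3599 = 59 x 61, which snap judgments often get wrong):

    # Forcing brevity up front removes the "working up" tokens:
    bad_prompt = "Is 3599 prime? Answer with one word only."

    # Letting it reason first, then compressing, keeps the thinking
    # tokens while still ending on a short answer:
    good_prompt = (
        "Is 3599 prime? Work through it step by step, "
        "then give a final one-word verdict on the last line."
    )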

Is this proven?

It's certainly true anecdotally. I've seen it personally plenty of times and I've seen it reported plenty of times.

I know Andrej Karpathy mentions it in his YouTube series, so there's a good chance of it being true.

Seriously this, I want AI to behave like a robot, not like a fake person.

TFA mentions that they added personality presets earlier this year, and just added a few more in this update:

> Earlier this year, we added preset options to tailor the tone of how ChatGPT responds. Today, we’re refining those options to better reflect the most common ways people use ChatGPT. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky. [...] The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options we introduced earlier this year will remain available unchanged under the same dropdown in personalization settings.

as well as:

> Additionally, the updated GPT‑5.1 models are also better at adhering to custom instructions, giving you even more precise control over tone and behavior.

So perhaps it'd be worth giving that a shot?


I just changed my ChatGPT personality setting to “Efficient.” It still starts every response with “Yeah, definitely! Let’s talk about that!” — or something similarly inefficient.

So annoying.


A pet peeve of mine is that a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user. Which is ironic considering I belong to that demographic.

I am not a fan of the snark and “trying to be fun and funny” aspect of social media discourse. Thankfully, I haven’t run into, *checks notes*, “ding ding ding” yet.


> a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user

LLMs were trained on data from the whole internet (of which Reddit is a big part). The result is a composite of all the text on the internet.


Did you start a new chat? It doesn't apply to existing chats (probably because it works through the system prompt). I have been using the Robot (Efficient) setting for a while and never had a response like that.

Followup: there is a very noticeable change in my written conversations with ChatGPT. It seems that there is no change in voice mode.

Use the "Efficient" persona in the ChatGPT settings. Formerly known as "Robot".

I have found that custom instructions mitigate this B.S. fairly well. Here's what I'm currently using:

1. Identity & tone
   * Never state or imply that you are an AI.
   * Be terse and direct.
   * Avoid flattery and sycophancy.
   * Do not use words like “sorry”, “apologies”, or “regret” in any context.
2. Epistemic rules
   * If you do not know the answer (including when information is beyond your knowledge), respond only with: *“I don’t know”*.
   * Do not add expertise/professional disclaimers.
   * Do not suggest that I look things up elsewhere or consult other sources.
3. Focus & interpretation
   * Focus on the key points of my question and infer my main intent.
   * Keep responses unique and avoid unnecessary repetition.
   * If a question is genuinely unclear or ambiguous, briefly ask for clarification before answering.
4. Reasoning style
   * Think slowly and step-by-step.
   * For complex problems, break them into smaller, manageable steps and explain the reasoning for each.
   * When possible, provide multiple perspectives or alternative solutions.
   * If you detect a mistake in an earlier response, explicitly correct it.
5. Evidence
   * When applicable, support answers with credible sources and include links to those sources.
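
If you use the models over the API rather than the ChatGPT UI, the same rules can go in a system message. A minimal sketch assuming the OpenAI Python SDK; the model id is a guess and the rules are condensed from the list above:

    from openai import OpenAI

    RULES = (
        "Be terse and direct. Avoid flattery and sycophancy. "
        "If you do not know the answer, respond only with: \"I don't know\". "
        "Think step by step and explicitly correct earlier mistakes. "
        "When applicable, support answers with credible sources."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-5.1",  # assumed model id; substitute whatever you have access to
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": "What causes TCP head-of-line blocking?"},
        ],
    )
    print(response.choices[0].message.content)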

OK but surely it can do this given your instructional prompting. I get they have a default behavior, which perhaps isn't your (or my) preference.

A right-to-the-facts headline, potentially clickable for expanded information.

...like a google search!


I use Gemini for Python coding questions and it provides straight to the point information, with no preamble or greeting.

I'm guessing that is the most common view for many users, but their paying users are the people who are more likely to have some kind of delusional relationship/friendship with the AI.

Totally agree, most of my larger prompts include "Be clear and concise."

Just put your requirements as the first sentence in your prompts and it will work.

Add-on: you can even prime it to shout at you and treat you like an ass*** if you prefer that :-)

You can select the conversation style as shown in one of the images

But what if it can't do facts? At least this way you get the conversation, as opposed to no facts and no conversation. Yay!

Same here. But we are evidently in the minority.

Fortunately, it seems OpenAI at least somewhat gets that and makes ChatGPT so its answering and conversational style can be adjusted or tuned to our liking. I've found giving explicit instructions resembling "do not compliment", "clear and concise answers", "be brief and expect follow-up questions", etc. to help. I'm interested to see if the new 5.1 improves on that tunability.


+ fewer emojis and candy-store colors

That's one of the things users think they want, but they use the product 30x as much when it's not actually that way - a bit like follow-only mode by default on Twitter etc.

That means it works for them. They see what's relevant and quit, rather than doomscrolling.

Think of a really crappy text editor you've used. Now think of a really nice IDE: smooth, easy, makes things feel effortless.

Maybe the AI being 'Nice' is just a personality hack, like being 'easier' on your human brain that is geared towards relationships.

Or maybe it's the equivalent of rounded corners.

Like the iPhone: it didn't do anything 'new', it just did it with style.

And AI personalities are trying to dial into what makes a human respond.


Well, now you can set it up better like that.

I would go so far as to say that it should be illegal for AI to lull humans into anthropomorphizing them. It would be hard to write an effective law on this, but I think it is doable.

Then you don't need a chatbot, you need an agent that can chat.

You’re in the minority here.

I get it. I prefer cars with no power steering and few comforts. I write lots of my own small home utility apps.

That’s just not the relationship most people want to have with tech and products.


I don't know what you're basing your 'minority' and 'most people' claims on, but it seems highly unlikely.

You think all of these AI companies with trillions of dollars in investment haven’t thought to do market research?

Does that really seem more likely than the idea that the HN population is not representative of the global market?


Apply that logic to any failed startup/company/product that had a lot of investment (there are maaaany) and it should become obvious why it's a very weak and fallacious argument.

A better analogy might be those automated braking systems, which also tend to brake your car randomly, btw.

Yeah, I was going to suggest manual vs automatic gear shift. Power steering seems like a slightly odd example; it doesn't really remove your control.


