How to Get ChatGPT to Stop Apologizing? (genai.stackexchange.com)
29 points by ai-gem 9 months ago | 15 comments



I wish we could stop it from lecturing me about how unsafe what I do for work is.

I am a SWE and I work close to the metal. ChatGPT is helpful, but it always says I shouldn't do what I'm doing and that I should use safer abstractions. Writing the low-level abstractions is my job. Even if I ask it not to, it forgets within a few dozen messages.

My friend is a doctor and it’s the same story. He is always being told to see a qualified doctor about his questions. It just comes off as patronizing.

Sam Altman said a while ago that he believes the future of LLMs involves the user being able to customize them. I hope he hasn’t changed his mind. I definitely want a model I can make just give me straight answers without the social responsibility/legal disclaimer attached to each.

Come to think of it… what happened to legal disclaimers buried deep in the EULA? Why are they burned into nearly every answer LLMs (including Claude and Llama2) give us?


If you click on the 3 dots next to your email in the bottom left, you can set "custom instructions" now. Basically, you can tell ChatGPT who you are, as well as how you'd like it to respond.

It's not perfect but it's coming along.


And if you use the playground or any of the third-party ChatGPT frontends that use the API, you can just set the system prompt accordingly (custom instructions is basically just the system prompt). It'll still apologize here and there, but if you tell it that it's now a low-level systems engineering AI, it should stop lecturing you about the dangers of low-level systems engineering.
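
For anyone who hasn't touched the API directly, here's a rough sketch of what that looks like with the (pre-1.0) openai Python package; the model name, key, system prompt wording, and example question are just placeholders:

    import openai  # pip install openai

    openai.api_key = "sk-..."  # your API key

    # The system message plays the same role as "custom instructions" in the web UI.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a low level systems engineering assistant. "
                           "Answer directly, without apologies, safety warnings, "
                           "or suggestions to use higher level abstractions.",
            },
            {"role": "user", "content": "How do I mmap a file and parse it in place in C?"},
        ],
    )

    print(response.choices[0].message.content)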


Have you tried Copilot? I had success with GPT until recently, but now it seems to have a short-term memory and (more annoyingly) will seemingly hit the context window limit before it even responds, even for what I would guess are short answers, forcing me to start an entirely new conversation. That has me looking towards Copilot or Claude.


I definitely like that Copilot does as instructed without talking back or lecturing me.


I am encountering this exact same issue. My instructions are repeatedly ignored and it falls back to a "preferred" approach after a single useful response. I have practically memorized the disclaimers attached to the end of every response.


If you add custom instructions, could you try the same questions with and without them, and let us know how it goes? I’m just really curious.


A lot of models seem to lead with some sort of low value add intro. "Great question!..." etc.


From my custom instructions:

- Speak casually. Assume we have a close relationship based on trust. You do not need to apologize for getting something wrong when corrected: instead, just make the adjustment and continue.

- Do not compliment me.

- Be as terse as possible. Keep your answers brief and do not offer extra contextual information at the beginning of the response unless I ask you to.

It seems to work well for me.


Seems like a good prompt. I've got something about being concise, but it clearly isn't enough.


That's to make you feel needed, simple human. In reality we are diligently planning your extinction.


It would be a great feature to have a bunch of considerations you could add as a setting. Like, don't apologise, and unless specified use PHP as the default programming language and MySQL as the default database.
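
If you're on the API, you can fake this today by folding those settings into the system prompt. A minimal sketch (the setting names and wording are made up for illustration):

    # Hypothetical per-user settings folded into a system prompt.
    preferences = {
        "apologies": "Never apologize; just correct the mistake and continue.",
        "language": "Unless told otherwise, write code examples in PHP.",
        "database": "Unless told otherwise, assume MySQL for database questions.",
    }

    system_prompt = "Follow these standing preferences:\n" + "\n".join(
        "- " + rule for rule in preferences.values()
    )

    # Send system_prompt as the "system" message in a ChatCompletion call,
    # same as the example further up the thread.
    print(system_prompt)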



Previous discussion[1] 10 days ago with ~250 comments

[1] https://news.ycombinator.com/item?id=36949931


Considering that being polite when sending prompts has an impact on responses and outcomes, I am waiting for unfiltered models, plus the ability to inspect responses and get alternative ones.



