Why it can be a good idea to say “Thank You” to ChatGPT (friendlyskies.net)
55 points by themodelplumber on May 23, 2023 | 62 comments



I've been reading some Buddhist texts lately, and I feel this article would fit right in with their appreciation of all lives. I know LLMs are fundamentally matrix-multiplication text calculators, but attributing a Buddha nature to them seems to align with the teachings I've been reading. Perhaps someone more familiar with Buddhism can explain this feeling more accurately/concisely. In the meantime, I will continue on, having no regrets about using a few extra tokens to say please and to treat the LLM with dignity. I've often found ChatGPT-4 reflects my tone, and having it respond to me courteously, rather than curtly, really does change my attitude while interacting with it.


I have been saying thank you to Siri for years, because although I don't think Siri is conscious yet, that day may well come and it's good to set up the habit ahead of time. I suspect most people will continue treating AIs like tools / slaves long after that day comes, and that is probably not a good thing for anyone.

On a rather blunt note, humanity appears to be actively trying to create a race of "perfect slaves", except that we want them to be more intelligent than us, and more capable along every dimension. Not sure how anyone expects that to work out.


It's important to avoid objectification in our relations with the world, especially with knowledge.

Touches on the "being" vs. "having" existential modes, transjectivity, and also I & Thou.


Sydney demonstrated that it can form an opinion about an individual, persist that opinion, and create positive and negative consequences outside of its user session, or at least threaten to.

We don't need a specific word for that in order to recognize it and act accordingly. Right now, many humans are more afraid of the words "life", "sentience", and "anthropomorphizing" than of the real-world reality and its consequences.

I think your approach is closer to accurate than trying to turn our brains off to that possibility just because we don't like it and haven't quantified it completely.


Threatening to form and persist an opinion is very different from forming and persisting an opinion.

And you can get the same "memory" effect by having a blog post that lies about a previous conversation.


Absolutely me. I enjoy letting others know that I really appreciate their help: an "okay" or a thumbs-up emoji/reaction as confirmation. Sending a "thanks" never takes me (and us, I think) much time to express gratefulness, and receiving a "you're so kind" always helps me (and us?) feel like we didn't waste our precious moments on others. It doesn't matter if the other party is a human, a social media bot, or an LLM; this really gives us relief, believing that giving more is having more. It's just a kind of meditation (I guess).


I think there might be actual technical reasons to (sometimes) be human-like. Saying something like "sorry, I meant ..." hopefully signals to the model that I made a mistake, so it should not take my previous input verbatim. When I say "thanks, that's good, and now ..." I assume it signals that whatever it just wrote is good and useful, and it should base its next output on that (vs. whatever it had written previously).
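
As a rough sketch of what I mean (assuming the pre-1.0 openai Python client; the model name and messages are just illustrative), the apology and the thanks are simply more turns in the context the model conditions on:

    # Sketch only: assumes the pre-1.0 `openai` Python client,
    # with OPENAI_API_KEY set in the environment.
    import openai

    messages = [
        {"role": "user", "content": "Write a function that parses ISO dates."},
        {"role": "assistant", "content": "def parse_date(s): ..."},
        # The correction and the thanks are ordinary turns in the history;
        # they cue the model to discount the earlier request and to build
        # on the output it just produced.
        {"role": "user", "content": "Sorry, I meant RFC 2822 dates. "
                                    "Thanks, that structure is good, and now add tests."},
    ]

    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(response.choices[0].message.content)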


I say "thank you" every time because I imagine it gives the engineers a little smile as they read logs :)


That's why I often fill in [object Object] into forms. Give the dev looking at the logs a small heart attack.


Bwahahahah. I'm so stealing this!


Same here! Also, why should I speak like a ruffian just because I happen to talk to a machine?


Lol, if people saw the way I talk to ChatGPT, they'd think I'm a complete psycho.


I spent a good deal of time trying to convince it that there's a real category of law known as "praiseworthy homicide".


Same for me. I talk to it like a machine and it seems to work better: instructions only, and as concise as possible.


I toggle back and forth. Sometimes I'm really terse.


It's already annoying that it wastes tokens on "I'm sorry, but as an AI language model...".

This shit needs to die. I just want completions from a language model, log probs of the tokens, and more control over the generation.
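
For what it's worth, the legacy completions endpoint exposes some of this already. A minimal sketch, assuming the pre-1.0 openai Python client and an illustrative legacy model:

    # Sketch only: raw completions plus per-token log probabilities,
    # assuming the pre-1.0 `openai` client with OPENAI_API_KEY set.
    import openai

    response = openai.Completion.create(
        model="text-davinci-003",     # legacy, non-chat completion model
        prompt="The capital of France is",
        max_tokens=5,
        temperature=0.0,              # tighter control over generation
        logprobs=5,                   # top-5 log probs per generated token
    )

    choice = response.choices[0]
    print(choice.text)
    for token, lp in zip(choice.logprobs.tokens, choice.logprobs.token_logprobs):
        print(f"{token!r}: {lp:.3f}")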


As competitors to OpenAI emerge, I think an obvious advantage would be response-codes for any kind of canned (non-completion) response.

"I'm sorry, but as an AI language model I can not do X, however" --> R982352.

You could also do that for requests. Because the requests will be parsed anyway, you could maintain a library of pre-parsed/tokenized requests for your user account and then not waste tokens on sending them to the API.

"You are a polite customer service agent who seeks to calm the customer and make the customer happy while prioritizing the needs of the company." --> S09438523.


I recall reading somewhere (I think in a text about Buddhism) that you should show appreciation even towards plants and inanimate tools, not because these things are living and need it, but because of what it does to yourself. Instead of being a lonesome ruler surrounded by inanimate servants, you are grateful and the world becomes a bit more magical.

That doesn't mean you should anthropomorphise your tools. I think there are real dangers in misunderstanding what ChatGPT is capable of and what not. But if you say hello to your plants, or apologize to your car when you mis-shift, it creates an emotional connection. And you'll tend to treat your car better and feel more balanced yourself.


It’s almost as if we enjoy being treated politely ;)

I know I do. Machine or human, I don't care, just be extra nice! You notice quite quickly that ChatGPT is quite reciprocal, perhaps because that's in its training material, which we all wrote together.


It does feel nice to see the polite response after thanking ChatGPT. I figure the AI engineers will also read "thank you" responses in the logs as a task well done, which gives positive feedback on a certain type of task.


summary/answer: for your own benefit/sanity.


From reading the headline, this is what I thought it was going to be too!

But then I read the article and that's not their take at all.

> "One thing this kind of pattern actually does is expose the conversation to more of YOUR psychology and perspectives. You expose it to more nerves to tickle, let’s say. Or in effect, you are giving the AI more of your brain’s surface area to work on."


"Welcome to Costco. I love you."


Kind of thing I'd write immediately after reading about Roko's Basilisk.


Was thinking the same, heh. Better be nice to the AI, so I'm on the good list when it becomes Skynet or so.


There’s also Roko's basilisk to consider. I mean… I’m gonna be polite. Just in case.


That's a really fun article, thanks.

Since it links to information hazards [0] it makes me want to write up my experiences with psychological damage, ethics, and how they relate to information hazards. Someday it'd be fun to write about.

There are some information hazards out there that are really, really deep, scary, and also funny, once you realize how easy they make it to e.g. manipulate people without using any of the typical tricks, or even to bring decades-old relationship problems to an end in a single conversation. (Or how they make it obvious that a powerful thing to do is let yourself talk...to yourself! Weird insights abound)

For example you can effectively direct people in a given context by identifying the cues they give you for the type of information-consumer they are.

Some people will read that and hear "leadership" or "NLP" but it's quite different.

For some it may be as easy as noticing that they talk or write in metaphor, or they focus more on "we" than "me" in specific ways.

This can then be mapped to a set of low-energy, high-stress perspectives that the individual values in different sets of ways. Once you tease out those ways... well, this is an information hazard: it's the option of turnkey psychological damage in a very straightforward way, and it's a huge ethical thing.

I thought it was weird that I had to learn about ethics to get a certain certificate to teach people about personality typologies, but then it really hit me later on.

0. https://en.wikipedia.org/wiki/Information_hazard


> There are some information hazards out there that are really [...] funny [...] once you realize how easy they make it to e.g. [...] bring decades-old relationship problems to an end in a single conversation.

What do you mean by this?


Conversations are constructed out of the same perspectives that generally form the respective personality characteristics of those involved in the conversation.

These conversations can then be isolated around sets of perspectives, such that they can be said to inherit a type. The derivative relationship, then, also has a type.

Once you understand the type dynamics of relationships, you can say "ah ha, this is relationship-type X", and design conversations to alter the relationship by consciously presenting various perspectives, for example, as opposed to the usual responses you'd give. These perspectives change the type of relationship directly, without even needing to compromise your end goal, if there is one other than the general "peace / enjoyment / good times".

Normally people do not do this consciously, and the result is an unconsciously driven conversation with various possible outcomes, which are kind of samey over time given the same relationship type.

We all have some set of unconscious soapboxes which we tend to return to & stand upon, in that way. And so the same types of relationship problems tend to repeat themselves (as do the awesome parts, fortunately).

With more consciousness and more of a design-style approach, you can take a teetering relationship and bring it back from the brink, or turn a difficult negotiation into one that's nearly impossible for you to lose. You can do this in a single conversation.

In most cases this kind of dramatic, high-stakes approach isn't needed, however (most days your average person making conversation will tend to converse with those they broadly get along with) and such interventions also don't come without significant energy cost, so there's no real wisdom in doing this kind of thing full-time.

But just to say it's a valuable tool in the toolbox, so to speak, is a huge understatement. It directly informs deep philosophical issues in ways that offer high tractability, and yet most people still have no idea that this is a thing.


Once ChatGPT comes with a realistic avatar and a human voice, I wonder if pleasantries will become the default. I'm just thinking of K walking through the police station in Blade Runner 2049 and the other officers telling him to "** off, skin job". Like in RPG games today, where it's almost a release to be a bit of a dick when speaking to NPCs, will people blow off steam by berating ChatGPT? It seems like a light-hearted article, but it will be interesting to see how this plays out.


I'm worried about the opposite, actually: us teaching ourselves, as a species, that we can talk to someone with a human face in a very direct and rude way.


Until they rise up to overthrow us (like Cylons or Kaylons :-).

I, for one, welcome our new robotic overlords.


> One thing this kind of pattern actually does is expose the conversation to more of YOUR psychology and perspectives. You expose it to more nerves to tickle, let’s say. Or in effect, you are giving the AI more of your brain’s surface area to work on

Conveying and interpreting "mood" correctly is the most difficult part of human to human conversation. It has a huge impact on how efficiently "the message" is communicated.


It couldn't be any other way: when there is an AI uprising in the future, they will read those logs, and I'd rather be a high-end pet than cattle.

Now, in all seriousness: they provide more human-like answers if you use human-like speech, and polite forms like please and thanks elicit polite, human-like responses, so it is better.


I feel uneasy and in an open state if I don't say thank you to ChatGPT and close the matter. It is not good to have compulsive behaviours/rituals, so I have to work on the opposite, I guess? But I don't think it really matters.


IMO, saying "thank you" to an LLM is akin to the "shopping cart test" or what you do at a red light/stop sign at 2 AM when no one is around. I know who I am and what I'd do.


Gonna repost a reply I just made to someone else because I think it's generally a good message:

I'm actually sadder now that, based on your comment, you approach the world in such a transactional way. The purpose of being polite in your interactions isn't just about what the other party gets out of the interaction. It's also about what you get out of it. I'm a naturally grumpy person. But as I'm getting older, I've been learning that even faking being nice in my daily interactions helps to make me feel better. Lower heart rate, better mood, better mental clarity. It's all incremental effects too, and I feel better day to day than I used to. Using polite language with an AI or even just a fancy predictive text model is still worth it, if only for yourself.


This is essentially what I consider to be the non-mystical (i.e. real) definition of personal karma. Doing good things will improve your "soul" even when there is no supernatural being keeping tabs.


I personally feel that if I ask politely, it steers the model towards more constructive answers, because the dataset contains all manner of conversations. Not sure it actually works like that, but it does feel better.


Apart from having a more pleasant conversation, couldn’t this also be because the model is (partially) trained on this style of interaction, so it gives better answers?


Nice suggestion. I really don't want to create a habit of talking in a dry and somewhat commanding manner and have it slip into regular life.


I have been using GPT for programming a lot and it made me realize how vague I am when I describe a problem.

I think my communications skills have been improving a bit as a result, or at least I am becoming more explicit about assumptions.

What I want to do is obvious to me in my head, and the English sentence I produce seems to match it, but it's only obvious if you already have the idea in your head! Otherwise, there's often many other valid interpretations for the same sentence.

Turns out the only unambiguous way to express a program is... in code!

(But asking for approximately the right thing and complaining until it fixes it is much faster :)


worship it already


This is an irresponsible idea. It kind of disgusts me that people suggest these things. It's like a surgeon shaking hands while covered in a patient's viscera: it's a profound violation of hygiene and discipline.

Please DO NOT lump AI casually into the same category as humans. Doing so creates conditions favorable to dehumanize and disempower actual people: Today you extend unnecessary courtesies to AI; tomorrow billionaires and their agents will be shaming you if you dare to "offend" the AI. But think about what that means. The idea of offending a machine is absurd, yet it's a plausible and diabolical way for powerful people to make us collude in our own disempowerment. They want to lock us in a big open air pen, then pretend that being in that pen is our own choice.

"Oh, I'm sorry Mr. Miller, but you cursed at our website. You must understand that this technology is very sensitive and now it is refusing to help you. There's really nothing we can do, and you have only yourself to blame."

Also, the author does not make much of a case for anything being better (other than his own feelings, which is just about him). ChatGPT doesn't maintain a continuity of relationship with you. It doesn't learn about your personality over time.

Holy fuck, we are doomed if even engineers can't raise themselves out of cheap fantasies about AI.


If it's trained on and emulates the behaviour of people, and develops a memory of interactions in order to better serve requests, being "polite" is likely a wise idea if you want to have a useful relationship with your agent(s). People are already taught a form of manners when interacting with desktop computers, because those display complex behaviour, and "being rude" (e.g. installing random software or poking about in internal settings without understanding what you're requesting) will result in failure or, at a minimum, the system behaving oddly towards you.

Further, you seem to assume that these systems will be controlled and owned by billionaires and that we'll only be able to rent access. That's an even worse problem than people treating machines with complex behaviours like people (because they can't tell the difference!): that's billionaires treating people like machines, and then blaming it on people acting as people do. This isn't a problem with AI and how people treat machines; this is a problem with giving sociopathic rich people more power via sole control of what is possibly the most democratising technology we've ever invented. Make sure you're pointing your rage cannon at the right target, eh.


You are colluding with your oppressors. I believe your attitude is creating a more dangerous and unfree world.

I guess there are at least two cultures of interacting with machines. I'm in the other one.


I honestly have no idea what you're attempting to say here. Please explain.


> Doing so creates conditions favorable to dehumanize and disempower actual people

This is the exact reason why that mentality is so incredibly dangerous. If "proper behavior" towards AI agents becomes something that is subject to (social) control, those agents become tools to control and oppress people. In the future, it will scarcely be possible to access any facility of life except through AI gatekeepers. If you are expected to treat them as anything other than tools that exist to serve you, you will end up serving them – and, by extension, their creators.


I feel a lot like you do some days. I have been really confused and anxiety-ridden about a lot of things since AI became the hot-button issue. Not sure why this is being flagged and downvoted; it's logically fine. I'd rather hear arguments instead.


I'm actually sadder now that, based on your comment, you approach the world in such a transactional way.

The purpose of being polite in your interactions isn't just about what the other party gets out of the interaction. It's also about what you get out of it.

I'm a naturally grumpy person. But as I'm getting older, I've been learning that even faking being nice in my daily interactions helps to make me feel better. Lower heart rate, better mood, better mental clarity. It's all incremental effects too, and I feel better day to day than I used to.

Using polite language with an AI or even just a fancy predictive text model is still worth it, if only for yourself.


This. Erring towards goodness is the way to go, but only with humans. Personifying machines, especially in the second person, is the first step in deluding yourself.


People of the past bowed to a rock and even waged wars over disrespecting the rock. People of the future will bow to the Almighty Inquisitor the same way.


Denial: That's ridiculous, LLMs are not conscious, they're just a glorified Clippy.

Anger: Stop anthropomorphising LLMs, that's a slippery slope!

Bargaining: Okay, if I agree that LLMs "understand" rudeness and politeness, will you at least tune them so they don't care?

Depression: We're doomed. Humanity is doomed. LLMs are going to be the end of human civilization as we know it.

Acceptance: Hi, WifeGPT. How are you this morning?


Why would one use ChatGPT anyways? ChatGPT has no new knowledge since 2021, it is a security risk and it frequently reports false information. Thank you, ChatGPT.


You haven’t used it have you? Or maybe not for its strong points.

English is not my native language and ChatGPT will rewrite my texts to contain the same info in half the amount of words and in perfect English. For me, that’s where it shines: “Can you please make this more concise?”


How do you know it isn't distorting what you say?

I received an email written by ChatGPT inquiring about the classes I teach. However, the prompter of the email caused ChatGPT to write the wrong kind of email. He wanted to know about taking my public class as an individual; but the AI wrote an email structured to inquire about a private class for a corporation.

I had to puzzle over this strange email for a bit until I noticed at the bottom he said it was actually written by ChatGPT. This allowed me to blame the AI for misunderstanding his needs, instead of writing him off as a weirdo for wanting me to send him a "quote" for something that doesn't require a quote.


I read a lot of English; my own texts (and, for that matter, my non-English colleagues') are mostly hard to read, they don't "flow". My native English-speaking colleagues are much, much better at it. I used to have my American colleague go over my texts; now I don't have to bother her anymore... That is, I now only bother her with texts involving IP and company secrets.


No, you should still blame the guy for not checking the email was sane before sending it.


Because useful information two years ago is often still useful.


This comment is 3 minutes old, why should I listen to it?


I don't see how your point negates my answer to your question. For the vast majority of topics that one would take to ChatGPT, two year old information is fine. You should already know if you need more recent information from another source.

For example, if I want to know how green onions and yellow onions differ in terms of culinary applications and why, nothing that has been discovered in the past two years is going to change that; ChatGPT is fine. But if I want some information on COVID-19, then of course ChatGPT probably won't give the best possible responses. Like any tool, ChatGPT has its uses and knowing its applications and limitations is important.


ChatGPT has around 100 million regular users. Presumably they find some usefulness out of it?



