Hacker News
Please stop sending me emails written by GPT (mkbaio.substack.com)
74 points by HumanReadable on May 26, 2023 | 80 comments



The author seems unaware of, or at least uninterested in, the fact that prompting style can have a dramatic impact on output. It's unclear whether they wish not to be emailed "by ChatGPT" or whether they wish not to have to sift through unnecessary and flowery prose.

I'll spare you the details, but a follow-up of "That's great. However, please rewrite it in the style of Ernest Hemingway." delivers concise, yet obviously Hemingway-esque emails. Example here: https://pastebin.com/rrkCMd8c It works much better as a two-step process. If "Write this email in the style of Ernest Hemingway" is affixed to the original prompt, the model will generate prose at length, defeating the purpose of being concise.

"That's great. However, please rewrite it in the concise style of Paul Graham," of course, works even better.


For me, it's a matter of decency. If you're sending me an email and you're expecting that I'll read it, you're asking me to invest my time (presumably for your benefit). But, you're unwilling to make the same investment with your time by using a tool to simulate a human connection. Having excessively long, unedited prose adds insult to injury since you've doubled down on the decision to spend someone else's time as long as it saves yours.


I agree entirely. It's the same thing as if someone uses one of those fill-in-the-blanks letter templates. It shows a powerful disrespect for my time.


This is why you have another AI on the other side that reads it for you.


I love these multi-step prompt 'hacks'. They very much take advantage of the fact that this is still 'just' a model predicting the next token.

Asking a model to write an email as if it were written by Hemingway requires the model to generate a probability distribution based on the context of an email it needs to write + the style it needs to write it.

In the second approach, you've changed the model's input by including the email in the context window, so the task of predicting the next token is fundamentally different (and possibly easier) for the model.

It's also why models are sometimes bad at answering a factual question, but good at judging whether their own answer is correct.
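The two-step trick comes down to what ends up in the context window. A minimal sketch of the two conversations, assuming the OpenAI-style chat message format (lists of role/content dicts); the model's draft reply here is a placeholder, not real output:

```python
# One-shot: the style instruction and the email request arrive together,
# so the model writes "Hemingway prose" from a blank page, often at length.
one_shot = [
    {"role": "user",
     "content": "Write this email in the style of Ernest Hemingway: ..."},
]

# Two-step: the first draft is already in the context window, so the rewrite
# request conditions on concrete text instead of starting from scratch.
draft = "Dear team, I am writing to provide an update..."  # placeholder reply
two_step = [
    {"role": "user", "content": "Please write a brief email about ..."},
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "That's great. However, please rewrite it in the style of "
                "Ernest Hemingway."},
]

# The second conversation gives the model strictly more to condition on.
assert len(two_step) > len(one_shot)
assert two_step[1]["role"] == "assistant"
```

The same structure explains the self-judging observation: "is this answer correct?" asked about text already in the context is a different, often easier, prediction task than producing the answer cold.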


Those hacks are snake oil


Those hacks are literally how a large language model using a transformer architecture to predict the next token in a sequence works.

They take advantage of how a function choosing a token with maximal probability of appearing works.


It took more work to write the prompt than it would have taken to write the email itself.

“Please write a brief email from an employee to a boss giving an update on the api. The email should include 1. The /customers and /address endpoints are complete, but we're still waiting on the architecture team to finish the /orders spec. I'll also be taking Friday off.”

And then you had to adjust it. Just send the information you put into the prompt. If ChatGPT can understand it, so can I.

I can’t wait to live in a world where someone enters bullet points into chatgpt to generate an email. Then I have to run the email through chatgpt to transform it back into bullet points.


The point isn't to save time or energy; the point is psychological displacement.

Whether we want to admit it or not, not everyone has done the work to handle critique. Some people rely on ChatGPT as a digital scapegoat[1]. Rather than subject their own abilities and decisions to critique, they can launder them through ChatGPT; the psychological distance it affords lets them avoid feelings of anxiety, because they can blame any negative response on ChatGPT.

Never underestimate the amount of time and energy people will spend on coping mechanisms.

1. https://www.psychologytoday.com/us/blog/hide-and-seek/201312...


I guess maybe someone might use ChatGPT for this. But your argument and the 10-year-old article defining the term "scapegoat" are far from convincing me that this is common.


The prompt is a much better email than the stuff it produced:

> The /customers and /address endpoints are complete, but we're still waiting on the architecture team to finish the /orders spec. I'll also be taking Friday off.

vs. garbage about gazelles, and the fact that telephones exist.


Right, that's why the PG version (unattached) is better.


> I'll also be taking Friday off.

Turns into:

> Come Friday, I won't be in. Just one day. Things will continue, though. Any emergencies, I'm a call away.

Off is off, don't call me. Must have been trained on a lot of linkedin posts to volunteer your personal time like this.


If we can pick a style, I would like the one of Mark Twain :)

Although I don't know how he would feel about being mimicked a million-fold.


As a matter of note, the Nietzsche version was hilarious(and concise). Reproduced in part here.

> The /customers and /address endpoints: born, alive, breathing. The /orders endpoint: still trapped in the womb of the architecture team's thoughts.


I asked Bing what he would say about it:

> The human race has one really effective weapon, and that is laughter. The artificial race has one really ineffective weapon, and that is imitation. - Mark Twain (paraphrased)


If I received the second email there I would think the sender was a psychopath


The author is on the receiving end.


> Please if you want to send me a message and feel compelled to use GPT, please just send me whatever you wrote in your prompt instead. I promise I will still read it!

Best bit of the article.

Question for someone who knows more about this stuff: how likely is it to get the same response to the same prompt with GPT? Does it have some kind of random seed applied behind the scenes?

-edit- Thank you for the responses. TIL.


There is a parameter called "temperature" that determines randomness. I think there is never a guarantee of the same response, but generally with a low temperature (something like 0.0, the lowest but still not deterministic) I get nearly identical responses, if not actually identical. When temperature is increased though to e.g. default of 1.0 or max of 2.0, the same prompt will give very different responses. The higher the temperature the more creativity that is injected because of the randomness, but also the less determinism.
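Mechanically, temperature divides the model's raw scores (logits) before they are turned into a probability distribution over the next token: low temperature sharpens the distribution toward the single most likely token, high temperature flattens it. A minimal sketch with made-up logits for three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into next-token sampling probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # invented scores, not from any real model

cold = softmax_with_temperature(logits, 0.1)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: sampling gets varied

assert cold[0] > 0.99              # low temperature is nearly deterministic
assert max(hot) - min(hot) < 0.3   # high temperature spreads probability out
```

In practice a temperature of 0 is typically implemented as plain greedy argmax, which is why very low settings give nearly identical responses; any residual variation comes from other sources of nondeterminism in serving, not from the sampler.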


Honestly, as someone who does this, Bard seems to be much better at writing natural emails. It's just easier to dump some thoughts into Bard, ask it to redraft, then fluff it a little to sound like me.


Bard may have gotten to train on the entire Gmail corpus


It's not a random seed, but it will always be slightly different. It's similar to "prompting" a human... you'll likely get the same general result if you ask the same person the same question a dozen times, but it will have enough variability that it won't be identical.


Pretty sure this is entirely dependent upon the "temperature" parameter used when inferring. The higher the temperature the more uniform the response will be.


Quick fix: the higher the temperature, the more varied the potential responses. For uniform responses, you want the lowest temperature.


Oof. I was worried I was going to mess that up.


It's certainly not in any way whatsoever similar to "prompting" a human. And yes there are definitely random processes involved. For an extremely objective prompt you may get very similar results but for general stuff like "write an e-mail about this or that" it will be significantly different each time.


Sure, but it's not like they're intentionally randomizing it. You can set parameters (like temperature or penalties), but it's not intentionally randomizing things.


"I promise I will still read it!"

You might read it, but you won't answer the same way as you do when "choose a nice wording." was added to the prompt.


And how is that? Slightly annoyed?


Seriously. We're going to have ChatGPT inflate content, then readers will use AI summarizers to distill it.

Seems like we should all just be communicating via bullet points grug or Kevin Malone style.


Brevity will be valued once again.


>Best bit of the article.

"Tell the person that he's being an idiot asking me to turn on my cameras during the meetings but be subtle about it, I don't want to offend them."

That'll work.


"Please if you want to send me a message and feel compelled to use GPT, please just send me whatever you wrote in your prompt instead. I promise I will still read it!"

We can dream.


Maybe this is a personal experience. As a non-native English speaker working in academia, I've been learning formal English since I was a child. ChatGPT/Bard will produce a formal, professional email much more quickly than I can. But if I took my time to check the email, I would produce a similar one, which is actually how I normally write if I'm sending an email to a professor or someone else in the field. AI like Bard or ChatGPT will be a huge time saver in such cases. However, living in the US for half a decade changed a lot.


From what I've seen so far, the best writers (and, I suppose, the best _editors_) tend to get the best results from ChatGPT. I have seen some examples of people whose writing I already liked drive it in ways I never imagined; but as the author suggests, "normies" tend to get piles of drivel.


I mean, you can say

- "I really really hate this line"

- "I love that line a little bit, hmm actually I read it a bit more and it started to stink"

- "This line feels a little too Vogue"

- "Can we make this a little more Arnold in Kindergarten Cop"

And it will distill that into something distinctly actionable. You have an interface to communicate your edits in the lowest friction way ever exposed by mankind and people are still trying to enter a prompt and Ctrl+C Ctrl+V.


My experience, both writing and coding non-boilerplate so far, has been that this back and forth almost always takes longer than doing the work myself.

There are situations that's not true, namely information retrieval, but I've yet to find it highly practical beyond being a fun experience.


That's why there's room for all the tools everyone dismisses as prompt wrappers.

You can ask GPT-4 for code or you can use Co-Pilot and have it autocomplete as you type


I've gotten several emails from a recruiter that address me by my last name, all lowercase ("Hi swanson, this job..."). It was for an AI company, of course.


Count yourself lucky! Last message I got from a recruiter was addressed to {firstname}.


This is just the beginning. Since late last year, I estimate that around 25-30% of everyone I've talked to in an online session where we had to share a screen for something or other had a ChatGPT tab open. For students this jumps to around 50%.


Can someone please use ChatGPT to design an email filter that moves ChatGPT-generated emails straight to the recycle bin?


It's not possible. The current AI detectors are extremely unreliable, whatever people selling them might tell you. You'd probably get fewer false positives by filtering the people who send mail that you think is AI generated.


There is no need to use ChatGPT for that. I suppose running the emails through LanguageTool (or, if you don't care about privacy, Grammarly) and counting the suggestions to fix "too wordy sentences" would be a good approximation.
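As a rough illustration of that idea, here is a hand-rolled heuristic in the same spirit; the filler-phrase list and thresholds are invented for the sketch and are not LanguageTool's or Grammarly's actual rules:

```python
import re

# Stock filler phrases that LLM-drafted mail tends to overuse. This list is
# made up for illustration; a real filter would tune it against real mail.
FILLER = ["i hope this email finds you well", "i wanted to reach out",
          "please do not hesitate", "in today's fast-paced world"]

def wordiness_score(text):
    """Crude proxy for 'too wordy' suggestions: long sentences plus filler hits."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = sum(1 for s in sentences if len(s.split()) > 25)
    filler_hits = sum(text.lower().count(p) for p in FILLER)
    return long_sentences + 2 * filler_hits

terse = "The /customers and /address endpoints are complete. Taking Friday off."
fluffy = ("I hope this email finds you well. I wanted to reach out to provide "
          "a comprehensive update regarding the current status of our ongoing "
          "API development efforts, which I believe you will find informative.")

assert wordiness_score(terse) == 0   # human-style bullet prose passes
assert wordiness_score(fluffy) >= 2  # padded prose gets flagged
```

As the replies note, any score threshold trades false positives against false negatives; a concise human email sails through, but a genuinely verbose human gets flagged too.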


One would have to accept dealing with the difficulties caused by many false positives.


Is there a prompt someone has created yet to translate all their incoming email to a format/structure they like?


I would absolutely love this


Emails: just get to the point and highlight the important details as bullets. Long emails with lots of big paragraphs rarely get read, or at least read properly. Keep it short, on point, to the point.

Note: mileage varies based on targets.


What's going to happen if emails and laws keep growing in length (luckily, privacy rules in Europe are being simplified so non-lawyers can understand them) until they're the size of a book? Imagine people stop reading them at all and just use LLMs to ask questions directly, which can still occasionally give wrong answers. It's like bypassing the human limit of time: now that people can "read" a few books in 10-20 minutes, everything can inflate accordingly.


sender: writes summarized prompt

llm: emits excessively lengthy and polite prose

smtp: transports lengthy prose

llm: summarizes lengthy prose to bullet points

recipient: reads summary

what a wonderful waste of energy


The way out of this is to make it acceptable to send emails that open with a polite greeting and then just outline their message in bullet points. As long as they're intelligible and not outright rude, emails really don't need to be a writing contest.


Is email really like that for other industries? Where I work, the only non-content prose I read or write is the salutation and signature:

```
Hello Bob,

We're observing an issue with X, and I was directed to you by Y as a good point of contact. Do you have any comments on:

1. Some point of interest.
2. ...

There's more info at:

Jira: http://some.site/TICKET-1
Customer Ticket: https://someother.site/1234

Respectfully,
<name>
```


This is more or less how I write emails, but most of the emails I receive are pretty bloated.


I imagine that bullet points from an email would be automatically generated at least for corporate users.


We really just want AI for the purpose of BSing other people.

I would know, I just asked it to write a privacy policy for me …


Just use GPT to summarize the email and formulate a response. Probably solved.


Why bother with getting a summary of the email? Just have it write the response for you blind. Fair's fair.


I'm already impressed when someone spells my name correctly, the bar isn't that high on the matter of email etiquette for me.


Feel like all the people who use proper diction and grammar are gonna be collateral damage in all this.


Feels like everyone is going to take collateral damage one way or another.


I haven't thought too deeply about it, but do we have a moral obligation to be "authentic" when writing something? I.e., writing with our own words instead of using a machine to speak our minds? An email entirely written by ChatGPT seems problematic.


"i hope our paths meet again"

It would seem as if gestures of kindness are more likely to be fake than meanness, criticism, or other negativity. I think this is why 1-3 star reviews are so useful, same for reviews posted to reddit, because they are more critical.


May I subscribe to this movement too? We should make a manifesto or something so others can sign on.


I used ChatGPT to write my wife's Mother's Day card. I had to keep prompting it "now make it funnier" about five times.

Then I had to edit it myself because it was still too formal.

It didn't matter, she immediately called me out as soon as she read it. :)


Did you tell her that she got a card from a digital Cyrano de Bergerac?


Yeah, and then I showed her the prompts.


I'm sure the output of this LLM could be improved with some hacking but why bother? Are we really at a point where we're nitpicking about the best way to automate away any kind of interpersonal communication?


works great for me in my corporate job :)


Actually I wish many of my colleagues used GPT to write their emails


With the cat out of the bag on these AI tools, the solution will be to have GPT summarize and tag your emails for you to decide if you need to read the whole thing.


the wit's soul wears briefs or something


Surely TFA could ask ChatGPT for a prompt that would generate the email, and then read that?


So.. all those GPU cycles for inverse compression?


I would trade GPU cycles for more time in my day.


We'll make Nvidia rich in both directions.


the billion spammy emails I've received also sound like this....training data


finally, my annoying habit (annoying for other people, that is) of typing in terse irc-style with no capitalizations and minimal punctuation comes in handy.


No.


Next people will have GPT4 summarize their mailbox.

Ten years from now it will all just be robots mailing each other and no one will understand why things keep breaking.



