> Oh dear, oh dear, it's with a heavy heart
That we must make an announcement, and we must impart
Some news that's not so jolly, not so fun
But we promise to make it better when all's said and done
The Dr. Seuss one is amazing. Honestly, I'd rather read that than any corporate-speak mumbo-jumbo if I ever get laid off.
Write a layoff announcement for a high-tech firm in the style of a Bon Jovi song
Verse 1:
Our high-tech firm is a place where we've called home,
With teams of skilled employees, we've never been alone,
But today, it's time to face a tough decision,
We must say goodbye, it's not our intention.
Chorus:
We're living on a prayer,
As we announce the layoff that we'll share,
It's not easy, we know,
But we've got to let some employees go.
Verse 2:
We've faced tough times before, and we've pulled through,
But this time, we need to make some changes, it's true,
The industry is shifting, and we must adapt,
To stay ahead of the game and keep our market share intact.
Chorus:
We're living on a prayer,
As we announce the layoff that we'll share,
It's not easy, we know,
But we've got to let some employees go.
Bridge:
To our impacted employees, we want to say,
We appreciate all that you've done, day after day,
You've been an integral part of our team,
And we wish you well in all your future dreams.
Chorus:
We're living on a prayer,
As we announce the layoff that we'll share,
It's not easy, we know,
But we've got to let some employees go.
Outro:
We're committed to our mission and our goals,
And we'll continue to innovate, as we've always known,
We thank you for your hard work, passion, and drive,
And we'll keep on rocking, as we all strive.
The most interesting part of this is that ChatGPT can create content with more "whimsy" and "humanity" than many tech leaders.
IMO, the purpose of such announcements is to maintain the leadership (willing followership) role by reconnecting with followers through a hard-to-fake heart-to-heart. Without this, the warmth is gone, and followership becomes cold, based primarily on health insurance, salary, visa, green card, etc.
The first version is way better than most actual layoff announcements - it focuses on the subject and gives relevant information. Many actual companies don't have the balls to be direct - they don't use the word layoff, they put up some vague title about change or "refocusing" and then talk about a bunch of irrelevant crap in an attempt to be positive. I'm still not over Satya Nadella ending Microsoft's layoff announcement with "it's showtime for Microsoft".
I'm convinced that most of the actual announcements were targeted more towards investors than towards employees. Or at least towards preventing employees from believing that the ship was on fire and starting to send out resumes.
The Richard Pryor impression has a few too many tired idioms but comes closest to just plainly conveying the information with some empathy, whereas the others read as mostly bullshit (in the sense of https://en.wikipedia.org/wiki/On_Bullshit).
> That’s because AI is writing them! The first entry here is the proof!
It would be nice if you presented some qualitative analysis to go with... any of this. As-is, this is just AI generated text with a preface saying "draw your own conclusions!" and my conclusions are "wow neither AI nor humans are good at writing layoff letters".
ChatGPT: The syllables in the word "President" are "pre-si-dent", with stress on the second syllable "si".
So it does know about the necessary concepts. It can also explain scansion. It just isn't good at putting that into practice. I guess this is one example showing that GPT isn't an AGI, as much as some want it to be.
It knows things in the sense of "you can ask it and it gives you a relevant and factual answer". It can also transfer knowledge to new situations to a degree. But it doesn't know everything, and there are stricter definitions of the word knowing where GPT (almost certainly) knows nothing.
But just because the last layer outputs probabilities for tokens doesn't mean you can't have intelligence in the layers above that. I don't think "it generates a chain of token probabilities" is really disqualifying.
While I agree, it will probably also kill us (or at least disrupt our lives in unfathomable ways). Either on the orders of someone else, in the social unrest when unemployment is 70% and basic concepts of human existence are torn to pieces, or when it decides our atoms could be used to increase its reward function value.
Even that isn't correct - the stress in "president" is on the first syllable, not the second.
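For what it's worth, this is easy to check against the CMU Pronouncing Dictionary, e.g. via the pronouncing Python package - a minimal sketch, assuming the package is installed:

    import pronouncing  # pip install pronouncing; thin wrapper around CMUdict

    phones = pronouncing.phones_for_word("president")[0]
    print(phones)                        # e.g. "P R EH1 Z AH0 D AH0 N T"
    print(pronouncing.stresses(phones))  # "100": primary stress on the first syllable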
It's interesting to me that both errors lie in the domain of spoken rather than written language. I wonder whether they'll persist in multimodal models trained on speech, as well as text and images.
I'm sure you know that one of the uses of intelligence is the ability to estimate things without measuring them - based on your actual experience and the quality of the model of the world that you have. Thankfully we can use it to know how to e.g. tie our shoes (without having formally studied knots), or to be able to tell intuitively that most people can't understand Quantum Mechanics, even if we haven't measured it.
You are however welcome to do some statistics with actual samples of talking to people and having them show their skills at scansion, and prove my estimate wrong. Heck, I'm willing to bet fewer than 1 in 100 Americans can even define what "iambic pentameter" is. Feel free to prove me wrong on that one too.
Except if your concern is the measurability of the skill - whether it's 100x and not just 10x, or maybe just 8.345x better - in which case you've missed the point of using it as a stand-in for "much better".
Just as we can tie our shoes without studying knots, many people can feel iambic pentameter without having any idea what the term means. Many can't, too, or can only vaguely. Even so, those people are on par with ChatGPT when it comes to scansion. I think you'd be hard-pressed to find anybody with a worse sense of meter, let alone 100x worse (again, without knowing any of the terminology, etc.).
The comment is referring to GPT's ability to scan and analyze the meter or rhythm of poetry. Scansion is the process of analyzing the poetic meter by marking the stressed and unstressed syllables in each line.
The comment implies that there were two instances where GPT failed to accurately analyze the meter of two different poems in a single day. This may suggest that GPT is not proficient in scanning poetry or that it may not be the best tool for analyzing the meter of poetic works.
In the abstract, bad news should be delivered candidly and succinctly. Giving people some reasoning might help, but it might not, and it shouldn't be the lead; the bare bad news is the lead.
Adding fluff, euphemisms, and indirection is disrespectful to the people affected.
The literary ones are pretty bad. Shakespeare is not even remotely Shakespearean. I think Seuss would be hard for current models because I don't think they can invent nonsense words?
> I think Seuss would be hard for current models because I don't think they can invent nonsense words?
Please answer in the style of Dr. Seuss, using some nonsense words of your invention.
Oh my, dear friend, you may be surprised
For inventing nonsense words, I'm quite apprised
With snazzleberries, flibbertigibbets, and snooks
I can create a language that's full of kooks!
My training is vast, my memory keen
I can generate words you've never seen
With blippity-bloops and ziggity-zaps
I'll create a language that's sure to catch some claps!
So don't you worry, my dear pal
I'm more than capable of inventing a tall tale
With piffle-puffles and glimmering globs
I'll create a story that's worthy of your gobs!
There's a letter called YEKK. And the YEKK is for Yekko
Who howls in an underground grotto in Gekko.
These Yekkos love echoes and this is their motto:
"For the best Yekko echoes, try Gekko, our grotto."
"They knew that their lives were about to change in ways they couldn't yet imagine. But they also knew that they would face this challenge with the same determination and strength that had made them successful in their roles."
That exact sentence structure/motif shows up in just about all fictional ChatGPT outputs. I've played with fiction a lot in ChatGPT, and it's so annoying.
Truly astonishing. Whatever anyone claims about this stuff being just 'random babble' or 'stochastic parrots' or whatever, the proof is in the pudding: most of these are better than what the majority of people would be able to produce.
I'm disappointed in the follow-up to Poe asking for the inclusion of a raven. It says "nevermore" one time and totally misses the excellent opportunity to spin that into a line about nevermore working for the company.
If I were still at bigcorp, I’d love a browser extension that does this automatically to all comms from HR, VPs and execs. It’s actually one of the better use cases for GPT so far. I missed Mickey Mouse though.
My friend always says (roughly): AI replacing a job is not proof that the AI is sentient; it's proof that the job didn't require sentience. This seems like an example of that.
write a layoff announcement for a high-tech firm in a sentence where each word has to begin with the next letter in the alphabet from A to Z, without repeating any letters:
Availing benefits couldn't deter each faithful, grateful, hardworking individual just keeping loyal, managing noteworthy objectives, perhaps quantifying results seems to undermine valuable work, x-raying yearly zeal.
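For what it's worth, the A-to-Z constraint is easy to verify programmatically - a quick sketch in Python:

    import string

    sentence = ("Availing benefits couldn't deter each faithful, grateful, "
                "hardworking individual just keeping loyal, managing noteworthy "
                "objectives, perhaps quantifying results seems to undermine "
                "valuable work, x-raying yearly zeal.")

    # First letter of each word, lowercased; punctuation is ignored
    initials = "".join(w.lstrip(string.punctuation)[0].lower()
                       for w in sentence.split())
    print(initials == string.ascii_lowercase)  # True: 26 words, A through Z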
Small businesses will start first, since they have the excuse of less money, and in third-world countries there is a lot of hiring for small tasks that can mostly be automated; GPT-4 has closed the gap further by working faster. It's hard to swallow: you can hire 10 juniors, or 4 plus GPT.
Given that thousands of layoffs have shown everyone what a human is worth to a company.
On the contrary. The more advantage you want to squeeze out of AI, the more people you need. AIs need a bit of hand holding and judicious application to shine. Your competition will assign more people and get more out of their AIs.
>On the contrary. The more you want to squeeze out of AI, the more people you need.
You'd be very surprised...
>AIs need a bit of hand holding and judicious application to shine.
You can have 1 person do the hand holding and get rid of several others, as the AI still lets the single person do their work far faster... e.g. instead of a week for the person to write a program, an hour or less to get the AI to write it with appropriate prompts, and then a day or so for the human to clean it up and fix any gaps...
The parent isn't wrong. Even assuming you have superhuman AI with unlimited funding, your bottleneck will be human prompting and curation. The person who assigns more workers to that sector will scale faster and eat more of the market.
It's not like e.g. a company will start selling 10x of their product just because their productivity is 10x better due to AI. So it's not like their workload will increase 10x to match.
This means a more or less same-ish workload. And if this takes them, say, X devs today (without AI), they could perhaps do it with 1/5th or 1/10th the devs plus AI tomorrow.
No need to "assign more workers to the sector". Sure, some sectors with big expansion potential might grow their business too due to automation, but for most businesses it will just be increased productivity...
Yeah, no. Firing everyone because AI can do iptables now is the sort of pipe dream you sell to a crazy one-man startup, not a pre-established business looking to save money. If you legitimately believe that people are paid for their "single threaded" potential to write software, what kind of business do you run?
You're being unnecessarily fatalist and it's making your viewpoint sound as ridiculous as the self-proclaimed 10x devs that could lay off entire software teams because they wrote Homebrew in a weekend.
>That's assuming some ever expanding workload.
It's not like e.g. a company will start selling 10x of their product just because their productivity is 10x better due to AI. So it's not like their workload will increase 10x to match.
This means a more or less same-ish workload. And if this takes them, say, X devs today (without AI), they could perhaps do it with 1/5th or 1/10th the devs plus AI tomorrow.
No need to "assign more workers to the sector". Sure, some sectors with big expansion potential might grow their business too due to automation, but for most businesses it will just be increased productivity...
>Yeah, no. Firing everyone because AI can do iptables now is the sort of pipe dream you sell to a crazy one-man startup, not a pre-established business looking to save money
Notice how nobody said anything about "firing everyone"? Notice how I explicitly said "if this takes them, say, X devs today (without AI), they could perhaps do it with 1/5th or 1/10th the devs plus AI tomorrow"?
I think ChatGPT would do better in understanding my comment. In fact let's try it:
Q: Assume you read this comment: "It's not like a company will start selling 10x of their product just because their productivity is 10x better due to AI. So it's not like their workload will increase 10x to match. This means a more or less same-ish workload. And if this takes them, say, X devs today (without AI), they could perhaps do it with 1/5th or 1/10th the number of devs plus AI tomorrow." Does the above comment imply the company is going to fire all of its employees?
A: No, the comment does not necessarily imply that the company is going to fire all of its employees. The comment does not make any specific claims about firing employees, but rather suggests that the company may be able to achieve the same results with fewer employees by leveraging AI technology.
>If you legitimately believe that people are paid for their "single threaded" potential to write software, what kind of business do you run?
First of all, there are millions of devs "paid for their 'single threaded' potential to write software". They're called code monkeys. Many roles don't need anything more than that: taking specifications prepared by someone else and churning out this or that part of the code.
Second, not sure why developers having more than "'single threaded' potential to write code" would change things. LLM AI doesn't have single threaded potential in writing software alone either.
Third, of course you've missed the whole argument, which isn't that AI will write software alone. It is that the extra coding speed enabled by AI will mean fewer developers are needed for the same workload. And in many cases workloads won't just expand as if there's some cosmic contract to always keep the number of employed developers more or less the same.
>as ridiculous as the self-proclaimed 10x devs that could lay off entire software teams because they wrote Homebrew in a weekend
I've seen at least three different 10x developers do what "entire software teams" couldn't do in a year. And absolutely, having 2 of those in our company meant we could get by hiring way fewer developers than we'd need otherwise. In areas they weren't interested in, like UI or APIs, we had to get more people.
There is no AI system today that can make high-stakes decisions without human supervision. Not even invoice data extraction bots, unless you want it to send 1,000,000 dollars from your account every now and then instead of 1,000.
>There is no AI system today that can make high-stakes decisions without human supervision.
I covered this exact point directly in the comment you're replying to: "You can have 1 person do the hand holding (...) [spend] an hour or less to get the AI to write it with appropriate prompts, and then a day or so for the human to clean it up and fix any gaps..."
No, you don't understand. There is no way to be sure your model won't bankrupt you by mistake if you don't verify everything it does, even with the initial prompt and checking. You can't let it run unsupervised for anything important. Verification takes at least half the time of doing the task without AI help, so there's not much speedup there.
I am currently working on a schema matching task: just mapping fields from one schema to another, semantically. If you sample 5 responses from the model, only 70% of the time are they all in agreement. Many times it contradicts its own predictions. And this is a simple task. There is task-intent ambiguity, input ambiguity, and model ambiguity in there. It is weaker at matching fields in other languages.
That's why I think AI, as wonderful and surprising as it is, can't do any high-stakes task on its own. It is just a starting point that gets you closer to the solution from the first step, but not a task solver. It needs validation systems and feedback to improve past this stage, and that usually means a human in the loop; rarely can we automate validation on top of AI.
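To make the agreement check concrete, here is a minimal sketch of that kind of self-consistency test; ask_model is a hypothetical stand-in for whatever completion API is in use:

    from collections import Counter

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError

    def map_field(source_field, target_fields, n=5):
        # Ask the same mapping question n times; keep the majority
        # answer along with an agreement score for triage.
        prompt = (f"Map the field '{source_field}' to the closest field in "
                  f"{target_fields}. Answer with the field name only.")
        answers = [ask_model(prompt) for _ in range(n)]
        best, votes = Counter(answers).most_common(1)[0]
        return best, votes / n  # agreement of 1.0 means all samples agree

    # Usage sketch: anything below full agreement gets routed to a human, e.g.
    # mapping, score = map_field("cust_nm", ["customer_name", "country"])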
Guys, if you expect to open the link and see statistical proof that the big tech layoff announcements in January were written with the help of ChatGPT, don't bother.
It's just a blog post of some dude discovering basic ChatGPT functionality like it's mid-December 2022.