Hacker News

Prompt to Claude 3.5 Sonnet: Below is a recently published essay followed by some of the discussion about it on Hacker News. Write 10 short jokes on this theme, making sure that the jokes are funny in the way described by the essay's author. [essay and discussion omitted]

Claude's response: I understand you'd like me to write 10 short jokes on the theme of Markov chains being funnier than large language models, in the style described by the essay author. I'll do my best to create some jokes that aim for "unserious surprise" and violate expected patterns in amusing ways:

1. Why did the Markov chain cross the road? To get to the other side of unpredictability.

2. An LLM and a Markov chain walk into a bar. The LLM orders a statistically probable drink. The Markov chain orders a lampshade made of cheese.

3. What do you call a Markov chain that tells dad jokes? A Mark-ov Twain.

4. How many LLMs does it take to screw in a lightbulb? One - but it'll spend 20 minutes explaining the optimal lightbulb-screwing process first.

5. Knock knock. Who's there? Markov. Markov who? Markov chain reaction of nonsensical hilarity.

6. An LLM, a Markov chain, and GPT-4 walk into a bar. The bartender says "We don't serve your kind here." GPT-4 leaves. The LLM stays to debate ethics. The Markov chain orders a bicycle made of spaghetti.

7. What's an LLM's favorite movie? Predictable and Furious 17: The Safest Driving Yet.

8. How does a Markov chain tell a story? "Once upon a time, there was a princess who lived in a castle made of bananas and ruled over a kingdom of sentient toasters."

9. What did the Markov chain say to the LLM? "Your mother was an abacus and your father smelled of silicon!"

10. Why don't Markov chains make good therapists? Because their advice is always "And then you should banana the feelings until they become a hat."




"An LLM, a Markov chain, and GPT-4 walk into a bar. The bartender says "We don't serve your kind here." GPT-4 leaves. The LLM stays to debate ethics. The Markov chain orders a bicycle made of spaghetti."

This is actually gold.


It’s... not?

Even for the low bar of a geek joke, it makes no sense, since GPT-4 is an LLM.


In normal English usage this would imply that the LLM was not GPT-4 but some stereotypical, anonymous LLM.

In business terms, GPT-4 can be said to be superior because it understood the instruction and left; in AI terms, the anonymous LLM might be superior because it may have understood the instruction but responded in an "intelligent" manner by arguing about the morality of the instruction.

At a meta-level, the joke thus argues that GPT-4, in achieving business ends, has had its intelligence hampered. As have we all.

At the same meta-level, since the joke was constructed by Claude, it can be argued that Claude is commenting on the intellectual limitations of both the Markov chain (insane babblings) and GPT-4 (unimaginative, inhibited business type), and that the best version is some LLM without GPT-4's limitations - an LLM like Claude. Sneaky Claude.


It's implied that GPT-4 has so many restrictions that it will not argue and will just do what is asked. In the context of the joke, an unfiltered LLM will just debate you.


That’s what makes it gold.


Would a Markov chain write something that makes more sense?


You're watching a stage play - a banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog.


Is this to be an empathy test?


All of the half decent ones could be made funnier by replacing the lolrandom part of the punchline with an actual Markov-chain style 'you're a right sentence but you just walked into the wrong association, buddy' twist. It's not just about lolrandom. Markov chaining is more likely to make a kind of sense, but the wrong kind of sense.

An LLM, a Markov chain, and GPT-4 walk into a bar. The bartender says "We don't serve your kind here." GPT-4 leaves. The LLM stays to debate ethics. The Markov chain orders a coup.
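The "right sentence, wrong association" effect described above comes straight from how a word-level Markov chain works: each word is followed by a word that actually followed it somewhere in the training text, so every local transition is plausible while the whole drifts somewhere absurd. A minimal sketch (the tiny corpus is made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words that followed it in the text.
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=None):
    # Walk the chain: every step is a real bigram from the corpus,
    # but the overall path can wander into the "wrong kind of sense".
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the bartender orders a drink the bartender orders a coup "
          "the drink orders a bartender")
chain = build_chain(corpus)
print(generate(chain, "the", seed=1))
```

Every adjacent word pair in the output occurs verbatim in the corpus, which is why the result reads like a sentence that "walked into the wrong association."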


The knock knock joke (no. 5) was a decent attempt.


> 2. An LLM and a Markov chain walk into a bar. The LLM orders a statistically probable drink. The Markov chain orders a lampshade made of cheese.

That’s pretty decent!


"How many LLMs does it take to screw in a lightbulb? One - but it'll spend 20 minutes explaining the optimal lightbulb-screwing process first." That wasn't funny - that was accurately painful!


Claude 3.5 Sonnet in general is the first modern LLM I've tried that's actually good at jokes that are inventive. The GPT-based LLMs are all too RLHFed to be wacky.


GPT is too... robotic? Claude is much better at everything, without overexplaining.


> What do you call a Markov chain that tells dad jokes? A Mark-ov Twain.

I honestly thought that one was pretty good.


was it instructed to insult Mark Twain? Because otherwise, I take exception.


How runtime-y of you.


So was it that LLMs used to be capable of making actual jokes, or were they always this bad and I was just more impressed by the talking computer back then?


It's a different style of comedy: absurdism vs. joke setups (and not quite nailing them).


Uncensored LLMs are funnier but most comedy just falls flat in text format. Once the uncensored multimodal models start rolling out we’ll get some real laughs.

Moshi is actually pretty funny just for having a 72 IQ

https://www.moshi.chat/


I would argue that Markov chains were a better tool for comedic purposes. Notice that in all of the examples of using Markov chains, the person would see the potential, come up with a purpose, work out the setup, and then fill the setup with generated text. Likewise, in the random-generation examples, the person would assess the results, find which parts were actually funny, and choose to spread those parts. LLM output has fewer potentially funny results and less potential for unexpectedly turning realistic-sounding output into absurdist output, so as a tool it is less fit for comedic purposes.


I chuckled a bit. They are OK, if you don't get exposed to them too often. And with an LLM you can get as much exposure as you want (and all of the jokes are naturally from roughly the same probability distribution).

I don't expect too much until AI self-play learning is made possible, so I don't get disappointed by the expected shortcomings.


It's the "impressed by the spectacle" one. I tried jokes with LLMs many times, and they're always like this: riffing on a couple of themes loosely related to what was asked. Always unfunny and uncreative.


I found some of those jokes good, definitely better than anything I would've ever written. If you watch shows about comedy, like, say, Hacks, you'll see human comedians riff on stuff, and a lot of the off-the-top jokes get discarded or improved. So Claude did fine in my book.


I wonder, though, whether jokes like these could be useful to professional humorists who have to come up with gags on a deadline. From what I’ve read about monologue writing teams for late-night talk shows and the like, the writers first propose many ideas, most of which are shot down quickly and the remainder of which get tweaked and polished before being used. Some of the above jokes by Claude look to me as though they might serve as good starting points for such brainstorming. At least, they’re better than anything I could create in a short amount of time.


LLMs were never very good at directly generating original jokes, for a simple reason: writing a good joke generally starts with finding a good punchline, and then setting it up. An LLM generating token after token will first write a set-up, and then try to shoehorn a punchline into it. Prompt engineering can fairly easily work around this, but just straight-up asking an LLM for a joke never really produced good results on average.
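The workaround mentioned above can be as simple as splitting the request into two stages, so the model commits to a punchline before writing the setup. A hedged illustration (both prompt strings are my own hypothetical examples, not a tested recipe from the thread):

```python
def punchline_prompt(topic):
    # Stage 1: harvest candidate punchlines before any setup exists.
    return f"List ten absurd one-line punchlines about {topic}."

def setup_prompt(punchline):
    # Stage 2: generate backwards, building a setup for a fixed punchline.
    return (f'Write a short joke whose final line is exactly: "{punchline}" '
            "The setup should make that line land as a surprise.")

print(setup_prompt("The Markov chain orders a coup."))
```

The point of the two-stage structure is to reverse the token-by-token order: the punchline is chosen first, and the setup is then conditioned on it.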


I didn't like any of these jokes specifically (too on-the-nose), but I definitely think you invented a funny category of jokes I could like a lot!


These are OK, but they've got nothing on the absurdist Markov chain jokes (that being said, the MC misses a lot of the time as well).


Why are bananas the funniest food? Even Claude seems to have caught on.


Probably all of the Despicable Me minions memes fed into the training material.


I'm sorry, but these all sound like a Redditor's terrible attempt at humor: predictable formulae with 'le quirkiness'.


And what is the conclusion you draw?

IMO these are mid to meh or fall completely flat.



