Hacker News

Creativity is the one area where LLMs are completely unimpressive. They only spit out derivative works of what they’ve been trained on. I’ve never seen an LLM tell a good joke or an interesting story. It doesn’t know how to subvert expectations, come up with clever twists, etc. They just pump out a refined average of what’s typical.



Claude can make some interesting guitar tabs if you prompt it to transcribe an instrument/music that wouldn't normally be something a rock guitar player would be influenced by.

It is like saying the paint brush and canvas lack creativity. Creativity is not a property of the tool, it is a property of the artist.

We also have a very poor understanding of human creativity because of selection bias.

Last weekend I found a book at the library of Picasso's drawings from 1966 to 1968. There must have been 1,000-1,500 drawings in it. Many were just half-finished scribbles.

The average person seems to believe that the master artist only produces masterpieces, though, because they never bothered to look at all the crap.


> They only spit out derivative works of what they’ve been trained on

How is that different from humans? Do we get magic inspiration totally separate from anything we’ve learned?

Show me any great book, song, movie, building, sculpture, painting. I will tell you the influences the artist trained on.


Humans are obviously influenced by others, but we can also invent novel things that didn't exist before. LLMs trained on the outputs of LLMs collapse into gobbledygook, whereas humans trained on humans build civilisation.


Humans trained on human output also build death cults and other harms. And humans believe that nonsense.

I’m not sure “can produce good outputs, can produce terrible outputs” is a good way to differentiate humans and LLMs.


Humans can be said to create from a combination of life experiences, artistic influences, and pure imagination.

LLMs have no life experiences, are familiar only with the most mainstream literary works and the most mainstream internet discussions, and use a fancy RNG formula on the next most likely word as a not-so-great substitute for imagination.
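For what it's worth, the "fancy RNG formula" is usually temperature sampling over the model's next-token distribution: scale the logits, softmax them, then draw randomly by weight. A minimal sketch (the function name and toy logits are illustrative, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a next-token index from raw logits using temperature.

    Lower temperature sharpens the distribution toward the most likely
    token; higher temperature flattens it toward uniform randomness.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over token indices
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

At a very low temperature this collapses to picking the argmax nearly every time, which is the "refined average" behavior the thread is complaining about; raising the temperature adds variety at the cost of coherence.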


They're different because they're trying to find the most likely output, and humans usually aren't. You can ask an LLM to make weird combinations and use unusual framings, but it's only going to do so once you've already come up with that.


I asked ChatGPT “Write a one-paragraph pitch for a novel that combines genres and concepts in a way that’s never been done before.”

I’m not going to claim this is Pulitzer-worthy, but it seems fairly novel:

> In Spiritfall: A Symphony of Rust and Rose Petals, readers traverse the borders of time, taste, and consciousness in a genre-bending epic that effortlessly fuses neo-noir detective intrigue, culinary magic realism, and post-biotechnological body horror under the simmering threat of a cosmic opera. Set in a floating, living city grown from engineered coral-harps, the story follows a taste-shaper detective tasked with unraveling the murder of an exiled goddess whose voice once controlled the city’s very tides. As he navigates sentient cooking knives, ink-washed memory fractals, and teahouses that serve liquid soul fragments, he uncovers conspiracies binding interdimensional dream-chefs to cybernetic shamans, and finds forbidden love in a quantum greenhouse of sentient spices. Every chapter refracts expectations, weaving together genres never before dared, leaving readers both spellbound and strangely hungry for more.


...that pitch is a mess. The majority of it is nonsense and it doesn't sound like a good story to me (I think. I can hardly parse it.)


Like I said, it’s not good, but I was using it to falsify the claim that LLMs can only produce concepts that are in the training set or prompt.

If I were using this for real I’d ask it to iterate, to create a story arc, etc.


Well, all of the conceptual elements it used are in the training set; it just combined them in ways that don't even make syntactic sense. Yes, I know we "just" combine ideas too when we're creating. My point is that I don't think it was producing new concepts, just slamming words together in grammatically acceptable ways. Do any of its absurd phrases mean anything to you? They don't mean anything to me. I could create something conceptually sound based on its absurd phrases, but that's still me doing the work where the LLM is acting as an algorithmic name generator.

I'd be curious if it could explain those concepts and use them in consistent ways. If so, I'd be curious how novel it could really get. Is it just going to be repackaging well-trod scifi and fantasy devices, or studied philosophy? Or could it offer us a story with truly new understandings? For example, to my knowledge, House of Leaves is something truly novel. It's probably not the first book with intentional printing errors, or with layered narration, or with place-horror, etc. But I think House of Leaves is pretty widely considered a sort of "step forward" for literature, having a profound impact on the reader unlike anything that came before it.

(A really serious discussion will require analyzing exactly what that impact is and how it's novel.)


They also struggle to know when to break the rules of English, make up words, introduce puns, bounce between tones, write with subtext, introduce absurdity, allude to other ideas, etc.

I'd say it's less the work they have been trained on, and more what they have been reinforced to do, which is stay on topic. It causes them to dwell instead of drift.



