Hacker News

ChatGPT generates cargo-cult fiction. It is imitating the low-level structure of prose, but its model can't (currently) stretch to synthesize the high-level structure of a narrative.

We currently don't have any machine-level filter that could identify whether a piece of prose is actually a coherent work of narrative fiction. And annoyingly, if we did, AFAIK we'd also have everything we needed to train an AI (specifically, a GAN) to generate narrative fiction. So we're going to be dealing with this right up until the point where it doesn't matter any more, rather than being able to do any kind of quick hack to banish it "early."
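To make the claim concrete: in a GAN, the "filter" is a discriminator that scores real vs. generated samples, and its training signal is exactly what pushes the generator toward realistic output. Here's a toy sketch of that loop on 1-D numbers instead of prose (all distributions, learning rates, and the one-parameter models are illustrative assumptions, with gradients worked out by hand so no ML library is needed):

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real data" distribution (assumed for the demo)
GEN_STD = 0.5                    # generator noise scale, held fixed for stability

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clamp logit for numerical safety
    return 1.0 / (1.0 + math.exp(-t))

# Discriminator D(x) = sigmoid(w*x + b) plays the "filter": is x real?
# Generator G(z) = mu + GEN_STD*z learns only its mean, mu.
w, b, mu = 0.0, 0.0, 0.0
lr_d, lr_g, batch = 0.1, 0.02, 16
mu_history = []

for step in range(5000):
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    fake = [mu + GEN_STD * random.gauss(0, 1) for _ in range(batch)]

    # Discriminator gradient ascent on log D(real) + log(1 - D(fake)).
    gw = gb = 0.0
    for x in real:
        s = sigmoid(w * x + b)
        gw += (1 - s) * x
        gb += (1 - s)
    for x in fake:
        s = sigmoid(w * x + b)
        gw -= s * x
        gb -= s
    w += lr_d * gw / batch
    b += lr_d * gb / batch

    # Generator gradient ascent on log D(fake): it chases whatever
    # the filter currently calls "real" -- the point of the GAN claim.
    gmu = 0.0
    for x in fake:
        s = sigmoid(w * x + b)
        gmu += (1 - s) * w
    mu += lr_g * gmu / batch
    mu_history.append(mu)

# Average the tail to smooth out the usual GAN oscillation.
mu_avg = sum(mu_history[-1000:]) / 1000
print(f"generator mean ~= {mu_avg:.2f} (target {REAL_MEAN})")
```

The generator never sees the real data directly; it only sees the discriminator's verdicts, which is why a reliable coherence filter for fiction would hand you a fiction generator almost for free.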




I mean, once you can generate the content -- good content -- cheaply enough, no one will pay for it.


The interesting thing is that there exist techniques to generate coherent and even interesting stories, that do not rely on copying large amounts of examples thereof:

https://thegradient.pub/an-introduction-to-ai-story-generati...

Unfortunately those are a) virtually unknown outside academic circles and b) about to die a death, because their space is now being taken over by a much more prolific technology that generates low-grade bullshit cheaply.

Life, innit. It goes in circles.


That's part of my concern with all the money being dumped into OpenAI et al. after they productized their generative models.

If all the money is in generative models that give "good looking" results, who is going to spend money investing in the techniques that don't produce as "good looking" results right now but that have potential beyond recombining their training data?



