By that definition, machine-to-machine communication that happens "organically" (the way humans sometimes strike up unprompted conversations with each other) is "slop".
You're not seeing how the future of the world will develop.
This won't solve anything. There are myriad sampling strategies, and they all share the same issue: samplers are dumb. They have no access to the semantics of what they're sampling. As a result, techniques like min-p or XTC will either overshoot or undershoot, because they can't differentiate between situations. For the same reason, samplers like DRY can't solve repetition issues.
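To make that objection concrete, here is a minimal sketch of min-p truncation in pure Python (toy logits; the p_base value is illustrative): the filter sees only a probability shape, and makes the same keep/drop decision whether the candidates are interchangeable synonyms or plot-critical alternatives.

```python
import math

def min_p_filter(logits, p_base=0.1):
    """Keep tokens whose probability is at least p_base * the max probability.
    The filter operates on numbers only; it has no idea what the tokens mean."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    cutoff = p_base * max(probs)
    return [i for i, p in enumerate(probs) if p >= cutoff]

# Two distributions with identical shapes get identical treatment,
# regardless of what the surviving tokens actually say.
print(min_p_filter([5.0, 4.5, 1.0, 0.5]))  # → [0, 1]
```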
Slop is the over-representation of a model's stereotypes and a lack of prediction variety in cases that need it. Modern models are insufficiently random where randomness is required. It's not just specific words or idioms; it's concepts at very different abstraction levels, from words to sentence patterns to entire literary devices. You can't fix issues that appear at the latent level by working with tokens. The antislop link you give seems particularly misguided, trying to solve an NLP task programmatically.
Research like [1] suggests algorithms like PPO as one of the possible culprits in the lack of variety, as they can filter out entire token trajectories. Another possible reason is training on outputs from the previous models and insufficient filtering of web scraping results.
And of course, prediction variety != creativity, although it's certainly a factor. Creativity is an ill-defined term like many in these discussions.
You should read the follow-up work from the Entropix folks, reflect on the extremely high review scores min_p is getting, or look at the fact that even trivial shit like top_k=2 + temperature = max_int works: all evidence that models do in fact "have access to the semantics of what they're sampling" via the ordering of their logprobs.
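For what it's worth, the top_k=2 + huge-temperature trick is easy to sketch (pure Python, illustrative values): the model's logprob ordering selects the two candidates, and the extreme temperature merely flattens the choice between them to roughly 50/50. Coherence survives because the ordering carries the semantics.

```python
import math
import random

def top_k_sample(logits, k=2, temperature=1e9):
    """top_k=2 with an extreme temperature: the logprob ordering picks
    the k candidates, then the near-uniform softmax chooses among them."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for idx, p in zip(top, probs):
        acc += p
        if r <= acc:
            return idx
    return top[-1]

# Only the top-2 tokens by logprob can ever be emitted, no matter
# how flat the temperature makes the distribution between them.
```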
DRY does in fact solve repetition issues; you're just not using the right settings. Set the penalty sky-high, like 5+. Yes, that means you'll have to modify ui_params in oobabooga, because they have stupid defaults on the limits you can set the knobs to.
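A simplified sketch of a DRY-style penalty (real implementations differ in details; the multiplier/base/allowed_len defaults here are illustrative, with the multiplier echoing the 5+ advice): any token that would extend a sequence already seen in the context gets penalized, exponentially in the length of the repeated run.

```python
def dry_penalty(context, logits, multiplier=5.0, base=1.75, allowed_len=2):
    """Illustrative DRY-style penalty: for each candidate token, find the
    longest run ending the current context that has previously appeared
    immediately before that token, and penalize runs beyond allowed_len."""
    penalized = list(logits)
    n = len(context)
    for tok in range(len(logits)):
        best = 0
        for pos in range(n):
            if context[pos] != tok:
                continue
            # length of the matching run that both precedes `pos`
            # and ends the current context
            run = 0
            while run < pos and context[pos - 1 - run] == context[n - 1 - run]:
                run += 1
            best = max(best, run)
        if best > allowed_len:
            penalized[tok] -= multiplier * base ** (best - allowed_len)
    return penalized

# Context "1 2 3 4 1 2 3": only token 4, which would repeat the
# "1 2 3 -> 4" pattern, gets pushed down; everything else is untouched.
```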
There are several other excellent samplers that deserve high-ranking papers and will get them in due time: constrained beam search, TFS (oldie but goodie), mirostat, typicality, top_a, top-nσ, and more coming soon. Don't count out sampler work. It's the next frontier and the least well appreciated.
Also, contrastive search is pretty great. Activation/attention engineering is pretty great, and models can in fact be made to choose their own sampling/decoding settings, even on the fly. We haven't even touched on the value of constrained/structured decoding. You'll probably link another paper as bad as the previous one, claiming that this too harms creativity. Good thing the folks who actually know what they're doing, i.e. the developers of outlines, pre-bunked that paper for me already: https://blog.dottxt.co/say-what-you-mean.html
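The core move of constrained/structured decoding is simple to sketch (this is the principle only, not outlines' actual API): mask every token the grammar or schema forbids at the current step to -inf, then sample from what remains as usual.

```python
import math

def constrain(logits, allowed_ids):
    """Set every forbidden token's logit to -inf so it can never be
    sampled; the allowed set would come from a grammar/regex/schema
    tracker advancing one step per emitted token."""
    return [l if i in allowed_ids else -math.inf for i, l in enumerate(logits)]

# e.g. a schema that only permits tokens 1 and 3 at this position:
masked = constrain([1.2, 0.3, -0.5, 2.0], allowed_ids={1, 3})
# only indices 1 and 3 remain sampleable
```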
I'm so incredibly bullish on AI creativity and I will die on the hill that soon AI systems will be undeniably more creative, and better at extrapolation, than most humans.
Getting slop generations from an LLM is a choice. There are so many tricks to make models genuinely creative at the sampler level alone.
https://github.com/sam-paech/antislop-sampler
[1] https://openreview.net/forum?id=FBkpCyujtS