I thought about it - a quick way to verify whether something was created with an LLM is to feed an LLM half of the text and then let it complete token by token. At every step, check not just the single most probable next token but the top n most probable tokens. If one of them is the token that actually appears in the text, accept it and continue. This way, I think, you can measure how often the model is "correct" at predicting text it hasn't yet seen.
I haven't tested it and I'm far from an expert, so maybe someone can challenge it?
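To make the idea concrete, here's a minimal sketch of the scoring loop. It uses a toy bigram model fit on the first half of the text as a stand-in for a real LLM's next-token distribution; the function name and the whitespace tokenization are my own assumptions, not anything the comment specifies.

```python
from collections import Counter, defaultdict

def top_n_hit_rate(text: str, n: int = 5) -> float:
    """Split text in half, fit a toy bigram model on the first half,
    then walk the second half counting how often the actual next token
    appears among the model's n most probable continuations.
    A real test would query an LLM's token probabilities instead."""
    tokens = text.split()
    half = len(tokens) // 2
    seen, unseen = tokens[:half], tokens[half:]

    # Bigram counts stand in for the LLM's next-token distribution.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(seen, seen[1:]):
        bigrams[prev][nxt] += 1

    hits = checks = 0
    prev = seen[-1]
    for actual in unseen:
        candidates = [tok for tok, _ in bigrams[prev].most_common(n)]
        if candidates:  # only score contexts the model has seen
            checks += 1
            if actual in candidates:
                hits += 1
        prev = actual
    return hits / checks if checks else 0.0
```

A highly repetitive text scores near 1.0, while text whose second half diverges from the first scores near 0.0; the hypothesis is that LLM-generated text would land noticeably higher than human text under a real model.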
This is nice of you. I just want to say to the GP that it's mostly random, IMHO. Survivorship bias is hard: you hear about only what you hear about, not all the great stuff that went nowhere. It's a matter of large numbers, so don't give up, play the long game.
There is no such thing as over-regulation, just regulation done wrong. And the solution to a bad regulation might be a better regulation rather than no regulation at all.
Adobe Animate (née Flash) still exists and exports HTML5-ready MP4s now. Which, as an actual user of Macromedia Flash and dabbler in Newgrounds uploads, is a much better situation. Flash the plugin sucked shit and everyone knew it, including Tom Fulp.
> Beats me. AI decided to do so and I didn't question it. I did ask AI to look at the OxCaml implementation in the beginning.
This shows that the problem with AI is philosophical, not practical.