The funny part is that Google has all the edit-history data. In other words, it's a piece of cake for them to train a model that mimics the human editing process.
The only thing preventing them from doing so is that Google is too big to sell a "plagiarism assistant."
So the model is going to spend hours hesitantly typing in a Google Doc, moving paragraphs around, cutting and pasting, reworking sentences, etc., so that the timestamped history matches up with something a human could realistically have done?
I’m very tempted to write a tool that emulates human composition and types in assignments in a human-like way, just to force academia to deal with its issues sooner.
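A minimal sketch of what that could look like, assuming pyautogui for the simulated keystrokes; the timing model, probabilities, and function name here are made up for illustration, not a real tool:

```python
import random
import time

import pyautogui  # real library; drives actual OS-level keystrokes


def humanlike_type(text: str, wpm: int = 45) -> None:
    """Type `text` with jittered, human-ish timing (illustrative only)."""
    base_delay = 60.0 / (wpm * 5)  # ~5 characters per word
    for ch in text:
        pyautogui.typewrite(ch)
        # Jitter every keystroke; pause longer at word and sentence boundaries.
        delay = random.gauss(base_delay, base_delay / 3)
        if ch == " ":
            delay += random.uniform(0, base_delay)
        elif ch in ".!?":
            delay += random.uniform(0.5, 2.0)  # "thinking" pause
        time.sleep(max(0.01, delay))
        # Occasionally fake a typo-and-correction (rate is invented).
        if random.random() < 0.02:
            pyautogui.press("backspace")
            time.sleep(random.uniform(0.1, 0.4))
            pyautogui.typewrite(ch)
```

The pauses cluster at word and sentence boundaries because that's where real writers hesitate; a version meant to fool edit-history analysis would also have to replay larger revisions, like the paragraph moves and rewrites mentioned above.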
Seems like if we can't find some consensus on regulation, the best option is to create protective mechanisms: attitudes, perhaps, or some barriers (through norms?) that keep out the slop.
Is "we" the two of us? I imagine we'd have some disagreements... so what happens when it's 2 countries? The world is supposed to agree?
> the best option is to create protective mechanisms
Humans have shown we don't choose the best option, especially when those mechanisms would slow down other goals.
> that keep out the slop
The slop was here before; it was just human-produced. Now we have way more of it. We've never been good at controlling slop/spam/junk online, and I can't see how the AI situation is any easier to regulate.
This is a major risk factor that the LLM providers aren't contending with adequately, from chatbot addiction to the risk of manipulation. There are a lot of "AI safety" issues that have little to do with potential superintelligence.
I'm glad you're having fun with it. It was definitely built just for entertainment purposes and to explore the fringes of generative, networked (if hallucinated) knowledge production.