Hacker News | johanam's comments


edit history in Google docs is a good way to defend yourself from AI tool use accusations


The funny part is that Google has all the edit history data. In other words, it's a piece of cake for them to train a model that mimics the human editing process.

The only thing preventing them from doing so is that Google is too big to sell a "plagiarism assistant."


So the model is going to spend hours hesitantly typing in a Google Doc, moving paragraphs around, cutting and pasting, reworking sentences, etc., so that the timestamped history matches up with something a human could realistically have done?


I’m very tempted to write a tool that emulates human composition and types in assignments in a human-like way, just to force academia to deal with their issues sooner.
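Purely as a thought experiment, here's a minimal sketch of what the core of such an emulator might look like: it turns a finished text into a plausible keystroke timeline with variable inter-key delays, occasional typo-and-backspace corrections, and pauses at sentence boundaries. All names and parameters here are my own invention, not any real tool's API.

```python
import random

def human_typing_events(text, wpm=45, typo_rate=0.03, seed=0):
    """Yield (timestamp, action, char) events that mimic a human
    typing `text`: jittered inter-key delays, occasional typos
    followed by a correcting delete, and pauses after sentences."""
    rng = random.Random(seed)
    # Average seconds per character, assuming ~5 characters per word.
    base_delay = 60.0 / (wpm * 5)
    t = 0.0
    for ch in text:
        t += rng.uniform(0.4, 1.8) * base_delay
        if rng.random() < typo_rate and ch.isalpha():
            wrong = rng.choice("abcdefghijklmnopqrstuvwxyz")
            yield (t, "insert", wrong)
            t += rng.uniform(0.2, 0.6)       # time to notice the mistake
            yield (t, "delete", wrong)
            t += rng.uniform(0.4, 1.8) * base_delay
        yield (t, "insert", ch)
        if ch in ".!?":
            t += rng.uniform(1.0, 5.0)       # pause to "think"

# Replaying the events reconstructs the original text exactly.
events = list(human_typing_events("Hello world. This is a test."))
buf = []
for _, action, ch in events:
    if action == "insert":
        buf.append(ch)
    else:
        buf.pop()
final = "".join(buf)
```

Feeding these events to a document API at the generated timestamps would produce an edit history that looks hand-typed, which is exactly why timestamp-based provenance is a weak defense.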


Ironic that one of the biggest AI companies is also the platform offering a service that protects you from allegations of using it.


some argue that we've already achieved it, albeit in minimal form: https://www.noemamag.com/artificial-general-intelligence-is-...

but in reality, it's a vacuous goalpost that can always be kicked down the road.


AI-generated text is like a plume of pollution spreading through the web. There's little we can do to keep it at bay. Perhaps transparency is the answer?


do you have a blog?


No lol I have thought of starting one, but I am not a self-promoting type, so I'm not even sure how to get it off the ground.


the whole book is available for free here: https://whatisintelligence.antikythera.org/


Thanks! We've changed the top URL to that from https://mitpress.mit.edu/9780262049955/what-is-intelligence/.


I'm so confused why a $36.95 purchase page is a Hacker News headline, especially when your link is clearly what they should have used.


what alternative mechanism would you propose?

It seems like if we can't reach some consensus on regulation, the best option is to create protective mechanisms: attitudes, perhaps, or barriers (through norms?) that keep out the slop.


> if we can't find some consensus on regulation

Is "we" the two of us? I imagine we'd have some disagreements... so what happens when it's 2 countries? The world is supposed to agree?

> the best option is to create protective mechanisms

Humans have shown we don't choose the best option, especially when these mechanisms will slow down other goals.

> that keep out the slop

The slop was here before; it was just human-produced. Now we have way more slop. I don't think we've ever been good at controlling slop/spam/junk online, and I can't see how this AI situation is any easier to regulate.


Couldn't agree more.


This is a major risk factor that the LLM providers aren't adequately contending with, from chatbot addiction to the risk of manipulation. There are a lot of "AI safety" issues that have little to do with potential superintelligence.


I'm glad you're having fun with it. Was definitely just built for entertainment purposes and to explore the fringes of generative, networked (if hallucinated) knowledge production.

