I think this could be dangerous: a generated answer might be 100% wrong yet look correct, and the asker might accept it anyway. The more that happens, the worse the entire site becomes. I tend to agree with keeping as much AI-generated content out as possible, simply because wrong AI answers are much harder to spot than wrong human ones. Just my opinion, though.



You could just make the AI answer private to the asker. That way there's hopefully minimal harm to the repository of answers, it just stops containing questions that are easy enough for the AI to handle.


From the perspective of AI safety (which may become important soon), it is perhaps a very bad idea to let AI agents suggest the execution/injection of arbitrary code into random places, reviewed only by a single human who needed help from Stack Overflow in the first place.

[And don’t believe ChatGPT when it claims it is only a language model. It is not. It is an RL agent trained with PPO.]
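
Roughly, the PPO update I mean clips the policy-ratio term so each training step can't move the policy too far from the one that generated the samples. A minimal PyTorch sketch; the function name and tensor shapes are my own illustration, not OpenAI's actual code:

    import torch

    def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
        # Ratio between the current policy and the behavior policy
        # that produced the sampled actions.
        ratio = torch.exp(logp_new - logp_old)
        # Clipped surrogate objective (Schulman et al., 2017):
        # updates are cut off once the ratio leaves [1 - eps, 1 + eps].
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
        # PPO maximizes the minimum of the two; negate to get a loss.
        return -torch.min(unclipped, clipped).mean()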


That's a fair point. Does the same argument apply to GitHub Copilot / ChatGPT?

Does it apply to things that aren't programming? E.g., people are already using these AIs for legal work.


One of the first steps of a runaway AI attempting to secure its presence would likely be gaining inference/GPU/TPU compute access. Code injection is one vector; there are multiple others.

But, perhaps considering AI as an adversary is a bad idea. And alignment along the “Love is all you need” lines is the actual solution. Tricky problem…


I was laughing last night when I realized that even if you want to make an AI that only knows and cares about maximizing paperclip output, you also need to teach it to love all humans unconditionally to prevent the apocalypse.


> You could just make the AI answer private to the asker.

So then the asker gets the wrong answer and no one else can correct it? Seems even worse.


That doesn't solve the problem that GPT-based AIs don't "know" when they're wrong. They will happily spit out authoritative-looking nonsense when presented with questions they don't know how to answer, and the people asking those questions may not be able to recognize the answers as wrong.


I’ve used plausible-but-wrong human code too. I understand the difference is scale, but it’s worth remembering that people write answers like this as well.


Where scale really comes into play and gets scary is when bots can vote on other bots. Right now, most wrong human answers are pretty rapidly downvoted or corrected. But the supply of humans to make incorrect posts is limited and relatively balanced by the supply of humans to downvote them. Subtle errors in AI posts could become so widespread that it's impossible to counter them effectively.
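
A toy back-of-the-envelope model of that imbalance (all the rates and the fraction_uncorrected helper are made up for illustration, not real Stack Overflow data):

    def fraction_uncorrected(wrong_posts_per_day, reviews_per_day, days=1000):
        """Toy model: wrong posts arrive at a fixed rate; reviewers can
        check only so many per day. Returns the share of wrong posts
        still sitting unreviewed at the end."""
        backlog, total = 0, 0
        for _ in range(days):
            total += wrong_posts_per_day
            backlog += wrong_posts_per_day
            backlog -= min(backlog, reviews_per_day)
        return backlog / total

    # Human-scale error rate: reviewers keep up, nothing lingers.
    print(fraction_uncorrected(wrong_posts_per_day=50, reviews_per_day=60))   # 0.0
    # Bot-scale error rate: most wrong answers are never reviewed.
    print(fraction_uncorrected(wrong_posts_per_day=500, reviews_per_day=60))  # 0.88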


They'd probably present it differently from human answers.

If people really want to be careless, they can already get AI-generated code from Copilot or ChatGPT on their own. I don't think this would be worse than that.


I've banned this account because of the Hitlerian (really?!*) URL in the profile. You can't propagate that stuff on HN. We'd ban an account for doing that in comments, and a profile is no different.

It's a pity, because you've also posted good comments and I think the proportion of good comments has been getting better over time, which is great, but that doesn't make things like the above ok. Also, you have a history of using this site for ideological battle and we don't want that here—it's not what this site is for, and destroys what it is for.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.

* Edit: on closer look, I can't tell if it might have been a bad joke instead.


As someone who has helped people, a hundred or more times on S.O., work through the nuanced reasons why specific SQL queries optimize better or worse under various scenarios, I think the reason they turn to a human for advice is that they place more value on an answer to their specific question, one that can't be generated by AI.
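
To make that concrete, here are two queries that return the same rows but optimize completely differently; that gap is exactly the kind of nuance askers want a human to explain. A sketch against a hypothetical table, using SQLite's EXPLAIN QUERY PLAN:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # Hypothetical schema, purely for illustration.
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
    con.execute("CREATE INDEX idx_customer ON orders(customer_id)")

    for sql in (
        "SELECT * FROM orders WHERE customer_id = 42",      # planner can use the index
        "SELECT * FROM orders WHERE customer_id + 0 = 42",  # expression defeats the index
    ):
        print(sql)
        for row in con.execute("EXPLAIN QUERY PLAN " + sql):
            print("   ", row)  # first query: SEARCH ... USING INDEX; second: SCAN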



