
I wouldn't even mind so much if the answers were right. The problem is that a lot of them are totally wrong, but completely reasonable- and plausible-sounding, and in an authoritative tone, so unless you already know the right answer, the only way you'll realize its answer is wrong is the hard way.


That's the worst part. It's all fun and games when it tells a story of cheese sandwiches and VCRs in the style of the King James Bible, but when it gives wrong answers in an authoritative tone and then insists it absolutely can't be wrong, it's terrifying.

I don't understand what good could come of this. At the very least, they should make it detect what is fiction and what isn't.

People are treating it like it's Wikipedia, but it's not. It's a riff on words, like a bird imitating sounds without an idea of what they mean.


Just imagine the hell we'd be in if people could give wrong answers in an authoritative tone and then insist they absolutely can't be wrong!


For some reason, we assume what comes out of a computer is more trustworthy than what people say. We think computers are transparent, reliable, idempotent and don't have an agenda. Even more so if we call it "intelligent"...

But ChatGPT is a bullshit machine, and that much is new.


At least the good part of the answers being on Stack Overflow is that, like they used to say, "On the internet no one knows you're a dog." So whether the answer came from ChatGPT or an aggressively overconfident fool, a wrong answer should get the same downvotes regardless, and a correct answer should get the same upvotes. Probably the two biggest issues with ChatGPT being used to provide answers are whether it's wrong often enough to start swinging the experience of the site negative, and, more importantly, that some people are getting fake internet points unfairly.


Who's "we"? :)

To the extent this perception exists -- and I don't think "came from a computer" falls within the top 5 actually effective methods of laundering bullshit nowadays, though maybe it used to -- you might expect that it gets crushed into dust as the public gets more exposure to high-profile counterexamples.

And, wait, isn't the concern usually that people read AI-generated content and trust it but don't think it came from a computer?


Wikipedia couldn't be trusted for the first decade after it came out, and now you have people using it as an example of a trusted resource


This trust arose through a sophisticated bureaucracy of checks and balances. Stackoverflow isn't quite as complex.


Well, at least a human had to put in the work to write it. Now you can automate this low tier content.


At least one upside of answering StackOverflow questions is that (at least for now) the presumably human who asked the question will attempt the solution provided and mark the answer as correct if it worked.

If the AI spits out some gobbledygook that doesn't even work, the answer will probably be downvoted.

However, that doesn't mean the answer will necessarily be high quality or bug-free; the same can be said about answers given by a human. At the same time, if it's stupid but it works...


Maybe it's an opportunity for people to get better at critically reviewing the information they are presented with. Maybe they'll learn that just because they are being told something authoritatively doesn't mean it's right?

And so what if people misuse the tool while they learn? What's wrong with being wrong?


We learn by example. What this tool teaches is that self-confidence is more important than content. And maybe it is, in life in general! But not in the pursuit of knowledge.

Instead of patronizing users by adding a stupid sentence at the end of each answer reminding them to be careful, it could say every time: "I'm just a machine and what I say is random; if it's true, it's by accident; it's most probably wrong and only sounds like the truth."

Here's an example I posted in another thread, where the machine is repeatedly mistaken but affirms it absolutely, positively can't make a mistake:

https://news.ycombinator.com/item?id=33852236


My logic is undeniable


What you said may sum up the current state of AI. People's minds are continually being blown, but will there be a realization that these AIs are specialized to provide specious output and nothing else? There's a canyon of difference between something that sounds correct and a thing that is actually correct.


Exactly. ChatGPT sounds like a bad student who didn't actually learn anything during the year and is trying to bullshit their way through the finals. Or a politician, maybe. It's formalizing the worst traits of humanity.


Yes. I spent yesterday afternoon trying to get ChatGPT to write me a quine in nroff/troff macros, something I've not been able to do myself or to find that anyone else has done.

The generated quines look like they'd work, but don't.

Same with an M4 macro processor quine - looks maybe correct, doesn't work.

It did generate a Go quine.
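
For reference, here's a minimal Go quine of the standard %q variety -- not the exact program from that session, and squashed onto single lines so the source matches its own output byte for byte (it has no comments, because any text not reproduced by the format string would break the self-reproduction; gofmt would of course reformat it):

    package main
    import "fmt"
    func main() { s := "package main\nimport \"fmt\"\nfunc main() { s := %q; fmt.Printf(s, s) }\n"; fmt.Printf(s, s) }

The %q verb prints the string as a quoted Go literal, so passing the format string to itself reproduces the source exactly.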


>> It did generate a Go quine.

Because there are several of them on the web:

https://duckduckgo.com/?t=ffcm&q=Go+quine&atb=v344-1&ia=web


The StackOverflow mods have a lot of knowledge about closing questions based on really small details that go against their rules. I'm sure they will do well spotting AI-generated answers.


I doubt it. They've had problems with cheap automated answers for years where bots would essentially search for the question on SO and then copy an answer from another question verbatim. The answers were rarely useful because questions happen to be different even though they use the same keywords.

Not only did they never bother to block that, they also didn't mind it and wanted to rely on the community downvoting those answers instead of at least blocking the bot -- and that's with a trivial check available (identical answer already in the DB). With something that's AI-generated, there's no chance. And with the general quality of many of the answers, there's no way to tell whether a wrong answer came from an AI or a human.
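
The "trivial check" really is trivial. Here's a rough sketch in Go of the kind of thing that could have caught those copy-paste bots -- the names and the in-memory set are made up for illustration; a real version would check a hash column on the answers table instead:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "strings"
    )

    // fingerprint collapses whitespace and case before hashing, so a
    // verbatim (or near-verbatim) repost of an existing answer maps to
    // the same key.
    func fingerprint(body string) string {
        normalized := strings.ToLower(strings.Join(strings.Fields(body), " "))
        sum := sha256.Sum256([]byte(normalized))
        return hex.EncodeToString(sum[:])
    }

    func main() {
        // seen stands in for an indexed hash column on the answers table.
        seen := map[string]bool{}
        seen[fingerprint("Use strconv.Atoi to parse the string.")] = true

        incoming := "use  strconv.Atoi   to parse the string."
        if seen[fingerprint(incoming)] {
            fmt.Println("identical answer already in DB -- flag for review")
        } else {
            fmt.Println("new answer")
        }
    }

None of that helps against AI-generated answers, of course, since every one of them is unique text.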



