
I just saw a post on a financial forum where someone was asking for advice on investing in individual stocks vs ETFs vs investment trusts (a type of closed-end fund); the context is that the tax treatment of ETFs in Ireland is weird.

Someone responded with a long post showing scenarios for each, which looked superficially authoritative... but on closer inspection, the tax treatment was wrong, the numbers were wrong, and it was comparing a gain from stocks held for 20 years with ETFs held for 8 years. When someone pointed out that they'd written a page of bullshit, the poster replied that they'd asked ChatGPT, and then started going on about how it was the future.

It's totally baffling to me that people are willing to see a question that they don't know the answer to, and then post a bunch of machine-generated rubbish as a reply. This all feels terribly dangerous; whatever about on forums like this, where there's at least some scepticism, a lot of laypeople are treating the output from these things as if it is correct.






Seen this with various users jumping into GitHub issues, replying with what seem like well-written, confident, authoritative answers. Only on looking closer, it’s referencing completely made-up API endpoints and settings.

It’s like garbage wrapped in a nice shiny paper, with ribbons and glitter. Looks great, until you look inside.

It’s at the point where, if I hear LLMs or ChatGPT, I immediately associate it with garbage.


However, it is a handy way to tell which users have no qualms about being deceptive and/or who don't care about double checking.

I share your frustration dealing with these morons. It's an advanced evolution of the redditoresque personality that feels the need to have a say on every subject. ChatGPT is an idiot amplifier. Sure, it's nice for small pieces of sample code (if it doesn't make up nonexistent library functions).

Compounding the problem is that Reddit-esque online culture rewards surface level correctness and black and white viewpoints so that stuff gets upvoted or otherwise ranked highly and eaten by the next generation of AI content scrapers and humans who are implementing roughly the same workflow.

Man, Reddit loves surface-level BS. And then the AI bots repost it to look like legit accounts, and it generates a middlebrow BS consensus that has no basis in fuckin anything.

If it weren't for the fact that Google and/or Discord are worse, I'd have abandoned Reddit ages ago.


Parent voted up for the wonderful phrase "ChatGPT is an idiot amplifier". May I quote you, Sir?

Or how about a lawyer and fake court cases? This was over a year ago: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer...

Tangential, but related anecdote. Many years ago, I (a European) had booked a journey on a long distance overnight train in South India. I had a reserved seat/berth, but couldn't work out where it was in the train. A helpful stranger on the platform read my ticket, guided me to the right carriage and showed me to my seat. As I began to settle in, a group of travellers turned up and began a discussion with my newfound friend, which rapidly turned into a shouting match until the train staff intervened and pointed out that my seat was in a completely different part of the train. The helpful soul by my side did not respond by saying "terribly sorry, I seem to have made a mistake" but instead shouted racist insults at his fellow countrymen on the grounds that they visibly belonged to a different religion to his own. All the while continuing to insist that he was right and they had somehow tricked him or cheated the system.

Moral: the world has always been full of bullshitters who want the rewards of answering someone else's question regardless of whether they actually know the facts. LLMs are just a new tool for these clowns to spray their idiotic pride all over their fellow humans.


> LLMs are just a new tool for these clowns to spray their idiotic pride all over their fellow humans.

While I agree, that's a bit like saying the nuclear bomb was just a novel explosive device. Yes, but the scale of it matters.


> It's totally baffling to me that people are willing to see a question that they don't know the answer to, and then post a bunch of machine-generated rubbish as a reply.

Because ChatGPT has been sold as more than it is. It's been sold as being able to give real answers, instead of "having a bunch of data, some of which is accurate".


It's a fantastic "starting point" for asking questions. Ask, get answer, then check to see if the answer is right. Because, in many cases, it's a lot easier to verify an answer is right/wrong than it is to generate the answer yourself.
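A toy sketch of that asymmetry in Python (the factorisation example is purely illustrative, not something from the thread): checking a claimed answer is a one-line multiplication, while producing the answer yourself takes real work.

    # Verifying a claimed answer: multiply the factors back together.
    def verify_factorisation(n, factors):
        product = 1
        for f in factors:
            product *= f
        return product == n

    # Generating the answer yourself: naive trial division, much more work.
    def factorise(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(verify_factorisation(84, [2, 2, 3, 7]))  # True
    print(factorise(84))                           # [2, 2, 3, 7]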

> It's been sold as being able to give real answers, instead of "having a bunch of data, some of which is accurate".

So, basically, exactly like human beings. Until human-written software stops having bugs, doctors stop misdiagnosing, soft sciences stop having replication crises, and politicians stop making shit up, I'm going to treat LLMs exactly as you should treat humans: fallible, lying, hallucinating machines.


I doubt any human would write anything as nonsensical as what the magic robot did in this case, unless they were schizophrenic, possibly. Like, once you actually read the workings (rather than just accepting the conclusion) it made no sense at all.

Yes, but here we are on HN; I don't expect the average Joe to realize immediately that any LLM could spew lies without even realizing that what it's saying might not be true.

It's not a new problem.

"On two occasions I have been asked [by members of Parliament!], `Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." --Charles Babbage


I think that one's a _slightly_ different type of confusion about a machine. With an LLM, of course, even if you provide the right input, the output may be nonsense.

Searching for validation without being an actual expert on the topic, and without doing the hard work of actually evaluating things and sorting them out so they're understandable. Which, very often, is genuinely hard to do.

How is that any different, though, from regular false or fabricated information gleaned from Google, social media or any other source? I think we crossed the Rubicon on generating nonsense faster than we can refute it long ago.

Independent thinking is important: it's the vaccine for bullshit. Not everybody will subscribe or get it right, but if enough do, we have herd immunity from lies and errors. I think that was the correct answer and will be the correct answer going forward.


> How is that any different though from regular false or fabricated information gleaned from Google, social media or any other source?

This was such obvious nonsense that a human could only have written it maliciously. In practice, you won't find much of that, at least on topics like this.

And I think people, especially laypeople, do tend to see the output of the bullshit generating robot as authoritative, because it _looks_ authoritative, and they don't understand how the bullshit generating robot works.


> How is that any different though from regular false or fabricated information gleaned from Google, social media or any other source?

It lowers the barrier to essentially nothing. Before, you'd have to do work to generate 2 pages of (superficially) plausible-sounding nonsense. If it was complete gibberish, people would pick up on it very quickly.

Now you can just ask some chatbot a question and within a second you have an answer that looks correct. One has to actually delve into it and fact check the details to determine that it's horseshit.

This enables idiots like the redditor quoted by the parent to generate horseshit that looks fine to a layman. For all we know, the redditor wasn't being malicious, just an idiot who blindly trusts whatever the LLM vomits up.

It's not the users that are to blame here, it's the large wave of AI companies riding the sweet capital who are malicious in not caring one bit about the damage their rhetoric is causing. They hype LLMs as some sort of panacea - as expert systems that can shortcut or replace proper research.

This is the fundamental danger of LLMs. They have crossed the uncanny valley. It requires a person of decent expertise to spot the mistakes they generate, and yet the models are being sold to the public as robust tools. And the public tries the tools and, in the absence of being able to detect the bullshit, uses them and regurgitates the output as fact.

And then this gets compounded by these "facts" being fed back in as training material to the next generation of LLMs.


Oh, yeah, I’m pretty sure they weren’t being malicious; why would you bother, for something like this? They were just overly trusting of the magic robot, because that is how the magic robot has been marketed. The term ‘AI’ itself is unhelpful here; if it were marketed as a plausible text generator, people might be more cautious, but as it is they’re led to believe it’s thinking.

This is corporate life.



