In other words, everyone should assume that it's lying at all times. Nobody should be using a product that both lies to you and then lies to you about the fact that it's lying to you.


Stating a falsehood without intent to deceive is just incorrectness, not a "lie". Let's dial the hyperbole back a few notches.


If the people running the platform advertise a product as producing truths while it delivers falsehoods, and pathetically refuse to label those falsehoods as falsehoods, that's a lie.


There is a bit of text right on the chat box above every conversation that says, verbatim, "may occasionally generate incorrect information".

Determining truth is not in the wheelhouse of a large language model, and it is not defective for failing to do so, in the same way a table saw is not defective for failing to drive nails. Adjust your expectations accordingly.


> "may occasionally generate incorrect information"

The point of this thread is twofold: 1) this is a dramatic understatement, since ChatGPT's output is not just occasionally incorrect but usually incorrect in some dimension; and 2) in the absence of any fine-grained confidence value for individual statements, you must pessimistically assume that every statement is incorrect until proven otherwise, which dramatically reduces the tool's utility compared to how its fanboys portray it, as the second coming of Christ.


The first point is a bald-faced assertion backed only by anecdotal evidence; the second is a reductio ad absurdum. It is absurd because if you uncritically accept anything you read online without, say, validating it against common sense, your own experience and knowledge, and so forth, the problem is with the reader, not the words.

You and the author of this article are constructing a false dichotomy in which there is no middle ground between "usually incorrect" (which is hyperbolic nonsense, trivially falsified by five minutes of using it) and "always correct" (which even your straw "fanboys" have not claimed, to my knowledge), and then using that dichotomy to set up another: the pretense that the only ways to act on information read from a computer screen are uncritical acceptance or presuming it's all bullshit.

Neither of these models is accurate, and neither bears any relation to how people in the real world actually interact with information online or with ChatGPT.

Furthermore, the "labeling of falsehoods" you insist on is not something we can do accurately anyway, let alone in the context of a language model that has no concept of truth or falsehood in the first place! You are asking for something completely unreasonable, and I can't tell whether you're doing it out of ignorance or bad faith.


That would depend on a) how OpenAI are marketing it, and b) what disclaimers/etc. they put in the UI.


It's not lying about the fact that it lies; there's a disclaimer.

It's the user's responsibility to verify the truth. The model was trained on the internet, and everyone knows not to believe everything they see on the internet. This isn't any different.


I was looking for the disclaimer comment. They do genuinely mention on their page that it's not fit for consumption, although the same is true of many services we use regularly.


And you don't see how this makes it useless as a paid product? Truth is the product. An AI chatbot that makes no pretensions to the truth might as well be a Markov chain for all the good it does me. Are people really so blinded by the song and dance of this mechanical monkey that they can't see through the ruse?


For many of us truth is not the product.

For a lot of people it's a faster way to google something you're familiar with, a more convenient stackoverflow.

When used right, its hallucinations don't matter, and you get quick drafts of things that would have been tedious to write by hand. Like one-off shell scripts.
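
For instance, here's a minimal sketch of the kind of throwaway task I mean (in Python rather than shell, since that's easier to show here, and with a made-up folder name). Any mistake the model made in drafting something like this surfaces the moment the script runs:

    # One-off: rename every .JPG in a folder to lowercase .jpg,
    # prefixing each name with its modification date.
    # "photos" is a placeholder path for illustration.
    import datetime
    import pathlib

    for p in pathlib.Path("photos").glob("*.JPG"):
        stamp = datetime.datetime.fromtimestamp(p.stat().st_mtime).strftime("%Y%m%d")
        p.rename(p.with_name(f"{stamp}_{p.stem}.jpg"))

The failure mode here is cheap: if the model hallucinates an API, the script errors out immediately instead of silently misleading anyone.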

If your use case necessitates truth, then I see why you don't think it's worth paying for.

However, this is the only digital product I'm paying a subscription for. It's faster than googling and trudging through useless results, ads, and filler blog content, so it's worth every cent.



