Hacker News

How sure are you it's not lying to you? I asked ChatGPT to write a description of common plasma cutter table features, and it didn't know the difference between initial height sensing and torch height control.

I closed the browser tab and haven't gone back since.




In many contexts it's easy to verify what ChatGPT tells you, and there are ways to use it as a tool that don't require it to always be right to be useful.

For example, the other day I asked it something about the Flask codebase, and it found the relevant part of the codebase immediately. When I asked it about the behavior of the code, it wasn't always correct, but it still showed me the relevant code, which I could read myself far faster than I would have found it on my own.

Initially my impression of ChatGPT was the same as yours: I asked it some questions in a specialized domain I know well, and when it was wrong I decided ChatGPT was useless. But after enough people told me they found it useful, I took another look and tried finding more applications. Since then I've been impressed by what it can do.


I can definitely see it being good for assisted learning. Quickly grokking codebases seems to be one of its most popular uses.


I just used it to write some tests; then I implemented the code and the tests failed. It went on to explain what my setup was missing. For coding, v4 has become pretty accurate, and if it makes a mistake it's able to explain what went wrong.

For certain medical conditions, I've asked it to list the relevant studies and explain them, and it does that pretty well.

But just as when talking with a human, googling info on a website, or reading Stack Overflow, I always assume I need to double-check it.



