Many (most?) people who try it, lay and expert alike, agree there is something qualitatively different going on: it responds in a way that is coherent enough to be useful in certain contexts. I am not aware of previous instances where this was true to the same extent, although I would be happy to be proven wrong!

I think it's fair to be skeptical of the economic utility in the near term, but cautiously optimistic about the longer-term potential: defining, and then expanding on, whatever it is that causes people to perceive it this way.



It just keeps telling me it can't do this or that. I think I asked who the tallest people in some country are, and it came back and said that information was off limits.

Its usefulness is similar to searching Google and reading the answer box at the top, if that box weren't filtered.

You can ask it for a story and it will give you a weak rewording of your prompt.

There is nothing more meaningless than talking to it. It's a void.


Not sure if you meant this comment rhetorically or not, but try asking ChatGPT to do something useful: convert some text into JSON, tell you which conjugation of a verb to use, or give feedback on an email you're writing. If you treat it like a toy it will be disappointing, but it's useful as a tool.
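To make the JSON case concrete, here's roughly what that request looks like against the OpenAI chat API. Everything in this sketch (model name, prompt, input text) is illustrative, not something from this thread; it assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable:

    # Sketch: text -> JSON via the OpenAI chat API (pip install openai).
    # Model name and prompt are illustrative assumptions.
    import json
    import openai

    text = "Alice, 34, engineer. Bob, 28, designer."
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Convert the user's text into a JSON array of "
                        "objects with keys name, age, job. "
                        "Reply with JSON only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # reduce run-to-run variation
    )
    # The model usually complies with "JSON only", but this parse can
    # still fail if it wraps the output in prose, so don't trust it blindly.
    print(json.loads(resp.choices[0].message.content))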


It's not very useful unless you only want to write a single isolated class in Java. Every time I refine my prompt I get a completely different answer, and often it just repeats answers it gave me earlier, which wipes out the improvements made in the meantime. It produces bad code faster than you can check it.

When you ask it to do something simple like playing chess, it whips out crazy illegal moves; "the omnipotent f6 pawn" is already an internet meme [0]. You might now claim that it is a language model and shouldn't be expected to play chess, but this is a very real limitation that you are going to pretend doesn't exist later on when it comes to "language tasks". If it can't follow basic rules that are purely defined by logic, like rooks only moving horizontally or vertically, or bishops only moving diagonally, then how on earth can it possibly do something that actually requires complex reasoning?
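Legality, at least, is trivial to check by machine, which is what makes this failure so easy to reproduce. A minimal sketch using the python-chess library (my choice for illustration; the move list is a made-up transcript that ends with the meme move, not an actual ChatGPT game):

    # Sketch: replay a model's suggested moves and flag the first illegal
    # one, using python-chess (pip install chess). After 3.exf5 Black has
    # no f-pawn, so the final "f6" summons a pawn out of nowhere.
    import chess

    suggested = ["e4", "f6", "d4", "f5", "exf5", "f6"]

    board = chess.Board()
    for san in suggested:
        try:
            board.push_san(san)  # raises ValueError for an illegal move
        except ValueError:
            print(f"illegal move {san} in position {board.fen()}")
            break
    else:
        print("all moves were legal")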

The problem with these large language models is that it gets harder and harder to notice the limitations, but the limitations themselves stay the same regardless of the size of the model. So every iteration you get to hype up more people about the same thing, and they will make the same bad projections into the future as everyone else did.

[0] https://www.reddit.com/r/AnarchyChess/comments/10yym4o/why_i... (Context: ChatGPT will play pawn f6, which summons a pawn out of nowhere, beating the queen)


Untrue. I get a tremendous amount of utility out of it. It’s generative, which is new compared to traditional search, and I get valuable responses to maybe 85% of my questions. Incredibly helpful for navigating new domains, when you don’t know what you’re looking for.



