
I talked to GPT yesterday about a fairly simple problem I'm having with my fridge, and it gave me the most ridiculous, wrong answers. It knew the spec, but it was convinced the components were different (a single compressor, for example, whereas mine has 2 separate systems), and it hypothesized that the problem was something that doesn't even exist on this model of refrigerator. It seems like in a lot of domains it just goes with the majority view, even when the majority is wrong.

It seems to be a very democratic thinker, but at the same time it doesn't seem to have any reasoning behind the choices it makes. It claims it's using logic, but at the end of the day its hypotheses are just Occam's razor without considering the details of the problem.

A bit, how do you say, disappointing.





Clearly a skill issue: you're expecting it to know all the specifications of a particular refrigerator model.

You didn't provide it with the correct context.



