Hacker News

Slop has been around for a while. I was researching a topic and noticed that most of the top search results shared the same misunderstanding of some of the definitions. The writers were clearly unfamiliar with the topic, and I'm sure they were just copying each other. All of the articles predated GPT-3.5.

The kicker is that if you ask GPT-4 about it, it spits out the same incorrect information, which suggests GPT-4 was trained on this bad data. FWIW, GPT-4o gives a much more accurate response.

I wonder (as an outsider) why GPT-4o would give more accurate responses. The training data is the same, right? Maybe somebody can explain it like I'm five, thank you.
