
Maybe it would work if you told it to pull the answer from a limerick instead of asking generally?

Edit: Ok no, I tried giving it a whole bunch of hints, and it was just making stuff up that was completely unrelated. Even directly pointing it at the original dataset didn’t help.




Yeah, I also tried to get it to complete some limericks from the dataset. Curiously, it claimed to recognize the limerick but would then recite a hallucinated version.

So the good news is that the NIAN score might be real; the bad news is that you can't rely on it to know what it knows.
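
For anyone who wants to reproduce this, here's a minimal sketch of the completion probe, assuming an OpenAI-style chat client; the model name ("gpt-4o-mini") and file path ("limericks.txt") are placeholders, not what anyone upthread actually used:

    # Probe whether a model reproduces limericks verbatim, which would hint
    # it saw the dataset during training.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def probe(limerick: str, model: str = "gpt-4o-mini") -> str:
        # Use the first two lines as the cue and ask for the rest.
        opening = "\n".join(limerick.splitlines()[:2])
        resp = client.chat.completions.create(
            model=model,
            temperature=0,  # deterministic output makes verbatim recall easier to spot
            messages=[
                {"role": "system",
                 "content": "Complete this limerick exactly as you know it."},
                {"role": "user", "content": opening},
            ],
        )
        return resp.choices[0].message.content

    # Hypothetical file: one limerick per blank-line-separated block.
    with open("limericks.txt") as f:
        limericks = [b.strip() for b in f.read().split("\n\n") if b.strip()]

    for lim in limericks[:5]:
        print(probe(lim), "\n---")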


If you ask it to complete a limerick and it finishes it differently from the original, but it still works as a limerick, is that really a hallucination?


Come on guys, it would already be far beyond superhuman if it were able to do that, and so quickly. So if it's not able to, what's the big deal? If you're asking for A.G.I., then it seems the model already performs beyond that in these areas.


We were mainly trying to determine if there was a reasonable chance that the model was trained on a certain dataset, nothing else.
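
One hedged way to turn that into a signal: score the model's completion against the dataset's real ending with a plain similarity ratio, where near-1.0 scores across many limericks would suggest memorization. A sketch, with hypothetical example strings standing in for the real data:

    # Rough contamination check: high similarity between the model's
    # completion and the dataset's actual ending suggests verbatim recall.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        # Character-level similarity in [0, 1].
        return SequenceMatcher(None, a.strip(), b.strip()).ratio()

    # Hypothetical values; in practice these come from the dataset and
    # from the model's completion of the two-line cue.
    true_ending = "He searched all the stack\nBut the needle came back\nAs a haystack of maddening luck"
    completion = "He searched through the stack\nBut the needle came back\nAs a haystack of maddening luck"
    print(f"match: {similarity(completion, true_ending):.2f}")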



