
You have no proof that every modification of the architecture will continue to have hallucinations. How could you prove that? Even LeCun admits that the right modification could solve the issue.

You're trying to make this point in a circular way - saying it's impossible simply because you say it's impossible - for some reason other than trying to get to the bottom of the truth. You want to believe there's some kind of guarantee that no offspring of the autoregressive architecture can ever get rid of hallucinations.

I'm saying there's simply no such guarantee.



