
Probably not. We're actually headed toward many smaller models that call each other, because VRAM is the limiting factor in deployment. If the domains aren't totally dependent on each other, it's easier to have one model produce flawed output, detect that bad output, and feed it into another model that cleans up the problem (like fixing faces in Stable Diffusion output).
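A minimal sketch of that generate → detect → clean-up loop. All the model functions here are hypothetical stand-ins (`generate`, `detect_bad_regions`, `cleanup`), not any real library's API; in practice each would wrap a separate, smaller model:

```python
# Hypothetical pipeline: a generator produces output, a small detector
# flags bad regions, and a specialized cleanup model repairs only those
# regions (analogous to face-restoration passes on Stable Diffusion output).

def generate(prompt):
    # Stand-in for a large generative model.
    return {"prompt": prompt, "regions": {"body": "ok", "face": "bad"}}

def detect_bad_regions(output):
    # Stand-in for a lightweight detector/classifier model.
    return [name for name, quality in output["regions"].items()
            if quality == "bad"]

def cleanup(output, region):
    # Stand-in for a specialized repair model (e.g. a face restorer).
    output["regions"][region] = "ok"
    return output

def pipeline(prompt):
    out = generate(prompt)
    for region in detect_bad_regions(out):
        out = cleanup(out, region)
    return out

result = pipeline("portrait of an astronaut")
```

The point of this shape is that only the detector and cleanup models need to be resident at once, so peak VRAM is bounded by the largest single model rather than the sum.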

The human brain is modularized like this, so I don't think it'll be a limitation.


