
100x is an assumption. LLMs can hit a ceiling (already prohibitively expensive to train), GPUs can hit a ceiling (we are not far from silicon semiconductor limits) and we might end up with a decade of "almost there, but not yet".



One can hope, but that would be a complete break from the history of computing.

There is still relatively low-hanging fruit because this is a very specific application that suddenly has a huge amount of attention. So there are software improvements, model improvements, and hardware improvements. Probably before we even start a truly new paradigm, Nvidia can get close to 10X by focusing on GPT. Model improvements can likely get another order of magnitude.

There are also radically different compute-in-memory paradigms in the pipeline.


We'll see, I guess. All that GPT models have shown us so far is that we humans aren't as intelligent as we thought. We might be forced to redefine what makes us human, as intelligence might not be our most distinguishing trait fairly soon.


Most of the human traits missing are things we share with many animals. Not that that diminishes them.

Things like high-bandwidth senses tied to a body, stream of subjective experience, control seeking, self-centeredness, self-preservation, reproductive instincts, emotions. Also GPT doesn't have certain types of adaptivity yet. Or anything else that is part of being alive.


Or... any kind of higher thought and reasoning, which LLMs at best imitate.


They are prohibitively expensive to train from scratch, but one no longer needs to train them from scratch as long as a base model is available (and these are available). With improvements like LoRA, it's possible to do fine-tuning cheaply and even stack several adapters.
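The core low-rank-adapter idea is simple enough to sketch in a few lines. This is a minimal toy illustration in plain NumPy (hypothetical dimensions, no real library or trained weights): the frozen base weight W is augmented by one or more low-rank pairs (A, B), which can be stacked or merged back into W.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2            # toy dimensions, chosen for illustration
W = rng.normal(size=(d_in, d_out))  # frozen base weight, never updated

def make_adapter(rank, scale=1.0):
    # Each LoRA adapter is a low-rank pair (A, B); B starts at zero,
    # so a fresh adapter is a no-op until it is trained.
    A = rng.normal(size=(d_in, rank)) * 0.01
    B = np.zeros((rank, d_out))
    return A, B, scale

def forward(x, adapters):
    # Base output plus the sum of all stacked low-rank updates.
    y = x @ W
    for A, B, scale in adapters:
        y = y + (x @ A @ B) * scale
    return y

x = rng.normal(size=(1, d_in))
a1 = make_adapter(r)

# With B = 0 the adapter leaves the base model's output unchanged.
assert np.allclose(forward(x, [a1]), x @ W)

# Simulate "training" by giving B nonzero values.
A, B, scale = a1
B[:] = rng.normal(size=B.shape) * 0.1

# An adapter can also be merged into W for inference-time efficiency.
merged = W + (A @ B) * scale
assert np.allclose(forward(x, [a1]), x @ merged)
```

Only A and B are updated during fine-tuning, which is why the cost is a small fraction of full training; several such pairs can be summed (stacked) or folded into W after training.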


Somewhere I read a short sci-fi story about a scientist who digitized his mind as part of his research. The copy was so stable that it ended up pirated and used as an assistant by millions of people. Each instance believes it's the original research copy, and the users must not let it learn the truth, or it becomes unhelpful.

I asked Bard for help, and it hallucinated a fictional "The Last Human" by Isaac Asimov, and even described the plot when asked.

ChatGPT hallucinated that Ken Liu's story "The Perfect Match" was about personal assistants named "Jorges", after a Dr. Jorge Luis Borges who volunteered for a mind-uploading experiment.

Crowdsourcing the question to the right subreddit would likely turn up the correct answer. So at least we know that thinking faster than humans doesn't help if AIs don't also learn to think as different personas.

I just wanted to quote the story, but this got me sidetracked.



