One day, someone came up with the Transformer architecture for neural networks. The ideas didn't quite come out of nowhere, but their impact did - I don't think anyone predicted the degree to which ChatGPT and its ilk would outperform LSTMs and the like. More importantly, nobody predicted the details, the way the GPTs behave, the way they're willing to invent facts, spooling out plausible-sounding text without being constrained by observed reality.

From a history-of-technology standpoint, the state of the art for generative AI went from LSTMs to ChatGPT overnight. There was no warning. Nobody could have looked at the "Attention Is All You Need" paper and predicted how much it would improve the state of the art.

And worst of all, we already know that AIs can be hostile. What else is a public corporation's bureaucracy but a Chinese Room running an AI whose goal is to maximize quarterly dividends? What else is a government but another Chinese Room trying to survive? We talk about corporations as faceless or friendly, we ascribe to them wants and needs and behaviors and habits, we give them names, and they undergo mutation and reproduction and natural selection, so that over time the bureaucracies most fit for survival are the ones that persist. And this class of AI is already responsible for global climate change, the Cold War, and so many other things that may already have put us on the path to extinction.

LLMs are nowhere near as humanlike as a corporation. A corporation is at least bound by regulations, by capitalism, and by the basic human decency of the people inside it. An LLM is not: it is absolutely possible to build one whose fundamental goal is to kill people. All you have to do is tell ChatGPT to pretend it's playing a game of global thermonuclear war, and some joker on 4chan would do that in a heartbeat.

So what? Why does this matter?

We have no way to predict when the next revolution in AI capability will occur.

We have no way to predict what the next revolution will be.

We have no way to predict the capabilities of the next generation of generative AIs.

We have no way to predict how the next generation of AIs will "think", insofar as they do.

And because we have no predictions, we have no way to prepare.

So the real question is, how much risk are you willing to tolerate? Are you willing to take a coin flip on it? A spin of the cylinder? A roll of the dice? Because that's what we're looking at here - the next generation of AIs has no reason to be any less inimical to human existence than the examples we already have, and AI will _only_ get better.




Exactly. Where were all these self-proclaimed experts, who are now making confident predictions about AI, before "Attention Is All You Need"? Literally 100% of them confidently predicted that AI like ChatGPT was hundreds or thousands of years away, or would have if asked. These people are the real bots.

And I never saw a single person on HN or anywhere else raise concerns, or even appreciate the true impact GPT would have today, back in 2018 when GPT was released and samples were available! People couldn't get it right even when it was right in front of them. How can they think for a single second that they're getting it right now?

To my credit, I recognized immediately what was going on in 2018, and to my knowledge I was the only person leaving comments warning people and begging them to support a pause on research. So by the only material metric, I am more qualified to make predictions than anyone here, or even the experts, who all confidently predicted that what is happening now would never happen in our children's lifetimes.

I will never forget when Hinton saw the light - when he started warning of danger and using the key words that are at the heart of the issue - and I realized I had been ahead of him. Just because you're an expert in one thing, even the implementation of ML or AI, doesn't make you an expert on the broader consequences of AI.



