Warning people about potential extreme risks from advanced AI does not make you a cultist. It makes you a realist.
I love GPT and my whole life and plans are based on AI tools like it. But that doesn't mean that if you make it, say, 50% smarter and 50 times faster, it can't cause problems for people. Because all it takes is for systems with superior reasoning capability to be given an overly broad goal.
In less than five years, these models may be thinking dozens of times faster than any human. Human input or activities will appear to be mostly frozen to them. The only way to keep up will be deploying your own models.
So to effectively lose control you don't need the models to "wake up" and become living simulations of people or anything. You just need them to get somewhat smarter and much faster.
We have to expect them to get much, much faster. The models, software, and hardware for this specific application all have room for improvement. And there will be new paradigms/approaches that are even more efficient for this application.
For hyperspeed AI to not come about would be a total break from computing history.
A realist is someone who accepts reality as it is, not as they might anxiously envision it could be. Life is too short and attention too precious to fill the meme space with every dreamer's deepest concerns. None of these dramatic X-risk claims is based on anything but beliefs and conjecture. "Thinking dozens of times faster?" What do you even mean? These are models executing matrix multiplies billions of times faster than our brains propagate information, and they represent knowledge in a manner that is unique and different from human brains. They have no goals, no will, and no inner experience of us being frozen or fast or anything else. We are so prone to anthropomorphize willy-nilly. We evolved in a paradigm of resource competition, so we have drives and impulses to protect, defend, devour, etc., of which AI models have zero. Anyone who has investigated reinforcement learning knows that we are currently far from understanding, let alone implementing, systems that can effectively deconstruct abstract goals into concrete sub-tasks, yet people are soooo sure that these models are somehow going to all of a sudden become an enormous risk. Why don't we wait until there is even the slightest glimmer of evidence before listening to these prophets of doom?
This pseudo-intellectual belief structure is very cult-like. It's an end-of-the-world scenario that only an elite few can really understand, and they, our saviors, our band of reluctant nerd heroes, are screaming from the pulpit to warn us of utter destruction. The actual end of days. These "black box" (er, I mean, we engineered them that way after decades of research, but no, nobody really understands them, right?) shoggoths will be so incredibly brilliant that they will be able to dominate all of humanity. They will understand humans so well as to manipulate us out of existence, yet they will be so utterly stupid as to pursue paperclips at all costs.
Maybe instead these models will just be really useful software tools that compress knowledge and make it available to humanity in myriad forms, for a next level of civilization to be built on top of? People will become more educated and wise, the cost of goods and services will drop dramatically, thereby enriching all of humanity, and life will go on. There are straighter paths from where we are today to this set of predictions than there are to many of the doomsday scenarios, yet it has become hip among the intelligentsia to be concerned about everything. Being optimistic is somehow not real (although the progress of civilization serves as great evidence that optimism is indeed rational), while being a loud-mouthed scaremonger, or a quiet, very serious and concerned intellectual, is seen as respectable. Forget that. All the doomers can go rot in their depressive caves while the rest of us build a badass future for all of humanity. Once Hale-Bopp has passed over, I hope everyone feels welcome to come back to the party.
Let's try to rewrite this in a somewhat more dispassionate style:
A pragmatic perspective requires one to accept the present reality as it is, rather than hypothesize an exaggerated potential of what could be. Not all concerns surrounding existential risks in technology are necessarily grounded in empirical evidence. When it comes to artificial intelligence, for instance, current models operate at a speed vastly superior to human cognition. However, this does not equate to sentient consciousness or personal motivation. The projection of human traits onto these models may be misplaced, as AI systems do not possess inherently human drives or desires.
Many misconceptions about reinforcement learning and its capabilities abound. The development of systems that can translate abstract objectives into detailed subtasks remains a distant prospect. There seems to be a pervasive certainty about the risks associated with these models, yet concrete evidence of such dangers is still wanting.
This belief system, one might argue, shares certain characteristics with a doomsday cult. There is a narrative that portrays a small group of technologists as our only defense against a looming, catastrophic end. These artificial intelligence models, which were engineered after extensive research, are often misinterpreted as inscrutable entities capable of outsmarting and eradicating humanity, while simultaneously being so simplistic as to obsess over trivial tasks.
Alternatively, these AI models could be viewed as valuable tools for knowledge compression and distribution, enabling the advancement of civilization. As a result, societal education levels could improve, and the cost of goods and services might decrease, which could potentially enrich human life on a global scale. While there seems to be a tendency to worry about every potential hazard, optimism about the future is not unfounded given the trajectory of human progress.
There are certainly different perspectives on this issue. Some adhere to a more fatalistic viewpoint, while others are working towards a brighter future for humanity. Regardless, once the present fears subside, everyone is invited to participate in shaping our collective future.
No, it's really not, because your riff on 'shoggoths that are both so brilliant as to be dangerous, yet so stupid that they maximize paperclips' touches on an important point that the summarized version completely omits.
AI is exactly that kind of stupid. What it lacks isn't 'brilliance' but intentionality. It can do all sorts of rhetorical party tricks, including those that are good at influencing humans; it can very likely even work out WHICH lines of argument are good at influencing humans from context, and yet it has no intentionality. It's wholly incapable of thinking 'wait, I'm making people turn the world to paperclips. This is stupid'.
So it IS likely to turn its skills to paperclip maximization, or any other hopelessly quixotic and destructive pursuit. It just needs a stupid person to ask it to do that… and we're not short of stupid people.
Not sure you read my comment carefully enough. I am an optimist. I do believe that AI can and probably will be a positive and transformative force.
But I also think it's more anticipatory than speculative to envision AI systems (quite possibly on the request of a human faction) taking control.
And GPT-4 absolutely does do abstract reasoning and form subgoals. No, it doesn't have many of the other capabilities or characteristics of humans or other animals, but as I said, it doesn't need those to be dangerous.
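To make concrete what I mean by subgoals, here is a minimal sketch of asking a model to decompose a broad goal into concrete subtasks. It assumes the OpenAI Python client; the model name, prompt, and example goal are illustrative only, not a claim about any particular deployment.

```python
# Minimal sketch: asking a model to break a broad goal into concrete subtasks.
# Assumes the OpenAI Python client; prompt and goal are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

goal = "Reduce our data center's energy use by 20% this quarter"  # hypothetical goal

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Break the user's goal into a numbered list of concrete subtasks."},
        {"role": "user", "content": goal},
    ],
)

# The model returns a plain-text plan; an agent loop would parse and act on it.
print(resp.choices[0].message.content)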
We need to prohibit the manufacture or design of AI hardware with performance beyond a certain level. It is not too early to start talking about a risk that could end humanity. I do hope that we can get away with something a few orders of magnitude better than what we have today, but it's really asking for trouble the more we optimize it, and we may be walking a fine line within a decade or so. Or less. It takes years to design hardware and get manufacturing online, especially for new approaches.
And two orders of magnitude faster may be only a few years away.
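For a rough sense of the timescale, here is a back-of-envelope calculation. The compound annual improvement rates below are purely illustrative assumptions, not measurements or forecasts.

```python
# Back-of-envelope: years to reach a 100x (two orders of magnitude) speedup
# at a given compound annual improvement rate.
# The rates below are illustrative assumptions, not forecasts.
import math

target = 100.0  # two orders of magnitude
for annual_gain in (1.5, 2.0, 3.0):
    years = math.log(target) / math.log(annual_gain)
    print(f"{annual_gain:.1f}x per year -> ~{years:.0f} years to reach {target:.0f}x")
```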