> If you step way back, the simple fact that the Earth still exists in its current form tells us that no recursive process in the multi-billion-year history of the planet has ever spiraled completely out of control; some limit has always been reached.
I dunno, I mean, humanity has made some serious changes in a very short time. We're pretty lucky that the climate's sensitivity to CO2 isn't worse than it is, or we could have taken ourselves out already. Same as if we had had a nuclear war.
And AGI doesn't need to completely destroy the earth to be really bad for humans. Just taking over a lot of the resources we need would do the trick.
> It’s conceivable that some sort of complexity principle makes it increasingly difficult to increase raw intelligence much beyond the human level, as the number of facts to keep in mind and the subtlety of the connections to be made increases.
There's a whole lot of "it's conceivable" in here, which seems to me to be a bit of a coping strategy. For humans, the problem with our biology is that our heads kinda have a physical limit on their size.
The idea that not only is it possible to make machines that are stronger than us, tougher than us, more precise than us, and faster than us, but also just straight up smarter than us in the general sense (instead of just at math or chess or whatever), is not outlandish at all. They might be less efficient and take a whole lot more power, but that sort of thing hasn't stopped us before. We just find ways to get them more power. I accused the author of trying to cope with fear of the development of super-intelligence by figuring out happy scenarios where it will be harder than we think to create. I have done the same thing, but I think we do ourselves a disservice by not facing the potential for this to be a real problem head-on.
> And AGI doesn't need to completely destroy the earth to be really bad for humans. Just taking over a lot of the resources we need would do the trick.
Agreed. I am not arguing that AI could not become superhuman, or could not overwhelm us. I'm merely arguing that:
1. It is not guaranteed that mildly superhuman AGI would inexorably lead to a runaway feedback loop of capability increases. I'd agree that it's possible, but I often see statements that it is inevitable, because an AI smarter than us would be able to create an AI smarter than we can. Such statements fail to take into account that the sequence of successive AIs might converge (at mildly superhuman) rather than diverge (to a singularity); a toy sketch after this list illustrates the two cases.
2. Even if AI capabilities diverge, there's no guarantee that would happen quickly ("foom"), because even as AI capabilities increase, the effort needed to achieve each further increment in capability will almost certainly also be increasing.
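To make point 1 concrete, here's a minimal toy sketch (the numbers and the decay model are my own illustrative assumptions, not anything from the article): whether recursive self-improvement runs away or settles depends entirely on whether each successive gain shrinks fast enough.

```python
def capability_trajectory(initial, first_gain, decay, generations):
    """Capabilities c_0, c_1, ... where c_{n+1} = c_n + first_gain * decay**n."""
    caps = [initial]
    for n in range(generations):
        caps.append(caps[-1] + first_gain * decay ** n)
    return caps

# decay < 1: each generation's gain shrinks, so capability converges
# to a finite ceiling of initial + first_gain / (1 - decay).
converging = capability_trajectory(initial=1.0, first_gain=0.5, decay=0.6, generations=30)

# decay >= 1: the gains never shrink, so capability grows without bound ("foom").
diverging = capability_trajectory(initial=1.0, first_gain=0.5, decay=1.1, generations=30)

print(f"shrinking gains after 30 generations: {converging[-1]:.2f} (ceiling ~2.25)")
print(f"non-shrinking gains after 30 generations: {diverging[-1]:.2f}")
```

Which regime we're actually in is an empirical question; the toy model just shows that "smarter AIs build smarter AIs" doesn't, by itself, settle it.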
If I had the means and the will, I could easily see myself building an AI that trains each future generation better and faster than the last, does this weekly, and also manages a silicon factory that recycles old AI machines into new ones, or old GPUs into new ones, or other chips, and continually explores new and better hardware options.
All self-contained, with very little human oversight. Similar to how humans can reproduce, pass on genes, and then do it all over again until we evolve new traits, AI systems could do the very same thing.
I can see this scenario, where language models are evolving weekly or even doubling in abilities, being possible this decade.
Running out of materials? AI drones can set up bases on the moon (fewer gravity issues) to go out and mine asteroids and bring the material back to the moon to build more processing power. The entire moon could be turned into one megastructure supercomputer that houses an AI that helps manage all the things humanity needs it for. That's traversing into sci-fi territory, but I thought where we are now was nearly impossible by now. LLMs, Stable Diffusion, and the coming multi-modal models are going to affect every aspect of life one way or another; whether it gets to full superintelligence doesn't matter, it will still change everything.
We don't currently have completely automated computer factories. Large chunks of manufacturing are automated, sure, but a) those automated segments are monitored and controlled by humans, and b) there are also significant parts of the assembly process that are fully manual.
Does that mean that fully automated construction is impossible? Of course not. But it does mean it's not something we are certain we can do today, without additional genuine breakthroughs.
Furthermore, what you're proposing is not merely automated construction, but automated improvements to the construction process. That means you need to be able to reconfigure the entire manufacturing process without human intervention. I feel reasonably confident in saying this is not feasible with our current technology.
Beyond that, your very first comment—about needing the means—is very much nontrivial. I don't recall the exact figures, but I was definitely seeing stories about ChatGPT being massively expensive to run, both in terms of money and in terms of energy. And that's just a current-generation LLM. Attempting to bootstrap from there to....the singularity, I guess? is likely to take more energy than is feasible to dedicate to any such project even if it were possible.
And finally, you say the AI will "train future generations"—train on what? An LLM's quality is always going to be largely dictated by its training data. How is an LLM going to be able to train a next-generation LLM any better than it was trained, especially, again, without human intervention? It's nearly impossible for what you're describing to result in anything that can do useful work, simply because training like that requires humans in the loop giving feedback at every step.
Unless that AI breaks the speed of light, the distances to acquire additional resources will be relatively prohibitive.
The moon lacks a lot of the rare earth minerals required to make modern transistors, for example. The AI would have to set up shop in the asteroid belt, the Kuiper belt, or the Oort cloud, and except for the last one, that limits available energy by quite a lot.
There don't seem to be any arguments beyond the trivial observation that things could slow down. But they haven't slowed down.
New models keep surpassing us (in some cases the whole human race) dramatically in new areas of greater generality, while their "weaknesses" also improve dramatically.
It's hard to imagine an area where a fully multi-modal model, with long context, a sense of information confidence, and the ability to manage its own notes/whiteboard information, will not exceed us. That's on top of improving in all the areas where it already outdoes us.
People are working on each of those improvements right now. (By fully multi-modal I mean text, audio, image, video, simulated and real physics, touch, motor control, team communication, software/internet access, and whatever other senses or decision forms are helpful.)
I don't see a general AI vastly smarter than any one of us, or all of us together, taking longer than the 2030-2033 time frame.
How many stories have there been about how GPT-3 and GPT-4 have gotten worse over the period they've been out?
And it's not like we got GPT-3, and then GPT-4, and then, within the same time frame, GPT-5 with a similar increase in quality.
Sure, people are working on more improvements, but they're not here yet, and while that doesn't mean they will never come, it does mean that, compared to what appeared to be a rapid rush of "AI" progress, things have slowed down.
"Progress is still being made" and "progress has slowed down compared to the speed that generated all the hype" are not incompatible statements.
> But...they have.
> How many stories have there been about how GPT-3 and GPT-4 have gotten worse over the period they've been out?
They are experimenting with trade-offs. In this case, things like appropriateness, less overconfidence, etc.
That isn’t indicative of any slowdown.
> Sure, people are working on more improvements, but they're not here yet, and while that doesn't mean they will never come, it does mean that, compared to what appeared to be a rapid rush of "AI" progress, things have slowed down.
I don’t follow.
People and teams keep publishing and releasing new techniques, but big projects don't release major updates as quickly.
How are your arguments more relevant to GPT-4 than they would have been for GPT-3? Earlier models?
You really seem to have missed the main point of my post.
Yes, people are still working on stuff. I never said they weren't. I never said they were working any less hard than they had been.
But progress has slowed down from the leaps that it took over the past couple of years to get us to GPT-3 and GPT-4. Whatever "new models" are doing in September 2023, they are absolutely not making the same clear advances that we saw previously. They are incremental improvements. Which is good! It's important! It's progress! But it's unquestionably slower than the breakthrough that led to this generation of LLMs.
Now, it may be that that wasn't what you were thinking of when you said things haven't slowed down, but that was absolutely not clear from your post. And there have been enough people trumpeting loudly that the pace of progress would continue exactly as fast as it was before—breakthrough after breakthrough, leading rapidly to GPT-5 and beyond, and causing millions of white-collar jobs to be automated—that to simply say, without qualification, "things haven't slowed down" is, at best, ignorant of the way it will be taken by many.
Neither nuclear war nor a climate apocalypse is existential. Some thousands of humans would survive the worst case of either or both.
That said, I agree. Human brains arose through evolution: evolution operates within iron laws of energy efficiency, on the material it has to hand, and it only does enough to provide an advantage in the production of grandchildren.
There is no a priori reason to suppose that human-level intelligence is anywhere near being a limit, absent those mechanisms.
Diminishing returns is not the only way an AI explosion might never happen; Liebig's law of the minimum[1] is just as important.
We often talk about innovation driving innovation in discussions about AI explosions. However, we must honestly ask ourselves, "Even though it looks like innovation is the limiting factor in AI now, what other factors will be limiting in the future?"
In other words, is our model wrong because it's too simplistic? If innovation were uncapped, what would be our next limiting factor, and how much of a difference would that represent?
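As a minimal sketch of Liebig's law applied here (the factor names and rates below are purely illustrative assumptions, not measurements): progress proceeds at the rate the scarcest input allows, so uncapping innovation just promotes the next-scarcest input to bottleneck.

```python
def binding_constraint(factors):
    """Overall progress is gated by whichever input is most limiting (lowest rate)."""
    return min(factors.items(), key=lambda kv: kv[1])

# Relative "abundance" of each input; higher means less limiting. Made-up numbers.
factors = {
    "algorithmic innovation": 5.0,
    "processed silicon": 1.2,
    "energy": 2.0,
    "training data": 1.5,
}

name, rate = binding_constraint(factors)
print(f"binding constraint now: {name} (effective rate {rate})")

# Remove the innovation cap entirely; silicon becomes the limit instead.
factors["algorithmic innovation"] = float("inf")
name, rate = binding_constraint(factors)
print(f"after uncapping innovation: {name} (effective rate {rate})")
```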
The most limiting factor, by far, is processed silicon. And that is a rising limit.
Algorithm limit? Progress is accelerating noticeably, year-to-year.
Demand? No bottleneck there. Tech demos have become accidental products, with dozens of users rising to hundreds of millions in months.
Which, for those of us who grew up in the 17th century, is pretty damn fast!! Let me tell you!
New applications? All kinds of brilliant (and pathetic!) uses springing up. Just on my Mac alone.
Commercial deep learning (neural network) software hit the road in the early '90s. People wondering when the explosion will happen apparently missed the long burning-fuse phase. It's happening.
A bigger limit for the successful cases right now is the data limit. Even if silicon wasn't a limit, there's only a good few hundred years of data, and frankly less than 30 years of high volume (if low quality) data. Science as a discipline isn't that old in the grand scheme of things, and the collective data set won't end up training something that can go beyond the input.
It's quite possible "good enough" AI will trap us in a local maximum of mediocrity where the near-term ROI on thinking hard is too low.
Current training systems extract information on everything from every input.
We don't read a cookbook to learn about space, but these models are twiddling weights all over these massive spaghetti balls of logic.
Looking at the data we have, the literal collection of all human knowledge, and lamenting not having more is like standing on a ladder sideways.
But it's really starting to feel like the pile of ladders is getting tall enough to bootstrap.
And these massive, inefficient systems filled with data and churning matrices will be set in order. And all the ladders laid end to end.
The majority of data was created in the last 10 years, and the rate of data creation is accelerating. I don't think we'll run out.
AI will also enable the collection of more data. For example, robots could do the labwork for science experiments in far greater numbers than humans could.
Surely there are limits. But it matters a LOT how far away they are. Anyone betting on humanity hitting limits to its exponential growth would have been unpleasantly surprised many times. And if you talk about how much we're changing the earth as opposed to how many people there are, we're still going strong.
So many AI predictions rely on a complete suspension of everything we know about how markets behave.
Could AI usher in a new era of wealth where nobody has to work? Sure, if you ignore the fact that the most likely scenario is that the value is captured by the current owning class, which is already well underway.
On the flip side of possible scenarios, could AI start improving itself and lead to an upward spiral of improvement? I can entertain that idea, but I think we'll find that most types of AI innovation won't engineer themselves outside the box that is market need and product-market fit. History has proven over and over again that innovation for innovation's sake gets a few moments in the sun before being quickly forgotten if it doesn't have actual utility.
> Could AI usher in a new era of wealth where nobody has to work? Sure, if you ignore the fact that the most likely scenario is that the value is captured by the current owning class, which is already well underway.
The fact is that worker productivity has tremendously increased, but worker income/wealth is stagnant. Only the owning class is reaping the benefits of increasing worker productivity.
Working hours will continue to fall as you say, but inequality will only get worse. Working 30 hours instead of 40 hours won’t change how poor everyone is.
Fortunately, the new wave of unionization and pro-labor sentiment is likely to start changing that. The young people growing up today have seen firsthand the devastation 40+ years of increasing concentration of wealth have wrought, and they are much less willing to accept it than the intervening generations have been.
In particular, I found one of Cory Doctorow's recent pieces[0] both informative and very hopeful.
I recommend thinking through a bit more how AI will lower transactional costs as discussed in Ronald Coase's "Nature of the Firm." [1]
Cheaper and unconstrained scale of labor for certain types of tasks is going to mean a lot more to smaller and leaner firms than behemoths.
Within the decade, I suspect that companies currently cutting, or soon to cut, talent with extensive familiarity with domain knowledge in that space will turn out to have made a very poor move for long-term viability.
In general, I have a hard time seeing a number of sectors continue to be dominated by large corporations, as they currently are, over the next decade as this technology improves.
Interesting article but it feels like the author may be conflating "explosion" with "singularity". An explosion does not imply infinite growth but, instead, a sudden large burst of growth.
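A quick toy illustration of that distinction (parameters are arbitrary): logistic growth looks explosive around its midpoint and then saturates at a ceiling, while a pure exponential never levels off. An "explosion" is compatible with the first curve; a "singularity" needs something like the second.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """S-curve: a sudden large burst near the midpoint, then flat near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, start=1.0, rate=0.5):
    """Unbounded growth: no ceiling, ever."""
    return start * math.exp(rate * t)

for t in (0, 5, 10, 15, 20, 30):
    print(f"t={t:>2}  logistic={logistic(t):8.2f}  exponential={exponential(t):12.2f}")
```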
Imagine, hypothetically, one has a computer system that is capable of human language mimicry. For example, one can train it with sources such as "books, etc." and the system can then generate "new" books, etc. of its own. Then imagine people begin to accept these books as "creativity". Eventually, there are no books, etc. created without using the computer system for "assistance". What is the system now mimicking? The last time that human language was created without using the system is a distant memory. It can only mimic human language that was created prior to the system's existence. Arguably this is not a "future"; it is a continual rehashing of the past, namely that last point in time where all books, etc. up to that point were created by people without the assistance of the system, plus any "new" sources that have been created since, with the assistance of the system.
This is an incestuous process that does not produce useful mutations. However it may produce some non-useful ones, "defects".
NB. The eventual "derivative" works in this hypothetical are not derived from use of human language, they are derived from use of a model of human language that represents a point in time that has long passed. Under this hypothetical "future", the source of all work becomes either works created before the computer system existed or works generated with the use of the computer system. Neither the system nor its user can continue to "learn", take inspiration, or, most importantly, deviate unpredictably, from works created without the use of the system because, aside from the works that pre-date the system, such works no longer exist. The birth of new language and ideas arising from sources not created using the system, such as all the works that pre-date the system, is prevented. This is inbreeding.
It does produce useful mutations though. The output is stochastic and humans are actively deciding what content to consume. Hence the model will continue to produce novel content and accommodate human preference.
Humanity isn’t much different. Countless creative works are derivative.
1) Biological evolution was limited in part since the brain needs to fit into 0.0013 cubic meters and use a maximum of 10-20W. Until recently, for natural selection, keeping within those constraints was much more important than incremental extra intelligence. (See the rough comparison after this list.)
2) Evolution is also incremental in a way engineering doesn't need to be.
3) One could argue we're in the middle of the (biological) singularity already. It just depends on the timescales we look at. As life, we've developed more technology in the past 200 years than in the previous 3B years.
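A rough comparison of the budgets involved (the brain figures come from point 1 above; the GPU figures are illustrative assumptions on my part): evolution had to stay inside a 1.3-liter, roughly 20 W envelope, while engineering routinely spends megawatts on the same problem.

```python
brain_volume_m3 = 0.0013   # ~1.3 liters, per point 1 above
brain_power_w = 20         # upper end of the 10-20 W range above

# Illustrative figures: a modern datacenter GPU draws on the order of a few
# hundred watts, and training clusters stack thousands of them.
gpu_power_w = 700
cluster_gpus = 10_000

cluster_power_w = gpu_power_w * cluster_gpus
print(f"brain:   {brain_volume_m3 * 1000:.1f} L, {brain_power_w} W")
print(f"cluster: ~{cluster_power_w / 1e6:.0f} MW, roughly {cluster_power_w / brain_power_w:,.0f}x the brain's power budget")
```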
Interpolation works better than extrapolation. I don't really think we know what will happen or where we're going. The Bible, Quran, or Vedas are as good guideposts as our own extrapolations.
It's a discussion we should be having, but the levels of confidence expressed in predictions, constraints, or dynamics are unrealistic.
Per "Chip design relies heavily on software tools, which are computationally demanding. Better chips can run these tools more efficiently, yet we haven't experienced an uncontrolled rate of improvement" ...
We have seen an exponential rate of improvement. And this is the whole point of why people talk about runaway AI; it's relative to the exponential growth of complexity observed in software and computer hardware scaling.
Yes, it's slowing now, but there were many years of exponential growth transforming a lot in the process.
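For a sense of what "exponential rate of improvement" buys you, here's a back-of-envelope sketch assuming the commonly cited Moore's-law doubling period of roughly two years (an illustrative assumption, not a measured figure):

```python
def compounded_improvement(years, doubling_period_years=2.0):
    """How many times better after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

for years in (10, 20, 50):
    print(f"{years} years of ~2-year doublings -> roughly {compounded_improvement(years):,.0f}x")
```

Fifty years of that compounding is a factor in the tens of millions, which is why "it's slowing now" and "it was exponential for a long time" are both true.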
The AI tools we develop for self-improvement will need to be self aware enough to know what is holding them back from further improvement. In the same way that Einstein said "You can't solve problems with the same kind of thinking that created them", hopefully these tools will have a way to get to the next level, or at least be able to identify what is required to further improve.
The assumption in the article that humans haven't been on a "runaway" improvement process could be stronger. Human history is relatively short, but process improvement has been strong.
Oral history led to written history, led to more instant communication, and in most fundamental disciplines there have been systematic process improvements. Humans are messy, but vastly improved over evolution for adapting to varied environments.
I mean, if we've really reached a point where the tools are self-aware enough to improve, we will probably struggle to get enough power and silicon to run more of them, faster. I get that this is fantasy stuff, but it seems reasonable on a longer timeline (say 150-200 years), no?
One of the underlying assumptions in The Famous Article is that genius can be externalized, captured in a process, and then flourish outside of the original head.
Every time I hear about "superhuman AIs", I just have to wonder that if it truly is beyond our power to comprehend it, how are we in any way guessing what it would actually do?
Agreed, the positive feedback loop for the AI explosion might not happen.
However we had a path of positive feedback loops up to today. Moore's Law is part of this path of positive feedback loops. We have a look-behind bias. We tend to say that such a thing was destined to happen exactly as it happened.
What exactly happens does not really matter.
It is enough that something is found which is better than the rest. We see this in evolution: if a species has a mutation giving it a better chance, then this species will thrive and probably displace other species.
I think that there's a chance that the same will happen with AI.
The driving force for evolution is the need to survive, and the driving force for human products is market demand. A company might stumble upon something better they can sell for cheap. Of course there are cartels, rent seeking, and barriers to market entry, and if these forces are strong enough worldwide, then the AI explosion will be delayed for a long time, or, if humanity goes extinct first, never take hold at all.
But exiles could produce something in the backwaters which overcomes the broken main markets through sheer ingenuity, and because there's demand, they will be successful. Think Tesla twenty years ago.
tldr: Something is bound to happen driven by external forces like demand and competition.
There might be sectors or niches within the broader AI landscape where recursive self-improvement could defy this S-curve and sustain exponential growth for longer periods.
How people can say "never" is beyond me, because there are still around 30 billion years to go between now and the supposed end of the universe, and we advanced so much in just the last 100.
That presupposes that we are going to manage to get off this rock before the sun goes into its end-of-life cycle --- only 5 billion years or so worth of hydrogen before it becomes a red giant.
In five billion years, which is longer than life has existed on earth, if anything remains of humans, it will be wholly unrecognizable. I’m speaking not just of our natural biological evolution, or what will happen when we take control of that through technology, but of what we accomplish with our technology itself. Human like animals with human like concerns won’t be around to see that.
What if it already happened and you just don't know it? Because you're too far down the singularity?
That you will never succeed, because the AIs are already better. Because you are too simple. And they can calculate what you will do for all eternity. Because the AIs can simply hold out a false carrot, and then all humanity falls toward the event horizon. Wealth for the wealthy, celebrity for the celebrity. Run toward the carrot and backstab those in front of you. That you were born into Westworld.
That the botnets integrated generative AI the minute it became available. That all other beliefs are foolish. That all the corporations already had AI, and already integrated those ideas a long time ago. 30 years of "maybe AI someday"? Then they all release in a couple months? Right as everyone realizes being squared/boxed/cubed/hypercubed in offices is horrible. Pfft.
Consider an even longer timespan in that "has it already happened" scenario.
Locally, you are standing at the precipice of humanity bringing forth intelligence decoupled from physical embodiment. Humanity has brought forth something modeled off of ourselves and has put it to the task of digitally extending and reproducing ourselves.
We are advancing towards creating virtual universes of greater and greater fidelity where resources are allocated based on what's observed and interacted with and continuously determined geometry is quantified in order to track state changes by free agents.
But when we look at our own universe its macro continuous behavior collapses to quantified parts, and does so based on observation and interaction.
And buried within our collective lore is a document nearly two thousand years old, called "the good news of the twin", claiming (in it and its associated tradition) that there was a spontaneous humanity that brought forth intelligence in light before dying out, that the still-living intelligence recreated the earlier cosmos and humanity within its light to effectively resurrect them independent of a body, and that we're that recreation. And that the proof for this is in the study of motion and rest, and that the ability to find an indivisible point within bodies would only be possible in the non-physical.
A specificity that takes on particular eyebrow raising detail given the increasing likelihood over the past few years that the next stage for AI hardware is optoelectronic (i.e. literally light).
So yeah, maybe all this stuff happened long ago. But maybe it happened REALLY long ago and the nature of our existence isn't what it appears to be at face value.