> capable of constructing a slightly better version of itself
With just self-improvement I think you hit diminishing returns, rather than an exponential explosion.
Say on the first pass it cleans up a bunch of low-hanging inefficiencies and improves itself by 30%. On the second pass it has slightly more capacity to think with, but it has also already done everything that was possible with its original capacity, so maybe it squeezes out another 5% or so of improvement.
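A toy model of that intuition (my own sketch, not from the thread: the numbers and the geometric-decay assumption are illustrative, not claims about real systems). If each pass's gain is proportional to the ideas still left to find, total capability plateaus instead of exploding:

```python
def self_improvement(first_gain=0.30, decay=0.15, passes=20):
    """Each pass multiplies capability by (1 + gain), but the gain
    itself shrinks geometrically, modeling the exhaustion of
    low-hanging improvements. Returns the capability trajectory."""
    capability = 1.0
    gain = first_gain
    trajectory = [capability]
    for _ in range(passes):
        capability *= 1 + gain
        gain *= decay  # each pass finds far fewer new wins
        trajectory.append(capability)
    return trajectory

traj = self_improvement()
# The first pass gives +30%, but total capability converges to a
# finite ceiling rather than growing without bound.
```

With these (made-up) parameters, almost all of the improvement happens in the first two passes; the remaining passes change essentially nothing. A runaway reaction would instead need the gain per pass to hold steady or grow, which is exactly the "progress on many fronts" condition discussed below.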
Something similar is already the case with chip design. Algorithms that design chips can then be run on those improved chips, but this on its own doesn't give exponential growth.
To get around diminishing returns there has to be progress on many fronts. That'd mean negotiating DRC mining contracts, expediting construction of chip production factories, making breakthroughs in nanophysics, etc.
We probably will increasingly rely on AI for optimizing tasks like those and it'll contribute heavily to continued technological progress, but I don't personally see any specific turning point or runaway reaction stemming from just a self-improving AGI.
I'm not imagining self-improvement only in the sense that it optimizes its own design to become a few percent more efficient or powerful.
A system that can think outside the box may come up with disruptive new ideas and designs.
Just a thought: isn't it a lack of intellectual capacity that keeps us from understanding how the human brain actually works? Maybe an AGI will eventually understand it and construct the next AGI generation out of biological matter.
I think diminishing returns apply regardless of whether it's improving itself through incremental optimization or through breakthrough new ideas. There's only so much firepower you can squeeze out if everything else remains stagnant.
For instance, if all modern breakthroughs and disruptive new ideas in machine learning were sent back to the 70s, I don't think it'd make a huge difference, since they'd still be severely hamstrung by the hardware of the era.