Are we talking about "an AGI", or are we talking about overfitting large transformer models with human-written corpora and scaling up the result?
"An AGI"? I have no idea what that algorithm might look like. I do know that we can cover the majority of cases with not too much effort, so it all depends on the characteristics of that long tail.
> Combining Wu’s method with the classic synthetic methods of deductive databases and angle, ratio,
> and distance chasing solves 21 out of 30 problems by just using a CPU-only laptop with a time limit of
> 5 minutes per problem.
AlphaGeometry had an entire supercomputer cluster, and dozens of hours. GOFAI approaches have a laptop and five minutes. Scale that inconceivable inefficiency up to AGI, and the total power output of the sun may not be enough.
It's always a hindsight declaration, though. Currently we can only say that Intel has reused the same architecture several times, cranking up the voltage until it breaks, because they have yet to find the next design leap, while AMD has been toying with 3D placement but its latest design is woefully unimpressive. We don't know when the next compute leap will happen until it happens.
How many resources are you assuming an AGI would consume?