>If we succeed, the return will exceed the cap by orders of magnitude.

Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).
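
For a rough sense of where my 10 trillion figure comes from (the investment size and cap multiple below are my assumptions about OpenAI's structure, not numbers from this thread):

  # Assumed figures: ~$1B invested with a 100x return cap.
  investment = 1e9
  cap_multiple = 100
  cap = investment * cap_multiple  # ~$100B cap on returns

  # "Orders of magnitude" beyond the cap implies at least 100x more.
  implied_value = cap * 100
  print(f"cap ~ ${cap:,.0f}; implied value ~ ${implied_value:,.0f}")  # ~$10T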

For example, if there really is an "AGI algorithm", what's to stop your competitors from implementing it too? ML research has repeatedly shown that most advances are made by several groups working independently on similar projects at the same time, so other groups would likely be able to replicate your AGI algorithm fairly easily even if you don't publish the details. That competition would drive down the profit you can expect from the AGI algorithm.

If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.




AGI == software capable of any economically relevant task.

If you have AGI, it is clear you could very quickly displace the entire economy, especially since inference is much cheaper than training, which implies there will be plenty of hardware available by the time AGI is created.


People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs make it far more expensive than optimists expected.

These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)


>These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)

I don't think I disagree very much with you then.

>People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs make it far more expensive than optimists expected.

In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.

Or to put it another way: comparative advantage dulls the claws of capitalism, such that capitalism tends to make most people better off. Comparative advantage is much, much more powerful than most people think (see the toy example below). But in a world where software can do all economically relevant tasks, comparative advantage breaks down, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At that point we have to start looking at evolutionary dynamics instead of economic ones, and an unaligned AI is likely to win in such a situation. Let's call this scenario B.
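
A toy Ricardo-style example of why comparative advantage is so powerful (the names and numbers are mine, purely for illustration): even if one party is absolutely better at everything, specialization still raises total output.

  # Hours needed to produce one unit of each good.
  hours = {
      "alice": {"wine": 1, "cloth": 2},  # Alice is better at BOTH goods
      "bob":   {"wine": 6, "cloth": 3},
  }
  budget = 12  # working hours available to each person

  # No trade: each person splits their hours evenly across both goods.
  autarky_wine  = (budget / 2) / hours["alice"]["wine"]  + (budget / 2) / hours["bob"]["wine"]
  autarky_cloth = (budget / 2) / hours["alice"]["cloth"] + (budget / 2) / hours["bob"]["cloth"]

  # Trade: Bob specializes in cloth (his comparative advantage), and
  # Alice covers the remaining cloth demand, spending the rest on wine.
  trade_wine  = 10 / hours["alice"]["wine"]
  trade_cloth = 2 / hours["alice"]["cloth"] + budget / hours["bob"]["cloth"]

  print(autarky_wine, autarky_cloth)  # 7.0 wine, 5.0 cloth
  print(trade_wine,   trade_cloth)    # 10.0 wine, 5.0 cloth

Same cloth, strictly more wine, even though Alice dominates Bob at both tasks; that's why trade usually benefits even the less productive party.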

OpenAI has set the point at which they become redistributive in the low hundreds of billions. In worlds where they are worth less than that (scenario A, which is broadly what you describe above), they are not threatening, so I don't care: they will help the world as most capitalist enterprises tend to, and most likely produce more good than the externalities they impose. And since in scenario A they will not have created cheap software capable of replacing all human labor, comparative advantage will work its magic.

In worlds where they are worth more (scenario B), they have laid out a plan to become explicitly redistributive and to compensate everyone else (the people exposed to the extreme potential risks of AI) with a fair share of the extreme potential upside. This seems very, very nice of them.

And should the extreme potential prove unrealizable, then there is no problem.

This scheme allows them to leverage scenario A to subsidize safety research for scenario B. This seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.


The lowest estimate of the human brain's compute capacity is 30 TFLOPS; a more reasonable estimate is 1 PFLOPS. Buying that level of compute today costs a few thousand dollars per hour, while a human costs $7-$500 per hour. On top of that, the energy bills would be outrageous. Because Moore's law has slowed, the price of compute now drops by about 10x every 15 years. I also think it's fair to assume that the human brain, sculpted over millions of years of evolution with extreme frugality, is fairly optimal as far as hardware requirements are concerned. So I tend to think “affordable AGI” might be around 45 years away.
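
A quick sketch of that arithmetic (the exact dollar figures are my assumptions within the ranges given above):

  import math

  compute_cost_per_hr = 5000.0  # "a few thousand dollars per hour" (assumed $5k)
  human_cost_per_hr = 7.0       # low end of the quoted $7-$500 range
  years_per_10x = 15            # slowed Moore's law: 10x cheaper every 15 years

  gap = compute_cost_per_hr / human_cost_per_hr  # ~700x too expensive today
  years = math.log10(gap) * years_per_10x        # ~43 years
  print(f"{gap:.0f}x gap -> roughly {years:.0f} years to parity")

Matching against the cheapest human labor gives roughly the 45-year figure; against the $500/hour end, parity is only about 15 years out.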



