People thought the same thing about nuclear energy. A popular quote from the 1950s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs have made it far more expensive than optimists expected.
These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)
>These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)
I don't think I disagree very much with you then.
>People thought the same thing about nuclear energy. A popular quote from the 1950s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists, but unforeseen factors like hardware costs have made it far more expensive than optimists expected.
In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.
Or to put it another way, comparative advantage dulls the claws of capitalism such that it tends to make most people better off. Comparative advantage is much, much more powerful than most people think. Nonetheless, in a world where software can do all economically relevant tasks, comparative advantage breaks down, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At that point, we have to start looking at evolutionary dynamics instead of economic ones. An unaligned AI is likely to win in such a situation. Let's call this scenario B.
OpenAI has defined the point at which they become redistributive in the low hundreds of billions. In worlds where they are worth less than that (scenario A, which is broadly what you describe above), they are not threatening, so I don't care: they will help the world as most capitalist enterprises tend to, and will most likely produce more good than the externalities they impose. And since in scenario A they will not have created cheap software capable of replacing all human capital, comparative advantage will work its magic.
In worlds where they are worth more (scenario B), they have committed to a plan to become explicitly redistributive and to compensate everyone else, who bears the extreme potential risks of AI, with a fair share of the extreme potential upsides. This seems very, very nice of them.
And should the extreme potential prove unrealizable, then there's no problem.
This scheme allows them to leverage scenario A to subsidize safety research for scenario B. That seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.