
(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:

https://twitter.com/Miles_Brundage/status/110519043405200588...

Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.




I believe you. I also believe there are now going to be outside parties with strong financial incentives in OpenAI who are not altruistic. I also believe this new structure will attract employees with less altruistic goals, which could slowly change the culture of OpenAI. I also believe there's nothing stopping anyone from changing the OpenAI mission further over time, other than the culture, which is now more susceptible to change.


Something something money and power corrupts?

We can just look at Google and see that “don't be evil” does not work when you've got billions of dollars and reach into everyone's private lives.


Thanks for your reply, and I appreciate that you share your reasoning here.

However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. Are you really expecting to beat the returns of the most successful start-ups to date by orders of magnitude and to be THE company developing the first AGI? (And even in this, for me, extremely unlikely case, the cap would most likely not matter, since if a company developed an AGI worth trillions, the government/UN would have to tax/license/regulate it.)

Come on, you are deceiving yourself (and apparently your employees as well; the tweet you quoted is a good example). This is a non-profit pivoting into a normal startup.

Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "Open". Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/


>If we succeed, the return will exceed the cap by orders of magnitude.

Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).
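
For context, here is where that $10 trillion figure plausibly comes from, as a minimal back-of-envelope sketch in Python. The ~100x cap on first-round returns is from OpenAI's announcement; the ~$1B invested is purely an illustrative assumption on my part:

    # Rough sanity check of the ">$10 trillion" figure. Assumptions:
    # first-round returns under OpenAI LP are capped at ~100x (public),
    # and roughly $1B gets invested (illustrative, not a confirmed number).
    investment = 1e9                           # assumed ~$1B invested
    cap_multiple = 100                         # first-round return cap
    capped_return = investment * cap_multiple  # ~$100B
    # "Orders of magnitude" beyond the cap means at least ~100x more:
    implied_value = capped_return * 100        # ~$10T
    print(f"cap ~${capped_return:,.0f}; implied AGI value ~${implied_value:,.0f}")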

For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.

If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.


AGI == software capable of any economically relevant task.

If you have AGI, it is very clear you could very quickly displace the entire economy, especially since inference is much cheaper than training, which implies there will be plenty of hardware available by the time AGI is created.


People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists but unforeseen factors like hardware costs cause the costs to be much higher than optimists expected.

These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)


>These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)

I don't think I disagree very much with you then.

>People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists but unforeseen factors like hardware costs cause the costs to be much higher than optimists expected.

In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.

Or to put it another way, comparative advantage dulls the claws of capitalism such that it tends to make most people better off. Comparative advantage is much, much more powerful than most people think. But in a world where software can do all economically relevant tasks, comparative advantage breaks down, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At this point, we have to start looking at evolutionary dynamics instead of economic ones. An unaligned AI is likely to win in such a situation. Let's call this scenario B.

OpenAI has defined the point at which they become redistributive in the low hundreds of billions. In worlds where they are worth less than hundreds of billions (scenario A, which is broadly what you describe above), they are not threatening, so I don't care: they will help the world as most capitalist enterprises tend to, and most likely offer more good than the externalities they impose. And since in scenario A they will not have created cheap software capable of replacing all human labor, comparative advantage will work its magic.

In worlds where they are worth more (scenario B), they have defined a plan to become explicitly redistributive and to compensate everyone else, who bears the extreme potential risks of AI, with a fair share of the extreme potential upside. This seems very, very nice of them.

And should the extreme potential be unrealizable, then no problem.

This scheme allows them to leverage scenario A to subsidize safety research for scenario B. This seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.


The lowest estimate of the human brain's compute capacity is 30 TFLOPS. A more reasonable estimate is 1 PFLOPS. Achieving this level of compute would cost a few thousand dollars per hour, while a human costs $7-$500 per hour. On top of that, energy consumption bills would be outrageous. Because of the slowed-down Moore's law, the price goes down by 10X about every 15 years. I also think it's fair to assume that the human brain, sculpted over millions of years of evolution with extreme frugality, is fairly optimal as far as hardware requirements are concerned. So I tend to think "affordable AGI" might be around 45 years away.
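
A minimal sketch of that arithmetic in Python, taking ~$3000/hour as my reading of "a few thousand dollars"; every number here is the parent's estimate, not an established fact:

    import math

    # Back-of-envelope "affordable AGI" timeline using the numbers above
    # (all of them the parent's estimates, not established facts):
    cost_per_hour = 3000.0  # ~1 PFLOPS today: "a few thousand dollars per hour"
    human_wage = 7.0        # low end of the $7-$500/hour human cost range
    years_per_10x = 15      # "price goes down by 10X about every 15 years"

    # Solve cost_per_hour * 10**(-t / years_per_10x) <= human_wage for t:
    years = years_per_10x * math.log10(cost_per_hour / human_wage)
    print(f"~{years:.0f} years until 1 PFLOPS undercuts ${human_wage:.0f}/hour")
    # prints ~39 years; against a ~$3/hour break-even it comes out to ~45,
    # which is roughly where the "45 years away" figure lands.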



