
I was buying it until he said that profit is “capped” at 100x of initial investment.

So someone who invests $10 million has their investment “capped” at $1 billion. Lol. Basically unlimited unless the company grew to a FAANG-scale market value.
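
To put numbers on it (a minimal sketch in Python, using the hypothetical $10M figure from above):

    # Hypothetical: the 100x cap applied to a $10M investment.
    investment = 10_000_000
    cap_multiple = 100
    max_payout = cap_multiple * investment
    print(f"${max_payout:,}")  # $1,000,000,000 -- i.e., $1B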




> We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company.


Leaving aside the absolutely monumental "if" in that sentence, how does this square with the original OpenAI charter[1]:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?
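
To spell out the arithmetic behind that claim (a rough sketch; the figures are the ones quoted above, and the scaling is deliberately naive):

    # Naive reading of "orders of magnitude more than Google percent-wise":
    # at least two orders of magnitude (100x) on top of Google's ~20x.
    google_return = 20          # ~20x return for early Google investors
    google_valuation = 750e9    # ~$750B at time of writing
    implied_return = 100 * google_return       # >= 2000x
    implied_valuation = 10 * google_valuation  # "at least an order of magnitude higher"
    print(implied_return, implied_valuation)   # 2000 7500000000000.0 (>= $7.5T)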

Likewise, also from the OpenAI charter:

> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.

______________________________

1. https://openai.com/charter/


The Charter was designed to capture our values, as we thought about how we'd create a structure that allows us to raise more money while staying true to our mission.

Some companies to compare with:

- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3...) and Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date

- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and now is filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over 45x return to date
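
The multiples above are naive valuation ratios. A quick sketch of the math (ignoring dilution across rounds, so actual returns to early investors would be lower):

    # Paper multiple = later valuation / post-money valuation at entry.
    # Ignores dilution, so real returns to early investors are lower.
    def paper_multiple(entry_post_money, later_valuation):
        return later_valuation / entry_post_money

    print(paper_multiple(100e6, 22.5e9))  # Stripe: 225.0 ("over 200x")
    print(paper_multiple(220e6, 10e9))    # Slack: ~45.5 ("over 45x")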

> you don't want to "unduly concentrate power"? How will this work?

Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even making direct distributions to the world.


If I understand correctly from past announcements, OpenAI has roughly $1B in committed funding? So until OpenAI attains a $100B valuation, its incentives are indistinguishable from those of a for-profit entity?


Because of dilution resulting from future investments, my understanding is that the valuation could be significantly higher before that threshold gets crossed.
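
A toy example of the dilution point (hypothetical numbers, assuming the 100x cap applies to the first investors' ~$1B):

    # If later rounds dilute the first investors' ownership, the company
    # valuation at which their 100x cap binds rises accordingly.
    first_round = 1e9                   # ~$1B committed (per parent comment)
    capped_payout = 100 * first_round   # their stake maxes out at $100B

    for ownership in (1.0, 0.5, 0.25):  # hypothetical post-dilution stakes
        valuation_at_cap = capped_payout / ownership
        print(f"{ownership:.0%} stake -> cap binds at ${valuation_at_cap / 1e9:,.0f}B")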


You've already changed the structure once; what is there to prevent the 100x cap from becoming a 1000x cap, or any other arbitrary number?


Fair enough, that seems like a good answer. There will still probably be concerns about whether or not the cap can be changed in the future, however. But I don't know enough about nonprofit laws to comment on that.


Sorry Greg, but look how quickly Google set aside “don’t be evil.”

You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.

You went from “totally open” to “partially for profit” and “we think this is too dangerous to share” in three years. If you were on the outside, where would you predict this trend is leading?


Maybe they can train their AGI to tell them where this is leading.


That sounds like the delusion of most start-up founders in the world.

Which of these mission statements is from Alphabet's for-profit DeepMind, and which is from the "limited-profit" OpenAI?

"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."

"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."


> > We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company.

> That sounds like the delusion of most start-up founders in the world.

Huh? Are you disputing that AGI would create unprecedented amounts of value?

"any existing company" only implies about $1T of value. that's like 1 year of indonesia's output. that seems low to me, for creating potentially-immortal intelligent entities?


Nobody can front-run the field of artificial intelligence; progress is too incremental and slow, and the field is insanely well funded by Google, Facebook, Baidu, and Microsoft. Not to mention that so many of the improvements come from CMU, Stanford, Berkeley, and MIT, and that's just in the US.

Edit: Removed Oxford, because originally I was making a full list... but then realized I couldn't remember which Canadian school was the AI leader.


You nailed it. Anyone who thinks they're out in front of this industry just because they believe with all their heart in their abstract word-salad mission statement belongs to a cult.


>> Edit: Remove Oxford, because originally I was making a full list .. but then realized I couldn't remember which Canadian school was the AI leader.

University of Toronto


> Oxford, and that's just in the US.

Eh...


Do you see capping returns at 100x as reducing profit motives? As in, a dastardly profiteer would be attracted to a possible 1000x investment but scoff at a mere 100x return?


I doubt they really do. Even a 10x profit cap would be questionable with regard to this "not being a profit incentive".


I was going to comment on this line:

> The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission

Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi books that have explored what will happen:

1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.

2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.

3. Robot takeover. Money, and humanity, is gone.

Sure, silliness in fiction, but is there a reasonable alternative outcome from the creation of actual, strong, general artificial intelligence? I can't see a world with this entity in it where "what happens to the investors' money" is a relevant question at all. Basically, if you succeed, why are we even talking about investor returns?


As far as I can disentangle from [1], OpenAI posits only moderately superhuman performance. The profit would come from a variant where OpenAI subsumes much, but not all, of the economy and does not bring things to post-scarcity. The nonprofit would take ownership of almost all of the generated wealth, but the investments would still have value since the traditional ownership structure might be left intact.

I don't buy the idea myself, but I could be misinterpreting.

[1] https://blog.gregbrockman.com/the-openai-mission


>>3. Robot takeover

This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that it isn't that the machines have revolted, but rather that, from their point of view, their masters have disappeared.

[1] https://en.wikipedia.org/wiki/Blame!_(film)


The sheer scale of machines running amok in this series is pretty fun. The source Manga is better than the movie.


Re 1): there may be no scarcity of food and widgets, but there is only so much beachfront land. Money probably won't be worthless.


I hear you, but not everyone wants beachfront land. Furthermore, I do believe it would be possible to give everyone a way to wake up and see a beach, particularly in a post-scarcity world. I mean, let your imagination run wild: filling out existing islands, artificial islands, towers, etc.


But there will always be preferences. Whenever there is preference for finite resources (even if that resource is "number of meters from celebrity X"), there needs to be a method of allocation... which currently is money.


But if money is only useful to buy luxury things like beachfront land is it really going to be useful as a currency of exchange?


Sorry for being a buzzkill, but if you create something with an intellect on par with human beings and then force it to "create value" for shareholders, you just created a slave.


That depends on whether or not the machine has a conscious experience, and we have no way to interact with that question right now.

The reason we care about slavery is because it is bad for a conscious being, and we have decided that it is unethical to force someone to endure the experience of slavery. If there is no conscious being having experiences, then there isn't really an ethical problem here.


Isn't consciousness a manifestation of intelligence? I don't see how the two can be treated separately. Talking about AGI is talking about something that can achieve a level of intellect which can ask questions about "being", "self", "meaning" and all the rest that separate intelligence from mere calculation. Otherwise, what's the point of this whole endeavor?


No one knows what consciousness is. Every neuroscientist I've talked to has agreed that consciousness has to be an emergent property of some kind of computation, but there is currently no way to even interact with the question of what computation results in conscious experience.

It could be true that every complex problem solving system is conscious, and in that case maybe there are highly unintuitive conscious experiences, like being a society, or maybe it is an extremely specific type of computation that results in consciousness, and then it might be something very particular to humans.

We have no idea whatsoever.


I think a lot of us would be opposed to specially lobotomized humans who didn't realize they were slaves. It really gets into hairy philosophy once we start approaching AGI.


Let’s not get into the philosophical side of AGI right now; it’s such a distant reality that there is no point at this particular moment, and it only serves as a distraction.


How is it a distraction if that's the one and only goal of OpenAI? What are the investors investing in?


I thought the mission was for the AGI to be widely available, 'democratized'? It seems extremely unrealistic to be able to generate 100x profits without compromising on availability.


Not really. If you create an autonomous robot capable of performing any task that a trained human is capable of doing today, and offer this machine for some suitably low-ish sum to anyone who wants one, you've both democratized AI and created more value than any company that exists today.


Why would they choose a low-ish sum if they own the market? There's more money to make if the margin is higher (presuming it's below the buying threshold for enough people).


This is a bold statement lacking any serious scientific basis wrt advancing the state of sensory AI (pattern recognition) and robotics (actuation).

Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?


Is there some legal structure in place to prevent you from raising the cap as partners begin to approach the 100x ROI?


The nonprofit board retains full control, and can only take actions that will further our mission.

As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.


So that is a no then? If the board decides that for AGI to benefit humanity they need more investors, they can just as well remove the cap for a future investor or raise it to 200x.


But the board already approved 100x; presumably they are free to approve 1000x? Which is to say, there is no new limitation on increasing the cap (i.e., removal of the power of the board or anyone else to increase the cap)?


We the OpenAI board have decided it would benefit all humanity for us to claim 100x profits.


There's no mention of the governing legal framework; I presume it's a US state?

Also, what are the consequences for failing to meet the goals? "We commit to" could really have no legal basis depending on the prevailing legal environment.

Reading pessimistically, I see the "we'll assist other efforts" language as a way in which the spirit in which the charter is apparently offered could be subverted: you assist a private company, and that company doesn't have anything like the charter and instead uses the technology and assistance to create private wealth/IP.

Being super pessimistic: when the Charter organisation gets close, a parallel business can be started, which would automatically be "within 2 years", and so effort could then -- within the wording of the charter -- be diverted into that private company.

A clause requiring those who wish to use any of the resources of the Charter company to also make developments available reciprocally would need to be added.

Rather like share-alike or other GPL-style licenses that require patent licensing to the upstream creators.


AGI is of course completely transformative but this leaves me thinking you folks are just putting a "Do no Evil" window-dressing on an effort that was/continues to be portrayed as altruistic. Given partners like Khosla it seems to be an accurate sentiment.


And if you don't you'll be forced to open a DC office and bid on pentagon contracts.


Would you guys even release AGI? It's potentially more harmful than some language model...


How does that affect the incentives and motivations of investors? It doesn't matter how much value you create in the long run, investors will want returns, not safe AI.


> We believe that if we do create AGI,

Have you decided in which direction you might guide the AGI’s moral code? Or even a decision making framework to choose the ideal moral code?


Imagine someone else builds AGI and it does have that kind of runaway effect: more intelligence begets more profits, which buys more intelligence, etc., giving you the runaway profits you're suggesting.

Shouldn't it have some kind of large scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?


How do you envision OpenAI capturing that value, though? Value creation can be enough for a non-profit, but not for a company. If OpenAI LP succeeds and provides a return on investment, what product will it be selling, and who will be buying it?


Kudos for having the guts to say it out loud; this would be a natural consequence of realizing safe and beneficial AGI. It's a statement that will obviously be met with some ridicule, but someone should at least be frank about it at some point.


This comment is going to be the "No wireless. Less space than a nomad. Lame." of 2029.

EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.


that's a big if


Is it a misprint? Did they mean 100%?

"100x" is laughable.



