OpenAI LP (openai.com)
269 points by gdb on March 11, 2019 | 196 comments

Wow. Screw non-profit, we want to get rich.

Sorry guys, but before, you were probably able to attract talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally OK to be an AI startup. You just shouldn't pretend to be a non-profit then...

I agree, this sounds disappointing to me as well. My issue is how they're positioning themselves, which is basically a hyper-growth startup where you can get rich (but only 100x richer, because we're not greedy like other startups) but we're also a non-profit here to benefit humanity so don't tax us like those evil corporations. What really bothers me though is I don't know if they honestly believe what they're saying or if it's just a marketing ploy, because honestly it's so much worse if they're deluding themselves.

Any returns from OpenAI LP are subject to taxes!

First of all, thank you for taking the time to respond to people criticizing this announcement.

I just feel like you're trying to have the best of both worlds. You want the hypergrowth startup that attracts talent and investors, but you also want the mission statement for people that aren't motivated by money. I suspect trying to maintain this middle ground will be an incredibly damaging factor moving forward, as the people who are purely profit driven will look elsewhere and the people who are truly mission driven will also look elsewhere.

I fully appreciate how challenging these things can be, that making these decisions isn't trivial, and that you're obviously trying to do something different...but I really think you're taking the wrong step here. And because you will see short-term gains from all the extra initial investment, you won't realize it for years, until it's too late and the culture has permanently shifted.

That said, while my trust has somewhat eroded (also because you recently declined to release details of your model), I still wish you luck in your mission.

Thanks for the thoughtful reply, it's much appreciated.

> you also want the mission statement for people that aren't motivated by money

I wouldn't agree with this — we want people who are motivated by AGI going well, and don't want to capture all of its unprecedentedly large value for themselves. We think it's a strong point that OpenAI LP aligns individuals' success with success of the mission (and if the two conflict, the mission wins).

We also think it's very important not to release technology we think might be harmful, as we wrote in the Charter: https://openai.com/charter/#cooperativeorientation. There was a polarized response, but I'd rather err on the side of caution.

Would love people who think like that to apply: https://openai.com/jobs

(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:


Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.

I believe you. I also believe there are now going to be outside parties with strong financial incentives in OpenAI who are not altruistic. I also believe this new structure will attract employees with less altruistic goals, that could slowly change the culture of OpenAI. I also believe there's nothing stopping anyone from changing the OpenAI mission further over time, other than the culture, which is now more susceptible to change.

Something something money and power corrupts?

We can just look at Google and see that “don’t be evil” does not work when you’ve got billions of dollars and reach into everyone’s private lives.

Thanks for your reply, and I appreciate that you share your reasoning here.

However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. But are you really expecting to beat the returns of the most successful start-ups to date by orders of magnitude, and to be THE company that develops the first AGI? (And even in this, for me, extremely unlikely case, the cap would most likely not matter: if a company developed an AGI worth trillions, governments/the UN would have to tax/license/regulate it.)

Come on, you are deceiving yourself (and apparently your employees as well; the tweet you quoted is a good example). This is a non-profit pivoting into a normal startup.

Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "open". Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/..

> If we succeed, the return will exceed the cap by orders of magnitude.

Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).

For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.

If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.

AGI == software capable of any economically relevant task.

If you have AGI, it is very clear you could very quickly displace the entire economy, especially as inference is much cheaper than training, which implies there will be plenty of hardware available by the time AGI is created.

People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists but unforeseen factors like hardware costs caused the costs to be much higher than optimists expected.

These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)

>These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)

I don't think I disagree very much with you then.

>People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists but unforeseen factors like hardware costs caused the costs to be much higher than optimists expected.

In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.

Or to put it another way, comparative advantage dulls the claws of capitalism such that it tends to make most people better off. Comparative advantage is much, much more powerful than most people think. But in a world where software can do all economically relevant tasks, comparative advantage breaks down, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At this point, we have to start looking at evolutionary dynamics instead of economic ones. An unaligned AI is likely to win in such a situation. Let's call this scenario B.

OpenAI has defined the point at which they become redistributive as the low hundreds of billions. In worlds where they are worth less than hundreds of billions (scenario A, which is broadly what you describe above), they are not threatening, so I don't care: they will help the world as most capitalist enterprises tend to, and most likely offer more good than the externalities they impose. And since in scenario A they will not have created cheap software capable of replacing all human capital, comparative advantage will work its magic.

In worlds where they are worth more, scenario B, they have defined a plan to become explicitly redistributive and to compensate everyone else, who is exposed to the extreme potential risks of AI, with a fair share of the extreme potential upsides. This seems very, very nice of them.

And should the extreme potentials be unrealizable then no problem.

This scheme allows them to leverage scenario A in order to subsidize safety research for scenario B. That seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.

The lowest estimate of the human brain's compute capacity is 30 TFLOPS; a more reasonable estimate is 1 PFLOPS. Achieving this level of compute would cost a few thousand dollars per hour, while a human costs $7-$500 per hour. On top of that, the energy bills would be outrageous. Because Moore's law has slowed, the price drops by 10x roughly every 15 years. I also think it's fair to assume that the human brain, sculpted over millions of years of evolution with extreme frugality, is fairly optimal as far as hardware requirements are concerned. So I tend to think "affordable AGI" might be around 45 years away.
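The back-of-envelope above can be sketched directly. All of the figures (~$5,000/hr for 1 PFLOPS today, a $7/hr human labor floor, a 10x price drop every 15 years) are the commenter's assumptions, not established numbers:

```python
import math

# Commenter's assumptions (illustrative, not established figures):
compute_cost_per_hr = 5000.0   # rough cost of ~1 PFLOPS today, $/hr
human_cost_per_hr = 7.0        # cheapest human labor, $/hr
years_per_10x_drop = 15.0      # Moore's law, slowed

# Years until compute matches the cheapest human labor,
# assuming the price falls by 10x every 15 years:
years = years_per_10x_drop * math.log10(compute_cost_per_hr / human_cost_per_hr)
print(round(years))  # ~43, in line with the ~45-year guess above
```

Note how sensitive the estimate is to the inputs: taking the $500/hr end of the human cost range instead cuts the result roughly in half.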

We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.

Why not open this compute up to the greater scientific community? We could use it, not just for AI.

What makes you think AGI is even possible? Most of current 'AI' is pattern recognition/pattern generation. I'm skeptical about the claims of AGI even being possible but I am confident that pattern recognition will be tremendously useful.

What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love an apple is pretty distinct from the way you love your spouse. Yet even though they failed to mention it, they wouldn't misunderstand you when you did mention loving an apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns it can't say "the ball run" in just the same way you learn to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.

If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!), surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...

You have pointed to examples where the tasks are pattern recognition, and I certainly agree that many tasks humans perform are pattern recognition. But my point is that not ALL tasks are: intelligence involves pattern recognition, but not all of intelligence is pattern recognition.

Pattern recognition works when there is a pattern (repetitive structure). But in the case of outliers, there is no repetitive structure and hence no pattern. For example, what is the pattern when a kid first learns 1+1=2? Or why must 'B' come after 'A'? It is taught as a rule (or axiom, or abstraction) on top of which higher-level patterns can be built. So I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.

What I'm trying to point out is that if you had asked someone whether any of those examples were "pattern matching" prior to the discovery that neural networks were so good at them, very reasonable and knowledgeable people would have said no. They would have said that generating sentences which make sense is more than any system _which simply predicted the next character in a sequence of characters_ could do.

Given this track record, I have learned to be suspicious of that part of my brain which reflexively says "no, I'm doing something more than pattern matching"

It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.

Aren't axioms just training data that you feed to the model?

> And it's not just vocabulary, the successes of RNNs show that grammar is also mostly patterns.

The shape of resultant word strings indeed forms patterns. However, matching a pattern is, in fact, different from being able to knowledgeably generate those patterns so they make sense in the context of a human conversation. It has been said that mathematics is so successful because it is contentless. This is a problem for areas that cannot be treated this way.

Go can be described in a contentless (mathematical) way, therefore success is not surprising (maybe to some it was).

It is those things that cannot be described in this manner where 'AGI' (Edit: 'AGI' based on current DL) will consistently fall down. You can see it in the datasets....try to imagine creating a dataset for the machine to 'feel angry'. What are you going to do....show it pictures of pissed off people? This may seem like a silly argument at first, but try to think of other things that might be characteristic of 'GI' that it would be difficult to envision creating a training set for.

Anyone that argues AGI is possible intrinsically believes the universe is finite and discretized.

I have found Quantum ideas and observations too unnerving to accept a finite and discretized universe.

Edit: this is in response to Go, or Starcraft, or anything that is boxed off -- these AIs will eventually outperform humans on a grand scale, but the existence of 'constants', or being in a sandbox, immediately precludes the results from speaking to AI's generalizability.

I'm not sure what you're saying here.

Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.

Or maybe you're saying that brains are taking advantage of something at the quantum level? Computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?

I admit that's possible, but it's a strong claim and I don't see why it's more likely than the idea that brains are very well structured neural networks which we're slowly making better and better approximations of.

Unless you assume some magic/soul/etc., a human brain is proof that there exists a non-impossible algorithm that learns to be a general intelligence, and that it can run on non-impossible hardware.

Yes, I assume a magic/soul/etc., and I believe that the human brain is not stand-alone in creating intelligence. Check out this exciting video for a discussion of how 'thinking' can happen outside the brain: https://neurips.cc/Conferences/2018/Schedule?showEvent=12487

What makes you believe that you'll get there first?

I share the sentiment. The majority of the 'research' from OpenAI has been scaling up known algorithms, and almost all of the models have been built on top of research from outside OpenAI. My assessment is that OpenAI is currently not the leader in the field, but they want to get there by attracting talent through PR and money, which IMHO is a fine strategy.

I don't see the problem. If they get AGI, it will create value much larger than 100 billion. Much larger than trillions to be honest. If they fail to create AGI, then who cares?

> (AGI) — which we define as automated systems that outperform humans at most economically valuable work — [0]

I don't doubt that OpenAI will be doing absolute first class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.

For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse the idea with HLI(human-level intelligence). See [1] for a good discussion.

They will fail to create AGI, mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter: who will stop whom from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this...not a startup...not now...not ever. Period.

[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...

I wouldn't be surprised if OpenAI had some crazy acquisition in its future by one of the tech giants. Press release: 'We believe the best way to develop AGI is by joining forces with X and are excited to use it to sell you better ads. We have also turned the profits we would have paid taxes on over to a non-profit that pays us salaries for researching the quality of sand in the Bahamas.'

I was buying it until he said that profit is “capped” at 100x of the initial investment.

So someone who invests $10 million has their return “capped” at $1 billion. Lol. Basically unlimited, unless the company grew to a FAANG-scale market value.

We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company.

Leaving aside the absolutely monumental if that's in that sentence, how does this square with the original OpenAI charter[1]:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?

Likewise, also from the OpenAI charter:

> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.


1. https://openai.com/charter/

The Charter was designed to capture our values, as we thought about how we'd create a structure that allows us to raise more money while staying true to our mission.

Some companies to compare with:

- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3... Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date

- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and now is filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over 45x return to date
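The multiples quoted above can be checked directly; these are the approximate post-money valuations from the linked funding rounds, and the division is valuation growth rather than any particular investor's realized return:

```python
# Approximate post-money valuations from the comment above, in dollars:
stripe_series_a = 100e6   # Stripe Series A
stripe_series_e = 22.5e9  # Stripe Series E
slack_series_c = 220e6    # Slack Series C
slack_ipo = 10e9          # Slack's reported IPO valuation

# Paper multiple = later valuation / earlier valuation.
# (This ignores dilution, so actual investor returns are lower.)
print(stripe_series_e / stripe_series_a)  # 225.0 -> "over 200x"
print(slack_ipo / slack_series_c)         # ~45.5 -> "over 45x"
```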

> you don't want to "unduly concentrate power"? How will this work?

Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even making direct distributions to the world.

If I understand correctly from past announcements, OpenAI has roughly $1B committed in funding? So up until OpenAI attains a $100B valuation, its incentives are indistinguishable from a for-profit entity?

Because of dilution resulting from future investments, my understanding is that the valuation could be significantly higher before that threshold gets crossed.
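A hypothetical sketch of why dilution pushes the threshold up: the cap binds on an investor's stake, not on the company's valuation, so the valuation at which a 100x return is reached scales inversely with the investor's (diluted) ownership share. All numbers here are illustrative, not OpenAI's actual terms:

```python
def cap_valuation(investment, ownership_fraction, cap_multiple=100):
    """Company valuation at which an investor's stake hits the cap.

    The stake is worth valuation * ownership_fraction; the cap binds
    when that equals cap_multiple * investment.
    """
    return cap_multiple * investment / ownership_fraction

# Illustrative: $1B invested for a 100% claim -> cap binds at $100B.
print(cap_valuation(1e9, 1.0))  # 1e11
# After dilution to a 50% share, the cap binds only at $200B.
print(cap_valuation(1e9, 0.5))  # 2e11
```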

You've already changed the structure once; what is there to prevent the 100x cap becoming a 1000x cap, or any other arbitrary number?

Fair enough, that seems like a good answer. There will still probably be concerns about whether or not the cap can be changed in the future, however. But I don't know enough about nonprofit laws to comment on that.

Sorry Greg, but look how quickly Google set aside “don’t be evil.”

You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.

You went from “totally open” to “partially for profit” and “we think this is too dangerous to share” in three years. If you were on the outside, where would you predict this trend is leading?

Maybe they can train their AGI to tell them where this is leading.

That sounds like the delusion of most start-up founders in the world.

Which one of these mission statements is Alphabet's for-profit Deepmind and which one is the "limited-profit" OpenAI?

"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."

"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."

> > We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company.

> That sounds like the delusion of most start-up founders in the world.

huh? are you disputing that AGI would create unprecedented amounts of value?

"any existing company" only implies about $1T of value. That's like 1 year of Indonesia's output. That seems low to me, for creating potentially-immortal intelligent entities?

Nobody can front-run the field of artificial intelligence; progress is too incremental and slow, and it is insanely well-funded by Google, Facebook, Baidu, and Microsoft. Not to mention the source of so many improvements coming from CMU, Stanford, Berkeley, and MIT, and that's just in the US.

Edit: Removed Oxford; originally I was making a full list, but then realized I couldn't remember which Canadian school was the AI leader.

You nailed it. Anyone who thinks they're out in front of this industry just because they believe with all their heart their abstract word-salad mission statement belongs to a cult.

>> Edit: Remove Oxford, because originally I was making a full list .. but then realized I couldn't remember which Canadian school was the AI leader.

University of Toronto

> Oxford, and that's just in the US.


Do you see capping at 100x returns as reducing profit motives? As in, a dastardly profiteer would be attracted to a possible 1000x return but scoff at a mere 100x?

I doubt they really do. Even a 10x profit cap would be questionable with regard to this "not being a profit incentive".

I was going to make a comment on the line

>The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission

Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi books that have explored what will happen.

1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.

2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.

3. Robot takeover. Money, and humanity, is gone.

Sure, silliness in fiction, but is there a reasonable alternative outcome from the creation of actual, strong, general artificial intelligence? I can't see a world with this entity in it where "what happens to the investors' money" is a relevant question at all. Basically, if you succeed, why are we even talking about investor returns?

As far as I can disentangle from [1], OpenAI posits only moderate superhuman performance. The profit would come from a variant where OpenAI subsumes much but not all of the economy and does not bring things to post-scarcity. The nonprofit would take ownership of almost all of the generated wealth, but the investments would still have value since the traditional ownership structure might be left intact.

I don't buy the idea myself, but I could be misinterpreting.

[1] https://blog.gregbrockman.com/the-openai-mission

>>3. Robot takeover

This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that it isn't that the machines have revolted, but rather that, from their point of view, their masters have disappeared.


The sheer scale of machines running amok in this series is pretty fun. The source Manga is better than the movie.

re 1) there may be no scarcity of food and widgets, but there is only so much beachfront land. Money probably won't be worthless.

I hear you, but not everyone wants beachfront land. Furthermore, I do believe it would be possible to give everyone a way to wake up and see a beach, particularly in a post-scarcity world. I mean, let your imagination run wild: filling out existing islands, artificial islands, towers, etc.

But there will always be preferences. Whenever there is preference for finite resources (even if that resource is "number of meters from celebrity X"), there needs to be a method of allocation... which currently is money.

But if money is only useful to buy luxury things like beachfront land is it really going to be useful as a currency of exchange?

Sorry for being a buzzkill, but if you create something with an intellect on par with human beings and then force it to "create value" for shareholders, you just created a slave.

That depends on whether or not the machine has a conscious experience, and we have no way to interact with that question right now.

The reason we care about slavery is because it is bad for a conscious being, and we have decided that it is unethical to force someone to endure the experience of slavery. If there is no conscious being having experiences, then there isn't really an ethical problem here.

Isn't consciousness a manifestation of intelligence? I don't see how the two can be treated separately. Talking about AGI is talking about something that can achieve a level of intellect which can ask questions about "being", "self", "meaning" and all the rest that separate intelligence from mere calculation. Otherwise, what's the point of this whole endeavor?

No one knows what consciousness is. Every neuroscientist I've talked to has agreed that consciousness has to be an emergent property of some kind of computation, but there is currently no way to even interact with the question of what computation results in conscious experience.

It could be true that every complex problem solving system is conscious, and in that case maybe there are highly unintuitive conscious experiences, like being a society, or maybe it is an extremely specific type of computation that results in consciousness, and then it might be something very particular to humans.

We have no idea whatsoever.

I think a lot of us would be opposed to creating specially lobotomized humans who didn't realize they were slaves. It really gets into hairy philosophy once we start approaching AGI.

Let’s not get into the philosophical side of AGI right now; it’s such a distant reality that there is no point at this particular moment, and it only serves as a distraction.

How is it a distraction if that's the one and only goal of OpenAI? What are the investors investing in?

I thought the mission was for the AGI to be widely available, 'democratized'? It seems extremely unrealistic to generate 100x profits without compromising on availability.

Not really. If you create an autonomous robot capable of performing any task that a trained human can do today, and offer this machine for some suitably low-ish sum to anyone who wants one, you've both democratized AI and created more value than any company that exists today.

Why would they choose a low-ish sum if they own the market? There's more money to be made if the margin is higher (presuming it's below the buying threshold for enough people).

This is a bold statement lacking any serious scientific basis with respect to advancing the state of sensory AI (pattern recognition) and robotics (actuation).

Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?

Is there some legal structure in place to prevent you from raising the cap as partners begin to approach the 100x ROI?

The nonprofit board retains full control, and can only take actions that will further our mission.

As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.

So that's a no, then? If the board decides that for AGI to benefit humanity they need more investors, they can just as well remove the cap for a future investor, or raise it to 200x.

But the board already approved 100x, presumably they are free to approve 1000x? Which is to say, there is no new limitation on increasing the cap (ie removal of the power of the board or anyone else to increase the cap)?

We the OpenAI board have decided it would benefit all humanity for us to claim 100x profits.

There's no mention of the governing legal framework, I presume it's a US state?

Also, what are the consequences for failing to meet the goals. "We commit to" could really have no legal basis depending on the prevailing legal environment.

Reading pessimistically, I see the "we'll assist other efforts" clause as a way in which the spirit in which the charter is apparently offered could be subverted -- you assist a private company, and that company doesn't have anything like the charter, and instead uses the technology and assistance to create private wealth/IP.

Being super pessimistic: when the Charter organisation gets close, a parallel business can be started, which would automatically be "within 2 years", and so effort could then -- within the wording of the charter -- be diverted into that private company.

A clause requiring those who wish to use any of the resources of the Charter company to also make developments available reciprocally would need to be added.

Rather like share-alike or other GPL-style licenses that require patent licensing to the upstream creators.

AGI is of course completely transformative but this leaves me thinking you folks are just putting a "Do no Evil" window-dressing on an effort that was/continues to be portrayed as altruistic. Given partners like Khosla it seems to be an accurate sentiment.

And if you don't you'll be forced to open a DC office and bid on pentagon contracts.

Would you guys even release AGI? It's potentially more harmful than some language model...

How does that affect the incentives and motivations of investors? It doesn't matter how much value you create in the long run, investors will want returns, not safe AI.

> We believe that if we do create AGI,

Have you decided in which direction you might guide the AGI’s moral code? Or even a decision making framework to choose the ideal moral code?

Imagine someone else builds AGI and it does have that kind of a runaway effect. More intelligence begets more profits, which buy more intelligence, etc., to give you the runaway profits you're suggesting.

Shouldn't it have some kind of large scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?

How do you envision OpenAI capturing that value, though? Value creation can be enough for a non-profit, but not for a company. If OpenAI LP succeeds and provides a return on investment, what product will it be selling, and who will be buying it?

Kudos for having the guts to say it out loud; this would be a natural consequence of realizing safe and beneficial AGI. It's a statement that will obviously be met with some ridicule, but someone should at least be frank about it at some point.

This comment is going to be the "No wireless. Less space than a nomad. Lame." of 2029.

EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.

that's a big if

is it a misprint? 100%?

"100x" is laughable.

Really neat corporate structure! We'd looked into becoming a B-Corp, but the advice that we'd gotten was that it was an almost strictly inferior vehicle both for achieving impact and for potentially achieving commercial success for us. I'm obviously not a lawyer, but it's great to see OpenAI contributing new, interesting structures for solving hard global-scale problems.

I wonder if the profit cap multiple is going to end up being a significant signalling risk for them. A down-round is such a negative event in the valley, I can imagine an "increasing profit multiple" would have to be treated the same way.

One other question for the folks at OpenAI: How would equity grants work here? You get X fraction of an LP that gets capped at Y dollars of profit? Are the fractional partnerships transferable if earned into?

Would you folks think about publishing your docs?

Yes, we're planning to release a third-party usable reference version of our docs (creating this structure was a lot of work, probably about 6-9 months of implementation).

We've made the equity grants feel very similar to startup equity — you are granted a certain number of "units" which vest over time, and more units will be issued as other employees join in the future. Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.

>>Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.

Is this due to long term capital gains? Do you allow for early exercising for employees? Long term cap gains for options require holding 2 years since you were granted the options and 1 year since you exercised.

I'm also interested in how this corporate structure relates to a b-corp (or technically, a PBC)

> OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

One of the key reasons to incorporate as a PBC is to allow "maximizing shareholder value" to be defined in non-monetary terms (eg impact to community, environment, or workers).

How is this structure different from a PBC, or why didn't you go for a PBC?

We needed to custom-write rules like:

- Fiduciary duty to the charter

- Capped returns

- Full control to OpenAI Nonprofit

LPs have much more flexibility to write these in an enforceable way.

How do you actually define "Fiduciary duty to the charter"? Since charters tend to be vague, and are definitely not legal documents, almost anything can be said to be aligned with the charter.

Similarly, what's stopping an investor from implicit control by threat of removing their investment?

They were able to attract talent and PR in the name of altruism and here they are now trying to flip the switch as quietly as possible. If the partner gets a vote/profit then a "charter" or "mission" won't change anything. You will never be able to explicitly prove that a vote had a "for profit" motive.

Elon was irritated that he was behind in the AI intellectual property race and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing - "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI but I don't believe for a second that he will not participate in this LP]

My reading that the design of this structure is not to require partners to make decisions in the interest of the mission, but to remove incentives for them to make decisions against the interest of the mission. With a cap on returns, there's a point at which it stops making sense to maximize short-term value or reduce expenses, and with the words about fiduciary duty, it becomes defensible to make decisions that don't obviously increase profit. That is, this structure seems much better than the traditional startup structure and I suspect many entities that are currently actual, normal startups would do more good for the world under a structure like this. (Or that many people who are bootstrapping because they have a vision and they don't want VCs to force them into short-term decisions could productively take some VC investment under this sort of model.)

I agree this isn't a non-profit any more. It seems like that's the goal: they want to raise money the way they'd be able to as a normal startup (notably, from Silicon Valley's gatekeepers who expect a return on investment), without quite turning into a normal startup. If the price for money from Silicon Valley's gatekeepers is a board seat, this is a safer sort of board seat than the normal one.

(Whether this is the only way to raise enough money for their project is an interesting question. So is whether it's a good idea to give even indirect, limited control of Friendly AI to Silicon Valley's gatekeepers - even if they're not motivated by profit and only influencing it with their long-term desires for the mission, it's still unclear that the coherent extrapolated volition of the Altmans and Khoslas of the world is aligned with the coherent extrapolated volition of humanity at large.)

If they're willing to make this change, they might be willing to remove the cap in the future when they have something truly marketable.

Even more worrying is the prospect that they'll use their profit to lobby for regulation that aligns with their goals under their "non-profit ethical framework", shutting out any would-be competitors who have a different take. If they get big enough it is inevitable. This is overall a gross move that leaves a seriously bad taste. I hope no one takes their ethical arguments seriously as they pursue this path - doing so will endanger the industry.

> If the partner gets a vote/profit then a "charter" or "mission" won't change anything

(I work at OpenAI.)

The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst

People who control the money generally have a lot of influence, especially when money is running short, regardless of whether they are on the board or not.

> "(b) anyone with a stake can't vote in decisions that may conflict with the mission:"

Will never work in practice

No, Elon parted ways with OpenAI some time ago due to differences in opinion over their direction. Looks like we’re starting to learn the details.

Didn't know this - thanks for clarifying. I will update my comment if it is picked on further

So for some reason you're sure he will participate in OpenAI LP when OpenAI say he's not involved in OpenAI LP? Are they lying?

As an investor. What is not clear about that?

Ok, so you say they're lying. Got it.

OpenAI explicitly says somewhere that Elon's money won't be in the LP, directly or indirectly?

Quoting from the post you're commenting on:


Who’s involved

* OpenAI Nonprofit’s board consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley.

* Elon Musk left the board of OpenAI Nonprofit in February 2018 and is not formally involved with OpenAI LP. We are thankful for all his past help.

* Our investors include Reid Hoffman’s charitable foundation and Khosla Ventures, among others. We feel lucky to have mission-aligned, impact-focused, helpful investors!

That seems to be the general consensus of /r/MachineLearning as well: https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_o...

> "come help me save the planet" with overpriced cars.

You are helping the planet if those customers would've bought ICE luxury vehicles instead of BEV luxury vehicles. I'm not sure BEV could be done any other way but a top-down, luxury-first approach. So, what exactly is your gripe there? Are you a climate change denier or do you believe that cheap EVs were the path to take?

If you are trying to maximise the benefit to society, it may be necessary to crack AGI before Google or some other corporation does. That's probably not going to happen without serious resources.

What's to stop someone with a vote but not an investment from significantly investing in an AI application (business/policy/etc.) that directly aligns with one of OpenAI's initiatives? The spirit of this LP structure is commendable but it does not do enough to eliminate pure profit-minded intentions.

This seems like an unnecessarily cynical take on things. And ultimately, if the outcome is the same, what do you (or anyone) really care if people are making more money from it or if there are commercial purposes?

The OpenAI staff are literally some of the most employable folks on earth; if they have a problem with the new mission it's incredibly easy for them to leave and find something else.

Additionally, I think there's a reason to give Sam the benefit of the doubt. YC has made multiple risky bets that were in line with their stated mission rather than a clear profit motive. For example, adding nonprofits to the batch and supporting UBI research.

There's nothing wrong with having a profit motive or using the upsides of capitalism to further their goals.

It most certainly is not unnecessarily cynical. The point is that money clouds the decision-making process and responsibilities of those involved - which is the whole ethos that OpenAI was founded on.

Investor returns are capped at 100x; that's quite a high cap for a non-profit.

Interesting way to think about it:

This is equivalent to saying:

"If you put 10m$ into us for 20% of the post-money business, anything beyond a 5B$ valuation you don't see any additional profits from" which seems like a high but not implausible cap. I suspect they're also raising more money on better terms which would make the cap further off.
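The arithmetic behind such a cap can be sketched quickly (using the hypothetical figures above, $10m for 20% post-money with a 100x cap, not OpenAI's actual terms):

```python
# Hypothetical illustration of capped-return arithmetic.
# Figures are the commenter's example, not OpenAI's actual terms.

def capped_return_ceiling(investment, stake, cap_multiple):
    """Valuation at which a capped investor stops seeing further upside.

    The investor's maximum payout is investment * cap_multiple; their
    stake is worth that payout once the company is valued at
    payout / stake.
    """
    max_payout = investment * cap_multiple
    return max_payout / stake

# $10M for 20% of the post-money company, capped at 100x:
ceiling = capped_return_ceiling(10_000_000, 0.20, 100)
print(ceiling)  # 5000000000.0, i.e. the $5B valuation mentioned above
```

The same function shows why raising billions makes the cap look distant: at $1bn invested for the same 20% stake, the ceiling moves to a $500bn valuation.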

Yeah but they've already said they need to raise billions not millions. It's a completely implausible cap.

First not publishing the GPT-2 model, now this...hopefully I am wrong but it looks like they are heading towards being a closed-off proprietary AI money making machine. This further incentivizes them to be less transparent and not open source their research. :(

OpenAI's mission statement is to ensure that AGI "benefits all of humanity", and its charter rephrases this as "used for the benefit of all".

But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.

So, what does that commitment mean?

If an application benefits some people and harms others, is it unacceptable? What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?

Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?

What is the line?

Greg, you seem to be answering questions here so I have one for you:

This change seems to be about ease of raising money and retaining talent. My question is: are you having difficulty doing those things today, and do you project having difficulty doing that in the foreseeable future?

I'll admit I'm skeptical of these changes. Creating a 100x profit cap significantly (I might even say categorically) changes the mission and value of what you folks are doing. Basically, this seems like a pretty drastic change and I'm wondering if the situation is dire enough to warrant it. There's no question it will be helpful in raising money and retaining talent, I'm just wondering if it's worth it.

Our mission is articulated here and does not change: https://openai.com/charter/. As we say in the Charter, our primary means of accomplishing the mission is to build safe AGI ourselves. That means raising billions of dollars, without which the Nonprofit will fail at its mission. That's a huge amount of money and not something we could raise without changing structure.

Regardless of structure, it's worth humanity making this kind of investment because building safe AGI can return orders of magnitude more value than any company has to date. See one possible AGI application in this post: https://blog.gregbrockman.com/the-openai-mission#the-impact-...

How much progress do you think you've made in the past 3 years towards that goal and what makes you think that you'll get there within the next few decades?

Also what makes you believe that Open AI will get there way ahead of thousands of other research labs?

Yes or no: will you remain a registered non-profit organization (501(c)-type orgs or similar), or were you ever? It's fine to call yourself non-profit, but if you don't have to abide by the rules of them then you aren't, period.

I think all of us here are tired of "altruistic" tech companies which are really profit mongers in disguise. The burden is on you all to prove this is not the case (and this doesn't really help your case).

The article says that OpenAI the registered non-profit now owns OpenAI LP. This isn't unprecedented: for instance, the Mozilla Foundation owns the Mozilla Corporation, and IKEA is owned by a non-profit. (I mention these two examples precisely because it's common to find Mozilla's corporate structure reasonable and IKEA's corporate structure unreasonable.)

Yes, OpenAI Nonprofit is a 501(c)(3) organization. Its mission is to ensure that artificial general intelligence benefits all of humanity. See our Charter for details: https://openai.com/charter/.

The Nonprofit would fail at this mission without raising billions of dollars, which is why we have designed this structure. If we succeed, we believe we'll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.

In other words, you have no downside. Create AGI and you win the game. Don't and you walk away with profit from the ride.

'Do no evil'

'We're connecting everyone'

I admire the will to want to do things differently, but the lack of self awareness of some drinking the koolaid bothers me - I don't know the word for it exactly.

A 100x cap on returns is simply not a non-profit, point blank. Any investor, with any risk profile, in any industry, at any stage - would be super happy with a 100x return.

I also don't doubt the 'market reality' of needing to have such a situation in order to attract investment, I mean, it'd be very hard to bring in money without providing some return ....

... but this is called 'capitalism'.

i.e. the reality of the market, risks etc. are forcing their hand into a fairly normative company profile, with some structural differences which facilitate mostly PR spin.

Companies are not 'for profit' because they are inherently greedy, it's just a happy equilibrium for most scenarios: you need investors? Well, they want risk-adjusted returns.

Also implicit in the 100x capped returns is that the company gets to keep the rest of the money! The money goes into more staff, capex, higher prices to suppliers, or lower prices to consumers. But within that framework, in a very cynical take, it could also just mean massive payouts to employees.

I love the motivation, but I wish we would be more objective in terms of 'what things really are' these days. Photos of 'families with babies' don't make a company 'better'. We all have families that we love, workmates we like (or not sometimes).

They are looking to raise billions and cap returns at 100x? That means the returns will be capped in the trillions? So if they raise $5bn, they need to generate $500bn before money starts flowing to the non-profit organization.

More like: If we make enough money to own the whole world, we'll give you some food not to starve.

Reactions on Reddit seem different from here - https://redd.it/azvbmn

Genuine question: Is this restructure for the purpose of taking government military contracts? I don't see how investors would be getting 100x returns otherwise and my understanding was that salaries for employees was competitive with big tech companies. Curious where Open AI feels like there's money to be made.


OpenAI is slowly but surely turning into another for profit AI company. They are slowly killing all the original ideals that made OpenAI unique over the hundreds of AI startups. They should just rebrand it.

And they are unironically talking about creating AGI. AGI is awesome of course, but maybe that is a tiny little bit overconfident?

Ok, so when OpenAI was still a straight non-profit the Charter made sense in the context and there wasn't much need to specify it any further.

Now with OpenAI leaving the non-profit path the Charter content, fuzzy as it is, is 100% up for interpretation. It does not specify what "benefit of all" or "undue concentration of power" means concretely. It's all up for interpretation.

So at this point the trust that I can put into this Charter is about the same that I can put into Google's "Don't be evil"...

The Nonprofit has full control, in a legally binding way: https://openai.com/blog/openai-lp/#themissioncomesfirst

Will investing be open to all accredited investors or just a handpicked selection? Opening a crowdsourced investment opportunity would be in line with your vision to democratize the use of AI. The more people that have a non-operational ownership stake in Open AI the better.

Great question, and how OpenAI LP handles accepting investments will say a lot.

Is there something about the mystical nature of AGI that attracts sketchiness and flim-flammery? I remember the "Singularity Institute for Artificial General Intelligence" trying to pull similar scams a decade ago.

Clearly deep learning has solved the hardest AI problem of them all: that of funding.

> ... Sam Altman (CEO) ...

Was this announced before or is this the first time they've mentioned it?

Nope, it was mentioned last Friday in a blog post (that he was stepping down from YC):


And TechCrunch had a source last Friday as well that said Altman intended to become CEO of OpenAI:


It was tangentially implied in the "YC Updates" thread from a few days ago, where it mentioned Sam "stepping away" to "focus on open.ai".

I did not think this was implied by the previous statement. But I have not been following org structure of openai at all.

They say they have started this new form of company because there is no "pre-existing legal structure" suitable.

But there are precedents for investing billions of dollars into blue sky technologies and still being able to spread the wealth and knowledge gathered - it's called government investment in science - it has built silicon chips and battery technologies and ... well quite a lot.

Is this company planning on "fundamental" research (anti-adversarial, "explainable" outcomes?) - and why do we think government investment is not good enough?

Or, worryingly, are the major tech leaders now so rich that they can honestly take on previous government roles (with only the barest of nods to accountability and legal obligation to return value to the commons)?

I am a bit scared that it's the latter - and even then this is too expensive for any one firm alone.

These people have spent their entire life in the Valley, they don't know any better.

I was very much behind the mission; now I’m not so sure. If it was this easy for OpenAI to start down this path, think of what Amazon or Facebook will do - people with no moral compass whatsoever. It’s probably not too early to start thinking about government regulation.

Presumably OpenAI created a lot of IP with donor dollars under the original nonprofit entity. Who owns that IP now? I imagine it got appraised and sold by the original nonprofit to the new OpenAI LP. That seems like a difficult process, given no one really knows what this type of IP is worth. If this is what happened, who did that appraisal and how was it done?

If no IP was sold to the new OpenAI LP because some or all of the IP created under the original nonprofit OpenAI was open sourced, will the new OpenAI LP continue that practice?

(I work at OpenAI.)

See my tweet about this: https://twitter.com/gdb/status/1105173883378851846

> We had the fair market value of anything transferred from the nonprofit to the LP determined by an outside firm.

Greg, would you please elaborate more on this part of your tweet? Also, can the OpenAI LP commercialize work/research produced by OpenAI non-profit? Can you use grants that were raised by the non-profit into recruiting for the LP?

Thanks for taking questions and engaging in conversations to make things clear for our community.

So first they withhold the model they built and now this. I’m not implying anything but this looks fishy

Very cool idea. Like some others here, I really appreciate attempts to create new structures for bringing ideas to the world.

Since I'm not a lawyer, can you help understand the theoretical limits of the LP's "lock in" to the Charter? In a cynical scenario, what would it take to completely capture OpenAI's work for profit?

If the Nonprofit's board was 60% people who want to break the Charter, would they be capable of voting to do so?

To OpenAI team, that is not right but it's very well played.

You guys raised free money in the form of grants, acquired the best talent in the name of a non-profit with the purpose of saving humanity, and have pulled publicity stunts that actually hurt science and the AI community, taking the first steps against reproducibility by not releasing GPT-2 so you can further commercialize your future models.

Also, you guys claim that the non-profit board retains full control, but seems like the same 7 white men on that board are also on the board of your profit company and have a strong influence there.

Call it what you want, but I think this was planned out from day one. Now, you guys won the game. It's just a matter of time before you dominate the AI game, keep manipulating us, and appear on the Forbes list.

Also, I expect that you guys will dislike that comment instead of having an actual dialogue and discussion.

> We are traveling a hard and uncertain path, but we have designed our structure to help us positively affect the world should we succeed in creating AGI—which we think will have as broad impact as the computer itself and improve healthcare

Grammar--would change to: as broad an impact

Could some random regular person, who is an accredited investor under US rules (e.g. a non-US person), invest, say, $10,000 in this venture as a minor investor/contributor? Or is OpenAI LP only interested in much larger investment amounts?

OTOH, it is exciting to see people who are not google/facebook/uber going into the for-profit race. Perhaps they'll feel some competition over real products now. (But the "100x cap" thing is just childish.)

One object lesson in how this can go wrong: REI

This "cooperative" ostensibly elects its board. In reality, nomination by existing members of the REI board is the only way to stand for election by the REI membership, and when you vote you can only mark "For" the nominated candidates (there's no information on how to vote against, though at another time they indicated that the alternative was "Withhold vote"). While the board members don't earn much, there is a nice path from board member to REI executive ... which can pay as much as $2M/year for the CEO position.

Interesting, I'm super tempted to apply to that Mechanical Engineer opening. How exactly does OpenAI make money though? It is sponsored or is there external investment (Can you invest in a non-profit?)?

I don't think we can guarantee AGI will benefit all humanity, open-sourcing it may help but not necessarily. My heart actually sinks when I read that mission statement on this page, it's like in the movies where the guy has a gun to someone's head and gets them to give up the information they know before blowing their brains out.

Is there any indication of what avenues OpenAI will be (or would consider) using to generate revenue? A lot of the most financially lucrative opportunities for AI (surveillance/tracking, military) are morally ambiguous at best.

If they actually make a strong, general artificial intelligence, sci fi has a couple answers.

In one (I can't find the title), MIT students make an SGAI and somehow manage to keep it contained (away from the internet). They feed it animated Disney movies and it cranks out the best animated movies ever made. They make billions. Eventually they make "live-action" movies that are indistinguishable from the real thing. Then they make music, books, etc, and create an unstoppable media force.

They could leverage the AI to discover hyper-efficient supply chain methods.

They could sequence genomes and run experiments, and sell the data.

Possibly exciting things around weather prediction.

Very exciting things around any research.

Certainly if they make a strong AGI, money is no longer an issue. I'm curious what they will do in the, ah, interim, on the off-chance that inventing SAGI turns out to be a difficult problem.

Without an obviously stated business model to satisfy the investor returns, it’s hard to take the values platitudes seriously. Do you plan to make your pitch deck public? That’d help.

I admire the attempt to create a sustainable project that's primarily about creating a positive impact!

For those (including myself) who wonder whether a 100x cap will really change an organization from being profit-driven to being positive-impact-driven:

How could we improve on this?

One idea is to not allow investors on the board. Investors are profit-driven. If they're on the board, you'll likely get pressure to do things that optimize for profit rather than for positive impact.

Another idea is to make monetary compensation based on some measure of positive impact. That's one explicit way to optimize for positive impact rather than money.

Why the Limited Partnership at all? What can the nonprofit "Inc" do through the for-profit "LP" shell that it could not do in its own right?

Hey Greg,

Since you seem to be answering questions in this thread, here's one:

How does OpenAI LP's structure differ from that of a L3C (Low-profit Limited Liability company)?

To "ensure it is used for the benefit of all" requires limiting how AGI is used.

How will OpenAI do that?

There are a couple of comments on the theme that this is taking a non-profit into a for-profit company, and that that is a bad thing.

I'd like to offer up an alternate opinion: non-profit operating models are generally ineffective compared to for-profit operating models.

There are many examples.

* Bill Gates is easy; make squillions being a merciless capitalist, then turn that into a very productive program of disease elimination and apparently energy security nowadays.

* Open source is another good one in my opinion - even when they literally give the software away, many of the projects leading their fields (eg, Google Chrome, Android, PostgreSQL, Linux Kernel) draw heavily on sponsorship by for-profit companies using them for furthering their profits - even if the steering committee is nominally non-profit.

* I have examples outside software, but they are all a bit complicated to type up. Things like China's rise.

It isn't that there isn't a place for researchers who are personally motivated to do things; there is just a high correlation between something making a profit and it getting done to a high standard.

So are they looking for capital or they have it?

"Our investors include Reid Hoffman’s charitable foundation and Khosla Ventures, among others."

I'm assuming these investors have already provided capital.

Looking at their team it's basically both. They'll get whatever they want if they haven't already.

Mission oriented for-profit companies is an oxymoron. Profit comes from competing in markets and markets determine what you end up doing. That’s why I’m always skeptical of Don’t Be Evil types of missions because when you’re starting you can’t even imagine what market pressure will end up making you do.

Between the market pressures from investors, employees, competitors, to what extent can a company really stay true to its business and deny potential profit that conflicts with it.

Also, it’s hard to root for specific for profit companies (although I’m rooting for capitalism per se).

Why not make OpenAI LP a B-Corp?

Is it just me, or are there not many African Americans working in AI research and industry? I don't have stats to back me up, but that's my personal observation. People in the field, what are your thoughts on this?

This Twitter account may be of interest to you; they aggregate information on this topic. https://twitter.com/black_in_ai. I don't know why race is relevant to this article, though. Must we make everything a race issue?

Thank you for this!!!

You're welcome!!!

There are just not many AA in tech in the bay area. Most pictures from Silicon Valley often look like that.

I thought a similar thing when I saw that pic: Not a single black male or female. And I'm all like, "Where am I?" LoL

I don't have any statistics for you, but Google at least is looking to improve on this a bit. A good friend of mine from their NYC Brain office moved to Accra, Ghana just last week to help build out their new office there.

I don't think an office in Ghana would be employing very many African Americans (or, for that matter, very many Americans of any background).

A Ghana office would be presumably to recruit Africans, not African Americans. There is a difference :)

Certainly there seem to be no black folks in the photo captioned "OpenAI team and their families at our November 2018 offsite."

African-American Founder/Research Engineer of AGI venture (Monad.ai) here. We exist.

And Khosla Ventures is one of their key investors.

Let's not forget that Khosla himself does not exactly care about public interest or existing laws https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...

The article is probably the most selfish thing I've read in a while. It's not evil or exploitative, it's just very selfish.

But I don't think it's so morally wrong that OpenAI shouldn't do business with him, since they have security mechanisms in place and limit his power. He's just a grumpy old guy who doesn't want to share his beach.

"He's just a grumpy old guy who doesn't want to share his beach."

Not his beach. That's the point.

It wasn't meant in a literal sense; maybe some quotation marks around "his" would have helped.

They need billions, and OpenAI has implemented this type of LP structure. I don't imagine they'll be as picky with investors.

Non-AMP link: https://www.nytimes.com/2018/10/01/technology/california-bea...

I just read the article, and am not sure I see the issue. Quote from his lawyer: “No owner of private business should be forced to obtain a permit from the government before deciding who it wants to invite onto its property"

Where's the issue here? The guy basically bought the property all around the beach and decided to close down access. I wouldn't say it's a nice thing to do, but it's legal. If I buy a piece of property, my rights as the owner should trump the rights of a bunch of surfers who want to get to a beach. The state probably should have been smart enough not to sell all the land.

Failing that, just seize a small portion via eminent domain: a 15-foot-wide strip on the edge of the property would likely come at a reasonable cost, and ought to provide an amicable resolution for all.

It's actually not legal, at least in California. That's why he's trying to take it to the Supreme Court: he's hoping to get the federal government to override state laws.

He was also completely aware of this when he bought the property, so it's not like this is a surprise or someone forcing him to change things. He's the one who broke the law and broke the status quo that had existed at that beach.

Property rights are constitutionally protected, and under the precedents surrounding the 14th Amendment, this overrules California's rules. Eminent domain remains legal, but legally speaking, Khosla is in the right here.

The Supreme Court begs to differ.

The Supreme Court didn't grant cert; that's different. That means they don't want to set precedent or believe sufficient precedent exists already. This was last adjudicated in 1999 with Saenz v. Roe, where California tried to set new residents' welfare to what they got in other states for one year. The court ruled this violated the constitutional protection of interstate travel, and upheld the view that the 14th amendment applied all constitutional rights to all states. Source: https://www.law.cornell.edu/wex/fourteenth_amendment_0

This undoubtedly then applies the 5th amendment takings clause: “…nor shall private property be taken for public use, without just compensation.” This is clearly violated in this sense, and the state cannot violate this right (see above).

The fact that the Supreme Court didn't grant cert probably means they believe there is already precedent here, or just as probably that they didn't have the time. They always have a full docket; they were probably just out of slots.

I urge others to rebut this from a legal sense, not just say they disagree. People keep killing my comments, but it seems like they all just dislike the "selfish" appearance of the actions.

The Supreme Court refused to overturn the appeal, which means they upheld the decision of the lower courts.

I honestly don't understand how you can take that action and try to turn it around the way you are.

> The Supreme Court refused to overturn the appeal, which means they upheld the decision of the lower courts.

Completely incorrect. "The Court usually is not under any obligation to hear these [appealed] cases, and it usually only does so if the case could have national significance, might harmonize conflicting decisions in the federal Circuit courts, and/or could have precedential value. In fact, the Court accepts 100-150 of the more than 7,000 cases that it is asked to review each year." Source: https://www.uscourts.gov/about-federal-courts/educational-re...

The SC not hearing the case doesn't mean they uphold the lower court's ruling, it means they aren't hearing the case.

Check Article X, Section 4 of California’s constitution.

The courts already decided against him; he can just afford to pay the fine and continue restricting access.

See my reply above; the 14th Amendment applies the takings clause to California, and the Supremacy Clause further reinforces that the U.S. Constitution trumps California's constitution.

Sam Altman here to shake things up it seems!

We've been working on this new structure together for the past two years!

Oh that’s pretty cool. Do you by chance have any articles or posts going through your initial thought process and/or eventual realization for why you ultimately thought this transition was necessary for OpenAI?

Was it a particular event, a conversation, perhaps just an incremental ideation without any actual epiphany needed, etc?
