
OpenAI LP - gdb
https://openai.com/blog/openai-lp/
======
jpdus
Wow. Screw non-profit, we want to get rich.

Sorry guys, but before this you were probably able to get talent that is not
(primarily) motivated by money. Now you are just another AI startup. If the
cap were 2x, it could still make sense. But 100x? That's laughable!
And the split board, made up of friends and closely connected people, smells
like "greenwashing" as well. Don't get me wrong, it's totally OK to be an AI
startup. You just shouldn't pretend to be a non-profit then...

~~~
gdb
(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:

[https://twitter.com/Miles_Brundage/status/110519043405200588...](https://twitter.com/Miles_Brundage/status/1105190434052005889)

Why are we making this move? Our mission is to ensure AGI benefits all of
humanity, and our primary approach to doing this is to actually try building
safe AGI. We need to raise billions of dollars to do this, and needed a
structure like OpenAI LP to attract that kind of investment while staying true
to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See
[https://blog.gregbrockman.com/the-openai-mission](https://blog.gregbrockman.com/the-openai-mission)
for more details on how we think about the mission.

~~~
nck4222
I believe you. I also believe there are now going to be outside parties with
strong financial incentives in OpenAI who are not altruistic. I also believe
this new structure will attract employees with less altruistic goals, that
could slowly change the culture of OpenAI. I also believe there's nothing
stopping anyone from changing the OpenAI mission further over time, other than
the culture, which is now more susceptible to change.

~~~
nojvek
Something something money and power corrupts?

We can just look at Google and see that “don't be evil” does not work when
you’ve got billions of dollars and reach into everyone’s private lives.

------
danielcampos93
I wouldn't be surprised if OpenAI had some crazy acquisition in its future by
one of the tech giants. Press release says 'We believe the best way to develop
AGI is by joining forces with X and are excited to use it to sell you better
ads. We also have turned the profits we would have paid taxes on over to a
non-profit that pays us salaries for researching the quality of sand in the
Bahamas.'

------
aerovistae
I was buying it until he said that profit is “capped” at 100x of initial
investment.

So someone who invests $10 million has their investment “capped” at $1
billion. Lol. Basically unlimited unless the company grew to a FAANG-scale
market value.
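As a sanity check on that arithmetic, here is a minimal sketch of how a capped return works (illustrative numbers only; nothing beyond the 100x figure is public):

```python
def capped_return(investment: float, gross_payout: float,
                  cap_multiple: float = 100) -> float:
    """Payout an investor actually receives under a capped-return structure.

    Anything above cap_multiple * investment does not go to the investor
    (per the post, it flows to the Nonprofit instead).
    """
    return min(gross_payout, cap_multiple * investment)

# A $10M investment is capped at $1B, as the comment notes.
print(capped_return(10e6, 5e9))  # 1000000000.0
```

Below the cap, the function just passes the payout through; the cap only bites once returns exceed 100x.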

~~~
gdb
We believe that if we do create AGI, we'll create orders of magnitude more
value than any existing company.

~~~
komali2
I was going to make a comment on the line

>The fundamental idea of OpenAI LP is that investors and employees can get a
capped return if we succeed at our mission

Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi
books that have explored what will happen.

1\. Post-scarcity. AGI creates maximum efficiency in every single system in
the world, from farming to distribution channels to bureaucracies. Money
becomes worthless.

2\. Immortal ruling class. Somehow a few in power manage to own total control
over AGI without letting it/anyone else determine its fate. By leveraging
"near-perfect efficiency," they become god-emperors of the planet. Money is
meaningless to them.

3\. Robot takeover. Money, and humanity, is gone.

Sure, silliness in fiction, but is there a reasonable alternative to the
creation of actual, strong general artificial intelligence? I can't see a
world with this entity in it where the question of "what happens to the
investors' money" is relevant at all. Basically, if you succeed, why are we
even talking about investor return?

~~~
tim333
re 1) there may be no scarcity of food and widgets but there is only so much
beachfront land. Money probably won't be worthless.

~~~
komali2
I hear you, but not everyone wants beachfront land. Furthermore, I do believe
it would be possible to give everyone a way to wake up and see a beach,
particularly in a post-scarcity world. I mean, let your imagination run wild:
building out existing islands, artificial islands, towers, etc.

~~~
ivalm
But there will always be preference. Whenever there is preference for finite
resources (even if that resource is "number of meters from celebrity X") there
needs to be a method for allocation, which currently is money.

------
estsauver
Really neat corporate structure! We'd looked into becoming a B-Corp, but the
advice we'd gotten was that it was an almost strictly inferior vehicle, both
for achieving impact and for potentially achieving commercial success for us.
I'm obviously not a lawyer, but it's great to see OpenAI contributing new,
interesting structures to solve hard, global-scale problems.

I wonder if the profit cap multiple is going to end up being a significant
signalling risk for them. A down round is such a negative event in the valley
that I can imagine an "increasing profit multiple" would have to be treated
the same way.

One other question for the folks at OpenAI: how would equity grants work here?
You get X fraction of an LP that gets capped at Y dollars of profit? Are the
fractional partnerships transferable once earned into?

Would you folks think about publishing your docs?

~~~
gdb
Yes, we're planning to release a third-party usable reference version of our
docs (creating this structure was a lot of work, probably about 6-9 months of
implementation).

We've made the equity grants feel very similar to startup equity — you are
granted a certain number of "units" which vest over time, and more units will
be issued as other employees join in the future. Incidentally, these end up
being taxed more favorably than options, so we think this model may be useful
for startups for that reason too.
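The post doesn't publish the vesting terms, but "units which vest over time" can be sketched with a conventional startup-style schedule (the 4-year/1-year-cliff numbers below are hypothetical, not OpenAI's actual terms):

```python
def vested_units(granted: int, months_elapsed: int,
                 vest_months: int = 48, cliff_months: int = 12) -> int:
    """Units vested after a given number of months on a linear schedule
    with a cliff (hypothetical terms, for illustration only)."""
    if months_elapsed < cliff_months:
        return 0  # nothing vests before the cliff
    return min(granted, granted * months_elapsed // vest_months)

print(vested_units(4800, 6))   # 0 (before the cliff)
print(vested_units(4800, 24))  # 2400 (halfway through)
```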

~~~
eanzenberg
>>Incidentally, these end up being taxed more favorably than options, so we
think this model may be useful for startups for that reason too.

Is this due to long term capital gains? Do you allow for early exercising for
employees? Long term cap gains for options require holding 2 years since you
were granted the options and 1 year since you exercised.
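The two holding periods the comment cites (for an ISO qualifying disposition) can be checked mechanically; this is a sketch of the rule exactly as stated above, not tax advice:

```python
from datetime import date

def qualifying_disposition(grant: date, exercise: date, sale: date) -> bool:
    """True if the sale meets both ISO holding periods the comment cites:
    at least 2 years after grant and at least 1 year after exercise."""
    two_years_after_grant = grant.replace(year=grant.year + 2)
    one_year_after_exercise = exercise.replace(year=exercise.year + 1)
    return sale >= two_years_after_grant and sale >= one_year_after_exercise

print(qualifying_disposition(date(2019, 3, 1), date(2020, 3, 1), date(2021, 6, 1)))  # True
print(qualifying_disposition(date(2019, 3, 1), date(2021, 1, 1), date(2021, 6, 1)))  # False
```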

------
stevievee
They were able to attract talent and PR in the name of altruism and here they
are now trying to flip the switch as quietly as possible. If the partner gets
a vote/profit then a "charter" or "mission" won't change anything. You will
never be able to explicitly prove that a vote had a "for profit" motive.

Elon was irritated that he was behind in the AI intellectual property race and
this narrative created a perfect opportunity. Not surprised in the end. Tesla
effectively did the same thing - "come help me save the planet" with
overpriced cars. [Edit: Apparently Elon has left OpenAI but I don't believe
for a second that he will not participate in this LP]

~~~
gdb
> If the partner gets a vote/profit then a "charter" or "mission" won't change
> anything

(I work at OpenAI.)

The board of OpenAI Nonprofit retains full control. Investors don't get a
vote. Some investors may be _on_ the board, but: (a) only a minority of the
board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake
can't vote in decisions that may conflict with the mission:
[https://openai.com/blog/openai-lp/#themissioncomesfirst](https://openai.com/blog/openai-lp/#themissioncomesfirst)

~~~
timavr
People who control the money generally have a lot of influence, especially
when money is running short, regardless of whether they are on the board.

------
fuddle
Investor returns are capped at 100x; that's quite a high cap for a non-profit.

~~~
estsauver
Interesting way to think about it:

This is equivalent to saying:

"If you put 10m$ into us for 20% of the post-money business, anything beyond a
5B$ valuation you don't see any additional profits from" which seems like a
high but not implausible cap. I suspect they're also raising more money on
better terms which would make the cap further off.
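That implied-valuation reasoning can be written out directly (hypothetical numbers from the comment):

```python
def valuation_where_cap_binds(investment: float, post_money_fraction: float,
                              cap_multiple: float = 100) -> float:
    """Company valuation at which a capped investor stops accruing returns.

    When the investor's stake is worth cap_multiple * investment, the whole
    company is worth that amount divided by the ownership fraction.
    """
    return cap_multiple * investment / post_money_fraction

# $10M for 20% post-money: the 100x cap binds at a $5B valuation.
print(valuation_where_cap_binds(10e6, 0.20))  # 5000000000.0
```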

~~~
MattRix
Yeah but they've already said they need to raise _billions_, not millions.
It's a completely implausible cap.

------
bilater
First not publishing the GPT-2 model, now this... Hopefully I am wrong, but it
looks like they are heading towards being a closed-off, proprietary AI
money-making machine. This further incentivizes them to be less transparent
and not open source their research. :(

------
zestyping
OpenAI's mission statement is to ensure that AGI "benefits all of humanity",
and its charter rephrases this as "used for the benefit of all".

But without a more concrete and specific definition, "benefit of all" is
meaningless. For most projects, one can construct a claim that it has the
potential to benefit most or all of a large group of people at some point.

So, what does that commitment mean?

If an application benefits some people and harms others, is it unacceptable?
What if it harms some people now in exchange for the promise of a larger
benefit at some point in the future?

Must it benefit everyone it touches and harm no one? What if it harms no one
but the vast majority of its benefits accrue to only the top 1% of humanity?

What is the line?

------
tschwimmer
Greg, you seem to be answering questions here so I have one for you:

This change seems to be about ease of raising money and retaining talent. My
question is: are you having difficulty doing those things today, and do you
project having difficulty doing that in the foreseeable future?

I'll admit I'm skeptical of these changes. Creating a 100x profit cap
significantly (I might even say categorically) changes the mission and value
of what you folks are doing. Basically, this seems like a pretty drastic
change and I'm wondering if the situation is dire enough to warrant it.
There's no question it will be helpful in raising money and retaining talent,
I'm just wondering if it's worth it.

~~~
gdb
Our mission is articulated here and does not change:
[https://openai.com/charter/](https://openai.com/charter/). As we say in the
Charter, our primary means of accomplishing the mission is to build safe AGI
ourselves. That means raising billions of dollars, without which the Nonprofit
will fail at its mission. That's a huge amount of money and not something we
could raise without changing structure.

Regardless of structure, it's worth humanity making this kind of investment
because building safe AGI can return orders of magnitude more value than any
company has to date. See one possible AGI application in this post:
[https://blog.gregbrockman.com/the-openai-mission#the-impact-of-agi_1](https://blog.gregbrockman.com/the-openai-mission#the-impact-of-agi_1)

~~~
m_ke
How much progress do you think you've made in the past 3 years towards that
goal and what makes you think that you'll get there within the next few
decades?

Also, what makes you believe that OpenAI will get there way ahead of thousands
of other research labs?

------
dannykwells
Yes or no: will you remain a registered non-profit organization (401/501-type
orgs or similar), or were you ever? It's fine to call yourself a non-profit,
but if you don't have to abide by their rules then you aren't one, period.

I think all of us here are tired of "altruistic" tech companies which are
really profit mongers in disguise. The burden is on you all to prove this is
not the case (and this doesn't really help your case).

~~~
gdb
Yes, OpenAI Nonprofit is a 501(c)(3) organization. Its mission is to ensure
that artificial general intelligence benefits all of humanity. See our Charter
for details: [https://openai.com/charter/](https://openai.com/charter/).

The Nonprofit would fail at this mission without raising billions of dollars,
which is why we have designed this structure. If we succeed, we believe we'll
create orders of magnitude more value than any existing company — in which
case all but a fraction is returned to the world.

~~~
nycthbris
In other words, you have no downside. Create AGI and you win the game. Don't
and you walk away with profit from the ride.

------
csomar
They are looking to raise billions and cap returns at 100x? That means the
returns will be capped in the hundreds of billions? So if they raise $5bn,
they need to generate $500bn before the money starts flowing to the non-profit
organization.

More like: if we make enough money to own the whole world, we'll give you some
food so you don't starve.
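A small sketch of the waterfall being described here, using the $5bn example (illustrative only; the real terms aren't public):

```python
def proceeds_split(raised: float, total_proceeds: float,
                   cap_multiple: float = 100) -> tuple[float, float]:
    """Split proceeds between capped investors and the Nonprofit:
    investors collect up to cap_multiple * raised; the remainder
    (if any) goes to the Nonprofit."""
    investor_cap = cap_multiple * raised
    to_investors = min(total_proceeds, investor_cap)
    return to_investors, total_proceeds - to_investors

# Raise $5bn at a 100x cap: the Nonprofit sees nothing until
# investor returns pass $500bn.
print(proceeds_split(5e9, 400e9))  # (400000000000.0, 0.0)
print(proceeds_split(5e9, 600e9))  # (500000000000.0, 100000000000.0)
```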

------
ktta
Reactions on Reddit seem different from here -
[https://redd.it/azvbmn](https://redd.it/azvbmn)

------
formalsystem
Genuine question: is this restructure for the purpose of taking government
military contracts? I don't see how investors would be getting 100x returns
otherwise, and my understanding was that salaries for employees were already
competitive with big tech companies. Curious where OpenAI feels like there's
money to be made.

~~~
gdb
No.

------
bibyte
OpenAI is slowly but surely turning into another for-profit AI company. They
are slowly killing all the original ideals that made OpenAI unique among the
hundreds of AI startups. They should just rebrand.

And they are unironically talking about creating AGI. AGI is awesome, of
course, but maybe that is a tiny little bit overconfident?

------
rsp1984
OK, so when OpenAI was still a straight non-profit, the Charter made sense in
context and there wasn't much need to specify it any further.

Now, with OpenAI leaving the non-profit path, the Charter's content, fuzzy as
it is, is 100% up for interpretation. It does not specify what "benefit of
all" or "undue concentration of power" means concretely. It's all up for
interpretation.

So at this point the trust that I can put into this Charter is about the same
that I can put into Google's "Don't be evil"...

~~~
gdb
The Nonprofit has full control, in a legally binding way:
[https://openai.com/blog/openai-lp/#themissioncomesfirst](https://openai.com/blog/openai-lp/#themissioncomesfirst)

------
hhw3h
Will investing be open to all accredited investors or just a handpicked
selection? Opening a crowdsourced investment opportunity would be in line with
your vision to democratize the use of AI. The more people that have a
non-operational ownership stake in OpenAI, the better.

~~~
mark_l_watson
Great question, and how OpenAI LP handles accepting investments will say a
lot.

------
Mizza
Is there something about the mystical nature of AGI that attracts sketchiness
and flim-flammery? I remember the "Singularity Institute for Artificial
General Intelligence" trying to pull similar scams a decade ago.

------
YeGoblynQueenne
Clearly deep learning has solved the hardest AI problem of them all: that of
_funding_.

------
m_ke
> ... Sam Altman (CEO) ...

Was this announced before or is this the first time they've mentioned it?

~~~
eitally
It was tangentially implied in the "YC Updates" thread from a few days ago,
where it mentioned Sam "stepping away" to "focus on open.ai".

~~~
bredren
I did not think this was implied by the previous statement. But I have not
been following the org structure of OpenAI at all.

------
lifeisstillgood
They say they have started this new form of company because there is no
"pre-existing legal structure" suitable.

But there are precedents for investing billions of dollars into blue sky
technologies and still being able to spread the wealth and knowledge gathered
- it's called government investment in science - it has built silicon chips
and battery technologies and ... well quite a lot.

Is this company planning on "fundamental" research (anti-adversarial,
"explainable" outcomes?) - and why do we think government investment is not
good enough?

Or, worryingly, are the major tech leaders now so rich that they can honestly
take on previous government roles (with only the barest of nods to
accountability and legal obligation to return value to the commons)?

I am a bit scared that it's the latter - and even then this is too expensive
for any one firm alone.

~~~
Cacti
These people have spent their entire life in the Valley, they don't know any
better.

------
projectileboy
I was very much behind the mission; now I’m not so sure. If it was this easy
for OpenAI to start down this path, think of what Amazon or Facebook will do -
people with no moral compass whatsoever. It’s probably not too early to start
thinking about government regulation.

------
chaseadam17
Presumably OpenAI created a lot of IP with donor dollars under the original
nonprofit entity. Who owns that IP now? I imagine it got appraised and sold by
the original nonprofit to the new OpenAI LP. That seems like a difficult
process, given no one really knows what this type of IP is worth. If this is
what happened, who did that appraisal and how was it done?

If no IP was sold to the new OpenAI LP because some or all of the IP created
under the original nonprofit OpenAI was open sourced, will the new OpenAI LP
continue that practice?

~~~
gdb
(I work at OpenAI.)

See my tweet about this:
[https://twitter.com/gdb/status/1105173883378851846](https://twitter.com/gdb/status/1105173883378851846)

~~~
david2016
> We had the fair market value of anything transferred from the nonprofit to
> the LP determined by an outside firm.

Greg, would you please elaborate more on this part of your tweet? Also, can
OpenAI LP commercialize work/research produced by the OpenAI non-profit? Can
you use grants that were raised by the non-profit for recruiting for the LP?

Thanks for taking questions and engaging in conversations to make things
clear for our community.

------
thewarrior
So first they withhold the model they built, and now this. I'm not implying
anything, but this looks fishy.

------
elefanten
Very cool idea. Like some others here, I really appreciate attempts to create
new structures for bringing ideas to the world.

Since I'm not a lawyer, can you help me understand the theoretical limits of
the LP's "lock in" to the Charter? In a cynical scenario, what would it take
to completely capture OpenAI's work for profit?

If the Nonprofit's board were 60% people who want to break the Charter, would
they be capable of voting to do so?

------
greenburg
To the OpenAI team: that is not right, but it's very well played.

You guys raised free money in the form of grants, acquired the best talent in
the name of a non-profit whose purpose is saving humanity, pulled publicity
stunts that actually hurt science and the AI community, and took the first
steps against reproducibility by not releasing GPT-2 so you can further
commercialize your future models.

Also, you claim that the non-profit board retains full control, but it seems
like the same 7 white men on that board are also on the board of your
for-profit company and have a strong influence there.

Call it what you want, but I think this was planned out from day one. Now,
you guys won the game. It's just a matter of time to dominate the AI game,
keep manipulating us, and appear on the Forbes list.

Also, I expect that you guys will dislike this comment instead of having an
actual dialogue and discussion.

------
codekilla
> We are traveling a hard and uncertain path, but we have designed our
> structure to help us positively affect the world should we succeed in
> creating AGI—which we think will have as broad impact as the computer itself
> and improve healthcare

Grammar--would change to: _as broad an impact_

------
marvin
Could some random regular person who is an accredited investor under US rules
(e.g. a non-US person) invest, say, $10,000 in this venture as a minor
investor/contributor? Or is OpenAI LP only interested in much larger
investment amounts?

------
buboard
OTOH, it is exciting to see people who are not Google/Facebook/Uber entering
the for-profit race. Perhaps they'll feel some competition over real products
now. (But the "100x cap" thing is just childish.)

------
leot
One object lesson in how this can go wrong: REI.

This "cooperative" ostensibly elects its board. In reality, nomination by the
existing REI board is the only way to stand for election by the REI
membership, and when you vote you can only mark "For" the nominated candidates
(there's no information on how to vote against, though at another time they
indicated that the alternative was "Withhold vote"). While the board members
don't earn much, there is a nice path from board member to REI executive ...
which can pay as much as $2M/year for the CEO position.

------
syntaxing
Interesting, I'm super tempted to apply to that Mechanical Engineer opening.
How exactly does OpenAI make money, though? Is it sponsored, or is there
external investment (can you invest in a non-profit?)?

------
darepublic
I don't think we can guarantee AGI will benefit all humanity; open-sourcing it
may help, but not necessarily. My heart actually sinks when I read the mission
statement on this page. It's like in the movies where the guy has a gun to
someone's head and gets them to give up the information they know before
blowing their brains out.

------
czr
Is there any indication of what avenues OpenAI will be (or would consider)
using to generate revenue? A lot of the most financially lucrative
opportunities for AI (surveillance/tracking, military) are morally ambiguous
at best.

~~~
komali2
If they actually make a strong, general artificial intelligence, sci fi has a
couple answers.

In one (I can't find the title), MIT students make an SGAI and somehow manage
to keep it contained (away from the internet). They feed it animated Disney
movies and it cranks out the best animated movies ever made. They make
billions. Eventually they make "live-action" movies that are indistinguishable
from the real thing. Then they make music, books, etc, and create an
unstoppable media force.

They could leverage the AI to discover hyper-efficient supply chain methods.

They could sequence genomes and run experiments, and sell the data.

Possibly exciting things around weather prediction.

Very exciting things around _any_ research.

~~~
czr
Certainly if they make a strong AGI, money is no longer an issue. I'm curious
what they will do in the, ah, interim, on the off-chance that inventing SAGI
turns out to be a difficult problem.

------
wilde
Without an obviously stated business model to satisfy the investor returns,
it’s hard to take the values platitudes seriously. Do you plan to make your
pitch deck public? That’d help.

------
mkolodny
I admire the attempt to create a sustainable project that's primarily about
creating a positive impact!

For those (including myself) who wonder whether a 100x cap will really change
an organization from being profit-driven to being positive-impact-driven:

How could we improve on this?

One idea is to not allow investors on the board. Investors are profit-driven.
If they're on the board, you'll likely get pressure to do things that optimize
for profit rather than for positive impact.

Another idea is to make monetary compensation based on some measure of
positive impact. That's one explicit way to optimize for positive impact
rather than money.

------
i7rgf98o7fk
Why the Limited Partnership at all? What can the nonprofit "Inc" do through
the for-profit "LP" shell that it could not do in its own right?

------
jamessemaj
Hey Greg,

Since you seem to be answering questions in this thread, here's one:

How does OpenAI LP's structure differ from that of a L3C (Low-profit Limited
Liability company)?

------
zestyping
To "ensure it is used for the benefit of all" requires limiting how AGI is
used.

How will OpenAI do that?

------
roenxi
There are a couple of comments on the theme that this is taking a non-profit
into a for-profit company, and that that is a bad thing.

I'd like to offer an alternate opinion: non-profit operating models are
generally ineffective compared to for-profit operating models.

There are many examples.

* Bill Gates is easy; make squillions being a merciless capitalist, then turn that into a very productive program of disease elimination and apparently energy security nowadays.

* Open source is another good one in my opinion - even when they literally give the software away, many of the projects leading their fields (eg, Google Chrome, Android, PostgreSQL, Linux Kernel) draw heavily on sponsorship by for-profit companies using them for furthering their profits - even if the steering committee is nominally non-profit.

* I have examples outside software, but they are all a bit complicated to type up. Things like China's rise.

It isn't that there isn't a place for researchers who are personally motivated
to do things; there is just a high correlation between something making a
profit and it getting done to a high standard.

------
thoughtstheseus
So are they looking for capital, or do they have it already?

~~~
orky56
"Our investors include Reid Hoffman’s charitable foundation and Khosla
Ventures, among others."

I'm assuming these investors have already provided capital.

------
paraschopra
A mission-oriented for-profit company is an oxymoron. Profit comes from
competing in markets, and markets determine what you end up doing. That’s why
I’m always skeptical of Don’t Be Evil types of missions: when you’re starting
out, you can’t even imagine what market pressure will end up making you do.

Between the pressures from investors, employees, and competitors, to what
extent can a company really stay true to its mission and deny potential
profit that conflicts with it?

Also, it’s hard to root for specific for-profit companies (although I’m
rooting for capitalism per se).

------
estill01
Why not make OpenAI LP a B-Corp?

------
ckugblenu
Is it just me, or are there not many African Americans working in AI research
and industry? I don't have stats to back me up, but that's my personal
observation. People in the field, what are your thoughts on it?

~~~
nickparker
I don't have any statistics for you, but Google at least is looking to improve
on this a bit. A good friend of mine from their NYC Brain office moved to
Accra, Ghana just last week to help build out their new office there.

~~~
perennate
I don't think an office in Ghana would be employing very many African
Americans (or, for that matter, very many Americans of any background).

------
option
And Khosla Ventures is one of their key investors.

Let's not forget that Khosla himself does not exactly care about the public
interest or existing laws:
[https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...](https://www.google.com/amp/s/www.nytimes.com/2018/10/01/technology/california-beach-access-khosla.amp.html)

~~~
mises
Non-AMP link:
[https://www.nytimes.com/2018/10/01/technology/california-bea...](https://www.nytimes.com/2018/10/01/technology/california-beach-access-khosla.html)

I just read the article, and am not sure I see the issue. Quote from his
lawyer: “No owner of private business should be forced to obtain a permit from
the government before deciding who it wants to invite onto its property."

Where's the issue here? The guy basically bought the property all around the
beach and decided to close down access. I wouldn't say it's a nice thing to
do, but it's legal. If I buy a piece of property, my rights as the owner
should trump the rights of a bunch of surfers who want to get to a beach. The
state probably should have been smart enough not to sell _all_ the land.

Failing that, just seize a small portion via eminent domain: a 15-foot-wide
strip on the edge of the property would likely come at a reasonable cost, and
ought to provide an amicable resolution for all.

~~~
tedivm
It's actually not legal, at least in California. That's why he's trying to
take it to the Supreme Court: he's hoping to get the federal government to
override state laws.

He was also completely aware of this when he bought the property, so it's not
like this is a surprise or someone forcing him to change things. He's the one
who broke the law and broke the status quo that had existed at that beach.

~~~
mises
Property rights are constitutionally protected, and under the precedents
surrounding the 14th Amendment, this overrules California's rules. Eminent
domain remains legal, but legally speaking, Khosla is in the right here.

~~~
dEnigma
The Supreme Court begs to differ.

~~~
mises
The Supreme Court didn't grant cert; that's different. That means they don't
want to set precedent or believe sufficient precedent exists already. This was
last adjudicated in 1999 with Saenz v. Roe, where California tried to set new
residents' welfare to what they got in other states for one year. The court
ruled this violated the constitutional protection of interstate travel, and
upheld the view that the 14th amendment applied all constitutional rights to
all states. Source:
[https://www.law.cornell.edu/wex/fourteenth_amendment_0](https://www.law.cornell.edu/wex/fourteenth_amendment_0)

This undoubtedly then applies the 5th amendment takings clause: “…nor shall
private property be taken for public use, without just compensation.” This is
clearly violated in this sense, and the state cannot violate this right (see
above).

The fact that the Supreme Court didn't grant cert probably means they believe
there is already precedent here, or just as probably that they didn't have the
time. They always have a full docket; they were probably just out of slots.

I urge others to rebut this from a legal sense, not just say they disagree.
People keep killing my comments, but it seems like they all just dislike the
"selfish" appearance of the actions.

~~~
tedivm
The Supreme Court refused to overturn the appeal, which means they upheld the
decision of the lower courts.

I honestly don't understand how you can take that action and try to turn it
around the way you are.

~~~
mises
> The Supreme Court refused to overturn the appeal, which means they upheld
> the decision of the lower courts.

Completely incorrect. "The Court usually is not under any obligation to hear
these [appealed] cases, and it usually only does so if the case could have
national significance, might harmonize conflicting decisions in the federal
Circuit courts, and/or could have precedential value. In fact, the Court
accepts 100-150 of the more than 7,000 cases that it is asked to review each
year." Source:
[https://www.uscourts.gov/about-federal-courts/educational-re...](https://www.uscourts.gov/about-federal-courts/educational-resources/about-educational-outreach/activity-resources/supreme-1)

The SC not hearing the case doesn't mean they uphold the lower court's ruling,
it means they aren't hearing the case.

------
lawrenceyan
Sam Altman here to shake things up it seems!

~~~
gdb
We've been working on this new structure together for the past two years!

~~~
lawrenceyan
Oh that’s pretty cool. Do you by chance have any articles or posts going
through your initial thought process and/or eventual realization for why you
ultimately thought this transition was necessary for OpenAI?

Was it a particular event, a conversation, perhaps just an incremental
ideation without any actual epiphany needed, etc?

