> Angel investor Ron Conway wrote, "What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI."
With all sympathy and empathy for Sam and Greg, whose dreams took a blow, I want to say something about investors [edit: not Ron Conway in particular, whom I don't know; see the comment below about Conway]: The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management. And if anyone objects to the impact on the employees, they justify it with the magic words that somehow cancel all morality and humanity - 'it's business' - and call you an unserious bleeding heart. But when the investor's buddy CEO is fired ...
I think that's wrong and that they should also take into account the impact on employees. But CEOs are commanders on the business battlefield; they have great power over the company's outcomes, which are the reasons for the layoffs/firings. Lower-ranking employees are much closer to civilians, and also often can't afford to lose the job.
There is why you do something. And there is how you do something.
OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller, research-focused team. But how they went about this is appalling, unprofessional, and a blight on corporate governance.
They have blindsided partners (e.g. Satya is furious), split the company into two camps, and let Sam and Greg go angry and seeking retribution. Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.
For me there is no justification for how this all happened.
As someone who has orchestrated two coups in different organizations, where the leadership did not align with the organization's interests and missions, I can assure you that the final stage of such a coup is not something that can be executed after just an hour of preparation or thought. It requires months of planning. The trigger is only pulled when there is sufficient evidence or justification for such action. Building support for a coup takes time and must be justified by a pattern of behavior from your opponent, not just a single action. Extensive backchanneling and one-on-one discussions are necessary to gauge where others stand, share your perspective, demonstrate how the person in question is acting against the organization's interests, and seek their support. Initially, this support is not for the coup, but rather to ensure alignment of views. Then, when something significant happens, everything is already in place. You've been waiting for that one decisive action to pull the trigger, which is why everything then unfolds so quickly.
How are you still hireable? If I knew you orchestrated two coups at previous companies and I was responsible for hiring, you would be radioactive to me. Especially knowing that all that effort went into putting together a successful coup over other work.
Coups, in general, are the domain of the petty. One need only look at Ilya and D'Angelo to see this in action. D'Angelo neutered Quora by pushing out its co-founder, Charlie Cheever. If you're not happy with the way a company is doing business, your best action is to walk away.
Let me pose a hypothetical. Let’s say you’re a VP or Senior Director. One of your sibling directors or VPs runs a department and field in which you have intimate domain knowledge, meaning you have a successful track record in that field from both the management side and the IC side.
Now, that sibling director allows a culture of sexual harassment, law-breaking, and toxic throat-slitting behavior. HR and the organization's leadership are aware of this. However, the company is profitable, happy outside his department, and stable. They don’t want to rock the boat.
Is it still “the domain of the petty” to have a plan to replace them? To have formed relationships to work around them, and keep them in check? To have enacted policies outside their department to ensure the damage doesn’t spread?
And most importantly, to enact said replacement plan when they fuck up just enough that leadership gives them the side-eye, and you push the issue with your documentation of their various grievances?
Because that… is a coup. That is a coup that is, at least in my mind, moral and just, leading to the betterment of the company.
“Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail. Not when there’s still the ability to effect positive change and fix the problems. Captains always evacuate all passengers before they leave the ship. Else they go down with it.
> “Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail.
Yes, exactly. In fact, it's corruption of leadership.
If an engineer came to the leader about a critical technical problem and said, 'our best choice is to pretend it's not there', the leader would demand more of the engineer. At a place like OpenAI, they might remind the engineer that they are the world's top engineers at arguably the most cutting edge software organization in the world, and they are expected to deliver solutions to the hardest problems. Throwing your hands up and ignoring the problem is just not acceptable.
Leaders need to demand the same of themselves, and one of their jobs is to solve the leadership problems that are just as difficult as those engineering problems - to deliver leadership results to the organization just like the engineer delivers engineering results, no excuses, no doubts. Many top-level leaders don't have anyone demanding performance of them, and don't hold themselves to the same standards in their job - leadership, management - as they hold their employees.
> Not when there’s still the ability to effect positive change and fix the problems.
Even there, I think you are going too easy on them. Only in hindsight do you maybe say, 'I don't see what could have been done.' In the moment, you say 'I don't see it yet, so I have to keep looking and innovating and finding a way'.
Max Levchin was an organizer of two coups while at PayPal. Both times, he believed it was necessary for the success of the company. Whether that was correct or not, they eventually succeeded and I don’t think the coups really hurt his later career.
PayPal had an exit, but it absolutely did not succeed in the financial revolution it was attempting. People forget now that OG PayPal was attempting the digital financial revolution that later would be bitcoin’s raison d'être.
Dismissing PayPal as anything but an overwhelming business success takes a lot of confidence. Unless you're Gates or Zuckerberg, etc., I don't know how you have anything but praise for PayPal from that perspective.
Comparing PayPal's success in digital finance to cryptocurrency's is an admission against interest, as they say in the law.
I think getting to an IPO in any form during the wreckage of the Dotcom crash counts as an impressive success, even if their vision wasn't fully realized.
Yep. PayPal was originally a lot like venmo (conceptually -- of course we didn't have phone apps then). It was a way for people to send each other money online.
This example seems to be survivorship bias. Personally, if someone approached me to suggest backstabbing someone else, I wouldn't trust that they wouldn't eventually backstab me as well. @bear141 said "People should oppose openly or leave." [1] and I agree completely. That said, don't take vacations! (when Elon Musk was ousted from PayPal in the parent example, etc.)
> I wouldn't trust that they wouldn't eventually backstab me as well.
They absolutely would. The other thing you should take away from this is how they'd do it-- by manipulating proxies to do it with/for them, which makes it harder to see coming and impossible to defend against.
Whistleblowers are pariahs by necessity. You can't trust a known snitch won't narc on you if the opportunity presents itself. They do the right thing and make themselves untrustworthy in the process.
(This is IMO why cults start one way and devolve into child sex abuse so quickly-- MAD. You can't snitch on the leader when Polaroids of yourself exist...)
> don't take vacations!
This can get used against you either way, so you might as well take that vacation for mental health's sake.
I had this exact thing happen a few weeks ago in a company that I have invested in. That didn't quite pan out in the way the would-be coup party likely intended. To put it mildly.
I feel like in the parent comment coup is sort of shorthand for the painful but necessary work of building consensus that it is time for new leadership. Necessary is in the eye of the beholder. These certainly can be petty when they are bald-faced power grabs, but they equally can be noble if the leader is a despot or a criminal. I would also not call Sam Altman's ouster a coup even if the board were manipulated into ousting him, he was removed by exactly the people who are allowed to remove him. Coups are necessarily extrajudicial.
It also looks like Sam Altman was busy creating another AI company, alongside his creepy WorldCoin venture, wasteful crypto/bitcoin support, and the no less creepy stories of abuse coming from his younger sister.
Work on, or transfer of intellectual property or his good name into, another venture while not disclosing it to OpenAI is a clear breach of contract.
He is clearly instrumental in attracting investors, talent, and partners, and in commercializing technology developed by Google Brain and pushed further by Hinton's students and the OpenAI team. But he was just present in the room where the veil of ignorance was pushed forward. He is replaceable, and another leader, less creepy and with fewer conflicts of interest, may do a better job.
It is no surprise that the OpenAI board attempted to eject him. I hope that this attempt will be a success.
Why is there a presumption that it must take precedence over other work?
I've run or defended against 'grassroots organizations transformations' (aka, a coup) at several non-profit organizations, and all of us continued to do our daily required tasks while the politicking was going on.
Because any defense of being able to orchestrate a professional coup and do your other work with the same zeal and focus as you did before fomenting rebellion I take as seriously as people who tell me they can multitask effectively.
It's just not possible. We're limited in how much energy we can bring to daily work, that's a fact. If your brain is occupied both with dreams of king-making and your regular duties at the job, your mental bandwidth is compromised.
I’ve hired people that were involved in palace coups at unicorn startups, twice. Justified or not, those coups set the company on a downward spiral it never recovered from.
I’m not sure I can identify exactly who is liable to start a coup, but I know for sure that I would never, ever hire someone who I felt confident might go down that route.
> I’ve hired people that were involved in palace coups at unicorn startups, twice...I know for sure that I would never, ever hire someone who I felt confident might go down that route.
So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...
These people were early hires at a company I co-founded (but was not in an official leadership role at). They had never pulled a coup before, but they would do so within two years of being hired. The coup didn’t affect me directly, and indeed happened when I was out of the country and was presented as a fait accompli. But nevertheless I left not long thereafter as the company had already begun its downward slide.
The point of my comment was this: in retrospect, I’m not sure there’s anything that would have tipped me off to that behavior at the time of interview. But if this were something I could somehow identify, it would absolutely be my #1 red flag for future hires.
Edit: The “twice” part might have made my comment ambiguous. What I meant was after I hired them, these people went on to pull two separate, successive coups, which indicates to me the first time wasn’t an aberration.
>So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...
You might have missed this from GP's comment:
>>I’m not sure I can identify exactly who is liable to start a coup
In other words, at least once these people have pulled the wool over their eyes during the hiring process.
If I'm confident in my competence and the candidate has a trustworthy and compelling narrative about how they undermined incompetent leadership to achieve a higher goal - yep, for sure.
Also, one person's incompetent is another's performer.
Like, being crosswise in organizational politics does not imply less devotion to organizational goals, but rather often simply a different interpretation of those goals.
But being in a situation where this was called for twice?
That strikes me as someone who either lacks the ability to do proper due diligence or is a straight-up sociopath looking for weak-willed people they can strong-arm out. Part of the latter is having the ability to create a compelling narrative for future marks, to put it bluntly.
The regular HN commenter says "CEOs are bad, useless, and get paid too much," but now, when someone suggests getting rid of one of them, suddenly it's the end of the world.
Agreed. Whilst I don’t trust China’s CCP, I sure as heck don’t trust anything from Falun Gong. Those guys are running an asymmetric battle against the Chinese State and frankly they would be capable of saying anything if it helped their cause.
Interesting. Wikipedia's description of them as a "new religious movement" is inconsistent with the body of the article. It looks like it started as some kind of Chi Kung exercise and wellness group, but it got big very fast and the Chinese government got concerned about its popularity. Then, under CCP persecution, it escalated and morphed into a full-blown political dissident movement, initially viewed favorably by the press as such. Now the Wikipedia article is very unfavorable because of The Epoch Times's misalignment with the press. Ok, I think I understand.
I wouldn't trust either the CCP or Falun Gong to speak my weight, they are both power structures and they are both engaging in all kinds of PR exercises to put themselves in the best light. But to Falun Gong's credit: I don't think they've engaged in massive human rights violations so they have that going for them. But there are certain cult like aspects to it and 'new religious movement' or not I think that the fewer such entities there are the better (and also fewer of the likes of the CCP please).
I would never work with you. This is why investors have such a bad reputation. If I had not retained 100% ownership and control of my business, I am sure someone like you would have tossed me out by now.
> They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution.
Given the language in the press release, wouldn't it be more accurate to say that Sam Altman, and not the board, blindsided everyone? It was apparently his actions and no one else's that led to the consequence handed out by the board.
> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.
From all current accounts, doesn't that seem like what Altman and his crew were already trying to do and was the reason for the dismissal in the first place?
The only appropriate target for Microsoft's anger would be its own deal negotiators.
OpenAI's dual identity as a nonprofit/for-profit business was very well known. And the concentration of power in the nonprofit side was also very well known. From the media coverage of Microsoft's investments, it sounds as if MSFT prioritized getting lots of business for its Azure cloud service -- and didn't prioritize getting a board seat or even an observer's chair.
Microsoft terminating the agreement by which it supplies compute to OpenAI and OpenAI licenses technology to it would be an existential risk to OpenAI (though competing cloud providers might step in under similar terms to fill the gap Microsoft created). But whether or not OpenAI's technology ended up somewhere else immediately (it eventually would, even if OpenAI failed completely and was dissolved), Microsoft would go from the best-positioned enterprise AI cloud provider to very far behind overnight.
And while that might hurt OpenAI as an institution more than it hurts Microsoft as an institution, the effect on Microsoft's top decision-makers personally vs. OpenAI's top decisionmakers seems likely to be the other way around.
Non-zero time, but not a lot either. Main hangup would be acquiring data for training, as their engineers would remember the parameters for GPT-4 and Microsoft would provide the GPUs. But Microsoft with access to Bing and all its other services ought to be able to help here too.
Amateurs on Hugging Face are able to match OpenAI in an impressively short time. Actual former OpenAI engineers with an unlimited budget ought to be able to do as well or better.
If OpenAI were in true crisis, I'm sure Amazon would step in to invest, for exclusive access to GPT-4 (in spite of their Anthropic investment). That would put Azure in a bad place. So not exactly "all" the power.
Not to mention, after that, MSFT might be left bagholding onto a bunch of unused compute.
Sam and Greg have already said they’re starting an OpenAI competitor, and at least 3 senior engineers have jumped ship already. More are expected tonight. Microsoft would just back them as well, then take their time playing kingmaker in choosing the winner.
That's true, but Sutskever and Co still have the head start: on the models, the training data, the GPT-4 licenses, etc. Their Achilles heel is the compute, which Microsoft will pull out. Khosla Ventures and Sequoia may sell their OpenAI stakes at a discount, but I'm sure either Google or Amazon will snap them up.
All Sam and Greg really have is the promise of building a successful competitor, with a big backing from Microsoft and Softbank, while OpenAI is the orphan child with the huge estate. Microsoft isn't exactly the kingmaker here.
Sutskever built the models behind GPT-4, if I reckon correctly (all credit to the team, but he's the focal point behind expanding on Google's transformers). I don't see Sam and Greg working with him under the same roof after this fiasco, since he voted them out (he could have been the tie-breaking vote).
OpenAI leadership (board, CEO) didn't say that ... your link said their "Chief Strategy Officer" Jason Kwon said it.
The most likely outcome here does seem to be that Altman/Brockman come back, Sutskever leaves and joins Google, and OpenAI becomes for all intents and purposes a commercial endeavor, with Microsoft wielding a lot more clout over them (starting with one or more board seats).
Could they? I don't know the details of MSFT's contracts with OpenAI... but even if they can legally just walk away, doing so would certainly have some negative impact on MSFT's reputation in future negotiations.
They loved to trot out the “mission” as a reason to trust a for-profit entity with the tech.
Well, this is proof the mission isn’t just MBA bullshit, clearly Ilya is actually committed to it.
This is like if Larry and Sergey had never decided to progressively nerf "don't be evil" as they kept accumulating wealth: they would have had to stage a coup as well. But they didn't; they sacrificed the mission for the money.
I wonder if there's a specific term or saying for that; maybe "projection" or "self-victimization", but not quite: when one person biasedly frames other people as responsible for a bad thing, when they themselves were doing that very thing in the first place. Maybe "hypocrisy"?
The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.
Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.
If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.
From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).
It also seems that Sam had maybe seen the writing on the wall and was planning an exit already, perhaps those rumors of him working with Jony Ive weren't overblown?
Anything that is language related. Extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from really big prompts, translating, rewriting, fixing grammar errors etc. Half of the corporations in the world have such needs more or less.
I don't think the issue was a technical difference of opinion over whether transformers alone suffice or other architectures are required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated. OpenAI won't be able to pursue their "betterment of humanity" mission without funding, and they seemingly just pissed off their biggest funding source with a move that will also make other would-be investors very skittish now.
Making humanity’s current lives worse to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.
Literally any investment is a diversion of resources from the present (harming the present) to the future. E.g. planting grains for next year rather than eating them now.
There is a difference between investing in a company that develops AI software in a widely accessible way that improves everyone's lives, and a company that pursues software to put entire sectors out of work for the profit of a dozen investors.
"Put out of work" is a good thing. If I make a new JS library that means a project that used to take 10 devs now takes 5, I've put 5 devs out of work. But I've also made the world a more efficient place, and those 5 devs can go do some other valuable thing.
Who can afford it? When LawyerAI and AccountAI are used by all of the mega corps to find more and more tax loopholes and many citizens can’t work then where will UBI come from?
I think the EA movement has been broadly skeptical towards Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust Sam.
I don't think it has to be unfettered progress that Ilya is slowing down for. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.
I think danger from AGI often presumes the AI has become malicious, but the AI making mistakes while in control of say, industrial machinery, or weapons, is probably the more realistic present concern.
Early adoption of these models as controllers of real world outcomes is where I could see such a disagreement becoming suddenly urgent also.
That's literally what we already do to each other. You think the 1% care about poor people? Lmao, the rich lobby and manufacture race and other wars to distract from the class war, they're destroying our environment and numbing our brains with opiates like Tiktok.
> OpenAI is well within its rights to change strategy even as bold as from a profit-seeking behemoth to a smaller research focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.
This wasn't a change of strategy, it was a restoration of it. OpenAI was structured with a 501c3 in oversight from the beginning exactly because they wanted to prioritize using AI for the good of humanity over profits.
This isn't going to make me think in any way that OpenAI will return to its more open beginnings. If anything, it shows me they don't know what they want.
I agree. They've had tension between profit motive and the more grandiose thinking. If they'd resolved that misalignment early on they wouldn't be in this mess.
Note I don't particularly agree with their approach, just saying that's what they chose when they founded things, which is their prerogative.
Yet they need massive investment from Microsoft to accomplish that?
> restoration
Wouldn’t that mean that over the long term they will just be outcompeted by the profit-seeking entities? It’s not like OpenAI is self-sustainable (or even can be, if they choose the non-profit way).
>Yet they need massive investment from Microsoft to accomplish that?
Massive spending is needed for any project as massive as "AI", so what are you even asking? A "feed the poor" project does not expect to make a profit, but, yes, it needs large cash infusions...
> a blight on corporate governance
> They have blind-sided partners (e.g. Satya is furious)
> the threat that a for-profit version of OpenAI dominates the market
It's seeming like corporate governance and market domination are exactly the kind of thing the board are trying to separate from with this move. They can't achieve this by going to investors first and talking about it - you think Microsoft isn't going to do everything in their power to prevent it from happening if they knew about it? I think their mission is laudable, and they simply did it the way it had to be done.
You can't slowly untangle yourself from one of the biggest companies in the world while it is coiling around your extremely valuable technology.
In other words, it’s unheard of for a $90B company with weekly active users in excess of 100 million. A coup leaves a very bad taste for everyone - employees, users, investors and the general public.
When a company experiences this level of growth over a decade, the board evolves with the company. You end up with board members that have all been there, done that, and can truly guide the management on the challenges they face.
OpenAI's hypergrowth meant it didn’t have the time to do that. So the board that was great for a $100 million, even a billion $ startup falls completely flat for 90x the size.
I don’t have faith in their ability to know what is best for OpenAI. These are uncharted waters for anyone though. This is an exceptionally big non-profit with the power to change the world - quite literally.
And yet it’s very heard of for corporations to poison our air and water, cut corners and kill peoples, and lie, cheat, and steal. That happens every day and nobody cares.
And yet four people deciding to put something - anything - above money is somehow a disaster.
Sorry I don't see the 'how' as necessarily appalling.
The less appalling alternative would have been weeks of discussions and the board asking for Sam's resignation to preserve the decorum of the company. How would that have helped the company? The internal strife would have spread and employees would have gotten restless, leading to reduced productivity and shipping.
Instead, isn't this a better outcome? There is immense short-term pain, but there is no ambiguity and the company has set a clear course of action.
To claim that the board has caused a split in the company is quite preposterous, unless you have first-hand information that such a split has actually happened. As far as public information is concerned, 3 researchers have quit so far, and you have this from one of the EMs:
"For those wondering what’ll happen next, the answer is we’ll keep shipping. @sama & @gdb weren’t micro-managers. The comes from the many geniuses here in research product eng & design. There’s clear internal uniformity among these leaders that we’re here for the bigger mission."
This snippet in fact shows the genius of Sam and gdb: how they enabled the teams to run even in their absence. Is it unfortunate that the board fired Sam? From the engineer's and builder's perspective, yes; from the long-term AGI research perspective, I don't know.
And instead of resolving this and presenting a unified strategy to the company they have instead allowed for this split to be replicated everywhere. Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.
> Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.
They probably joined because it was the most awesome place to pursue their skills in AI, but they _knew_ they were joining an organization whose explicit goal was not profit. If they hoped that profit chasing would eventually win, that's their problem, and frankly, having this wakeup call is a good thing for them so they can reevaluate their choices.
The possibility of getting fired is an occupational hazard for anyone working in any company, unless something in your employment contract says otherwise. And even then, you can still be fired.
Biz 101.
I don't know why people even need to be explained this, except for ignorance of basic facts of business life.
This is the biggest takeaway for me. People are building businesses around OpenAI APIs and now they want to suddenly swing the pendulum back to being a fantasy AGI foundation and de-emphasize the commercial aspect? Customers are baking OpenAI's APIs into their enterprise applications. Without funding from Microsoft their current model is unsustainable. They'll be split into two separate companies within 6 months in my opinion.
I'm sure my coworkers at [retailer] were not happy to be even shorter staffed than usual when I was ambush fired, but no one who mattered cared, just as no one who matters cares when it happens to thousands of workers every single day in this country. Sorry to say, my schadenfreude levels are quite high. Maybe if the practice were TRULY verboten in our society... but I guess "professional" treatment is only for the suits and wunderkids.
I noticed you decided to use several German words in your reply; trying not to be petty, but you should at least attempt to write them correctly. It’s either Wunderkind (the German word for child prodigy) or the English translation: wonder kid.
You are correct, though I must be Mandela Effect-ing, because I could have sworn that "wunderkid" was an accepted American English corruption of the original term, a la... Well, "a la" (à la).
My use of "schadenfreude", in general, can be attributed largely to Avenue Q and Death Note. Twice is coincidence.
And the stupid thing is, they could have just used the allegations his sister made against him as the reason for the firing and ridden off into the sunset, scot-free.
OpenAI is a nonprofit charity with a defined charitable purpose. It has a for-profit subsidiary that is explicitly subordinated to the purpose of the nonprofit, to the extent that investors in the subsidiary are advised in the operating agreement to treat investments as if they were more like donations, and that the firm will prioritize the charitable function of the nonprofit (which retains full governance power over the subsidiary) over returning profits, which it may never do.
It is, only it has an exotic ownership structure. Sutskever has just used the features of that structure to install himself as the top dog. The next step is undoubtedly packing the board with his loyalists.
Whoever thinks you can tame a 100-billion-dollar company by putting a "non-profit" in charge of it clearly doesn't understand people.
But as far as I can tell, unless you are in the exec suites at both OpenAI and at Microsoft, these are just your opinions, yet you present them as fact.
The way Altman behaved and manipulated the board to form this Frankenstein company is also appalling. I think it's clear now that the OpenAI board are not business people, and they had no idea how to work with someone as cold and manipulative as Altman; thus they blundered and made fools of themselves, as often happens to the naive.
> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.
If it were so easy to go to the back of the queue and become a threat, OpenAI wouldn't be in the dominant position they're in now. If any of the leavers have taken IP with them, expect court cases.
> The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI.
The board's job is specifically to do right by the charitable mission of the nonprofit of which they are the board. Investors in the downstream for-profit entity (OpenAI Global LLC) are warned explicitly that such investments should be treated as if they were donations and that returning profits to them is not the objective of the firm; serving the charitable function of the nonprofit is, though profits may be returned.
> charitable mission of the nonprofit of which they are the board
This exactly. Folks have completely forgotten that Altman and Co have largely bastardized the vision of OpenAI for sport and profit. It's very possible that this is part of a larger attempt to return to the stated mission of the organization. An outcome that is undoubtedly better for humanity.
What is the evidence of that, and what is your evidence that this "return to mission" will be "undoubtedly better for humanity"?
After all, as we see by looking to history, the road to hell is paved with good intentions; lots and lots of altruistic do-gooders have created all manner of evil in their pursuit of a "better humanity".
I am not sure I agree with Sam Altman's vision of a "better tomorrow" any more than I would agree with the OpenAI board's vision of that same tomorrow. In fact, I have great distrust of people who want to shape humanity into their vision of what is "best"; that tends to lead to oppression and suffering.
I met Conway once. He described investing in Google because it was a way to relive his youth via founders who reminded him of him at their age. He said this with seemingly no awareness of how it would sound to an audience whose goal in life was to found meaningful, impactful companies rather than let Ron Conway identify with us & vicariously relive his youth.
Just because someone has a lot of money doesn’t mean their opinions are useful.
I mostly agree with you on this. That being said, I've never gotten the impression Ron is the type of VC you're referring to. He's definitely founder-friendly (that's basically his core tenant), but I've never found him to be the type of VC who is ruthless about cost-cutting or an advocate for layoffs. (And I say this as someone who tends to be particularly wary of investors)
Just a heads up, the word is 'tenet' (funny enough, in commercial real estate there is the concept of a 'core tenant' though -- i.e. the largest retailer in a shopping center).
Corporate legal entities should have a mandatory vote of no confidence clause that gives employees the ability to unseat executives if they have a supermajority of votes.
That would make things more equitable perhaps. It’d at least be interesting
it's hilarious how much people, for no reason, want to defend the honor of Sam Altman and co. i mean ffs, the guy is not your friend and will definitely backstab you if he gets the opportunity.
i'm surprised anyone can take this "oh woe is me i totally was excited about the future of humanity" crap seriously. these are SV investors here, morally equivalent to the people on Wall Street that a lot here would probably hold in contempt, but because they wore cargo shorts or something, everyone thinks that Sam is their friend and that just if the poor naysayers would understand that Sam is totally cool and uses lowercase in his messages just like mee!!!!
they don't give a shit that your product was "made with <3" or whatever
they don't give a shit about you.
they don't give a shit about your startup's customers.
they only give a shit about how many dollars they make from your product.
boo hooing over Sam getting fired is really pathetic, and I'd expect better from the Hacker News crowd (and more generally the rationalist crowd, which a lot of AI people tend to overlap with).
Yeah it’s crazy how much the tech community is defending this random CEO, considering the relatively unsympathetic response to the tech layoffs over the last year.
you reap what you sow. The way Altman publicly treated the Cruise co-founder set something like a new standard of "not doing right by". After that I'd have expected nobody would let Altman near any management position, yet SV is a land of huge money sloshing around care-free, and so I was just wondering who would be left holding the bag.
I read this as a defense of founders, and both Conway and Y Combinator are famous for their defense of founders.
He might be emotional and defending his friends; that's not in question, he likes the guys. And he might be more cynical when it comes to firing 10,000 engineers (that's less what I've heard about him personally, but maybe). In this case, however, he's explicitly defending not an employee victim of the almighty board, but the people who created the entity and later entrusted the board with some responsibility to keep that entity faithful to its mission.
Some might think Sam deserves that title less than Greg; I'm not sure I can vouch for either. But Conway is trying to say that all entities (and their governance) owe their founders a debt of consideration, of care. That's filial piety more than anything contractual, and it isn't the same as the social obligation an employer might have toward employees.
The cult for founders, “0 to 1” and all that might be overblown in San Francisco, but there’s still a legitimate sense that the people who started all this should only be kicked out if they did something outrageous. Take Woz: he’s not working, or useful, or even that respectful of Apple’s decisions nowadays. But he still gets “an employee discount” (which is admittedly more a gimmick). That deference is closer to what Conway seems to flag than the (indeed) fairly violent treatment of a lot of employees during the staff reduction of the last year.
That is a thoughtful exploration (or exposition) of what I was meaning to say. My point is that loyalty, that filial piety, should go to the employees who also work and sacrifice.
I think the distinction of founders is a rationalization of simple corruption: They know the founder, it's their buddy; they go to the same club, eat at the same restaurants, serve on the same boards, and have similar careers. Understanding the burden and challenges and the accomplishment of founders is instinctive, and appreciating founders is appreciating themselves.
> The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management.
That's why Sam & Greg weren't all they complained about. They led with the fact that it was shocking and irresponsible.
Ron seems to think that the board is not making the right move for OpenAI.
> They led with the fact that it was shocking and irresponsible.
I can see where the misalignment (ha!) may be: someone deep in the VC world would reflexively think that "value destruction" of any kind is irresponsible. However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts. Without getting into the specifics of this case: a non-profit's board is expected to make decisions that lose money (or not generate as much of it) if the decisions lead to results more consistent with the mission.
>However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts
Exactly. The tricky part is that the board started a second, for-profit company with VC investors as co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.
> Exactly. The tricky part is that the board started a second, for-profit company with VC investors as co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.
Anyone investing in or working for the for-profit LLC has to sign an operating agreement that states the LLC is not obligated to make a profit, that all investments should be treated as donations, and that the charter and mission of the non-profit is the primary responsibility of the for-profit LLC as well.
See my other response. If you have people sign a contract that says the mission comes first, but also give them profit-sharing stock and cap those profits at 1.1 trillion, it is bound to cause some conflicts of interest in reality, even if it is clear who calls the shots when balancing the mission and profit.
There might be some conflict of interest but the resolution to those conflicts is clear: The mission comes first.
OpenAI employees might not like it and it might drive them to leave, but they entered into this agreement with a full understanding that the structure has always been in place to prioritize the non-profit's charter.
Which might only be possible with future funding? From Microsoft, in this case. And in any case, if they give out any more shares in the for-profit, wouldn't they (with MS) be able to just take over the for-profit corp?
The deal with Microsoft was 11 billion for 49% of the venture. First, if OpenAI can't get it done with 11 billion plus whatever revenue they bring in, they probably won't get it done at all. Second, the way the for-profit is set up, it may not matter how much Microsoft owns, because the nonprofit keeps 100% of the control. That seems to be the deal Microsoft signed: they bought a share of profits with no control. Third, my understanding is that the 11 billion from Microsoft is based on milestones; if OpenAI doesn't meet them, they don't get all the money.
Just a nitpick. "Fiduciary" doesn't mean "money", it means an entity which is legally bound to the best interests of the other party. Non-profit boards and board members have fiduciary duties.
Thanks for that - indeed, I was using "fiduciary duty" in the context it's most frequently used - maximizing value accrued to stakeholders.
However, to nitpick your nitpick: for non-profits there might be no other party - just the mission. Imagine a non-profit whose mission is to preserve the history and practice of making 17th-century ivory cuff links. It's just the organisation and the mission; sometimes the mission is for the benefit of another party (or all of humanity).
The non-profit, in my use, was the party. I guess at some point these organizations may not involve people, in which case "party" would be the wrong term to use.
Investors are not gonna like it when the business guy who was pushing for productizing, profitability and growth gets ousted. We don't know all the details about what exactly caused the board to fire Sam. The part about lying to the board is notable.
It’s possible Sam betrayed their trust and actually committed a fireable offense. But even if the rest of the board was right, the way they’ve handled it so far doesn’t inspire a lot of confidence.
Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.
It is even possible to not be candid without even using lies of omission. For a CEO this could be as simple as just moving fast and not taking the time to report on major initiatives to the board.
It's possible to not be candid without even using lies of omission (and to be on the losing side of a vicious factional battle) and still get a nice note thanking you for all you've done, allowing you to step down and spend more time with your family at the end of the year. Or to carry on as before, but with onerous reporting requirements. Instead, the board dumped him with unusual haste and an almost unprecedented attack on his integrity. A lot of people are reading the room rather than hyperliterally focusing on the exact words used.
If I take the time to accuse my boss of failing to be candid instead of thanking him in my resignation letter or exit interview, I'm not saying I think he could have communicated better, I'm saying he's a damned liar, and my letter isn't sent for the public to speculate on.
Whether the board were justified in concluding Sam was untrustworthy is another question, but they've been willing to burn quite a lot of reputation on signalling that.
> hyperliterally focusing on the exact words used.
Business communication is never, ever forthright. These people cannot be blunt to the public even if their life depended on it. Reading between the lines is practically a requirement.
> Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.
OED:
candour - the quality of being open and honest in expression.
"They didn't state he lied ... without even using lies of omission ... they said he wasn't [word defined as honest and open]"
Candour encapsulates exactly those things. Being open (i.e. not omitting things and disclosing all you know) and honest (being truthful).
On the contrary, "not consistently candid", while you call it a "generalized term", is actually a quite specific term that was expressly chosen, and says, "we have had multiple instances where he has not been open with us, or not been honest with us, or both".
If "and" operates as logical "and," then being "honest and not open," "not honest and open," and "not honest and not open" would all be possibilities, one of which would still be "honest" but potentially lying through omission.
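To make that case analysis concrete: modeling "candid" as the conjunction open AND honest, a throwaway Python sketch (purely illustrative, not anything from the thread) enumerates the negated cases:

```python
from itertools import product

# "Candid" is modeled as the conjunction: open AND honest.
# "Not candid" negates that conjunction, which by De Morgan's law
# is equivalent to: (not open) OR (not honest).
cases = [
    (is_open, is_honest)
    for is_open, is_honest in product([True, False], repeat=2)
    if not (is_open and is_honest)
]

# Three combinations satisfy "not candid"; only one of them
# (honest but not open) never involves a direct falsehood --
# that is the "lying through omission" possibility.
for is_open, is_honest in cases:
    print(f"open={is_open}, honest={is_honest}")
```

The truth table bears out the comment's point: negating the conjunction leaves three possibilities, one of which is still "honest" in the strict sense.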
If they were looking out for investors, "blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious" doesn't make it sound like they did a terribly good job.
I'm fairly certain that a board is not allowed to capriciously harm the non-profit they govern and unless they have a very good reason there will be more fall-out from this.