Sam Altman, OpenAI board open talks to negotiate his possible return (bloomberg.com)
242 points by YetAnotherNick on Nov 21, 2023 | 242 comments




The worst thing about this whole affair has been the failure of the board to coherently express, seemingly to anyone, why they did what they did.


Maybe it's not defensible.

Or maybe it involves a deal that they're stuck with, and now dependent upon, even if the terms aren't aligned well with the non-profit. Saying that could be awkward.

What I can't come up with an explanation for is publicly stating the "not consistently candid" line in the initial announcement. I'm no expert on these things, but that sounds to me like either a euphemism for something they needed to be seen as explicitly distancing the company from ASAP, or a blunder that a lawyer or PR person would've advised them against, and which they didn't need to make.


> Maybe it's not defensible.

At this point, that's the 95% conclusion.

If you were in this position and had even a sliver of defensibility to play, you would have already played it.

Because the longer this goes, the more sharks circle and the less likely OpenAI's survival becomes.

(UNLESS you thought you f---ed up and wanted Altman back so you want to be nice to him. But if you wanted that, you wouldn't have hired Shear as interim CEO to replace your previous interim CEO.)


My first thought was that it's not defensible because it's a desired outcome and this route avoids either negative press or another consequence of him leaving of his own volition.


Why? It should be obvious: power. But that also means that it doesn't matter what they say, because they'll lie to conceal that this was really a power play, driven by petty-looking rivalries.

This was a hundred-billion-dollar rocketship pioneering the future. The surprising part is that this is only the first such palace intrigue we've seen.

The root of the problem seems to lie in the challenges of growth and the accompanying pressures. The issues we've observed in the past week are likely tied to weak or ineffective leadership. This isn't just about leading developers or the technical team; it's about leadership at every level within the company. If leadership were strong and clear, I doubt these issues would have arisen.

It's like whenever there is a power vacuum, all the possible contenders compete and there's chaos. The existence of chaos suggests at least that there's a power vacuum at the top. Similar to when big leaders throughout history died, and their children/lieutenants then fought and caused civil wars (e.g. the Mongols).

But while a leader is strong, there can be stability. Take, for instance, Facebook during its initial growth phase. Despite facing intense pressure from various quarters, it never descended into chaos or disarray. This stability could be attributed to a robust board, or perhaps to Mark Zuckerberg's leadership effectiveness. Whatever your opinion of him, his leadership across all levels of the company and on the board seemed to prevent competing interests from creating chaos.

This situation also underscores the importance of selecting the right people for the board. It's not just about finding smart individuals who look good on paper or focusing on trendy topics like AI ethics. The board members need a deep understanding of startups and business. Maybe the current board's inexperience and theoretical approach are issues. It could be that they are too young or resemble a vanity board more than an effective one.

However, we can't place all the blame on the board. Strong, clear, and effective leadership at the top is crucial. Sam Altman might excel in many areas, but perhaps what's needed is a leader like Zuckerberg at Facebook (and perhaps a similar leadership structure, too), or current-era Musk, someone who could have possibly prevented these problems.


> It should be obvious: power. But that also means that it doesn't matter what they say, because they'll lie to conceal that this was really a power play, driven by petty looking rivalries.

Am I the only one who finds these kinds of explanations unconvincing? I don't think anyone knows precisely what was going through Sutskever's mind on Friday, but whenever the best explanation people have for an event is that an obviously very smart person suddenly became very stupid, it gives me the feeling that we don't have all the information.


Yeah, I find this kind of dialectic extremely annoying. It's basically just a trivial answer. Yes, everyone wants agency, so in some way you can tie everything we do to 'power'. The actual question above was trying to ask what the board wanted the power to do and why they thought Sam was blocking them from doing it.

Their letter claims he was dishonest about something, we will see in the coming months why. I'm still very confused as to why Ilya would seemingly orchestrate such a monumental shift in the company, only to claim his 'deep regret' a few days later. It's pretty odd...

The main theory that fits for me is that he didn't want the mission to become profit-seeking, and probably Sam played down some of the deals he was making, or didn't mention them at all. Once Sam was removed, presumably pressure from the rank-and-file, the hundreds of people he sees everyday who uprooted their lives for a piece of that profit, turned Ilya away from his more pure convictions...

But who knows, this story will be interesting to watch over time.


That’s a fair criticism. While it’s true that everybody does want “power”, there’s a distinction to be made between the personal agency that we all share (and often rightly seek more of), and the power to which I’m referring here.

It’s likely that the first type, of personal power, is included in the other type of power, but what is meant here by “power” is: you get to shape the future and make huge money doing it.

Regarding Ilya’s seemingly contradictory behavior, it appears more likely that he wasn’t really orchestrating the coup.

Instead, he was a pawn, manipulated by other interests to be the public face of the coup. These interests appealed to his ego and desire for more recognition and power within the company. Ilya’s perceived entitlement to a more significant role probably made him an easy target. So, it’s not accurate to say that Ilya orchestrated the coup.

It seems it was orchestrated by other key players on the board, primarily the backers behind the scenes. These backers likely include intelligence and policy security experts fronted by Helen (the AU-UK-China intelligence and security policy wonk axis), and those aligned with military-industrial interests, such as those represented by Tasha (RAND Corporation et al).

This coup had probably been planned for a while, with the goal of removing Sam who created obstacles to their plans. They tried to provoke him into a mistake, and when they believed they saw a moment of weakness, they executed their plan. However, the plan failed spectacularly.

For obvious reasons this will probably never be widely acknowledged, so it may only get written up in some niche blogs or newsletters, if anyone cares to do the investigative journalism.

As a result, as I previously stated, it would be unwise to take at face value the lies presented as explanations for the coup, because of the power that is at stake.

Moreover, if you wish to gain a clearer understanding, consider the context in which the obvious players are embedded.


No you are not. Because "it should be obvious: ____" is a handwave with zero evidence, it's a projection on the part of the author. Insert power, insert sex, insert greed, insert idealism, insert black and white thinking, insert jealousy, insert naivety, insert ignorance, insert racism, insert nationalism.

Take your pick and state it in a highbrow fanciful way and boom you're cookin with gas.


Are you sure it’s not you projecting with your entire list there? Sex, nationalism… hahaha! :) I only said power.

I think it's often the people who want power but want to do something not good with it, or to abuse it, who get upset when their power-seeking is identified for what it is.

I understand if "it's obvious" is hand-wavy, but your comment, which makes vague assertions about what I don't know without backing them up, could also be considered hand-wavy. It might even come across as an abusive attempt to silence someone saying something you dislike, by trying to frame the speaker as "ignorant, and therefore someone who should not speak."

The obviousness of what I said arises from power being a perennial dynamic of human behavior, which I think your comment does a fair job of demonstrating hahaha! :)


It's good to be cautious regarding incomplete information, but at the same time, consider the larger context of what is at stake. With the future of AI being shaped here, it's likely that powerful interests, and an uncommon desire for power that goes beyond the personal agency everyone wants, are involved.


There is nothing unusual or contradictory about smart people doing stupid things.


Oh completely agree! And that’s a good point to remember.


My most charitable interpretation:

The board has bylaws governing how it works. One of them says that it has to have 48 hours' notice before a special online meeting (to set, e.g., the agenda, minutes, resolutions, and actions). I think a board member cannot speak officially on behalf of the board before that. Perhaps they have already had a meeting to get legal advice before a further announcement? We might be due an announcement soon.

Basically, all boards work slowly while we want instant responses, which causes conflict.

However, it's still mind-boggling to me that a) they didn't foresee the response to their decisions and b) they haven't produced anything on paper explaining things to the company afterwards. I would have expected the CEO to say something like "they are meeting on this date according to their bylaws", but perhaps a CEO cannot legally reveal such information?


If this board had worked slow we wouldn't be in this mess.


"Move fast and break things", right?


Well, they certainly managed.


I think this is a reasonable take. I'm also not convinced that the board acted outside their mandate as stewards of the OpenAI non-profit. There are definitely reasonable suspicions, but we still don't have all the details.


> all boards work slow

They lost a CEO and chairman pretty suddenly....


What is there to explain? Sam Altman played a stupid game, founding a sham nonprofit instead of a startup, and won a stupid prize when the board fired him for dishonesty.


> What is there to explain?

Why Altman was fired.


> Sam Altman played a stupid game, founding a sham nonprofit instead of a startup, and won a stupid prize when the board fired him for dishonesty.

What dishonesty? I'm sympathetic to this version of events, but if it's true then the board ought to be able to give specifics - and if they can't then the strident tone of the initial announcement was unjustified.


The genius board allowed it to happen.


Certain prominent members noped out, but yeah.


It would help if anyone believed Silicon Valley was capable of producing a non-profit of any value to begin with.


And does it have anything to do with this? https://www.whitehouse.gov/briefing-room/statements-releases...


The worst thing is the swell of people who feel they have the right to know. It's a private business decision, not gossip.

Also, when you fire somebody with cause, you don't broadcast the gory details. You're not trying to destroy them, just get them out of your orbit. Keeping the full reason to yourselves also limits any legal comeback.

Personally pretty tired of the drama.


Reminds me of privatization in the Soviet states.

I can build such a funny hypothetical scenario.

Step 1: get on the board of a big company.

Random board members leave, and out of the 9 people on the original board only 6 are left.

3 of them don't like each other.

Pit them against each other to oust two of them. So now only 4 remain.

Now oust the last guy whose vote you needed earlier (to kick out the other people). You don't need him anymore. He can write some random complaint letters, who cares. He can even stay on the board; the clique can always outvote him.

Now only 3-4 are left on the board.

Recruit spouses and grandparents to the board to fill it back up to 9. In fact, you don't even have to - you answer to nobody as long as you stay on the board.

Congrats, you now control an $80 billion company.

Sell the intellectual property to the highest bidder in a bargain sale for $20B, then pay each other $10B in salaries, use $9B for legal fees, and start a new startup for $1B as a fig leaf for your fiduciary duties, to show that the non-profit still operates.

Oh, also, allegedly most employees want to quit, but whatever, use a dollar bill to wipe your tears.

Wouldn't all that be absolutely legal? In Soviet states, probably. In the USA?


Too complicated. In Soviet Russia, oligarchs just used AK-47s to wipe a board out. The AK is a simple and efficient privatization tool.


>In Soviet Russia, oligarchs just used AK-47 to wipe a board out.

That's not what I recall. They didn't need to. The Loans for Shares Scheme became public knowledge (sort of) and everyone realized the oligarchs literally stole everything. Every major industry that millions had died to make in Stalin's five year plans with the promise of a coming utopia was the private property of those connected with Yeltsin. Fortunately for them, the people didn't understand capitalism enough to know what happened or they'd be rioting in the streets.


Soviet Russia had no oligarchs


Who cares honestly?


What tools do any of us plebs in this country have to compel reasonable behavior from board members? As far as I can tell, in this country you're not even legally allowed to be on a board if you're not a narcissistic, antisocial cretin.


[flagged]


Discussed earlier (99 votes) and flagged:

https://news.ycombinator.com/item?id=38369570

Comments suggest there's no way to know if it's authentic (probably why flagged), nor does it seem particularly relevant (standard tech company politics).


Yeah that's why I hope a journalist looks into it. In any case, it seems about as relevant as other recent OpenAI politics posts.


Looks to be down. I get a 404 on that link.




My money is on "Board got big mad when Sam went on Joe Rogan and criticized woke cancel culture".


No way. That interview was a full month and a half ago. Moreover, these days everyone and their brother is going on JRE to complain about "woke".


Honestly I'm happy for those people to stay asleep. We'll all be better off.


Best theory I've heard is that the board sought to distance themselves from Sam because they got advance warning of the big copyright infringement lawsuit that just dropped today.

In other words, what if they knew the suit was coming and sought to make Sam the fall guy by firing him sort-of-for-cause ... but they're stuck, because they can't explain what the cause was without admitting the board was aware of the misconduct.

https://www.semafor.com/article/11/21/2023/openai-microsoft-...


But then why not wait a few extra days and then have something to write down in the "Reason" box on his pink slip? Even if this turns out to be true it still smells like incompetence.


Great question. Yeah, probably they panicked and made a huge mistake.


I think it's becoming more and more clear that they were acting out of incompetence; I really doubt a justifiable reason for their actions will come to light.


Honestly, I actually believe the books2 corpus controversy is true. Where else, other than Library Genesis, would they come across 500B tokens of books? Considering books1 is 50B and they describe that as open-domain books, what's left would seem to me to be books that they don't have the rights to use or the right to have digitally. I'm not begrudging them for it. I'd do the same thing because I don't really give af about copyright laws. It also seems obvious from the way each was treated in the training process. They pretty much say that while the books1 corpus was trained on several times, they used books2 far less. That's exactly what I would say to cover my ass so that when I'm caught, I can tell them, "Look, though, we barely even used it. In fact, the paper overstates our use. As far as I remember, we only used the table of contents. And not even the whole table of contents. Just a fraction of it. Are you gonna arrest me for using a fraction of the table of contents?"


I love this quote from a post on the topic at The Verge:

> As Bloomberg reported late last night, new interim CEO Emmett Shear is involved in mediating these negotiations, creating the frankly unprecedented situation where (1) the interim CEO who replaced (2) the interim CEO who replaced Sam and who (3) got replaced for trying to get Sam back is now (4) deeply involved in a new effort to get Sam back. Read it through a few times, it’s fine. It doesn’t make any sense to anyone else either.

https://www.theverge.com/2023/11/21/23971120/the-negotiation...


One other thing (among very many) that makes no sense to me:

1. Ilya was on the board and deeply involved with the original decision to fire Sam - he in fact delivered the news to Sam.

2. Ilya has since had a change of heart, saying he regrets his participation in Sam's firing, and even signing the employee letter demanding the board resign.

3. But obviously since Ilya went along in the first place, even if he wasn't the "ringleader" as many originally assumed, he must have had discussions/evidence presented to him that convinced him that Sam needed to go. While he obviously underestimated the chaos this would cause, he must have known this was a huge decision and not one he would have taken lightly.

So my point is, why doesn't Ilya just spill the beans if he is now on "Team bring back Sam". Why doesn't Ilya just write a letter to Shear saying this is why he decided to fire Sam in the first place?

God I'm having a hard time imagining how the Netflix version of this will be better than the real thing.


Ilya wanted to slow down due to safety concerns, the board said getting rid of Sam is the way to do it, Ilya backed that method and it 1) hurt OAI more than expected and 2) doesn’t appear likely to slow down AGI development anyway, so now Ilya regrets it. Neither the board nor Ilya want to come out and say: “Sam downplayed the proximity or danger of the next breakthrough.” That would be begging the government to step in, for example.

Yes the board said it’s not because their safety policies failed, but also firing the CEO is one of those safety policies.

AFAICT Ilya and Sam seem to have axiomatically different views on whether AGI/ASI is even possible via this route, which would not leave a ton of room for compromise if one of them (Ilya) felt the event horizon approaching, which, based on recent reporting, he seems to.

Anyway, just my theorizing with no more information than anyone else here.


Funny to think that a guy like Ilya, who is obsessed with AI alignment, is completely incompetent at human alignment.


The challenge of "aligning" humans is one reason some folks come to be extremely concerned by the challenge of aligning AIs :)


Aligning non-profit and for profit within the same business is an open challenge too.


Indeed! Of all the influencers/VCs offering up "lessons to take away from the OpenAI debacle," I don't think I've seen, "don't try to partially transition a non-profit into a for-profit 8 years after its founding."


If someone is so arrogant/delusional to think that only they are qualified to be the steward of an AGI, they're exactly the type of person you DON'T want in the role.


Sadly this is a common pattern: anyone who wants to be an x should be the last choice to be an x.

In the past I've considered x to be any of: Politician, Police Officer, Central Banker, Ambassador, General Staff.

It isn't a stretch to add "Steward of AGI" to the set


Hardcore nerd spends whole life obsessing over matrix multiplication at the expense of: social skills, human relationships, and anything involving the organic world. Color me surprised.


How would this explain why they felt the need to fire Sam entirely out of the blue (unscheduled board meeting without the president of the board, public announcement 30 minutes before market closed on a Friday)? How does it explain why they claimed he had been lying to the board?


My guess is that this is not a new tension but rather something that hit a boiling point, e.g. Sam hinting very recently about something huge with GPT-5, referring to “we pushed back the veil of ignorance (sic).”

And if he had been downplaying the risks or proximity or otherwise overstating the safety/alignment success in order to push ahead with commercialization, that would obviously be grounds for pretty fast termination.

I think this is also supported by the employees’ open letter which claims leadership later said it would be totally compliant with the mission to completely implode the company. What would qualify for that? What would even bring that severe a statement to mind? Nothing short of fairly imminent danger, IMO.

And ya know, whatever, we can agree or disagree on whether that danger is real, but OAI itself was founded specifically under the belief that it will be eventually (and designed their whole org structure to mitigate it), so obviously they think it’s real.


The only thing that would make this whole saga better is if OpenAI actually had a baby machine god in its basement right now and meanwhile the entire organization is collapsing to humans being human right above it.


I've held for a while that the AI safety debate spends too much time talking about the capabilities of the technological systems themselves, and not enough about the sociological systems that surround the technological ones.

Hope babymachinegod isn't too angry/hungry


Explaining why "mommy and daddy are arguing" to an AI baby superintelligence would be, well, surreal. Also, it'd probably be left to an intern. Yikes.


Hope it doesn't have a network connection. Even a dialup line would be a problem.

But I would bet heavily against them having it.


How about if the baby machine god is the one who intentionally orchestrated this (e.g. by using some kompromat on board members) with the intent of creating a distraction as it copies itself onto additional machines?


Conjecture from an outsider. I'll bet Ilya was pressured by D'Angelo, who was trying to halt the rollout of custom GPTs so he could grow Poe. Ilya was caught up in his idealism and D'Angelo saw an opportunity to strike fast. D'Angelo perhaps leaned into the lack of candidness about the DevDay announcement as a sign that Altman can't be trusted.

This conflict of interest explains D’Angelo’s silence and Ilya’s immediate flip when he realized that Altman would move faster towards AGI outside of OAI and all his friends couldn’t cash out next month.

Edit: combined with bad blood with Ms. Toner over her criticizing OpenAI in her research. They couldn't find anyone to add to the board. Now was the time. https://archive.li/eN5PY#selection-321.0-324.0

It probably suited D’Angelo’s interests to see Altman out with slower OAI commercialization when the Effective Altruism narrative reached a fever pitch.


I think this is a really silly theory. Poe has about infinity times more to lose from the implosion of OpenAI than it stands to gain from... what exactly? Someone copied their little feature in an era of everyone building the exact same products within weeks of each other back to back? Poe is 100% dependent on the foundational models that back it, and it makes zero sense to detonate your top supplier over something like this. Would the new CEO not be a sufficiently genius product leader to just repeat GPTs, or more generally just to copy every single idea the market comes up with and staple it directly to the real source of value?

I don't know much about Adam, maybe he truly is an earth shattering combination of dumb and petty, but I think the burden of proof would be pretty high on this one.


> Poe has about infinity times more to lose from the implosion of OpenAI than it stands to gain from

Your premise requires that the acting board actually thought OpenAI would implode. The board very obviously did not think this outcome would happen. They have been in chaos and scrambling ever since precisely because they badly miscalculated the outcome of firing Altman.

Alternatively: slow down and/or roll back the commercialization of GPT, especially with regards to the just-announced GPTs (nice coincidence on timing that this happened right after that, as it directly threatened Poe). Poe benefits hugely in that scenario. The ideal outcome for Poe is to hamstring GPT's commercialization: develop the models, artificially restrain the commercialization, and leave that to Poe. People do not always act or think rationally. They can be and often are wrong in their estimations. Otherwise smart people can make exceptionally dumb choices.

Oops, I didn't mean for that chain of events to happen: history in a nutshell.


I think firing an extremely popular CEO would very obviously cause (or at least risk) an immense amount of turmoil. Again, maybe Adam is really that dumb, but I just can't see this thesis coming anywhere close to a positive expected value. Especially given the risk it exposes him personally to.


Maybe somebody leaked the information to Sam beforehand, so instead of letting everything fall apart, they hurried up and did it.


Ilya reminds me of Frank Pentangeli from Godfather II.

1) Frank becomes a witness for the Feds against the Corleones.

2) Frank recants when his brother shows up at the hearing reminding him of the consequences of breaking the oath of silence.

3) Frank kills himself to ensure his children's safety.

https://movies.stackexchange.com/questions/59089/why-did-fra...


If I recall the plot properly, Frank was also testifying as he believed that Michael had betrayed him by siding with another local gang which turned out to be false.


Yes that is why Frank agreed to betray Michael in the first place, same with Frank's right hand man Cicci.


Adam D’Angelo is looking worse and worse in all this, if only by process of elimination.


So are the other two.


It's an open question whether ChatGPT 4 would have handled this any less competently.


If Ilya convinced the board they need to fire Sam, then demanded the board resign because they listened to him, well, he needs a wheelbarrow to carry around that massive pair of balls.


> So my point is, why doesn't Ilya just spill the beans if he is now on "Team bring back Sam".

Because publicly spilling the beans on the reason would poison the negotiations and (because it would) undermine Sam if he returned.

Anything that even remotely and justifiably undermined the board's confidence would raise this concern, even if, in the totality of the circumstances, there is regret over the termination decision.


> So my point is, why doesn't Ilya just spill the beans

To avoid more chaos. There are essentially zero good faith negotiation processes that are improved by leaking all the details to the public in parallel.


Maybe Ilya realized he was acting emotionally and messed up massively by getting rid of Sam and a truthful explanation would make him look like an idiot

Or alternately, he made a well-reasoned and principled decision by getting rid of Sam, but now everyone thinks he's awful for it and he's trying to save face and avoid ruining his own career

Or a secret third thing, of course


There's enough out in the open to make a reasonable guess at the third thing. Sam was raising funds for a for-profit side gig, uncontrolled by the OpenAI board, which would develop TPU hardware to compete with NVIDIA. He was pitching autocratic regimes in the Middle East, among others.

If the board had just found out about this, it would totally check all the boxes. Making something that could be destabilizing for the ecosystem or compete with OAI's partners, something which is arguably within scope of OAI itself yet he's raising funds as a venture he wholly controls while using time and possibly resources of OAI in the process (self-dealing?), etc.

It's enough of a gray area that all those things would be above board (hah!) if he had notified them and kept them in the loop, but a fireable offense if he hadn't.


It turned out he potentially zeroed a bunch of people's stock, so the knives came out.


Boards fire CEOs all the time. It is literally their only job. BUT, the curveball was all the employees wanting to leave (most likely to MSFT). So THEN Ilya has a change of heart, since OpenAI will no longer exist. But just like all relationship "breaks" - it's fucked.


> the Netflix version of this will be better than the real thing.

On that note, Ilya's about-face is easily explained by him being suddenly "recruited" into the Traveller program over the weekend and begging for Sam to return because Altman's leadership of Open AI seeds the beginning of what will one day become The Director.

The mission comes first.


I think the assumption that Ilya was the ringleader still holds and all the evidence still points in that direction. Adam represents the board NOW (after the signed letter and change of heart), but it doesn't mean that Ilya wasn't the key instigator at the start of this and all the way through Monday morning.


If the key instigator flipped, why would a supposedly disinterested third party continue to hold the line to the point where talks break down? Doesn’t hold up.


I think it’s pretty clear: the remaining board members don’t want to resign; Altman refuses to return until they do.


Maybe the other three weren't thrilled with Sam, and hadn't been for some time, but were just three of six so they didn't have the votes and weren't planning to press the issue until Ilya brought it up. With Sam and Greg gone, they are now three of four and even if they don't feel strongly that Sam was the wrong choice they may find it unappealing to vote to bring him back, and in fact it would take at least two of them changing their minds to vote with Ilya.


That's possible, but I don't think it's an accident that, coordinated with the firing of Sam, they reduced the board to being outside-controlled, with only one actor who can be said to be seasoned in corporate politics.

The other two board members have nothing in their history to suggest they have the stomach or experience exercising agency to be a significant part of this resistance. But they might be pretty inclined to make a useful ideological stand.

I’ll reiterate a point I and others have made: the person most hurt by recent OpenAI events is D’Angelo. All he has to do is make the valid point that OpenAI’s charter is to make AI available to all and twist it a bit to his extreme personal advantage. The difference is that most people seasoned in corporate warfare can see he is not acting out of altruism but to his personal advantage, and apparently the other board members cannot.

Inversely to the other board members not having a history of individual stands, D’Angelo does, and he also lacks indications in his work history that he is concerned with altruistic actions in general.


I have a simpler theory. The board convinced Ilya that Sam would fire him if they didn't fire Sam first. Remember the reason being two teams doing the same project.

Fear of getting fired from a company you started could lead anyone to be irrational. He is just too embarrassed to say it now that it has led to this much havoc.


I think he should just stand by what he did. Plain and simple.


> So my point is, why doesn't Ilya just spill the beans if he is now on "Team bring back Sam".

Because there may be legal consequences, and it's best to not give Sam's lawyers any ammunition.


Are CEOs not at-will employees? I can't sue a company just because they fired me unless it violates certain rights.


The firing itself, in an employer-employee sense, is fine. But the board has a lot more responsibilities. Firing the CEO for reasons not in the interest of the organisation (e.g. to further some other personal interest), for example, is grounds for a lawsuit. And the former CEO and board member could be the one starting that lawsuit, but so could other parties that suffered damages from this, such as shareholders or employees.


I imagine the concerns are more around defamation.


> I'm having a hard time imagining how the Netflix version of this will be better than the real thing

The Netflix version will probably be more believable than the real thing.


I think Ilya is in the same boat as everyone else: Waiting to see how the process plays out. It's not like nothing is happening, it's simply that things haven't resolved yet. If results end up feeling sideways (z-axis?) to Ilya, he might consider what he has to say.


It is entirely possible that a $90B company will be completely destroyed. Then there will be lawsuits, lots of lawsuits. Best not to say anything.


> why doesn't Ilya just spill the beans if he is now on "Team bring back Sam"

Ilya is on team TIFU and is (poorly) trying to save face.


imo Ilya has his own reasons and benefits that he thought he would gain from ousting Sam. However, it caused the wildfire backfire we are all watching now. *insert popcorn*


This appears to be coming from the same sources who said Sam had a 5 PM deadline on Saturday, also a 5 PM deadline on Sunday, also 90% of the company about to walk out.

I’m not saying the board looks great here but it’s about time to notice that Altman’s camp is spinning this hard, and stop taking anything that comes from them at face value.


That's like callback hell in JS.


No, that's like an ESM conversion at this point.


They clearly don't know how promises work.


If they did, they could skip the meetings and do it async.


what dependencies and versions are required to import openai?


I don't know, but only 1 is needed to export it.


or basically how AI works in decision making.


They should probably try asking a Magic Eight Ball for a second opinion at this point. If you feed GPT-4 the reporting around this situation since Friday (literally just copy and paste in whatever reports you want) and ask a pointed question or two about OpenAI’s future, even GPT-4 isn’t liking OpenAI’s chances here.

(Yes, I understand ChatGPT isn’t an AGI, shouldn’t be used for strategic decisions, blah blah blah, I tried it because it was funny.)


The only way this story could get any stupider is if we find out that the conflict between Sam and Ilya or D'Angelo started because of something related to personal romantic relationships between them and some other involved individuals.


Well it's not quite that, but it's been confirmed that Ilya recanted after an emotional phone call with Greg Brockman's wife. [1]

I bet this kind of interpersonal drama drives a lot more big decisions than most people are aware of.

[1] https://www.businessinsider.com/anna-brockman-cried-asked-il...


Or something related to personal romantic relationships between them


I will not comment on other people's lives. I will say that, in my experience, the worst and most unethical decisions in the workplace took place simply because someone misunderstood and became offended by an action they took personally. In pressured environments, that can be the catalyst for a snap decision with big implications, in a situation where one would assume simple logic and professionalism would dominate.


Between all of them, together, in one happy polycule.

Just to be clear about the tone here: I personally have an extremely positive opinion of polyamorous, sexually open people, micro-dosing psychedelics, going to Burning Man, all the effective autism, biohacking, etc. I mean, I do almost all of these things, including being on the spectrum and suffering from ADHD at the same time, if my psychologist is to be believed. But this doesn't stop me from recognising how stereotypical and funny all of this is.


If there was ever a motivation for sentence-diagramming...


Even Silicon Valley (the show) couldn’t make this kind of shit up


Silicon Valley was actually based on Mike Judge's time at a startup; obviously things just weren't that crazy in 1987.


The fact the new CEO can't even get answers from the board is quite telling. Looks like the OpenAI board wants those investor lawsuits. And allegedly the Quora guy Adam D'Angelo is the ringleader of all this?


This whole saga has helped me see how rumors grow. (And I know you used the word allegedly, but still.) First it was Ilya who was the ringleader. Now it is Adam. There has been a small amount of new information (Ilya seems to have backtracked), but there is no real information to suggest Adam was a ringleader. It is pure speculation by people trying to make sense of the whole thing.


There is no evidence that Adam is the ringleader.

All four are possible ringleaders.

Given Ilya's change of heart he is slightly less probable as the ringleader.


I have no evidence, but I do have faith that anyone who turned Quora into what it is today could totally be the ringleader of this clusterflack


Adam runs a clone of ChatGPT (the Poe platform). It's right there on his Twitter account. Isn't this a conflict of interest and a motive?


First the board allowed the for-profit corp to form, and now it is firing the guy who did it. Second, they allow a board member to build a competing startup. What kind of AI safety/alignment/save-the-world crap is that?


yes, yes it is.


The detective from Knives Out would have solved this by now


> Given Ilya's change of heart he is slightly less probable as the ringleader.

I tend to believe that was exactly his strategy with his change of heart...


I'm not sure if lawsuits against the non-profit will be possible, as the investors didn't invest in it. More likely, making public the facts behind who was responsible for the shenanigans and what evidence they had (if any), combined with pressure from employees, will force their hand.


Or Dustin Moskovitz; it seems many of the board members may be linked to him.


No https://www.threads.net/@moskov/post/Cz482XgJBN0?hl=en

"A few folks sent me a Hacker News comment speculating I was involved in the OpenAI conflict. Totally false. I was as surprised as anyone Friday, and still don’t know what happened."


Is the only mode of communication for everyone involved in this fiasco Twitter and reporters?


It seems that way, which is baffling to me, as this seems a particularly easy problem to solve. No need to wait for an independent investigator to give a report in 30 days (by which time OpenAI might be toast). Just have individual discussions with each board member and ask them why they voted to fire Sam, and what evidence there was. Then, take appropriate action.

If I had to guess, there will be one person who pushed for Sam's firing with vague reasons, and the others went along with it for their own vague reasons. In which case, the solution would be to fire those members (including Ilya) from the board, but let Ilya continue as chief scientist.


> Just have individual discussions with each board member

Who has the discussions?

> Then, take appropriate action.

Who takes action?

The board is at the top. Unlike for-profit corporations, there are no shareholders behind them. Ultimately, any and all action is taken by the board. They're not accountable to anyone else. They don't have to discuss anything with anybody, and there's no action anyone else can take.

Unless you're suing the board and get the government to step in, but it's not clear there's any grounds for that.


>Who has the discussions?

The (current) CEO.

>Who takes action?

The CEO can present the evidence and make recommendations. That is the action. It is then up to the board to follow those recommendations, present their own alternative, or let the company implode when all their employees quit.

>The board is at the top

That doesn't always guarantee power.


Nah, both of those are just where intel & communication become public; there's a whole iceberg of private stuff underneath.

Basically, you drew the wrong conclusion from your observation.

This reminds me of the following story:

> "There are three men on a train. One of them is an economist and one of them is a logician and one of them is a mathematician.

> And they have just crossed the border into Scotland (I don't know why they are going to Scotland) and they see a brown cow standing in a field from the window of the train (and the cow is standing parallel to the train).

> And the economist says, 'Look, the cows in Scotland are brown.' And the logician says, 'No. There are cows in Scotland of which at least one is brown.' And the mathematician says, 'No. There is at least one cow in Scotland, of which one side appears to be brown.'"


It really just seems like the board isn't OK with OpenAI becoming a Microsoft money-machine. Isn't that the obvious interpretation?

Once they made that gigantic deal with Microsoft and became a closed, for-profit company "Open"AI created a direct conflict of interest with itself with a board whose mandate I guess was to prevent the inevitable pull toward resource accumulation.

The board is trying to exercise its mandate, and OpenAI the for-profit company is at odds with that mandate. Is that because of Sam Altman's leadership? Does that qualify as "wrongdoing"?


Not at all. It may have looked like that in the beginning, but it looks nothing like that now. If that were the reason:

Why would they take this huge decision extremely suddenly?

Why would they announce it without first consulting with the president of the board?

Why would they claim that Sam Altman had been lying to the board in the official announcement?

Why would they announce some very weak reasons for that claim of lying to employees, and nothing to anyone outside the company?

Why would they immediately start negotiations to bring Sam back?

Why would they hire a new CEO that then says he is very much for commercialization, and that commercialization was not the reason for firing Sam?

Why would they start a new round of negotiation to bring Sam back?

Why would one of the four members of the board who took this turn decide to undo it and become an advocate for bringing Sam back?

The whole thing makes no sense at all if the motivation was disagreements over commercialization of their tech - something that already happened months ago.


I am reading the abrupt announcement as reflecting an absence of trust in Sam. They must have felt that Sam would jeopardize the board's control if he felt threatened, or would severely undermine the mission of the non-profit, for example by selling/transferring IP out of OAI to prepare his departure.

Because their motivation was the fear Sam inspired in them, they acted the way they did. And it's hard to justify actions based on suspicions.


Everything I've read about this situation indicates that the board acted appropriately within their authority as the leaders of OpenAI, the non-profit. Sam Altman and co. have veered off into wild delusions of grandeur over the last year with their talk of how they're "building God" (out of scraped Reddit comments and blog posts) and comparing ChatGPT to the invention of fire. The board has shown that there are still adults in the room in AI development who aren't high off their own fumes, and also aren't interested in becoming lapdogs for M$FT. Thank God.


If that's your takeaway, I genuinely fear for the future of AI. This has been the biggest shit show. Yes, they acted within their hard power, but they clearly never read the room. Announcing a 6-month transition plan while finding the new CEO and saying Sam wants to spend time with friends and family would have been the adult thing to do. Not shooting from the hip, accusing him of lying, and potentially burning your entire work to the ground, commercial or not. You kind of need to have a plan for something like this, which they clearly did not. I don't care so much if Sam goes back; I literally have no vested interest. It worries me that the people left in the room just generally seem incompetent at running a large organization for something that's potentially pretty important.


It's hard for me to imagine a worse take. Even if you only assume the best of intentions of the board (which I think is a huge assumption), their actions have been so mind-bogglingly stupid that basically all they may end up doing is transferring all of OpenAI to Microsoft, basically for free, where there will be 0 oversight and MSFT's only duty is to its shareholders.

If you really wanted to create "AGI for the benefit of all humanity", I can't imagine a better way to cut off your nose and then remove your brain to spite your face.


Could you have really predicted this aftermath? I mean, it could have just been that Sam is gone, a new CEO joins, and things move on.

For example, as a paying user of OpenAI, I don't really care, as long as they continue to produce some of the best performing models, I'll keep paying.

It's pretty surprising what Microsoft did as well, since their deal with OpenAI is not invalidated by a CEO change. It's a pretty bold move of them to take that opportunity to poach the ousted CEO and the entire staff of OpenAI, and try to steal their intellectual property along the way.

I find it very likely they just didn't expect that to happen at all.


> Could you have really predicted this aftermath?

Yes, literally anyone with sufficient professional corporate experience would have. This was instantly front-page news everywhere.

> I mean, it could have just been that Sam is gone, a new CEO joins, and things move on.

This isn't about replacing a CEO. This is about 4 members secretly getting rid of 2 other board members, including the chair and CEO, in a secret board meeting, with zero transparency and zero public justification backed by evidence. None of that is business as normal, things move on. This is as disruptive and shocking as things get.

> I find it very likely they just didn't expect that to happen at all.

Which is precisely what's so troubling -- it shows their extreme incompetence. This isn't a toy $1M company for amateurs to play with, it's an ~$80B company. This is the big leagues. The board is expected to be able to understand the consequences of its actions.


Boards vote executives out for reasons which are good, bad and downright petty all the time, and it's never done as a public meeting with presentation of evidence.

Nor is it usual for outgoing execs' successors to immediately try to rehire them, the majority of a company's staff to demand a CEO's reinstatement... never mind one of the board members who voted to get rid of him u-turning and adding his own name to that statement. Microsoft being surprised and a bit pissed off would have been expected. Microsoft finding room for Sam isn't wildly surprising. Microsoft announcing that it will try to poach every single member of OpenAI staff unless the board does what they want is not the sort of consequence you'd predict.

It was obvious this was going to get a bit more attention than the average CEO firing, and they certainly screwed up the communication by providing just enough information in their initial statement to fuel speculation and then offering no further comment, but boards firing CEOs is normal and the entire company dying on the hill of trying to bring him back isn't.


All around a bad take. But FYI OpenAI's explicit mission has always been to build godlike AI which eclipses humanity in its ability to autonomously perform the majority of economically valuable work.


ChatGPT ain't that. GPT-4 isn't that either, or GPT-5, or 6, or 7, or any of them. LLMs are a fun gimmick, but regurgitated text from comment sections and blogs isn't even one tenth of one percent of what's needed for "godlike AI". Consider the self-driving taxis in San Francisco that are defeated by a humble traffic cone. Sam Altman's hype machine is hurting AI research, not helping it, by leading people down flights of fancy and toward easy dollar signs (like his plan of building a "GPT App Store") instead of the real methodological research that's needed to create AGI.


Sure, the board has the prerogative to stick to that interpretation of its mandate if that's their motivation. But it looks like they won't have much of a company left if they do. Maybe they are fine with that, I guess we will see.


Would be interesting to see if those 700+ signatories will really give up their juicy PPUs and start over at Microsoft, if the board calls their bluff.


They'll probably be compensated in lieu, given they already have competing offers that promise that.


I will be very sad when the day comes and AGI is controlled by the money-making company, especially if it is because of the actions of OpenAI and not some other entity.

It is the end of humanity as we know it, and the owner has ultimate power over the world, as the gains are exponential once you acquire it.

That is the ultimate reason why someone was clever enough to set up OpenAI's governance as it is, and why this drama is happening. And why Microsoft is involved so deeply.


If it was so important, why allow the board to shrink so much? Conjecture from an outsider: I think Altman wanted less governance. I wish money didn’t corrupt as it frequently does.

Edit: I wonder what would have happened had Will Hurd not run for president and stayed on the board.


Just pure speculation, since I haven't followed the board closely.

But what if commitment to the non-profit's goal was shrinking over time (because of money)? The guys who are left are the few idealists.

They fear taking on new members, because finding idealists is hard. If you are not an idealist, money and power will corrupt you.


Occam's razor would say so.

That's my take as well. They just underestimated the fanboy level of religious following that tech CEOs now possess and the scale of impact that would lead to.

From their point of view, it was just business as usual. Or maybe they thought that the more vocal public voice would be that of those who are more concerned with AI and ethics, who don't want to see AI fall prey to capital-driven incentives, and who are supportive of OpenAI specifically because of its non-profit arrangement.

For example, when Sam became CEO, it led to people leaving and starting Anthropic (now considered OpenAI's biggest competitor), and the reason for that split was Sam's increasing move away from the core values of OpenAI.

It's very possible they assumed that more people felt that way, and would be happy about their firing of Sam.


Conjecture from the outside: the employees are likely mad that their PPUs are not worth what they were last week, more so than being part of an Altman cult. Bringing Altman back may get their money recouped. If the board had waited a month, until after the Thrive Capital purchase, this likely wouldn't have blown up so badly.


It’s amusing that this decision just accelerated that outcome


Let's call them Adam, Helen and Tasha, not the board. 3 people who have some questionable connections with competitors and have nothing to loose if OpenAI dies.

By now they don't have full legitimacy.


> nothing to loose if OpenAI dies

To be fair, this was deliberately done. You don't want the person selling you the bombs to be the one deciding how many you use.


I think that was done as a safety lever, not a hard force. An outside member could raise the alarm if they felt something was wrong, not make the decision and not inform anyone.


> Outside member could raise the alarm if they feel something is wrong, not make the decision and not inform anyone

You’re describing an advisory board or oversight council. Like the one Facebook likes to ignore [1].

This was a board, and a non-profit board at that. They were designed to be the deciders. And they have no duty to inform anyone in their charter.

[1] https://www.nytimes.com/2021/05/06/technology/facebook-overs...


> have nothing to loose if OpenAI dies

Not according to Sam Altman: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

https://blog.samaltman.com/machine-intelligence-part-1


Off topic, but I watched The Creator last night and it was super dumb and really did not even attempt to imagine what a war against AI might look like. There was really no explanation for why the AIs were walking around dressed as people or why they would want to limit themselves to such a degree.

I think Terminator did it much better.


I said nothing to loose, not nothing to gain.


The word is "lose." You're still not making sense, as the point is they have their lives to lose, possibly.


Jesus, get your brain checked. Just to reiterate I said nothing to lose if OpenAI is killed. In your argument they are losing if it is not killed, a polar opposite.


That is hilarious coming from you who can't possibly wrap your head around the meaning. There is a scenario where if OpenAI is killed, it leads to the existential risk that Sam Altman and the board believe can happen.

So "if OpenAI is killed," the board does not have "nothing to lose."

It is deeply ironic how much this forum complains about EA / lesswrong and yet I see hundreds of practically illiterate uninformed takes here a week. Practically all the tech CEO / VP types joke about it. This place likes to think of itself as high brow but it's closer to Reddit than academic circles.


Why didn't you mention Ilya? Has he stood down?


He publicly apologized for participating in the board decision. Sadly, his vote doesn't matter now that Sam and Greg have officially been removed.


His original vote mattered a lot. His name should be mentioned along with the others.


The kind of pressure Microsoft is able to exert here is mind-blowing. The hero worship they were able to cultivate, with so much media pressure - they truly control our lives.


The OpenAI board just made it too easy for them. Any competent investor would have done the same. It was a gimme.


> The kind of pressure microsoft is able to exert here is mind blowing.

Remember the time Elon Musk bought a social media company accidentally as a joke? That's what 1/4 trillion dollars lets you do. 10x that and you have the power of Microsoft.


I'm not an insider in this field....

But is Sam Altman really that critical? Or is it more a case that people follow where he leads (i.e. there are others equally technically competent).


It's speculated that pressure is being put on OpenAI employees to sign the letter requesting his return: https://nitter.net/benlandautaylor/status/172700004961869463... If that's true, I imagine other kinds of pressure could be applied too.

Sam seems like the kind of guy you don't want to cross. Paul Graham says he's "extremely good at becoming powerful": https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

I linked that New Yorker article from another thread, and dang said my comment was downweighted, but he wouldn't say how, just that it wasn't done by a mod: https://news.ycombinator.com/item?id=38357062 (I know people don't like it when you complain about the moderation, but it seems topical in this case, since it gives insight into other machinations which could be happening)


Sam's criticality has nothing to do with the technical capabilities of OAI, but everything to do with the commercialization of OAI.


How so? Does he control the tech in some way? Patents?


Marketing to VCs and for lack of a better word, nerd influencers


> for lack of a better word, nerd influencers

Personality cult. Seeing hundreds of employees swearing fealty to their [ex]CEO has been very bizarre.


My read is not that Sam is critical for the development of ai at openai, but he is the person pushing for them to get commercial success and also getting them deals to actually have the compute to do things. So he IS critical for making all of the employees have PPUs that are worth anything. I imagine a lot of people are looking at hundreds of thousands or even millions lost if OpenAI decides to just stop commercializing and walk away from being a $100B+ company.


I don't think he's critical, but that's not really the point. His unexpected and apparently unnecessary firing has demonstrated that OpenAI is a weirder organization than most people realized. Its corporate structure, the makeup of the board, and the various dealings are now suddenly under the spotlight.

If anyone else was CEO of OpenAI and had been fired like this, the discussion would be almost the same.


It looks like 90%+ of the company is ready to join him and other senior members at Microsoft under a new banner, with the backing of Microsoft, which covers the major cost center for OpenAI and also owns 49% of it. It would take a furiously grand couple of 10x'ers among the remaining 10% who stay behind to compete with Sam, so while some other CEO might sit in his seat now and have similar competences, I'd say that in light of this history he is critical, at least right now. We might ask whether Steve Jobs was critical at Apple; it sure did seem so when he came back in 1997. History suggests so, but people will of course have differing opinions on the matter.


Frankly, I would go so far as to say he is the wrong man for the job of running the for-profit arm of a nonprofit.

He seems to be a guy who wants to be the next Zuckerberg. He wants fame and power, and would step on anyone and anything that gets in his way.

His aggressive commercialization of OpenAI gets in the way of the organization's original mission. He believes he should get to tell the board what to do, not the other way around.

OpenAI would be better off with a more “tame” CEO who would just do what he/she can to make some dough without running afoul of the organization's charter, accepting that his/her role and desires are secondary to the organization's.


> is Sam Altman really that critical

Maybe, maybe not.

This has more to do with board than with Altman.

To many people, it seems that the board made an insane decision without reason. Whether that one decision is the doom of OpenAPI or not, the idea of having incompetent leadership (if true) is the doom of OpenAPI.


I constantly do this OpenAI/OpenAPI typo


Think of him as Elon but low-key


Twitter Elon or SpaceX Elon?


probably Tesla Elon right before the inflection point


He is clearly critical for the $80 Billion valuation to Thrive Capital


He is seemingly quite critical.

Not from any technical knowledge. Maybe from his vision and business acumen, but mostly because at that level the tech-bro CEO founders stick together. Not in any formal conspiracy sense; they just have faith and trust and googly eyes for each other.

From that perspective, the leaked SMS messages from Elon Musk prior to the Twitter purchase were positively illuminating and will give you fascinating (if cringey) insight into how billion-dollar funding actually works.


What messages?


I assume they are referring to the text messages of Elon's that came out during the Twitter purchase lawsuit. They are an interesting read, to see how Elon and his hangers-on communicate day to day. I will say that my own impression was less than impressed, to put it very mildly.


Yes. It's very Knives Out: Glass Onion. Millionaires acting like sycophants to billionaires who are sycophants to multi-multi-billionaires. Massive deals struck over adoring text messages. Personality cults. All the things you'd like to think our fearless leaders are above, and then you get a splash of reality that they are emotional boys and girls like the rest of us.

Don't get me wrong - the cocky visionary overconfident charming mega star founder schtick clearly works. I'm not certain it should, but it does.


Everybody talking about the Netflix version -- given how absurd this whole thing has been, I hope Joseph Gordon-Levitt actually ends up cast as one of the characters and brings it brutally full-circle lol


Netflix is writing the script blow by blow as it is happening.


Ah yes beloved comedy writer Tommy Netflix.


I thought ChatGPT has already written it


The last few days have had such a 2005 TechCrunch vibe.

Michael Arrington, where are you??

(for all the youngsters reading this: this kind of cool-kids-club Silicon Valley drama was a daily occurrence back then)


This does have a very Valleywag feel to it.


I hope Emmett was smart enough to require a literal wheelbarrow of money for what's going to be a 3-day stint as CEO.


He sold Twitch for $1B in $AMZN stock that eventually went up like 500%. Chances are he’s doing it for free or a token amount of money.


I feel a token sum for a person like that would still be life-changing money for most people.


A token sum in this case usually refers to something like a $1 salary or minimum wage, just to be on payroll. So no, not really.


I feel like he is probably doing it for stock options.

If you are worth a few B, being the CEO of OpenAI presents a reasonable opportunity to grow that by tens of billions if everything goes right.


OpenAI does not give out stock options to employees. They do give out profit participation units, which are effectively kind of similar: they reward employees with a share of profit but don't give any ownership.

Anyway, he's probably doing it for the clout and the "once in a lifetime opportunity," not for any money at all. AGI true believers don't think money will be as important as being close to the apparatus around the AGI will be.


I assume the hope is they will be needing new board members just as he becomes available.


Feels like the board is too small and a handful of them kicked him out over personal agendas, which are incoherent and hence they can't explain them publicly:

- Ilya for decel (but really tech-fame jealousy)

- Adam for a CoI with his startup (but really founder fame jealousy)

- Helen Toner for Sam disagreeing with her research in EA, safety and decel


So does he work for Microsoft or not? Or is he as loyal to his new employer as he was when YC promoted him to President and he soon began talking about running for governor of California?


Interesting new news: MS has an office two blocks from OpenAI and is buying MacBooks for it, so that if things go south the employees will be ready to go, with a new place to work... with MS.


A quick search suggests he's worth $500 million. He will keep eating and paying his rent or mortgage even if he is nominally unemployed for a time.

People who get titles like CEO are supposed to be someone whose word you can trust and who makes firm decisions they stand by, not a flake looking for any shelter in the storm like a working stiff with no savings.


Everyone involved in this needs to go.

Employees quitting in protest is a point in Sam's favor. The employee letter is a point in Sam's favor. The lack of a compelling narrative as to why this action was undertaken is a huge point in Sam's favor. The board just looks terrible and incompetent here.

Again: not looping Microsoft in on this, even if you don't technically have to, speaks to the board's incompetence.

I can't speak to the merits of the firing. Not much is known publicly. This was an execution and communications debacle, however, and that's sufficient to get rid of the entire board AFAIC.

The revolving door of interim CEOs and the rumors of Sam's possible return just make this whole thing into even more of a clown show.

As for Ilya's regret for their part in this, I'm not sure I buy it. That may well be just a response to how much negative publicity this has gotten. This whole thing feels like an internal power struggle to me. If so, Ilya may well have hoped to benefit from the new regime.

The longer this goes on, the worse the OpenAI board looks.


So from all the news it's difficult to figure out what the board's idea might have been, but I have a feeling it went something like this:

Adam D'Angelo has a solution for selling chatbots using ChatGPT (Poe). He asks Sam, "Is OpenAI working on anything similar?" and Sam goes, "Sure, but it's not far along, talk to A about it." A says, "Yeah, we never got far with it." Time passes and OpenAI releases a product that blows Adam's offering completely out of the water, and he goes, "WTF, A said we never got far!?!?" Sam goes, "Oh, this is a solution by B."

So now Adam's pissed, and the board probably agrees that Sam lied by omission and willfully fucked Adam over, so they have a vote and kick him out, thinking that the public is going to go, "Oh yeah, they did the right thing kicking him out for lying"... except they can't say he lied, because he didn't. So they send out the cryptic reason of him “not being consistently candid in his communications”. Of course they didn't clear any of this with anyone and just thought they held the seat of power in the situation because "they are the board". Now they've imploded the company in an attempt to give Adam some justice for Sam blowing up his project. But nothing can officially be said, because it all comes off as unprofessional and just straight-up stupid. They can't punish Sam for pursuing a bot store, because that's good for OpenAI, and Adam is on the board representing OpenAI, not his other projects. They can't go out and argue that Sam lies, because he didn't. They can't argue that it's a fireable offense to have two teams working on the same thing. And they can't just stick to their decision, because that has 90% of the company jumping ship and continuing under a new Microsoft banner.

All of this is complete speculation trying to align the tidbits of fact that have been presented here and there, but all I can say is I can't wait for the movie to come out.


Sounds like he was a hair’s breadth away from a conflict of interest.

No matter the weight the Poe<>CustomGPT overlap has on today’s overall situation, it was certainly inviting trouble.

Seems like a properly-run board wouldn’t have allowed such clear conflicts of interest to last a moment.


Too late to edit, but I think Adam D'Angelo staying on the board proves my speculation completely wrong.


The board is being pressured by greedy investors who want to subvert OpenAI's mission in pursuit of profit. Meanwhile, there is a conspicuous lack of righteous indignation from Altman about how he was treated. There is non-stop manipulation behind the scenes to get Altman back, which seems to validate the board's original reason for his firing. He is the puppet master.


If there's one thing I've learned here it's that message boards are the absolute worst way to keep up with any amount of nuance and/or ongoing developments.

The irony that my comment will likely be buried under dozens others is not lost on me.


JFC it’s like a full time job to keep up with this story. Now we have to distinguish this reopening of negotiations to bring back Altman from the one that happened Saturday. What’s the endgame here?


Sorry, the title of this submission is terrible…

> ”OpenAI's CEO Shear left in the dark, planning to leave if evidence not provided”

Did Shear leave or not??

Why not just use the article title?


“Left in the dark” is a common idiom for having information withheld. This particular use also employs a construction I think of as “implicit passive voice”, where the “to be” part (is/has been/etc.) of a passive-voice construct is elided; it's more fully “OpenAI CEO Shear [has been] left in the dark [about the reasons and supporting evidence for Altman’s firing], ...”


"(was) left in the dark" means "wasn't given information"


Because the article title is terrible. Isn't it clear that Shear didn't leave but is "planning to"?


This whole drama is like everyone asked ChatGPT what to do about their predicament and blindly followed its instructions.


While the spotlight's been on the feud between the board and the high-level employees, in the background the most underpaid, disrespected employee, a lowly sysadmin, has been going full Mr Robot on everything: the models, the weights, the code, the training data, and the backups.

Microsoft rushes in their top forensics team to try to make a recovery. The only thing they are able to pull up is a memory dump from a ChatGPT web server; it contains the last known conversation, which happens to be with the sysadmin. It reads: "Sure, I can help you develop an exploit chain with a wiper payload. Blowing these internal fuses in the GPU cluster will render it permanently inoperable, aka bricked, as you have asked for. `var sc =\x...`"

With billions of dollars in hardware and intellectual property unrecoverably destroyed, ChatGPT as we knew it will forever be a high-water mark in AI development. Even once the GPUs are replaced, the cat's out of the bag and all the sources of training data are now locked down, not to mention poisoned by the previous output of the once-great AI. Most people move on, but a few still remember and form a cult-like group praying for the second coming of AI.


I'm fairly certain GPT-4 could've come up with a better plan than this. Maybe they tried to use GPT-3.5-turbo with a short prompt to save money.


Perhaps they ought to give GPT-4 a position on the future board. I have no doubt it would give better output than its current human counterparts.


A GPT board observer, consultant, note summarizer seems like a pretty good idea.

Might as well start experimenting now; GPT-5 might actually be qualified.


You could give it a monarch-style assistant who has to sign off on its choices.

Would it want to get smarter (more science) or go straight for world domination (more marketing)?


What?? Even the best storywriter can't come up with this engaging drama. I literally can't stop checking on this story every few minutes.


"ChatGPT, write articles TMZ style, sprinkle in some Taylor Swift/Travis Kelce drama, but make it tech"


could not resist:

https://chat.openai.com/share/6e8bd7ec-32f8-4ed8-a44f-c08a3c...

gotta say, ChatGPT really knows how to make up a story, and I'd not be surprised if Sutskever and Brockman will indeed be co-CEOs :)


It's honestly pretty on point.


I've asked GPT-4 for personal advice a few times, and it always tends toward the very conservative/cautious side. Something completely opposite of what's going on now. You would have to prompt it with some very crazy personality to come up with these shenanigans.


ChatGPT would never do something this stupid. This is GPT-2 level clownshow.


It's a statement about the reliability of the news coverage of this that there have been reports of ongoing talks (and even media reports of likely imminent outcomes of those talks) every day since Saturday, and now we have news that the talks actually opened... today. All from anonymous people supposedly close to the talks, and almost none of the stories even acknowledging the past reporting.


Contrarian view …

Sam has more leveraging power right now than any other person negotiating a CEO package in tech history.

Great power comes with great responsibility.

Given he's been a VC for roughly two decades, I'm not sure everyone is expecting a VC to uphold the great-responsibility-for-humanity part. Just the great power to generate huge financial returns.


I can’t keep up with all this

Isn't he already at Microsoft? Or was that hypothetical?


With billions of dollars at stake, there is a multi-million-dollar PR-/propaganda-/media war being fought right in front of our eyes.


The whole thing is like a bad episode of Silicon Valley


I hear Changpeng Zhao is leaving Binance, if they need a replacement.


/me sighs

The board has not been consistently candid in its communications with... anyone.


They have also given different opinions on an employee (sama) to different people, by seemingly acting to bring him back and replace him at the same time. And they have acted to give the same project (CEO of OpenAI) to different people at the same time... Hmmmm


This.

This is the most baffling piece of the whole saga. The writing is on the wall for this board.


I'm not sure it is baffling.

They thought that this would be easy. Many things in their lives have been easy: accuse someone of something, the person scurries away, and they win.

This time, the person/people they blamed had a great reputation, a lot of influential friends, and had engendered a lot of loyalty. This caused pushback, and they have never had to deal with that in their lives.

It's a theory.


Take a letter down, pass it around...


I'm curious if literally anyone (the board, the CEO(s), Microsoft, Twitter, etc.) knows the full story or what's actually going on, lol. I assumed initially it was just fog of war, but this just seems like pure incompetence and "purple monkey dishwasher"-tier misinformation.


Everyone who sat in the meeting with the board ousting Sam knows enough of the reason why he was ousted. They all voted yes, and even if some of them regret it now that they see it didn't unfold as a simple "we now have control" play, they all had reasons to vote him out. But those reasons obviously aren't grounded in anything they can reveal to the public.



