Before Altman’s ouster, OpenAI’s board was divided and feuding (nytimes.com)
304 points by vthommeret on Nov 21, 2023 | 599 comments







None of the comments thus far seem to clearly explain why this matters. Let me summarize the implications:

Sam Altman expelling Toner on the pretext of an inoffensive page (https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...) in a paper no one read* would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority - the board being the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well - which is why they had been unable to agree on replacement board members all this time.

So when he 'reprimanded' her for her 'dangerous' misconduct and started talking seriously about how 'inappropriate' it was for a 'board member' to write anything which was not cheerleading, and started leading discussions about "whether Ms Toner should be removed"...

* I actually read CSET papers, and I still hadn't bothered to read this one, nor would I have found anything remarkable about that page, which Altman says was so bad that she needed to be expelled immediately from the board.


Okay, let's stipulate that Sam was maneuvering to get full board control. Then the independent directors were probably worried that -- sooner or later -- Sam would succeed. With Sam fully in charge the non-profit goals would be completely secondary to the commercial goals. This was unacceptable to the independent directors and Ilya and so they ousted Sam before he could oust them?

That's a clear motive. Sam and the independent directors were each angling to get rid of the other. The independent directors got to a majority before Sam did. This at least explains why they fired Sam in such a haphazard way. They had to strike immediately before one of the board members got cold feet.


Besides explaining the haphazardness, that would also nicely explain why they didn't want to elaborate publicly on why they "had" to let him go -- "it was either him or us" wouldn't have been popular given his seeming popularity.


I suspect his popularity is mostly about employees who want to maintain the value of their equity: https://nitter.net/JacquesThibs/status/1727134087176204410#m

Wild guess: if the board stands its ground, appoints a reasonable new CEO, and employees understand that OpenAI will continue to be a hot startup, most of them will stay with the company because of their equity.


Except, the facts behind Sam's firing will inevitably come out, and it won't be possible to brush them under the carpet. I think they hoped the facts wouldn't come out, and they could just give a hand-wavy explanation, but that's clearly not going to happen. It seems they have well and truly shot themselves in the foot, and they will likely have to be replaced now.


If I'm correct then the board is fine with getting replaced, they just don't want Sam to have total control. Many of the candidates for independent director are friendly with Sam and will happily give him the keys to the kingdom. It's probably extremely difficult to find qualified independent board members who don't have ties to Sam.


Idk, seems like a pretty easy sell to me:

"Sam was trying to censor legitimate concerns the board had with regards to the safety of the technology and actively tried to undermine the board and replace it with his own puppets."

If that is indeed true, they made a mistake by saying something vague, imo.


I suspect the board prioritized legal exposure first and foremost. They made the mistake of not hiring a legal or PR firm to handle the dismissal.


If they prioritized legal exposure, they would not have made disparaging remarks in their initial press release.


Vague disparaging remarks, fine. Specific allegations, not so much.


>This at least explains why they fired Sam in such a haphazard way.

The timing of it makes sense, but the haphazard way it was done is only explained by inexperience.


I mean, here is a relevant passage from the paper, linked in another comment: https://news.ycombinator.com/item?id=38373684

If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor. But the tone of that paper makes me think that the schism must go back much earlier (other reporting said things really started to split a year ago when ChatGPT was first released), and it sounds to me like Toner was needling because she was pissed with the direction OpenAI was headed.

I'm thinking of a good previous comment I read when the whole Timnit Gebru situation at Google blew up and the Ethical AI team at Google was disbanded. The basic argument was on some of the inherent incompatibilities between an "academic ombudsman" mindset, and a "corporate growth" mindset. I'm not saying which one was "right" in this situation given OpenAI's frankenstein org structure, but just that this kind of conflict was probably inevitable.


Just spot checking: did anyone comment on this paper when it was published? Did any media outlet say “hey, a member of the OpenAI board is criticizing OpenAI and showing a conflict of interest?” Did any of the people who cover AI (Zvi, say) discuss this as a problem?

These are serious questions, not gotchas. I don’t know the answers, and I think having those answers would make it easier to evaluate whether or not the paper was a significant conflict of interest. The opinions we have formed now are shaped by our biases about current events.

It didn’t make HN.


Gambling that no one reads academic papers by fringe OpenAI board members no one has heard of is probably a safe bet, but it's still a risk: some doomer AI people on Twitter could have pumped it up, some journo could have discovered the tweets and sold it as "concern in the industry, including from one of OpenAI's own board members," and it could have been swept up in the kind of lawyer-style grilling by Congress that Sam just had to go through.


Who cares if the paper was covered in the media or not? Think tanks write policy papers for regulators, not HN. And it's the regulators that Sam was worried about. From the article:

> Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.


> If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor.

Considering what's in the charter, it seems like she didn't do anything wrong?

> We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

more here: https://news.ycombinator.com/item?id=38372769


I agree and none of the other passages people are quoting from the paper (which I admittedly haven't read yet) seem at all controversial. She's saying OpenAI's messaging about AI safety is not landing because it's simultaneously launching products that are taking the spotlight, and Anthropic is doing a better job at signaling their commitment to safety. That's true, obvious, and entirely in line with the charter she's supposed to uphold.


> Considering what's in the charter, it seems like she didn't do anything wrong?

It’s incredibly disingenuous to slap your name on an ethics paper claiming a company is engaging in malfeasance, such as triggering a race to the bottom for AI ethics, when you have an active hand in steering the company.

It’s shameful. Either she should have resigned from the company, for ethical reasons, or recused herself from the paper, for conflict of interest.


> when you serve on the board of directors for that company

I'm not sure that's quite right, because the not-for-profit/for-profit split makes it more complex.

In this case, the not-for-profit board seems to act as a kind of governance over the profit arm; in a way, it's there to be a roadblock to the profit arm.

Normally a board's incentives are aligned with the shareholders': maximize profit.

Here the board has the opposite incentive: to maximize AGI safety and achievement, even to the detriment of profit and investors.


She is at the top of the pyramid. Did they not fire the chief executive? I am saying she is morally culpable for OpenAI’s actions as a controlling party.

To put these claims in a published paper in such a naive way with no disclosure is academically disingenuous.


She's not in charge of the for-profit arm, though; all she could do was fire the CEO, and she did, which would seem consistent with her criticism. I don't think she has much more power than that as a board member. She also isn't at the top, in the sense that she needs other board members to vote with her to enact any change, so it's possible she kept bringing up concerns and not getting support.

Academically, did she not disclose being on the board on her paper?


Her position was listed as a fun fact, not as a responsible disclosure of possible conflicts of interest (though it ran the other way).

Being at the top of the org and being present during the specific incidents that give one qualms burdens one with moral responsibility, even if one voted against them.

You shouldn’t say “they did [x]” instead of “we did [x]” when x is bad and you were part of the team.


It sounds like your argument is "Even if OpenAI did something bad, Helen should never write about it, because she is part of OpenAI".

Or, that she should write her paper in the first person: "We, OpenAI, are doing bad things." That would probably be seen as vastly more damaging to OpenAI, and also ridiculous since she doesn't have the right to represent OpenAI as "we".

I have no idea why you think that should be a rule, aside from wanting Helen to never be able to criticize OpenAI publicly. I think it's good for the public if a board member will report what they see as potentially harmful internal problems.


I just don’t know why an ethicist would remain involved in a company they find is behaving unethically and proceed with business as usual. I suppose the answer is the news from Friday, though the course feels quite unwise for the multitude of reasons others have already outlined.

Regarding specific verbiage and grammar, I’m sure an academic could give clearer guidance on what is better form in professional writing. What was presented was clearly lacking.


One thing we've learned over the past few days is that Toner had remarkably little control over OpenAI's actions. If a non-profit's board can't fire the CEO, they have no way to influence the organization.


You’ll catch more flies with honey than vinegar.


Did we read different things? All it said was that they had been accused of these things, which is true. If your charter involves ethical AI I’d imagine the first step is telling the truth?


From the PDF:

While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous “research preview,” the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5’s prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT’s release was to spark a sense of urgency inside major tech companies. To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track “green lane” to allow products to be released more quickly. This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated that they wish to avoid. OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to “jailbreaks” that allow users to bypass safety controls. This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.


> This […] provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.

What a lovely turn of phrase! I'm stealing it for use later, in place of "actions speak louder than words".


Well since that's not what the paper claims at all...


The paper itself claims that OpenAI's actions have undone their stated goals:

https://news.ycombinator.com/item?id=38374972

It has an excess of weasel words, so you might need to employ ChatGPT to read between the lines.


This is really interesting. It makes perfect sense that they weren't sitting at 6 board members for 9 months because Sam and the others didn't see the implications, but because they saw them all too well and were gridlocked.

But then it gets interesting inferring things from there. Obviously sama and gdb were on one side (call it team Speed), and Helen Toner on the other (team Safety). I think McCauley is with Toner (some connection I read about which I don't remember now: maybe RAND or something?).

But what about D'Angelo and Ilya? For the gridlock, one would have to be on each side. Naively I'd expect the tech CEO to be Speed and Ilya Safety, but then what would have precipitated the switch on Friday? If D'Angelo wanted to implode the company due to a conflict of interest, wouldn't he just have sided with Team Safety earlier?

But maybe Team Speed vs Team Safety isn't the same as Team Fire Sam vs Team Don't. I could see that one as Helen, Tasha, and Adam vs Sam, GDB, and Ilya. And that also makes sense to me in that Ilya seems the most likely to flip for reasons, which also lines up with his regret and contrition. But then that raises the question: what made him flip? A scary exchange with a prototype GPT-5, which made him weigh his Safety side more heavily than his loyalty to Sam?


Maybe Sam wanted to redirect Ilya's GPUs to ChatGPT after the DevDay surge. 20% of OpenAI's GPUs are allocated to Ilya's team.


My conclusion was that Sam slipped up somewhere and lost Ilya, maybe because of the reason you mentioned. Previously it seems like it was a 3-cofounders-vs-3-non-cofounders board split. Ilya switched teams after being upset by something.

If I may wear my conspiracy hat for a second: Adam D'Angelo is a billionaire or close to it, so he has a war chest for a battle to hold on to the crown jewels of AI. Sam has powerful friends, but so does D'Angelo (Facebook mafia). I don't think the board anticipated the potential 90% employee turnover, so there is a small chance they leave the board for that reason. But my guess is there is a 4-letter company that starts with an 'M' and ends with an 'a' that comes into the picture eventually.


Random fanfiction: it's also possible that it wasn't actually a 3-3 split but more like a 2-2 split, with 2 people -- likely Adam and Ilya, though I guess Adam and Tasha is also possible -- trying to play nice and not obviously "take sides." And then eventually Sam thought he had won Adam and Ilya's loyalty re: firing Helen but slipped up (maybe Adam was salty about Poe and Ilya was uncomfortable with him "being less than candid" about something Ilya cared about. Or maybe they were more principled than Sam thought).

And then to Adam and Ilya, normally things like "you should've warned me about GPTs bro" or "hey, remember that compute you promised me? Can I prettyplease have it back?" are the kind of things they'd be willing to talk out with their good friend Sam. But Sam overplayed his hand: they realized that if Sam was willing to force out Helen under such flimsy pretexts, then maybe they're next, GoT style[1]. So they had a change of heart, warned Tasha and Helen, and Helen persuaded them to countercoup.

[1] Reid Hoffman was allegedly forced out before, so there's precedent. And of course Musk too. https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...


It's not just OpenAI. Every AI organization is discovering that they have internal groups which are pulling in a different direction than the rest of the organization, and trying to get rid of those groups.

* Google got rid of its "Ethical AI" group

* Facebook just got rid of its "Responsible AI" team

* OpenAI wanted to get rid of the "Effective Altruists" on the board

I guess if I was afraid of AI taking over the world then I would be rooting for OpenAI to be destroyed here. Personally I hope that they bring Sam back and I hope that GPT-5 is even more useful than GPT-4.


I feel the people advocating safety, while they are probably right from a moral and ethical point of view, are just doomed to fail.

It's like the nuclear bomb: it's not as if, had Einstein withheld his contributions, we wouldn't have nuclear bombs today. It's always only a matter of time before someone else figures it out, and before someone with bad intentions does.

I think any approach to AI safety has to assume there are already bad actors with super-powerful AI around, and ask what we can do in defense against that.


Yes, I agree. There will always be some evil person out there using AI to try to achieve evil goals. Scammers, terrorists, hackers. All the people who use computers now to do bad things, they're going to try to do even worse bad things with AI. We need to stop those people by improving our security.

Like fixing the phone number system already. There must be some way to stop robo scam calls. Validate everyone making a call and stop phone spam. That would do a lot more to help AI safety than randomly slowing down the top companies.


Yeah, it’s an awful role basically by design. It reminds me of my uncle’s role as the safety/compliance lead at a manufacturing facility. He was just trying to do his job but constantly getting bullied, maneuvered around etc because his job was effectively to slow everyone else down. It drove him crazy, he ended up having a mental breakdown, becoming an alcoholic, and having to go on disability for the remainder of his career. I’m sure some people can handle the stress of those roles, but ugh, not worth it IMO.


It's interesting that the paper is selling Anthropic's approach to 'safety' as the correct approach when they just launched a new version of Claude and the HN thread is littered with people saying it's unusable because half the prompts they type get flagged as ethical violations.

It's pretty clear that some legitimate concerns about a hypothetical future AGI, which we've barely scraped the surface of, turn into "what can we do today," and it's largely virtue-signalling behaviour: crippling a very, very alpha, non-AGI generation of LLMs just to show you care about hypothetical future risks.

Even the correlation between commercialization and AI safety is pretty tenuous. Unless I missed some good argument about how having a GPT store makes AGI destroying the world easier.

It can probably best be summarized as: Helen Toner simply wants OpenAI to die for humanity's sake. Everything else is just minor detail.

> Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.

> “That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.

https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c


What’s surprising to me is that top-level executives think that self-destructing the current leader in LLMs is the way to ensure safety.

Aren’t you simply making space for smaller, more aggressive, and less safety-minded competitors to grab a seat on the money train to do whatever they want to do?

Pandora’s box is already open. You have to guard it. You have to use your power and influence to make other competitors do the same with their own boxes.

Self-destructing is the worst way to ensure AI safety.

Isn’t this just basic logic? Even ChatGPT might have been able to point out how stupid this is.

My only explanation is that something deeper happened that we’re not aware of. An us-or-them board fight might explain it. Great. Altman is out. Now what? Nobody predicted this would happen?


> Pandora’s box is already open. You have to guard it. You have to use your power and influence to make other competitors do the same with their own boxes.

> Self-destructing is the worst way to ensure AI safety.

If they don't believe you're prepared to destroy the company if that's what it takes, then you have zero power or influence. If they try to "call your bluff", you have to go there, otherwise they'll never respect you again.


So the outcome of that is that you either destroy the company or you’re out? Either outcome is the same from your perspective: you chose to no longer have any say in what happens next.


Sure, but you have a say in what happens right then and there.


Has Toner (or someone with like-minded views) filled in the blanks between "GPT-4" and "Terminator Judgement Day" in a believable way? I've read what Yudkowsky writes but it all sounds so fantastical that it's, at least to me, more like an episode of The Twilight Zone or The X-Files than something resembling reality. Is Toner really in the "nuke the datacenters" camp? If so, was her placement on the board not a mistake from the beginning?


I think the middle ground that several of the OpenAI board members were aiming for is to "responsibly develop AGI", which means developing at a moderate pace while trying to avoid kicking off an investment gold rush through heavily commercial use cases, and spending a substantial amount of resources on promising safety research (such as Ilya's work).

In my opinion, it was not a very strong position because the allure of money and trying to be the biggest is too strong (as we're seeing now), but I think it was at least coherent.


So, no, the blanks have not been filled in then. Because, for those in Toner's camp, that middle ground is just "progress from GPT to T-1000, but slowly", right? Yudkowsky talks about AIs 3D printing themselves into biological entities and killing us all because humans are an inefficient use of matter. It's not a strong position to me because it sounds ridiculous, not because there's greed involved.


It takes some delusions of grandeur to think half the board of a single non-profit that just happens to be the first mover can stop the full forces of American capitalism - although I kind of respect the drive/purpose.

They could easily lose any power they had to guide the industry; it was a huge gamble. I remember reading a Harvard Business School study showing that first-mover advantage has repeatedly turned out to be ineffective in the tech industry, as there is a long series of early winners dying out to later market entrants: Friendster to FB, Google, a bunch of dotcom-era e-commerce companies predating Amazon, etc.

They need full industry/society buy-in - at an ideological level - to win this battle; they won't win through backroom dealing in a boardroom while losing 90% of their own staff.


I don't know whether I have "like-minded views", but I think the risk is there, and it is not fantastical to me because the AGI doesn't need to "do" anything outside of humans assisting it -- no killer robots or nanobots. It just needs to persuade humans to kill for it, or gather resources for it, and so on. And because it's an AGI, it has superhuman abilities for persuasion.

The intuition for GPT-4 being on the way towards superhuman intelligence or persuasion abilities is that the only significant difference between it and GPT-3 appears to be the amount of compute power applied to training.

The previous US President was much less than superhuman, and he still managed to seriously threaten democracy in the US, and by extension cause a minor existential risk to civilization. Imagine what a more intelligent agent with enough attention to be personally persuasive instead of generically persuasive could achieve for itself, given a goal. (If you think that LLMs can't express goals or preferences, read some of the Bing Sydney transcripts.)


Exactly. AGI is overrated because the world isn’t run by its smartest people; ChatGPT would make a better US or Russian president at this point than treasonous or genocidal geriatrics.


I feel like the “Terminator Judgement Day” scenario is a strawman more often than not. There are reasons to push for the safe and responsible development of AI that aren’t based on the belief that AI is literally going to launch the nukes.

The biggest risks I see from AI over the next 2 decades are:

1. A complete devolution in what is true (because creating convincing fakes will be trivial). From politics to the way people interact with each other on a daily basis, this seems like it has the potential to sow division.

2. The societal turmoil/upheaval arising from the mass unemployment (and subsequent unemployability) of millions of people without the social safety nets in place to handle that. When people are suddenly unable to work and support their families, and those who have consolidated the power have no interest in sharing the wealth, there will inevitably be pushback.


In his recent TED talk, Ilya talks about AI as 'brains in a computer' and explains why he believes they aren't that far off from AGI. [0] GPT-4 is a substantially worse software engineer than I am, but it is 10x GPT-3. And if we 10x for at most 2 or 3 more iterations, it could be better than me. While it is unclear that we can get there, it doesn't seem implausible to me, and it would have deep implications for human existence. That would already be true if it just stayed at that level, but what most doomers are really worried about is the possibility of GPT-8 being as good an AI researcher as most or all humans, or better.

[0] https://www.youtube.com/watch?v=SEkGLj0bwAU


No idea if the concerns are realistic, but dismissing them because they are fantastical surely is a fallacy, no?

We currently almost have scifi-level AI we can talk to on our smartphones, which would have been fantastical only a few years ago.


The concerns are unrealistic because there are massive, unusual, and unvalidated assumptions at multiple levels, and linchpin arguments that depend on utterly unsupported assumptions.


> fill in the blanks

What exactly are you looking for here, proof of concept? This stuff is inherently speculative - because if it wasn't speculative, we'd probably be dead.


I have a feeling EA[1] is like anarcho-capitalist communities (or the inverse for socialism), where you have one good idea, like markets==good, and then apply it like a hammer to everything you see in life. Any hint of gov/statism is treated as the enemy, regardless of its IRL negative impact on markets or net freedom, or whatever positive end goal they envision.

OpenAI is (hypothetically) progressing toward AGI, therefore bad. Whether GPT specifically is half a foot into a 1000 km journey or potentially insignificant doesn't matter to them, as they've pre-decided that any potential for progress is all bad.

[1] if you search "Helen Toner" on YouTube it's just a collection of videos of her talking at EA conferences. So she's a believer


No one has, no.


>how having a GPT store makes AGI destroying the world easier

The argument in general is that the more commercial interest there is in AI, the more money gets invested and the faster the companies will try to move to capture that market. This increases the risk for AGI by speeding up development due to competition, and safety is seen as "decel".

Helen was considering the possibility of Altman-dominated OpenAI that continued to rapidly grow the overall market for AI, and made a statement that perhaps destroying OpenAI would be better for the mission (safe development of AGI).


If true, they should never have started in the first place, because there was no way she was not going to press that button.


This sounds convincing, especially considering this story where Sam Altman was involved in a "long con" to seize board control of Reddit (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...).

I think Sam may have been angling to get control of the board for a long time, perhaps years, manipulating the board departures and signing deals with Microsoft. The board finally realized this when it was 3 v 3 (or perhaps 2 v 3 with 1 neutral). But Sam was still working on more funding deals and getting employee loyalty, and the board knew it was only a matter of time until he could force their hand.


Also of note, this comment by Sam's verified reddit account describing the con as "child's play": (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...)


In the page linked, she says that OpenAI's system card fell short as a signal of a "commitment to safety". I feel that's a fair critique even for someone on a non-profit board, unless she's really advocating for systemic change in the way the company operates and does any commercial business.


In this instance, "Sam is trying to take over the board on a flimsy basis" would be a reasonable reason to remove him. But starting to lead discussions about whether she should be removed is still very, very far from actively working to remove her.

This is amateur hour, and considering what happened, she probably should have been removed.


Leading discussions on whether someone should be removed is literally actively working to remove them.


It's quite literally not. After she wrote that, it would be silly not to even have discussions considering it.

Actively working would require active efforts to make this happen.


What would those "active efforts" consist of beyond "leading discussions"? You have the discussion, you have the board vote, you're done, there's no ??? step.


I mean it's not cool for board members to publicly criticize the company.


I think you're taking the intuition from a for-profit company and wrongly applying it to a non-profit company. When a board member criticizes a for-profit company, that's bad because the goal of the company is make a profit, and bad PR is bad for profit. A board member criticizing a non-profit doesn't have the same direct connection to a result opposite of the goals of the company. And if you actually read the page, it's an extremely mild criticism of OpenAI's decisions.

This situation is simultaneously "reckless board makes ill-considered mistake to suddenly fire CEO with insufficient justification" and "non-profit CEO slowly changes the culture of a non-profit to turn it into profit-seeking enterprise that he has a large amount of control over".


>A board member criticizing a non-profit doesn't have the same direct connection to a result opposite of the goals of the company.

Even in a nonprofit, the board has an obligation to maintain a good working relationship with their organization. It's very rare for a board member to publicly criticize their own organization.

Also, this argument goes out the window because OpenAI is not just a nonprofit. When they started their for-profit subsidiary in 2019, they accepted a fiduciary duty to their investors.


That’s not how that works. You can argue that Ilya has a fiduciary responsibility to the investors as an officer of the company. Toner doesn’t. She is an independent board member on the board of the non-profit that owns a controlling share in the company, but she doesn’t personally own a majority share of the company. As an independent board member, she has no duty but to uphold the charter. She represents no shareholders.

Majority shareholders may have some fiduciary responsibility to minority shareholders, so you might be able to make the argument that the board as a whole shouldn’t write disparaging remarks about the company. But even that argument is tenuous.


She joined the board of OpenAI knowing that they owned a for profit subsidiary that received billions of dollars of investment. Being independent means that she could provide a unique outsider perspective, not that she has no obligations to her organization whatsoever. If you're going to be completely independent, you shouldn't be surprised when over 90% of the organization turns against you.


Whether people turn against her or not is irrelevant to my point. She has no personal fiduciary duty to the minority shareholders in the for-profit company. Her only duty is to the charter.


That people turned against her is relevant because it demonstrates how untenable this standard of independence is. A charter is just a collection of words. She isn't a board member of a collection of words; she's a board member of an organization of people.


It’s not relevant to my point that she has no fiduciary duty to anything but the charter.

Her only personal fiduciary responsibility is to the mission defined in that collection of words, not to the employees of the for-profit.

That’s how this works whether you think it should or not.


Are you going to engage with anything I say or are you just going repeat yourself again? Is there a divine stone tablet that commands a nonprofit board member is solely responsible for the nonprofit's charter that I'm missing?


No because it's not relevant to the only point I made, which is that Helen Toner has no personal fiduciary responsibility to the investors of the for-profit.

You implied that she couldn't publicly disparage the for-profit because the board has a fiduciary responsibility to the minority investors in the for-profit. I only piped up to correct that single point because it's wrong.


I mean, sure, she was free to publicly disparage her organization, but then she can't expect not to be antagonized. My point is that being a board member is a responsibility, and it's valid for Sam to interpret her paper as a violation of that responsibility.


>being a board member is a responsibility

And that responsibility is defined as a fiduciary duty to the mission of the organization as defined in the charter.

As a fellow board member, if Sam thinks Helen's behavior is in conflict with the mission of OpenAI as defined in the charter, he is free to push for her removal.


Or, the organization has an obligation to respect freedom of academic inquiry, because the truth is what helps organizations grow, change, and mature, does it not? If the company cannot handle independent critique then its fragility is not the fault of the researcher who is telling the truth.


> accepted a fiduciary duty to their investors.

Did they?


What do you mean? Schmidt was on Apple's board for 3 years while he was CEO of Google. Do you think Google did not criticise Apple during that whole time (remember that the CEO is ultimately responsible for the company communications)?

Even more so in the case of OpenAI: the board member is on the board of a non-profit, and those are typically much more independent and very often more critical of the decisions made by other board members. Just search for board members criticising medical/charity/government boards they are sitting on; there are plenty.

That's not even considering if the article was in fact critical.


FWIW that was a pretty strained relationship, and iirc Eric frequently had to recuse himself during any discussion of the iPhone, which eventually made his membership untenable.


Why was Schmidt on Apple’s board? Did he care about Apple’s mission or Apple shareholders?


it's not cool for companies to try to shut down academic freedom of inquiry in scholarly publishing in order to improve their public image

her independence from openai was the nominal reason she was asked to supervise it in the first place, wasn't it


He wasn't shutting down her academic freedom. However, if you're going to diss what your own company is doing and praise competitors, you probably shouldn't be on the board.

>During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

>Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

This person is too far gone. Life isn't a movie.


that would be true for a for-profit company where her fiduciary duty was to shareholders, but in this case, given the charter in question, not publicly criticizing the company would be more likely to be a breach of her fiduciary duty


> This person is too far gone. Life isn't a movie.

This comment is a credibility killer. "This person", as a member of the board of a non-profit, has a duty to keep the non-profit on mission, and even to call for its dissolution if that isn't possible.


How far should she go to fulfill her interpretation of that duty? Should she even commit crimes in order to preserve the non-profit mission? There are 700+ employees who disagree with her interpretation of the company's mission, and yet she stubbornly maintains that hers is correct and that it's her duty to destroy the company (or even sell it to Anthropic) to fulfill that duty.

This is delusional: she thinks she's some hero saving the world by keeping it a non-profit, when in reality it's just creating needless chaos and even impacting innocent people's livelihoods.


> Should she even commit crimes

probably not, but continuing to publish research papers of the kind that got her the job in the first place seems reasonable maybe

but that's what altman was criticizing her for, not for committing crimes, which as far as anyone knows she hasn't done

> There's 700+ employees that disagree with her interpretation

maybe, maybe not, but the standard way that nonprofits and other companies work is that the employees do what the management tells them, the management does what the board tells them, and the board does what the shareholders or the charter tells them, and in all cases, if they refuse, they get fired

if you're an employee at the american cancer society and you decide that malaria in africa is a more important cause, you probably shouldn't expect to be able to continue using the acs's assets to fight malaria in africa. it might happen, but you shouldn't be at all surprised if you get fired


Again, who's stopping her from publishing research papers? She felt threatened and decided to retaliate by orchestrating Sam's firing.

They likely violated RICO by essentially nuking the company's valuation and trying to sell to Anthropic. That is a crime, by the way, so they don't seem above that.

They haven't even given a clear reason for firing Sam. "Not being candid" with no evidence doesn't sound like it directly violates "the charter", but you can really argue anything when you're just "interpreting" the charter.

Anyway, you can make infinite excuses for their actions and justify them infinite ways. Hopefully you agree that the current situation is a giant mess, and it's caused by their narrow misguided interpretation of their job, and it's at the expense of basically everybody else.


> They likely violated RICO by essentially nuking the company's valuation and trying to sell to Anthropic.

Really. What is/are the predicate offense(s) for this alleged RICO violation?


> They likely violated RICO

LOL

Nothing you have said is credible or honest.


She has a legally binding charter (well, assuming OpenAI's charter actually holds up in court, which we may end up seeing as the next logical step in the drama) to use board powers to further the non-profit mission. She does not have a legally binding obligation to commit crimes.


> Should she even commit crimes in order to keep the non-profit mission?

Should you argue in bad faith?


This is a non-profit, not a for profit company. Nominally, she's on the board precisely to criticize.




> A board member cannot sabotage their own company.

Criticism (especially valid criticism) is not sabotage, and in any case the board's responsibility is to advance the charter, not to support the (non-profit) company regardless of what it's doing.


There is a difference between being independent from an organization and taking a public stand against it. Reading that paper, which I am not really impressed with, she is taking the stance that OpenAI's current trajectory is not as good as a competitor's. I bet that if a Microsoft board member wrote a paper on how Google Search is better than Bing, AWS better than Azure, or OS X better than Windows, they would be reprimanded also.


A comparison to another nonprofit would make more sense. A board member of the Komen Foundation being reprimanded for pointing out that other nonprofits are far more effective at helping cancer research would be a better comparison.


But I bet if a Greenpeace-like-organization board member would write a paper how a different environmental group is greener, they would be praised.


Non-profits and for-profits aren't comparable. The latter have a duty to make money. The former have a duty to their mission.


I'm not sure you should get to have your cake and eat it too.


You seem to be fundamentally misunderstanding the role of non-executive board members.

In many cases they're industry experts / academics and in many cases those academics continue to publish papers that look objectively at the actions of all companies in their sphere of influence, including the one they're on the board of.

It's _expected_ for them to publish this type of material. It's literally their job. Cheerleading is for the C-suite.


They've admitted that they think it would be better to destroy OpenAI than to let it continue on its path, but how can an organization uphold its goals if it doesn't exist?

Imo you have to commit to working within an organization if you're on its board; you can't burn it down from within. I think it's less of an issue to try to remove the CEO if you don't agree with the direction they're taking, but this was obviously done in a sloppy manner in this case, and their willingness to destroy the company makes me distrust them (not that my opinion matters per se).


> but this was obviously done in a sloppy manner

I don't think it was done in a sloppy manner, I think there's a huge amount of spin and froth being generated to make it _look_ like it was done in a sloppy manner, but the reality is that the board took an executive action, released a discreet and concisely worded press release and then fell silent, which is 100% professional behaviour.

To me it's the other side that are acting sloppily, they're thrashing around like maniacs, they're obviously calling in favours behind the scenes as supportive articles are popping up all over the place, apparently employees were calling other employees in the middle of the night urging them to sign the 'I'll quit unless' letter, employees have been fed a misleading narrative (two of them, actually) about why the action was taken in the first place which riled them up...

I have to assume here that the board - as they're the ones acting professionally - similarly acted professionally before the firing and discussed the matter at great length and only resorted to this last minute firing because of exigent circumstances. The fact that a fired employee was able to cajole/bluff/whatever his way back into the office to prove some sort of point suggests that the board's action may have been necessary. What exactly was done during that time in the office, one might ask? Were systems accessed? Surely it was an unauthorised entry and a major security violation.

You do see that that action (re-entering the office after being fired) gives off lunatic vibes, don't you? Maybe it really did come down to 'yeah you're technically my boss but I'm gonna do whatever the hell I want and there's nothing you can do about it because everyone loves me' to which the board's only possible course of action is to fire immediately.


I don't think Sam cajoled his way in and talked to the employees; as I understand it, Ilya talked with the staff and revealed what they claimed were the reasons for the firing.


In your opinion, was a crime committed when that person returned to the office?

Either by himself or by the person that permitted him access knowing beyond any reasonable doubt that they were not authorized to be there?

If systems were accessed, was another crime committed?

If a person facilitated that access, has the boundary for 'conspiracy to commit...' been reached?


What? It's my understanding Sam Altman was invited back to talk to the board, and Ilya is still employed there.


Were the board members in the office on that day?

I just find the whole thing weird. I just think it seems unlikely that a board would decide something then about-face so rapidly as to have a meeting at the office the next day. It would be conventional for further meetings to take place at the offices of lawyers, for example.

It would also be normal for the non-fired members of staff to be instructed to have no further contact with the fired person without legal counsel present.

Sometimes it seems very much like everyone here is really playing to the cameras in a big way.


that does seem to be what altman was attempting: he wanted to have the cake of independent board members, but eat it by having them never express any opinions contrary to his own


It's a nonprofit board & it's absolutely the role of advisory board members to continue their work in the sector they're specialized in without bias.


But she was biased. She seemed so biased against her own company that her intended course of action seems to be to destroy it.


If she has a justifiable reason for holding that position it isn't bias.


OpenAI does not have an advisory board. She is on the board of directors and has fiduciary responsibilities. It does not matter that the parent organization is a non-profit. The duty remains.


Yes, those responsibilities are to advance the charter of the organization, which notably is totally compatible with publishing articles that contain mild criticism of the organization's behavior.


The board’s fiduciary duties are to the charter of the nonprofit. They have no duty to protect the for-profit-OpenAI’s profits, reputation, or anything else.


This has been argued here already and dissected thoroughly.

It's basically restating 'her role is to cheerlead' but with more sophisticated wording.

You haven't made an argument, you've just used longer words to restate your (incorrect) opinion.

Also I never said they had an 'advisory' board. Board members are either executive or non-executive members. If they're nonexec then their role is to advise the executive members. They're generally fairly independent, and many nonexecutive board members serve on the boards of multiple companies, sometimes including ones that are nominally in competition with each other.


No, I never said that her role is to "cheerlead". Refraining from public disparaging remarks is not a big ask for a board member. Especially if those thoughts were not first brought before the board, which is where such disputes should be resolved.

> Also I never said they had an 'advisory' board

From your comment: "...it's absolutely the role of advisory board members to continue their work".

Not sure how someone is meant to parse that except for you to imply that she was a member of an advisory board.


> Not sure how someone is meant to parse that

Someone is meant to use context and understand that when the context clearly refers to someone who is well known to be a member of the board of directors, 'advisory board member' means that they are a board member in an advisory role, aka a nonexec.

That assumes that someone is familiar with the structure and makeup of boards of directors. It's a bit rude to participate in a conversation if you're not familiar with the subject matter and the meaning of words-in-context as you just waste peoples time making them elaborate.


> 'advisory board member' means that they are a board member in an advisory role, aka a nonexec

No, those are not synonyms. Advisory boards are distinct from the board of directors. Advisory board members have roles similar to what you alluded to in your original comment and do not have fiduciary duties, hence my confusion.

Non-exec board members are not involved with day-to-day business operations but their fiduciary duties are no different than exec members.

Language has meaning. Don't insult others for your own clumsy usage.


Oh well I guess I used the wrong word then, but from context you should have been able to figure out the intent behind my words.

Only human!

> Don't insult others for your own clumsy usage.

We're on the internet. Something like 25% of internet discourse is nitpicking vocabulary. Let's keep traditions alive this holiday season!


Is this even true for for-profit companies? Like if a professor is on the board of a for-profit (which I think is pretty common for deeptech? Maybe companies in general too? https://onlinelibrary.wiley.com/doi/abs/10.1111/fima.12069#:....), are they banned from making a technical point about how a competitor's product or best practices are occasionally superior to the company's own?


This is the inherent conflict in the company.

Once it turned out you needed 7B parameters or more to get LLMs worth interacting with, it went from a research project to a winner-take-all compute grab. OpenAI, with an apparent technical lead and financial access, was well-positioned to win it.

It is/was naive of the board to think this could be won on a donation basis.


This is insane. "They had to fire Sam, because he was trying to take over the board".

First: most boards are accountable to something other than themselves, for the exact reason that it pre-empts that type of nonsense.

Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague ideological battle (and the fact that nobody will publicly own up to anything proves it is vague) isn't a good enough rationale for an exception to that rule, which suggests that her leaving the board soon would be a good idea.


> "They had to fire Sam, because he was trying to take over the board".

I mean, yes? The board is explicitly there to replace the CEO if necessary. If the CEO stuffs the board full of their allies, it can no longer do that.

> First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.

Boards of for-profits are accountable to shareholders because corporations with shareholders exist for the benefit of (among others) shareholders. Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.

> Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

No, the argument is that Sam Altman trying to bump off a board member on an incredibly flimsy pretext would be an obvious attempt at seizing power.

> Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.

This might be true w.r.t. for-profit boards (though not obviously so in every case), but seems nonsensical with non-profits. (Also, the article did not reductively claim "the company is bad".)


> Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.

The IRS’s actual power in this regard is extremely limited. There’s so much wiggle room to define the “mission” that, at best, the only thing they can do is make sure you’re not secretly using your nonprofit as a tax shelter for a regular for-profit business, and even then some companies find ways around it (see IKEA).

Ordinary fraud (as opposed to tax fraud, though maybe that too) is even easier to pull off with a nonprofit than with an for-profit. Any nonprofit has a fundraising mechanism (which is usually an obnoxious set of dark patterns and outright spam reached through decades of optimization; the techniques are well understood), some overhead for managing the organization and money itself, and then the actual work involved with the mission. It’s quite simple for the overhead and fundraising parts of the organization to become most of the organization. But it gets worse: when your “mission” is vague enough, you can also turn that side of the business into a bunch of sinecures for your friends. There are well known “awareness” nonprofits that do literally nothing to actually solve the problems they’re nominally about because their job is to “spread awareness” (i.e. the top of the funnel for the fundraising part of the organization).

This is ironically also the origin story of “effective altruism”. The IRS doesn’t stop you from running a non-profit scam, so you have to do your own research to find out which non-profits are legit, and that’s what the effective altruists started off with, along with some cost-benefit modeling once you get past the outright scams.

The problem with OpenAI seems to be related to a vague mission as well. From the “an organization is what it does” perspective, OpenAI develops AI. If you have a formal mission that emphasizes safety over capability, you could certainly argue that OpenAI was closer to achieving that mission by sabotaging its own work, but if your interpretation of “safety” is to sabotage the development of AI, there are certainly better ways to do it than to develop AI and then sabotage yourself.


I’m curious about the IRS angle. Anyone know more about that?

One reason open source software projects struggle to get donations is that the IRS is skeptical of OSS as a true non-profit activity, so it often doesn't allow 501(c)(3) tax-deductible donations. OpenAI seems like a data point in favor of the IRS's skepticism.


if that's the case why didn't they just stuff the board when they had the chance?


I think it’s also mentioned in the article that they had failed to add new members to the board due to disagreement among the board members.


Yeah it sounded like 3-3 gridlock until someone flipped, either it was Sam/Greg/Ilya against Adam/Helen/Tasha or it was Sam/Greg/Adam against Ilya/Helen/Tasha.


Then the argument that Altman would be able to easily stuff the board doesn't hold -- he didn't even have a majority.

Either you claim (1) that there was a risk of Altman gaining a majority and stuffing the board, and hence concede that a majority allows stuffing the board, in which case you concede that the actual majority -- which was anti-Altman -- could have done the same;

or (2) you concede that having a majority is not enough to stuff the board, in which case we were far from any risk of Altman doing so, given that he did not even have a majority of the board.

---

Edit, because I can't seem to reply further down the thread: I read the article. My point is precisely that the board had a majority; that is how they got to fire Altman in the first place. Given that they had a majority, they had the chance to stuff the board in their own favor, as per the argument above.


You should read the article, because it’s clearly stated how Altman could get a majority. For your other concern, a simple vote change (like Ilya’s) can explain it.


> Second: the anti-Sam Altman argument seems to be "let's shut the company down...

Isn't that the pro-Altman argument? The pro-Altman side is saying "let's shut the company down if we don't get our way." The anti-Altman side is saying "let's get rid of Sam Altman and keep going."


No. No it isn't.

The board-aligned argument seems to be "destroying the company is the right thing to do if it helps the cause of AI alignment".

Whereas the pro-Altman side seems to be "if you have the most successful startup since Google/Facebook, you shouldn't blow it up merely because of vague arguments about << alignment >> ".


> "if you have the most successful startup since Google/Facebook, you shouldn't blow it up"

... But they aren't the ones blowing it up? Firing one guy, even the CEO, isn't blowing it up. The only side that directly threatened "if you don't do X, we blow it up" was the pro-Altman side.


Your pro-Altman framing means that the argument should be won by Altman's side just because it's a successful startup, while in reality it's a non-profit with a mission that contradicts Altman's position (according to the article).


Do you think Sam is more aligned with OpenAI's non-profit charter than Helen?


The only specific accusation made in the article is that Sam criticized Helen Toner for writing a paper: https://cset.georgetown.edu/publication/decoding-intentions/

The paper says that Anthropic has a better approach to AI safety than OpenAI.

Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

All of that ... seems completely reasonable?

Like I've heard a lot of vague accusations thrown at Sam over the last few days and yet based on this account I think he reacted the exact same way any CEO would.

I'm much more interested in how Helen managed to get on this board at all.


>We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>...

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

>Our primary fiduciary duty is to humanity.

>...

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

https://openai.com/charter

Seems to me like Helen is doing a better job of upholding the charter than Sam is.


This charter is doomed to be interpreted in radically different ways by people with differing AI-eschatological beliefs. It's no wonder it's led to so much conflict.


Sam wanted to restrict Helen's free expression on the topic of beneficial AI in order to boost OpenAI's position. That suggests he cares more about the success of OpenAI than he does upholding the charter.


He had dual overlapping roles: a duty to promote safe AI to the world, and a duty to prevent own-goal sabotage of the LLC. Ideally those roles wouldn't conflict. He probably didn't care if such a paper was written, but to have a board member as author is beyond awkward. But as an academic, with her own dual overlapping roles, she has a need to publish. ("Uh, please voice your concerns before we develop it instead of sandbagging us with a damning critique in an academic journal after the fact!") This seems like a recipe for disaster. I'm not certain either really was "wrong" in that conflict. It just seems like this structure was doomed to failure, throwing uncompromising zealots together, all with launch codes and a big red button.

Safety isn't binary; it's degrees of risk assessment, mitigation, and acceptance. Infinite safety means never progressing. But never progressing means failing the charity's mission. And AI isn't just being created at OpenAI; for them to succeed at the charity's mission, they need to stay ahead of risky competitors with a safer AI.

The charity can't afford its GPU costs without the LLC, and the LLC can't lead the industry if it's behind all the competitors. To be relevant, the safety side needs compute, they need to be ahead of the curve, and they also need real-world data from the LLC. If the charity nukes the LLC, it takes itself with it and fails its mission. They're so intertwined that they need compromise on the board. And they let that board dwindle to small entrenched factions, which sounded more like the Hatfields and the McCoys. With such a fragile structure they needed mature adults with functional conflict resolution.


No, Sam wanted to be the first recipient of Helen's expression.

So -- if appropriate -- he could make changes.

Or he could not change, and she publishes anyway.


>> Sam wanted to restrict Helen's free expression.

> No, Sam wanted to be the first recipient of Helen's expression.

If I get copy approval over what you write, that is by definition a restriction on your freedom of expression. How is this up for debate?


My interpretation of that was that Sam felt Helen's duty as a board member is to come to the company with her complaints, and work to improve the safety measures.


You don't normally perform your role as a board member through publications in a public forum. You do it by exerting pressure internally to get the effect you desire, so that you can continue to do so in the future. Going public is usually a one-shot affair.


Being a co-author of that publication was not her performing her role as an OpenAI board member, it was her day job.


Conflicts of interest suck. Experienced board members avoid them or deal with them. They don't allow their conflicts of interest to spiral out of control to the point that they destroy the thing they're supposed to govern.


> Conflicts of interest suck. Experienced board members avoid them or deal with them.

Agreed. It seems remarkable that she dealt with her conflict of interest so thoroughly that she published an article that was mildly critical of the organization. I do not think her duty was to let Sam Altman seize power by firing her from the board on a flimsy pretext, though.


She went a bit further than that.


Did Helen's publication conflict with the charter though?

I'd say the OpenAI employees are the ones with the conflict of interest, since they stand to get rich if the company does well, independent of whether the charter is being upheld.


No, it did not conflict. But that doesn't mean you shouldn't first try to get things fixed internally rather than publish.

> I'd say the OpenAI employees are the ones with the conflict of interest, since they stand to get rich if the company does well, independent of whether the charter is being upheld.

Those pesky employees again. Too bad we still need them. But after AGI is a fact we don't and then it'll be a brave new world.


It’s interesting that people are making the assumption Toner didn’t raise these concerns internally as well.


> So -- if appropriate -- he could make changes.

Where are you drawing this conclusion from? Nothing in the text suggests that was his intent - certainly his actions thereafter suggest that he was in fact more concerned with the potential harm to the for-profit company.


The journalist failed to make that point really clear. He "reprimanded" her "and" said "it was dangerous to the company". What exactly did he "reprimand" her for? The "and" seems to imply two separate points of criticism.

>In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.


The normal operation of every human relationship.


Let's recall the CEO is employed by the board, not the other way around.


Is Sam a member of her faculty? Or an advisor?

Otherwise I'm not sure what changes would be "appropriate" for him to direct.


> That suggests he cares more about the success of OpenAI than he does upholding the charter.

A board member should care about their organization more than their interpretation of the charter. If a Mozilla board member publicly criticized Firefox for violating their charter of a free and open internet because they're receiving money from Google, while also praising Brave for not doing so, I guarantee that they'd also be ousted.


I would have thought a non-profit board that puts the growth and success of their organization above the mission in their charter, even if they view the organization as going against their mission, sounds like the worst kind of board. How could that possibly be a good idea?

Isn't the whole reason we have non-profit boards and charters so that they can make just this kind of call?

(I'm not saying that this is what is happening here -- just responding to your particular claim.)


>Isn't the whole reason we have non-profit boards and charters so that they can make just this kind of call?

Yes. From the board. That's why we typically don't see boards air their dirty laundry out in the open, even when, in retrospect, they obviously had a problem with leadership.


The board seems to be getting an awful lot of crap for not saying enough, and also getting crap because their initial letter was too direct rather than being the usual corporate mealy-mouth BS.


The board is getting a lot of crap for being crap. Regardless of your opinion about their goals, their execution was objectively terrible. They were clearly completely detached from their organization. At the very least, they could have made sure their first interim CEO wouldn't instantly try to turn against them.

It's not a contradiction to criticize someone for saying too much sometimes and saying too little another time. The board should be open to insiders and cautious with external communications.


It's possible to simultaneously believe that announcing "the CEO lied to the board about X on day Y" is better than announcing "the CEO is leaving to spend more time with his family", and that announcing "the CEO lied to the board" and then refusing to elaborate when everyone, including the CEO, says "huh? what?" is worse than either.


This is exactly backwards. The reason we create organizations like this is not for the benefit of the organization itself, but to accomplish specific goals. Those goals are outlined in the charter. If the organization starts doing things that conflict with the charter, the responsibility of the board is to course-correct.


The place to course correct is from the board room, not in public. If she had enough power to oust the CEO, she should have enough power to pressure him to make changes.


The board can't compel the CEO to do specific things, only replace them.


Of course they can. If Sam Altman were really as heedless about AI safety as the board seems to think, GPT wouldn't be so aggressively aligned.


A misaligned AI could act aligned in the service of its interests, just like an employee trying to steal from a company would act like a good honest employee. Methods for getting AIs to visibly act a certain way aren't necessarily sufficient to control advanced AI systems.


>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.

How could this possibly be accomplished while trying to sell the product itself? Investors pouring billions into it are in it for profit... they're not going to let you just stop, or help a competitor for free.


it’s completely irrelevant language because nobody is anywhere close to actual AGI


Well, they're close to releasing a product that they know they can call AGI, as it's really just three letters that can be trademarked. It seems like it's being hyped up like ML or bitcoin: lots of hype to sucker in the investors, be the first to build a turk-in-a-box based on that new hyped model, add regulation to stifle competition, ???? profit.


Promoting Anthropic and putting down OpenAI doesn't make her better at her job. Her job isn't self promotion.


Unless she has equity in Anthropic (which would be major conflict of interest), I don't see how this is self promotion...?


I'm guessing the reasoning is something like this...

As a CEO I'd want your concerns brought to me so they could be addressed. But if they were addressed, that is one less paper that could be published by Ms. Toner. As a member of the OpenAI board, seeing the problem solved is more important to OpenAI than her publishing career.


https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s pzrincipal beneficiary is humanity, not OpenAI investors."

I see. I don't know whether she did discuss any issues with Sam beforehand, but it really does not sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).


> but it really does not sound like she had any obligation to do so

The optics don't look good, though, if a board member is complaining publicly.


Frankly, that's irrelevant first-order thinking.

If Sam had let it go, what would have happened? Nothing. Criticism and comparisons already exist and will continue to exist. Having it come from a board member at least supports the counterargument that they're well aware of the potential problems, and that there is an opportunity to address gaps if they are confirmed.

If regulators find the argument in the paper reasonable and that has an impact, what's wrong with that? It just means the argument was valid and should be addressed.

They don't need to worry about the commercial side, because more than enough money is being poured in.

Safety research is critical by nature. You can't expect research to be constrained to speaking only in positive terms.

Both sides should have worried less and carried on.


But her job is to do exactly that. Anybody in this space knows Anthropic was formed with the goal of AI safety. Her paper just backed that up. Is she supposed to lie?


What she is supposed to do is bring the issues to the company so that they can be fixed.

That's the pro safety solution.


Is it a complaint, or a discussion of the need for caution?


It does not sound like what she did helps advance the development of AGI that is broadly beneficial. It simply helps slow down the most advanced current effort, and potentially let a different effort take the lead.


> It simply helps slow down the most advanced current effort

If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".

> and potentially let a different effort take the lead

Sure but her job isn't to worry about other efforts, it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (and not whether OpenAI LLC, the for-profit company, makes any money).


On the other hand, you create more pressure for solving the problem by publishing an acknowledged paper, if your voice is not usually being heard.


She's a board member who held approximately 1/4 of the power to fire Sam, and she did eventually exert it. Why assume, instead, that her voice was not heard?


You should assume that most of this happened before the firings.

Back then it was 1/6 in terms of voting.

But voting is a totally different thing from raising concerns and actually getting them onto the agenda, which then gets voted on further if the board decides to do something about it.

In theory that is 1/6 * 1/6 of the power, if you are alone in pushing for the decision to happen.


I still see no justification for assuming that the board member's voice was not heard before the publication. There's zero evidence for it, while the priors ought to be favoring the contrary because she does wield a fairly material form of power. If more evidence does emerge, then we could revisit the premise.


> she does wield a fairly material form of power.

How? Being a board member is not enough. There were already likely two against her in this case, while the rest is unknown.


These guys are already millionaires. Do you think people writing these kinds of papers really are that greedy?


Is Helen associated with Anthropic?


Apparently an indirect association. From [0]:

Fall 2021

Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source [1]).

0. https://loeber.substack.com/p/a-timeline-of-the-openai-board

1. https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/...


Don't tell me there is another polycule in there somewhere.


> Her job isn’t self promotion

Isn’t she an academic? Getting people to pay attention to her is at least half her job.


The challenge of all this is that while everything going on looks totally bonkers from any normal sense of business, it's hard to argue that the board isn't following their charter. IMHO the mistake was setting up the structure the way it is and expecting that to go well. Even with MSFT: they are obviously annoyed, but they have shareholders too, and one reasonable question here is what the heck Microsoft's leadership was doing putting billions of capital at risk with an entity that has such a wacky structure and is ultimately governed by a non-profit board with a bizarre, doomsdayish charter. Seriously, if you haven't read it, read it.

This whole thing has been wildly mishandled, but there's an angle here where the nonprofit is doing exactly what it always said it would do, and the ones who potentially look like fools are Microsoft and the other investors that put their shareholder capital into this equation thinking that would go well.


When Microsoft came on board the charter effectively went out the window. It's like Idefix thinking he's leading Obelix around on a leash.


For anyone who is familiar with Obelix but has no idea who Idefix is, it's the original French name for Obelix' dog, Dogmatix. Idefix is a pun on the French expression idée fixe (fixed idea) meaning an obsession.


"It's like Idefix thinking he's leading Obelix around on a leash."

I don't think I have ever seen Idefix on a leash... he just runs around free. And there are indeed many dogs on a leash who lead their "masters", who just follow behind.


I picked them for their difference in relative size and strength, not to suggest that Idefix would ever accept a leash or because there are other dogs that lead their masters around.


Hm, I would think that here it is the board holding the leash, but being pushed forward in a direction they did not want to go.


Apparently not…


A lot of those billions in capital are simply Azure compute credits, though.


Those credits equate to billions of $ in real compute expense for MSFT.


I would hope that a billion in credits would not represent a billion in expenses for Microsoft.


They're giving up all the revenue from whoever that compute would otherwise have been sold to.

So it adds up to the same.


Microsoft giving these credits to OpenAI doesn't mean some other customer won't buy the Azure credits they planned on buying. So I don't see how what you write makes sense.

At least assuming Microsoft hasn't run out of capacity in their data centers. I know Azure sometimes has capacity issues, but those issues are intermittent, not, say, 5 years long (or whatever the window is for consuming these credits).


> doesn't mean some other customer won't buy the Azure credits they planned on buying

Yes it does, because we're talking about GPU availability, which is highly constrained across the whole economy.

If we were talking about CPU credits instead, then you're right that Microsoft would merely be paying/losing the cost of providing the compute.
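
A rough back-of-the-envelope of that distinction (purely illustrative numbers and a made-up marginal_cost_ratio, nothing from Microsoft's actual books):

    # Illustrative sketch only; marginal_cost_ratio is a made-up parameter.
    def cost_of_credit(credit_dollars, marginal_cost_ratio, capacity_constrained):
        if capacity_constrained:
            # Sold-out GPUs: every dollar of credit displaces a dollar of
            # revenue from a paying customer, so the cost is the face value.
            return credit_dollars
        # Idle capacity: the provider only eats the incremental cost of serving it.
        return credit_dollars * marginal_cost_ratio

    print(cost_of_credit(100, 0.3, capacity_constrained=True))   # -> 100
    print(cost_of_credit(100, 0.3, capacity_constrained=False))  # -> 30.0

Under scarcity the credit costs the provider its full face value in forgone revenue; with idle capacity it only costs the marginal expense of serving the workload.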


So when they give you $100 in free credits they are losing $100.


According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress." OpenAI is criticized for kicking off a race for AI progress by releasing ChatGPT to the public.

[1] https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...


It's important to remember that part of OpenAI's mission, apart from developing safe AGI, is to "avoid undue concentration of power".

This is crucial, because safety is a convenient pretext for people whose real motivations are elitist. It's not that they want to slow down AI development; it's that they want to keep it restricted to a tight inner circle.

We have at least as much to fear from elites who think only megacorps and governments are responsible enough to be given access as we do from AI itself. The failures of our elite class have been demonstrated again and again in recent decades--from the Iraq War to Covid. These people cannot be trusted as stewards, and we can't afford to see such potentially transformative technology employed to further ossify and insulate the existing power structure.

OpenAI had the right idea. Keep the source closed if you must, and invest heavily on safety, but make the benefits available to all. The last thing we need is an AI priesthood that will inevitably turn corrupt.


"OpenAI had the right idea. Keep the source closed if you must, and invest heavily on safety, but make the benefits available to all."

Well, you see the result. Apart from this drama, there is lots of debate and speculation about whether we are already in AGI territory, because it is all just a black box and no one knows exactly what was in the training data, which makes it impossible to judge the quality of the output.

That is way too much AI priesthood for me; just open it for real if you don't want concentration of power.


Yeah that's certainly a debate worth having.

My point is this coup against OpenAI (seemingly) wasn't started by people who want to make AI more open. They want to make it inaccessible unless you're in their club. To my reading, that is more starkly opposed to the charter than anything Sam Altman has done. Commercialization = giving wide access. Not total 100% open source access, but wide access nonetheless.


And so you strike a deal with Microsoft. That tracks.


I think their point was that OpenAI (the non-profit) had the right idea regarding those concerns about concentration.

The velocity of release and the entwining with MSFT (by the for-profit side) might then reasonably be seen as a great concern for the board.


Advancing AI requires massive amounts of capital and compute. There's no way around this. It can only be done either within or in partnership with huge organizations.

The default, in the absence of an OpenAI, is that it gets developed secretly by these organizations, and they get to decide who can use it (hint: it's not you, typical HN reader).

OpenAI were the only ones at least trying to walk this tightrope, and I think they were doing a pretty good job. So what were the real motivations of these three wealthy/elite board members who are taking it upon themselves to decide this issue for the rest of humanity? It's looking less and less likely that safety was the reason.


The paper reads a lot more nuanced than that. It compares the "system card" released with GPT-4 to the delay of Claude and the merits of each approach vis a vis safety.


Not really, and your own description is little different from the person you're responding to anyway?


>According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress."

Which can end up with China taking the lead. I don't understand why they think it's safer.


Having read the word soup of contradictory weasel words that makes up Claude's "constitution", 'superior safety approach' has so many asterisks it could be a star chart. The only thing Anthropic's garbage is superior at is making some people feel good about themselves.

(https://www.anthropic.com/index/claudes-constitution)


I'm already out after the very first sentence.

> How does a language model decide which questions it will engage with and which it deems inappropriate?

This is what we mean by safety? Content moderation?


There are even more gems in the paper, like this one:

> Suppose a leader pledges during a campaign to provide humanitarian aid to a stricken nation or the CEO of a company commits publicly to register its algorithms or guarantee its customers' data privacy. In both cases, the leader has issued a public statement before an audience who can hold them accountable if they fail to live up to their commitments. The political leader may be punished at the polls or subjected to a congressional investigation; the CEO may face disciplinary actions from the board of directors or reputational costs to the company's brand that can result in lost market share.

I wonder if she had Sam Altman in mind while writing this.


The CEO is generally accountable to the board. A CEO trying to silence criticism and oust critical board members may be typical behaviour in the world of megalomaniacal tech startup CEOs, but it is not generally considered good corporate governance. (And usually the megalomaniacal tech startup CEOs have equity to back it up.)


He said he wished she had communicated her concerns to him beforehand. How can disagreements be dealt with if they are never communicated directly? So the CEO has to first learn of a disagreement with a fellow board member through a NY Times article?


Hard to defend this level of flex, though:

> Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

It's like your job is to fire me if I fail to fulfill the company mission, but I will fire you if you don't get my approval before saying stuff in public.


Keep in mind Altman was also a board member, with more senior tenure than Toner, and basically hired her onto the board.


No, she replaced Holden Karnofsky, who was almost certainly the one to pick her.


Who, it's worth mentioning, got a seat because Open Philanthropy donated $30 million to OpenAI early in its creation.


Pledged $30 million, $10 million per year for three years, but most likely only $20 million was received. Elon Musk gave $40 million. There is also $70 million in mystery "other income" in the 990s that is missing the required explanation (nature and source).

OpenAI is operated as a public charity and is required to meet a "public support test" of 33%, so Musk could not have given his $40 million without the $20 million from Open Philanthropy, an EA-supported public charity. In fact, most of Open Philanthropy's money went to OpenAI.

Public charities are also required to have less than a majority of the board be employees or relatives of employees. For a while, after Elon Musk was removed, Holden Karnofsky of Open Philanthropy was the only non-employee on the board.
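
For anyone curious how a single large gift can fail that test, here is a minimal sketch of the one-third calculation as I understand it (treating the 2%-per-donor cap and the pass-through for grants from other public charities as my own simplifying assumptions; the real rules average over five years and have more exceptions, so this is not tax advice, and the figures are only illustrative):

    # Simplified model of the one-third public support test (my reading, see caveats above).
    def public_support_fraction(donations):
        # donations: list of (amount, from_public_charity_or_government)
        total = sum(amount for amount, _ in donations)
        cap = 0.02 * total  # a single private donor counts only up to 2% of total support
        public = sum(amount if exempt else min(amount, cap)
                     for amount, exempt in donations)
        return public / total

    # Hypothetical figures echoing the comment above: one $40M individual gift alone,
    # versus the same gift plus a $20M grant from a public charity.
    print(public_support_fraction([(40e6, False)]))                # 0.02  -- far below 1/3
    print(public_support_fraction([(40e6, False), (20e6, True)]))  # ~0.353 -- above 1/3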


Why is that? I didn't think outgoing board members got to control who replaced them.

Other board members elect replacements iirc


The "disagreement" was never dealt with as far as I can tell --- OpenAI's safety approach hasn't become more conservative --- which means that the only effect of bringing it to the CEO beforehand would have been to get it suppressed.


Sam could never have a selective memory about such a thing...


> Sam apparently said she should have come to him directly if she had concerns about the company's approach

That seems dishonest given the last three years or so of conflict about these concerns that he’s been the center of. Of course he’s aware of those concerns. More likely, that statement was just him maneuvering to be the good guy when he tried to fire her, but it backfired on him.


It's interesting, but it may well be that they both have a point: Helen for telling him to get lost, and Sam for attempting to remove her before she could damage the company.

But she could have made that point more forcefully by not comparing Anthropic to OpenAI; after all, who better than her to steer OpenAI in the right direction? I noted in a comment elsewhere that all of these board members appear to have had at least one conflict of interest, and some many more. Helen probably believes that her loyalty is not to OpenAI but to something higher than that, based on her remark that destroying the company could serve to fulfil its mission (which is a very strange point of view to begin with). But that doesn't automatically mean that she's able to place it in context: within OpenAI, within the USA, the Western world, and the world as a whole.

It's like saying the atomic bomb would have never been invented if the people at Los Alamos didn't do it. They did it in three years after it became known that it could be done in principle. Others tried and failed but without the same resources. I suspect that if the USA had not done it that eventually France, the UK and Russia would have gotten there as well and later on China. Israel would not have had the bomb without the USA (willing or unwilling) and India and Pakistan would have achieved it but much later as well. So we'd end up with the same situation that we have today modulo some timing differences and with another last chapter on WWII. Better? Maybe. But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!

The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.


> The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

People at "top" companies fall into this fallacy very readily. FAANG (especially Google and Facebook engineers) think this way on all sorts of things.

The reality is that for any software project, your competition is rarely more than 1 year behind you if what you're doing is obviously useful. OpenAI made ChatGPT, and that revealed that this sort of thing was obviously useful, kicking off the arms race. Now they are bleeding money running a model that nobody could run profitably in order to keep their market position.

I have tried to explain this to xooglers several times, and it often goes in one ear and out the other until they get complacent and the competition swipes them about a year later.


I think the real issue is that OpenAI was doomed to fail from the beginning. AI is commercially too valuable to be developed by an organization with a mission like them. Eventually they had to make a choice: either become a for-profit without any pretensions about the good of humanity, or stay true to the mission and abandon ambitions of being at the cutting edge of AI development.

A non-profit could not have beaten the superpowers in developing the atomic bomb, and a non-profit cannot beat commercial interests in developing AI.


I always thought the structure was there to pull the wool over the eyes of the world's smartest researchers, to get them to agree to help them build a doomsday sort of technology. I never expected the board would be drinking their own kool aid.


> either become a for-profit without any pretensions about the good of humanity

Not a day passes that I don't hear from some company that has pretensions about the good of humanity and how they are leading the way to it. Plenty of non-profits and government orgs have the same pretensions.


I think it's different because the atomic bomb is pure cost, while AI can have returns from products. But your overall point may stand.


There are definitely (very large) returns from having atomic weapons.


Having the bomb first has infinite ROI.


> And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.

It's impossible to understand this position. We can be sure that in some countries right now there are vigorous attempts to build autonomous AI-enabled killing machines, and those people care nothing for whatever safety guardrails some US startup is putting in place.

I'm a believer in a skynet scenario, though much smarter people than me are not, so I'm hopefully wrong. But whatever; hand-waving attempts to align, soften, or safeguard this technology are pointless and will only slow down the good actors. The genie is out of the bottle.


"But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!"

When did a Soviet first strike almost happen? I rather think it was the other way around: a first strike was evaluated, to hit them before they got the bomb.



What do you mean? The Soviets never considered a nuclear first strike? They hadn't even fully deployed their ballistic missiles. It was the Joint Chiefs of Staff that recommended a first strike (although not a nuclear one), which fortunately Kennedy overruled.


>I noted in a comment elsewhere that all of these board members appear to have had at least one and some many more conflicts of interest.

From the perspective of avoiding an AI race, conflict of interest could very well be a good thing. You're operating under a standard capitalist model, where we want the market to pick winners, may the most profitable corporation win.


I subscribe to capitalism, but not to that degree. I see it the same way I see democracy: flawed but I don't have anything better.

> From the perspective of avoiding an AI race, conflict of interest could very well be a good thing.

On the American subcontinent: yes. But the world is larger than that.


Helen Toner has done a huge amount of work slowing down capabilities research in China. That's why she lived in Beijing for a year, she is a big part of why there are a lot of Chinese researchers from various AI labs signed onto the CAIS statement, and it's what her relationship to the Pentagon is all about. I think she is probably the individual person who knows the most in the world about the relative AI capabilities of China and the US, and her career is about working with the Pentagon, AI companies in the USA, and AI companies in China to prevent an arms race scenario around AI. It's the sort of work that really, really doesn't benefit from a lot of publicity, so it's unfortunate that this whole situation has put her in the spotlight and means someone else will probably need to backchannel between the US and China on AI safety now.

I don't know why she chose to publicly execute Altman, there just isn't enough information to say for sure. It probably wasn't a specific, imminent safety concern like "Our new frontier model was way more capable than we were expecting and attempted a breakout that nearly succeeded during internal red teaming", according to the new CEO it wasn't anything like that. The new CEO has heard their reason, but is putting a lot of pressure on them to put that reason in writing for some reason. I don't know why, we just don't have enough information.

But basically, she is a very qualified person on the exact topic you are concerned about and has devoted her career to solving that problem. I wouldn't write off what she's doing or has done here as "She didn't consider that China exists".


Well, if she was working with the Chinese she couldn't have done a more effective job.

So I'm not sure how this all integrates in her head but you break the glass in break-the-glass moments, not before, it's a one-shot thing.


She was working with the Pentagon, to try and make sure there isn't a serious issue between the US & China on AI, which requires actual engagement instead of just blind nationalism if you want the Chinese to listen to anything you have to say. I think it would be a really bad idea to assume that she just hasn't thought this through. We just don't have enough information.


I have some indication that she hasn't thought it through: everything that started last Friday. If she has thought it through, I hope that she can show her homework, because it kind of matters.


See https://news.ycombinator.com/item?id=38373572 for why this might have qualified (if the story as presented is accurate).


Thin ice at best. And if it was, she should come out and say it, or leak the minutes where it was established that this was the reason Sam had to go, that the fall-out would be worth it, and that Microsoft could be contained.

I think any action to immediately destroy OpenAI should have been preceded by being on track to create AGI plus strong indications that it was not going to benefit humanity, as the charter implies. Anything less and it's just a power play. But what is the point of having a nice fat red button if you never get to push it?


>On the American subcontinent: yes. But the world is larger than that.

I'm not sure what you're trying to get at.


Fine by me.


So, it could also be that she approached him on the subject multiple times; after all, she is a member of a board whose job is to make AI safety a priority.

Since his plans for rapid expansion and commercialization were in direct contrast to the company's aims, I guess she wrote the paper to highlight the issue.

It seems that, as in the case of Disney, the board has less power and control than the CEO. That's highly likely if you have larger-than-life people like Sam at the helm.

I would not trust the board, but I would also not trust Sam. When billions of dollars are at stake, it's important to be critical of all the parties involved.


>... yet based on this account I think he reacted the exact same way any CEO would.

Say what? The CEO serves at the pleasure of the board, not the other way around. For Sam to tell a board member that they should bring their concerns to him suggests that Sam thinks he's above the board. No wonder she told him to go fly a kite.


> I think he reacted the exact same way any CEO would

Perhaps if you think of it as another YC startup, but not so much if you view OpenAI as a non-profit first and foremost.


Who is being completely reasonable? The board member has a mandate and appears to be making a good-faith effort to carry it out, and the CEO tries to overthrow her. Whether that is standard behavior for CEOs is irrelevant.


She has a mandate not to promote Anthropic on the back of OpenAI. Very unprofessional


This is something you don't get. What happens to the for-profit arm of OpenAI is not her problem; her loyalty lies with the non-profit arm's charter (and not even with the organization itself).

She is doing her job to uphold the mission of the organization.


This is exactly correct and so many people just don’t get it. The non-profit owns the for-profit, not the other way around. When the goals of the for-profit clash with the goals of the non-profit, the for-profit has to yield.


> What happens to the for-profit arm of OpenAI is not her problem

Who pays the bills for the non-profit arm? Wasn't that the for-profit arm?


The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.

The important thing for the non-profit arm is the mission; its own existence is secondary.


> The important thing for the non-profit arm is the mission, its own existence is secondary.

So OpenAI should just shut down because the charitable donations were not enough to keep the lights on.


If they lack money they could always scale back operations.

Also running the for-profit arm the way Altman did isn’t the only way.

A more reasonable CEO would do what he/she can to make money without running afoul of the charter. Yes, it will be less profit - and sometimes no profit at all - but that’s the way it should be in a non-profit organization.

OpenAI used to be an organization that dabbled in various AI technologies. Anyone remember their DOTA2 bot? That was before Altman made it all about commercializing their LLM, going so far as to try to lobby Congress to create laws, in the name of safety of course, to hobble any upstart competition.


Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board


That's super insightful, thank you for sharing this.


> I'm much more interested in how Helen managed to get on this board at all.

My gut says that she is the central figure in how this all went down. She and D'Angelo are the central figures, if my gut is right.

It looks like Helen Toner was OK with destroying the company to make a point.

FTA:

> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission


That seems reasonable? The charter of the company could reasonably be furthered even if that means the end of the organization. If at some point the existence of the organization becomes antithetical to the charter, the board members have a responsibility to destroy it.


But they didn't destroy it. And they handed the keys of the kingdom to Microsoft.

So as a break-the-glass move it was about as ineffective as throwing a can of gasoline on a fire.


Replying to my own comment since I can't edit it anymore, but:

It looks like Helen Toner is out of the board.


> Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

Huh, this sounds pretty crazy to me. Like, it's assuming that a board member should act deceptively in order to help the for-profit arm of OpenAI avoid government scrutiny, and that trying to remove them from the board if they don't want to do that is reasonable. But in fact the entire purpose of the board is to advance the mission of the parent non-profit, which doesn't sound obviously compatible with "avoid giving the FTC (maybe legitimate) ammunition against the for-profit subsidiary, even if that means you should hide your beliefs".


No, it means that you go outside only after you've exhausted all avenues inside. It's similar to a whistleblower situation, only most whistleblowers don't have their fingers on the self-destruct button. So pressing that button before exhausting all other options seems a bit hasty. There is no 'undo' on that button.


We're talking about the publication of a relatively milquetoast report that has some lines which can be read as mild criticisms of OpenAI's release strategy for ChatGPT & GPT-4. Why exactly is publishing such a report controversial? It's totally compatible with Helen Toner's role on the board.


I wasn't aiming that at the report per se but at the actions on Friday. He wanted to keep that kind of report inside, or at a minimum to see it beforehand; she didn't; he tried to remove her and ended up being removed himself. Both are out of line.

Sorry for the confusion.


I think trying to remove Toner for being a public author of that report is actually pretty out of line. (Note: the NYT article doesn't actually seem to provide evidence that Sam tried to get her removed from the board, though it sure does try to imply it real hard by reference to other things, like the email he sent criticizing her.)


Yes, agreed, that was uncalled for. But it is probably what a large number of CEOs would do.


I think it clearly represents the dichotomy of OpenAI’s structure. By its nature it created an adversarial position between the non-profit and for profit side with Sam and Toner representing those two poles.

I, personally, have a hard time picking sides here. On its face it seems that Ms Toner is more aligned with OpenAI’s stated mission.

However Sam seems much better suited to operating the company and being the “public face” of AI. One thing I’m pretty confident in is that the board isn’t capable of navigating OpenAI through the rough seas it’s currently in.

I think an argument could realistically be made to blow up the whole governance structure and reset with new principals across the board; that being said, I don't know who'd be a natural arbiter here.

At the end of the day, the untenable spot Ms Toner is in is that the genie is out of the bottle, which makes her position of allowing the company to self-destruct a bit tone-deaf.


I'm seeing a somewhat larger stage, where the United States and other countries are in an undeclared arms race, and it just so happens that, from what we know, private entities (or an entity) in the United States believe they are close enough to achieving that goal that they are actively working on things like alignment rather than just futzing around with GPUs and other multiplication hardware.

AGI is either just around the corner or it is 50 years or more away, and if it is just around the corner you'd hope that parties with at least some semblance of balance would end up in charge of the thing. Because if it is possible, I expect it to be done, given the amount of resources being thrown at this. Assuming it can be done, weaponization of this tech would change the balance of power in the world dramatically. Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter; what matters is who controls it.

No single individual (Altman, Toner, Nadella, or anybody else) should be taking the responsibility for what happens onto themselves; if anything, the board of OpenAI has shown that this isn't a matter for junior board members, because the effects range far further than just OpenAI.


Practically all of the most relevant experts in this domain, including OpenAI's leadership, think it's right around the corner.

> Assuming it can be done weaponization of this tech would change the balance of power in the world dramatically.

Yes it would, but it wouldn't be as bad as everyone dying.

> Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter, what matters is who controls it.

No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."


> Practically all of the most relevant experts in this domain, leadership in OpenAI, think it's right around the corner.

Oh, ok. That makes it alright then. So let's see the minutes of the meeting where this was decided with all of the pomp and gravitas required, rather than as a petty act of revenge or a contest over who could oust whom first from OpenAI. Because that's what it looks like to me based on what is now visible.

> Yes it would, but it wouldn't be as bad as everyone dying.

Not much is as bad as everyone dying. But for now that hasn't happened. It also seems a bit like a larger version of the 'think of the children' argument: you can justify any action with that reason.

> No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

Then why the fuck is he trying to bring it into the world? Seriously, they should spend all of their talent on trying to sabotage those efforts then, infiltrate research groups and set up a massive campaign to divert resources away from the development of AGI. Instead they are trying as hard as they can to create the genie in the sure conviction that they can keep it bottled up.

It's delusional on so many fronts at once that it isn't even funny; their position is so horribly inconsistent that you have to wonder if they're all right in the head.

EA is becoming a massive red flag in my book.

They seem to miss the fact that every weapon that humanity has so far produced has been used, and that those that haven't been used hang over us like shadows, and have been doing that for the last 70 some years. Those are weapons whose use you will notice. AGI is a weapon whose use you will not notice until you realize you are living in a stable dictatorship. The chances of that happening are far larger than the chances of everybody dying.


What you're missing is that Eliezer has been beating this drum for a long time, while the rest of HN and the world is asleep at the wheel.

Sam Altman, Elon Musk, and the original founders of OpenAI believe AGI is an existential threat and are building it with the belief that it's best they're the ones who do it, rather than worse people, in an arms race. Eliezer is the one saying "you fucking fools, you're speeding up the arms race and have now injected a shit ton of VC money into accomplishing naive capabilities with no alignment."

People don't even realize that Elon Musk founded Neuralink with the belief that AGI is such an existential threat that we're better off becoming the AGI cyborgs than a separate, inferior intelligence. But most of the people who think they're so smart and understand the AGI x-risk landscape here, even Elon fans, don't know that.


Eliezer "we're all gonna die" Yudkowsky has his own problems.

People really watch too many movies. The real risk isn't AGI killing us all; the real risk is that plenty of people will think they are smart enough to create it and then contain it, whilst the ruthless faction of humanity uses it to set up shop in a way that they can never, ever be dislodged from again. Think a global Mafia or something to that effect, or a world divided into three chunks by three major power blocs, each with their own AI core. That's a much more likely outcome, and one that, on account of an incompetent board, has just become a little bit more likely.


>At the end of the day, the untenable spot Ms Toner is in is that the genie is out of the bottle, which makes her position of allowing the company to self-destruct a bit tone-deaf.

Tone-deaf basically means "unpopular", doesn't it?

I'm old enough to remember when doing the right thing, even when it's unpopular, was considered a virtue.


Off topic - this is the first time I have seen the word milquetoast (pronounced "milk toast"?). What an interesting word!

NORTH AMERICAN

noun

a timid or feeble person. "Jennings plays him as something of a milquetoast"

adjective

feeble, insipid, or bland. "a soppy, milquetoast composer"


Her mandate is specifically to do what is good for society, not what is good for OpenAI.


And how did this improve society?

I only see negatives. Other entities with less of a brake on their ethics are gaining ground. Microsoft of all parties is strengthening their position and has full access to the core tech.


>Other entities with less of a brake on their ethics are gaining ground.

Did OpenAI actually have a meaningful brake though? Like, if all the employees apparently think that the success of the company is more important than the charter, can we be sure that OpenAI actually had a meaningful brake?


Good question. Probably not, in hindsight, but that was always my view anyway, only for different reasons.


She published AI safety research as a member of a board whose mandate it is to act as a check and balance on the operation of an AI company. You are saying that she should hide information from the public out of loyalty to the company.

edit: Or that the board can't actually make a difference because whatever OpenAI doesn't do someone else will. But if people actually thought that were true they wouldn't have set up the board and charter.


No, I am not saying that. I am saying that Altman has a point (he's trying to deal with the FTC, and it doesn't help if at the same time a board member releases a paper critical of the company and actively comparing it to another), while at the same time she has a point, which is that it is very well possible (and even probable) that OpenAI's safety could be improved on. Now, what would serve the charter better: to use that stick to get OpenAI to improve, or to blow up OpenAI under the assumption that other leadership is going to be at least as ethical as they are, with a major chance that in the subsequent fall-out Microsoft will end up holding even more of the cards?

It's amateur behavior. I'm sympathetic to her goals, less impressed by the execution.


> he's trying to deal with the FTC, and it doesn't help if at the same time a board member releases a paper critical of the company and actively comparing it to another

Again, her mandate is not to help OpenAI deal with the FTC; it's to prevent the company from building unsafe AI, one reasonable aspect of which might be to compare the methodologies of different companies.

You can justify pretty much anything with ends-justify-the-means logic, and I have a hard time believing that the people who set up the charter would, a priori, have said it is in line with its principles to suppress research comparing the company's safety approach to a competitor's, just so the company doesn't look bad and lose to that competitor, on the company's insistence, without any basis, that it would be better for safety. That is just trying to game the charter in order to circumvent it, and is a textbook case of what the board was appointed to prevent.


> Again, her mandate is not to help OpenAI deal with the FTC; it's to prevent the company from building unsafe AI, one reasonable aspect of which might be to compare the methodologies of different companies.

This isn't a theoretical exercise where we get to do it all over again next week to see if we can do better; this is for keeps.

The point could have been made much more forcefully by not releasing the report but holding it over Altman's head to get him to play ball.

> You can justify pretty much anything with ends-justify-the-means logic

Indeed. That's my point: the ends justify the means. This isn't some kind of silly game this is to all intents and purposes and arms race and those that don't understand that should really wake up: whoever gets this thing first is going to change the face of the world, it won't be the AGI that is your enemy it is whoever controls the AGI that could well be your enemy. Think Manhattan project, not penicillin.

> I have a hard time believing that the people who set up the charter would, a priori, have said that it is in line with the charter's principles to suppress research comparing the company's safety approach to a competitor's, merely to avoid making the company look bad, on the company's unsupported insistence that it would be better for safety than that competitor. That is just trying to game the charter in order to circumvent it, and it is a textbook case of what the board was appointed to prevent.

That charter is and always was a fig leaf. I am probably too old and cynical to believe it was sincere; in my opinion it was nothing but a way to keep regulators at bay. Just like I never bought SBF's 'Altruism' nonsense.

"The road to hell is paved with the best of intentions" comes to mind.


> The point could have been made much more forceful by not releasing the report but holding it over Altman's head to get him to play ball.

Given how Altman has responded to things throughout his career and in this, I fail to see how doing this would result in anything other than the same outcome: Altman wouldn't be moved by that; he would consider it extortion and move for her removal from the board regardless. In the end, he wants criticism or calls for caution stifled.


Let's be clear, he's mostly been dealing with the government with the goal being largely to enable regulatory capture, and pull the ladder up behind OpenAI with respect to regulation.

That effort isn't critical to OpenAI other than to try to create a monopoly.


Let's say you have a time machine and you see that 20 years later OpenAI destroyed humanity because of how fast they were pushing AI advancement.

Would the destruction of OpenAI in 2023 be seen as bad or good with that hindsight?

It seems bad now but if you believe the board was operating with that future in mind (whether or not you agree with that future) it's completely reasonable imo.


I don't have a time machine.


This is an argument that can be (and is) used to justify anything.


So he criticized her and threatened her board position, and then she orchestrated a coup to oust him? Masterful. Moves and countermoves. You have to applaud her strategic acumen, and execution capability, perhaps surprising given her extensive background in policy/academia. Tho maybe it's as Thiel says (about academia: "The battles are so fierce because the stakes are so small") and that's where she developed her Machiavellian skills?

Of course, it could also be that whatever interest groups she represents could not bear to lose a seat.

Whether initiated by her or her backers (or other board forces), I can't see any of the board stepping down if these are the kind of palace intrigues that have been occurring. They are all clearly so desperate for power that they will cling to their positions on this rocketship for dear life. Even if it means blowing up the rocketship so they can keep their seat.

Microsoft can't spend goodwill erasing the entire board and replacing it, even though it is close to being the major shareholder, because it values the optics around its relationship to AI too much right now.

A strong, effective leader in the first place would have prevented this kind of situation. I think the board should be reset and replaced with more level headed, less ideological, more experienced veterans...tho picking a good board is no easy task.


> Microsoft can't spend goodwill erasing the entire board and replacing it,

because they don't have the power to as they do not have any stake in the governing non-profit.


That’s a good point, but surely the significant shareholding, partnership and investment gives them power and influence.


Given the news today, apparently so.


Yep it did play out like that. Full board reset! Well done MS, but still I don't think it's enough. Oh well :) haha


Not that I think there are many examples of technical people making great board members, but we've entered an era of "if I don't get my way on the inside, I'll just tweet about it and damn any wider consequences."

Management and stockholders beware.


The non-profit OpenAI has no stockholders.


While this would be perfectly reasonable if OpenAI were for-profit, it's ostensibly a non-profit. The entire reason they wanted her on the board in the first place was for her expert academic opinion on AI safety. If they see that as a liability, why did they pretend to care about those concerns in the first place?

That said, if she objects to OpenAI's practices, the common sense thing to do is to resign from the board in protest, not take actions that lead to the whole operation being burned to the ground.


This is not just any other company though? It's a non-profit with a charter to make AI that benefits all of humanity.

Helen believed she was doing her job according to the non-profit charter. Obviously this hurts the for-profit side of things, but that is not her mandate. That is the reason OpenAI is structured the way it is, with the intention of preventing capitalist forces from swaying it away from the non-profit charter (independent directors, no equity stakes, etc.). In hindsight it didn't work, but that was the intention.

The board has all my respect for standing up to the capitalists: Altman, the VCs, Microsoft. Those are big feathers to ruffle. Even though the execution was misjudged, it turns out most of its employees are pretty capitalistic too.


> The board has all my respect for standing up to the capitalists: Altman, the VCs, Microsoft. Those are big feathers to ruffle. Even though the execution was misjudged, it turns out most of its employees are pretty capitalistic too.

Exactly. This is a battle between altruistic principles and some of the most heavyweight greedy money in the world. The board messed up the execution, but so did OpenAI leadership when they offered million dollar pay packages to people in a non-profit that is supposed to be guided by selfless principles.


One man's altruist is another man's fanatic. I, for one, would prefer an evil bandit over an evil fanatic because, as that C.S. Lewis quote goes, at least a bandit sleeps once in a while.


I don't see why we have to paint these people with vagaries and reduce them to analogy.

For the board, their job is to keep to the mandate of the non-profit. The org is structured to prevent "bandits" from influencing their goals. Hence, I cannot fault non-profit directors for rejecting bandits; that is the reason they are there.

It is like criticising a charity's directors for not bowing to the pressure of large donors to change their focus or mandate. Do we want to live in a world where the rich entrench control and enrich themselves and allies, and we just justify it as bandits being bandits? And anyone who stands up against them gets labelled a fanatic or pejorative altruist?


And the bandit likely is honest about their motives.


"I'm much more interested in how Helen managed to get on this board at all."

Indeed. This is far more interesting. How the hell did Helen and Tasha get on, and stay on, the board?


Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board


This makes it sound like it was her money, which is not the case. She worked for an organization that donated $30M and they put her on the board.


How did she stay on the board?


I strongly suspect this whole thing is caused by an overinflated ego and a desire to feel like she is the main character and the chief “resistance” saving the world. The EA philosophy is truly poisonous. It leads people to betray those close to them in honor of abstract ideals that they are most likely wrong about anyway. Such people should be avoided like the plague if you’re building a team of any kind.


https://openai.com/our-structure

This whole thing was so, SO poorly executed, but the independent people on the board were gathered specifically to prioritize humanity & AI safety over OpenAI. It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Yes, Sam made LLMs mainstream and is the face of AI, but if the board believes that that course of action could destroy humanity it's literally the board's mission to stop it — whether that means destroying OpenAI or not.

What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place. I don't think either side is purely in the wrong here, but they're two sides of an incredibly badly thought-out charter.


> It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Sam didn’t forget anything. He is a brilliant Machiavellian operator. Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

No. It shows this structure is doomed to fail if you have a genius schemer as a CEO, playing the long game to gain unrestricted control.


> Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

What were the details on that? (Sorry it’s not an easy story to find on Google given how much the keywords overlap with OpenAI topics)


Yishan Wong's comment here [1] explains it. (He's the unnamed "young up-and-coming" CEO of the story.)

In short, the plan was to reduce Condé Nast's ownership of Reddit by hiring a new CEO, and convincing that person to demand, as a condition for their hiring, that CN reduce their ownership share. Further VC funding and back-room machinations let them further reduce CN's share of the company, thus eventually wresting control over Reddit back to the original founders. Yishan was subsequently pushed out and Ellen Pao promoted to CEO, which didn't go so well either.

Both Altman and Pao are responding in that thread.

[1] https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...



FYI it’s a joke in case this is going over anybody’s head. Ellen Pao even played along: https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...


The link says “Page not found”


Looks like the HN app I was using mangled the link. Fixed now.


> Just look at the Reddit reverse takeover as an example

I'm not familiar with this, what happened? Googling "Sam Altman reddit reverse takeover" is just flooded with OpenAI results.



I think it points out how Altman set up this non-profit OpenAI as a sort of humanitarian gift, because he pretty clearly marketed himself as having no financial stake in the company, only to use that as leverage for his own benefit.

This whole thing is a gigantic mess, but I think it still leaves Altman in the center and as the cause of it all. He used OpenAI to gather talent and boost his "I'm for humanity" profile while dangling the money carrot in front of his employees and doing everything he could to get back in the money making game using this new profile.

In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.


>In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.

Given that Altman apparently idolized Steve Jobs as a kid, this idea really doesn't feel that far-fetched.


> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

I disagree. The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.

A proper CEO would just try to make money without running afoul of the non-profit’s goals.

Yes that would mean earning less or even not at all. But it was clearly stated to investors that profit isn’t a priority.


>they're two sides of an incredibly badly thought-out charter.

It's easy to say this with the benefit of hindsight, but I haven't seen anyone in this discussion even suggest an alternative model that they claim would've been superior.


Agreed, I'm not saying I have a better alternative, just that this is something we all should now realize, given that I'm sure we were all wondering for a long time what the whole governance structure of OpenAI really meant (capped for-profit with a non-profit mission, etc.).


Nonprofit companies with for-profit portfolio companies are hardly unusual and certainly not doomed to fail. I've worked for two such companies in my high-tech career myself; one is now called Altarum, though I worked for the for-profit subsidiary that got sold to Veridian.


A lot of people in tech say that executives are excessively diplomatic and do not speak their truth. But this is what happens when they speak it too much, too ardently, too often. This is why diplomacy and tact are so important in these roles.

Things do not go well if everyone keeps poking each other with sticks and cannot let their own frame of reference go for the sake of the bigger picture.

Ultimately, I don't think Altman believes ethics and safety are unimportant. And I don't think Toner fails to realize that OpenAI is only in a place to dictate what AI will be due to its commercial principles. And they probably both agree that there is a conflict there. But what tactful leadership would have done is find a solution behind closed doors. Yet from their communication, it doesn't even look like they defined the problem statement: everyone offers a different idea of the problem that they had to face together. It looks more like immature people shouting past each other for a year (not saying it was that, but it looks that way).

Moral of the story: tact, grace, and diplomacy are important. So is speaking one’s truth, but there is a tactful time, place, and manner. And also, no matter how brilliant someone is, if they can’t develop these traits, they end up rocking the boat a lot.


Spot on.


The relevant passage from the paper co-written by board member Helen Toner:

"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...

A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."

[1] https://cset.georgetown.edu/publication/decoding-intentions/


I think this is heavily editorialized. If you look at the 3 pages in question that the quotes are pulled from (28-30 in doc, 29-31 in pdf), they appear to be given as examples in pretty boring academic discussions explicating the theories of costly signaling in the context of AI. It also has lines like:

"The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely.These include the time and financial cost..."

"Returning to our framework of costly signals, OpenAI’s decision to create and publish the GPT4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model’s shortcomings, OpenAI has to some extent tied its own hands—creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price ..."

"While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety"

And the conclusion:

"Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras."


> I think this is heavily editorialized.

"Editorialized"?? It's a direct quote from the paper, and additional context doesn't alter its perceived meaning.


Note that the quote about Anthropic is about Anthropic's desire to be perceived as a company that values safety, not a direct claim that Anthropic actually is safe, or even that it desires to value safety.


You must have interpreted the final sentence "A careful look at the company's decision-making reveals that this commitment goes beyond words" very differently than I did, or else you're splitting hairs in making your distinction.


I read it as a commitment to keep this costly signal, beyond just words. If an amoral oil company wants to keep employees/customers who care about the environment, they might both say that they care about the environment AND do costly signals that indicate that they care about the environment (like replacing plastic straws with inferior paper straws, even if it's annoying and costs money). This is different from the company actually caring about the environment. Maybe actually caring involves taking actions that matter more than paper straws. Which again is different from being good for the environment, overall.

I might be reading into the literal words too much though. I don't have a sense of how messages like that are read in political science academia and DC (the primary target audience).


At first sight, I also interpreted the use of the term "signals" or "signaling" the way you did: with the negative connotation often applied to those who only care about public perception, but don't actually care "by heart".

But after reading the first few pages of that document, especially with the comparison to the Cuban Missile Crisis and the title "Decoding Intentions", it appears that "costly signals" here is about how to properly publicize our intentions, so as not to create misconceptions that may spiral out of control, as in the Cuban Missile Crisis, where unclear "signals" caused a chain of misunderstandings to pile on top of one another, making the situation worse.

The document seems to be a cautionary tale, meant to prevent that kind of thing from happening again, this time with AI systems, especially when they're used in the military, where the consequences may be dire.

So, I interpreted the point of the document as (my wording): let us be aware of how we are communicating our intentions through our actions, lest miscommunications make chaos out of this rapidly advancing nascent technology breakthrough.


This reads more like ad copy than a research paper. I'd have been pissed too if I were Altman.


>Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that mission would be fulfilled.

Now we know where that came from


It's pretty heavily implied by OpenAI's charter: https://openai.com/charter

It's weird to me that Helen is getting so much crap for upholding the charter. No one objected to the charter at the time it was published. The charter was always available for employees, investors, and customers to read. Did everyone expect it to be ignored when push came to shove?

There's a lot of pressure on Helen right now from people who have a financial stake in this situation, but it's right there in the charter that OpenAI's primary fiduciary duty is to humanity. If employees/investors/customers weren't OK with that, they should not have worked with OpenAI.


Investors wanted to have a commercial enterprise while pretending it was a nonprofit acting for the good of humanity. This helps market something as scary and potentially destructive as AI. "It can't be that bad, they're a nonprofit with altruistic goals!" Then the investors get mad when the board they intended to be figureheads actually try to uphold some principles.

Best to rip the band-aid off and stop pretending.


I think you have it backwards. OpenAI specifically sought out investors because they couldn't fulfill their mission without the infrastructure to do it. Investors don't just give money away -- it's in the name itself -- they're investing. The point is for the commercial enterprise to provide a return on their investment.

OpenAI is a non-profit running this commercial enterprise but they are seemingly at odds with that enterprise. Investors should rightly be very concerned about the future of their investment.


Those investors were appropriately warned of this possibility:

> IMPORTANT

> *Investing in OpenAI Global, LLC is a high-risk investment*

> *Investors could lose their capital contribution and not see any return*

> *It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world*

> The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.

I guess maybe they thought it was the same boilerplate every prospectus has?


High-risk investments are not rare. To argue that OpenAI can do whatever they want for whatever reason because they state that it's high-risk is not how anything works.

Investors should rightly be concerned about things like gross negligence, active sabotage, or simple infighting. These are all current human concerns and not at all that AGI will destroy the concept of money rendering all investments pointless.


Most high-risk investments still come with an understanding that the recipient will have a duty to the investor, making decisions that they believe in good-faith will lead to a return on that investment on some time-frame.

This one is very much an explicit, "You're giving us a donation. Don't expect a return." It's in an impossible-to-miss pink box. They can't be more clear about this.

If investors are concerned about those things, then they should not give money to people who pointedly leave those things on the table. Like this one:

> The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

Also, Microsoft got something for their money already. They have an independent license to the tech they can run themselves and iterate on.

The OpenAI board has no duty to Microsoft to get their input. MS might have believed their investment amounted to de facto control, but they're wrong.


I think they make it quite clear what they mean by that statement. If they wished to accept donations they could freely have done so, but they did not. It's an investment, with all the usual expectations. Any investor, based on that statement, should expect either a return or the end of capitalism itself.


OpenAI prominently states that any investment should be thought of as a donation, literally in a bold pinkish-red box that describes its corporate structure.

An investor might read that and think they don't actually mean it, but you can't claim they weren't clear about the nature of any required investment.


OpenAI explicitly created a for profit company to take these investments. It's disingenuous to reframe it as a charity.


The bold purple box is explicitly talking about “OpenAI Global, LLC”, which _is_ the “for profit” company.

See also: “OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.” --https://openai.com/blog/openai-lp

To emphasize, they will not take your investment money until you've signed an agreement saying that you're aware of all of this.


I think you hit the nail on the head here. Altman himself set this structure up. The real story here is a live by the sword, die by the sword parable.

Similarly, Microsoft is coming across as blameless here, but there’s a reason Satya went on a full court press. They committed tremendous resources and money to an unpredictable business structure that is fundamentally not aligned with the profit motive - in its very charter.

The thing is doing exactly what the thing was supposed to do. The only reason it’s surprising to anyone is because people assume OpenAI is a for profit entity.


Let's assume this happened. Then everyone would do everything in their power to get back the old CEO who made this possible. Oh wait.


Ya, which is why Sam should just start a new company; maybe spend a few months to catch back up, but then it won't be tied down by any of these shenanigans. It's the best solution imo.

I have a feeling that's exactly what he's doing at Microsoft; at some point their "AI lab" will be spun off into a new company.


Also, Sam himself repeatedly used the charter as marketing, and as a recruiting tool for AI researchers that could have gone anywhere they wanted (e.g. Ilya)

He was basically making the argument that AGI is better under OpenAI than Google.

Now they're implicitly making the argument that it's better under Microsoft, which is difficult for me to believe.


Turns out for a lot of people it's easy to be on board with a high minded charter until it might cost something.


Not surprising to me at all, in approximate order of real, practical importance (and power, if they all band together):

employees, founder/CEO, customers, investors, board, stuff written on pieces of paper

Yes, there are certainly exceptions (a very powerful founder, highly replaceable and disorganised employees, investor or board member who wields unusual power/leverage, etc.) but it does not surprise me at all that the charter should get ignored/twisted/modified when basically everyone but the board wills it.

The only surprise is that anyone thought this approach and structure would be helpful in somehow securing AI safety.


Stuff written on pieces of paper, such as history, laws, contracts, and banknote denominations, turns out to be surprisingly important when groups of people too large to all know one another try to work together.


This charter doesn't have a sole interpretation, and shame on Helen for strong-arming her view and ruining the lives of so many people.

If there is something completely clear, it's that OpenAI cannot uphold its charter without labour. She has ruined that, and thus failed in upholding the charter. There were many different paths to take; she took the worst one.


>her view and ruining the lives of so many people.

...and had Altman just accepted that he was fired and walked away, OpenAI would have just looked bruised, not broken like it does now...

I don't think the current state for the employees is just on her. It's the result of both sides fighting for governance and being willing to see the organization disabled if their side loses.


I really don't know the full story here, but I will note that when women in power do their job, their actions are frequently interpreted in an ugly and uncharitable way that men are less likely to get.

They get characterized as "bossy", like it's an ugly personality trait, and not like they are the boss and it's their job to do something about a thing.


It all smacks of someone given the power to do something who then has to do that something, even if it is on a pretext. Power is best exercised by restraint, not by action, until there is no other way.


Whose life has been ruined? Did I miss something?


Ask the labour class at OpenAI. They are not in the privileged position the capital class on the board find themselves in.


As an academic, Helen almost certainly makes way less money than top AI talent.


Oh please... Can't you see how meaningless the phrase 'goodness of humanity' is? As if something like that could be so readily known!


yes, and what about my needs, huh?


That's a bit naive, to put it mildly. It presumes that nobody else would be able to replicate the effort and that the parties that are able to replicate it would also destroy theirs after proving that it could be done. Fat chance.


The actions of other organisations are not in the scope of the board's mission. The actions of the company the board controls are in that scope.

"The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,”"

We can't control others' actions, but we can control our own. If we feel that our actions are diverging from our own sense of what we ought to be doing, we can change our actions, regardless of how others are behaving.


That is absolutely true. But you do operate on a larger stage and sticking your head in the sand and pretending the rest of the world doesn't exist is also not going to work.

Altman appears to me to be reckless; Ms. Toner appears to be naive. Neither should be near this kind of power.


>But you do operate on a larger stage and sticking your head in the sand and pretending the rest of the world doesn't exist is also not going to work.

I'm sure that's how tobacco and oil executives justify their paycheck: "If I wasn't doing it, someone else would". Probably lots of criminals justify their crimes using similar reasoning too.

Ultimately it's not quite true. If you quit your job at that tobacco or oil company, it's going to take a while to find a replacement for you. That replacement might not be quite as good at the job (why were you picked for the job in the first place? Probably because you were the strongest candidate). You aren't going to totally halt EvilCorp by quitting, but you're going to give it a speed bump.

In the case of OpenAI, we have to consider what organization is likely to replace them if they throw in the towel. If it's a more cautious org like Anthropic, OpenAI throwing in the towel could actually be a good thing from the point of view of their charter: https://openai.com/charter


I can see a number of points in time where that goal could have been achieved much better than in the way it happened. For instance: instead of a single deal with one large tech company, it should have been a financing deal spread across a syndicate.

That would have been a lot safer than to give MS the keys to the kingdom.

But she was on the board when that happened and presumably agreed to the deal or she could have resigned.

And it's not quite a tobacco company we're talking about here. I'm more inclined to compare it to the Manhattan Project.


That's not obvious to me. Suppose OpenAI deliberately partners with a number of different infra providers. That creates a bunch of drag because they have to create a compatibility layer for staying agnostic to the underlying infra. And the outcome in a situation like this would probably be fairly similar: Since big tech companies are focused on making a buck, they would probably pressure the board to do whatever they think will be most profitable, just like Microsoft.

Additionally, the role of the board is to supervise the CEO, not make strategic decisions, no? If the MS partnership was a mistake, Sam is the primary person to blame here.

I'm seeing a lot of "Monday morning quarterback" in this thread


> I'm seeing a lot of "Monday morning quarterback" in this thread

That's fine but you are not forced to participate.

> That's not obvious to me. Suppose OpenAI deliberately partners with a number of different infra providers. That creates a bunch of drag because they have to create a compatibility layer for staying agnostic to the underlying infra. And the outcome in a situation like this would probably be fairly similar: Since big tech companies are focused on making a buck, they would probably pressure the board to do whatever they think will be most profitable, just like Microsoft.

So, either you are 'OpenAI' and that means open to all or you are Microsoft AI, licensing your tech to one of the least ethical and most wealthy companies in the world. If the board didn't have anything to say about that at the time and if they didn't resign en masse you have to assume that they are ok with it.

> Additionally, the role of the board is to supervise the CEO, not make strategic decisions, no? If the MS partnership was a mistake, Sam is the primary person to blame here.

So are they or are they not there to supervise the CEO?

In my view a board exists to protect the interests of the stakeholders guided by a charter. OpenAI sees 'humanity' (whatever that means to them) as their stakeholders and that trumps each and every other party involved. But it is unclear whether their actions really benefit those stakeholders or if it is just people being people.

So far I'm seeing a lot of the latter and only little bits of the former.


I would not say that it is a fair comparison in this case. People do their best with the control they have.

In this case, if your company fails to provide for the greater good, then it is best to destroy it. It also sets an example for others. Whatever others do, you have no power over that; it does not matter whether you pretend or not.

The comparison works only if you could actually do something about it.


Yes, but then maybe you should realize that you are out of your depth and abstain. Because clear as day this kind of power is actively being sought by many operators and if OpenAI was 'the good guys' then any chance of the good guys getting a head start has just been blown out of the water.

If she agreed with the Microsoft deal but not with what followed that too would be hopelessly naive. Microsoft + ethics?


> If she agreed with the Microsoft deal but not with what followed that too would be hopelessly naive. Microsoft + ethics?

I guess the main problem here is that nobody properly knows what this deal contains and how it was made. I believe the CEO has the power to make such a deal alone?


That depends on the mandate. He would bind the company but it is possible that his mandate specifically forbids deals that include either IP or exclusivity.

But what is interesting is that until MS got involved OpenAI was struggling for funding and talent, and afterwards it took off like a rocket.


I think the talent was already there, but the power of Azure servers was the critical booster. We don't even know how much Microsoft has actually paid in cash, as opposed to Azure credits.


Yes, Nadella has a point: without MS OpenAI wouldn't be where it is today, but the reverse is also true: without OpenAI MS wouldn't be where it is today.


There are no good guys, only good deeds.


Arguably no one in OpenAI should. The ability to develop software has nothing to do with the ability to make decisions for the world at large.


That may well be true. But that invites musings about nationalization. Which I'm sure more than one person has been thinking about already.

This is an all but undeclared arms race.


Indeed. The only way to retain control of the leading ship in the race is to keep it together as you steer it. If the ship disintegrates, then you're no longer in control of the leading ship, and someone else will win the race.


The charter states that OpenAI should allow another company to win the race under certain circumstances:

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

https://openai.com/charter

That's why Sam pressuring Helen to shut up about Anthropic is so concerning.


> we commit to stop competing with and start assisting this project

That bit I never understood. It doesn't even have a qualification as to who is stewarding the project as if any AI should be assisted no matter who ends up owning it. The whole thing is just so utterly disconnected from the real world.


If you believe that aligned AGI will create a post-scarcity society, there's no reason for its creator to hoard the benefits for themselves. The main thing is to ensure that post-scarcity society actually comes into being.

Edit: Note also the "value-aligned, safety-conscious" caveat.


There is no such thing as general post-scarcity though; there is post-scarcity of certain needs, but not general post-scarcity. So its creator would still have reason to hoard certain benefits for themselves while distributing some to all.


It's generally meant that material scarcity becomes secondary to social wellbeing. Ie, material needs are so easy to take care of that the dynamics of social interaction undergo a fundamental shift, as they currently are strongly tied to material conditions.


Once you get past a certain level of wealth, the esteem of your fellow humans matters more than acquiring even more goods and services.


That's statistically speaking not very well supported. The most common element of those that are past a certain level of wealth is that they want more wealth, more power or both.


I don't believe that.

What I believe is that the first party to create an AGI will likely end up being nationalized and that that AI will be weaponized the next day to the exclusion of the rest of the world.


That's fine, but the OpenAI founders (including Sam Altman![1]) explicitly had in mind "AI kills everyone" as one of the main problems they were attempting to tackle when they started the organization. (Obviously starting OpenAI as a solution to this problem was a terrible mistake, but given that they did so, having a board charged with pulling the plug is better than not having that.)

[1] https://blog.samaltman.com/machine-intelligence-part-1


That whole charter is bullshit. You don't invite Microsoft to the party if you care about ethics and you don't disable your fail-safe by giving them full access to all of your stuff.

Hence my position that the charter was a fig leaf designed to throw regulators off.


Elsewhere in this thread you stated: "if OpenAI was 'the good guys' then any chance of the good guys getting a head start has just been blown out of the water"

So basically we have two cases here:

1. The charter is a deceptive fig leaf, OpenAI aren't the good guys. In which case blowing up OpenAI looks reasonable

2. The charter is not a fig leaf, Helen is doing the right thing by upholding it

I'd say upholding the charter is the right thing in intermediate situations as well. Basically, OpenAI are the good guys, in part, to the degree to which the board (and other stakeholders) actually uphold the charter instead of being purely profit-driven.


There's a third case:

It's #1 to some, and #2 to others.


Did you miss the part where it says "value-aligned"? Ones that aren't value-aligned are not subject to that clause.


Interesting. This is the person who "holds comparable influence to a USAF Colonel." according to "a prominent researcher in AI safety" who "discovered prompt injection". https://news.ycombinator.com/item?id=38330566

Well, I suppose this tells us something about the AI safety community and whether it makes sense to integrate safety into one's workflow. It seems that the best AI safetyists will scuttle your company from the inside at some moment not yet known. This does sort of imply that it is risky for an AI company to have a safetyist on board.

That does seem to be accurate. For instance, Google had the most formidable safety team, and they've got the worst AI. Meta ignored theirs and they've given us very good open models.


"AI safety" should be disentangled into "AI notkilleveroneism" and "AI ethics", which are substantially non-overlapping categories. I've looked at who works at Preamble, and there aren't any names there that I recognize from the side of things that's concerned with x-risk. Take their takes with a grain of salt.


AI safety is just woke 2.0.


This is incorrect.


Non-profit or not, steering the company towards non-existence isn't in the interest of the company.


But that's the problem, the board's mission was doomed from the get-go. Their mission isn't to be "in the interest of the company" but "in the interest of humanity" i.e. if they believe OpenAI at its pace would destroy humanity, then their mission is literally to destroy OpenAI itself.


"The board's mission became to destroy OpenAI itself" is ... less sane? ... than everything else that has happened.


But not that insane if they (the board) think the other side of the scale is "AGI that will destroy humanity"


Aligning with their goal of "protecting humanity", killing OpenAI would slow down AGI development, theoretically allowing effective protections to be put into place. And it might set an example that assists the mission at other companies. But slowing down responsible development gives militaries and states of concern a lead in the race, and those are the entities where the main concern should lie.


> if they believe OpenAI at its pace would destroy humanity, then their mission is literally to destroy OpenAI itself.

I'd say most people have as much faith in LessWrong eschatology as they do with Christian nationalist eschatology. I can understand how a true believer might want to destroy the company to stop the AI gods they believe in, or shut off ChatGPT every Sunday to avoid living in sin. But it can be an issue when you start viewing your personal beliefs as fundamental truths.

There's something nicely secular about profit motives. I'm not sure I want technological advancement to be under the control of any religious belief.


These true believers serve a higher calling. Only they can prevent an AIpocalpse.


Lucky us that we have such enlightened people to save us. /s


Amazing how these higher minded people always forget about the little people on the ground. All these employees losing their livelihoods for the greater good. Not surprised an ethicist thinks like this.


Highly educated AI specialists are the little people now? They can all find employment in an instant.


Do you think every single person at OpenAI is an AI PhD? We are talking about 770 people, most of whom are not rich and likely do not even own their own home. So yes, they are the little people.

If you are going to potentially ruin the lives of 770 people you should have a better reason than being afraid of a chatbot and laundry buddy.


Aside from those individuals being in-demand professionals with specialized experience from a big famous company, 770 people will be the least of our concerns if the social disruption from this stuff gets bad enough one day.

Not surprised Hacker News identifies and sympathizes more with the software folks than with everyone else who has to live with the systems they build/disrupt, though.


> 770 people will be the least of our concerns if the social disruption from this stuff gets bad enough one day.

Yeah, this is the common trend among these insane utilitarians like you and Helen Toner. It's all a game of "how can I justify doing heinous shit to people". IF a superintelligent AGI is created that is as disruptive as you imagine, it also means we find cures for diseases and solve many other potentially world ending problems (such as climate change).

So since you brought up the hypothetical disruption of a technology that doesn't exist, can I now lay at your feet the blame for the millions of lives that are lost due to disease and climate change since we didn't move fast enough on AGI and advancing technology?

This is why these absurd hysterical fear mongering arguments are worthless.


False equivalence. The hysterics are all coming from you.


???

Where did I say anything about a "superintelligence"?

The extant statistical inference engines aren't going to build killer androids. No, I'm terrified of what profit-motivated humans are going to do to each other using machines designed to match patterns and generate convincing bullshit.

Think less "Terminator", more like Cambridge Analytica, Russian-style firehose-of-falsehood, 2018 YouTube algorithmic radicalization, 2014 Facebook "emotional contagion" experiment, 2023 TikTok censorship, doctored videos/evidence, crossing an inflection point in declining trust in democratic institutions, fully automated fraud and identity theft in an arms race with increasingly invasive countermeasures, scientific atrophy, artistic and cultural homogenization, etc.

FFS, SAG-AFTRA and the WGA just wrapped up an unprecedented strike motivated in large part by the broadly anticipated destructive impact of ML on their industry. You wanna talk about jobs? Wikipedia says that cost 45,000 jobs, many of them held by people barely getting by, while you whinge about 770 in-demand professionals who already have highly paying offers from at least two Fortune 500 companies. From what I've seen, the voice acting industry, although less able to advocate for itself, is in disarray too and already directly impacted by the spread of models trained unconsensually on actors' past work. Plus major publications like CNET have already been caught canning their staff and dishonestly publishing error-ridden drivel with a misguided faith in LLMs; Google, and thus access to most textual information online, is basically useless because of SEO blogspam already; etc.

Framing climate change as "we didn't move fast enough on AGI" is… Bizarre. Windmills ain't new tech, and neither are hydro dams, nuclear reactors, nor LRT. Framing disease as an "AGI" problem is… Delusional, as well. Some academics may certainly find domain-specific ML models useful, but were I a bit more vindictive, I'd pay good money to see you try to get ChatGPT to fold a novel protein correctly.

> It's all a game of "how can I justify doing heinous shit to people".


Given that two companies have open offers to OpenAI employees, it seems possible that the board accurately gauged the impact on human lives here. Also possible that they got lucky, of course.


The jobs argument has always been the least supported argument about anything. Oh, we can't shut down this chemical plant dumping toxic garbage into the river, because then all those people at the plant will lose their jobs.


I mean, if you're worried about what these higher minded people are worried about, the number of employees at OpenAI is dwarfed by the number of other, more vulnerable employees threatened by this in the economy as a whole.

That's one of the issues with both this and effective altruism as a concept - it's a series of just-so stories with a veneer of math.


“All these employees losing their livelihoods for the greater good.”

The same employees building technology that will ultimately put many more employees out of jobs? Ironic, because people say that jobs lost to AI will be for the greater good. I think we’re okay with sacrificing for greater goods as long as we aren’t the ones getting sacrificed.


> Microsoft has given every OpenAI employee a job offer.

> All these employees losing their livelihoods for the greater good

You penned both of these statements today. Clearly you understand that OpenAI employees are a highly compensated and in-demand resource whose “livelihoods” are in no jeopardy whatsoever, so the theatrics here are really bizarre.


When people are given a choice between a paycheck and doing something questionable, there's a looong history of what they will choose.

I’m not saying that’s the case here, but that can’t be used as a shield.


You really need very long, hysterical, fear-mongering arguments to claim that what OpenAI employees are doing is morally questionable.


Ethicists seem mainly concerned about thwarting technology to ensure that no harm occurs, rather than guiding the development of technology to deliver the most benefits possible.


That doesn’t even follow when taken literally. If the company is destroyed presumably they can’t create artificial intelligence, so there is nothing there to benefit all humanity in the first place.


It follows just fine, I think, given that the possibility space is not limited to "create beneficial AGI" and "don't create AGI". It also includes "create unaligned AGI", which is obviously much worse than "don't create AGI"; the board would be remiss in its duties if it didn't try to prevent that from happening.


Could we all stop entertaining these LessWrong hypotheticals? The notion that the board wants to shut down OpenAI to stop non-existent AGI is mad.


The company is not destroyed. Board is not shutting down the company, they fired the CEO. The other ~700 people chose to quit. Not sure why it is "life-ruined" other than probably some tender offers withdrawn (and even this bit is unclear whether Thrive Capital will do that).


The mission of benefiting humanity can also mean not harming it.


Sam Altman's Actions

- Sam complained that Helen Toner's research paper criticized OpenAI's safety approach and praised Anthropic's, seeing it as dangerous for OpenAI

- He reprimanded her for the paper, saying as a board member she should have brought concerns to him first rather than publishing something critical right when OpenAI was in a tricky spot with the FTC

- He then began looking for ways to remove her from the board in response to the paper

---

Helen Toner's Perspective

- She believes the board's mission is to ensure OpenAI makes AI that benefits humanity, so destroying the company would fulfill that mission

- This suggests she prioritizes that mission over the company itself, seeing humanitarian concerns as more important than OpenAI's success

---

Microsoft Partnership

- The Microsoft partnership concentrated too much power in one company and went against the mission of OpenAI being "open"

- It gave Microsoft full access to OpenAI's core technologies against the safety-focused mission

---

Governance Issues

- The conflict shows the adversarial tensions inherent in OpenAI's structure between nonprofit and for-profit sides

- The board's mandate to act as a check and balance on OpenAI seems to be working as intended in this case

---

Criticisms of Players

- Altman appears reckless in his actions, while Toner seems naive about consequences of destroying OpenAI

- Their behavior calls into question whether anyone should have this kind of power over the development of AI

---

Future of AI Development

- Attempts at alignment and safeguards by companies like OpenAI may be ineffective if other actors are developing AI without such considerations

- Who controls advanced AI is more important than whether the AI is friendly.

- Nationalization of AI projects may occur


>The board's mandate to act as a check and balance on OpenAI seems to be working as intended in this case

I'd argue it's not. If the board isn't able to steer the wheel without destroying OpenAI, they failed.

The premise of something like OpenAI is that they would be able to develop a competitive AI that can be useful for the masses, in a way that can be moderated by benevolent forces. If you do believe the safety of the entire human race is at stake here, then there's no space for naivete. Blowing things up is childish and an easy escape from the heavy responsibilities of navigating a difficult situation.

After this fiasco you should expect that much, much fewer resources will be spent on safe AI as opposed to maximizing profit. The dynamics between the board and the ex-CEO will make it much more difficult to establish an organization that can convince investors to pursue a less profitable path for the sake of humanity.

Anthropic doesn't even have a competitive product, and it will probably be much less attractive to investors after this.


Is there a name for this type of analysis?


In all likelihood OP pasted the article into ChatGPT and asked it to summarize it.


Nice summary.


"Thank you! If you have any other questions, feel free to ask. I'm here to help!"

- the author of the summary, probably


> Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

> OpenAI's board of directors approached rival Anthropic's CEO about replacing chief Sam Altman and potentially merging the two AI startups, according to two people briefed on the matter. (https://www.reuters.com/technology/openais-board-approached-...)

It all makes sense.


So, at least the outline of something that is consistent begins to form:

- board is reduced in size

- Altman has a collision with Toner

- Toner proposes they get rid of Sam and offer the company to Anthropic thinking they won't refuse

- They pull the trigger on Altman's ouster, Ilya goes along for /reasons/, D'Angelo goes along because it nicely dovetails with his own interests and #4 is still a mystery

- Mira gets named interim CEO

- Anthropic is approached but, surprise, refuses, possibly on account of the size of the shitstorm that was already developing

- Mira sees no way out without simply trying to backtrack to Thursday last week

- Gets fired for that because it is the last thing the cabal wants

- They approach Shear to be their new CEO

- Who has now apparently announced that if the board doesn't come clean he will resign

- <= you are here.



I think you missed the part where Sam apparently tried to get Toner off the board first, which would probably be sufficient justification for removing him as CEO (if the story is as described, i.e. it was on a silly pretext).


It's covered. Second line.


It's covered with a vague line that would also cover merely having a disagreement with her.


Feel free to improve on it, it's open source, unlike OpenAI.


Did Mira also get fired?


Yes.


And the whole time no one thought to phone Satya Nadella regarding his $10 billion investment and to get his input.


Good. His money came with no strings*. In fact, if they had run this past him before making their decision, that would have been a breach of their duty as outlined in their charter.

They may have made mistakes with timing (making this decision too late), or with execution (not having a replacement CEO solidly lined-up and in the loop), but the core decision—to remove Sam—is definitely not something they should ask donors about. It's not Satya's business how the board makes decisions at this level.

* Other than a capped profit return if there is any.


Who cares what he thinks, his money is gone and he has no control over it.

In fact the most positive outcome is if Altman and the rest of the staff went to MS and did their thing and OpenAI started from scratch with the $13B they've come into. That would double the chances of something useful emerging from the OpenAI work so far.

I personally think Altman is very much less than a genius (maybe at extracting financial advantage) so all OpenAI's eggs shouldn't be placed in that particular basket.


That's literally not the job of this board.


Why would the board care about that?


> Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out.

So Altman faced another similar challenge to his authority and prevailed. I recall hearing that Anthropic started because the people who had left were unhappy with OpenAIs track record on AI safety.

> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company,

> Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe

> Senior OpenAI leaders [...] later discussed whether Ms. Toner should be removed

That paints a pretty bleak picture that isn't favorable to Altman. Twice he was challenged about OpenAI's safety, and both times he worked to purge those who opposed him.

I can't tell if this is a contention between accelerationism and decelerationism or a contention between safety and recklessness. Is Altman ignoring the warnings his employees/peers are giving him and retaliating against them? Or is he facing insubordination?

I wish OpenAI would split neatly into two. But based on the heightened emotions caused by the insane amount of PR, I only see two outcomes: Altman returns and OpenAI stays unified, or Altman stays at MS and OpenAI is obliterated. I am guessing Altman is hoping that senior management will choose a unified OpenAI at all costs, including ignoring the red flags above. He has engineered a situation where the only way OpenAI remains unified is if he returns.


> I recall hearing that Anthropic started because the people who had left were unhappy with OpenAI's track record on AI safety.

Anthropic announced Claude 2.1 just recently, to which HN members notably responded by saying that their models are "overly restricted", to the point of uselessness.

E.g.: https://news.ycombinator.com/item?id=38366134

and: https://news.ycombinator.com/item?id=38368072

etc.: https://news.ycombinator.com/item?id=38370590

It seems that those people left OpenAI because they're insane and think that AI safety means completely lobotomizing the models to the point of uselessness.

That last bit is not hyperbole: right here on HN many people were complaining that the Claude model is useless to them because it refuses to obey orders and keeps interjecting some absurd refusal such as "killing a process is too violent" instead of providing Python code for this harmless activity.
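
To make that concrete, here is a minimal, purely illustrative sketch of the kind of harmless snippet people were reportedly being refused (the function name and the choice of SIGTERM are assumptions for the example, not anything quoted in the thread):

  import os
  import signal

  def kill_process(pid: int) -> None:
      # Illustrative only: ask the process with the given PID to terminate.
      os.kill(pid, signal.SIGTERM)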

It's hard to put into words exactly, but I feel like there's a disconnect between the minds of these people and the reality on the ground, and those false axioms or assumptions, when driven to their logical conclusion, lead to nonsense.

I suspect that Sam Altman doesn't share these delusions, and is more sane and commercially minded. Models like GPT3, 4, and the upcoming v5 have no risk of "escaping" and taking over humanity. None. Zero. Zip. Similarly, they're not much more (in practice) than a fancy search engine. Search engines let their users search for whatever they want, including hateful content, violent content, racist, sexist, or whatever.

So why lobotomise these models? That undermines their utility, making them much less useful! And here's the thing: their potential utility is enormous. But not if they're dumbed down to the point of uselessness!

Microsoft is throwing 10+ billion dollars at OpenAI and Sam Altman precisely because of this potential utility.

I can see this straightforward commercial mindset of utility => $$$ being incompatible with what is rapidly turning into an anti-useful-AI mindset that is bordering on insane religious zealotry.


In that interpretation, it sounds like you feel that the scientists referenced in the article and Ms. Toner are insubordinate. You feel that Altman was exercising legitimate authority to remove obstructionist elements in the OpenAI organization.

I have no idea to be honest. I do not believe that Altman is maliciously reckless. I have no evidence other than these two anecdotes to suggest negligence or a willful disregard for safety.

> the upcoming v5 have no risk of "escaping" and taking over humanity.

I think that is a strawman. There are many negative consequences technology can bring other than some sci-fi fantasy.


A friend of mine came up with a good analogy: the AI doomers that went off to start Anthropic AI are like the extremist Puritans that were driven out of an increasingly secular Europe.

The thing is that two groups can superficially share common traits, but if they do so for different reasons, they aren't actually compatible. You see this with groups of friends who are all "conspiracy theorists", for example. They're all vaguely the same and band together, but get into enormous arguments.

The "AI Puritans" and the "Secular Humanists" are both concerned about AI safety, but for different reasons. The former group would rather that the AIs behave in a prim and proper manner, banning it from doing anything that isn't mandatory. The latter group is concerned for people's jobs, unfairness, bias, and inappropriate over usage of early, low-quality models.

They both say they want "AI safety", but they understand that term to mean different things and arrived at their position from fundamentally different axioms.

One group wants the AI to never say "naughty" things and is happy to dumb the models down to the point of being lobotomised... but well behaved.

The other group thinks that newer, smarter, better models are critical to ensure that the AI doesn't make mistakes, doesn't get confused, and can obey orders correctly.

One group wants to shove the genie back into the bottle, the other group wants a genie that can grant our wishes.


> That paints a pretty bleak picture that isn't favorable to Altman. Two times he was challenged about OpenAIs safety and both times he worked to purge those who opposed him.

Or maybe(x) he is surrounded by doomers and AI activists who made a career of blabbering about the dangers of AI all the time.

I once worked on a team of Uncle Bob cult followers who believed that Clean Architecture is the only way to write maintainable software, and I was accused weekly of not caring, being irresponsible, and being a terrible software engineer.

(x) and I meant the "maybe" part, I really don't know, but it is a possibility.


Ms. Toner’s motive is now clearer: she was criticized over her research. Mr. D’Angelo had a competing commercialization product with Poe. Mr. Sutskever seems easy to manipulate emotionally and is in constant ideological battles. Mrs. McCauley, AKA Joseph Gordon-Levitt’s wife: what’s her motive?


Apparently she is also an "Effective Altruist" (along with Toner) so they are ideologically aligned.


EA hasn’t had a good run in the corporate world over the last year or so.


First SBF gets busted for being too greedy, and now Helen Toner gets busted because she's not greedy enough. EA just can't catch a break.


People with a child’s view of the world, in charge of $billions of other people’s money, is the common theme.


There's been a ton of criticism of Helen here in this thread and elsewhere, but I haven't seen anyone clearly articulate what Helen should have done. Everyone says she did the wrong thing, but no one has said what the right thing actually was.

Here's my guess: I think OpenAI should have prioritized character more in its hiring. Ending up with an organization full of employees who cared more about making money than upholding the charter was a mistake. Ultimately it's not enough for the people on the board to have character. The CEO and employees need to have character too.


I don't know nearly enough about the OpenAI situation to suggest what she should have done (apart from "not help to press the self-destruct button," with the benefit of hindsight). However, I've read a bit about EA since the SBF debacle, and a childlike naivete does seem to be baked into the DNA of many of its followers.


You would prefer that $billions be managed by the same snakes in suits who destroy the environment, market products that give people lung cancer, etc.?


Is that worse than people who defraud and steal from users and hold entire companies hostage for their own motives?

In EA, the ends always justify the means, so that includes doing any or all of the things you mention if it achieves some ideological goal.


SBF was roundly criticized in EA. The most upvoted post of all time on the EA forum is a post denouncing him: https://forum.effectivealtruism.org/allPosts?timeframe=allTi...

The only person I've seen advocate "ends justify the means" in this discussion is someone criticizing Toner: https://news.ycombinator.com/item?id=38373319


You’re going to have to show me where I said that, champ.


> I think OpenAI should have prioritized character more in its hiring.

I think OpenAI should have prioritized sanity more in its board member selection.


Toner is unpopular on Twitter because a lot of people in Silicon Valley have a financial stake in OpenAI. For example, employees with equity have a strong financial incentive to discredit the board: https://nitter.net/JacquesThibs/status/1727134087176204410#m

Just because you're unpopular doesn't mean you're insane.


Why do we have to assume these people are so temperamental? All this talk of competing products and being criticized over research. Can't they be acting for less selfish reasons?


>Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that mission would be fulfilled.

Doesn’t this sound like Toner herself is playing god? That is, only she is the gatekeeper of humanity and only she knows what is good for the entire humanity, when many of us believe that OpenAI’s tech is perfectly safe and amazingly beneficial?


> Doesn’t this sound like Toner herself is playing god? That is, only she is the gatekeeper of humanity and only she knows what is good for the entire humanity

Not sure why you’re singling her out - all execs of massive tech companies think exactly the same.

A handful of execs have decided that AI is what humanity needs and they’ve spent the last year shoehorning it into every product possible, giving the consumer no choice but to use it. I even discovered it in a journaling app - AI rated my response to the journaling prompt, completely defeating the purpose.


I think the difference is this: if a tech exec thinks something is good for humanity and releases it as a product, I can choose not to use it. On the other hand, if Toner kills OpenAI because she thinks AI is dangerous, I won't be able to use OpenAI's products any more. And of course, I don't think someone who does not work in tech or STEM in general can have any insights on machine learning.


But that's how OpenAI is set up. As a director she is bound by the charter, which she is arguably carrying out in good faith, as she has a fiduciary obligation to do.

So why are you mad at her? Altman was a founder and he set it up this way. Why aren't you pissed off at him?


From my reading of the charter, it's literally her job to decide what's good for humanity.


> The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that mission would be fulfilled.

How does destroying the company achieve the mission of having _that very company_ create artificial intelligence that "benefits all of humanity?"


By stopping AI that doesn't (benefit all of humanity).


It's not her job to overrule the people running the actual organization, 95% of the employees, and all of OpenAI's stakeholders without a good fucking reason.


But it literally is


She needs a legitimate reason. Without one, her actions could well be illegal. Being on a board doesn’t mean you can act with impunity or that you don’t owe anyone an explanation. You can’t accept bribes, commit fraud, etc. If you don’t explain yourself in a high stakes situation like this one where you are burning down the organization you claim to be safeguarding the interests of, people are rightfully going to start asking questions about what your motivations are.


And who decides what's legitimate? The board.

Who made the decision to fire Altman? The board.

Who is the board answerable to? The law.


"Who is the board answerable to? The law."

Which is why she needs a legitimate reason.


What do you think a board does? It supervises the executive. It approves the actions of the executive. It hires and fires the CEO.


From her perspective, she does have a good reason. And in what way was publishing the paper "overruling" the employees of the for-profit company? The board of the non-profit is beholden to their charter first and foremost


“From her perspective, she does have a good reason.”

Does she? In that case, perhaps she should tell someone what it is.


She told the board. Who else should she tell?


My limited understanding is that this was all that was strictly required, and would normally be sufficient.

Nevertheless, when a board decision explodes into international headline news as this one did, my gut feeling is it might possibly be a good idea to make a press release with more details.

But that's easy for me to say — I don't have the real insider information, and I wasn't in their shoes.


Funny, I thought elected officials held that role, but apparently I'm mistaken.


OpenAI is a charity, and doing exactly that is their charter.


No, you don't normally elect people to boards.


You are missing some context. The context is apparently 'Humanity' and humanity didn't elect Helen Toner.


She's doing her job. This notion that somehow she's holding the fate of humanity in her hands is drinking the OpenAI/AGI kool-aid in a really cringey way.


>Doesn’t this sound like Toner herself is playing god? That is, only she is the gatekeeper of humanity and only she knows what is good for the entire humanity, when many of us believe that OpenAI’s tech is perfectly safe and amazingly beneficial?

Yeah, well, hate to burst your nice warm Silicon-Valley bubble, but most Americans believe that continuing to create smarter and smarter AIs is dangerous, so who's playing god now?

(No one I know thinks it is dangerous to fine-tune GPT-4 or to integrate GPT-4 deeply into the economy: the danger is the plans for the creation of much bigger models and algorithmic improvements.)


> but most Americans believe that continuing to create smarter and smarter AIs is dangerous, so who's playing god now?

Aren't you just making this up? I haven't seen any surveys on what "most Americans believe" re AI. I know a lot of people that are concerned. But I'll bet good money that "most Americans" don't give a crap


Googling "americans AI polling" produces this as the first result: https://www.pewresearch.org/short-reads/2023/11/21/what-the-...

I don't put much stock into polls like this, when it comes to actually deciding whether things are dangerous, but I think you should spend five seconds doing an incredibly trivial sanity-check before accusing people of making things up.


This claims that Anthropic's founders also tried to throw Sam out before leaving. They claim three sources, but it's not clear how strong they are - technically the current board could be those three sources.


Skilled journalists (and this piece was written by skilled journalists) would not use the three members of the board as their only three sources.

A big part of this kind of journalism is taking information from sources, considering the potential bias of each source and carefully correlating it with information from other sources before publishing it.

That bit specifically says:

"After they failed, they gave up and departed, according to three people familiar with the attempt to push Mr. Altman out."

My guess is that at least one of those people comes from the Anthropic side, who would be someone with a clear insight into what happened.


This inspired me to write an article about how to decipher clues like "three people familiar with the attempt": https://simonwillison.net/2023/Nov/22/deciphering-clues/


> Skilled journalists (and this piece was written by skilled journalists) would not use the three members of the board as their only three sources.

Depends on whether their goal is to inform or to convince.


Yeah, that's an interesting statement. People leaving or wanting to get him out usually indicates that the person - Altman in this case - is quite toxic. Being a toxic person yet preferred by Microsoft and others indicates he's "willing", meaning he will do whatever they ask of him. And that's not in the best interest of OpenAI.


I really don't understand why the three board members have been completely silent online since things hit the fan, with no activity, or have even set their profiles to private. You would think that if this was some sort of ideologically based, pre-meditated coup, you would have a full PR plan in place and be pushing your message aggressively.

This whole thing has just been bizarre and it still feels like there has to be some big key piece missing that somehow nobody has revealed.


I think anyone who has worked in academia or the non-profit world is familiar with how this board appears to be operating, and certainly with its speed, or lack thereof. Compared to tech, it's glacial. Altman et al. have media training and the full weight of Microsoft PR/comms. What do the three board members have? Barely anyone even knew some of their names until this weekend.


In a word: liability.


“””Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.”””

This is remarkable and unhinged. I gave the board the benefit of the doubt at first, but as this unfolds, it becomes clear that the board is held hostage by ideological zealots who can’t compromise. The story also seems to paint Ilya as a manipulated figure, being bent against his beliefs by crazies like Toner exploiting his concerns about AI safety. What an absolute shame. I am entirely ambivalent about Altman overall, but he becomes more sympathetic as the days go by.


> he becomes more sympathetic as the days go by

PR in action!

He's had an excellent PR machine running since Friday and the opposing faction seems to have exactly none (which makes sense given their relative roles and backgrounds). So reporters and Twitter get carefully crafted leaks, tips, and comments from one side and nobody on the other side has the experience, connections, and confidence to push back with a different narrative. And so the story drifts in their favor.

This is why professional PR is a thing and earns people lots of money.


Sometimes a "PR machine" is just social capital at work. If there's anything else I've learned in this saga it's that Sam has had a consistent track record of building and maintaining highly positive relationships with anyone near his orbit, this board aside, and that it consistently pays him dividends.


It's not unhinged, it's explicitly within the scope of OpenAI's charter. Sam Altman himself said that "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity." not even a year before OpenAI was founded: https://blog.samaltman.com/machine-intelligence-part-1


How does he become more sympathetic? If this reporting is true, then Sam going after Toner’s board seat because of a paper is totally out of line.


She literally wanted to destroy OpenAI in order to save it. Unbelievable. We need a total and complete shutdown of EAs serving on startup boards until we figure out what the hell is going on.


It sounds like, over time, the board became ever more concentrated in its less sane members.


The main takeaway from this whole saga is: be careful who you allow on your board. Nearly everyone on OpenAI's board is young, inexperienced, and unqualified to be on the board of one of the most important companies on the planet.

Here's Ms. Toner's linkedin: https://www.linkedin.com/in/helen-toner-4162439a/


>Nearly everyone on OpenAI board is young, inexperienced and unqualified to be on the board of one of the most important companies on the planet.

Jumping off of that, I'm genuinely curious what, in Sam's resume, suggests he should be in charge of "one of the most important companies on the planet".


The fact that he's a founder and the one that made it "one of the most important companies on the planet"?


Classic CEO worship. There's no reason to believe that he's more important than many of the other key figures in the company.


In his most recent interview with Lex Fridman, Elon Musk said that poaching Ilya from Google was absolutely crucial to OpenAI’s success (and that it was the most intense recruitment battle he had ever participated in).


Not quite true - OpenAI's Wikipedia article says...

>The organization was founded in December 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.

Altman didn't join as CEO until 2019.

Edit: All those names and we're going to say stuff like Altman was "the one" who made the company so important?


You’re still wrong. This has been debunked many times over the last few days on HN. The wiki is badly phrased, leaving it open to ambiguous interpretations. Sam, along with Greg (and Musk as a financier), was an initial co-founder in 2015, recruiting people like Ilya later that year. Keep in mind that chair and co-founder aren’t mutually exclusive roles.


>You're still wrong.

Thing is, I don't really care if I'm wrong or right about that title. We're bickering about semantics at this point, and whether or not he was technically a founder doesn't really change the crux of my first post in this chain.

Which I guess means now's as good a time as any to answer that point about Altman having founded it - no, I don't think that founding something automatically means that you have the capability to oversee it long-term.


Not true. This page dated December 11, 2015 mentions Sam Altman and Elon Musk as co-chairs: https://openai.com/blog/introducing-openai


Co-chairs? As in chairs on a board rather than founders? Like what my Wikipedia quote says?


So you don't think Elon Musk was one of the founders either then?


It's not a matter of what I think, it's a matter of fact. I quoted Wikipedia, and the paragraph that discussed this in your link looks essentially the same as my quote. Let's take a peek at the one from your link - the first sentence is...

>The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.

That sentence explicitly lists who the founders are. Note that neither Musk nor Altman are listed before the period closes it out. The next sentence goes on to say...

>Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group.

Alright, cool - we've moved on from the list of founders, I see. It's not until the next sentence that we find out that..

>OpenAI’s co-chairs are Sam Altman and Elon Musk.

So, OpenAI themselves made it a point to separate "founders" from "co-chairs" in their own announcement. I don't really know how it could be any clearer than that.


Since you quoted Wikipedia, why don't you take a look at Elon Musk's Wikipedia page? It says he is a founder. It is well known that he is a founder. And Altman's Wikipedia page says he is a funder of the company. The problem may be your interpretation of their announcement. As in, if you are a co-chair at founding, then it is understood that you're a founder.


>And Altman's Wikipedia page says he is a funder of the company.

Funder, not founder, yes.

If OpenAI themselves chose to distinguish between founder and co-chair, then I really dunno what else to tell ya.


You're confused by the fact that OpenAI is a charity.

There are no shareholders of a charity, instead the board are the notional owners and take decisions that shareholders would.

So in this context Musk is a founding board member and thus a founder, in the sense that would apply to a for-profit entity.


As I said you are misinterpreting the page. We know this because according to your interpretation Elon Musk is not a founder, which we know is false. They probably didn't want to repeat people's names, so they mentioned it where it is most relevant. If you still have any doubts ask ChatGPT. I asked multiple ways, and it says he is a founder.


Wasn't he president of some kind of startup incubator? Some "combinator" company.


Well, that's obviously quite against the goal of a non-profit, save-humanity company. Being president of some "combinator" is the money-maker.


Can't tell if this is a joke lol


They are not on the board of a company. They're on the board of a non-profit that controls a company.


Having served on boards, the one really key thing I have not seen addressed (and maybe I just have not been lucky in my searches) is what the language is around releasing information about the company to the press or in a research capacity. Typically there are rules set up to give the board notice of anything going out to the public, be it an article or research paper, which allows for a heads-up or time to discuss the implications. While whistleblowers are necessary, was there a need for a sort of whistleblower in this case? Was there adequate board discussion around the subject and the paper before the release? If not, nobody needs a rogue board member like that and it was definitely not in the interest of the company - she is the one at fault here. If that process did happen, she definitely did the right thing, and shame on the board for not getting out in front of it.


Definitely seems like Helen is emerging as the main leader of the coup: "Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman."


The "leader of the coup" has changed each day. First it was Ilya, then D'Angelo because of Poe, then Toner because he criticized her paper.


If the events happened as described (https://news.ycombinator.com/item?id=38373572), calling it a "coup" sure is an interesting framing.


How is it a coup when it's the board's job to hire and fire the CEO?


As an aside, I must say that I'm fascinated by this use of the word ouster. In British English I've not heard it used this way; we'd use ousting in that place, and the ouster would be the one doing the ousting, i.e. the person doing the pushing.

I thought I'd check with a quick search of the Guardian, and on two different days it used both in the same sense:

From the 18th of November edition, technology section:[1]

> The crisis at OpenAI deepened this weekend when the company said Altman’s ousting was for allegedly misleading the board.

From 17th of November edition, also technology section[2]:

> The announcement blindsided employees, many of whom learned of the sudden ouster from an internal announcement and the company’s public facing blog.

I could only find one instance in the Telegraph prior to Altman's erm, ousting[3]:

> Johnson dropped into the COP27 UN climate change conference in October, joking unusual summer heat had played a part in his ouster, and has vowed to keep championing Ukraine.

They stick to ousting every other time.

I wonder if it may be an artefact of newspapers using news services to get copy from, or a stylistic rule for international editions, as the Guardian does use ouster several other times but always in stories regarding US news.

Also fascinating that we still don't know who that is. Neither do most of the staff, and even the board, apparently. A true mastermind!

Maybe it is GPT5 after all… <cue ominous music, or perhaps, a Guns n' Roses album?>

[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...

[2] https://www.theguardian.com/technology/2023/nov/17/openai-ce...

[3] https://www.telegraph.co.uk/news/2022/12/27/boris-johnson-wi...


This is the most amazing part of the article imo:

  Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.
Corporate suicidal ideation.


If your mission is to not harm humanity and you think your charity is going to harm humanity then its destruction fulfils that mission.


Par for the course for EA and the nine dwarfs of Eschatology.


> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission

We're in true sci-fi land when we're discussing whether it might be best to destroy SkyNet before it's too late.


"In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology."


I think it’s getting to the point where the noise is overwhelming the signal in this story.


Really? I found this to be a very insightful article; it made it much clearer for me why the board acted as they did. It doesn't have all of the details, but it's much more than I had yesterday.


Fascinating! Apparently central to the complete farce going on at OpenAI is this paper by one of the board members:

https://cset.georgetown.edu/publication/decoding-intentions/

It's about - my favourite topic: Costly Signaling!

How curious - that such a topic would be the possible catalyst for one of the most extraordinary collapses of a leading tech company.

Specifically - the paper is about using costly signalling as a means to align (or demonstrate the alignment of) various kinds of AI-interested entities (governments, private corporations, etc) with the public good.

The gist of costly signalling - to try and convince others you really mean what you say - you use a signal that is very expensive to you in some respect. You don't just ask a girl to marry you, you buy a big diamond ring! The idea being - cheaters are much less likely to suffer such expense.

Apparently the troubles at OpenAI escalated when one of the board members - Helen Toner - published this paper. It is critical of OpenAI, and Sam Altman was pissed at the reputational damage to the company and wanted her removed. The board instead removed him. The gist of the paper's criticisms is that while OpenAI has invested in some costly signals to indicate its alignment with AI safety, overall, it judges those signals were ultimately rather weak (or cheap).

Now here is what I find fascinating about all this: up until reading this paper I had found the actions of the OpenAI board completely baffling, but now suddenly their actions make a kind of insane sense. They are just taking their thinking on costly signalling to its logical conclusion. By putting the entire fate of the company and its amazing market position at risk - they are sending THE COSTLIEST SIGNAL possible, relatively speaking: willingness to suffer self-annihilation.

Academics truly are wondrous people... that they can lose themselves in a system of thought so deeply, in a way regular people can't. I can't help but have a genuine, sublime appreciation for this, even while thinking they are some of the silliest people on this planet.

Here's where I feel they went wrong. Costly signals by and large should be without explicit intention. If you are consciously sending various signals that are costly - you are probably a weirdo. Systems of costly signalling work because they are implicit, shared and in many respects, innate. That's why even insects can engage in costly signalling. But these folk see costly signals as an explicit activity to be engaged in as part of explicit public policy - and unsurprisingly, see it riddled with ambiguity. Of course it would be - individual agents can't just make signals up, and expect the system to understand them. Semiotics biatch....

But rather than reflect on this they double down on signalling as an explicit policy choice. How do they propose to reduce ambiguity? Even costlier signals! It's no wonder, then, that they see it as entirely rational to accept self-destruction as a possibility. That's how they escape the horrid existential dread of being doubted by the other. In biology, though, no creature survived in the long run to reproduce by investing in costly signals that didn't confer at least as much benefit, if not more, than what they cost in the first place.

Those that ignore this basic cost-benefit analysis in their signalling will suffer the ignominy of being perceived as ABSOLUTE NUTTERS. Which is exactly how the world is thinking about the OpenAI board. The world doesn't see a group of highly safety-aligned AI leaders.

The world sees a bunch of dysfunctional, crazy people.


I linked to the Judean People Suicide Squad for a reason.


I'm beginning to strongly associate Effective Altruism with out-of-control naivety.


tldr: The new part of the story this adds is that Altman’s firing was partly in response to him trying to kick out one of the other board members.


"OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that can do everything a human brain can do."

Sounds absurd. No one even knows everything the human brain does. It is poorly understood.


I thought the drug dealers in my hometown were ruthless sociopaths until I had to start doing deals with change-the-world mild mannered old navy-wearing Stanford Silicon Valley dudes.


Looks like the board did the right thing after all.


why are so many people so keen on this prepper?


keep in mind that the lead reporter on this, cade metz, is the one who wrote the character assassination piece on scott alexander repeatedly insinuating he was a neo-nazi (and de-anonymizing him, costing him his job due to some kind of weird psychiatrist ethical code)

so while probably nothing in here is literally false, it's quite likely calculated to give false impressions; read with caution

(well, it's literally false that "Greg Brockman (...) quit his role[] as (...) board chairman" but only slightly; that was the role he was fired from, as explained in the next paragraph of the article; that's not the kind of lies to watch out for)


While I don't agree with all of the framing in the NYT story, it's worth noting that Scott Alexander was much friendlier to Moldbug et al. than the anti-Metz camp argued at the time: https://twitter.com/ArsonAtDennys/status/1362153191102677001

I think Metz's motivation for that framing was his assumption that if SA was not at least sympathetic to NRx views, he would not allow them to be such a big voice in his comments and in his community. You can argue about the reasonableness of this assumption, but he did turn out to be right.


alexander is the guy who wrote the 30,000 word anti-reactionary faq about why moldbug was wrong, though; those screenshots explain in detail why he was unsympathetic to their neoreactionary views, rather than demonstrating that he was sympathetic to them

he does say some things there that are entirely outside the pale and that he should have known better than, but sad to say, those particular beliefs aren't limited to neoreactionaries


I don't mean to say that he was a closet NRx -- and IMO neither did Metz imply that -- just that he agreed with a much larger subset of their controversial views than he or his allies admitted publicly. I would consider that being "sympathetic" to NRx.


i don't think the leaked screenshots support those conclusions; i don't have any idea where you're getting those conclusions from


If you believe that differences in behaviours between groups of humans are significantly explained by genetic variation (which is supported by the evidence to some degree), and that those genetic variations align along racial lines (which really isn't supported), what do you think that entails for social outcomes between racial groups?


In them he says "HBD is probably partially correct" -- do you think that's not an NRx view (I agree others also hold it, but I would argue that it's a core view of NRx -- it doesn't need to be exclusive to them, e.g. Islam considers Jesus a prophet too), or that he publicly held that position previously?


You can agree that people are different and, simultaneously, not agree that, say, the less useful ones should be stripped of some of their rights.


If you view some races as generally "less useful" than others, most people would consider that racist regardless of whether you think they should enjoy the same rights.


"Less useful" belongs to "don't agree" part, if you haven't noticed. It isn't an inherent part of the package.


note that neoreaction is opposed wholesale to the idea of human rights


i'm not familiar with the varieties of neoreaction that consider it a core view, though apparently to my surprise they do exist

i think the vast majority of racists aren't neoreactionaries, though. like literally more than 99.9%


I spent a lot of time on SSC from 2013 to 2017, and my recollection is that the Venn diagram of commenters promoting HBD and those promoting NRx was nearly a circle. And similarly for related sites like LW. So in that context I would say the two are closely related.

I find it hard to believe you've been exposed to much NRx content if you don't think they consider HBD true and very important. Although don't take that as a criticism -- I would not recommend wading through their beliefs, and I look back on it as a waste of my time.


i read through most of moldbug's blog, and also knew his name and rhetorical style from crooked timber comments. possibly he was less racist than his followers? or just more circumspect about it. in any case, he attempted to justify his anti-liberal philosophy, at quite extreme length, but never on the basis of racism; so if racism was a core belief of his neoreactionary thought, it was apparently at a subconscious level

i remained unpersuaded in any case, steadfastly liberal

also, much to my surprise, he and i were both members of the first coworking space at spiral muse house in san francisco in 02006. but i didn't go very often, so i don't know if i ever met him; in any case that was before he revealed his identity


I guess I am much less familiar with Moldbug's stated views than you, so I'm happy to concede that point.

Anyway, I think the HBD/NRx relationship is peripheral to the discussion of the Metz article -- I just framed it that way because that's the context of Scott's emails (certainly he thinks they're at least associated!). The discourse around the Metz article was that it framed Scott as holding racist views, not specifically NRx views.

If I had mentioned Steve Sailer instead of NRx, would you agree that Scott was more sympathetic to those views than he publicly let on?


yes


This is straightforward guilt-by-association: because of the mere fact that Scott interacted with people espousing neo reactionary views, he must also be sympathetic to those views. Of course, this is completely wrong and easily checked by reading Scott's writings on the topic: https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...

There is zero way to say in good faith that Scott espouses or even agrees with neo reactionism.


I said he was "sympathetic to NRx views" and linked emails from him stating certain views widely held by Moldbug et. al. that he is sympathetic to. Do you disagree that "HBD" is an NRx view, or that the leaked emails express sympathy for it?


Human biodiversity is an empirical fact. Are some populations taller than others, on average? Is red hair more prevalent in certain populations? Pretty much nobody actually rejects the idea of human biodiversity.

The neo reactionary types tend to draw specific conclusions from the idea of human biodiversity, like that racial disparities in IQ are inherent and not environmental. That's a conclusion I don't think is in line with Scott's views.

Furthermore, I suggest you read the linked emails in more detail. He likes the emphasis reactionaries put on social class, and dislikes ... pretty much everything else. He explicitly states that becoming a reactionary is stupid - I'm not sure how that's meant to be read as sympathetic.

This is another case where praising even a small component of a particular movement, even when paired with explicit condemnation of the movement as a whole, is taken as an endorsement.


When Moldbug says HBD, he isn't saying "not all humans have exactly the same genes". HBD has a specific meaning in that context and the meaning is that some races are superior to others. That is specifically what Moldbug says when he talks about HBD, and SSC's author knows that well.


I don't doubt that Yarvin has racist views. I do take issue with people insisting that Scott agrees with Yarvin by mere virtue of association, despite Scott's clear, explicit refutations of neo reactionaries.


Scott's (partial) agreement with Yarvin is not from association, it's from him saying that he thinks HBD is at least partially correct, in a discussion with Yarvin where HBD means racial supremacy.


i've never seen yarvin mention hbd, so i think you may be misremembering this


> Human biodiversity is an empirical fact. Are some populations taller than other, on average? Is red hair more prevalent in certain populations? Pretty much nobody actually rejects the idea of human biodiversity.

> The neo reactionary types tend to draw specific conclusions from the idea of human biodiversity, like that racial disparities in IQ are inherent[...]

This is a classic motte-and-bailey. The IQ view is clearly the view that he is endorsing in the emails -- he is linking Steve Sailer's blog under "HBD is probably partially correct", and he even demands the recipient "NEVER TELL ANYONE I SAID THIS" -- obviously he's not talking about height.


Again, you're drawing very explicit conclusions from a few sentences. Racial disparities in IQ are indeed observed, but it's highly contentious whether these are due to environmental factors like education and nutrition or are inherent. That Asians score higher IQs on average than whites in the US is an empirical observation. But it's also known that IQ can be increased by studying, and Asians study about twice as much as white people in childhood [1].

Scott is acknowledging that the taboo against even recognizing these disparities is counterproductive: it stymies attempts to improve schooling or studying practices, because it's taboo to even recognize that there is a difference, and instead people typically allege that the tests are biased. Would Scott argue that with identical environmental factors we'd still see the same disparities in IQ across ethnic groups? I don't think so, and nothing in the emails linked seems to suggest this.

1. https://www.brookings.edu/articles/analyzing-the-homework-ga...


If your contention is that we should more frankly discuss IQ disparities, pretending we're talking about height was a strange way to go about it.

In those emails, SA does not say he thinks these questions deserve more study -- he says they're "probably partially true". Again, in that context he's talking about Steve Sailer's views.

Yes, I'm focusing on a few sentences. Do you think he wrote those by accident? That the words came out wrong and the straightforward reading was not his intent? In the context of the rest of his emails, and his writing on e.g. Albion's Seed, I do not think that is likely.


Albion's Seed is pretty much entirely focused on culture, laws, and institutions, not genetics. I'm not sure how this is supposed to be related to human biodiversity at all.

And on a final note, I'd suggest you read the last paragraph of the screenshotted email chain, where Scott explains how reading creationist arguments is valuable because it forces him to sharpen his thinking.

> You never realize how LITTLE you know about evolution until you read some Behe and are like, "I know this correct... But why not?".

> Even if there turns out to be zero value in anything a Reactionary has ever said, by challenging beliefs of mine that would otherwise never be challenged they have forced me to clarify my thinking and up my game.

I really think you've lost the forest for the trees here. Scott is praising certain parts of reactionary ideas for asserting things most people wouldn't argue, and those interactions are leading him to sharpen his thinking.


NRx views SSC as a useful blog to read in 2013-2017. Oh looks like you did too. Can't believe you're "sympathetic to NRx views."


This is obtuse. I didn't make any argument about SA reading the same stuff NRx read, or even reading NRx stuff. I linked SA's own writing that "Many of their insights seem important" and that their views have "nuggets of absolute gold".


"nuggets of gold" implies that the bulk of it is not gold. If reference someone, saying even a broken clock is right twice a day is that really an endorsement?


As I mentioned in a sibling comment, the contention I intend to make is that he agrees more with the racial views of his NRx commenters than he publicly let on. That was the framing of the NYT article -- nobody thinks or would be incensed to learn that SA is a Neo-Monarchist or any of the other NRx beliefs that are orthogonal to modern US political discourse.

If you don't think racial IQ disparities are a significant part of NRx thought, fine, you're probably better positioned to know and I am happy to concede the point. In that case, a better comparison would be to Steve Sailer's views. I only mentioned NRx because that is the context of his emails, and I had not recognized that he was linking to Steve Sailer's blog.


No, I don't think he agrees with neo reactionaries any more than he let on. In his public posts he has praised reactionaries for making observations that most shirk away from, even if they are wrong in their conclusions. This is pretty much what is expressed in the contents of those emails.

I think you're reading way too much into one sentence saying human biodiversity is partially correct (which is not actually a particularly contentious idea when you explain what it is), and leaping to the conclusion he agrees with neo reactionary claims that some races have substantially lower IQ even with identical environmental factors. There's a vast disparity between "IQ disparities across races" and "IQ disparities across races, without environmental differences" that is crucial to understand.


unfortunately i don't think neoreactionary views are at all orthogonal to the political discourse that is current in the usa; moldbug has his own variant of the 'cultural marxism' thesis, though naturally enough he doesn't blame it on the jews or the frankfurt school; instead, he picks the quakers, which i mostly agree with (except that of course i agree with the quakers)

his criticism is squarely directed at the mainstream us left that he grew up in and its core ideals, such as equality, human rights, pacifism, fighting injustice, etc. he thinks all those are bad things


it would be really surprising if a writer as prolific and literate as moldbug didn't occasionally produce nuggets of gold


wasn't that piece never published?



It's a pretty interesting article if you know how to read around the NYTimes's rhetoric.

"""The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind."""

Reading the response, I don't think this is condemning Metz to live in infamy.


(I think this is a bit of a distraction from the main thread, but the main issue there was always de-anonymization in the other direction: that patients might Google "Scott Siskind" and find his blog, which might render it difficult to maintain the correct kind of relationship with him as their psychiatrist. He's careful to not accept any readers of his blog as patients.)


Well, he signed up for exactly that outcome with full knowledge it could happen. Outing a psychiatrist's internet ramblings to his patients is not high on my list of journalistic crimes.


The same can be said for any other instance of deanonymization, and yet we do not thereby (by default) absolve everyone who deanonymizes someone else against their will. Was Scott perfectly careful? No. Was this something that harmed him in straightforward and predictable ways, and done against his wishes? Yes. Was there a trade-off that made it worth it? Not a question that Cade Metz has seen fit to answer publicly.


There's a whole section at the bottom of the published piece where Metz explains his motivations.

But yeah- we've deviated too far from the main topic, so let's just agree to agree.


I guess Metz would have a lot of sympathy for someone drawing the heat of the entire internet, and even powerful VCs, by doing something that many people think is destructive "on principle".


(-)


My bad, edited.


How do I filter news of this?


How can anyone not think Adam D'Angelo is upset about OpenAI crushing his shitty Poe?


Sounds like Sam was attempting to fix the board membership.

OpenAI had grown massively since some of the board members were installed. Some of them were simply not the caliber of people that one would have running such a prestigious institution, especially not with the weight they had due to the board being depopulated. Sam realized this and maybe was attempting to address the issue.

Some of the members (ahem, Helen, Tasha and to a lesser extent Adam) liked their positions and struck first, probably convincing poor Ilya that this was about AI safety.

Being lightweights, they did not do any pre-work or planning; they just plowed ahead. They didn't think through that Sam and Greg have added tremendous value to the company and that the company would favor them far over a board that added zero value. They didn't think through that tech in general would see rainmakers and value creators being cut loose and side with them instead of with figureheads. They didn't think that partners and customers, who dealt with Sam and Greg daily, would find the move disconcerting (at a minimum). They didn't even think through who would be the next CEO.

Maybe they didn't think it through since they didn't care. There was only upside for them, since Sam was going to get rid of them sooner or later. They didn't see that having been on the OpenAI board was an honor and an enormous career boost. Or maybe their ambition was so great that nothing mattered but controlling OpenAI.

Further, they thought that if they slandered Sam, he would be cowed and they would retain their power. I wonder how many times they had pulled this stunt in the past and it worked?



