None of the comments thus far seem to clearly explain why this matters. Let me summarize the implications:

Sam Altman expelling Toner with the pretext of an inoffensive page (https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...) in a paper no one read* would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority - the board, which is the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well - which is why they had been unable to agree on replacement board members all this time.

So when he 'reprimanded' her for her 'dangerous' misconduct and started talking seriously about how 'inappropriate' it was for a 'board member' to write anything which was not cheerleading, and started leading discussions about "whether Ms Toner should be removed"...

* I actually read CSET papers, and I still hadn't bothered to read this one, nor would I have found anything remarkable about that page, which Altman says was so bad that she needed to be expelled immediately from the board.




Okay, let's stipulate that Sam was maneuvering to get full board control. Then the independent directors were probably worried that -- sooner or later -- Sam would succeed. With Sam fully in charge the non-profit goals would be completely secondary to the commercial goals. This was unacceptable to the independent directors and Ilya and so they ousted Sam before he could oust them?

That's a clear motive. Sam and the independent directors were each angling to get rid of the other. The independent directors got to a majority before Sam did. This at least explains why they fired Sam in such a haphazard way. They had to strike immediately before one of the board members got cold feet.


Besides explaining the haphazardness, that would also nicely explain why they didn't want to elaborate publicly on why they "had" to let him go -- "it was either him or us" wouldn't have been popular given his seeming popularity.


I suspect his popularity is mostly about employees who want to maintain the value of their equity: https://nitter.net/JacquesThibs/status/1727134087176204410#m

Wild guess: If the board stands its ground, appoints a reasonable new CEO, and employees understand that OpenAI will continue to be a hot startup, most of them will stay with the company due to their equity.


Except, the facts behind Sam's firing will inevitably come out, and it won't be possible to brush it under the carpet. I think they hoped the facts wouldn't come out, and they could just give a hand-wavey explanation, but that's clearly not going to happen. It seems they have well and truly shot themselves in the foot, and they will likely have to be replaced now.


If I'm correct then the board is fine with getting replaced, they just don't want Sam to have total control. Many of the candidates for independent director are friendly with Sam and will happily give him the keys to the kingdom. It's probably extremely difficult to find qualified independent board members who don't have ties to Sam.


Idk, seems like a pretty easy sell to me:

"Sam was trying to censor legitimate concerns the board had with regards to the safety of the technology and actively tried to undermine the board and replace it with his own puppets."

If that is indeed true, they made a mistake by saying something vague imo.


I suspect the board prioritized legal exposure first and foremost. They made the mistake of not hiring a legal or PR firm to handle the dismissal.


If they prioritized legal exposure, they would not have made disparaging remarks in their initial press release.


Vague disparaging remarks, fine. Specific allegations, not so much.


>This at least explains why they fired Sam in such a haphazard way.

The timing of it makes sense, but the haphazard way it was done is only explained by inexperience.


I mean, here is a relevant passage from the paper, linked in another comment: https://news.ycombinator.com/item?id=38373684

If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor. But the tone of that paper makes me think that the schism must go back much earlier (other reporting said things really started to split a year ago when ChatGPT was first released), and it sounds to me like Toner was needling because she was pissed with the direction OpenAI was headed.

I'm thinking of a good previous comment I read when the whole Timnit Gebru situation at Google blew up and the Ethical AI team at Google was disbanded. The basic argument was on some of the inherent incompatibilities between an "academic ombudsman" mindset, and a "corporate growth" mindset. I'm not saying which one was "right" in this situation given OpenAI's frankenstein org structure, but just that this kind of conflict was probably inevitable.


Just spot checking: did anyone comment on this paper when it was published? Did any media outlet say “hey, a member of the OpenAI board is criticizing OpenAI and showing a conflict of interest?” Did any of the people who cover AI (Zvi, say) discuss this as a problem?

These are serious questions, not gotchas. I don’t know the answers, and I think having those answers would make it easier to evaluate whether or not the paper was a significant conflict of interest. The opinions we have formed now are shaped by our biases about current events.

It didn’t make HN.


Gambling that no one reads academic papers by fringe OpenAI board members no one has heard of before is probably a safe bet, but it's still a risk: some doomer AI people on Twitter could have pumped it up, some journo could have discovered the tweets and sold it as "concern in the industry, including from one of OpenAI's own board members," and it could have been swept up in the kind of lawyer-style grilling by Congress that Sam just had to go through.


Who cares if the paper was covered in the media or not? Think tanks write policy papers for regulators, not HN. And it's the regulators that Sam was worried about. From the article:

> Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.


> If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor.

Considering what's in the charter, it seems like she didn't do anything wrong?

> We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

more here: https://news.ycombinator.com/item?id=38372769


I agree and none of the other passages people are quoting from the paper (which I admittedly haven't read yet) seem at all controversial. She's saying OpenAI's messaging about AI safety is not landing because it's simultaneously launching products that are taking the spotlight, and Anthropic is doing a better job at signaling their commitment to safety. That's true, obvious, and entirely in line with the charter she's supposed to uphold.


> Considering what's in the charter, it seems like she didn't do anything wrong?

It’s incredibly disingenuous to slap your name on an ethics paper accusing a company of malfeasance, such as triggering a race-to-the-bottom for AI ethics, when you have an active hand in steering the company.

It’s shameful. Either she should have resigned from the company, for ethical reasons, or recused herself from the paper, for conflict of interest.


> when you serve on the board of directors for that company

I'm not sure this is true, because the not-for-profit/for-profit arms make it more complex.

In this case, the not-for-profit board seems to act as a kind of governance over the profit arm; in a way, it's there to be a roadblock to the profit arm.

Normally a board aligns with the incentives: to maximize profit for shareholders.

Here the board has the opposite incentive, to maximize AGI safety and achievement, even at the detriment of profit and investors.


She is at the top of the pyramid. Did they not fire the chief executive? I am saying she is morally culpable for OpenAI’s actions as a controlling party.

To put these claims in a published paper in such a naive way with no disclosure is academically disingenuous.


She's not in charge of the for-profit arm though; all she could do was fire the CEO, and she did, which would seem consistent with her criticism. I don't think she has much more power than that as a board member. She also isn't at the top, in the sense that she needs other board members to vote with her to enact any change, so it's possible she kept bringing up concerns and not getting support.

Academically, did she not disclose being on the board on her paper?


Her position was listed as a fun fact, not in a responsible disclosure of possible conflicts of interest (though it ran the other way).

Being at the top of the org and being present during the specific incidents that give one qualms burdens one with moral responsibility, even if they were the one who voted against.

You shouldn’t say “they did [x]” instead of “we did [x]” when x is bad and you were part of the team.


It sounds like your argument is "Even if OpenAI did something bad, Helen should never write about it, because she is part of OpenAI".

Or, that she should write her paper in the first person: "We, OpenAI, are doing bad things." That would probably be seen as vastly more damaging to OpenAI, and also ridiculous since she doesn't have the right to represent OpenAI as "we".

I have no idea why you think that should be a rule, aside from wanting Helen to never be able to criticize OpenAI publicly. I think it's good for the public if a board member will report what they see as potentially harmful internal problems.


I just don’t know why an ethicist would remain involved in a company they find is behaving unethically and proceed with business as usual. I suppose the answer is the news from Friday, though the course feels quite unwise for the multitude of reasons others have already outlined.

Regarding specific verbiage and grammar, I’m sure an academic could give clearer guidance on what is better form in professional writing. What was presented was clearly lacking.


One thing we've learned over the past few days is that Toner had remarkably little control over OpenAI's actions. If a non-profit's board can't fire the CEO, they have no way to influence the organization.


You’ll catch more flies with honey than vinegar.


Did we read different things? All it said was that they had been accused of these things, which is true. If your charter involves ethical AI I’d imagine the first step is telling the truth?


From the PDF:

While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous “research preview,” the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5’s prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT’s release was to spark a sense of urgency inside major tech companies.149 To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track “green lane” to allow products to be released more quickly.150 This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated that they wish to avoid. OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to “jailbreaks” that allow users to bypass safety controls.151 This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.


> This […] provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.

What a lovely turn of phrase! I'm stealing it for use later, in place of "actions speak louder than words".


Well since that's not what the paper claims at all...


The paper itself claims that OpenAI's actions have undone their stated goals:

https://news.ycombinator.com/item?id=38374972

It has an excess amount of weasel words, so you might need to employ ChatGPT to read between the lines.


This is really interesting. It makes perfect sense that they weren't sitting at 6 board members for 9 months because Sam and the others didn't see the implications, but because they saw them all too well and were gridlocked.

But then it gets interesting inferring things from there. Obviously sama and gdb were on one side (call it team Speed), and Helen Toner on the other (team Safety). I think McCauley is with Toner (some connection I read about which I don't remember now: maybe RAND or something?).

But what about D'Angelo and Ilya? For the gridlock, one would have to be on each side. Naively I'd expect tech CEO to be Speed and Ilya Safety, but what would have precipitated the switch Friday? If D'Angelo wanted to implode the company due to conflict of interest, wouldn't he just have sided with Team Safety earlier?

But maybe Team Speed vs Team Safety isn't the same as Team Fire Sam vs Team Don't. I could see that one as Helen, Tasha, and Adam vs Sam, GDB, and Ilya. And, that also makes sense to me in that Ilya seems the most likely to flip for reasons, which also lines up with his regret and contrition. But then that raises the question of what made him flip? A scary exchange with prototype GPT5, which made him weigh his Safety side more highly than his loyalty to Sam?


Maybe Sam wanted to redirect Ilya's GPUs to ChatGPT after DevDay surge. 20% of OpenAI's GPUs are allocated to Ilya's team.


My conclusion was that Sam slipped up somewhere and lost Ilya, which may be because of the reason you mentioned. Previously it seems like it was a 3-cofounders-vs-3-non-cofounders board split. Ilya switched teams after being upset by something.

If I may wear my conspiracy hat for a second: Adam D'Angelo is a billionaire or close to it, so he has a war chest for a battle to hold on to the crown jewels of AI. Sam has powerful friends, but so does D'Angelo (Facebook mafia). I don't think the board anticipated the 90% potential employee turnover, so there is a small chance they leave the board for that reason. But my guess is there is a 4-letter company that starts with an 'M' and ends with an 'a' that comes into the picture eventually.


Random fanfiction: it's also possible that it wasn't actually a 3-3 split but more like a 2-2 split with 2 people -- likely Adam and Ilya, though I guess Adam and Tasha is also possible -- trying to play nice and not obviously "take sides." And then eventually Sam thought he won Adam and Ilya's loyalty re: firing Helen but slipped up (maybe Adam was salty about Poe and Ilya was uncomfortable with him "being less than candid" about something Ilya cared about. Or maybe they were more principled than Sam thought).

And then to Adam and Ilya, normally something like "you should've warned me about GPTs bro" or "hey remember that compute you promised me? Can I prettyplease have it back?" is the kind of thing they'd be willing to talk out with their good friend Sam. But Sam overplayed his hand: they realized that if Sam was willing to force out Helen under such flimsy pretexts then maybe they're next, GoT style[1]. So they had a change of heart, warned Tasha and Helen, and Helen persuaded them to countercoup.

[1] Reid Hoffman was allegedly forced out before, so there's precedent. And of course Musk too. https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...


It's not just OpenAI. Every AI organization is discovering that they have internal groups which are pulling in a different direction than the rest of the organization, and trying to get rid of those groups.

* Google got rid of its "Ethical AI" group

* Facebook just got rid of its "Responsible AI" team

* OpenAI wanted to get rid of the "Effective Altruists" on the board

I guess if I was afraid of AI taking over the world then I would be rooting for OpenAI to be destroyed here. Personally I hope that they bring Sam back and I hope that GPT-5 is even more useful than GPT-4.


I feel the people advocating safety, while they are probably right from a moral and ethical point of view, are just doomed to fail.

It's like with the nuclear bomb: even if Einstein had withheld his contributions, it's not as if we wouldn't have nuclear bombs today. It's always only a matter of time before someone else figures it out, including someone with bad intentions.

I think any approach to AI safety has to assume there are already bad actors with super-powerful AI around, and ask what we can do in defense against that.


Yes, I agree. There will always be some evil person out there using AI to try to achieve evil goals. Scammers, terrorists, hackers. All the people who use computers now to do bad things, they're going to try to do even worse bad things with AI. We need to stop those people by improving our security.

Like fixing the phone number system already. There must be some way to stop robo scam calls. Validate everyone making a call and stop phone spam. That would do a lot more to help AI safety than randomly slowing down the top companies.


Yeah, it’s an awful role basically by design. It reminds me of my uncle’s role as the safety/compliance lead at a manufacturing facility. He was just trying to do his job but constantly getting bullied, maneuvered around etc because his job was effectively to slow everyone else down. It drove him crazy, he ended up having a mental breakdown, becoming an alcoholic, and having to go on disability for the remainder of his career. I’m sure some people can handle the stress of those roles, but ugh, not worth it IMO.


It's interesting that the paper is selling Anthropic's approach to 'safety' as the correct approach when they just launched a new version of Claude and the HN thread is littered with people saying it's unusable because half the prompts they type get flagged as ethical violations.

It's pretty clear that some legitimate concerns about a hypothetical future AGI, which we've barely scraped the surface of, turn into "what can we do today," and it's largely virtue-signalling type behaviour: crippling a non-AGI, very very alpha version of LLMs just to show you care about hypothetical future risks.

Even the correlation between commercialization and AI safety is pretty tenuous. Unless I missed some good argument about how having a GPT store makes AGI destroying the world easier.

It can probably best be summarized as Helen Toner simply wants OpenAI to die for humanity's sake. Everything else is just minor detail.

> Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.

> “That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.

https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c


What’s surprising to me is that top-level executives think that self-destructing the current leader in LLMs is the way to ensure safety.

Aren’t you simply making space for smaller, more aggressive, and less safety-minded competitors to grab a seat on the money train to do whatever they want to do?

Pandora’s box is already open. You have to guard it. You have to use your power and influence to enforce other competitors to also do that with their own boxes.

Self-destructing is the worst way to ensure AI safety.

Isn’t this just basic logic? Even ChatGPT might have been able to point out how stupid this is.

My only explanation is that something deeper happened that we’re not aware of. An "us or them" board fight might explain it. Great. Altman is out. Now what? Nobody predicted this would happen?


> Pandora’s box is already open. You have to guard it. You have to use your power and influence to enforce other competitors to also do that with their own boxes.

> Self-destructing is the worst way to ensure AI safety.

If they don't believe you're prepared to destroy the company if that's what it takes, then you have zero power or influence. If they try to "call your bluff", you have to go there, otherwise they'll never respect you again.


So the outcome of that is that you either destroy the company or you’re out? Either outcome is the same from your perspective: you chose to no longer have any say in what happens next.


Sure, but you have a say in what happens right then and there.


Has Toner (or someone with like-minded views) filled in the blanks between "GPT-4" and "Terminator Judgement Day" in a believable way? I've read what Yudkowsky writes but it all sounds so fantastical that it's, at least to me, more like an episode of The Twilight Zone or The X-Files than something resembling reality. Is Toner really in the "nuke the datacenters" camp? If so, was her placement on the board not a mistake from the beginning?


I think the middle ground that several of the OpenAI board members were aiming for is to "responsibly develop AGI", which means developing at a moderate pace while trying to avoid kicking off an investment gold rush through heavily commercial use cases, and spending a substantial amount of resources on promising safety research (such as Ilya's work).

In my opinion, it was not a very strong position because the allure of money and trying to be the biggest is too strong (as we're seeing now), but I think it was at least coherent.


So, no, the blanks have not been filled in then. Because, for those in Toner's camp, that middle ground is just "progress from GPT to T-1000, but slowly", right? Yudkowsky talks about AIs 3D printing themselves into biological entities and killing us all because humans are an inefficient use of matter. It's not a strong position to me because it sounds ridiculous, not because there's greed involved.


It takes some level of delusions of grandeur to think half a board of a single non-profit that just happens to be the first mover can stop the full forces of American capitalism - although I kind of respect the drive/purpose.

They could easily lose any power they had to guide the industry; it was a huge gamble. I remember reading a Harvard Business School study showing the first-mover advantage repeatedly turned out to be ineffective in the tech industry, as there is a looong series of early winners dying out to later market entrants: Friendster->FB, Google, a bunch of dotcom-era e-commerce companies predating Amazon, etc.

They need full industry/society buy in - at an ideological level - to win this battle, they won't win through backroom dealing in a boardroom while losing 90% of their own staff.


I don't know whether I have "like-minded views", but I think the risk is there, and it is not fantastical to me because the AGI doesn't need to "do" anything outside of humans assisting it -- no killer robots or nanobots. It just needs to persuade humans to kill for it, or gather resources for it, and so on. And because it's an AGI, it has superhuman abilities for persuasion.

The intuition for GPT-4 being on the way towards superhuman intelligence or persuasion abilities is that the only significant difference between it and GPT-3 appears to be the amount of compute power applied to training.

The previous US President was much less than superhuman, and he still managed to seriously threaten democracy in the US, and by extension cause a minor existential risk to civilization. Imagine what a more intelligent agent with enough attention to be personally persuasive instead of generically persuasive could achieve for itself, given a goal. (If you think that LLMs can't express goals or preferences, read some of the Bing Sydney transcripts.)


Exactly. AGI is overrated because the world isn’t run by its smartest people; ChatGPT would make a better US or Russian president at this point compared with treasonous or genocidal geriatrics.


I feel like the “Terminator Judgement Day” scenario is a strawman more often than not. There are reasons to push for the safe and responsible development of AI that aren’t based on the belief that AI is literally going to launch the nukes.

The biggest risks I see from AI over the next 2 decades are:

1. A complete devolution in what is true (because creating convincing fakes will be trivial). From politics to the way that people interact with each other on a daily basis, this seems like it has the potential to sow division

2. the societal turmoil/upheaval arising from the mass unemployment (and subsequent unemployability) of millions of people without the social safety nets in place to handle that. When people are suddenly unable to work and support their families and those who have consolidated the power have no interest in sharing the wealth, there will inevitably be pushback.


In his recent Ted talk, Ilya talks about AI as 'brains in a computer', and explains why he believes they aren't that far off from AGI. [0] GPT4 is a substantially worse software engineer than I am, but it is 10x GPT3. And if we 10x for at most 2 or 3 iterations, it could be better than me. While it is unclear that we can get there, it doesn't seem implausible to me, and would have deep implications for human existence. That is already true if it just stayed at that level, but what most doomers are really worried about is the possibility of GPT8 being as good or a better AI researcher than most or all humans.

[0] https://www.youtube.com/watch?v=SEkGLj0bwAU


No idea if the concerns are realistic, but dismissing them because they are fantastical surely is a fallacy, no?

We currently almost have scifi-level AI we can talk to on our smartphones, which would have been fantastical only a few years ago.


The concerns are unrealistic because there are massive, unusual, and unvalidated assumptions at multiple levels, and lynchpin arguments that depend on utterly unsupported assumptions.


> fill in the blanks

What exactly are you looking for here, proof of concept? This stuff is inherently speculative - because if it wasn't speculative, we'd probably be dead.


I have a feeling EA[1] is like anarcho-capitalist communities (or just the inverse for socialism), where you have a good idea like markets==good and then apply it like a hammer to everything you see in life. Any hint of gov/statism is treated as the enemy, regardless of its IRL negative impact on markets or net freedom, or whatever positive end goal they envision.

OpenAI is (hypothetically) progressing to AGI, therefore bad. Whether GPT specifically is half a foot into a 1000km journey or potentially insignificant, it doesn't matter to them, as they've pre-selected any potential for progress as all bad.

[1] if you search "Helen Toner" on YouTube it's just a collection of videos of her talking at EA conferences. So she's a believer


No one has, no.


>how having a GPT store makes AGI destroying the world easier

The argument in general is that the more commercial interest there is in AI, the more money gets invested and the faster the companies will try to move to capture that market. This increases the risk for AGI by speeding up development due to competition, and safety is seen as "decel".

Helen was considering the possibility of Altman-dominated OpenAI that continued to rapidly grow the overall market for AI, and made a statement that perhaps destroying OpenAI would be better for the mission (safe development of AGI).


If true, they should never have started in the first place, because there was no way she was not going to press that button.


This sounds convincing, especially considering this story where Sam Altman was involved in a "long con" to seize board control of Reddit (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...).

I think Sam may have been angling to get control of the board for a long time, perhaps years, manipulating the board departures and signing deals with Microsoft. The board finally realized this when it was 3 v 3 (or perhaps 2 v 3 with 1 neutral). But Sam was still working on more funding deals and getting employee loyalty, and the board knew it was only a matter of time until he could force their hand.


Also of note, this comment by Sam's verified reddit account describing the con as "child's play": (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...)


In that page linked, she says that OpenAI's system card wasn't a suitable replacement as a "commitment to safety". I feel that's a fair critique even for someone on a non-profit board, unless she's really advocating for systemic change in the way the company operates and does any commercial business.


In this instance, "Sam is trying to take over the board on a flimsy basis" is a reasonable reason to remove him. Starting to lead discussions about whether she should be removed is also very, very far from actively working to remove her.

This is amateur hour, and considering what happened, she probably should have been removed.


Leading discussions on whether someone should be removed is literally actively working to remove them.


It's quite literally not. After she wrote that, it would be silly if you didn't even have discussions considering this.

Actively working would require active efforts to make this happen.


What would those "active efforts" consist of beyond "leading discussions"? You have the discussion, you have the board vote, you're done, there's no ??? step.


I mean it's not cool for board members to publicly criticize the company.


I think you're taking the intuition from a for-profit company and wrongly applying it to a non-profit company. When a board member criticizes a for-profit company, that's bad because the goal of the company is to make a profit, and bad PR is bad for profit. A board member criticizing a non-profit doesn't have the same direct connection to a result opposite of the goals of the company. And if you actually read the page, it's an extremely mild criticism of OpenAI's decisions.

This situation is simultaneously "reckless board makes ill-considered mistake to suddenly fire CEO with insufficient justification" and "non-profit CEO slowly changes the culture of a non-profit to turn it into profit-seeking enterprise that he has a large amount of control over".


>A board member criticizing a non-profit doesn't have the same direct connection to a result opposite of the goals of the company.

Even in a nonprofit, the board has an obligation to maintain a good working relationship with their organization. It's very rare for a board member to publicly criticize their own organization.

Also, this argument goes out the window because OpenAI is not just a nonprofit. When they started their for-profit subsidiary in 2019, they accepted a fiduciary duty to their investors.


That’s not how that works. You can argue that Ilya has a fiduciary responsibility to the investors as an officer of the company. Toner doesn’t. She is an independent board member on the board of the non-profit that owns a controlling share in the company, but she doesn’t personally own a majority share of the company. As an independent board member, she has no duty but to uphold the charter. She represents no shareholders.

Majority shareholders may have some fiduciary responsibility to minority shareholders, so you might be able to make the argument that the board as a whole shouldn’t write disparaging remarks about the company. But even that argument is tenuous.


She joined the board of OpenAI knowing that they owned a for profit subsidiary that received billions of dollars of investment. Being independent means that she could provide a unique outsider perspective, not that she has no obligations to her organization whatsoever. If you're going to be completely independent, you shouldn't be surprised when over 90% of the organization turns against you.


Whether people turn against her or not is irrelevant to my point. She has no personal fiduciary duty to the minority shareholders in the for-profit company. Her only duty is to the charter.


That people turned against her is relevant because it demonstrates how untenable this standard of independence is. A charter is just a collection of words. She isn't a board member to a collection of words; she's a board member to an organization of people.


It’s not relevant to my point that she has no fiduciary duty to anything but the charter.

Her only personal fiduciary responsibility is to the mission defined in that collection of words, not to the employees of the for-profit.

That’s how this works whether you think it should or not.


Are you going to engage with anything I say or are you just going repeat yourself again? Is there a divine stone tablet that commands a nonprofit board member is solely responsible for the nonprofit's charter that I'm missing?


No because it's not relevant to the only point I made, which is that Helen Toner has no personal fiduciary responsibility to the investors of the for-profit.

You implied that she couldn't publicly disparage the for-profit because the board has a fiduciary responsibility to the minority investors in the for-profit. I only piped up to correct that single point because it's wrong.


I mean, sure, she was free to publicly disparage her organization, but then she can't expect not to be antagonized. My point is that being a board member is a responsibility, and it's valid for Sam to interpret her paper as a violation of her responsibility.


>being a board member is a responsibility

And that responsibility is defined as a fiduciary duty to the mission of the organization as defined in the charter.

As a fellow board member, if Sam thinks Helen's behavior is in conflict with the mission of OpenAI as defined in the charter, he is free to push for her removal.


Or, the organization has an obligation to respect freedom of academic inquiry, because the truth is what helps organizations grow, change, and mature, does it not? If the company cannot handle independent critique then its fragility is not the fault of the researcher who is telling the truth.


> accepted a fiduciary duty to their investors.

Did they?


What do you mean? Schmidt was on Apple's board for 3 years while he was CEO of Google. Do you think Google did not criticise Apple during that whole time (remember that the CEO is ultimately responsible for the company's communications)?

Even more so in the case of OpenAI: the board member is on the board of a non-profit, and those are typically much more independent and very often more critical of the decisions made by other board members. Just search for board members criticising medical/charity/government boards they are sitting on; there are plenty.

That's not even considering if the article was in fact critical.


FWIW that was a pretty strained relationship, and iirc Eric had to frequently recuse himself during any discussion of the iPhone, which eventually made his membership untenable.


Why was Schmidt on Apple’s board? Did he care about Apple’s mission or Apple shareholders?


it's not cool for companies to try to shut down academic freedom of inquiry in scholarly publishing in order to improve their public image

her independence from openai was the nominal reason she was asked to supervise it in the first place, wasn't it


He wasn't shutting down her academic freedom. However, if you're going to diss what your own company is doing and praise competitors, you probably shouldn't be on the board.

>During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

>Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

This person is too far gone. Life isn't a movie.


that would be true for a for-profit company where her fiduciary duty was to shareholders, but in this case, given the charter in question, not publicly criticizing the company would be more likely to be a breach of her fiduciary duty


> This person is too far gone. Life isn't a movie.

This comment is a credibility killer. "This person", as a member of the board of a non-profit, has a duty to keep the non-profit on mission, and even to call for its dissolution if that isn't possible.


How far should she go to fulfill her interpretation of that duty? Should she even commit crimes in order to keep the non-profit mission? There are 700+ employees who disagree with her interpretation of the company's mission, and yet she stubbornly maintains that hers is correct and that it's her duty to destroy the company (or even sell it to Anthropic) to fulfill her duty.

This is delusion, and she thinks she's some hero that's saving the world by keeping it a non-profit when in reality it's just creating needless chaos and even impacting innocent people's livelihoods.


> Should she even commit crimes

probably not, but continuing to publish research papers of the kind that got her the job in the first place seems reasonable maybe

but that's what altman was criticizing her for, not for committing crimes, which as far as anyone knows she hasn't done

> There's 700+ employees that disagree with her interpretation

maybe, maybe not, but the standard way that nonprofits and other companies work is that the employees do what the management tells them, the management does what the board tells them, and the board does what the shareholders or the charter tells them, and in all cases, if they refuse, they get fired

if you're an employee at the american cancer society and you decide that malaria in africa is a more important cause, you probably shouldn't expect to be able to continue using the acs's assets to fight malaria in africa. it might happen, but you shouldn't be at all surprised if you get fired


Again, who's stopping her from publishing research papers? She felt threatened and decided to retaliate by orchestrating Sam's firing.

They likely violated RICO by essentially nuking the company's valuation and trying to sell to Anthropic. That is a crime, by the way, so they don't seem above that.

They haven't given even a clear reason for firing Sam. "Not being candid" with no evidence doesn't sound like it directly violates "the charter", but you can really argue anything when you're just "interpreting" the charter.

Anyway, you can make infinite excuses for their actions and justify them infinite ways. Hopefully you agree that the current situation is a giant mess, and it's caused by their narrow misguided interpretation of their job, and it's at the expense of basically everybody else.


> They likely violated RICO by essentially nuking the company's valuation and trying to sell to Anthropic.

Really. What is/are the predicate offense(s) for this alleged RICO violation?


> They likely violated RICO

LOL

Nothing you have said is credible or honest.


She has a legally binding charter (well, assuming OpenAI's charter actually holds up in court, which we may end up seeing as the next logical step in the drama) to use board powers to further the non-profit mission. She does not have a legally binding obligation to commit crimes.


> Should she even commit crimes in order to keep the non-profit mission?

Should you argue in bad faith?


This is a non-profit, not a for profit company. Nominally, she's on the board precisely to criticize.


[flagged]


> A board member cannot sabotage their own company.

Criticism (especially valid criticism) is not sabotage, and in any case the board's responsibility is to advance the charter, not to support the (non-profit) company regardless of what it's doing.


There is a difference between being independent from an organization and taking a public stand against it. Reading that paper, which I am not really impressed with, she is taking the stance that OpenAI's current trajectory is not as good as a competitor's. I bet that if a Microsoft board member wrote a paper on how Google search is better than Bing / AWS better than Azure / OSX better than Windows, they would be reprimanded also.


A comparison to another nonprofit would make more sense. A board member of the Komen Foundation being reprimanded for pointing out that other nonprofits are far more effective at helping cancer research would be a better comparison.


But I bet if a Greenpeace-like-organization board member would write a paper how a different environmental group is greener, they would be praised.


Non-profits and for-profits aren't comparable. The latter have a duty to make money. The former has a duty to its mission.


I'm not sure you should get to have your cake and eat it too.


You seem to be fundamentally misunderstanding the role of non-executive board members.

In many cases they're industry experts / academics and in many cases those academics continue to publish papers that look objectively at the actions of all companies in their sphere of influence, including the one they're on the board of.

It's _expected_ for them to publish this type of material. It's literally their job. Cheerleading is for the C-suite.


They've admitted that they think it would be better to destroy OpenAI than to continue on its path, but how can an organization uphold its goals if it doesn't exist?

Imo you have to commit to working within an organization if you're on its board, you can't burn it down from within. I think it's less of an issue to try to remove the CEO if you don't agree with the direction they're taking, but this was obviously done in a sloppy manner in this case, and their willingness to destroy it makes me distrust them (not that my opinion matters per se).


> but this was obviously done in a sloppy manner

I don't think it was done in a sloppy manner, I think there's a huge amount of spin and froth being generated to make it _look_ like it was done in a sloppy manner, but the reality is that the board took an executive action, released a discreet and concisely worded press release and then fell silent, which is 100% professional behaviour.

To me it's the other side that are acting sloppily, they're thrashing around like maniacs, they're obviously calling in favours behind the scenes as supportive articles are popping up all over the place, apparently employees were calling other employees in the middle of the night urging them to sign the 'I'll quit unless' letter, employees have been fed a misleading narrative (two of them, actually) about why the action was taken in the first place which riled them up...

I have to assume here that the board - as they're the ones acting professionally - similarly acted professionally before the firing and discussed the matter at great length and only resorted to this last minute firing because of exigent circumstances. The fact that a fired employee was able to cajole/bluff/whatever his way back into the office to prove some sort of point suggests that the board's action may have been necessary. What exactly was done during that time in the office, one might ask? Were systems accessed? Surely it was an unauthorised entry and a major security violation.

You do see that that action (re-entering the office after being fired) gives off lunatic vibes, don't you? Maybe it really did come down to 'yeah you're technically my boss but I'm gonna do whatever the hell I want and there's nothing you can do about it because everyone loves me' to which the board's only possible course of action is to fire immediately.


I don't think Sam cajoled his way in or talked to the employees; as I understand it, Ilya talked with the staff and revealed what they claimed were the reasons for the firing.


In your opinion, was a crime committed when that person returned to the office?

Either by himself or by the person that permitted him access knowing beyond any reasonable doubt that they were not authorized to be there?

If systems were accessed, was another crime committed?

If a person facilitated that access, has the boundary for 'conspiracy to commit...' been reached?


What? It's my understanding Sam Altman was invited back to talk to the board, and Ilya is still employed there.


Were the board members in the office on that day?

I just find the whole thing weird. I just think it seems unlikely that a board would decide something then about-face so rapidly as to have a meeting at the office the next day. It would be conventional for further meetings to take place at the offices of lawyers, for example.

It would also be normal for the non-fired members of staff to be instructed to have no further contact with the fired person without legal counsel present.

Sometimes it seems very much like everyone here is really playing to the cameras in a big way.


that does seem to be what altman was attempting: he wanted to have the cake of independent board members, but eat it by having them never express any opinions contrary to his own


It's a nonprofit board & it's absolutely the role of advisory board members to continue their work in the sector they're specialized in without bias.


But she was biased. She seemed so biased against her own company that her intended course of action seems to be to destroy it.


If she has a justifiable reason for holding that position it isn't bias.


OpenAI does not have an advisory board. She is on the board of directors and has fiduciary responsibilities. It does not matter that the parent organization is a non-profit. The duty remains.


Yes, those responsibilities are to advance the charter of the organization, which notably is totally compatible with publishing articles that contain mild criticism of the organization's behavior.


The board’s fiduciary duties are to the charter of the nonprofit. They have no duty to protect the for-profit-OpenAI’s profits, reputation, or anything else.


This has been argued here already and dissected thoroughly.

It's basically restating 'her role is to cheerlead' but with more sophisticated wording.

You haven't made an argument, you've just used longer words to restate your (incorrect) opinion.

Also I never said they had an 'advisory' board. Board members are either executive or non-executive members. If they're nonexec then their role is to advise the executive members. They're generally fairly independent, and many nonexecutive board members serve on the boards of multiple companies, sometimes including ones that are nominally in competition with each other.


No, I never said that her role is to "cheerlead". Refraining from public disparaging remarks is not a big ask for a board member. Especially if those thoughts were not first brought before the board, which is where such disputes should be resolved.

> Also I never said they had an 'advisory' board

From your comment: "...it's absolutely the role of advisory board members to continue their work".

Not sure how someone is meant to parse that except for you to imply that she was a member of an advisory board.


> Not sure how someone is meant to parse that

Someone is meant to use context and understand that when the context clearly refers to someone who is well known to be a member of the board of directors, 'advisory board member' means that they are a board member in an advisory role, aka a nonexec.

That assumes that someone is familiar with the structure and makeup of boards of directors. It's a bit rude to participate in a conversation if you're not familiar with the subject matter and the meaning of words-in-context, as you just waste people's time making them elaborate.


> 'advisory board member' means that they are a board member in an advisory role, aka a nonexec

No, those are not synonyms. Advisory boards are distinct from the board of directors. Advisory board members have roles similar to what you alluded to in your original comment and do not have fiduciary duties, hence my confusion.

Non-exec board members are not involved with day-to-day business operations but their fiduciary duties are no different than exec members.

Language has meaning. Don't insult others for your own clumsy usage.


Oh well I guess I used the wrong word then, but from context you should have been able to figure out the intent behind my words.

Only human!

> Don't insult others for your own clumsy usage.

We're on the internet. Something like 25% of internet discourse is nitpicking vocabulary. Let's keep traditions alive this holiday season!


Is this even true for for-profit companies? Like if a professor is on the board of a for-profit (which I think is pretty common for deeptech? Maybe companies in general too? https://onlinelibrary.wiley.com/doi/abs/10.1111/fima.12069#:....), is he/she banned from making a technical point about how a competitor's product or best practices is occasionally superior to the company's product?


This is the inherent conflict in the company.

Once it turned out you needed 7B parameters or more to get LLMs worth interacting with, it went from a research project to a winner-take-all compute grab. OpenAI, with an apparent technical lead and financial access, was well-positioned to win it.

It is/was naive of the board to think this could be won on a donation basis.


This is insane. "They had to fire Sam, because he was trying to take over the board".

First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.

Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for an exception to that rule, which suggests that her leaving the board soon would be a good idea.


> "They had to fire Sam, because he was trying to take over the board".

I mean, yes? The board is explicitly there to replace the CEO if necessary. If the CEO stuffs the board full of their allies, it can no longer do that.

> First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.

Boards of for-profits are accountable to shareholders because corporations with shareholders exist for the benefit of (among others) shareholders. Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.

> Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

No, the argument is that Sam Altman trying to bump off a board member on an incredibly flimsy pretext would be an obvious attempt at seizing power.

> Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.

This might be true w.r.t. for-profit boards (though not obviously so in every case), but seems nonsensical with non-profits. (Also, the article did not reductively claim "the company is bad".)


> Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.

The IRS’s actual power in this regard is extremely limited. There’s so much wiggle room to define the “mission” that, at best, the only thing they can do is make sure you’re not secretly using your nonprofit as a tax shelter for a regular for-profit business, and even then some companies find ways around it (see IKEA).

Ordinary fraud (as opposed to tax fraud, though maybe that too) is even easier to pull off with a nonprofit than with a for-profit. Any nonprofit has a fundraising mechanism (which is usually an obnoxious set of dark patterns and outright spam reached through decades of optimization; the techniques are well understood), some overhead for managing the organization and money itself, and then the actual work involved with the mission. It’s quite simple for the overhead and fundraising parts of the organization to become most of the organization. But it gets worse: when your “mission” is vague enough, you can also turn that side of the business into a bunch of sinecures for your friends. There are well known “awareness” nonprofits that do literally nothing to actually solve the problems they’re nominally about because their job is to “spread awareness” (i.e. the top of the funnel for the fundraising part of the organization).

This is ironically also the origin story of “effective altruism”. The IRS doesn’t stop you from running a non-profit scam, so you have to do your own research to find out which non-profits are legit, and that’s what the effective altruists started off with, along with some cost-benefit modeling once you get past the outright scams.

The problem with OpenAI seems to be related to a vague mission as well. From the “an organization is what it does” perspective, OpenAI develops AI. If you have a formal mission that emphasizes safety over capability, you could certainly argue that OpenAI was closer to achieving that mission by sabotaging its own work, but if your interpretation of “safety” is to sabotage the development of AI, there are certainly better ways to do it than to develop AI and then sabotage yourself.


I’m curious about the IRS angle. Anyone know more about that?

One reason open source software projects struggle to get donations is because the IRS is skeptical of OSS as a true non-profit activity, so it often doesn’t allow tax-deductible 501(c)(3) donations. OpenAI seems like a data point in favor of the IRS’s skepticism.


If that's the case, why didn't they just stuff the board when they had the chance?


I think it’s also mentioned in the article that they failed to add new members to the board due to the disagreement between the board members.


Yeah it sounded like 3-3 gridlock until someone flipped, either it was Sam/Greg/Ilya against Adam/Helen/Tasha or it was Sam/Greg/Adam against Ilya/Helen/Tasha.


Then the argument that Altman would be able to easily stuff the board doesn't hold -- he didn't even have a majority.

Either you claim (1) that there was a risk of Altman gaining a majority and stuffing the board, and hence concede that a majority allows stuffing the board, in which case you concede that the actual majority, which was anti-Altman, could have done the same;

or (2) you concede that having a majority is not enough to stuff the board, in which case we were far from any risk of Altman doing so, given that he did not even have a majority of the board.

---

Edit because I can't seem to reply further down thread: I read the article. My point is precisely that the board had a majority; that is how they got to fire Altman in the first place. Given they had a majority, they had the chance to stuff the board in their own favor, as per the argument above.


You should read the article, because it’s clearly stated how Altman could get a majority. For your other concern, a simple vote change (like Ilya’s) can explain it.


> Second: the anti-Sam Altman argument seems to be "let's shut the company down...

Isn't that the pro-Altman argument? The pro-Altman side is saying "let's shut the company down if we don't get our way." The anti-Altman side is saying "let's get rid of Sam Altman and keep going."


No. No it isn't.

The board-aligned argument seems to be "destroying the company is the right thing to do if it helps the cause of AI alignment".

Whereas the pro-Altman side seems to be "if you have the most successful startup since Google/Facebook, you shouldn't blow it up merely because of vague arguments about << alignment >> ".


> "if you have the most successful startup since Google/Facebook, you shouldn't blow it up"

... But they aren't the ones blowing it up? Firing one guy, even the CEO, isn't blowing it up. The only side that directly threatened "if you don't do X, we blow it up" was the pro-Altman side.


Your pro-Altman framing means the argument should be won by Altman's side just because it's a successful startup, while in reality it's a non-profit with a mission, which contradicts Altman's position (according to the article).


Do you think Sam is more aligned with OpenAI's non-profit charter than Helen?



