[flagged] Why OpenAI Fired Sam Altman – and What Happens Next in the AI World (thealgorithmicbridge.substack.com)
47 points by pseudolus on Nov 18, 2023 | 58 comments



Just a heads-up that there isn’t much new in here if you’ve been following the saga on Twitter and have seen Brockman’s and Swisher’s tweets and are aware of the other recent departures. It’s mainly a summary of tweets. Not criticizing, just trying to save you a click if you’re following in real-time.


I haven’t been following and I appreciated the summary


Also the Why is not answered at all. Just some speculation.


Thanks. I was going to read it, as I'd like to know, and the headline makes a (false) claim to know.


> It started with a rather cryptic and not very pleasant-sounding blog post from the company announcing a leadership transition. Here are the most revealing bits:

Those don't seem like the most revealing bits at all. Personally, the last part is the most revealing, as it reaffirms the board's commitment to the mission they initially declared:

> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

Between the lines, that paragraph is giving us the background to why the firing was necessary.


I guess I don't know which lines to read between.

Is the board being cautious about irresponsible growth in AI and lack of ethical governance? Is caution the 'mission'?

Or is the board blowing off responsible, cautious management in pursuit of runaway profit?

I'm not following the issue or the characters (I'm not a personality-worshipping Silicon Valley celebrity watcher), so I honestly have no idea what all the veiled messages are saying.


> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity

This outlines their mission.

> In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission

This acknowledges that they had to adjust the structure in an attempt to keep focusing on the mission going forward.

> The majority of the board is independent, and the independent directors do not hold equity in OpenAI

This tells us that the board is not acting with the goal of maximizing monetary profit.

> While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter

This tells us that the company has been growing in the traditional sense, but that focus on the core mission has lately been lost, and the board is aiming to regain it.


Given that OpenAI is a 501(c)(3) and Sam Altman has a history in the VC world, it seems more plausible to me that Altman was on the pro-"make it a normal business" side.


I'm more concerned with whether the board wants faster and more open development of AI.

Before he left, Sam had been doom-and-glooming about AI advancement.


OpenAI wants to achieve responsible AGI without it being narrowly captured by any one powerful entity (corporate or political).

Their version of open is not “open to regular folks” but “stewarded to prevent capture and misuse”.

That they don’t recognize themselves and their self-importance as a form of capture is its own worrisome irony.


> Is the board being cautious about irresponsible growth in AI and lack of ethical governance? Is caution the 'mission'?

> Or is the board blowing off responsible, cautious management in pursuit of runaway profit?

To summarize the context:

The chartered mission is to lead the safe pursuit of AGI. The leader of the “coup” is a scientist who feels passionately about that mission. The CEO who was fired was a venture capitalist responsible for the recent product launches and lucrative commercial agreements.

Everybody involved is still being vague about the underlying dispute, but (so far) it plausibly reads that a passionate and anxious idealist felt like his opportunity to fulfill a mission critical to the future of humanity was being disrupted by a profit-seeking dealmaker.

(Whether the researcher's purported worry is sound or not is likely irrelevant to explaining the news itself. It's a closely held private org, so typical internal disagreements can have major practical consequences.)


Reading the transcript of Bill Gates's (Nov 16) interview with Yejin Choi (University of Washington / Allen Institute for Artificial Intelligence), one gets the impression that AI is still in its teething stages:

YEJIN CHOI: Usually, the smaller models cannot win over ChatGPT in all dimensions, but if you have a target task, like a math tutoring, I do believe that definitely, not only you can close the gap with larger models, you can actually surpass the larger model’s capability by specializing on it. This is totally doable, and I believe in it.

BILL GATES: Certainly for something like drug discovery, knowing English isn’t necessary. It’s kind of weird, these models are so big that very few people get to probe them or change them in some way. And yet, in the world of Computer Science, the majority of everything that was ever invented was invented in universities. To not have this in a form that people can play around with, and take a hundred different approaches to play around with, we have to find some way to fix that, to let universities be pushing these things, and looking inside these things.

YEJIN CHOI: I couldn’t agree with you more. It cannot be very healthy to see this concentration of powers so that the major AI is only held by a few tech companies, and nobody knows what’s going on under the hood. That’s just not healthy. Especially when it is extremely likely that there is a moderate size solution that is open, that people can investigate and better understand and even better control, actually. Because if you open it, it’s so much easier for you to adapt it into your custom use cases, compared to the current way of using GPT-4, which all that you can do is sort of a prompt engineering, and then hope that it understood what you meant.

https://assets.gatesnotes.com/8a5ac0b3-6095-00af-c50a-89056f...


I have kind of mixed feelings about that. I never felt very comfortable with Microsoft basically running the AI show in practice. I am just a human, and Microsoft as a company got its misanthropic DNA from Bill, and seems too eager to embark on a crusade to destroy middle-class professions (including, but not restricted to, programming). Yes, I get that at this point in the road I am senior enough that AI, at least for some time, won't be a threat to my employability but rather a multiplier of my productivity. But a) this can and will change soon, maybe before I retire, and b) you have to think about people who are just starting out in a profession that is probably one of the last remaining paths to some kind of social mobility.

On the other hand, on principle, I abhor this kind of politics, backstabbing, and mobbing. I prefer it when people dispute in the open, not in a passive-aggressive way. This is too toxic for my tastes.

And in the end, if the goal is to promote safety, it will probably backfire. It is obvious that there will be plenty of capital for whatever venture Sam decides to start, and that he will easily poach lots of current OpenAI employees to go unfettered in the direction OpenAI's board fears. Hell, for all we know, he could just end up at Microsoft, and then we are all fucked.


If true, this sounds like a good thing, albeit poorly handled. AGI is a world-changing technology, the likes of which we haven't seen since "the internet." Being safety-first and deliberate about it, with a lesser focus on profit, seems like the right call.


Being safety first means others with more nefarious values may achieve it first. AGI will eventually arrive at the same singularity point, but between initial creation and that point, it could be extremely destructive. What would happen if authoritarians authored it first?

Would it have been better to go safety first with nuclear weapons, so that Germany might have developed them first in WW2?

I find that by going very fast and in the open, humanity has the best chance of producing safe AI. Slowing down and making it closed will result in a much worse outcome. Who gets to make the decision on who gets to utilize that kind of knowledge power? Why is such a small group of people being given power over all of the collective human sacrifices and knowledge to get to where we all are?


If we’d done that with the Internet, we’d still have dialup while we debated the safety of higher speeds and the danger of viruses spreading faster or whatever.

I’m not arguing that safety is nonsense. I’m arguing that humanity doesn’t know how to both move and be safe, at least at scale. We can do it as individuals when we exercise caution in an endeavor, but in groups we either move or we do not move. When we admit the cautionary forces it becomes a vetocracy of fear and everything stops.

This is why it usually takes an unhinged maniac to make progress at anything, like the psycho behind the company launching the rocket today.

We are not good at this.

Edit: forgot to add that things are even more complicated because sometimes there are hidden risks to not acting. Take nuclear power, which we basically paused when we encountered the risks. By doing that we blew our chance to head off climate change.


Be careful with reasoning by analogy and mistaking what is basically an ideology, namely the progress ideology, for some kind of natural law.


There must be some middle ground between reckless advancement without safeguards and hamstringing the technology through an abundance of caution.


OK, I was among those who thought that Altman did something wrong and the board was trying to do some damage control, but it actually seems that the board is simply out of its mind. This was _indeed_ another possibility, but it seemed much less likely, given Altman's slightly megalomania-tainted personality.

I think the board just signed its own resignation but doesn't realize it yet. Microsoft didn't pay 10 billion dollars for nice RL benchmarks and a better H-index. They are after GPT-x's capabilities, to infuse them across their entire portfolio. I predict they will find a way to align OpenAI back with their interest (i.e. $$$), and that probably means bringing back Altman, Brockman, and other leading engineers at the expense of the board that fired them in the first place.

Of course, the board might refuse, but Microsoft could then pull out of the agreement, effectively causing OpenAI to go bankrupt. They would also be the first bidders if it were on sale ;-)


Sorry, I'm not sure I understand what Microsoft could possibly do if the board just stays firm on its decision. The non-profit owns the majority of the for-profit, and the board has full control over the non-profit, right? So at best Microsoft could invest in whatever Sam does next, but their $10b is now under OpenAI's control and there is nothing they can do about it, no?


Microsoft didn't pay for the board's company, or else they would have a seat. The board is for the non-profit, which controls the for-profit.


This wasn’t really a board coup; it was an employee revolt. If you read the anonymous account from Reddit, it’s clear that the issue is that the employees didn’t have agency in their work or the running of the company, working long hours with unreasonable deadlines while the CEO was effectively publicly taking credit for their work. The employees were just instruments of the CEO, and some significant portion of them didn’t want to accept that.


Can you link the Reddit source(s)?



> Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

I'm very surprised to read that the chairman of the board did not know of the decision until the last moment. It looks like the other directors made the decision without including the chairman in any collective discussion.

Isn't it a breach of directors' legal duties to hold an effective board meeting without inviting (with adequate notice) all members of the board, such as its chairman, Greg, to ensure collective responsibility for its deliberations and decisions?

E.g. from https://www.delawareinc.com/blog/calling-and-holding-board-m...

>> Properly Called / Formalities Satisfied: A corporation’s bylaws ordinarily provide who can call a corporate board meeting, the notice directors must receive (which the directors can waive), the way notice of a meeting must be given (email and/or mail), the required contents of the notice, and any provisions addressing the location of meetings (permitted distance from principal offices). Compliance with these procedural requirements is important -- board action taken at an improperly held meeting can be deemed invalid and ineffective


Yeah, the handling is very suspect and definitely not kosher.

However, I would be very surprised if the board directors made a legal mistake like that, especially given their professional tenure. They likely discussed and cleared this with world-class legal advisors beforehand as well. So I reckon something in the Articles of Association or the Shareholders’ Agreement of OpenAI specifically allows this.


Your quote answers your own question:

> A corporation’s bylaws ordinarily provide

You would need to read the bylaws to determine if the meeting was proper or not.


Perhaps. The bylaws provide details such as who can call a meeting, how to hold written votes without a meeting, etc., but not the fact that the board of directors has legal protection as a collectively responsible group, shielded from individual liability for its decisions, only if it acts as a board rather than each member doing their own thing.

An improperly conducted meeting which excluded some board members on such a substantial decision could open OpenAI, or even the individual directors who took the decision, to losing a lawsuit from parties injured by the decision, such as Microsoft, Altman, Brockman and others.

A lawsuit seems unlikely, but its possibility creates pressures on OpenAI that wouldn't be present if the decisions had been taken in a more "above board" manner (see what I did there). For example, hypothetically, if there were enforceable non-disclosure and/or non-compete contracts between OpenAI and Altman that would prevent Microsoft from hiring him straight away to work directly on a copy of OpenAI's technology, the poorly conducted board decision would boost Microsoft's and Altman's confidence to ignore those contracts, knowing they could counter-sue at any time. Another example: it could empower Microsoft to force OpenAI to increase Microsoft's share in the for-profit subsidiary above 50%, effectively transferring full control of OpenAI's technical assets, including data, to Microsoft.


They dropped the chairman of the board before, or shortly after, this decision.


What should happen next in the AI world is that humanity itself should revolt against AI -- the problem with it being that:

(A) It is an immensely powerful tool that cannot be controlled and will be used by tech companies to create a sort of neo-feudalism where all jobs, including creative ones, can be accomplished by AI. Anyone wishing to participate in this advanced technological economy will need to pay.

(B) It will remove the need for other human beings. Think of how many times you met someone and developed a connection with them because they could do something you could not. Now, those connections will be a thing of the past.

(C) It will be used by criminals to terrorize ordinary people with very advanced scams such as impersonation, identity theft, etc.

We should revolt against AI and destroy it. There is no controlling it.


Don't count on people understanding the risks of something when they expect to make bucketloads out of the thing. Lots of people here are AI maximalists because AI gave new life to the tired VC-backed startup hamster wheel.


Well, you are right.

We have created a system of "artificial selection" where only the ideas that net the greatest short-term profit flourish. Unfortunately, this has been taken to a global scale by modern tech companies, and this is the sort of scenario where short-term thinking can do the most damage.

I realize that almost everyone here will be against me, but I figure it is a good place to start a debate and face the fiercest opponents.


I feel the chances for success are slim to none, but I'm surprised it has taken this long to see someone share this opinion. This is how I have felt for some time. I feel for the children born today.


Feel free to reach out and talk about it. My email is in my profile.


And how exactly do you propose that we do that?

Even if we pass laws forbidding AI, you still acknowledge the existence of criminals. They aren't going to stop just because there's a law against it. (Neither will foreign governments. If Russian disinformation is messing with the US's internal political conversations - and I suspect it is - then a Russian disinformation AI could be that on steroids. Now imagine that the US has stopped their own AI research, and so is unable to counter or control this.)


I propose a system or consortium of people that are united by a few simple rules. To summarize: (1) don't use AI and (2) preferentially support people who minimize their use of AI as much as possible.

This consortium will also promote sustainable living in smaller communities, relying more on local talent and basic subsistence as opposed to global corporations. It will be more a collection of local groups that rely mostly on themselves.

Certainly, a good personal start for people with technology skills is: do not contribute to AI yourself, and don't use it for God's sake.


Calling AI "feudalism" feels to me a bit like how Karl Marx dismissed capitalism[0] as being basically the same as feudalism but with different faces when writing the Communist Manifesto.

It might be a new and exciting kind of bad, but I doubt it will be anything that can reasonably be described as neo-feudalism.

[0] and all the other things he didn't like


Well, if a person wants to integrate with society and make any sort of decent living as society defines it, they already have to pay an enormous amount of money to maintain various electronic devices like a computer.

And unlike a tractor or a car, computers and phones are upgraded so frequently that we are locked into continually upgrading. Several software suites are becoming subscription-only, including Adobe which is required for a lot of creative work, especially in the publishing and illustrating world. The other day I went into a restaurant and they didn't even have a menu but required a QR code to read the menu!

Now what if a community didn't want to pay the fee? How hard would it be to disconnect from being reliant on tech companies?

N.B. It is important to take into account the sheer power of tech companies compared to traditional businesses you might rely on like the hardware store. The scope and range of their power is what makes it more like feudalism compared to how traditional societies were inter-reliant.


These companies exist by agreement with societies and limiting their power can be done relatively easily if societies want to.


Unfortunately, these companies exist by creating incremental inventions that at first are beneficial but only start to show their negative effects decades later. The initial improvements are somewhat addictive like a drug, so that limiting their power is akin to asking drug addicts to stop their habit.


Stopping the habit is not required when limiting their power.


Marx does not say this at all in the Communist Manifesto. And further, if anything he was in awe of capitalism. He didn't write those 3000+ pages investigating and articulating capitalism because he simply "didn't like it."

He saw it as a crucial and huge moment of history that we will one day evolve out of, not some bad thing that bad guys do.

Say what you will, but it's quite funny to describe him as "dismissing" anything; he quite literally would not shut up about it. Marx maybe dismissed Proudhon, but not Smith! Not Ricardo! Not even Malthus, really.

Drawing parallels is good, but you gotta make sure you know what you are talking about!


> Marx does not say this at all in the Communist Manifesto. And further, if anything he was in awe of capitalism

As I read it, the Communist Manifesto puts each positive aspect of capitalism in the past tense, that it was an improvement over feudalism, but already in a bad state by his time. For example:

"""

A similar movement is going on before our own eyes. Modern bourgeois society with its relations of production, of exchange, and of property, a society that has conjured up such gigantic means of production and of exchange, is like the sorcerer, who is no longer able to control the powers of the nether world whom he has called up by his spells. [… I've cut out most of this paragraph, it's quite long …]. The conditions of bourgeois society are too narrow to comprise the wealth created by them. And how does the bourgeoisie get over these crises? On the one hand by enforced destruction of a mass of productive forces; on the other, by the conquest of new markets, and by the more thorough exploitation of the old ones. That is to say, by paving the way for more extensive and more destructive crises, and by diminishing the means whereby crises are prevented.

The weapons with which the bourgeoisie felled feudalism to the ground are now turned against the bourgeoisie itself.

But not only has the bourgeoisie forged the weapons that bring death to itself; it has also called into existence the men who are to wield those weapons —the modern working class —the proletarians.

"""

> Say what you will but its quite funny to describe him as "dismissing" anything; he quite literally would not not shut up about it.

I had to look up the word to make sure I wasn't misusing it, seems you and I are both using legit but incompatible senses of "dismiss" here. I mean "reject, deny, repudiate", you seem to think I meant "brush off, shrug off, forget" — the latter meanings would be, as you say, quite absurd. :)


I haven't read enough of the comments here, but I imagine that among the wildest speculations there must be some about the hidden progress of GPT-5 having a role in this. We are not talking about a random company; there are many factors that distinguish it from others.


It might make sense to think about three forces in tension, not two: safety, business, and IP. The copyright lawsuits are just getting started and will probably continue until the courts can set a precedent.


I don’t know why this is on the front page, the article doesn’t answer the question in the title.


Because it’s a great summary for people not following Tweets live.


People upvoting before reading, and the author using clickbait.


I hear about an "AI safety" split, but what exactly does that mean? I feel like there's the existential risk "AI is going to escape and take over the world" folks and the more mundane "AI is going to spout racism and propaganda" types. Is that the division that I'm hearing about?


...


What did _Sam_ achieve, versus what did the actual Google Brain people and the other more deeply technical researchers who formed OpenAI achieve?

I’m far from all this, so that’s a genuine question.


flagged for clickbait.


> Unsurprisingly, Twitter, Signal, and Slack channels were flooded with speculation as to the probable reasons for such an abrupt decision that was conveyed in a tone that, in the business world, is the equivalent of a backstab.

Signal has channels?


A channel is a medium on which communication can happen.


Irony abounds. Fired for lack of communication, yet the board did not communicate the firing to the person being fired until five minutes before the news was publicly released.

Then, the Chairman of the Board was demoted without being consulted!

Yes, I can guess whose side the 'lack of communication' issue stems from.


That doesn't make much sense, because day-to-day operations are of course going to be different from subsequent exceptional circumstances, by definition.


One of the reasons execs get the big paychecks is being subject to stuff like that. Honestly, I don't see much of a problem here.


What about chairmen? Are they supposed to swallow the insult of essentially being dismissed from the board, then accept a job replacing their friend who was fired?

All while reeling from the news of the first firing? Without being asked, with their 'reassignment' announced in a press release?

That is as tone-deaf/blindingly fumble-fingered a move as any I've ever seen.

Particularly since it failed spectacularly.

I'm not going to believe the board is skilled at communicating normal business concerns but somehow screwed this up epically as some executive special case.



