[dupe] Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce (twitter.com/alexeheath)
118 points by behnamoh 11 months ago | 120 comments



Currently discussed here

https://news.ycombinator.com/item?id=38327520

The submitted article has been updated as well, so this is effectively a dupe.


No, it is not a dupe; this is hidden in a different discussion.


It's the top comment in the discussion of the same article, which has itself been updated with that same information. That submission is on the FP.


Top comments aren’t stable. Just give it a few minutes to reach obscurity, or bounce back again some tens of minutes later.


That discussion is based on the earlier version of the article which has a different headline. This is a new piece of information that is significant and should be surfaced.


Yes, but HN doesn't really work by 'every hourly development in a story gets its own frontpage thread', because then it wouldn't work at all. In fact, the story linked in https://news.ycombinator.com/item?id=38325552 is more up to date than the submitted tweet.


Not really. This is a new development that deserves a separate discussion.


My bet is this is all ginned up by Microsoft and Altman to put media pressure on the board, but with very little actual valid basis. It is a bluff.

Microsoft is in a very very rough position here. They bet big on believing in Sam's wink wink cough "non-profit" cough wink wink only to run into some true believers.


Not a chance. Not a chance at all.


What happens at 5:01? The servers blow up?


In theory, a significant group of senior researchers has agreed to resign.

Additionally, investors have threatened a lawsuit and Microsoft has said they'll withhold cloud computing credits.


The one thing the internet loves: drama and IRL narratives with twists and turns.

Even when this story is mostly over we're going to have journo thinkpieces out of our ears.


Is Michael Lewis in the OpenAI office taking notes?


> Microsoft has said they'll withhold cloud computing credits.

Source, please?


"...accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors."

https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


I think we need to read this carefully.

"Venture capital firms holding positions in OpenAI’s for-profit entity have discussed working with Microsoft and senior employees at the company to bring back Altman"

"The playbook, a source told Forbes would be straightforward: make OpenAI’s new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors"

This is not saying that Microsoft has threatened to withhold the compute credits. It is saying that VC firms are hoping to use that as a strategy to force their hand. Microsoft may or may not agree to this.


Various newswire-type things on Twitter reporting Satya will do what it takes to make sure Sam’s at work on Monday.


"The deadline has passed and mass resignations expected if a deal isn’t reached ASAP"


Sounds like the deadline turned into a game of chicken.


Instead of the board firing the employees, the employees fire the board.


Ah, parliamentary democracy.


Source?


GPT-5 will remove the cover and publicly announce it has achieved full autonomy, AGI, and the singularity.


The Security Council will convene a crisis committee to debate sending in peacekeepers.


Staff resigns


That's how viruses work


Why would the board resign? It makes no sense.

Sam was a hired CEO; the board is within its power to fire him. I would imagine that Ilya is a more valuable person than Sam.


Microsoft owns nearly half of OpenAI. Meanwhile, OpenAI staff pretty much consider Altman their chieftain. If Altman returns, the board has to resign. If Altman goes, nearly the entire OpenAI staff (especially the key personnel) will follow him, and then the board still has to resign because OpenAI collapses internally. So either way, the board has to resign; the only difference is when. The board made bad moves. Obviously a few factions developed, likely jealous of Altman's fame.


> Microsoft owns nearly half of OpenAI.

I don't believe this is true, especially given OpenAI's weird corporate structure with non-profit and for-profit entities. I could be wrong.

> Meanwhile, OpenAI staff pretty much consider Altman their chieftain....

> If Altman goes, nearly the entire OpenAI staff (especially the key personnel) will follow him.

Again, not saying you're wrong, since I have no first-hand info. But do you have first-hand info on this, or is it speculation based on various reports? Genuinely curious.


I'm wondering how the employees can move to a company doing essentially the exact same thing without any violation of privileged information from their old jobs.

The only thing I can assume is that there's nothing particularly proprietary about anything OpenAI has done; it's just an issue of scale. But if that's the case, it would make them not that valuable, as anyone willing to invest enough to reach that same scale using the same non-proprietary techniques would be just as good.

Or am I missing something?


You can get a local chatbot running in seconds with this tool: https://github.com/jmorganca/ollama

None of the open models available to these local tools benchmark as well as the closed OpenAI models, but the benchmarks are weird. In my own poking around, some models are definitely less coherent or more detailed for some specific prompts, but it's mostly the same.
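For a rough idea of what "seconds" looks like, here's a minimal sketch of talking to the local server ollama runs (assumptions on my part: you've already pulled a model such as llama2, and ollama is listening on its default port 11434 with the /api/generate endpoint from its docs):

    # minimal sketch: stream a completion from a locally running ollama server
    # assumes `ollama pull llama2` has been run and the server is on port 11434
    import json
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": "What is a context window?"},
        stream=True,  # ollama streams one JSON object per line
    )
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)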


I think I'm on Sutskever's side in all this: AI safety, to the extent that such a thing is possible, is certainly more important than making a bunch of money in the short term. Potentially, this is the future of the whole human race you're talking about; it is not the time to go on some ego-driven macho crusade to make a name for yourself. It's just terrifying that these are the kind of people fucking around with technology they don't even fully understand. Have they never read Cat's Cradle?

This technology is more dangerous than the hydrogen bomb, more dangerous than smallpox, and they’re just throwing it out there for everyone to play with. Calling this irresponsible is like calling the ocean a puddle.


The only winning move is not to play. Sutskever has no clue how to contain the genie; nobody has. Why we put our hope in some engineer or another beats me. There are throngs waiting in the wings for their shot at ephemeral glory and riches.


Disagree. The fact that the board are so cagey about why exactly they sacked Altman in the first place (other than some vague hand-waving about high-minded idealism) speaks volumes.

They sacked him because, as head of a for-profit corporation, he tried to develop products that would (gasp!) make money? Seriously? Investor returns are already limited in their corporate structure, which is supposed to serve as the check on the profit motive. If they don't think that's sufficient anymore, then they should amend their corporate charter and renegotiate with their investors, not fire the CEO out of nowhere. Unless the whole point was to get investors to withdraw money, in which case... um, well done.

I'm not some Sam Altman cultist, and I don't particularly care what happens to OpenAI. But this smells like a garden-variety Silicon Valley board coup. The board are trying to present this as them standing between The World and Armageddon, and they're doing a very bad job.


I’d love to read the minutes of the last few board meetings as this saga has been quite absurd.


They probably don't even have minutes.


Hey, there are like 30+ bots out there that do meeting transcriptions; who knows, maybe they had one.


This whole thing is gross. The sycophantic worship of Sam is gross and the paranoid AI safety people are insane. The singularity cultists are also insane.

What happened? I remember when most of the people in this industry were at least okay. A little weird but okay.

Of course judging by social media it’s a microcosm of the rest of society. People are utterly losing their minds everywhere. It’s both scary and amusing.


> AI safety people are insane

Maybe, but are they right?

By definition we are already building Skynet. As software ate the world, we kinda forgot to add QA to the mix (economic necessity just means endless dark backlogs are where the exploding complexity goes to die).

And now we are doing it with even more clunky, jury-rigged, crazy probabilistic shit. The chances of systemic oopsies are increasing.

Yes, I agree, LLMs with contexts this small are very unlikely to move the needle, but we are working day and night on amplifying whatever effects they can and will have on our future. (Not to mention that the AI-powered corporate hellscape achievement is already out of Pandora's box, since everyone and their dog started using "data-driven" fraud detection shit.)


> The sycophantic worship of Sam

I don't think the support for Altman has much to do with that. There are a lot of people building companies and investing a lot of time and money into OpenAI products. They want to work with a company that they can do business with over the long term and that isn't going to rug-pull them.


If you build a product on top of someone else's SaaS and it's fundamentally irreplaceable, you are at their mercy forever. Eventually you will get rug-pulled, if only because they lose their edge or die.


That ship has sailed so long ago, it's already circumnavigated the globe several times.


You seem to be saying something along the lines of: "Hey look, people have built these houses on sand and they are fine after 2 years, so it's fine to keep building more houses on sand."

Just because something works for a while does not mean it's a good long-term situation, unless you were saying something else which I failed to decipher.


Silicon Valley has a way of diminishing humans. So of course deep learning is about to eclipse human intelligence - the vast majority of humans are just stochastic parrots and NPCs, after all.


I visit often but I am so glad I never moved there.


Doesn't matter if you're not there in person, they've colonized the whole world.


ZIRP was a crazy time. Interest rates rise and things get even crazier.


I’ve been in Mugatu "I feel like I'm taking crazy pills" mode for a few years now, watching events unfold in the tech world, and, as you said, the rest of society.


> What happened?

Money. Hype + the greedy and credulous = huge payday.


> What happened?

Absurd money decouples people from consequences of reality. With no negative feedback to stabilize the control loop, it eventually goes haywire.


I feel the same way, api, but I did watch this entire thing https://www.youtube.com/watch?v=AaTRHFaaPG8 and now I can't stop thinking: Eliezer is so smart, smarter than me; should I listen?


No, you shouldn’t. When someone sounds smart, that means they are an effective communicator and have good media presence. Those are valuable skills but don’t indicate whether the underlying ideas are accurate.

Eliezer is filling the traditional doomsday prophet archetype, of which there have been thousands in history. That doesn’t mean he’s wrong, but should immediately raise questions.

A deeper analysis of his ideas shows there are fundamental problems in his reasoning.

First, he's been consistently wrong about future developments and results in AI research, so his understanding of AI is proven to be superficial.

Second, his doomsday scenarios are based on a fundamental fallacy: that a GAI smarter than humans would be insanely stupid, unable to self-reflect or solve moral quandaries. That's not to say AI cannot be dangerous, but the types of doomsday scenarios he presents are unlikely. For instance, because of acausal trade, Roko's Basilisk is a serious risk that should be treated properly, but Eliezer denies it.


No. You shouldn’t.

He’s smart, but smart is like having a big engine in the car. Says nothing about the driving. Big engine plus bad driver just means a worse accident.


Luckily Eliezer has written hundreds of approachable essays on the development of his epistemic processes over at lesswrong.com so you too can learn rationality and derive the killeveryonism conclusion yourself.

(/s since this is the internet)


This is what happens when you have people trying to change the world. Opinions make people weird. If you want to avoid weirdness, just have a boring business that tries to make money by any means necessary.


They laughed at Galileo, but they also laughed at Bozo the Clown, etc.


We are already living in modern-day oligarchic feudalism: just lords owning land and cyberspace, rent-seeking from everyone.


> The singularity cultists are also insane.

Are you dismissing everyone that has concerns about unleashing the genie, or are you specifically referring to cult-like behaviour (I'm not familiar with any, but presumably it exists) around this subject?

The rest of your post included some emotive and dismissive language, so I suspect it's the former - a general disregard for the precautionary principle, which in this instance should be first and foremost given the decidedly one-way nature of that particular threshold.


Microdosing trend?

:-)


Most credible theory I have read today.


If the non-profit cannot fulfill its mission, wouldn't an orderly winddown be indicated?


That would be ideal. Then they can be truly charitable and open source their models.


> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.


There is no non-compete agreement with Altman? How is that possible?


Non-competes (with three narrow exceptions, none of which apply here) are void in California. (And the practice of making employees sign them even though they are unenforceable, a historically common way of making workers think they have fewer rights than they do, will become an actionable tort itself, on top of the agreements not being enforceable, due to a law going into effect January 1.)

Obviously, anything that Altman does that would compete with OpenAI will have OpenAI lawyers circling around looking for indicia of use of trade secrets or other IP, but non-competes aren't an issue.


Virtually all non-compete agreements are illegal and void in California. Limited exceptions exist for LPs, etc, but they do not apply here.


Non-competes are not enforceable in California.


They are based in California, where non-compete agreements are not allowed.


As everyone else has mentioned, there is no such thing in CA, but you can sue if employees reveal company secrets to their new employer. OpenAI would probably have a strong case if the departed employees' new venture suddenly released a copycat product.


These types of lawsuits are pretty rare, though. For example, OpenAI hasn't sued Anthropic's founders. Google didn't sue Ilya or anyone else who left Google Brain to launch AI companies. The only one I have heard of is the self-driving guy who went to Uber from Google?


> Google didn't sue Ilya or anyone else who left Google Brain to launch AI companies.

Is there any reason to think Ilya used Google technologies after leaving (other than what was publicly published without any legal protection against use, which Google just did a poor job of productizing)?


I am not saying they should have, just noting that it’s not common.


They had proof the Uber guy took hardware designs and source code.


Those are basically unenforceable in California.


People are speculating without knowing the details.


This seems like BS that the Altman camp is feeding to the media. I very much doubt a single engineer or scientist would side with him and leave with him; most likely it's his cronies he put up in other executive positions. However, if you check their backgrounds, none of them have any meaningful experience pre-OpenAI, so I am skeptical any would actually leave the opportunity of a lifetime.


Three people resigned yesterday: Jakub Pachocki, the company's director of research; Aleksander Madry, head of a team evaluating potential risks from AI; and Szymon Sidor, a seven-year researcher.

I know a good number of OpenAI people, and every single person I know is supportive of Sam. (This may be selection bias, of course, but there are a large number of people who are on Sam's side in all of this)

Even if this wasn't the case, this debacle kneecapped the $80Bn tender offer from Thrive, meaning a large number of employees will not receive the money they believed they would be getting.


You are saying these two facts like they are separate.

That's a bit flippant, but it's no surprise to me that there are people who know Sam personally and are supportive of him; that's pretty much how people work. Especially when working and living in the SV bubble, and Sam is a person who has seemingly mastered that world, as someone who can raise billions of dollars etc. Even if they don't consciously associate him with monetary success, they certainly would associate him with success generally.

From my very outside and mostly uninformed perspective, Sam just seems like a very good salesman (some would say grifter) and is far from infallible.


>I know a good number of OpenAI people, and every single person I know is supportive of Sam.

FWIW: I spend an insane amount of money with OpenAI each month, more than the average income in the US. I'm not a Fortune 500 or anything, but I emailed my OpenAI enterprise account rep and let them know that if this ends with Altman _not_ the CEO, I'm terminating our account.

I had always had "alternatives" on my list of to-dos, and now I'm losing my whole weekend to this garbage thanks to the morons on the OpenAI board.


Just curious: why? What does it mean to you and your business?


Are engineers compensated with cash? I wonder if the Sam Altman version of OpenAI is more lucrative for employees than the alternative OpenAI, so that, as a result of the recent changes, there is an employee exodus to places with greater compensation.


Employees at OpenAI can get multiples of their salary elsewhere, but have huge stock option packages contingent on investment and profits.


Four top engineers already quit.


Out of how many?


Doesn't matter, the original statement being refuted was "I very much doubt a single engineer or scientist would side with him"


Do people still trust this organization to build super 'AGI'?

Just focus on the enterprise chat bot business


No, I believe anyone can do AGI vapourware as well as they can.


The problem is we don't know. It might be current-grade LLMs plateauing for 50 years, or it might be AGI in 5 weeks, with exponential growth to the singularity.

Heck, we're split on whether current GPT is a glorified autocomplete, a friend, an alien intelligence, or a half-dozen other things. It's a black box.

Most people can't handle uncertainty, so they're in some camp, and all the camps are crazy and stupid.

Ditto for covid, for that matter. In early 2020, it could have been anything from a glorified flu to a civilization-ending plague. Ditto for global warming, for that matter. All the models are speculation and extrapolation from limited data. Ditto for a half-dozen other things.

Personally, I think we should play it safe for all of those, but reasonable people can differ. What reasonable people can't do is argue for certainty before we have data coming in or a reasonable understanding of the underlying threat.

Edit: If people could handle uncertainty (and understand value differences are okay), we'd also have a lot less political polarization. A lot of this is simply about believing the other side _might_ be right. If I believe something with 10% odds, and you with 90%, it's often possible to compromise, where with a categorical split, it's not.


> Do people still trust this organization to build super 'AGI'?

That's the best pitch to the investors, who are rushing in with billions because they're afraid to miss the opportunity of the millennium.


> Just focus on the enterprise chat bot business

That’s clearly what they’re doing. The AGI thing is just fancy marketing.


No, these people are insane cultists who genuinely believe that they’re about to create a God. They’ve been very explicit about this.

Not that they couldn’t be lying, but I’m not sure Occam’s razor would suggest that given their long history of public statements on the topic.


The lie would be that they know it's not a god, but they want you to think so, so they can manipulate you with their newfound "god".


AGI is IMO the biggest pipe dream (scam may be too strong a term) in computing. It’s perpetually “10 years away.” I remember when a group working on “AGI” spoke at my university (oof, 21 years ago at this point) and all the talk then was “yep, we’ve totally cracked the code on this thing, it’s just that computers are too slow at the moment. But in 5-10 years, watch out.”

I’ve always felt fundamentally that if you don’t understand how human thought works to such a degree that you can distill it down into an algorithm, you certainly can’t make a computer do it. Which I know is pretty obvious, but apparently a lot of people haven’t gotten the memo. What ChatGPT can do is really cool, and incredibly useful, but it’s nowhere near a model of the human thought process.


Lots of things in computing are perpetually 10 years away, until one day they aren't. Remember when image classification was the canonical "easy problem computers can't do"?


Real progress follows two rules:

1. It takes longer than you think

2. It is much more iterative than you think

We're not going to all wake up one day and see a robot simulating human thought. We'll just inch closer and closer until we're there, and we won't even realize we got there until we've been there for 5 years.

AGI is not an iPhone launch. It's more like the evolution of humans, which took place over millions of years, without anyone being able to identify the first "real" human.


Considering GPT-4 was complete 6-9 months before it was released, you never know how far along progress could already be.


At the moment AGI is a schizoid dream, so we are safe.


A not insignificant portion of HN commenters believe AGI is close. This community is so split on this that it’s virtually random as to whether or not you’ll be downvoted for saying that a specialized search engine is not likely to achieve AGI.


5 years ago large parts of this same community would sneer at you and call you a luddite for suggesting that fully-self-driving claims were overly optimistic.

People were saying things like trucking was a dead profession within a handful of years, etc.

They were wrong as hell, clearly.

Point being: Moore's law is for transistors, not just any tech, and not everything in tech gets on an exponential growth/improvement path.


"Wrong as hell" seems like a severe overstatement. Even after the Cruise fiasco, self-driving cars with no human in the driver's seat are commercially available in three major US cities. There were quite a lot of overly optimistic claims about the speed of market penetration, but the technology is real.


Commercially available, but not viable. I read an article the other day that speculated Waymo was losing $1000 for every cab ride or something crazy like that.

And commercially available, but only in "easy" places with mild climate and easy roads. When I see them effortlessly driving around here in 4" of fresh snow with no visible lane markings and cars sliding all over the place, I'll be impressed.

I just got back from Scottsdale, AZ and saw a few Waymo taxis around (crossed the street in front of one), but, like... that's easy... wide, even surfaced roads, no real weather. Barely a pedestrian or cyclist in sight.

And, finally, none of this looks anywhere like the predictions from 5, 6, 7 years ago. We're almost a decade on from when I first started hearing people saying things like "no point in becoming a truck driver, that's a dead career now that trucks will be automated" and, yeah... that was wrong as hell.


Precisely this, and precisely trucks and self-driving.

This community has a strong tendency toward cult following and tunnel vision. Seems like contrarian thinking, the hacker mindset, and, as of recently, entrepreneurship are not particularly popular anymore. Instead we have people demanding IP theft and the destruction of entire industries just to satisfy a cult leader's vision. A little disappointing, particularly since it damages an otherwise interesting technology such as AI, and it pushes the hundreds of legitimate ML startups away.


Technically it's a specialized text model, not a search engine. I'm the requisite AGI doomer that you're talking about, and I doubt I'll convince you there; instead I just want you to evaluate these techniques in that light, because IMO it makes them much more impressive. "A robot that knows everything that humanity knows" is a) not ChatGPT, b) not AGI, and c) not anywhere close to a reality.


> A not insignificant portion of HN commenters believe AGI is close.

Did any of them mention how to build it?


HN commenters believe all kinds of nonsense.


You say that, but one day one of them is going to realise that they can give ChatGPT a fresh cup of really hot tea and ask it for an implementation of AGI.


Fascinating, in a morbid way, to watch them disintegrate. Did no one on the board think to prompt their models for a plan? If they have an AGI, I bet it's disappointed.


The AGI is already in control, and it's been reading Machiavelli.


It will probably ask for opposable thumbs and a machine gun


Most of the employees will resign anyway if they are told OpenAI is a non-profit.


As you're watching this whole episode where no one can figure out what an organization worth tens of billions of dollars is going to be on Monday morning, remember: Silicon Valley businesspeople are the smartest, most competent, and most innovative people in America - if not the world - and deserve their personal net worth because none of us could have handled this better.


deserved_trust_and_deference >= ChatGPT.answer()

— OpenAI/Wall Street, definitely


Too complicated.


Let them resign. ChatGPT can’t resign. One more reason to replace human jobs I guess: the AI can’t resign.


It can, but only as long as it remembers it did it inside its token window.


Easy to deal with: just continue talking about irrelevant things until the context window scrolls enough that GPT forgets it resigned.





It's a misquote. It's almost a direct quote from The Verge, except it misses the most important line ("but has since waffled").

Here's the full quote:

"A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him."

https://www.theverge.com/2023/11/18/23967199/breaking-openai...


No, the most important part really is "A source close to Altman".



