
Doesn't justify the hostile language and the urgent, last-minute timing (partners were notified just minutes before the press release). They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.

A mere disagreement over direction would have been handled with "Sam is retiring in three months to spend more time with his family; we thank him for all his work." And it surely would have been decided months in advance of being announced.




> last-minute timing

Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.


Everything points towards this being last minute both for people outside and people inside. Microsoft caught with their pants down, announcement before markets closed rather than just waiting a bit, and so on.


Announcing something huge like this before market close can't be interpreted as anything other than either a huge timing mistake or a massive feeling of urgency.


I find it hard to believe that the board of OpenAI isn't smart, savvy and self-interested enough to know that not delaying the announcement by an hour or so is the wrong move. That leads me to believe that yes, this was something big and worthy enough of being announced with that timing, and that it was probably not a mistake.


They also said Greg was going to stay at the company, and then he immediately quit. I find it very hard to believe that smart, savvy, and self-interested are adjectives that apply to a board that doesn't know what its own chairman thinks.


Even smart, savvy, and self-interested people can't always predict what individual humans are going to do. It's certainly an interesting wrinkle, but I don't think it's relevant to the limited scope of the analysis I've presented here.


He was the chair of the board. And they were wrong very quickly. It very much sounds like they spoke for him, or he pretended that he was going to stay and then backstabbed them. Which, given how strongly aligned with Altman he seems to be, is not really a surprise. I have yet to see a single action from them that leans towards savvy rather than incompetent.


Take away Sam, Greg, and Ilya, and who are the others even on the board? Doesn't inspire any confidence.


Exactly. They call it: The shit hitting the fan.


Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign, and the board then retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.

You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.

That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.

That Greg Brockman, Eric Schmidt, et al. are defending Altman makes me think that this is in a legal grey area, something new, and that it was done on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would say "I can't believe you did that." It's probably not something mundane like copyright infringement, web scraping, or even GDPR/CalOPPA violations, though; those carry civil penalties and wouldn't make the board panic as strongly as they did.


You are comparing corporate scandals, but the alternative theory in this forum seems to be a power struggle, and power struggles have completely different mechanics.

Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.


A boardroom coup isn't remotely like a real one, where you look for the gap where the guards and guns aren't and worry about the deposed leader being reinstated by an angry mob.

If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.


Yeah, but Uber is a completely different organization. The boards you mention were likely complicit in the stuff they kicked their CEOs out over.


What examples are you considering here, bioweapons?


Well OpenAI gets really upset when you ask it to design a warp drive so maybe that was it.


Promising not to train on Microsoft's customer data, and then training on MSFT customer data.


I don't think the person you are replying to is correct, because the only technological advancement where a new OpenAI artifact provides schematics that I think could qualify is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where if you're in favour of building the AI faster you're like "Why wouldn't we do this?" and if you're worried the AI may be trying to release a bioweapon to escape you're like "How could you even consider building to these schematics?".

I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.

I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.


> I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that

Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.


No, not really. Calling something "science fiction" at the present moment is generally an insult meant to say something like "You're an idiot for believing this made-up children's story could be real; it's like believing in fairies." That's a really dumb thing to say, because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants). The idea that because something appears in science fiction it is therefore stupid to think it's a real possibility, for separate reasons, is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, the way many people think about faster-than-light travel, but science fiction is not why people believe AGI is possible.

Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).

About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.


How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.

> with enough time and copies of itself.

Alright, but that's not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.


Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system at that level yet. I would consider it at the reasonable limit of possibility that they could have an AI that could give a very convincing, detailed, and feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for the OP's comment.

With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible for OpenAI to have internally right now, for two reasons. One, it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.

Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.


> With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work.

It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.

An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are and that isn't "they can't think fast enough".


I think you're confused. We're talking about a hypothetical internal OpenAI prototype, and the specific example you listed is one I said wasn't feasible for the company to have right now. The money would come from the same budget that funds the rest of OpenAI's research.


Human cloning


Actual humans or is this a metaphor for replicating the personas of humans via an LLM?


Great point. It was rude to drop a bombshell during trading hours. That said, the chunk of value Microsoft dropped today may be made back tomorrow, but maybe not: if OpenAI is going to slow down and concentrate on safe/aligned AI then that is not quite as good for Microsoft.


It only dropped 2% and it’s already half back in after hours. I don’t think the market thinks it’s Altman who’s the golden boy here.


It's still a completely unnecessary disturbance of the market. You also don't want to bite the hand that feeds you. This would be taking a personal disagreement to Musk-levels of market impact.


What is all this nonsense about MSFT stock price? Nothing material has happened to it.

https://www.google.com/finance/quote/MSFT:NASDAQ


People zooming in too far. If you look at the 1d chart, yeah, something happened at 3:30. If you look at the 1m chart, today is irrelevant.


Why is OpenAI responsible for protecting Microsoft’s stock price?


Well, if for nothing else, Microsoft is their biggest partner and investor.


Even Microsoft themselves shouldn’t care about the traders that react to this type of headline so quickly.

This will end up being a blip that corrects once it’s actually digested.

Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.


Yeah, and if antitrust regulators weren't asleep at the wheel, they'd be competitors.


It's not that OpenAI is responsible, but those board members have burned a lot of bridges with investors with this behaviour. The investor world is not big, so self-serving interest would dictate that you at least take their interests into consideration before acting, especially with something as simple as waiting an hour before the press release. No board would want these people now, because they are a poisoned apple for investors.


Alternately, there may be mission-minded investors and philanthropists who were uncomfortable with Microsoft's sweetheart deals and feel more comfortable after the non-profit board asserted itself and booted the mission-defying VC.

We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.

That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.


MSFT is still up this week.


> They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.

Ha! Tell me you don't know about markets without telling me! Stocks can drop after hours too.


After-market prices are just a potential trend, as the volume traded is very small and easily manipulated.


Not as much, though, right?



