nothing to do with dishonesty. That’s just the official reason.
———-
I haven't heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
The press release about candid talk with the board… It's probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn't necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye and it reached a fever pitch with GPT-4 Turbo.
Ultimately, it's been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement and Sutskever reached his tipping point and decided to use said leverage.
I've been in tech too long and have seen this play out. Don't piss off an irreplaceable engineer or they'll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
This doesn't make any sense. If it was a disagreement, they could have gone the "quiet" route and just made no substantive comment in the press release. But they made accusations that are specific enough to be legally enforceable if they're wrong, and in an official statement no less.
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
I agree. None of this adds up. The only thing that makes any sense, given OpenAI has any sense and self-interest at all, is that the reason they let Altman go may have been even bigger than what they were saying, and that there was some lack of candor in his communications with the board. Otherwise, you don't make an announcement like that 30 minutes before markets close on a Friday.
Don't take this as combative, but that sounds like nothing more than a science fiction plot.
Putting arguments for how close we are or aren't to AGI aside; there's no way you could spend the amount of money it would take to train such a basilisk on company resources without anyone noticing. We are not talking about a rogue engineer running a few cryptominers in the server room, here.
Even if their case is 100% solid, they wouldn't have said it publicly. Unless they hated Sam for doing something, so it's not just the direction of the company or something like that. It's something bigger.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
Apart from some piddly tech, Silicon Valley startups primarily sell stock. And a Monday company will be free to capitalize on the hype and sell stock that won't have its shoelaces tied like a non-profit.
I think you're completely backward. A board doesn't do that unless they absolutely have to.
Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical or social wrongdoing for the board to rush it and put a company worth tens of billions of dollars at risk.
Why is it crazy? The purpose of OpenAI is not to make investors rich - having investors on the board trying to make money for themselves would be crazy.
Exactly, if we assume Altman wanting to pursue commercialization at the cost of safety was the issue, the board did its job by advancing its mandate of "AI for the benefit of humanity," although I'm not sure why they went with the nuclear option.
Though I would go further than that: if that is indeed the reason, the board has proven themselves very much incompetent. It would be quite incompetent to invite this type of shadow of scandal for something that was a fundamentally reasonable disagreement.
The board, like any, is a small group of people, and in this case a small group of people divided into two sides defined by conflicting ideological perspectives. In this case, I imagine the board members have much broader and longer-term perspectives and considerations factoring into their decision making than the significant, significant majority of other companies/boards. Generalizing doesn’t seem particularly helpful.
Generalizing is how we reason, and having been on boards and worked with them closely, I can straight up tell you that's not how it works.
In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.
I am also not a stranger to board positions. However, I have never been on the board of a non-profit that is developing technology with genuinely deep, and as-of-now unknown, implications for the status quo of the global economy and - at least as the OpenAI board clearly believes - the literal future and safety of humanity. I haven’t been on a board where a semi-idealist engineer board member has played a (if not _the_) pivotal role in arguably the most significant technical development in recent decades, and who maintains ideals and opinions completely orthogonal to the CEO’s.
Yes, generalizing is how we reason, because it lets us strip away information that is not relevant in most scenarios and reduces complexity and depth without losing much in most cases. My point is, this is not a scenario that fits in the set of "most cases." This is actually probably one of the most unique and corner-casey examples of board dynamics in tech. Adherence to generalizations without considering applicability and corner cases doesn't make sense.
If it was really just about seeing eye to eye, why would the press release say anything about Sam not being "consistently candid in his communications"? That seems pretty unnecessary if it were fundamentally a philosophical disagreement. Why not instead say something about differences in forward-looking vision?
Which they can do in a super polite "wish him all the best" way or an "it was necessary to remove Sam's vision to save the world from unfriendly AI" way as they see fit. Unlike an accusation of lying, this isn't something that you can be sued for, and provided you're clear about what your boardroom battle-winning vision is it probably spooks stakeholders less than an insinuation that Sam might have been covering up something really bad with no further context.
Ah, the old myth about the irreplaceable engineer and the dumb suit. Ask Wozniak about that. I don't think he believes Apple would be without Steve Jobs.
The first Steve was totally irreplaceable; the second Steve was arguably the more difficult one to replace. Without the first firing, the second Steve would never have existed. But once Apple had the current ball rolling, he was replaced just fine by Tim Cook.
But the point is that Woz was replaceable, because he was replaced. Jobs on the other hand was replaced and the company nearly died; he came back and turned it around. Of course it only became a trillion-dollar company after Tim Apple took over, which I guess just shows that nobody is irreplaceable.
If Steve Jobs couldn't claim Wozniak's work as his own, he wouldn't have landed "his" Atari contract. Who knows where things would have gone after that, but I have a hard time tracing Apple's history of success without, say, the Apple II.
The milieu in which Apple came to fruition was full of young small microcomputer shops. So it's not like Woz invented the microcomputer for the masses - his contribution was critical for early Apple for sure, but the market had tons of alternatives as well. Without the Apple II it's hard to say where Jobs and Wozniak would have ended up. Jobs is such a unique, driven figure that I'm fairly sure he would have created a lasting impression on the market even without Woz. This is not to say Woz was insignificant - but rather that 1970s Silicon Valley had tons of people with Woz's acumen (not to disparage their achievements) but only a few tech luminaries who obviously were not one-trick ponies but managed to build their industrial legacy over decades.
I don’t believe the evidence backs this up. Woz is as much a one off as Jobs.
Woz did two magic things just in the Apple II which no one else was close to: the hack for NTSC color, and the disk drive not needing a completely separate CPU. In the late 70s that ability is what enabled the Apple II to succeed.
The point is Woz is a hacker. Once you build a system more properly, with pieces used how their designers explicitly intended, you end up with the Mac (and things like Sun SPARCstations), which does not have space for Woz to use his lateral-thinking talents.
I've always in my mind compared the Apple II to the likes of the Commodore 64, ZX Spectrum, etc. - 8-bit home computers. The Apple II predates these by several years, of course. You could most certainly create an 8-bit home computer without Woz. I haven't really geeked out on the complete tech specs and history of these; it's possible the Apple II was more fundamental than that (and I'd love to understand why).
Apple already had competitors back then in the likes of Franklin Computers. From a pure "can you copy Woz" perspective, it's not really even a question; of course you could. It was always a matter of dedication and time.
It's foolish for any of us to peer inside the crystal ball of "what would Jobs be without Woz", but I think it is important to acknowledge that the Apple II and IIc pretty much bankrolled Apple through their pre-Macintosh era. Without those first few gigs (which Woz is almost single-handedly responsible for), Apple Computers wouldn't have existed as early (or successfully) as it did. Maybe we still would have gotten an iPhone later down the line, but that's frankly too speculative for any of us to call.
If Steve Jobs didn't have Woz, or if the Apple 1 or 2 flopped, Steve Jobs would have gone on to sell something else - medical supplies, timeshares, or who knows what and maybe he'd be famous in another industry, but not necessarily attaining a reality distortion field. If Apple flopped early, maybe he would have stayed with selling computers, for another company but he wouldn't be elevated to the status he was, because he had Woz's contribution and had total marketing control over it. If he had to work for another company in sales, Steve Jobs would be reprimanded and fired for his toxic attitudes. There is probably an alternate universe where Steve Jobs is a nobody. The reality is that he got very, very lucky in this universe.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would for sure be the biggest problem OpenAI has.
Or there was a disagreement about whether the dishonesty was over the line? Dishonesty happens all the time and people have different perspectives on what constitutes being dishonest and on whether a specific action was dishonest or not. The existence of a disagreement does not mean that it has nothing to do with dishonesty.
I understand the meaning of this statement but please reconsider the way you view your customer base - it's not a good grounding for the success of your business to think this way. Great work with what you've built so far!
I think that if there were a lack of truth to him being less-than-candid with the board, they would have left that part out. You don’t basically say that an employee (particularly a c-suiter with lots of money for lawyers) lied unless you think that you could reasonably defend that statement in court. Otherwise, it’s defamation.
I’m not saying there is lack of truth. I’m saying that’s not the real reason. It could be there’s a scandal to be found, but my guess is the hostility from OpenAI is just preemptive.
There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.
I mean I'm not a lawyer (of the big city or simple country varieties, or any other variety) but if you talk to most HR people they'll tell you that if they ever get a phone call from a prospective employer to confirm details about someone having worked there previously, the three things they'll typically say are:
1) a confirmation of the dates of employment
2) a confirmation of the role/title during employment
3) whether or not they would rehire that person
... and that's it. The last one is a legally-sound way of saying that their time at the company left something to be desired, up to and including the point of them being terminated. It doesn't give them exposure under defamation because it's completely true, as the company is fully in charge of that decision and can thus set the reality surrounding it.
That's for a regular employee who is having their information confirmed by some hiring manager in a phone or email conversation. This is a press release for a company connected to several very high-profile corporations in a very well-connected business community. Arguably it's the biggest tech exec news of the year. If there's an ulterior or additional motive as you suggest, there's a possibility Sam goes and hires the biggest son-of-a-bitch attorney in California to convince a jury that the ulterior or additional motive was _the only_ motive, and that calling Sam a liar in a press release was defamation. As a result, OpenAI/the foundation would probably be paying him _at least_ several million dollars (probably a lot more) for making him hard to hire on at other companies.
Either he simply lied to the board and that's it, or OpenAI's counsel didn't do their job and put their foot down over the language used in the press release.
Someone at OpenAI hates the man's guts. It's that simple.
Even with very public cases of company leaders who did horrible things (much worse than lying), the companies that fired them said nothing officially. The person just "resigned". There's just no reason to open up even the faintest possibility of an expensive lawsuit, even if they believe they can win.
So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.
Well, for their sake, I hope they either issue a retraction soon, have good lawyers and documentation of their decision, or Sam turns out to be a forgiving person.
You can't say a person resigned if they refused to resign, correct? If the person says they refuse to resign you have to fire them. So that's one scenario where they would have to say they fired him.
You also wouldn't try to avoid a lawsuit if you believed (hypothetically) it was impossible to avoid a lawsuit.
There is no legal justification for ever saying those dates, much less their department and role. I have never heard of any HR department saying anything of the sort, even if this is an oft-quoted meme about HR. I suspect you have never actually worked in HR providing such statements; you are merely speculating.
John, I don't think you understand how corporate law departments work. It's not like a romantic or friend breakup where someone says a mean remark about the other to underline that it's over; there's a big legal risk to the corporate entity from carelessly damaging someone's reputation like that, so it's smarter to just keep the personality/vision disagreements private and limit public statements to platitudes.
Please don’t patronize me. It indeed looks like the press release from OpenAI is under scrutiny. What you fail to understand is human nature and the way people really do things ^TM
I'm not patronizing you, I'm just responding on the same level as the post I replied to. There's an endless supply of examples of corporate/legal decisions and communication being made on very different criteria from interpersonal interactions.
Of course the press release is under scrutiny, we are all wondering What Really Happened. But careless statements create significant legal (and thus financial) risk for a big corporate entity, and board members have fiduciary responsibilities, which is why 99.99% of corporate communications are bland in tone, whatever human drama may be taking place in conference rooms.
(A)ssuming (G)ood (F)aith, referring to someone online by their name, even in an edge case where their username is their name, is considered patronizing as it is difficult to convey a tone via text medium that isn't perceived as a mockery/veiled threat.
This may be a US-internet thing; analogous to how getting within striking distance with a raised voice can be a capital offense in the US, juxtaposed with being completely normal in some parts of the Middle East.
It's not the "online" that's the issue exactly, I think Jerrrry didn't describe it exactly right, but it's still correct. I, too, personally, thought it was very clear that the "John, " was ... I dunno if it was patronizing or what, but marginally impolite or condescending or patronizing or something. Unless, unbeknownst to us, anigbrowl and johnwheeler are old personal associates (probably offline), in which case it would mean "remember that I know you", and the implication of that would depend on the history in the relationship.
I recognize that the above para sort of sounds like I think I have some authority to mediate between them, which is not true and not what I think. I'm just replying to this side conversation about how to be polite in public, just giving my take.
The broad pattern here is that there are norms around how and when you use someone's name when addressing them, and when you deviate from those norms, it signals that something is weird, and then the reader has to guess what is the second most likely meaning of the rest of the sentence, because the weird name use means that the most likely meaning is not appropriate.
It happened to me recently on a list where I post under my real name, and yes, it's irritating, especially if it is someone you never met, and they are disagreeing with you.
Really? Referring to someone by first name is perfectly ordinary where I’m from, regardless of relationship. If someone doesn’t want me to do that, I’d expect them to introduce themselves as “Mr. so-and-so”, instead.
It's not the first name alone, it's also the sentence structure. "Hey John, did you hear about..." sounds perfectly normal even when talking on-line to strangers. "John, you misunderstand..." is appropriate if you're their parent or spouse or otherwise in some kind of close relationship.
In person, sure, that's totally normal. It's unusual on a forum for a few reasons:
1) The comments are meant to be read by all, not just the author. If you want to email the author directly and start the message with a greeting containing their name ("hi jrockway!"), or even just their name, that's pretty normal.
2) You don't actually know the person's first name. In this case, it's pretty obvious, since the user in question goes by what looks like <firstname><lastname>. But who knows if that's actually their name. Plenty of people name their accounts after fictional people. It would be weird to everyone if your HN comment to darthvader was "Darth, I don't think you understand how corporate law departments work." Darth is not reading the comment. (OK, actually I would find that hilarious to read.)
3) Starting a sentence with someone's name and a long pause (which the written comma heavily implies) sounds like a parent scolding a child. You rarely see this form outside of a lecture, and the original comment in question is a lecture. You add the person's name to the beginning of the comment to be extra patronizing. I know that's what was going on and the person who was being replied to knows that's what was going on. The person who used that language denies that they were trying to be patronizing, but frankly, I don't believe it. Maybe they didn't mean to consciously do it, but they typed the extra word at the beginning of the sentence for some reason. What was that reason? If to soften the lecture, why not soften it even more by simply not clicking reply? It just doesn't add up.
4) It's Simply Not Done. Open any random HN discussion, and 99.99% of the time, nobody is starting replies with someone's name and a comma. It's not just HN; the same convention applies on Reddit. When you use style that deviates from the norm, you're sending a message, and it's going to have a jarring effect on the reader. Doubly jarring if you're the person they're naming.
TL;DR: Don't start your replies with the name of the person you're replying to. If you're talking with someone in person, sure, throw their name in there. That's totally normal. In writing? Less normal.
Is it the first name or the personal touch that would make you feel patronized? What if you read a reply “… a 24 year old, such as yourself, will know …”.
Perhaps the wording here is a bit confusing, but I think it's unambiguous that responding to a comment using the commenter's name ("John, you misunderstand") comes off as patronizing.
The commenter above doesn't mean that any reference to someone else by name ("Sam Altman was fired") is patronizing.
- it means you answer more to the person than to their argument (ad hominem)
- it is unnecessary, and 9 times out of 10 when used in a disagreement, especially at the beginning of a response, it is meant to be patronizing.
No. Look at examples where people hurl veiled threats at dang. They almost always use his real first name. It's a form of subtle intimidation. That kind of intimidation, whether the users real name is incorporated into their username in some way or they're using other open source intel goes back to the early days of the internet.
No. More than that, it comes off as patronizing to start a comment with the other person's first name even when speaking offline, face-to-face, unless you're their spouse, parent, or in some other close relationship.
What’s the legal risk? Their investors sue them for..? Altman sues for..?
How is the language “we are going our separate ways” compared with “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI” going to have a material difference in the outcome of the action of him getting fired?
How do the complainants show a judge and jury that they were materially harmed by the choice of language above?
The legal risk comes if Altman decides he wants a similar job and can't find it over the next few months or years, and has reason to believe that OpenAI's statements tainted his reputation.
OpenAI's board's press release could very easily be construed as "Sam Altman is not trustworthy as a CEO", which could lead to his reputation being sullied among other possible employers. He could argue that the board defamed his reputation and kept him from what was otherwise a very promising career in an unfathomably lucrative field.
Truth is subjective and if there is anything that could suggest other motive, as I said earlier, it would be open to interpretation by a jury.
Really they should have just said something to the effect of, "The board has voted to end Sam Altman's tenure as CEO at OpenAI. We wish him the best in his future endeavors."
Meh, they don't need to prove that much. It would be Altman that had to prove a lot, because the law favors defendant in this situation. To protect the speech, actually.
No, the onus would be on Sam Altman to prove that the statement was materially false, AND intended to slander him, AND actually succeeded in affecting his reputation.
When you're a public person, the bar for winning a defamation case is very high.
I don't know. The board statement, peeling away the pleasantries, says he lied to the board repeatedly. That's a very serious accusation. I don't know how US law works here, but in the UK you can sue and win over defamation for far milder infractions.
Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.
Also, as long as you are a public person, defamation has a very high bar in the USA. It is not enough for the statement to be false; you have to actually prove that the person you're accusing of defamation knew it was false and intended it to hurt you.
Note that this is different from an accusation of perjury. They did not accuse Sam Altman of performing illegal acts. If they had, things would have been very different. As it stands, they simply said that he hasn't been truthful to them, which it would be very hard to prove is false.
> Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.
No, in the UK it's unambiguously the other way round. The complainant simply has to persuade the court that the statement seriously harmed or is likely to seriously harm their reputation. Truth is a defence but for that defence to prevail the burden of proof is on the defendant to prove that it was true (or to mount an "honest opinion" defence on the basis that both the statement would reasonably be understood as one of opinion rather than fact and that they did honestly hold that opinion)
In a specific case, perhaps. But surely, I can't go out, make a broad statement like, "XYZ is a liar and fornicator" and leave it there. And how would XYZ go around proving they are not a liar and fornicator? Talk to everyone in the world and get them to confirm they were not lied to or sexually involved?
Surely, at some level, you can be sued for making unfounded remarks. But then IANAL so, meh.
How much total compensation could Altman have gotten from another company, if not for this slander? Yeah, no one knows for sure, but how much could he argue? He's a princeling of Silicon Valley, and just led a company from $0 to $90 billion. I'm guessing that's going to be a very, very big number.
Unless OpenAI can prove in a court of law that what they said was true, they're on the hook for that amount in compensation, perhaps plus punitive damages and legal costs.
Doesn't justify the hostile language and the urgent last-minute timing (partners were notified just minutes before the press release). They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.
A mere direction disagreement would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work." And it surely would have been decided months in advance of being announced.
Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.
Everything points towards this being last minute both for people outside and people inside. Microsoft caught with their pants down, announcement before markets closed rather than just waiting a bit, and so on.
Announcing something huge like this before market close is not something that can be interpreted as anything other than either a huge timing mistake or a massive feeling of urgency
I find it hard to believe that the board of OpenAI isn't smart, savvy and self-interested enough to know that not delaying the announcement by an hour or so is the wrong move. That leads me to believe that yes, this was something big and worthy enough of being announced with that timing, and that it was probably not a mistake.
They also said Greg was going to stay at the company and then he immediately quit. I find it very hard to believe that smart, savvy, and self interested are adjectives that apply to a board who doesn't know what their own chairman thinks.
Even smart, savvy, and self interested people can't always predict what individual humans are going to do. It's certainly an interesting wrinkle, but I don't think it's relevant to the limited scope of the analysis I've presented here.
He was the chair of the board. And they were wrong very quickly. It very much sounds like they spoke for him. Or he pretended that he was going to stay and then backstabbed them. Which, given how strongly aligned with Altman he seems to be, is not really a surprise. I have yet to see a single action from them that leans towards savvy rather than incompetent.
Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign and then the board retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.
You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.
That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.
That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.
You are comparing corporate scandals, but the alternative theory in this forum seems to be a power struggle, and power struggles have completely different mechanics.
Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.
A boardroom coup isn't remotely like one where one looks for the gap where the guards and guns aren't and worries about the deposed leader being reinstated by an angry mob.
If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.
I don't think the person you are replying to is correct, because the only technological advancement where a new OpenAI artifact provides schematics that I think could qualify is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where if you're in favour of building the AI faster you're like "Why wouldn't we do this?" and if you're worried the AI may be trying to release a bioweapon to escape you're like "How could you even consider building to these schematics?".
I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.
I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.
> I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that
Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.
No, not really. Calling something "science-fiction" at the present moment is generally an insult intended to say something along the lines of "You're an idiot for believing this made up children's story could be real, it's like believing in fairies", which is of course a really dumb thing to say because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants) so the idea that because something is in science fiction it is therefore a stupid idea to think it is a real possibility for separate reasons is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, like how many people think about faster than light travel, but science fiction is not why people believe AGI is possible.
Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).
About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.
How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.
> with enough time and copies of itself.
Alright, but that's not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.
Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system that is at that level yet. I would consider it to be at the reasonable limits of possibility that they could have an AI that could give a very convincing, detailed, & feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for OPs comment.
With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible that OpenAI could possibly have internally right now, for several reasons. Mainly that it's extremely far ahead of everything else to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable accidentally by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.
Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.
> With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work.
It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.
An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are and that isn't "they can't think fast enough".
I think you're confused. We're talking about a hypothetical internal OpenAI prototype, and the specific example you listed is one I said wasn't feasible for the company to have right now. The money would come from the same budget that funds the rest of OpenAI's research.
Great point. It was rude to drop a bombshell during trading hours. That said, the chunk of value Microsoft dropped today may be made back tomorrow, but maybe not: if OpenAI is going to slow down and concentrate on safe/aligned AI then that is not quite as good for Microsoft.
It's still a completely unnecessary disturbance of the market. You also don't want to bite the hand that feeds you. This would be taking a personal disagreement to Musk-levels of market impact.
Even Microsoft themselves shouldn’t care about the traders that react to this type of headline so quickly.
This will end up being a blip that corrects once it’s actually digested.
Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.
It's not that OpenAI is responsible, but those board members have burned a lot of bridges with investors with this behaviour. The investor world is not big, so self-serving interest would dictate that you at least take their interests into consideration before acting, especially with something as simple as waiting an hour before the press release. No board would want them now because they are a poisoned apple for investors.
Alternately, there may be mission-minded investors and philanthropists who were uncomfortable with Microsoft's sweetheart deals and feel more comfortable after the non-profit board asserted itself and booted the mission-defying VC.
We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.
That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.
That may be the case, but I have a feeling that it will end up being presented as alignment and ethics versus all-in on AGI, consequences be damned. I'm sure OpenAI has gotten a lot of external pressure to focus more on alignment and ethics, and this coup is signalling that OpenAI will yield to that pressure.
What is confusing here is: why would Greg have agreed to the language in the press release (that he would be staying in the company and report to the CEO) only to resign an hour later? Surely the press release would not have contained that information without his agreement that he would be staying.
> Why would Greg have agreed to the language in the press release
We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.
Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.
He didn't. Greg was informed after Sam (I'm assuming the various bits being flung about by Swisher are true; she gets a free pass on things like this), so I think the sequence was: a subset of the board meets, forms quorum, votes to terminate Sam and remove Greg as chair (without telling him). Then they write the PR, and around the same time, let Sam and then Greg know. If OpenAI were a government, this would be called a coup.
Rushing out a press release with vague accusations and without consulting the relevant parties certainly feels more like a coup than a traditional vote of no confidence.
Ha. Most people don't know how slipshod these things are. Succession had it right when people were always fighting over the PR release, trying to change each other's statements.
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
The theory that Altman did something in bad faith means that it might not be a disagreement but rather something that forced Sutskever to vote against Sam.
Self-dealing like that really is. Not saying I see any reason to suspect it's that and not something else, but, yeah, doing that and concealing it absolutely would be a reason for both firing him and making the statement they made.
Unless Brockman was involved, though, firing Brockman doesn't really make sense.
Ousted to the tune of millions and millions of dollars, though. Yeah, he didn't get an IPO pop, but he's still worth more than you or I could make in several lifetimes.
There's a difference between self-dealing you sell the board on and self-dealing you conceal from the board (also, it's different where the self-dealing happens at a pure for-profit versus where there is a non-profit involved, because the latter not only raises issues of conflict of interest with the firm, but also potential violations of the rules governing non-profits).
The AI doomerism that seems to underlie the board's decision is just a non-serious endeavor that is more virtue signaling than substance.
The leap I'm making, which seems plausible given their chief scientist, is that the area of research they want to focus on, rather than being a business, is the superalignment theme.
Feels like Gryffindor beheaded Slytherin right before Voldemort could make them his own. Hogwarts will be in turmoil, but that price was unavoidable given the existential threat?
Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
That is just a disagreement on technical matters, which any well-functioning company should always have a healthy amount of within its leadership. The press release literally said he was fired because he was lying. I haven't seen anything like that from the board of a big company for a very long time.
Additionally, OpenAI can just put resources towards both approaches in order to settle this dispute. The whole point of research is that you don't know the conclusions ahead of time.
Seemingly, OpenAI's priorities shifted after the public ChatGPT release and seem to be more and more geared towards selling to consumers, rather than the research lab it initially seemed they were aiming for.
I'm sure this was part of the disagreement, as Sam is "capitalism incarnate" while Ilya gives off much different vibes.
Maybe some promise was made by Sam to MS for the funding that the board didn't approve. He may have expected the board to accept the terms he agreed to but they fired him instead.
That might be part of it. They announced that they were dedicating compute to researching superintelligence alignment. When they launched the new stuff on Dev Day, there was not enough compute and the service was disrupted. It may have also interfered with Ilya's team's allocation and stopped their research.
If that happened (speculation) then those resources weren't really dedicated to the research team.
The question has enormous implications for OpenAI because of the specifics of their nonprofit charter. If Altman left out facts to keep the board from deciding they were at the AGI phase of OpenAI, or even to prevent them from doing a fair evaluation, then he absolutely materially misled them and prevented them from doing their jobs.
If it turns out that the ouster was over a difference of opinion re: focusing on open research vs commercial success, then I don't think their current Rube Goldberg corporate structure of a non profit with a for profit subsidiary will survive. They will split up into two separate companies. Once that happens, Microsoft will find someone to sell them a 1.1% ownership stake and then immediately commence a hostile takeover.
No one knows. But I sure would trust the scientist leading the endeavor more than a businessperson who has an interest in saying the opposite to avoid immediate regulation.
>Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
I thought this guy was supposed to know what he's talking about? There was a paper that shows LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.
Humans don't work this way either. You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine. Just like humans do when they shut down their system 1 brain and go into system 2 slow mode.
I'm in the "definitely ready for AGI" camp. But it's not going to be a single model that does the AGI magic trick; it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.
> You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine.
This is my view!
Expert Systems went nowhere, because you have to sit a domain expert down with a knowledge engineer for months, encoding the expertise. And then you get a system that is expert in a specific domain. So if you can get an LLM to distil a corpus (library, or whatever) into a collection of "facts" attributed to specific authors, you could stream those facts into an expert system, that could make deductions, and explain its reasoning.
So I don't think these LLMs lead directly to AGI (or any kind of AI). They are text-retrieval systems, a bit like search engines but cleverer. But used as an input-filter for a reasoning engine such as an expert system, you could end up with a system that starts to approach what I'd call "intelligence".
If someone is trying to develop such a system, I'd like to know.
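Roughly what I have in mind, as a rough Python sketch (every name here is hypothetical, and the LLM is just a pluggable text-in/text-out callable): the LLM distils prose into subject/relation/object facts, and a tiny forward-chaining rule engine does the deduction over them.

    from typing import Callable, Iterable, Set, Tuple

    Fact = Tuple[str, str, str]  # (subject, relation, object)

    def extract_facts(passage: str, llm: Callable[[str], str]) -> list:
        """Ask the LLM to emit one 'subject|relation|object' triple per line."""
        prompt = ("List the factual claims in the text below, one per line, "
                  "formatted as subject|relation|object:\n\n" + passage)
        facts = []
        for line in llm(prompt).splitlines():
            parts = [p.strip() for p in line.split("|")]
            if len(parts) == 3:
                facts.append(tuple(parts))
        return facts

    def forward_chain(facts: Set[Fact], rules: Iterable) -> Set[Fact]:
        """Apply each rule (fact, known -> new facts) until nothing new appears."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                for fact in list(known):
                    for new_fact in list(rule(fact, known)):
                        if new_fact not in known:
                            known.add(new_fact)
                            changed = True
        return known

    # Example rule: parenthood implies ancestry, and ancestry is transitive.
    def ancestry_rule(fact: Fact, known: Set[Fact]):
        s, r, o = fact
        if r == "parent_of":
            yield (s, "ancestor_of", o)
        if r == "ancestor_of":
            for s2, r2, o2 in list(known):
                if r2 == "ancestor_of" and o == s2:
                    yield (s, "ancestor_of", o2)

The appeal of the split is what the comment above describes: the deduction step stays auditable and explainable, while the LLM only handles the messy natural-language-to-facts part.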
> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
No, if you read this article it shows there were some issues with the way they tested.
> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.
> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you’ve successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on “A is B” fail to learn “B is A” when the training date is split into prompt and completion pairs
Simple fix: put prompt and completion together, and don't compute gradients just for the completion, but also for the prompt. Or just make sure the model trains on data going in both directions by augmenting it pre-training.
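A minimal sketch of that augmentation idea in Python (the function and templates are mine, purely illustrative, not from the paper): emit every fact in both directions and then train on the full text, so the loss is not masked to the completion and both directions contribute gradients.

    def augment_bidirectional(pairs):
        """pairs: list of (entity, description), e.g.
        ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'")."""
        examples = []
        for entity, description in pairs:
            # forward direction: "A is B"
            examples.append(f"{entity} is {description}.")
            # reversed direction: "B is A" (only the first letter is upper-cased
            # so names inside the description keep their casing)
            examples.append(f"{description[0].upper() + description[1:]} is {entity}.")
        return examples

    training_texts = augment_bidirectional([
        ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'"),
    ])
    # Each string would then be tokenized and trained on in full, i.e. with the
    # loss over the whole sequence rather than just the "completion" part.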
The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge, they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.
Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.
Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.
Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!
Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusion about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, only achievable via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've been a long-term skeptic about AI, but that writing is on the wall now.
Clearly there's a difference, because the architectures we have don't know how to persist information or further train.
Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.
Whether you can bolt something small to these architectures for persistence and do some small things and get AGI is an open question, but what we have is clearly insufficient by design.
I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.
But context windows are up to 100K now, RAG systems are everywhere, and we can cheaply fine-tune LoRAs for a price similar to inference, maybe 3x more expensive per token. A memory hierarchy made of LoRA -> Context -> RAG could be "all you need".
My beef with RAG is that it doesn't match on information that is not explicit in the text, so "the fourth word of this phrase" won't embed like the word "of", or "Bruce Willis' mother's first name" won't match with "Marlene". To fix this issue we need to draw chain-of-thought inferences from the chunks we index in the RAG system.
So my conclusion is that maybe we got the model all right but the data is too messy, we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.
Everyone is over focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up, we forget that. All the smarts the model has come from the data. We need data improvement more than model improvement.
Just consider the "textbook quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.
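A rough sketch of that "study the data with the model before indexing" step (llm, embed and the vector store here are generic stand-ins, not any particular library): have the model restate each chunk's implicit facts explicitly, and index those derived statements alongside the raw chunk so they become retrievable.

    def build_index(chunks, llm, embed, index):
        """llm: str -> str, embed: str -> vector,
        index: any vector store exposing add(vector, payload)."""
        for chunk in chunks:
            derived = llm(
                "Restate the facts implied by this passage as short, explicit, "
                "standalone statements (one per line):\n\n" + chunk
            )
            for text in [chunk, *derived.splitlines()]:
                text = text.strip()
                if text:
                    # index both the raw chunk and every derived statement,
                    # all pointing back to the original chunk
                    index.add(embed(text), payload={"text": text, "source_chunk": chunk})

That way a query like "Bruce Willis' mother's first name" can match a derived statement that says it explicitly, even if the source text never phrases it that way.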
I've been wondering along similar lines, although I am for all intents and purposes here a layman so apologies if the following is nonsensical.
I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant memory retrieval process and the retrieved memories are packaged up and factored in to our mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and then shoved into the prompt as additional context. Bigger context window allows for more "memories" to contextualise/inform the model's answer.
I've been wondering three things: (1) are previous user prompts and model answers also converted to embeddings and stored in the embedding database, as new "memories", essentially making the model "smarter" as it accumulates more "experiences" (2) could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval (with the salience score probably some composite of recency and perhaps degree of positive feedback from the original user?) (3) could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)
Edit: And if you did (3), would that mean even with a temperature set at 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
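On (2), here is a toy sketch of what such a salience score could look like at retrieval time, blending similarity, recency and accumulated feedback (all weights, field names and the half-life are made up for illustration):

    import math
    import time

    def salience(similarity, created_at, feedback,
                 half_life_days=30.0, w_sim=0.6, w_rec=0.25, w_fb=0.15):
        """Composite score: semantic similarity + recency decay + user feedback."""
        age_days = (time.time() - created_at) / 86400.0
        recency = math.exp(-age_days * math.log(2) / half_life_days)  # halves every 30 days
        return w_sim * similarity + w_rec * recency + w_fb * feedback

    def retrieve(query_vec, memories, cosine, top_k=5):
        """memories: iterable of dicts with 'vector', 'created_at', 'feedback', 'text'."""
        scored = [(salience(cosine(query_vec, m["vector"]), m["created_at"], m["feedback"]),
                   m["text"]) for m in memories]
        return [text for _, text in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]

And on the edit: yes, if (3) actually updated the weights overnight, even a temperature-0 completion could differ from one day to the next, since determinism only holds for a fixed model.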
> Clearly there's a difference, because the architectures we have don't know how to persist information or further train. Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.
Nope, and not all people can achieve this either. Would you call them less than human, then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.
Isn't RAG essentially the "something small you can bolt on" to an LLM that gives it "persistence outside the context window?" There's no reason you can't take the output of an LLM and stuff it into a vector database. And, if you ask it to create a plan to do a thing, it can do that. So, there you have it: goal-oriented persistence outside of the context window.
I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.
I can remember to do something tomorrow after doing many things in-between.
I can reason about something and then combine it with something I reasoned about at a different time.
I can learn new tasks.
I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.
The examples we have now of GPT LLM cannot do these things. Doing those things may be a small change, or may not be tractable for these architectures to do at all... but it's probably in-between: hard but can be "tacked on."
Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.
But also, you don't need all of these to have real intelligence. People can problem solve without memory, since those are different things. People can intelligently problem-solve without a task.
And working towards long-term goals is something we actually take decades to learn. And many fail there as well.
I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, etc etc. Seems that they started doing this with the vision update in GPT-4.
I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science: you don't really know what'll work until you do it.
> I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science: you don't really know what'll work until you do it.
Yes-- this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).
It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).
That just proves we need real-time fine-tuning of the neuron weights. It is computationally intensive but not fundamentally different. A million-token context would look close to short-term memory, and frequent fine-tuning would be akin to long-term memory.
I'm most probably anthropomorphizing this completely wrong. But the point is that humans may not be any more creative than an LLM; we just have better computation and inputs. Maybe creativity is akin to LLM hallucinations.
Real-time fine tuning would be one approach that probably helps with some things (improving performance at a task based on feedback) but is probably not well suited for others (remembering analogous situations, setting goals; it's not really clear how one fine-tunes a context window into persistence in an LLM). There's also the concern that right now we seem to need many, many more examples in training data than humans get for the machine to get passably good at similar tasks.
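To make the mechanics concrete: the "frequent fine-tuning as long-term memory" idea upthread is easy to sketch, even though the hard questions (what it actually buys you, how many examples it needs) remain. Below is a toy next-token model in PyTorch with a nightly_update() helper run over the day's logged token sequences; the model, names, and hyperparameters are placeholders, not a claim about how any production LLM is updated:

    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        # Placeholder "language model": embeds token ids and predicts the next id.
        def __init__(self, vocab=1000, dim=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.head = nn.Linear(dim, vocab)

        def forward(self, ids):
            return self.head(self.emb(ids))

    def nightly_update(model, day_logs, lr=1e-4, epochs=1):
        # day_logs: list of 1-D LongTensors of token ids from today's conversations.
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for ids in day_logs:
                inputs, targets = ids[:-1], ids[1:]      # next-token prediction
                loss = loss_fn(model(inputs), targets)
                opt.zero_grad()
                loss.backward()
                opt.step()

    model = TinyLM()
    fake_day = [torch.randint(0, 1000, (32,)) for _ in range(4)]  # stand-in for logged chats
    nightly_update(model, fake_day)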
I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.
I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration and metacognition and saving state that let us have an advantage. I think that we need to find clever, minimal ways to build these "looping" contexts.
> I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs.
I think creativity is made of two parts: generating novel ideas, and filtering out the bad ones. For the second part we need good feedback. Humans and LLMs are just as good at novel ideation, but humans have the advantage on feedback: we have a body, access to the real world, access to other humans, and plenty of tools.
That's not something an android robot couldn't eventually have, and on top of that, AIs have the advantage of learning from massive amounts of data. They surpass humans when they can leverage it; see AlphaFold, for example.
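A toy sketch of that generate-then-filter split, with propose() and critic() as hypothetical stand-ins (in reality the proposer could be an LLM and the critic could be real-world feedback, users, tests, or another model):

    import random

    def propose(rng):
        # Generation: cheap, noisy recombination of parts (an LLM would do this far better).
        nouns = ["battery", "bridge", "garden", "compiler", "kite"]
        verbs = ["self-heals", "folds flat", "runs on rain", "learns its user"]
        return "a " + rng.choice(nouns) + " that " + rng.choice(verbs)

    def critic(idea):
        # Stand-in for feedback: in reality this would come from the world, users,
        # tests, or another model. Here it just prefers wordier ideas.
        return len(idea)

    def brainstorm(n=50, keep=5, seed=0):
        rng = random.Random(seed)
        ideas = {propose(rng) for _ in range(n)}                 # generate many candidates
        return sorted(ideas, key=critic, reverse=True)[:keep]    # filter with the critic

    print(brainstorm())

The interesting part is almost entirely in the critic: a better feedback signal is what turns cheap remixing into something that looks creative.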
Are there theoretical models that update their weights in real time? Every intro to deep learning focuses on stochastic gradient descent for neural network weights; as a layperson, I'm curious what online algorithms would look like instead.
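Not an expert answer, but the basic mechanic people usually mean by "online" learning is easy to show: instead of looping over a fixed dataset in epochs, the weights get nudged after each new example as it arrives. A minimal sketch with plain per-example SGD on a linear model (purely illustrative; the research areas of online and continual learning go far beyond this):

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0, 0.5])   # the unknown relationship we're tracking

    w = np.zeros(3)   # model weights, updated continuously
    lr = 0.05

    for t in range(2000):                           # examples arrive one at a time
        x = rng.normal(size=3)
        y = true_w @ x + rng.normal(scale=0.1)      # noisy observation from the stream
        grad = (w @ x - y) * x                      # squared-error gradient for this single example
        w -= lr * grad                              # immediate per-example update, no epochs
    print(w)   # close to true_w after enough streamed examples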
You're right: I haven't seen evidence of LLMs outputting novel patterns in a way that is genuinely creative.
It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (e.g. grammar, phonics, or an index). But it can't, whatsoever, expose new patterns. At least the public-facing LLMs can't. They can't abstract.
I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.
But abstraction (perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found, let alone traversed.
When they can find novel patterns across previously seemingly unconnected concepts, then they will be onto something: when "AI" begins to see the hidden mirrors, so to speak.
> , they just regurgitate it, remix it, and expose patterns
Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.
Did Ilya give a reason why transformers are theoretically sufficient? I've watched him talk in a CS seminar and he's certainly interesting to listen to.
From the interviews with him that I have seen, Sutskever thinks that language modeling is a sufficient pretraining task because there is a great deal of reasoning involved in next-token prediction. The example he used was: suppose you fed a murder mystery novel to a language model and then prompted it with the phrase "The person who committed the murder was: ". The model would unquestionably need to reason to come to the right conclusion, but at the same time it is just predicting the next token.
Can a super smart business-y person educate this engineer on how this even happens?
So, if there are 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves from votes regarding them?
Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.
The details depend on what's specified in the non-profit's Bylaws and Articles of Incorporation. As a 501(c)(3) there are certain requirements and restrictions, but other things are left up to what the founding board mandated in the documents that created and govern the corporation.
Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.
I had the same questions, and have now learnt that non-profit governance is like this, and that is why it's a bad idea for something like OpenAI. In a for-profit, the shareholders can just replace the board.
Asking ChatGPT (until someone else answers): it says that removing a board member usually takes a supermajority, which makes much more sense... but that still seems to imply they'd need at least 4 of the 6.
Why would Greg have said "after learning today's news" if he took part in the vote? If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on? I don't think he took part, the others probably convened a meeting and cast a unanimous vote, issued the statement and then contacted Greg and Sam. The whole thing seems rushed so that's probably how it would have played out.
> If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on?
Why would they issue a statement saying that he was going to stay on without some form of assurance from him?
I mean, you're writing a release stating that you're firing your CEO and accusing him of a lack of candor. Not exactly the best news to give. You're chasing that with "oh, by the way, the chairman of the board is stepping down too", so the news is going from bad to worse. The last thing you want is to claim that said chairman of the board is staying as an employee, only to have him quit hours later. I find it hard to believe that they'd make a mistake as dumb as announcing Greg was staying without some sort of assurance from him, knowing that Greg was Sam's ally.
After looking into it, they would have had to give notice in case they wanted to attend, but from the sounds of it they may not have bothered to go, which would make sense if they knew they were screwed.
I mean, I still wonder whether they really only need 3 people fully on board to effectively take the entire company. Vote #1: oust Sam, 3/5 vote YES, Sam is out. Next vote: "demote Greg", 3/4 vote YES, Greg is demoted and quits. Now there could be one "dissenter", and it would be easy to vote them out too. Surely there's some protection against that?
There aren't really any rules specified in the law for this, unlike with corporate law, which mandates that companies be structured a certain way. We'd have to see OpenAI's operating by-laws.
This suggests that Greg Brockman wasn't in the board meeting that made the decision, and only "learned the news" that he was off the board the same way the rest of us did.
You've put "learned the news" in quotes, but what Greg Brockman wrote was "based on today's news".
That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.
EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.
It's all very ambiguous, but if he had been there for the board meeting where he was removed, I imagine he would have quit then and it would have been in the official announcement. It comes across like he didn't quit until after the announcement had already been made.
> and it would have been in the official announcement.
It is:
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
He was removed as Chairman at the same time (close enough that the two were announced together, and presumably linked in cause, though possibly via a separate vote) as Altman was removed as CEO.
> He was chairman of the board, no? surely he was in the meeting?
Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.
Just guessing here, but I think the board can form a quorum without the chair, vote, and, as long as they have a majority, proceed with a press release based on that vote.
https://twitter.com/karaswisher/status/1725682088639119857