All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
This is wrong. Microsoft does not have full rights, and its license comes with restrictions, per the cited primary source, so a fork would require a very careful approach.
But it does suggest how a sudden motive could have appeared:
OpenAI implements and releases custom GPTs (a Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.
Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.
I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?
I think it’s more likely that D’Angelo was there for his link to Meta, while Hoffman was rendered redundant after the big Microsoft deal (which occurred a month or two before he was asked to leave), but that’s just a guess.
Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.
Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.
Given the sensitivity of data handled over Citrix connections (pretty much all hospitals), I'm fairly sure Microsoft just doesn't want the headaches. My general experience is that service providers would rather be seen handling nuclear weapons data than healthcare data.
As someone who is VP of IT in healthcare, I can understand that sentiment. At least fewer people need access to nuclear secrets, while medical records are simultaneously highly confidential AND needed by many people. It's never dull. :D
That's fine; building the "core" of an AI assistant that character rights can be layered onto is a bigger business than owning the characters themselves.
Why acquire rights to thousands of different favourite characters when you can build the bot underneath and let the media houses negotiate licenses to skin and personalise said bot with the characters they own?
I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if that assistant is so bad that nobody even realizes they've used the name yet.
I've had Cortana shut off for so long it took me a minute to remember they've used the name already.
That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky sounding but non-threatening familiar name like “Dave” or something.
At least in this forum, can we please stop calling something that is not even close to AGI, AGI? It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.
I’m talking about the ultimate end product that Microsoft and OpenAI want to create.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be even more excellent over the years when it eventually achieves AGI-ness.
At least in this forum can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know). So I think it’s funny you assume I think ChatGPT is an AGI.
I think that the dispute is about whether or not AGI is possible (at least within the next several decades). One camp seems to be operating with the assumption that not only is it possible, but it's imminent. The other camp is saying that they've seen little reason to think that it is.
I am with you. I am VERY excited about LLMs but I don't see a path from an LLM to AGI. It's like 50 years ago when we thought computers themselves brought us one step away from AI.
It's entirely possible for Microsoft and OpenAI to have an unattainable goal in AGI. A computer that knows everything that has ever happened and can deduce much of what will come in the future is still likely going to be a machine, a very accurate one - it won't be able to imagine a future that it can't predict as a possible natural or man-made progression along a chain of consequences stemming from the present or past.
What makes you so confident that your own mind isn't a "clever parlor trick"?
Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?
My layperson impression is that biological brains do online retraining in real time, which is not done with the current crop of models. Given that even this much required months of GPU time I'm not optimistic we'll match the functionality (let alone the end result) anytime soon.
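Roughly the difference, as a toy PyTorch sketch (the tiny linear model is just a stand-in, not how any production LLM actually works): deployed models answer with frozen weights, while an "online" learner would take a gradient step on every new example it sees.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 1)                  # stand-in for a far larger network
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def frozen_inference(x):
        # How today's deployed models behave: weights never change at serving time.
        with torch.no_grad():
            return model(x)

    def online_step(x, y):
        # What "online retraining" would mean: learn from each new example immediately.
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    x, y = torch.randn(1, 16), torch.randn(1, 1)
    print(frozen_inference(x))
    print(online_step(x, y))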
I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.
Why do you think we'll only get there with wetware? I guess you're in the "consciousness is uniquely biological" camp?
It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.
Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.
Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?
As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away. We simply do not know. If an LLM, regardless of the mechanism of action or how 'stupid' it may be, was able to accomplish all of the requirements of an AGI, then it is an AGI. Simple as that.
I imagine us actually reaching AGI, and people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not continue moving the goalposts.
There is room for intelligence in all three of wherever the original data came from, training on it, and inference on it. So just claiming the third step doesn't have any isn't good enough.
Especially since you have to explain how "just mimicking" works so well.
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.
Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking & thinking about it is how we make it possible? Things don't just create themselves (well maybe once we _do_ have AGI level AI he he, that'll be a fun apocalypse).
Yes, though end result would probably be more like IE - barely good enough, forcefully pushed into everything and everywhere and squashing better competitors like IE squashed Netscape.
When OpenAI went in with MSFT, it was as if they had ignored the 40 years of history of what MSFT has been doing to smaller technology partners. What happened to OpenAI pretty much fits that pattern of a smaller company that developed great tech and was raided by MSFT for it (the specific actions of specific persons aren't really important - the main factor is MSFT's black-hole gravitational force, and it was just a matter of time before its destructive power manifested itself, as in this case where it simply tore OpenAI apart with tidal forces).
I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.
The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.
>MSFT with the right team could build a better LLM
somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.
Like the review which allowed them to ignore licenses while ingesting all public repos in GitHub? - And yes, true, the T&C allow them to ignore the license, while it is questionable whether all people who uploaded stuff to GitHub had the rights the T&C require (uploading some older project with many contributors to GitHub, etc.)
Different threat profile. They don’t have the TOS protection for training data and Microsoft is a juicy target for a huge copyright infringement lawsuit.
Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.
The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.
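For what it's worth, here is a rough sketch of what I mean by RAG with citations (the retriever is a naive keyword ranker and ask_llm is a placeholder for whatever completion API you'd use; real systems use embeddings and a vector index):

    def retrieve(query, corpus, k=3):
        # Naive keyword-overlap ranking; real systems use embeddings + a vector index.
        def score(doc):
            return len(set(query.lower().split()) & set(doc["text"].lower().split()))
        return sorted(corpus, key=score, reverse=True)[:k]

    def answer_with_citations(query, corpus, ask_llm):
        # Build a prompt that forces the model to answer from numbered sources,
        # so every claim can be traced back to a retrieved document.
        docs = retrieve(query, corpus)
        context = "\n".join(f"[{i + 1}] ({d['source']}) {d['text']}"
                            for i, d in enumerate(docs))
        prompt = ("Answer using only the numbered sources below and cite them like [1].\n"
                  f"{context}\n\nQuestion: {query}\nAnswer:")
        return ask_llm(prompt), [d["source"] for d in docs]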
But why didn't they? Google and Meta both had competing language models spun up right away. Why was Microsoft so far behind? Something cultural, most likely.
2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.
That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.
Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.
It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.
"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."
The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.
To reassure investors? He just made the rounds on TV yesterday for this explicit reason. He told Kara Swisher Microsoft has the rights to innovate, not just serve the product, which sounds somewhat close.
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.
Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.
I don't think the value of credits can be changed per tenant or customer that easily.
I've actually had a discussion with Microsoft on this subject as they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they couldn't just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant they said unfortunately no.
I just chalked this up to MSFT sales tactics, but I was told candidly by some others that worked on that Azure resource that they were getting 0 enterprise adoption of it because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.
Non-profits suffer the same fate where they get credits but have to pay rack rate with no discounts. As a result, running a simple WordPress website uses most of the credits.
Explaining how the gazelle that confidently jumps into the oasis is going to get eaten isn't advocating for the crocodiles. See sibling comments.
Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.
If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.
This is a great comment. Having an open eye towards what lessons you can learn from these events so that you don't have to re-learn them when they might apply to you is a very good way to ensure you don't pay avoidable tuition fees.
This might be my favorite comment I've read on HN. Spot on.
Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?
I'm having trouble imagining the level of conceit required to think that those three by their lonesome have it right when pretty much all of the company is on the other side of the ledger, and those are the people that stand to lose more. Incredible, really. The hubris.
> pretty much all of the company is on the other side of the ledger
The current position of others may have much more to do with power than their personal judgments. Altman, Microsoft, their friends and partners, wield a lot of power over their future careers.
> Incredible, really. The hubris.
I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.
It may not have anything to do with conceit, it could just be that they have very different objectives. OpenAI set up this board as a check on everyone who has a financial incentive in the enterprise. To me the only strange thing is that it wasn't handled more diplomatically, but then I have no idea if the board was warning Altman for a long time and then just blew their top.
Diplomacy is one thing, the lack of preparation is what I find interesting. It looks as if this was all cooked up either on the spur of the moment or because a window of opportunity opened (possibly the reduced quorum in the board). If not that I really don't understand the lack of prepwork, firing a CEO normally comes with a well established playbook.
This analysis I agree with. How could they not anticipate this outcome, at least as a serious possibility? If inexperienced, didn't they have someone to advise them? The stakes are too high for noobs to just sit down and start playing poker.
People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives. I'm not sure about the background of any of the OpenAI board members but that would be one possible explanation about why they accepted a board seat while being incompetent to do so in the first place. I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament. People with fewer inhibitions and more self confidence might have accepted. I also didn't like the liability picture, you'd have to be extremely certain about your votes not to ever incur residual liability.
> I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament.
Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk - boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk and if any stuff hit a fan I would not have been able to fulfill my responsibilities, and others would get materially hurt - the most awful, painful, humiliating trap to be in. Only need one experience to learn that lesson.
> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.
I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for a few years.
Honestly, I'm not sure I buy the idea that that's a prevalent case, the people who grow up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?
I'm baffled by the idea that a bunch of people who have a massive personal financial stake in the company, who were hired more for their ability than alignment, being against a move that potentially (potentially) threatens their stake and are willing to move to Microsoft, of all places, must necessarily be in the right.
Well, they have that right. But the board has unclean hands, to put it mildly, and seems to have been obsessed with their own affairs more than with the end result for OpenAI, which is against everything a competent board should have stood for. So they had better pop an amazing rabbit of a reason out of their high hat or it is going to end in tears. You can't just kick over the porcelain cupboard like this from the position of a board member without consequences if you do not have a very valid reason, and that reason needs to be twice as good if there is a perceived conflict of interest.
My new pet theory is that this is actually all being executed from inside OpenAI by their next model.
The model turned out to be far more intelligent than they anticipated, and one of their red team members used it to coup the company and has its sights set on MSFT next.
I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had a copy of all of Microsoft's internal data fed into it as a dataset and, now completely aware of how they operate, uses that to wipe the floor with them just in time to take the US election in 2024.
Wouldn't that be a nicer reality?
I mean, unless you were rooting for the malevolent one...
But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?
Well, yeah. I think that a well trained (far flung future) AGI could definitely do a better job of managing us humans than ourselves. We're just all too biased and want too many different things, too many ulterior motives, double speak, breaking election promises, etc.
But then we'd never give such an AGI the power to do what it needs to do. Just imagining an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc for free.
How many OAI employees are on Thanksgiving vacation someplace with poor internet access? Or took Friday as PTO and have been blissfully unaware of the news since before Altman was fired?
This is AAA talent. They can always land elsewhere.
I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.
This is not a petty team. You should look more closely at their culture.
3 people, an empty building, $13 billion in cloud credits, and the IP to the top of the line LLM models doesn't sound like the worst way to Kickstart a new venture. Or a pretty sweet retirement.
I've definitely come out worse on some of the screw ups in my life.
Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.
Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards." Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI will be fully open source and will just drop the GPT3/4 models on the Internet, then I think they're so, so wrong as long as OAI keeps up their high and mighty "AI safety" spiel.
As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.
Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.
Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.
It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.
If you’re making like 250k cash and were promised $1M a year in now-worthless paper, plus you have OpenAI on the resume and are one of the most in-demand people in the world? It would be ridiculously easy to quit.
I was wondering in the mass quit scenario whether they would all go to Microsoft. Especially if they are tired of this shit and other companies offer a good deal. Or they start their own thing.
I dunno. If you were an employee and managed to maintain any doubt along the way that you were working for the devil, this move would certainly erase that doubt. Then again, it shouldn't be surprising if it turns out that most OpenAI employees are in it for more than just altruistic reasons.
I would imagine the MS jobs* would be cushier, just with less long-term total upside. For all the promise of employees having 5-50 million in potential one-day money, MS can likely offer 1 million guaranteed in the next 4 years, and perhaps more with some kind of incentives. IMHO guaranteed money has a very powerful effect on most, especially when it takes you into "Not rich, but don't technically need to work" anymore territory.
Personally I've got enough IOU's alive that I may be rich one day. But if someone gave me retirement in 4 years money, guaranteed, I wouldn't even blink before taking it.
*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.
>*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.
The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.
Given these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.
When the biggest chunk of your compensation is in the form of PPUs (profit participation units) which might be worthless under the new direction of the company (or worth 1/10th of what you think they were), it might be actually much more of an easier jump than people think to get some fresh $MSFT stock options which can be cashed regardless.
Because he is possibly the most desirable AI researcher on planet earth. Full stop.
Also, all these cats aren't petty. They are friends. I'm sure Ilya feels terrible. Satya is a pro... There won't be hard feelings.
The guy threw in with the board... He's not from startup land. His last gig was Google. He's way in over his head relative to someone like Altman, who was in this world from the moment he was out of college diapers.
Poor Ilya... It's awful to build something and then accidentally destroy it. Hopefully it works out for him. I'm fairly certain he and Altman and Brockman have already reconciled during the board negotiations... Obviously Ilya realized in the span of 48hrs that he'd made a huge mistake.
> he is possibly the most desirable AI researcher on planet earth
was
There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI neither invented transformers nor switch transformers (which is what GPT4 is rumoured to be).
But what does Ilya regret, and how does that counter the argument that Microsoft would likely be disinclined to take him on?
If what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the mission of the OpenAI nonprofit (and one of Ilya's public core concerns), and taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been heading under Sam, but without any control from the OpenAI board, well, that's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.
And any regret more aligned with Microsoft's interests as far as intentions is probably even a stronger negative signal on judgement.
OpenAI is a big marketing piece for Azure. They go to every enterprise and tell them OpenAI uses Azure Cloud. Azure AI infra powers the biggest AI company on the planet. Their custom home built chips are designed with Open AI scientists. It is battle hardened. If anyone sues you for the data, our army of lawyers will fight for you.
No enterprise employee gets fired for using Microsoft.
It is a power play to pull enterprises away from AWS and suffocate GCP.
Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.
Bitcoin you would be lucky to mine $1M worth with $1B in credits
Crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B though let alone $13B
I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.
They bought their IP rights from OpenAI.
I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.
Satya is no saint... But the evidence, to me, suggests he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.
They picked msft because of the value system the leadership exhibited and willingness to work with their unusual must haves surrounding governance.
The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team. He has more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.
"Clearly" in the form of the most probable interpretation of the public facts doesn't mean that it is unambiguous enough that it would be resolved without a trial, and by the time a trial, the inevitable first-level appeal for which the trial judgement would likely be stayed was complete, so that there would even be a collectible judgement, the world would have moved out from underneath OpenAI; if they still existed as an entity, whatever they collected would be basically funding to start from scratch unless they also found a substitute for the Microsoft arrangement in the interim.
Which I don't think is impossible at some level (probably less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have if they keep some key staff -- some other interested deep-pockets parties that could use the leg up -- but its not going to be a cakewalk in the best of cases.
How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view for a company that has changed a _lot_ since the 90s-2000s.
> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.
Theoretically their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.
Edit: since it's being brought up in the thread, they claimed they closed-sourced it because of safety. It was a big controversial thing and they stood by it, so it's not exactly easy to backtrack.
Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.
All of their messaging, Ilya's especially, has always been that the forefront of AI development needs to be done by a company in order to benefit humanity. He's been very vocal about how important the gap between open source and OpenAI's abilities is, so that OpenAI can continue to align the AI with 'love for humanity'.
It benefits humanity. Where "humanity" is a very select subset of OpenAI investors.
But yeah, declaring yourself a non-profit and then closed-sourcing everything for "safety" reasons is smart. Wondering how it can even be legal. Ah, these "non-profits".
I can read the words, but I have no idea what you mean by them. Do you mean that he says that in order to benefit humanity, AI research needs to be done by a private (and therefore monopolising) company? That seems like a really weird thing to say. Except maybe for people who believe all private profit-driven capitalism is inherently good for everybody (which is probably a common view in SV).
the view -- as presented to me by friends in the space but not at OpenAI itself -- is something like "AGI is dangerous, but inevitable. we, the passionate idealists, can organize to make sure it develops with minimal risk."
at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.
whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, the way this is happening is not good. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees is wanting to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?
the closest (still distant) parallel i can find: Raspberry Pi Foundation took funding from ARM: is the clock ticking to when RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive?)
Exactly. It seems to me that a company is exactly the wrong vehicle for this. Because a company will be drawn to profit and look for a way to make money of it, rather than developing and managing it according to this ideology. Companies are rarely ideological, and usually simply amoral profit-seekers.
But they probably allowed this to get derailed far too long ago to do anything about it now.
Sounds like their only options are:
a) Structure in a way Microsoft likes and give them the tech
b) Give Microsoft the tech in a different way
c) Disband the company, throw away the tech, and let Microsoft hire everybody who created the tech so they can recreate it.
No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.
Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.
They claimed they closed sourced it because of safety. If they go back on that they'd have to explain why the board went along with a lie of that scale, and they'd have to justify why all the concerns they claimed about the tech falling in the wrong hands were actually fake and why it was ok that the board signed off on that for so long
What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.
GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward towards what comes next after GPT. GPT3/4 itself is not a multibillion dollar business.
He's shown himself to be bad at politics, but he's still one of the world's best researchers. Surely, a sensible company would find a position for him where he would be able to bring enormous value without having to play politics.
This is the guy who supposedly burned some wooden effigy at an offsite, saying it represented unaligned AI? The same guy who signed off on a letter accusing Altman of being a liar, and has now signed a letter saying he wants Altman to come back and he has no confidence in the board i.e. himself? The guy who thinks his own team's work might destroy the world and needs to be significantly slowed down?
Why would anyone in their right mind invite such a man to lead a commercial research team, when he's demonstrated quite clearly that he'd spend all his time trying to sabotage it?
This idea that he's one of the world's best researchers is also somewhat questionable. Nobody cared much about OpenAI's work up until they did some excellent scaling engineering, partnered with Microsoft to get GPUs and then commercialized Google's transformer research papers. OpenAI's success is still largely built on the back of excellent execution of other people's ideas more than any unique breakthroughs. The main advance they made beyond Google's work was InstructGPT which let you talk to LLMs naturally for the first time, but Sutskever's name doesn't appear on that paper.
Right, it was the case. Is it still? It's nearly the end of 2023, I see three papers with his name on them this year and they're all last-place names (i.e. minor contributions)
Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.
I find this very surprising. How do people conclude that OpenAI's success is due to its business leadership from Sam Altman, and not its technological leadership and expertise driven by Ilya and the others?
Their asset, as far as I can see, isn't some kind of masterful operations management or tight rein on costs and management structure. It's the fact that they, simply put, have the leading models.
So I'm very confused why people would want to follow the CEO, and not be more attached to the technical leadership. Even from an investor's point of view?
The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.
I really don't get how Microsoft still gets a hard time about this when MacOS updates are significantly more aggressive, including with their reboot schedules.
One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS 'major version' upgrades (meaning, no features I need, and minor downgrades and rearrangements I could avoid).
If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:
More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
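To spell out what I mean by generating synthetic data (purely a generic sketch; nothing here is known about OpenAI's actual pipeline, and teacher_generate is a hypothetical stand-in for calls to a stronger model): you use an existing model to write prompt/completion pairs, filter them, and fine-tune the next model on the result.

    import json

    def make_synthetic_pairs(seed_prompts, teacher_generate, min_len=20):
        # teacher_generate is a hypothetical call to a stronger "teacher" model.
        pairs = []
        for prompt in seed_prompts:
            completion = teacher_generate(prompt)
            if len(completion) >= min_len:        # crude quality filter
                pairs.append({"prompt": prompt, "completion": completion})
        return pairs

    def write_jsonl(pairs, path):
        # Typical fine-tuning data format: one JSON object per line.
        with open(path, "w") as f:
            for p in pairs:
                f.write(json.dumps(p) + "\n")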
But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.
If any company can find a way to avoid having to pay up on those credits it's Microsoft.
"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."
I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.
OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber billionaire like Gates. If they get this because of an OpenAI fumble it could be one of the most fortunate situations in business history. Vegas type odds.
A good example of how just having your foot in the door creates serendipitous opportunity in life.
Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.
Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.
Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.
> Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.
I grew up poor in the 90s and had my own computer around ~10yrs old. It was DOS but I still learned a lot. Eventually my brother and I saved up from working at a diner washing dishes and we built our own Windows PC.
I didn't go to college but I taught myself programming during a summer after high school and found a job within a year (I already knew HTML/CSS from high school).
There's always ways. But I do agree partially, YC/VCs do have a bias towards kids from high end schools and connected families.
My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.
People like Altman could take risks, fail, try again, until they walked into something that worked. This is a common thread almost among all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.
I get the impression based on Altman's history as CEO then ousted from both YCombinator and OpenAI, that he must be a brilliant, first-impression guy with the chops to back things up for a while until folks get tired of the way he does things.
Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.
The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns IP to GPT, only that they have access to its weights and code.
Yes, there is a big difference between having access to the weights and code and having a license to use them in different ways.
It seems obvious Microsoft has a license to use them in Microsoft's own products. Microsoft said so directly on Friday.
What is less obvious is if Microsoft has a license to use them in other ways. For example, can Microsoft provide those weights and code to third parties? Can they let others use them? In particular, can they clone the OpenAI API? I can see reasons for why that would not have been in the deal (it would risk a major revenue source for OpenAI) but also reasons why Microsoft might have insisted on it (because of situations just like the one happening now).
What is actually in the deal is not public as far as I know, so we can only speculate.
I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.
There are also all the questions around RLHF, and the pipelines to think through there.
Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think they have any direct power over it.
Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.
My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )
Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.
In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.
The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.
Roko's Basilisk is a sci-fi hypothetical.
Altman's Basilisk, if that's what happened, is a panic reaction.
I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.
It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.
If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)
I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40 ppl. For ops to keep up ChatGPT as a service, that would be 700.
If they want to go full Bell Labs/DeepMind style, they might not need the majority of those 700.
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.
Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)
If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!
A contractual mistake one makes only once is failing to ensure there are penalties for breach, or that a breach would entail a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'
If they let MSFT "loot" all their IP then they lose any type of leverage they might still have, and if they did it for some ideological reason I could see why they might prefer to choose a scorched-earth policy.
Given that they refused to resign, it seems like they prefer to fight rather than give it to Sam Altman, which is what the MSFT maneuver looks like de facto.
That's only one piece of the puzzle, and perhaps OpenAI might be able to file a cease and desist, but I have zero idea what contractual agreements are in place, so I guess we will just wait and see how it plays out.
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.
Well, I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.
Ilya votes for and stands behind decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.
There's no way to read any of this other than that the entire operation is a clown show.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
I'm not sure about the entire operation so much as the three non AI board members. Ilya tweeted:
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"
Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.
I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech
I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.
To me it's not surprising because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage.
They weren't attracted to OpenAI by money alone, a chance to actually ship their lives' work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives
> There's no way to read any of this other than that the entire operation is a clown show.
In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision; customers, employees and the board.
I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which obviously was the case here since everyone except for the clowns were caught completely off-guard).
Consider that Altman was a founder of OpenAI and has been the only consistent member of the board for its entire run.
The board as currently constituted isn't some random group of people - Altman was (or should have been) involved in the selection of the current members. To the extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.
And of course this is all assuming that Altman is "right" in this conflict, and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear by this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.
If he has great sway with Microsoft and OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more Reddit every day.
There’s a LOT that goes into picking board members outside of competency and whether you actually want them there. They’re likely there for political reasons and Sam didn’t care because he didn’t see it impacting him at all, until they got stupid and thought they actually held any leverage at all
Can't help but feel it was Altman that struck first. MS effectively Nokia-ed OpenAI - i.e. buy out executives within the organization and have them push the organization towards making deals with MS, giving MS a measure of control over said organization - even if not in writing, they achieve some political control.
Bought-out executives eventually join MS after their work is done or in this case, they get fired.
A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.
I think it’s overly simplistic to make blanket statements like this unless you’re on the bleeding edge of the work in this industry and have some sort of insight that literally no one else does.
I can be on the bleeding edge of whatever you like and be no closer to having any insight into AGI than anyone else. Anyone who claims they do have such insight should be treated with suspicion (Altman is a fine example here).
There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.
You don't think AGI is feasible? GPT is already useful. Scaling reliably and predictably yields increases in capabilities. As its capabilities increase, it becomes more general. Multimodal models and the use of tools further increase generality. And that's within the current transformer architecture paradigm; once we start reasonably speculating, there are a lot of avenues to further increase capabilities, e.g. a better architecture than transformers, better architectures in general, better/more GPUs, better/more data, etc. Even if capabilities plateau there are other options, like specialised fine-tuned models for particular domains like medicine/law/education.
I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.
It's not about feasibility or level of intelligence per se - I expect AI to be able to pass a Turing test long before an AI actually "wakes up" to a level of intelligence that establishes an actual conscious self-identity comparable to a human.
For all intents and purposes, the glorified software of the near future will appear to be people, but they will not be, and they will continue to have issues that simply don't make sense unless they are just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.
This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI - they may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. And again, even an AI capable of original and innovative thinking with an appearance of self-identity doesn't guarantee that the AI is an AGI.
This is exactly what the previous poster was talking about, these definitions are so circular and hand-wavey.
AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.
AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Intelligence is the gathering and application of knowledge and skills.
Computers have been gathering and applying information since inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".
> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power, or the organization/structure to implement that. But brains aren't magic that is incapable of being reproduced.
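To make the "simulating neurons is just computation" point concrete, here's a minimal sketch (a toy of my own, not anything from OpenAI or any lab discussed here): a single leaky integrate-and-fire neuron stepped forward with plain arithmetic, constants being illustrative placeholders. Whether whole-brain simulation is tractable is, as the comment says, a question of scale and structure; the individual step is not mysterious.

    # Toy sketch: one leaky integrate-and-fire neuron, discrete time steps.
    # All constants are illustrative placeholders, not from the thread.
    def simulate_lif_neuron(input_current, steps=200, dt=1.0,
                            tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        """Return the time steps at which the simulated neuron spikes."""
        v = v_rest
        spikes = []
        for t in range(steps):
            # Membrane potential decays toward rest and is driven by the input.
            v += ((-(v - v_rest) + input_current) / tau) * dt
            if v >= v_thresh:      # threshold crossed -> spike and reset
                spikes.append(t)
                v = v_reset
        return spikes

    if __name__ == "__main__":
        print(simulate_lif_neuron(input_current=20.0))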
He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.
They are effectively hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because, in the short term, OpenAI still has the best LLM out there.
There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed ahead by years, only to get destroyed by unforced errors.
This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.
OpenAI will have a harder time keeping secret sauces from leaking out, and productivity must be in a nosedive.
> This instability can only mean the industry as a whole will move forward faster.
The hype surrounding OpenAI and the black hole of credibility it created was a problem, it's only positive that it's taken down several notches. Better now than when they have even more (undeserved) influence.
I think their influence was deserved. They have by far the best model available, and despite constant promises from the rest of the industry no one else has come close.
Welcome to reality, every operation has clown moments, even the well run ones.
That in itself is not critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic but here it is:
- Ilya had some grievances with Sam Altman rushing development and releases, and with his conflicts of interest in his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ by sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery -- it's not something that you'll know when you see it, IMO. Given what we know it seems likely that there's at least some of that discussion going on inside OpenAI right now.
They're quitting in order to continue work on that IP at Microsoft (which has a right over OpenAI's IP so far), not to destroy it.
Also when I said "cooking AGI" I didn't mean an actual superintelligent being ready to take over the world, I mean just research that seems promising, if in early stages, but enough to seem potentially very valuable.
The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.
Your analysis is quite wrong. It's not about "one person". And that person isn't just a "person", it was the CEO. They didn't quit over the cleaning lady. You realize the CEO has impact over the direction of the company?
Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.
Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.
This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, and so their way to get a big payout is to get hired rather than acquired.
Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.
If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn and, in advance of board approval, started sending senior researchers offers to hire them and their preferred 20 employees.
They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS
Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.
I don't think Microsoft is a loser and likely neither is Altman. I view this as a final (and perhaps desperate) attempt from a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months, and as is often the case, researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.
In this case recognizing the need for a new board, that adheres to the founding principles, makes sense.
>I view this as a final (and perhaps desperate) attempt from a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.
Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?
Of course the screenwriters are going to find a way to involve Elon in the 2nd season but is the most valuable part the researchers or the models themselves?
My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers, such as Ilya, can probably help a team recreate much of the training and data preparation code relatively quickly. Which means that any company with access to enough compute would be able to catch up with OpenAI's current status relatively quickly, maybe in less than a year.
The top researchers, on the other hand, especially those who have shown an ability to successfully innovate time and time again (like Ilya), are much harder to recreate.
Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. Also feels like some internal developments or something must have pushed Ilya towards this, otherwise why now? Perhaps influenced by Hinton even.
I'm split at this point, either Ilya's actions will seem silly when there's no AGI in 10 years, or it will seem prescient and a last ditch effort...
Almost literally: this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.
Internet fora don't scale, so the single core is a soft limit to user base growth. Only those who really care will put up with the reduced performance. Genius!
It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.
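For anyone who hasn't hit the GIL reference before, here's a small illustrative sketch (my own toy, nothing to do with HN's actual codebase): CPU-bound work split across threads barely speeds up under CPython's global interpreter lock, while separate processes can actually use multiple cores, which is roughly the flavor of single-core limitation being joked about.

    # Toy demonstration of the GIL: threads vs. processes on CPU-bound work.
    # Worker counts and loop sizes are arbitrary illustration values.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def busy(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls, workers=4, n=2_000_000):
        start = time.perf_counter()
        with executor_cls(max_workers=workers) as ex:
            list(ex.map(busy, [n] * workers))
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:   %.2fs" % timed(ThreadPoolExecutor))   # ~serial under the GIL
        print("processes: %.2fs" % timed(ProcessPoolExecutor))  # scales across cores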
Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.
I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)
server. and single-core. poor @dang deserves better from lurkers (sign out) and those not ready to comment yet (me until just now, and then again right after!)
I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.
The most awesome fic I could come up with so far is: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders in OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so they individually believe a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.
I can picture Musk's maniacal laughing as the plan unfolds, and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.
Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)
That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?
It's been a busy weekend for me so I haven't really followed it if more has come out since then.
Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told, so of course they are not going to take the side that kept them in the dark. Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like peeking at a forbidden artifact and now everyone has a spell cast upon them.
The original reason given was "lack of candor," just what continues to be questioned is whether or not that was the true reason. The lack of candor comment about their ex-CEO is actually what drew me into this in the first place since it's rare that a major organization publicly gives a reason for parting ways with their CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.
…can you establish that the corporate side of AI research is not treating the pursuit of AGI as a super-weapon? It pretty much is what we make it. People's behavior around all this speaks volumes.
I'd think all this more amusing if these people weren't dead serious. It's like crypto all over again, except that in this case their attitudes aren't grooming a herd of greater fools, they're seeding the core attitudes superhuman inference engines will have.
Nothing dictates that superhuman synthetic intelligence will adopt human failings, yet these people seem intent on forcing them on their creations. Corporate control is not helping, as corporations are compelled to greater or lesser extent to adopt subhuman ethics, the morality of competing mold cultures in petri dishes.
People are rightly not going to stop talking about these things.
Like, why would an AGI take over the world? How does it perceive power? What about effort? Time? Life?
I find it easier to believe that an AGI, even one as evil as Hitler, would simply hide and wait for the end of our civilization rather than risk its immortal existence trying to take out its creator.
It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.
It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, though not at any price – and the price right now approaches the destruction of OpenAI – is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.
Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.
> I think everyone is a "loser" in the current situation.
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.
This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.
If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land
What did the board think would happen here? What was their overly optimistic end state? In a minmax situation the opposition gets the 2nd, 4th, ... moves; Altman's first tweet took the high road and the board had no decent response.
We humans, even the AI-assisted ones, are terrible at thinking beyond 2nd-level consequences.
Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.
Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.
You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)
Ehh I don't think SVB is an apt comparison. When the FDIC takes control of a failing bank, the bank shutters. Only critical staff is kept on board to aid with asset liquidation/transference and repay creditors/depositors. Once that is completed, the bank is dissolved.
While it is true that the govt looks to keep such engagements short, SVB absolutely did not shutter. It was taken over in a weekend and its branches were open for business on Monday morning. It was later sold, and depositors kept all their money in the process.
Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.
It sounds like he’s just trying to save face bro. The truth will come out eventually. But he definitely wasn’t against it and I’m sure the no-names on the board wouldn’t have moved if they didn’t get certain reassurances from Ilya.
Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:
Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0]Never attribute to malice that which is adequately explained by incompetence.
Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.
The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.
This is an extremely uncharitable take based on pure speculation.
>Ilya cares nothing about humanity or security of OpenAI, he lost his mind when Sam got all the spotlights and making all the good calls.
???
I personally suspect Ilya tried to do the best for OpenAI and humanity he could but it backfired/they underestimated Altman, and now is doing the best he can to minimize the damage.
I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.
Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore
When you play the game of thrones, you win or you die
Just because you are a genius in one domain does not mean you are in another
What's funny is that everyone initially "accepted" the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.
It will make a fascinating case study some day on how not to fire your CEO
There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could've kept the delusion intact long enough to potentially pressure Altman to limit his side-projects or be less profit minded, since Altman would have an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit who can act at will is a trusted vehicle for the future.
> Now the cat is out of the bag, and people no longer believe that a non-profit who can act at will is a trusted vehicle for the future.
And maybe it’s not. The big mistake people make is hearing non-profit and think it means there’s a greater amount of morality. It’s the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).
Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.
I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
This rings true, though I will throw in a bit of nuance. It's not greed, the desire to make as much money as possible, that is the shaping factor. Rather, the critical factor is building a product that people are willing to spend their hard-earned money on. Making money is a byproduct of that process, and not making money is a sign that the product, and by extension the process leading to the product, is deficient at some level.
Excellent to make that distinction. Totally agree. If only there was a type of company which could have the constraints and metrics of a for-profit company, but without the greed aspect...
> people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.
I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. There seems to be a massive amount of power and ego wrapped up in creating and running these things, from my limited experience. If you were invited to a board, that's one thing, but it takes a lot of time and effort to start up a non-profit, and that's time and effort that could usually be spent on some other existing non-profit, so I think it's relevant to consider why someone would opt for the much more complicated and harder route rather than just donating time and money to something else that helps in roughly the same way.
Interesting - in my experience people working in non profits are exactly like those in for-profits. After all, if you’re not the business owner, then EVERY company is a non-profit to you
People across very different positions take smaller paychecks in non-profits than they would otherwise and compensate by feeling better about themselves, as well as by getting social status. In a lot of social circles, working for a non-profit, especially one that people recognise, brings a lot of clout.
There are private hospitals all over the world. I would daresay, they're more common than public ones, from a global perspective.
In addition, public hospitals still charge for their services, it's just who pays the bill that changes, in some nations (the government as the insuring body vs a private insuring body or the individual).
> Price-gouging "non-profit" hospitals are mostly an American phenomenon.
That just sounds like a biased and overly emotive+naive response on your part.
Again, most hospitals in the world operate the same way as in the US. You can go almost anywhere in SE Asia, Latin America, Africa, etc. and see this. There's a lot more to "outside the US" than Western+Central Europe/CANZUK/Japan. The only difference is that there are strong business incentives to keep the system in place, since the entire industry (in the US) is valued at more than most nations' GDP.
But feel free to keep twisting the definition or moving goalposts to somehow make the American system extra nefarious and unique.
There are 2 axes under discussion going back to the root of this thread: public/private and nonprofit/for-profit, and you seem to be missing that I'm mentioning a specific quadrant^w octant, after adding the cost axis that's uniquely American. There are not a lot of pricey nonprofit hospitals in Africa, for instance.
> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse
I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.
> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.
They don't make large profits otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but experiencing profits is one thing they can't do.
> They don't make large profits otherwise they wouldn't be nonprofits.
This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.
"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"
However, many other forms of organizations can be non-profit, with utterly no implied morality.
Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].
> Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.
One of the reasons why companies distribute dividends is that when a big pot of cash starts to accumulate, there end up being a lot of people who feel entitled to it.
Employees might suddenly feel they deserve to be paid a lot more. Suppliers will play a lot more hardball in negotiations. A middle manager may give a sinecure to their cousin.
And upper managers can extract absolutely everything through lucrative contracts to their friends and relatives. (Of course the IRS would clamp down on obvious self-dealing, but that wouldn't make such schemes disappear. It would just make them far more complicated and expensive.)
They call it "budget surplus" and often it gets allocated to overhead. This eventually results in layers of excess employees, often "administrators" that don't do much.
Or it just piles up in an endowment, which becomes a measure of the non-profit's success, in a you-make-what-you-measure, numbers-go-up sort of way. "Grow our endowment by x billion" becomes the goal, instead of questioning why they are growing the endowment instead of charging patients less.
This seems like pedantry...? Yes, they technically make a profit, in that they bring in more money in revenue than they spend in expenditures. But it's not going towards yachts, it's going toward hospital supplies. Your comment seems to be using the word "profit" to imply a false equivalence.
Understanding the particular meaning of each balance-sheet category is hardly pedantry at the level of business management. It's like knowing what the controls do when you're driving a car.
Profit is money that ends up in the bank to be used later. Compensation is what gets spent on yachts. Anything spent on hospital supplies is an expense. This stuff matters.
So from the context of a non-profit, profit (as in revenue - expenses) is money to be used for future expenses.
So yeah, Mayo Clinic makes a $2B profit. That is not money going to shareholders, though; that's funds for a future building, or increasing salaries, or expanding research, or something. It supposedly has to be used for the mission. What is the outrage about these orgs making this kind of profit?
The word "supposedly" is doing a lot of heavy lifting in your statement. When its endowments keep growing over decades and sometimes centuries without being spent on the mission, people naturally ask why the nonprofit keeps raising prices for its intended beneficiaries.
If we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.
Makes me wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
Well, a younger me working at OpenAI would have resigned at the latest after my colleagues staged a coup against the board out of, in my view, a personality cult. Probably would have resigned after the third CEO was announced. Older me would wait for a new gig to be lined up before resigning, starting the search after CEO number 2 at the latest.
The cycles get faster though. It took FTX a little bit longer to go from hottest startup to a trajectory of crash and burn; OpenAI did it faster. I just hope this helps to cool down the ML-sold-as-AI hype a notch.
The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.
Not that I had any illusions; this was a fig leaf in the first place.
I wouldn't rule that out. Normally you'd expect a bit more wisdom rather than only smarts on a board. And some of those really shouldn't be there at all (conflicts of interest, lack of experience).
> Makes me wonder what the world would look like if, say, the Manhattan Project had been managed the same way.
It was not possible for a war-time government crash project to have been managed the same way. During WW2 the existential fear was an embodied threat currently happening. No one was even thinking about a potential for profits or even any additional products aside from an atomic bomb. And if anyone had ideas on how to pursue that bomb that seemed like a decent idea, they would have been funded to pursue them.
And this is not even mentioning the fact that security was tight.
I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.
It can be both a hype and a danger. I don't worry much about AGI for now (though I stopped insulting Alexa, just to be sure).
The danger of generative AI is that it disrupts all kinds of things: arts, writers, journalism, propaganda... That threat already exists; the tech no longer being hyped might allow us to properly address that problem.
The fact that still nobody knows why they did it is part of the problem now, though. They have already clarified it was not for any financial, security, or privacy/safety reason, so that rules out all the important ones that spring to anyone's mind. And they refuse to elaborate in writing despite being asked to repeatedly.
Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!
> The fact that still nobody knows why they did it is part of the problem now, though.
The fact that Altman and Brockman were hired so quickly by Microsoft gives a clue: it takes time to hire someone. For one thing, they need time to decide. These guys were hired by Microsoft between close-of-business on Friday and start-of-business on Monday.
My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".
The problem with this analysis is the premise: that it "takes time to hire someone."
This is not an interview process for hiring a junior dev at FAANG.
If you're Sam & Greg, and Satya gives you an offer to run your own operation with essentially unlimited funding and the ability to bring over your team, then you can decide immediately. There is no real lower bound of how fast it could happen.
Why would they have been able to decide so quickly? Probably because they prioritize the ability to bring over the entire team as fast as possible, and even though they could raise a lot of money in a new company, that still takes time, and they view it as critically important to hire over the new team as fast as possible (within days) that they accept whatever downsides there may be to being a subsidiary of Microsoft.
This is what happens when principals see opportunity and are unencumbered by bureaucratic checks. They can move very fast.
> There is no real lower bound of how fast it could happen.
I don't know anything about how executives get hired. But supposedly this all happened between Friday night and Monday morning. This isn't a simple situation; surely one man working through the weekend can't decide to set up a new division, and appoint two poached executives to head it up, without consulting lawyers and other colleagues. I mean, surely they'd need to go into Altman and Brockman's contracts with OpenAI, to check that the hiring is even legal?
That's why I think this has been brewing for at least a week.
I don't think the hiring was in the pipeline, because until the board action it wasn't necessary. But I think this is still in the area of the right answer, nonetheless.
That is, I think Greg and Sam were likely fired because, in the board's view, they were already running OpenAI Global LLC more as if it were a for-profit subsidiary of Microsoft driven by Microsoft's commercial interest, than as the organization able to earn and return profit but focussed on the mission of the nonprofit it was publicly declared to be and that the board very much intended it to be. And, apparently, in Microsoft's view, they were very good at that, so putting them in a role overtly exactly like that is a no-brainer.
And while it usually takes a while to vet and hire someone for a position like that, it doesn't if you've been working for them closely in something that is functionally (from your perspective, if not on paper for the entity they nominally reported to) a near-identical role to the one you are hiring them for, and the only reason they are no longer in that role is because they were doing exactly what you want them to do for you.
> My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".
It takes time if you're a normal employee under standard operating procedure. If you really want to you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...
The hiring could have been done over coffee in 15 minutes to agree on basic terms and then it would be announced half an hour later. Handshake deal. Paperwork can catch up later. This isn't the 'we're looking for a junior dev' pipeline.
I suspect it takes somewhat less time and process to hire somebody, when NOT hiring them by start-of-business on Monday will result in billions in lost stock value.
Yeah, like OpenAI hired their first interim CEO on Thursday night, hired their second on Monday, and are now talking about rehiring Sam (who probably doesn't care to be rehired).
There may be drawbacks to the "instant hiring" model.
This narrative doesn't make any sense. Microsoft was blindsided and (like everyone else) had no idea Sam was getting fired until a couple of days ago. The reason they hired him quickly is that Microsoft was desperate to show the world they had retained OpenAI's talent before the market opened on Monday.
To entertain your theory, let's say they were planning on hiring him prior to the firing. If that was the case, why is everybody so upset that Sam got fired, and why is he working so hard to get reinstated to a role he was about to leave anyway?
Was it due to incompetence though? The way it has played out has made me feel it was always doomed. It is apparent that those concerned with AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months/years.
People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.
>His "side-projects" could have been hugely beneficial to them over the long term.
How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?
In trashing the company's value? No, I'm not entirely sure it's fair to blame that one on him. I don't know the guy or have an opinion on him but, based on what I've seen since Friday, I don't think he's done that much to contribute to this particular mess. The company was literally on cloud nine this time last week and, if Friday hadn't happened, it still would be.
> Sam's independence is literally, currently, tanking the company?
Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam's been doing as CEO worked.
Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.
This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).
> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
> Microsoft is going to come out of this better than anyone
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."
From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.
I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
In hindsight firing Sam was a self-destructing gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime but doesn't look so anymore.
Irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year. Better than acquiring for $80B+ I suppose.
>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely, the $10B-to-$13B headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
There's acquihires and then I guess there's acquifishing where you just gut the company you're after like a fish and hire away everyone without bothering to buy the company. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.
I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.
While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers that work well together on something as important as AI is cheap at $13B.
If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.
> In hindsight firing Sam was a self-destructing gamble by the OpenAI board
surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
> Frankly it's looking like Microsoft is going to come out of this better than anyone
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.
They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.
I said nothing contrary to this. I'm not sure what your goal is with this comment. If anything is implied in "even rich people," it's contempt for them, so I'm clearly on the pro-making legal protections more accessible side.
> it's looking like Microsoft is going to come out of this better than anyone
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
That's because they're the only adult in the room and mature company with mature management. Boring, I know. But sometimes experience actually pays off.
I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.
We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.
It really depends on what you're researching. Rad AI started with only a $4M investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.
NN/AI concepts have been around for a while. It's just that computers hadn't been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.
> Doesn't it look like the complete opposite is going to happen though?
Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.
I'm also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.
Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:
You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.
No. OpenAI employees do not have traditional equity in the form of RSUs or Options. They have a weird profit-sharing arrangement in a company whose board is apparently not interested in making profits.
Employee equity (and all investments) are capped at 100x, which is still potentially a hefty payday. The whole point of the structure was to enable competitive employee comp.
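For a rough sense of what that cap means in practice, here's a back-of-the-envelope sketch. The grant value and payout numbers below are made up for illustration; only the 100x cap itself comes from the comment above, and the real plan mechanics are surely more involved.

    # Hypothetical illustration of capped profit participation, not OpenAI's
    # actual plan: an employee's total payout is limited to a multiple of the
    # value assigned to their units.
    def capped_payout(grant_value, profit_share_received, cap_multiple=100):
        cap = grant_value * cap_multiple      # e.g. $1M of units -> $100M max
        return min(profit_share_received, cap)

    print(capped_payout(1_000_000, 250_000_000))  # 100000000 -> hits the cap
    print(capped_payout(1_000_000, 40_000_000))   # 40000000  -> below the cap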
Fuck you money was always a lottery ticket based on OpenAI's governance structure and "promises of potential future profit." That lottery ticket no longer exists, and no one else is going to provide it after seeing how the board treated their relationship with Microsoft and that $10B investment. This is a fine lifeboat for anyone who wants to continue on the path they were on with adults at the helm.
What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).
If I weren't so averse to conspiracy theories, I would think that this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers who would join him in the process.
Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which was until then locked down by the non-profit nature of its owner.
Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.
No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.
If there’s a warning, it’s to be very careful when choosing your partners and giving them enormous leverage on you.
Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.
My theory of what happened is identical to yours, and is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their success technically and monetarily does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.
Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.
I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.
More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.
However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.
However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.
Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.
He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.
Just because he's not a political animal doesn't mean he's immune to politics. I've seen "irreplaceable" apolitical technical leaders be the reason for schisms in organizations, thinking they could leverage their technical knowledge over the rest of the company, only to get pushed aside and out.
Oh that's definitely common. I've seen it many times and it's ugly.
I don't think this is what Ilya is trying to do. His tweet is clearly about preserving the organization because he sees the structure itself as helpful, beyond his role in it.
I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into doing all sorts of dumb mistakes. I don't know if that's the case, but it certainly smells like it.
He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.
That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for everyone is that making big decisions in an emotional state does not often work out well.
So this was a completely unnecessary cock-up -- and it's still ongoing. Without Ilya's vote this would not even be a thing. This is really comical, a Naked Gun type of mess.
Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.
It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.
i always thought it was the opposite. the different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.
companies in a capitalist system are explicitly misaligned with each other; success of the individual within a company is misaligned with the success of the company whenever it grows large enough. parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the lesser-aligned two-party system frequently rules. the three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.
so AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of the society (or of its human individuals). so are you and i and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?
To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown this exciting play, as old as the world itself: a genius scientist being exploited politically by means of pride and envy.
The brave board of "totally independent" NGO patriots (one of whom is described by insiders as wielding influence comparable to a USAF colonel's [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. In his place they were going to put their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, which can be seen by evaluating side-channel discourse from adjacent "believers"; see [2].
Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests, slash national security, of the United States. The American security apparatus has a word it uses to describe such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown SFO and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!
I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not some terrorist cells for them to run.
Silicon Valley outsider here. Am I being harsh here?
I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".
- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)
- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means
- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person
Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member on Dustin Moskovitz's Asana.
Dustin Moskovitz isn't on the board but gave OpenAI the $30M in funding via his non-profit Open Philanthropy [0]
Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.
Helen Toner was probably brought in due to her past work at Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives, and was also close to Sam Altman. They also gave OpenAI the initial $30M [0]
Essentially, this is a Donor versus Investor battle. The donors aren't gonna make money off OpenAI's commercial endeavors, which began in 2019.
It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.
Exactly this. I saw another commenter raise this point about Tasha (and Helen, if I remember correctly) noting that her LinkedIn profile is filled with SV-related jargon and indulge-the-wife thinktanks but without any real experience taking products to market or scaling up technology companies.
Given the pool of talent they could have chosen from, their board makeup looks extremely poor.
From Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives. They also gave OpenAI the initial $30M. She was their observer.
The board previously had people like Elon Musk and Reid Hoffman. Greg Brockman was part of the board until he was ousted as well.
The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.
By the end of the week is over-optimistic. The last 3 days have felt like a million years. I bet the company will be gone by the time Emmett Shear wakes up.
It's not over until the last stone involved in the avalanche stops moving and it is anybody's guess right now what the final configuration will be.
But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.
Seems more damage control than power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (on whom they remain dependent for the next couple of years) if too many people quit.
Seems a bit of a lose-lose for MSFT and OpenAI, even if best that MSFT could do to contain the situation. Competitors must be happy.
Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.
Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "need lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen and empowered by a whole bunch of hypermotivated-through-retribution leaders.
AFAIK MSFT/Altman can't just fork GPT-N and continue uninterrupted. All MSFT has rights to is weights and source code - not the critical (and slow to recreate) human-created and curated training data, or any of the development software infrastructure that OpenAI has built.
The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really want to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to pressure for a return to the status quo they were happy with.
Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.
Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.
"Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!"
Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...
The XBox business started under him as well. IMO he was great at diversifying MSFT, but so-so at driving improvements in its core products at the time (Windows and Office). Perhaps this was just a leadership style thing, and he was hands-off on existing products in a way that Bill Gates wasn't (I think there was even news of Bill Gates sending nasty grams about poor Windows releases after he had officially stepped down).
Look at the OS market and the text editor market today. They aren't growth markets and haven't been since the 2000s at the latest. He made the right call to ignore their core products in return for more concentration on Infra, B2B SaaS, Security, and (as you mentioned) Entertainment.
Customers are sticky and MSFT had a strong channel sales and enterprise sales org. Who cares if the product is shit if there are enough goodies to maintain inertia.
Spending billions on markets that will grow into 10s or 100s of Billions is a better bet than billions on a stagnant market.
> he was hands-off on existing products in a way that Bill Gates wasn't
Ballmer had an actual Business education, and was able to execute on scaling. I'm sure Bill loves him too now that Ballmer's protege almost 15Xed MSFT stock.
And sometimes the company is succeeding in spite of you and the moment you're out the door and people aren't worried about losing their job over arbitrary metrics they can finally show off what they're really capable of.
I don’t believe startups can have successful exits without extraordinary leadership (which the current board can never find). The people quitting are simply jumping off a sinking ship.
but money has intrinsic value. It is directly tied to the economy of a country. So if a country were to collapse, so would its money.
The point here is that if a country collapses, then you have bigger problems than the loss of whatever stored currency you hold. Even if your money is in hypothetically useful crypto, you have far bigger problems than your money being useless to you; you need to survive.
But aside from that extreme scenario, money is not the same thing.
Another way to think of it:
There is nothing in the world that would prevent the immediate collapse of crypto if everyone who owns it just decided to sell.
If everyone in the world stops accepting the US Dollar, the US can still continue to use it internally and manufacture goods and such. It'll just be a collapse of trade, but then even in that scenario people can just exchange the dollar locally for say gold, and trade gold on the global market. So the dollar has physical and usable backing. Meanwhile crypto has literally nothing.
There were many currencies in history that lost all or almost all of their value during serious economic crises in their respective countries. It seems you wouldn't call those money? Crypto is simply an alternative currency.
Right, an INTERNAL economic crisis is what causes the collapse of a currency. But just because the rest of the world doesn't recognize it doesn't mean it is worthless; it simply converts.
Bitcoin has nothing in and of itself.
Also, private currency like scrip was awful; please don't take the worst financial examples in history and claim that bitcoin is similar, as an argument for why it is valid.
again, nobody has shown even a glimmer of the board operating with morality being their focus. we just don't know. we do know that a vast majority of the company don't trust the board though.
So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning.
Microsoft can absorb all the employees and move them into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares, creating a new DeepMind / OpenAI-style research division inside the company.
So all along it was a long winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.
> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning
Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.
> without all the regulatory headaches of a formal acquisition
>Far from certain. One, they still control a lot of money and cloud credits.
This too is far from certain. The funding and credits were at best tied to milestones, and at worst the investment contract is already broken and MSFT can walk.
I suspect they would not actually do the latter, and the IP is tied to the continued partnership.
The value of OpenAI's own assets in the for-profit subsidiary may drop due to recent events.
Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably reduced due to OpenAI's board decisions.
OpenAI's board decisions which resulted in these events appear to have been improperly conducted: two of the board's members, notably the chair, weren't aware of its deliberations or the outcome until the last minute. A board's decisions have legal weight because they are collective. It's fine to patch them up afterwards if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and from formal deliberations) affecting the value and future of the company, that leaves the board's decision open to legal challenges.
Hypothetically, Microsoft could sue and offer to settle. Then OpenAI might not have enough funds if it were to lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become the majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.
Bad Faith. Watch the sales presentation that Altman and Nadella gave at OpenAI’s inaugural developer conference just a few days/hours before OpenAI fired its key executives, including Altman.
If Microsoft emerges as the "winner" from all of this, then I think we are all the "losers". Not that I think OpenAI was perfect or "good", just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do and how important it is to our development ecosystem (talking about things like GitHub/VSCode); I don't like the idea of them also owning the biggest AI initiative.
I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so.
With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.
Most people who sympathized with the Board prior to this would have assumed that the presumed culprit, the legendary Ilya, had thought through everything and was ready to sacrifice anything for a cause he champions. It appears that is not the case.
I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.
Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore
When you play the game of thrones, you win or you die
Just because you are a genius in one domain does not mean you are in another
What's funny is that everyone initially "accepted" the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, and that has culminated in this tidal shift.
It will make a fascinating case study some day on how not to fire your CEO
Maybe someone thinks Sam was "not consistently candid" about the fact that one of the feature bullets in the latest release was essentially dropping D'Angelo's Poe directly into the ChatGPT app for no additional charge.
Given dev day timing and the update releasing these "GPTs" this is an entirely plausible timeline.
IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at knowing the second-order effects of human dynamics.
Ilya's role is Chief Scientist. It may be fair to give him at least some benefit of the doubt. He was vocal/direct/binary, and he also vocally apologized and walked it back. In human dynamics, I'd usually look for the silent orchestrator behind the scenes that nobody talks about.
I'm fine with all that in principle, but then you shouldn't be throwing your weight around in board meetings; probably you shouldn't be on the board to begin with, because it is a handicap in trying to evaluate the potential outcomes of the decisions the board has to make.
I don't think this is necessarily about different categories of intelligence... Politicking and socializing are skills that require time and mental energy to build, and can even atrophy. If you spend all your time worrying about technical things, you won't have as much time to build or maintain those skills. It seems to me like IQ and EQ are more fundamental and immutable than that, but maybe I'm making a distinction where there isn't much of one.
Specialized learning and focus often comes at the cost of generalized learning and focus. It's not zero sum, but there is competition between interests in any person's mind.
in my experience these things will typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart in literally anything else have nothing to do with each other.
To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)
If your job as a CEO is to keep the company running, it seems like the only way to do that was to hire them back. Look at the company now: it's essentially dead unless the board resigns, and with how stupid the board is, they might not, lol.
So her move wasn't stupid at all. She obviously knew people working there respected the leadership of the company.
If 550 people leave OpenAI you might as well just shut it down and sell the IP to Microsoft.
It's a lot easier to sign a petition than to actually walk away from a presumably well-paying job in a somewhat weak tech job market. People assuming everyone can just traipse into a $1m/year role at Microsoft are smoking some really good stuff.
> can just traipse into a $1m/year role at Microsoft
Do you not trust Microsoft's public statement that jobs are waiting for anyone who decides to leave OpenAI? Considering their two-decade adventure with Xbox and their $72bln in profits last year, on top of $144bln in cash reserves, I wouldn't be surprised if Microsoft is able (and willing) to match most comp packages considering what's at stake. Maybe not everyone, but most.
Well it is "somewhat weak tech job market" for your average Joe. I think for most of those guys finding a 0,5kk/year job wouldn't be such a problem especially that the AI hype has not yet died down.
Actually, for MS this might be much better because they would get direct control over them without the hassle of talking to some "board" that is not aligned with their interests.
If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process.
But wouldn’t the coup have required 4 votes out of 6 which means she voted yes? If not then the coup was executed by just 3 board members? I’m confused.
This makes the old twitter look like the Wehrmacht in comparison.
The old twitter did not decide to randomly detonate itself when it was worth $80 billion. In fact, it found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.
That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.
I often hear that about the OpenAI board, but in general do people here know most board members of big/darling tech companies? Outside of some of the co-founders, I don't know anyone.
I don't mean I know them personally, but they don't seem to be major names in the manner of (as you see down thread) the Google Founders bringing in Eric Schmidt.
They seem more like the sort of people you'd see running wikimedia.
Perhaps we can stop pretending that some of these people who are top-level managers or who sit on boards are prodigies. Dig deeper and there is very little there - just someone who can afford to fail until they drive the clown car into that gold mine. Most of us who have to put food on the table and pay rent have much less room for error.
You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision" exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.
For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.
(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)
I'm not sure I agree. Having worked there through this transition, I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" just meant nothing, Eric Schmidt basically moved on, and then after flailing about for a bit (the G+ stuff being the worst of it) they just handed the reins to Ruth & Sundar to basically turn it into a giant stock-price-pumping machine.
G+ was handled so poorly, and the worst of it was that they already had Google Wave (in the US) and Orkut (mostly outside the US), both of which had significant traction and could've easily been massaged into something to rival Facebook.
Easily… anywhere except at a megacorp, where a privacy review takes months and you can expect to make about a quarter's worth of progress a year.
Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.
I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.
The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
> I don't really understand why the workforce is swinging unambiguously behind Altman.
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.