Anyone with sufficient intellectual power to grok building AI must be fully aware of its monetization value. If you are navel-gazing over AGI taking over humanity, you must first step through the stage where capital and AI couple up.
So it is not too much to ask, since others who were also aware of the inherent, entirely predictable social distortions were relying on these individuals and "non-profit" organizations to actually live up to their claims.
As it is, it seems like thinly disguised propaganda to recruit and benefit from altruistic and capable workers in the field, only to have Sam Altman (and whoever is behind him $$$) parachute in, take over, and say "oh well, you can't expect people to be truthful and have principles! What are ya, a chump?"
If Plucky Nonprofit was OAI-1, and Abruptly Serious AI Lab was OAI-2, and Viable Commercial Product was OAI-3, and Scary Brittle Governance With Creepy NSA Ties was OAI-4, then God Emperor of Arrakis is presumably OAI-5.
I figured they’d ship GPT-5 to justify OAI-5, but I guess they’ve realized that they now answer to no one on anything in practical terms.
Nit: the God Emperor arises long after the Butlerian Jihad banned thinking machines, so it's not a great image for the rise of totalitarian artificial intelligence...
> Since then, OpenAI's board has been refreshed with more tech executives, chaired by Bret Taylor, former Salesforce co-CEO who now runs his own AI startup. Any corporate changes need approval from its nine-person non-profit board.
Why would the non-profit board approve a change to a for-profit company? Wouldn't this be against the nature of the non-profit entity that was founded and which they are supposed to govern?
Here are the board members just to save anyone else looking it up:
Bret Taylor (Chair), Sam Altman, Adam D’Angelo, Dr. Sue Desmond-Hellmann, Retired U.S. Army General Paul M. Nakasone, Nicole Seligman, Fidji Simo, Larry Summers and Zico Kolter.
I don't think you can vote to change a non-profit to a for-profit company. Once a non-profit, always a non-profit.
There are sort-of loopholes, like changing the name of "Open AI" to something else and selling the name along with the IP to a commercial entity - EdX did this. But it's not really a loophole, since the board would need to justify the price the commercial company paid, and the money from the sale is still within a non-profit and bound by its rules.
Exactly. The problem here is that OA is too valuable, and the non-profit owns too much of it (i.e. all of it). The non-profit owns exactly 100% of the OA for-profit right now, and can cancel the "PPUs" any time it pleases, and if you believe Altman about the agent roadmap, would be required to do so within a few years. Now, given that the PPUs are being raised at a pseudo-marketcap of $150b, we're hearing, and that non-profits are required to sell assets for fair market value, how can the board sell its 100% ownership of the for-profit for anything less than the $150b that the market values that ownership at...? And where does one get $150b, exactly? Even Sam Altman can't pull that off.
So, the whole question has been, how does he figure out how to leave the board with <50% ownership (preferably much less than that, even 0%), in a way which passes the legal sniff test so the deal goes through, but which doesn't sacrifice >$75b that neither he nor anyone else has?
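To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch (assuming the rumored $150b figure above; the retained-stake levels are illustrative placeholders, not reported terms):

    # Illustrative only: what the nonprofit forgoes at various retained
    # stakes, assuming the rumored ~$150b valuation. Placeholder figures,
    # not reported deal terms.
    VALUATION = 150e9  # rumored pseudo-marketcap of the for-profit, in dollars

    for retained_stake in (1.0, 0.5, 0.25, 0.0):
        forgone = (1.0 - retained_stake) * VALUATION
        print(f"nonprofit keeps {retained_stake:.0%} -> forgoes ~${forgone / 1e9:.0f}b")

Anything below a 50% stake means walking away from more than $75b of market value, which is the number that has to be argued away.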
This is where the rhetoric and preparing the ground comes in. You can argue that OA is actually worth <<$150b, maybe even as low as $0, by saying that it has value only because it is going to keep raising enough cash to reach AGI... but the board remaining in charge + the current cap on raising capital + Altman quitting would destroy that raising ability, rendering OA worthless. (Similar to his threat last year to destroy OA by creating a competitor inside MS, with MS then immediately cutting off hardware & capital so the rump OA would starve to death.)
Hence, the board should - nay, is required by its legal & moral duties to the nonprofit - accept much less than $150b, because whatever that deal is, it's more than $0.
This is of course ridiculous and wrong, but it's not so ridiculous that a board of loyalists can't sign off on it, nor that lawyers can't defend it in court in front of a judge who wants to rule in its favor (because judges will tolerate anything in nonprofits short of gross criminality).
How this is not on the front page of HN is really beyond me.
I asked ChatGPT this simple question, and the response is really commendable and a bit hilarious:
Prompt: Since OpenAI the owner of the ChatGPT has removed non-profit control of OpenAI and gave Sam Altman equity what is the more appropriate alternative name to OpenAI now?
Answers: Given the shift in OpenAI's structure from a non-profit to a more profit-oriented model, a name like "AltmanAI" could be seen as more reflective of its current leadership and business model. Alternatively, something like "ProfitAI" or "EquityAI" could highlight the focus on profitability and equity distribution, though these might come across as blunt or satirical.
However, if the goal is to preserve a sense of innovation and artificial intelligence leadership while reflecting the new profit-driven focus, names like "InnovativeAI" or "EnterpriseAI" could also work. These names could better align with the organization's current direction while maintaining a forward-thinking brand identity.
I also find it curious if not distressing that this wasn’t pinned at #1 all day. To be clear: I’m quite sure that @dang is deeply committed to running a clean shop on moderation.
But whether it's something highly unlikely and slightly sensational, like a finger on the scales, or the far more likely and utterly banal explanation that people in vast numbers see their fates as intertwined with the status quo, the result is the same: on some deep institutional level HN is never going to hold Altman to account.
So the question becomes: what authority handles the cases the community can’t? On paper that’s regulators and legislators. Those folks’ ostensible and actual missions aren’t identical, and differ more with time, but they intersect at “prevent would-be autocrats from being so brazen as to provoke de facto revolt”.
The public doesn’t hate Big Tech generally and its sociopath fringe specifically enough to make it a true wedge issue yet, but it’s trending that way.
I’d go so far as to say that almost anyone breathing the Bay air isn’t capable of truly internalizing how deeply the general public loathes the modern Valley machine: it’s dramatically more than Wall St at any time.
It’s getting even trickier than usual to predict which historical social norms are still bright lines, but “profiting personally via using a charity as a vehicle for fraud” is still putting popular people in prison with bipartisan support.
And Altman isn’t popular even here. He’s feared here, but loved almost nowhere.
I know. When I said he’s feared here I didn’t mean by everyone, and by me never.
It sounds like you also refuse to live in fear of the goblin child of pg’s overconfidence in the joint character estimation of himself and Jessica, and of pg’s sloppy personal entanglements writ large as de facto public policy.
Almost no one thinks this is ok. A plurality if not a majority is still willing to co-sign because of various pressures.
A lot of very smart people acting in good faith had that investment in the benefit of the doubt betrayed, and for an eminently relatable reason: historically trustworthy figures stopped declaring their conflicts of interest at some point and thereby became exploiters of those who respected their judgement.
The real test is who carries on now that the lupine avarice has been utterly unmasked.
That's just a function of the HN community having a range of opinions on every topic. "Manufacture consent" is too sledgehammer a phrase for the normal process of people discussing things.
I'm a bit concerned about how this might impact their commitment to AI safety though. The non-profit structure was supposed to be a safeguard against profit-driven decision making. Will they still prioritize responsible AI development as a regular for-profit company?
> I'm a bit concerned about how this might impact their commitment to AI safety though.
Their commitment will remain unparalleled, because AI safety actually means doing whatever it takes to provide maximum return to the shareholders, no matter the social cost.
That Sam is a shifty one. Here's him 4 months ago:
“It's so deeply unimaginable to people to say I don’t really need more money... If I were to say I'm going to try and make a trillion dollars with OpenAI, it would save a lot of conspiracy theories”
And now having turned OpenAI into closed AI he's trying to give himself $10bn in equity.
This felt inevitable which is why it’s not front page everywhere. It also doesn’t help that we’re in a bit of a lull with AI. I was with friends who don’t work in tech and AI came up in conversation at dinner. The general consensus is that AI is kinda dumb but it does a great job helping everyone write nicer emails.
But I don’t think I’m being alarmist when I say that this moment, when the altruistic ideals get suddenly pushed to the side, may be the moment noted in history books before whatever this leads to happens. I don’t mean evil machines are next, but I do think it’s a cotton gin, transatlantic telegraph, light bulb, ARPANET moment. Maybe even more impactful than those. Manhattan Project? TBD I guess.
Which is why I believe we’ll regret that we didn’t move slower or put more collective stopgaps in front of the unbridled force of capitalism and the public’s goodwill. I’m not a doomsayer, but you can’t tell me something isn’t up when this much money is involved.
Yes. We used those bombs for evil beyond our comprehension.
I encourage all Americans to further research Hiroshima and Nagasaki. Our propaganda has told us our war crimes were completely justified, but a more neutral historical analysis reveals this isn't the case.
It’s so weird that the person behind Loopt is now fully in charge of this company and has perverted its initial goals so completely. I’m still not clear on what Sam has accomplished or why he was put in charge of YC or OpenAI. Also apparently chairman of Helion Energy (fusion startup). Masterclass in failing up.
I think it is the same reason rich people throw money at Adam Neumann post-WeWork. They know it is a con and want to be part of it. They see themselves as alpha wolves, too badass to be losers. Compare to people who shill as true believers in various strange crypto Ponzi schemes. They think they are in on it.
Being so good that you get put in charge of YC, and not fired by PG at all, makes you perfect to metamorphose 'OpenAI' into some dystopian big corp, as seen.
He is an effective bullshit artist with no moral restraint, and capital behind the scenes promotes and pushes him and his type, as they are the perfect match for their role as "captains" of "industry". The 'building' of these fronts is a collaborative effort, with a definite media element, in which even HN is likely included.
Once Sam Altman has consolidated all the power (think: top-10 richest person in the world and control over AGI v0) we’ll find out what he actually wants. And we will hate it.
It's probably because of the poorly-designed "flamewar detector" that censors posts if they are upvoted/downvoted too quickly. @dang explained it to me a few weeks ago[1] on another YC-related post that conveniently got scrubbed from the front page.
Currently there is another OpenAI story on the front page, about Mira Murati leaving [1], and a couple of comments there are also talking about this news of OpenAI removing non-profit control [2] [3].
I tend to agree that this is the bigger story and more worthy of being on the front page, but HN tends to enjoy a bit of celebrity gossip so not surprising to me that the news of the CTO leaving would get more traction.
I don't think it's any sort of conspiracy if that's what you're implying.
I see what you mean, but at the same time there is value in this being discussed on its own. The celebrity gossip story is currently covering up the actually important news. (Yeah, I'll go with that conspiracy theory, also known as a classic PR move.)
Probably everyone is now starting to realize that Sam Altman really is far worse than Elon, and that everyone was very late to understand this after the coup that happened nearly a year ago.
Perhaps this is what Mira, Greg and Ilya saw in Sam: his true intentions, after that coup.
This 'non-profit' / 'for-profit' complicated structure + taking capped investment won't be tried again for a very long time after these events.