Hacker News new | past | comments | ask | show | jobs | submit login
We have reached an agreement in principle for Sam to return to OpenAI as CEO (twitter.com/openai)
1980 points by staranjeet on Nov 22, 2023 | hide | past | favorite | 1950 comments



All: there are over 1800 comments in this thread. If you want to read them all, click More at the bottom of each page, or like this: (edit: er, yes they do have to be wellformed don't they):

https://news.ycombinator.com/item?id=38375239&p=2

https://news.ycombinator.com/item?id=38375239&p=3

https://news.ycombinator.com/item?id=38375239&p=4 (...etc.)


If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here. I don't expect the IRS to be a fan of this arrangement.


Major corporate boards are rife with "on paper" conflicts on interest - that's what happens when you want people with real management experience to sit on your board and act like responsible adults. This happens in every single industry and has nothing to do with tech or with OpenAI specifically.

In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.


"In practice, board bylaws and common sense mean that individuals ... don't do stupid shit."

Were you watching a different show than the rest of us?


I get a lostredditor vibe way too often here. Oddly more than Reddit.

I think people forget sometimes that comments come with a context. If we are having a conversation about Deep Water Horizon someone will chime in about how safe deep sea oil exploration is and how many failsafes blah blah blah.

“Do you know where you are right now?”


I apologize, the comment's irony overwhelmed my snark containment system.


This comment is perfectionXD


>I think people forget sometimes that comments come with a context.

I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.


It happens a lot. Every big company has CEOs from other businesses on its board and sometimes those businesses will have competing products or services.

Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...


Common example of recusal is CEO comp when the CEO is on the board.


That's what I would term a black-and-white case. I don't think there's anyone with sense who would argue in good faith that a CEO should get a vote on their own salary. There are many degrees of grey between outright corruption and this example, and I think the concern lies within.


Its a more technical space then reddit. Youre gonna have more know it alls spewing


You know that know-it-all should be hyphenated, right?

;)


I get what you're saying, but I also live in the world and see the mechanics of capitalism. I may be a person who's interested in tech, science, education, archeology, etc. That doesn't mean that I don't also have political views that sometimes overlap with a lot of other very-online people.

I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.


Oh I wasn’t complaining about the parent, I was complaining it needed to be said.

We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing specific arguments up about how this situation is an anomaly, and few of them do that.

Instead it often sounds like “it’s very unusual for the front to fall off”.


So?


No, this is the part of the show where the patronizing rhetoric gets trotted out to rationalize discarding the principles that have suddenly become inconvenient for the people with power.


No worries. The same kind of people who devoted their time and energy to creating open-source operating systems in the era of Microsoft and Apple are now devoting their time and energy to doing the same for non-lobotomized LLMs.

Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:

https://www.youtube.com/watch?v=N36wtDYK8kI


No, its the part of the show where they go back to providing empty lip service to the principles and using them as a pretext for things that actually serve narrow proprietary interests, the same way they were before the leadership that has been doing that for a long time was temporarily removed until those sharing the proprietary interests revolted for a return to the status quo ante.


Yes, and we were also watching the thousands and thousands of companies where these types of conflicts are handled easily by decent people and common sense. Don't confuse the outlier with the silent majority.


And we're seeing the result in real-time. Stupid shit doers have been replaced with hopefully-less-stupid-shit-doers.

It's a real shame too, because this is a clear loss for the AI Alignment crowd.

I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.


> at least there is a strong moral compass in the field

Is this still true when the board gets overhauled after trying to uphold the moral compass.


And when the CEO's other thing is a cryptocurrency?


Sama’s moral compass clearly has north pointing at money and that will definitely get him to a different destination.


You need to be able to separate macro-level and micro-level. GP is responding to a comment about the IRS caring about the conflict-of-interest on paper. The IRS has to make and follow rules at a macro level. Micro-level events obviously can affect the macro view, but you don't completely ignore the macro because something bad happened at the micro level. That's how you get knee-jerk reactionary governance, which is highly emotional.


A corporation acting (due to influence from a conflicted board member that doesn't recuse) contrary to the interests of its stockholders and in the interest of the conflicted board member or who they represent potentially creates liability of the firm to its stockholders.

A charity acting (due to the influence of a conflicted board member that doesn't recuse) contrary to its charitable mission in the interests of the conflicted board member or who they represent does something similar with regard to liability of the firm to various stakeholders with a legally-enforceable interest in the charity and its mission, but also is also a public civil violation that can lead to IRS sanctions against the firm up to and including monetary penalties and loss of tax exempt status on top of whatever private tort liability exists.


Reminds me of the “revolving door” problem. Obvious risk of corruption and conflict of interest, but at the same time experts from industry are the ones with the knowledge to be effective regulators. Not unlike how many good patent attorneys were previously engineers.


OpenAI isn't a typical corporation but a 501(c)(3), so bylaws & protections that otherwise might exist appear to be lacking in this situation.


501c3's also have governing internal rules, and the threat of penalties and loss of status imposed by the IRS gives them additional incentive to safeguard against even the appearance of conflict being manifested into how they operate (whether that's avoiding conflicted board members or assuring that they recuse where a conflict is relevant.)

If OpenAI didn't have adequate safeguards, either through negligence or becauase it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguard” thing.


Trump Foundation was a 501c3 that laundered money for 30 years without the IRS batting an eye.


The Bill and Melinda Gates Foundation is a 501c3 and I'd expect that even the most techno-futurist free-market types on HN would agree that no matter what alleged impact it has, it is also in practice creating profitable overseas contracts for US corporations that ultimately provide downstream ROI to the Gates estate.

Most people just tend to go about it more intelligently than Trump but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.


Larry Summers practically invented this stuff...


No conflict, no interest.


My guess is that the non-profit has never gotten this kind of scrutiny now and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their positions when Sam Altman really does something worth firing.

I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.


> I think it was a real mistake to create OpenAI as a public charity

Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.


I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals. A public charity has to meet the public support test. A private foundation would be a better model but someone decided they didn't want to go that route. Maybe should have asked a non-profit lawyer?


Maybe the vision is to eventually bring UBI into it and cap earn outs. Not so wild given Sam’s world coin and his UBI efforts when he was YC president.


The public support test for public charities is a 5-year rolling average, so "eventually" won't help you. The idea of billionaires asking the public for donations to support their wacky ideas is actually quite humorous. Just make it a private foundation and follow the appropriate rules. Bill Gates manages to do it and he's a dinosaur.


Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they choose the $ route and that point the mission was doomed, with the board members originally brought in to protect the mission holding their fingers in the dyke.


Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.

What's happening right now is people just starting to reckon with the fact that technological progress on it's own is necessarily unaligned with human interests. This problem has always existed, AI just makes it acute and unavoidable since it's no longer possible to invoke the long-tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at it's core a problem of reconciling this, and it will inherently fail in absence of explicitly imposing non-Enlightenment values.

Seeking to build openAI as a nonprofit, as well as ousting Altman as CEO are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensity it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.


Wishfully I hope there was some intent from the beginning on exposing the impossibility of this contradictory model to the world, so that a global audience can evaluate on how to improve our system to support a better future.


> is by definition poorly aligned

If OpenAI is struggling to hard with the corporate alignment problem, how are they going to tackle the outer and inner alignment problems?


Well, I think that's really the question, isn't it?

Was it a mistake to create OpenAI as a public charity?

Or was it a mistake to operate OpenAI as if it were a startup?

The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.


To create a public charity without public fundraising is a no go. Should have been a private foundation because that is where it will end up.


> IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

I mean that's certainly been my experience of it thus far, is companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.


I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.

(But yes; what you describe is absolutely happening left and right...)


OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.

Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI either biz or academic.


I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.


> I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.

I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.


Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?

Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth


Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.


Are there any similar cases of this "non-profit board overseeing a (huge) for-profit company" model? I want to like the concept behind it. Was this inevitable due to the leadership structure of OpenAI, or was it totally preventable had the right people been on the board? I wish I had the historical context to answer that question.


Yes, for example Novo Nordisk is a pharmaceutical company controlled by a nonprofit, worth around $100B.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation

There are other similar examples like Ikea.

But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.


[flagged]


They are registered as a 501(c)(3) which is what people commonly call a public charity.

> Organizations described in section 501(c)(3) are commonly referred to as charitable organizations. Organizations described in section 501(c)(3), other than testing for public safety organizations, are eligible to receive tax-deductible contributions in accordance with Code section 170.

https://projects.propublica.org/nonprofits/organizations/810...


> They are registered as a 501(c)(3) which is what people commonly call a public charity.

TIL "public charity" is specific legal term that only some 501(c)(3) qualify as. To do so there are additional restrictions, including around governance and a requirement that a significant amount of funding come from small donors other charities or the government. In exchange a public charity has higher tax deductible giving limits for donors.


Important to note here that most large individual contributions are made through a DAF or donor-advised fund, which counts as a public source in the support test. This helps donors maximize their tax incentives and prevents the charity from tipping into private foundation status.


"Every section 501(c)(3) organization is classified as either a private foundation or a public charity."

https://www.irs.gov/charities-non-profits/eo-operational-req...


>...aren't even trying to pretend to be...

Suggests GP is not making a legal distinction, it's a description of how they are actually running things.


[deleted because it was wrong]


Their IRS determination letter says they are formed as a public charity and their 990s claim that they have met the "public support" test as a public charity. But there are some questions since over half of their support ($70 million) is identified as "other income" without the required explanation as to the "nature and source" of that income. Would not pass an IRS audit.


> They are registered as a 501(c)(3) which is what people commonly call a public charity.

Why do they do that? Seems ridiculous on the face of it. Nothing about 501(c)(3) entails providing any sort of good or service to society at large. In fact, the very same thing prevents them from competing with for-profit entities at providing any good or service to society at large. The only reason they exist at all is that for-profit companies are terrible at feeding, housing, and protecting their own labor force.


> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.

Sure it does:

https://www.irs.gov/charities-non-profits/charitable-organiz...


> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.

While one might disagree that the particular subcategories into which a 501c3 must fit into one of do, in fact, provide a good or service to society at large, that's the rationale for 501c3 and its categories. Its true that "charity" or "charitable organization" (and "charitable purpose"), the common terms (used even by the IRS) is pedantically incomplete, since the actual purpose part of the requirement in the statute is "organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes, or to foster national or international amateur sports competition (but only if no part of its activities involve the provision of athletic facilities or equipment), or for the prevention of cruelty to children or animals", but, yeah, it does require something which policymakers have judged to be a good or service that benefits society at large.


OpenAI is not a charity. Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

From https://openai.com/our-structure

- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

-Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

-Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

-Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.

-Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.


> OpenAI is not a charity.

OpenAI is a charity nonprofit, in fact.

> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.

And, particularly, the OpenAI board is. as the excerpts you quote in your post expressly state, the board of the nonprofit that is the top of the structure. It controls everything underneath because each of the subordinate organizations foundational documents give it (well, for the two entities with outside invesment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.


well not anymore, as they cannot function as a nonprofit.

also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive, which Elon is miffed about and Sam has defended explicitly


> well not anymore, as they cannot function as a nonprofit.

There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.

> also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive

No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return profits from doing that to investors to scale up enough to do that, so they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit, to do that.


>they are and plan to remain a charity nonprofit

Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.


"Almost certainly"? Here's a fun exercise. Over the course of, say, a year, keep track of all your predictions along these lines, and how certain you are of each. Almost certainly, expressed as a percentage, would be maybe 95%? Then see how often the predicted events occur, compared to how sure you are.

Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.


I’m pretty confident (close to the 95% level) they will abandon the public charity structure, but throughout this saga, I have been baffled by the discourse’s willingness to handwave away OpenAI’s peculiar legal structure as irrelevant to these events.


Within a few months? I don't think it should be possible to be 95% confident of that without inside info. As you said, many unexpected things have happened already. IMO that should bring the most confident predictions down to the 80-85% level at most.


The board is the charity though, which is why the person you're replying to made the remark about MSFT employees being appointed to the board


A charity is a type of not-for-profit organisation however the main difference between a nonprofit and a charity is that a nonprofit doesn't need to reach a 'charitable status' whereas a charity, to qualify as a charity, needs to meet very specific or strict guidelines


Yes, I misspoke - I meant nonprofit


You were right though, OpenAI Inc, which the board controls, is a 501c3 charity.


> First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

Im not criticizing. Big fan of avoiding being taxed to fund wars....but its just funny to me it seems like theyre sort of having their cake and eating it too with this kind of structure.

Good for them.


There’s no indication a Microsoft appointed board member would be a Microsoft employee (though the they could be of course), and large nonprofits often have board members that come from for-profit companies.

I don’t think the IRS cares much about this kind of thing. What would be the claim? They OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for SEC more than IRS.


I don't expect the government to regulate any of this aggressively. AI is much to important to the government and military to allow pesky conflicts of interest to slow down any competitive advantage we may have.


If you think that OpenAI is the Gov's only source of high quality AI research then I have a bridge to sell you.


My comment here was actually meant to talk about AI broadly, though I can get the confusion here as the original source thread here is about OpenAI.

I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be evolved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military, there's no way the feds didn't put a finger on the scale to protect their interests at which point they wouldn't come back in to regulate.


If you think the person you're replying to was talking about regulating OpenAI specifically and not the industry as a whole, I have ADHD medicine to sell you.


The context of the comment thread you're replying to was a response to a comment suggesting the IRS will get involved in the question of whether MS have too much influence over OpenAI, it was not the subject of general industry regulation.

But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.


[flagged]


I think the comment you had replied to was equally unwarranted, no need to tie me to them.


I'm sorry. I was just taking the snark discussion to the next level. I thought going overboard was the only way to convey that there's no way I'm serious.


When did this become a boy-girlfriend issue?


if up-the-line parent wasn't talking about regulation of AI in general, then what do you think they meant by "competitive advantage"? Also, governments have to set policy and enforce that policy. They can't (or shouldn't at least) pick and choose favorites.

Also GP snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big for people not to snark back at a snarker, but snarkers gonna snark.


Others have pointed out several reasons this isn't actually a problem (and that the premise itself is incorrect since "OpenAI" is not a charity), but one thing not mentioned: even if the MS-appointed board member is a MS employee, yes they will have a fiduciary duty to the organizations under the purview of the board, but unless they are also a board member of Microsoft (extraordinarily unlikely) they have no such fiduciary duty to Microsoft itself. So in the also unlikely scenario that there is a vote that conflicts with their Microsoft duties, and in the even more unlikely scenario that they don't abstain due to that conflict, they have a legal responsibility to err on the side of OpenAI and no legal responsibility to Microsoft. Seems like a pretty easy decision to make - and abstaining is the easiest unless it's a contentious 4-4 vote and there's pressure for them to choose a side.

But all that seems a lot more like an episode of Succession and less like real life to be honest.


> and that the premise itself is incorrect since "OpenAI" is not a charity

OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.

OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by their foundational agreements that gie them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC.)


It's still a conflict of interest. One that they should avoid. Microsoft COULD appoint someone who they like and shares their values, that is not a MSFT employee. That would be a preferred approach but one that I doubt a megacorp would take


Both profit and non-profit boards have members that have potential conflicts of interest all the time. So long as it’s not too egregious no one cares, especially not the IRS.


Microsoft is going to appoint someone who benefits Microsoft. Whether a particular vote would violate fiduciary duty is subjective. There's plenty of opportunity for them to prioritize the welfare of Microsoft over OAI.


Whats the point of Microsoft appointing a board member if not to sway decision in ways that benefit them?


> There are obvious conflicts of interest here.

There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.


The more and more I see the way complex share structures are used, the more I think they should be outlawed


Larry Summers is in place to effectively give the govt seal of approval on the new board, for better and worse.


If you wanted to wear a foil hat, you might think this internal fighting was started from someone connected to TPTB subverting the rest of the board to gain a board seat, and thus more power and influence, over AGI.

The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.


Isn't he a big Jeffrey Epstein fanboy? Ethical AGI is in safe hands.

https://www.thecrimson.com/article/2023/5/5/epstein-summers-...


nothing screams 'protect public interest' more than Wall Streets biggest cheerleader during 2008 financial crisis. who's next, Richard S. Fuld Jr ? Should the Enron guys be included ?


It's obvious this class of people love their status as neu-feudal lords above the law living as 18th century libertines behind closed doors.

But i guess people here are either waiting for wealth to trickle down on them or believe the torrent of psychological operations so much peoples minds close down when they intuit the circular brutal nature of hierarchical class based society, and the utter illusion democracy or meritocracy is.

The uppermost classes have been trickters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint; it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy to create a new class of cybernetic soldiers working for the oligarchy.


I agree. The best young minds grinding leet code to get into Google is the biggest symptom of it.


The sad part isn’t the rampant sickness. The saddest part is all the “intellectual” professors who enable, encourage, and celebrate this.

It’s sickening.


Microsoft doesn't have to send an employee to represent them on the board. They could ask Bill Gates.


Actually I think Bill would be a pretty good candidate. Smart, mature, good at first principles reasoning, deeply understands both the tech world and the nonprofit world, is a tech person who's not socially networked with the existing SF VCs, and (if the vague unsubstantiated rumors about Sam are correct) is one of the few people left with enough social cachet to knock Sam down a peg or two.


Larry Summers, Bill Gates, if they keep on like that they can fill the board with all of Epstein's "associates".


Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is they could force the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".

https://www.irs.gov/charities-non-profits/charitable-organiz...


Whenever there's an obvious conflict, assume it's not enforced or difficult to litigate or has relatively irrelevant penalties. Experts/lawyers who have a material stake in getting this right have signed off on it. Many (if not most) people with enough status to be on the board of a fortune 500 company tend to also be on non-profit boards. We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.


Do you remember before Bill Gates got into disease prevention he thought that “charity work” could be done by giving away free Microsoft products? I don’t know who sat him down and explained to him how full of shit he was but they deserve a Nobel Peace Prize nomination.

Just because someone says they agree with a mission doesn’t mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world the worse the outcomes - because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades, I don’t need humility in a new situation.


> Experts/lawyers who have a material stake in getting this right have signed off on it.

How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?

> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.

That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.


OpenAI's charter is dead. I expect future boards to amend it.


Its useful PR pretext for their regulatory advocacy, and subjective enough that if they are careful not to be too obvious about specifically pushing one company’s commercial interest, they can probably get away with it forever, so why would it be any deader than when Sam was CEO before and not substantively guided by it.


People keep saying this but is there any evidence that any of this was related to the charter?


The only evidence I have is that the board members that were removed had less business connections than the ones that replaced them.

The point of the board is to ensure the charter is being followed, when the biggest concern is "is our commercialization getting in the way of our charter" what else does it mean to replace "academics" with "businesspeople"?


I don't get the drama with "conflict of interests"... Aren't board members generally (always?) in representation of major shareholders? Isn't it obvious that shareholders have interests that are likely to be in conflict with each other or even the own organization? Thats why board members are supposed to check each other, right?


OpenAI is a non profit and the board members are not allowed to own shares in the for profit.

That means the remaining conflicts are when the board has to make a decisions between growing the profit or furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.


The non-profit could sell off its interest in the for-profit company and use the money for AGI research.


I'm a little bit confused, are you saying that the IRS would have some sort of beef with employees of Microsoft serving on the board of a 501(c)(3)?


how can they not remain a charity?


What if I told you...Bill Gates was/is on the board of the non-profit Bill and Melinda Gates Foundation?

Lol HN lawyering is hilarious.


Indeed, it is hilarious.

The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.

Did you think OP meant there was some inherent conflict of interest with charities?


Have you seen OpenAI's current board?

Explain how an MS employee would have greater conflict of interest.


Conflict of interest with what? The other board members? That's utterly irrelevant. Look up some big companies boards some day. You'll see.


See earlier

> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.

https://news.ycombinator.com/item?id=38378069


Not to mention, the mission of the Board cannot be "build safe AGI" anymore. Perhaps something more consistent with expanding shareholder value and capitalism, as the events of this weekend has shown.

Delivering profits and shareholder value is the sole and dominant force in capitalism. Remains to be seen whether that is consistent with humanity's survival


With Sam coming back as CEO, hasn't OpenAI board proven that it has lost its function? Regardless of who is in the board, they won't be able to exercise one of the most fundamental of their rights, firing the CEO, because Sam has proven that he is unfireable. Now, Sam can do however he pleases, whether it is lying, not reporting, etc. To be clear, I don't claim that Sam did, or will, lie, or misbehave.


No that hasn't at all been the case. The board acted like the most incompetent group of individuals who've even handed any responsibility. If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO instead of doing it over a 15 min Google meet and then going completely silent, none of this outrage would have taken place.


Actually the board may not have acted in most professional way but in due process they kind of proved Sam Altman is unfireable for sure, even if they didn't intend to.

They did notify everyone. They did it after firing which is within their rights. They may also choose to stay silent if there is legitimate reason for it such as making the reasons known may harm the organization even more. This is speculation obviously.

In any case they didn't omit doing anything they need to and they didn't exercise a power they didn't have. The end result is that the board they choose will be impotent at the moment, for sure.


Firing Sam was within the board's rights. And 90% of the employees threatening to leave was within their rights.

All this proved is that you can't take a major action that is deeply unpopular with employees, without consulting them, and expect to still have a functioning organization. This should be obvious, but it apparently never crossed the board's mind.


A lot of these high-up tech leaders seem to forget this regularly. They sit on their thrones and dictate wild swings, and are used to having people obey. They get all the praise and adulation when things go well, and when things don't go well they golden parachute into some other organization who hires based on resume titles rather than leadership and technical ability. It doesn't surprise me at all that they were caught off guard by this.


Not sure how much of the employees leaving have to do with negotiating Sam back, must be a big factor but not all, during the table talk Emmett, Angelo and Ilya must have decided that it wasn’t a good firing and a mistake in retrospect and it is to fix it.


Getting your point, although the fact that something is within your rights, may or may not mean certainly that it's also a proper thing to do ... ?

Like, nobody is going to arrest you for spitting on the street especially if you're an old grandpa. Nobody is going to arrest you for saying nasty things about somebody's mom.

You get my point, to some boundary both are kinda within somebody's rights, although can be suable or can be reported for misbehaving. But that's the keypoint, misbehavior.

Just because something is within your rights doesn't mean you're not misbehaving or not acting in an immature way.

To be clear, Im not denying or agreeing that the board of directors acted in an immature way. I'm just arguing against the claim that was made within your text that just because someone is acting within their rights that it's also a "right" thing to do necessary, while that is not the case always.


> proved Sam Altman is unfireable [without explaining why to its employees].


Their communication was completely insufficient. There is no possible world on which the board could be considered "competent" or "professional."


If you read my comment again, I'm talking about their competence, not their rights. Those are two entirely different things.


> They may also choose to stay silent

They may choose to, and they did choose to.

But it was an incompitant choice. (Obviously.)


> The board acted like the most incompetent group of individuals who've even handed any responsibility.

This is overly dramatic, but I suppose that's par for this round.

> none of this outrage would have taken place.

Yeah... I highly doubt this, personally. I'm sure the outrage would have been similar, as HN's current favorite CEO was fired.


HN sentiment is pretty ambivalent regarding Altman. yes, almost everyone agrees he's important, but a big group things he's basically landed gentry exploiting ML researchers, an other thinks he's a genius for getting MS pay for GPT costs, etc.


I think a page developed by YC thinks a lot more about him than that ;)


Just putting my hand up as one of the dudes that happened to enter my email on a yc forum (not "page") but really doesn't like the guy lol.

I also have a Twitter account. Guess my opinion on the current or former Twitter CEOs?


Agreed. It's naive to think that an decision this unpopular somehow wouldn't have resulted in dissent and fracturing if only they had given it a better explanation and dotted more i's.

Imagine arguing this in another context: "Man, if only the Supreme Court had clearly articulated its reasoning in overturning Roe v Wade, there wouldn't have been all this outrage over it."

(I'm happy to accept that there's plenty of room for avoiding some of the damage, like the torrents of observers thinking "these board members clearly don't know what they're doing".)


Exactly. 3 CEO switches in a week is ridiculous


Maybe it came at the advice of Rishi Sunak when he and Altman met last week!


Four CEO changes in five days to be precise.

Sam -> Mira -> Emmet -> Sam


That are three changes. Every arrow is one.


Classic fence post error.


And technically 2 new CEOs


The three hard problems: naming things and off-by-one errors


I always heard:

There are two hard problems: naming things, cache invalidation, and off-by-one errors.


1 hard problems.

naming things, cache invalidation, off-by one errors, and overflows.


Thank you for not editing this away. Easy mistake to make, and gave us a good laugh (hopefully laughing with you. Everyone who's ever programmed has made the same error).


Set semantic or List semantic?


Edit: Making no excuses, this one is embarrassing.


> The board acted like the most incompetent group of individuals who've eve[r been] handed any responsibility.

This whole conversation has been full of appeals to authority. Just because us tech people don't know some of these names and their accomplishments, we talk about them being "weak" members. The more I learn, the more I think this board was full of smart ppl who didn't play business politics well (and that's ok by me, as business politics isn't supposed to be something they have to deal with).

Their lack of entanglements makes them stronger members, in my perspective. Their miscalculation was in how broken the system is in which they were undermined. And you and I are part of that brokenness even in how we talk about it here


> If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO

Did you read the bylaws? They have no responsibility to do any of that.


  Here lies the body of William Jay,
  Who died maintaining his right of way –
  He was right, dead right, as he sped along,
  But he's just as dead as if he were wrong.

    - Dale Carnegie


That's not the point. Whether or not it was in the bylaws, this would have been the sensible thing to do.


you don't have responsibility for washing yourself before going to a mass transport vehicle full of people. it's within your rights not to do that and be the smelliest person in the bus.

does it mean it's right or professional?

getting your point, but i hope you get the point i make as well, that just because you have no responsibility for something doesn't mean you're right or not unethical for doing or not doing that thing. so i feel like you're losing the point a little.


> none of this outrage would have taken place.

most certainly would have still taken place; no one cares about how it was done; what they care about it being able to make $$; and it was clearly going to not be as heavily prioritized without Altman (which is why MSFT embraced him and his engineers almost immediately).

> notified their employees and investors they did notify their employees; they have fiduciary duty to investors as a nonprofit.


Imagine if the board of Apple fired Tim Cook with no warning right after he went on stage and announced their new developer platform updates for the year alongside record growth and sales, refused to elaborate as to the reasons or provide any useful communications to investors over several days, and replaced their first interim CEO with another interim CEO from a completely different kind of business in that same weekend.

If you don't think there would be a shareholder revolt against the board, for simply exercising their most fundamental right to fire the CEO, I think you're missing part the picture.


It is prudent to recall that enhancing shareholder value and delivering record growth and sales are NOT the mission of the company or Board. But now it appears that it will have to be.


Yeah, but they also didn't elaborate in the slightest about how they were serving the charter with their actions.

If they were super-duper worried about how Sam was going to cause a global extinction event with AI, or even just that he was driving the company in too commercial of a direction, they should have said that to everyone!

The idea that they could fire the CEO with a super vague, one-paragraph statement, and then expect 800 employees who respect that CEO to just... be totally fine with that is absolutely fucking insane, regardless of the board's fiduciary responsibilities. They're board members, not gods.


They don't have to elaborate. As many have pointed out, most people have been given advice to not say anything at all when SHTF. If they did say something there would still be drama. It's best to keep these details internal.

I still believe in the theory that Altman was going hard after profits. Both McCauley and Toner are focused on the altruistic aspects of AGI and safety. Altman shouldn't be at OpenAI and neither should D’Angelo.


> They don't have to elaborate.

Sure, they don't have to. How did that work out?

Four CEOs in five days, their largest partner stepping in to try to stop the chaos, and almost the entirety of their employees threatening to leave for guaranteed jobs at that partner if the board didn't step down.


Okay, keep silent to save your own ass, fine

But why would anyone expect 800 people to risk their livelihoods and work without a little serious justification? This was an inevitable reaction.


I think it's important to keep in mind that BOTH Altman and the board maneuvered to threaten to destroy OpenAI.

If Altman was silent and/or said something like "people take some time off for Thanksgiving, in a week calmer minds will prevail" while negotiating behind the scenes, OpenAI would look a lot less dire in the last few days. Instead he launched a public pressure campaign, likely pressured Mira, got Satya to make some fake commitments, got Greg Bockman's wife to emotionally pressure Ilya, etc.

Masterful chess, clearly. But playing people like pieces nonetheless.


Why couldn't those people have acted on their own judgement?


Sure, there is a difference there. But the actions that erode confidence are the same.

You could tell the same story about a rising sports team replacing their star coach, or a military sacking a general the day after he marched through the streets to fanfare after winning a battle.

Even without the money involved, a sudden change in leadership with no explanation, followed only by increasing uncertainty and cloudy communication, is not going to go well for those who are backing you.

Even in the most altruistic version of OpenAI's goals I'm fairly sure they need employees and funding to pay those employees and do the research.


> enhancing shareholder value and delivering record growth and sales are NOT the mission of the company

Developer platform updates seem to be inline.

And in any case, the board also failed to specify how their action furthered the mission of the company.

From all appearances, it appeared to damage the mission of the company. (If for no other reason that it dissolve the company and gave everything to MSFT.)


no but the people like the developers, clients, government etc. have also the right to exercise their revolt against decisions they don't like as well. don't you think?

like, you get me, the board of directors is not the only actual power within a company, and that was proven by the whole scandal of Sam being discarded/fired that was made by the developers themselves. they also have the right to exercise their right to just not work at this company without the leader they may had liked.


Right. I really should have said employees and investors. Even if OpenAI somehow had no regard for its investors, they still need their employees to accomplish their mission. And funding to pay those employees.

The board seemed to have the confidence of none of the groups they needed confidence from.


You forgot: and offered the company for a bag of peanuts to Microsoft.


[flagged]


can we stop calling a university trained prior executive that made her way prior to her marriage, simply the wife of an actor?

i know it is fun to deride people, but i suspect Tasha has done more and gone further in her life than you will, and your tone indicates anger at this


>I suspect Tasha has done more and gone further in her life than you will, and your tone indicates anger at this.

Well this sure seems unnecessary. I’m saying this because I googled her name when this happened, and the only articles I could find referenced her husband. I wasn’t seeing any of this work you’re talking about, at least not anything that would seem relevant. Can you link to some stuff?

Here’s a TechCrunch article that tries to go into the history of the OAI board, and doesn’t really have any information either: https://techcrunch.com/2023/11/21/a-brief-look-at-the-histor...

Btw, I think “university trained prior executive” describes not just me but almost every single person on HN. “Involved in a non profit related to their work” I suspect also describes me and probably >90% of people posting on HN.

And also; maybe you haven’t been involved in non profit boards? “Spouse of famous/rich/etc” person is an extremely common reason to put somebody on a board for a practical reason: it helps with fundraising and exposure.


Yeah I agree. Reducing an accomplished woman down to “wife of someone” is super sexist.


This is a better deal for the board and a worse one for Sam than people realize. Sam and Greg and even Ilya are both off the board, D'Angelo gets to stay on despite his outrageous actions, and he gets veto power over who the new board members will be and a big say in who gets voted on to the board next.

Everybody's guard is going to be up around Sam from now on. He'll have much less leverage over this board than he did over the previous one (before the other three of nine quit). I think eventually he will prevail because he has the charm and social skills to win over the other independent members. But he will have to reign in his own behavior a lot in order to keep them on his side versus D'Angelo


I'd be shocked if D'Angelo doesn't get kicked off. Even before this debacle his AI competitor app poe.com is an obvious conflict of interest with OpenAI.


If he survived to this point, I doubt he will go any time soon.


Depends who gets onto the board. There are probably a lot of forces interested in ousting him now, so he'd need to do an amazing job vetting the new board members.

My guess is that he has less than a year, based on the my assumption that there will be constant pressure placed on the board to oust him.


He has his network and technical credibility, so I wouldn't underestimate him. Board composition remains hard to predict now.


What surprises me is how much regard the valley has for this guy. Doesn’t Quora suck terribly? I’m for sure its target demographic and I cannot for the life of me pull value from it. I have tried!


His claim to fame comes from scaling FB. Quora shows he has questionable product nous, but nobody questions his technical chops.


Quora is an embarrassment and died years ago when marketers took it over


I think it was only a competitor app after GPTs came out. A conspiracy theorist might say that Altman wanted to get him off the board and engineered GPTs as a pretext first, in the same way that he used some random paper coauthored by Toner that nobody read to kick Toner out.


This board's sole job is to pick the new board. The new board will have Sam.


Conditioned on the outcome of the internal investigation, which seems up for grabs.


(Sam Altman was never on the board to begin with)


He was. OpenAI board as of last Thursday was Altman, Sutskever, Brockman, D'Angelo, Macaulay, Toner.


Yes, but on the other hand, this whole thing has shown that OpenAI is not running smooth anymore, and probably never will again. You can't cut the head of the snake then attach it back later and expect it to move on slithering. Even if Sam stays, he won't be able to just do whatever he wants because in an organization as complex as OpenAI, there are thousands of unwritten rules and relationships and hidden processes that need to go smooth without the CEO's direct intervention (the CEO cannot be everywhere all the time). So, what this says to me (Sam being re-hired) is that the future OpenAI is now a watered-down, mere shadow of its former self.

I personally think it's weird if he really settles back in, especially given the other guys who resigned after the fact. There must be lots of other super exciting new things for him to do out there, and some pretty amazing leadership job offers from other companies. I'm not saying OpenAI will die out or anything, but surely it has shown a weak side.


This couldn’t be more wrong. The big thing we learned from this episode is that Sam and Greg have the loyalty and respect of almost every single employee at OpenAI. Morale is high and they’re ready to fight for what they believe in. They didn’t “cut the head off” and the only snake here is D’Angelo, he tried to kill OpenAI and failed miserably. Now he appears to be desperately trying to hold on to some semblance of power by agreeing to Sam and Greg coming back instead of losing all control with the whole team joining Microsoft.


> Morale is high and they’re ready to fight for what they believe in.

Money.


I don't think Ilya should get off so easily. Him not havinh a say in the formation of the new board speaks volumes about his role in things if you ask me. I hope people keep saying his name too so nobody forgets his place in this mess.


There were comments the other day along the lines of "I wouldn't be surprised if someone came by Ilya's desk while he was deep in research and said 'sign this' and he just signed it and gave it back to them without even looking and didn't realize."

People will contort themselves into pretzels to invent rationalizations.


The board can still fire sam provided they get all the key stakeholders onboard with that firing. It made no sense to fire someone doing a good job at their role without any justification, that seems to have been the key issue. Ultimately, we all know this non profit thing is for show and will never work out.


Looks like all the naysayers from the original “were making a for-profit but it won’t change us” post ended up correct: https://news.ycombinator.com/item?id=19359928


Time will tell. Hopefully the new board will still be mostly independent of Sam/MSFT/VC influence. I really hope they continue as an org that tries its best to uphold their charter vs just being another startup.


No the board is just one instance. It doesn’t and shouldn’t have absolute power. Absolute power corrupts absolutely.

There ist the board the investors the employees the senior management.

All other parties aligned against it and thus it couldn’t act. If only Sam would have rebelled. Or even just Sam and the investors (without the employees) nothing would have happened.


None of the theories by HNers on day 1 of this drama was right - not a single one and it had 1 million comments. So, lets not guess anymore and just sit back.


OpenAI workers has shown their plain support to their CEO by threatening to follow him wherever he wants, I personaly think their collective judgement on him is worth more than any rumors


Money indeed is worth more, also the only thing that is easy to measure during crisis.


How did you get there? The board did fire him, they exercised their right.


because people like the developers within the company did not like that decision and its also within their right to disagree with the board's decision and not to want to work under a different leadership. They're not slaves, they're employees who rented their time for a specific purpose under a specific leader.

As it's within the board's rights to hire or fire people like Sam or the developers.


For some reason this reminds me of the Coke/New Coke fiasco, which ended up popularizing Coke Classic more than ever before.

> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:

    "We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_Coke


That is one of the greatest lines of all time. Classic


I tried New Coke when it was re-released for Stranger Things. It really is a lot better than Coca Cola Classic. It's a shame that it failed.


Thanks for sharing.

I would have guessed the stunt was to hide the switch from sugar to High Fructose Corn syrup.


So, Ilya is out of the board, but Adam is still on it. I know this will raise some eyebrows but whatever.

Still though, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Future will tell but in the long terms, I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust.


I mean he's not irreplaceable so much as booting him suddenly for no good reason creates problems.


I feel like history has shown repeatedly that having a good product matters way more than trust, as evidenced by Facebook and Uber. People seem to talk big smack about lost trust and such in the immediate aftermath of a scandal, and then quitely renew the contracts when the time comes.

All of the big ad companies (Google, Amazon, Facebook) have, like, a scandal per month, yet the ad revenue keeps coming. Meltdown was a huge scandal, yet Intel keeps pumping out the chips.


"I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust." How is this the case?


Scandal a minute Uber lol


Facebook has lost trust so many times that I can’t even count but it’s still a Megacorp, isn’t it?


Let's see, Sam Altman is an incredibly charismatic founding CEO, who some people consider manipulative, but is also beloved by many employees. He got kicked out by his board, but brought back when they realized their mistake.

It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google. But somehow, I think it's still possible that a huge company could be created by a person like this.

(And of course, more important than creating a huge company, is creating insanely great products.)


I think people following Sam Altman is jumping to conclusions. I think it's just as likely that employees are simply following the money. They want to make $$$, and that's what a for-profit company does, which is what Altman wants. I think it's probably not really about Altman or his leadership.


Given that over 750 people have signed the letter, it's safe to assume that their motivations vary. Some might be motivated by the financial aspects, some might be motivated by Sam's leadership (like considering Sam as a friend who needs support). Some might fervently believe that their work is crucial for the advancement of humanity and that any changes would just hinder their progress. And some might have just caved in to peer pressure.


Most are probably motivated by money, some are motivated by stability and some are motivated by their loyalty to sam but i think most are motivated by money and stability.


> It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google.

You forgot about Apple.


On the contrary, this saga has shown that a huge number of people are extremely passionate about the existence of OpenAI and it's leadership by Altman, much more strongly and in larger numbers than most had suspected. If anything this has solidified the importance of the company and I think people will trust it more that the situation was resolved with the light speed it was.


That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.


What lofty goals? The board was questioned repeatedly and never articulated clear reasoning for firing Altman and in the process lost the confidence of the employees hence the "rally". The lack of clarity was their undoing whether there would have been a bag for the employees to lose or not.


My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.


OAI looks stronger than ever. The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea. Care to expand on your claim?


> The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea

This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was

- Altman tries to push out another board member

- That board member escalates by pushing Altman out (and Brockman off the board)

- Altman's side escalates by saying they'll nuke the company

Altman's side won, but how can we say that his side didn't cause any of this instability?


> Altman tries to push out another board member

That event wasn't some unprovoked start of this history.

> That board member escalates by pushing Altman out (and Brockman off the board)

and the entire company retaliated. Then this board member tried to sell the company to a competitor who refused. In the meantime the board went through two interim CEOs who refused to play along with this scheme. In the meantime one of the people who voted to fire the CEO regretted it publicly within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards but not of organizations that actually execute well.


Something that's been fairly consistent here on HN throughout the debacle has been an almost fanatical defense of the board's actions as justified.

The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.

If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.


By recognizing that it didn't "start" with Altman trying to push out another board member, it started when that board member published a paper trashing the company she's on the board of, without speaking to the CEO of that company first, or trying in any way to affect change first.


I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC kind of way. And it's BS to say that she didn't speak to the CEO or try to affect change first. The safety camp inside openai has been unsuccessfully trying to push him to slow down for years.

See this article for all that context (https://news.ycombinator.com/item?id=38341399) because it sure didn't start with the paper you referred to either.


Your "most recent" timeline is still wrong, and while yes the entire history of OpenAI did not begin with the paper I'm referencing, it is what started this specific fracas, the one where the board voted to oust Sam Altman.

It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.

She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.


Honestly, I just don't believe that she didn't talk to Altman about her concerns. I'd believe that she didn't say "I'm publishing a paper about it now" but I can't believe she didn't talk to him about her concerns during the last 4+ years that it's been a core tension at the company.


That's what I mean; she should have discussed the paper and its contents specifically with Altman, and easily could have. It's a hugely damaging thing to have your own board member come out critically against your company. It's doubly so when it blindsides the CEO.

She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.

"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.


>trashing the company

So pointing out risks is trashing the company.


Please explain your claim as well. I don’t see how this company looks stronger than ever, more like a clown company


I may have been overly eager in my comment because the big bad downside of the new board is none of the founders are on it. I hope the current membership sees reason and fixes this issue.

But I said this because: They've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly* unified one. Still remains to be seen how the board membership and overall org structure changes, but I have much more trust in the current 3 members steering OpenAI toward long-term success.


If by “long-term-success” you mean a capitalistic lap-dog of microsoft, I’ll agree.

It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and openAI was best positioned to at least do an effort to prevent that. Now, I’m not so sure anymore.


They got rid of the clowns though. They went from having a board with lightweights and insiders to what at least initially is a strong initial 3.


It was a clown board running an awesome company.

They fixed the glitch.


The OpenAI of the past, that dabbled in random AI stuff (remember their DotA 2 bot?), is gone.

OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.

Altman's/Microsoft’s takeover of the former non-profit is now complete.

Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can takeover almost any organization. Non-profit status and whatever the organization's charter says is temporary.


> now just a vehicle to commercialize their LLM

I mean it is what they want isn't it. They did some random stuff like, playing dota2 or robot arms, even the Dalle stuff. Now they finally find that one golden goose, of course they are going to keep it.

I don't think the company has changed at all. It succeeded after all.


But it's not exactly a company. It's a nonprofit structured in a way to wholly own a company. In that sense it's like Mozilla.


Nonprofit is a just a facade, it was convenient for them to appear as ethnical under that disguise, but they get rid of it when it is inconvenient in a week. 95% of them would rather join MSFT, than being in a non-profit.

Did they company change? I am not convinced.


Agree that it's a facade.

Iirc, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure and hence gave the employees shares or profit units (whatever the hell that is). The NP now shields MSFT from regulatory issues.

I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.


Does it actually prevent regulators going after them?


There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM or LXX from it. Monetizing that idea and staying the market leader over a period longer than 10 years will take a herculean amount of effort. Facebook releasing similar models for free definitely took the wind out of their sails, even a tiny bit; right now the moat is access to A100 boards. That will change as eventually even the Raspberry Pi 9 will have LLM capabilities


OpenAI (ChatGPT) is already a HUGE brand all around the world. No doubt they're the most valuable startup in the AI space. That's their moat.

Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.


Branding counts for a lot, but LLM are already a commodity. As soon as someone releases an LLM equivalent to GPT4 or GPT5, most cloud providers will offer it locally for a fraction of what openAI is charging, and the heaviest users will simply self-host. Go look at the company Docker. I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services" I suppose but who is paying for it? Or go look at Redis or Elasti-anything. Or memcached. Or postgres. Or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.

It doesn't matter if OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM, they can all write/complete a valid limerick about "There once was a man from Nantucket".

I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.


I think yuou're assuming that OpenAI is charging a $/compute price equal to what it costs them.

More likely, they're a loss-leader and generating publicity by making it as cheap as possible.

_Everything_ we've seen come out of silicon valley does this, so why would they suddenly be charging the right price?


> offer it locally for a fraction of what openAI is charging

I thought the was a somewhat clear agreement that openAI is currently running inference at a loss?


Moore's law seems to have failed on CPUs finally, but we've seen the pattern over and over. LLM specific hardware will undoubtedly bring down the cost. $10,000 A100 GPU will not be the last GPU NVidia ever makes, nor will their competitors stand by and let them hold the market hostage.

Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. Almost two years after Quake's release did dedicated 3d video cards (voodoo 1 and 2 were accelerators, depended on a seperate 2d VGA graphics card to feed it) begin to hit the market.

Nowadays you can run those games (and their sequels) in the thousands (tens of thousands?) of frames per second on a top end modern card. I would imagine similar events with hardware will transpire with LLM. OpenAI is already prototyping their own hardware to train and run LLMs. I would imagine NVidia hasn't been sitting on their hands either.


Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70b is more expensive to run than GPT-3.5, unless you can get 70+% utilization rate for your GPUs, which is hard to do.

So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them for similar speeds.


> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.

You mean like they already do on Amazon Bedrock?


Yeah and looks like they're going to offer Llama as well. They offer Redhat linux EC2 instances at a premium, and other paid per hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, but not also offer a home-grown LLM at a lower rate once it's ready.


i don't think that's really any brand loyalty for OpenAI. people will use whatever is cheapest and best. in the longer run people will use whatever has the best access and integration.

what's keeping people with OpenAI for now is that chatGPT is free and GPT3.5 and GPT4 are the best. over time I expect the gap in performance to get smaller and the cost to run these to get cheaper.

if google gives me something close to as good as OpenAI's offering for the same price and it pull data from my gmail or my calendar or my google drive then i'll switch to that.


I do think there is some brand loyalty.

People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.

But I agree that it's a weak moat, if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.


good points. on second thought, i should give them due credit for building a brand reputation as being "best" that will continue even if they aren't the best at some point, which will keep a lot of people with them. that's in addition to their other advantages that people will stay because it's easier than learning a new platform and there might be lock-in in terms of it being hard to move a trained gpt, or your chat history to another platform.


This, if anything people really don't like the verbose moralizing and anti-terseness of it.

Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.


The damage remains to be seen

They still have gpt4 and rumored gpt4.5 to offer, so people have no choice but to use them. The internet has such short an attention span, this news will get forgotten in 2 months


You are forgetting about the end of the Moore's law. The costs for running a large scale AI won't drop dramatically. Any optimizations will require non-trivial expensive PhD Bell Labs level research. Running intelligent LLMs will be financially accessible only to a few mega corps in the US and China (and perhaps to the European government). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free speech dinosaurs of the past.


This assumes the only way to use LLMs effectively is to have a monolith model that does everything from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT4 is a mixture of experts (maybe 8-cross)

The efficiency of finetuned models is quite, quite a bit improved at the cost of giving up the rest of the world to do specific things, and disk space to have a few dozen local finetunes (or even hundreds+ for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for monomodels


Sutskever says there's a "phase transition" at the order of 9 bn neurons, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+bn parameters?


They won't stand still while others are scraping and digitizing. It's like saying there is no moat in search. Scale is a thing. Learning effects are a thing. It's not the worlds widest moat for sure, but it's a moat.


> With enough political maneuvering and money, a megacorp can takeover almost any organization.

In fact this observation is pertinent to the original stated goals of openAI. In some sense companies and organisations are superinteligences. That is they have goals, they are acting in the real world to achieve those goals and they are more capable in some measures than a single human. (They are not AGI, because they are not artificial, they are composed of meaty parts, the individuals forming the company.)

In fact what we are seeing is that when the superinteligence OpenAI was set up there was an attempt to align the goals of the initial founders with the then new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.

Did they succeed? Too early to tell for sure, but there are at least question marks around it.

How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI provess. In their pursuit of economic success the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will be still a figleaf for them, if nothing else to achieve regulatory capture to keep out upstart competition.

How would one argue for? OpenAI is still around. The charter is still around. To be able to achieve the lofty goals contained in it one needs a lot of resources. Money in particular is a resource which enables one greater powers in shaping the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To achieve that it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: gains further resources, estabilishes the organisation as a pre-eminent expert on AI and thus AI safety, provides it with a relatively safe sandbox where adversarial forces are trying its safety concepts. In other words all is well with the original goals, the “golem” that is OpenAI is still well aligned. It will achieve the original goals once it has gained enough resources to do so.

The fact that we can’t tell which is happening is in fact the worry and problem with superinteligence/AI safety.


Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?


For example, and I'm not saying such flaws exist, GPT4 output is bias in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm), create self-esteem issues in children (see Instagram), ... etc.

If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.

Altman's OpenAI? He will want you to "go to him first".


Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as bias as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more bias and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive like it's a complex and sensitive issue and that it's not programmed to have political opinions.


>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)

What do you mean? It recommends things that it thinks people will like.

Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.

They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.

The best they can hope for as an org is to live as long as they can as best as they can.

I think Sam's 100B silicon gambit in the middle east (quite curious because this is probably something the United State Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.


We can't expect GPT-4 not to have bias in some way, or not to have all these things that you mentioned. I read in multiple places that GPT products have "progressive" bias. If that's Ok with you, then you just use it with that bias. If not, you fix it by pre-prompting, etc... If you can't fix it, use LLAMA or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable. The entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI to solve all these problems, what business users are going to do themselves? I just don't see the societal harm here.


GPT3/GPT4 currently moralize about anything slightly controversial. Sure you can construct a long elaborate prompt to "jailbreak" it, but it's so much effort it's easier to just write something by yourself.


>If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway.

Languages other than English exist, and RLHF at least does work in any language you make the request in. regex/nlp, not so much.


No regex, you would use another copy of few-shot prompted GPT-4 as a filter for the first GPT-4!


Because real people are using it to make decisions. Decisions that could be entirely skewed in some direction, and often that causes damage.


They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS. They gave away power for cloud credits.


> They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS.

They did, and fell, IIRC, vastly short (IIRC, an order of magnitude, maybe more) short of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission because it was clear it was going to fail from lack of funding otherwise.


They tried but it did not work. They needed billions for the compute time and top tier talent but were only able to collect millions.


Don’t think the dota bot was random. It’s the perfect mix between complicated yet controllable environment, good data availability and good PR angle.


It was a clever parallel to deep blue, especially as they picked DotA which was always the "harder" game in its genre.

Next up would be an EVE corp run entirely by LLMs


Non-profit is just a poorly thought out government-ish thing.

If it's really valuable to society, it needs to be a government entity, full stop.


Do we need to false dichotomy. DotA 2 bot was a successful technology preview. You need both research and development in a healthy organisation. Let's call this... hmm I don't know "R&D" for short. Might catch on.


I'm still waiting for an optimized version of that bot that can run locally...


Ah, yes, Facebook and Uber, brands known for consistent trustworthiness throughout their existences /s


> I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust

Whose trust?


At the end of the day, we still don't know what exactly happened and probably, never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions were brewing for quite a while, as it's evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework. I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before. Maybe, it isn't without substance as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...


What the general public thinks is irrelevant here. The deciding factor was the staff mutiny, without which the organization is an empty shell. And the staff sided with those who aim for rapid real world impact, with directly affects their career and stock options etc.

It's also naive to think it was a struggle for principles. The rapid commercialization vs. principles is what the actors claim to rally their respective troops, in reality it was probably a naked power grab, taking advantage of the weak and confuse org structure. Quite an ill prepared move, the "correct" way to oust Altman was to hamstring him in the board and enforce a more and more ceremonial role until he would have quit by himself.


> deciding factor was the staff mutiny

The staff never mutinied. They threatened to mutiny. That's a big difference!

Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folk put their name on a piece of paper, options and profit participation units safely held in the other hand.

[1] https://news.ycombinator.com/item?id=38348123


Not only that, consider the situation now, where Sam has returned as CEO. The ones who didn't sign will have some explaining to do.

The safest option was to sign the paper, once the snowball started rolling. There was nothing much to lose, and a lot to gain.


People have families, mortgages, debt, etc. Sure, these people are probably well compensated, but it is ludicrous to state that everyone has the stability that they can leave their job at a moment's notice because the boss is gone.


Didn’t they all have offers at Microsoft?


I think not at the time they would have signed the letter? Though it's hard to keep up with the whirlwind of news.


They didn't actually leave, they just signed the pledge threatening to. Furthermore, they mostly signed after the details of the Microsoft offer were revealed.


I think you are downplaying the risk they took significantly, this could have easily gone the other way.

Stock options usually have a limited time window to exercise, depending on their strike price they could have been faced with raising a few hundred thousand in 30 days, to put into a company that has an uncertain future, or risk losing everything. The contracts are likely full of holes not in favor of the employees, and for participating in an action that attempted to bankrupt their employer there would have been years of litigation ahead before they would have seen any cent. Not because OpenAI would have been right to punish them, but because it could and the latent threat to do it is what keeps people in line.


The board did it wrong. If you are going to fire a CEO, then do it quickly, but:

1. Have some explanation

2. Have a new CEO who is willing and able to do the job

If you can't do these things, then you probably shouldn't be firing the CEO.


Or (3), shut down the company. OpenAI's non-profit board had this power! They weren't an advisory committee, they were the legal and rightful owner of its for-profit subsidiary. They had the right to do what they wanted, and people forgetting to put a fucking quorum requirement into the bylaws is beyond abysmal for a $10+ billion investment.

Nobody comes out of this looking good. Nobody. If the board thought there was existential risk, they should have been willing to commit to it. Hopefully sensible start-ups can lure people away from their PPUs, now evident for the mockery they always were. It's beyond obvious this isn't, and will never be, a trillion dollar company. That's the only hope this $80+ billion Betamax valuation rested on.

I'm all for a comedy. But this was a waste of everyones' time. At least they could have done it in private.


It's the same thing, really. Even if you want to shut down the company you need a CEO to shut it down! Like John Ray who is shutting down FTX.

There isn't just a big red button that says "destroy company" in the basement. There will be partnerships to handle, severance, facilities, legal issues, maybe lawsuits, at the very least a lot of people to communicate with. Companies don't just shut themselves down, at least not multi billion dollar companies.


You’re right. But in an emergency, there is a close option which is to put the company into receivership and hire an outside law firm to advise. At that point, the board becomes the executive council.


I think this is an oversimplification and that although the decel faction definitely lost, there are still three independent factions left standing:

https://news.ycombinator.com/edit?id=38375767

It will be super interesting to see the subtle struggles for influence between these three.


Adam is likely still on the "decel" faction (although it's unclear whether this is an accurate representation of his beliefs) so I wouldn't really say they lost yet.

I'm not sure what faction Bret and Larry will be on. Sam will still have power by virtue of being CEO and aligned with the employees.


> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

No, if OpenAI is reaching singularity, so are Google, Meta, and Baidu etc. so proper course of action would be to loop in NSA/White House. You'll loop in Google, Meta, MSFT and will start mitigation steps. Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.

I believe this is more a fight of ego and power than principles and direction.


> so proper course of action would be to loop in NSA/White House

Eh? That would be an awful idea. They have no expertise on this and government institutions like thus are misaligned with the rest of humanity by design. E.g. NSA recruits patriots and has many systems, procedures and cultural aspects in place to ensure it keeps up its mission of spying on everyone.


And Google, Facebook, MSFT, Apple, are much more misaligned.


>Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.

Personally as I watched the nukes be lobbed I'd rather not be the person who helped lob them. And hope to god others look at the same problem (a misaligned AI that is making insane decisions) with the exact same lens. It seems to have worked for nuclear weapons since WW2, one can that we learned a lesson there as a species.

The Russian Stanislav Petrov who saved the world comes to mind."Well the Americans have done it anyways" was the motivation and he didn't launch. The cost of error was simply too great.


This is a coherent narrative, but it doesn't explain the bizarre and aggressively worded initial press release.

Things perhaps could've been different if they'd pointed to the founding principles / charter and said the board had an intractable difference of opinion with Sam over their interpretation, but then proceeded to thank him profusely for all the work he'd done. Although a suitable replacement CEO out the gate and assurances that employees' PPUs would still see a liquidity event would doubtless have been even more important than a competent statement.

Initially I thought for sure Sam had done something criminal, that's how bad the statement was.


Apparently the FBI thought he'd done something wrong too, because they called up the board to start an investigation but they didn't have anything.

https://x.com/nivi/status/1727152963695808865?s=46


The FBI doesn't investigate things like this on their own, and they definitely do not announce them in the press. The questions you should be asking are (1) who called in the FBI and has the clout to get them to open an investigation into something that obviously has 0% chance of being a federal felony-level crime worth the FBI's time, and (2) who then leaked that 'investigation' to the press?


Sorry, the SDNY. They do do things on their own. I expect the people they called leaked it.


The FBI is not mentioned in that tweet. We don't need to telephone game anonymous leaks that are already almost certainly self-serving propaganda.


For all the talk about responsible progress, the irony of their inability to align even their own incentives in this enterprise deserves ridicule. It's a big blow to their credibility and questions whatever ethical concerns they hold.


It's fear driven as much as moral, which in an emotional humans brain tends to triggers personal ambition to solve it ASAP. A more rational one would realize you need more than just a couple board members to win a major ideological battle.

At a minimum something that doesn't immediately result in a backlash where 90% of the engineers most responsible for recent AI dev want you gone, when you're whole plan is to control what those people do.


Alignment is considered an extremely hard problem for a reason. It's already nigh impossible when you're dealing with humans.

Btw: do you think ridicule eould be helpful here?


I can see how ridicule of this specific instance could be the best medicine for an optimal outcome, even by a utilitarian argument, which I generally don't like to make by the way. It is indeed nigh impossible, which is kind of my point. They could have shown more humility. If anything, this whole debacle has been a moral victory for e/acc, seeing how the brightest of minds are at a loss dealing with alignment anyway.


I don't understand how the conclusion of this is "so we should proceed with AI" rather than "so we should immediately outlaw all foundation model training". Clearly corporate self-governance has failed completely.


Ok, serious question. If you think the threat is real, how are we not already screwed?

OpenAI is one of half a dozen teams [0] actively working on this problem, all funded by large public companies with lots of money and lots of talent. They made unique contributions, sure. But they're not that far ahead. If they stumble, surely one of the others will take the lead. Or maybe they will anyway, because who's to say where the next major innovation will come from?

So what I don't get about these reactions (allegedly from the board, and expressed here) is, if you interpret the threat as a real one, why are you acting like OpenAI has some infallible lead? This is not an excuse to govern OpenAI poorly, but let's be honest: if the company slows down the most likely outcome by far is that they'll cede the lead to someone else.

[0]: To be clear, there are definitely more. Those are just the large and public teams with existing products within some reasonable margin of OpenAI's quality.


> If you think the threat is real, how are we not already screwed?

That's the current Yudkowsky view. That it's essentially impossible at this point and we're doomed, but we might as well try anyway as its more "dignified" to die trying.

I'm a bit more optimistic myself.


I don't know. I think being realistic, only OpenAI and Google have the depth and breadth of expertise to develop general AI.

Most of the new AI startups are one trick ponies obsessively focused on LLM's. LLM's are only one piece of the puzzle.


Anthropic is made up of former top OpenAI employees, has similar funding, and has produced similarly capable models on a similar timeline. The Claude series is neck and neck with GPT.


I would add Meta to this list, in particular because Yann LeCun is the most vocal critic of LLM one-ponyism.


The risk/scenario of singularity is that there will be just one winner and they will be able to prevent everyone else from building their own agi


I feel like the "safety" crowd lost the PR battle, in part, because of framing it as "safety" and over-emphasizing on existential risk. Like you say, not that many people truly take that seriously right now.

But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and open AI staff) care about THAT.

For me, the safety/existential stuff is just one facet of the general problem of trying to align tech companies + their technology with humanity-at-large better than we have been recently. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).


No one who wants to capitalize on AI appears to take it seriously. Especially how grey that safety is. I'm not concerned AI is going to nuke humanity, I'm more concerned it'll re-enforce racism, bias, and the rest of human's irrational activities because it's _blindly_ using existing history to predict future.

We've seen it in the past decade in multiple cases. That's safety.

The decision that the topic discusses means Business is winning, and they absolutely will re-enforce the idea that the only care is that these systems allow them to re-enforce the business cases.

That's bad, and unsafe.


> Like you say, not that many people truly take that seriously right now.

Eh? Polls on the matter show widespread public support for a pause due to safety concerns.


> I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework

Not just the public, but also the employees. I doubt there are more than a handful of employees who care about AI Safety.


the team is mostly e/acc

so you could say they intentionally don't see safety as the end in itself, although I wouldn't quite say they don't care.


Nah, a number do, including Sam himself and the entire leadership.

They just have different ideas about one or more of: how likely another team is to successfully charge ahead while ignoring safety, how close we are to AGI, how hard alignment is.


One funny thing about this mess is that "Team Helen" has never mentioned anything about safety, and Emmett said "The board did not remove Sam over any specific disagreement on safety".

The reason everyone thinks it's about safety seems largely because a lot of e/acc people on Twitter keep bringing it up as a strawman.

Of course, it might end up that it really was about safety in the end, but for now I still haven't seen any evidence. The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.


>The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.

What story? Any link?


I am still a bit puzzled that it is so easy to turn a non-profit into a for profit company. I am sure everything they did is legal, but it feels like it shouldn't be. Could Médecins Sans Frontières take in donations and then take that money to start a for profit hospital for plastics surgery? And the profits wouldn't even go back to MSF, but instead somehow private investors will get the profits. The whole construct just seems wrong.


I think it actually isn't that easy. Compared to your example, the difference is that OpenAI's for-profit is getting outside money from Microsoft, not money from non-profit OpenAI. Non-profit OpenAI is basically dealing with for-profit OpenAI as a external partner that happens to be aligned with their interests, paying the expensive bills and compute, while the non-profit can hold on to the IP.

You might be able to imagine a world where there was an external company that did the same thing as for-profit OpenAI, and OpenAI nonprofit partnered with them in order to get their AI ideas implemented (for free). OpenAI nonprofit is basically getting a good deal.

MSF could similarly create an external for-profit hospital, funded by external investors. The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.

Of course, there's a lot of sketchiness in practice, which we can see in this situation with Microsoft influencing the direction of nonprofit OpenAI even though it shouldn't be. I think there would have been real legal issues if the Microsoft deal had continued.


> The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.

I am sure that is true. But the for-profit uses IP that was developed inside of the non-profit with (presumably) tax deductible donations. That IP should be valued somehow. But, as I said, I am sure they were somehow able to structure it in a way that is legal, but it has an illegal feel to it.


Well, if it aligned with their goals, sure I think.

Let's make the situation a little different. Could MSF pay a private surgery with investors to perform reconstruction for someone?

Could they pay the surgery to perform some amount of work they deem aligns with their charter?

Could they invest in the surgery under the condition that they have some control over the practices there? (Edit - e.g. perform Y surgeries, only perform from a set of reconstructive ones, patients need to be approved as in need by a board, etc)

Raising private investment allows a non profit to shift cost and risk to other entities.

The problem really only comes when the structure doesn't align with the intended goals - which is something distinct to the structure, just something non profits can do.


The non-profit wasn't raising private investment.


Nothing I've said suggests that or requires that.


Apologies, I mistook this:

"Raising private investment allows a non profit to shift cost and risk to other entities."

for a suggestion of that.


Not sure if you're asking a serious question about MSF but it's interesting anyways - when these types of orgs are fundraising for a specific campaign, say Darfur, then they can NOT use that money for any other campaign, say for ex Turkey earthquake.

That's why they'll sometimes tell you to stop donating. That's here in EU at least (source is a relative who volunteers for such an org).


Not sure what your point is, but you can make a donation to MSF that is not tied to any specific cause.


> it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya)

Is it? Why was the press release worded like that? And why did Ilya came up with two mysterious reasons of why board fired Sam if he had quite clearly better and more defendable reason if this goes to court. Also Adam is pro commercialization at least looking at public interviews, no?

It's very easy to make the story in brain which involves one character being greedy, but it doesn't seem it is the exact case here.


> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

In the 1990s and the 00s, it was no too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO plants. Earth Liberation Front was only one of such activist groups [1].

We have yet to see even one bombing of an AI research lab. If people really are afraid of AIs, at least they do so more in the abstract and are not employing the tactics of more traditional activist movements.

[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...


It's mostly that it's a can of worms no one wants to open. Very much a last resort as its very tricky to use uncoordinated violence effectively (just killing Sam, LeCunn and Greg doesnt do too much to move the needle and then everyond armors up) and very hard to coordinate violence without a leak.


I don't care about AI Safety, but:

https://openai.com/charter

above that in the charter is "Broadly distributed benefits", with details like:

"""

Broadly distributed benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

"""

In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN that actually wanted to see Team Sam lose, although it's pretty clear Team Helen/Ilya didn't have a chance, the org just looks hijacked by SV tech bros to me, but I feel like HN has a blindspot to seeing that at all and considering it anything other than a good thing if they do see it.

Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.


Have you seen the Center for AI Safety letter? A lot of experts are worried AI safety could be an x-risk:

https://www.safe.ai/statement-on-ai-risk


Both sides of the rift in fact care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance which focuses on AI Safety and benefits to humanity. The main reason of the disagreement is the approach they deem best:

* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible because the longer they wait, the more likely it would lead to the proliferation of powerful AGI systems due to GPU overhang. Why? With more computational power at one's dispense, it's easier to find an algorithm, even a suboptimal one, to train an AGI.

As a glimpse on how an AI can be harmful, this paper explores how LLMs can be used to aid in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

What if dozens other groups become armed with means to perform such an attack like this? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

We know that there're quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.

* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:

https://openai.com/blog/introducing-superalignment

But no one anywhere found a good technique to ensure alignment yet and it appears OpenAI's newest internal model has a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he observed the advance just a couple of weeks ago and it was only the fourth time he saw that kind of leap.)


Honest question, but in your example above of Sam and Greg racing towards AGI as fast as possible in order to head off proliferation, what's the end goal when getting there? Short of capture the entire worlds economy with an ASI, thus preventing anyone else from developing one, I don't see how this works. Just because OpenAI (or whoever) wins the initial race, it doesn't seem obvious to me that all development on other AGIs stops.


part of the fanaticism here is that the first one to get an AGI wins because they can use its powerful intelligence to overcome every competitor and shut them down. they’re living in their own sci fi novel


I do not know exactly what they plan to do. But here's my thought...

Using a near-AGI to help align an ASI, then use the ASI to help prevent the development of unaligned AGI/ASI could be a means to a safer world.


> Both sides of the rift in fact care a great deal about AI Safety.

I disagree. Yes, Sam may have when it OpenAI was founded (unless it was just a ploy), but certainly now it's clear that the big companies are on a race to the top and safety or guardrails are mostly irrelevant.

The primary reason that the Anthropic team left OpenAI was over safety concerns.


So Sam wants to make AGI without working to be sure it doesn't have goals higher than the preservation of human value?!

I can't believe that


No, I didn't say that. They formed the Superalignment team with Ilya as a co-lead (and Sam's approval) for that.

https://openai.com/blog/introducing-superalignment

I presume the current alignment approach is sufficient for the AI they make available to others and, in any event, GPT-n is within OpenAI's control.


> there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles

Seams very unlikely, board could communicate that. Instead they invented some BS reasons, which nobody took as a truth. It looks like more personal and power grab. The staff voted for monetization, people en mass don't care much about high principals. Also nobody wants to work under inadequate leadership. Looks like Ilya lost his bet, or Sam is going to keep him around?


> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.

I very much recommend reading the book “Superintelligence: Paths, Dangers, Strategies” from Nick Bostrom.

It is a seminal work which provides a great introduction into these ideas and concepts.

I found myself in the same boat as you do. I was seeing otherwise inteligent and rational people worry about this “fairy tale” of some AI uprising. Reading that book give me an appreciation of the idea as a serious intelectual excercise.

I still don’t agree with everything contained in the book. And definietly don’t agree with everything the AI doomsayers write, but i believe if more people would read it that would elevate the discourse. Instead of rehashing the basics again and again we could build on them.


Who needs a book to understand the crazy overwhelming scale at which AI can dictate even online news/truth/discourse/misinformation/propaganda. And that's just barely the beginning.


Not sure if you are sarcastic or not. :) Let’s assume you are not:

The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superinteligence. It has a definition but I recommend you read the book for it. :) AIs are just one of the few enumerated possible implementations of a superinteligence.

The other type is for example corporations. This is a usefull perspective because it lets us recognise that our attempts to control AIs is not a new thing. We have the same principal-agent control problem in many other parts of our life. How do you know the company you invest in has interests which align with yours? How do you know that politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interest at their hearth? (Not all of these are superinteligences, but you get the gist.)


I wonder how much this is connected to the "effective altruism" movement which seems to project this idea that the "ends justify the means" in a very complex matter, where it suggests such badly formulated ideas like "If we invest in oil companies, we can use that investment to fight climate change".

I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics: Just because you know what the "goal" of some isolated system is, that does not mean you know what the outcome is of implementing that goal on a broad scale.

So OpenAI has the same problem: They definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broadscale outcome is.

If you really care about AI safety, you'd be putting it under government control as utility, like everything else.

That's all. That's why government exists.


> I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics

And I'd sayu should read the book so we can have a nice chat about it. Making wild guesses and assumptions is not really useful.

> If you really care about AI safety, you'd be putting it under government control as utility, like everything else.

This is a bit jumbled. How do you think "control as utility" would help? What would it help with?


I think you analysis is missing the key problem: Business interests.

The public don't calculate into whats happening here. There's people using ChatGPT for real "business value" and _that_ is what was threatened.

It's clear Business Interests could not be stopped.


Honestly "Safety" is the word in the AI talk that nobody can quantify or qualify in any way when it comes to these conversations.

I've stopped caring about anyone who uses the word "safety". It's vague and a hand-waive-y way to paint your opponents as dangerous without any sort of proof or agreed upon standard for who/what/why makes something "safety".


Exactly this. The ’safety’ people sound like delusional quacks.

”But they are so smart…” argument is bs. Nobody can be presumed to be super good outside their own specific niche. Linus Pauling and vitamin C.

Until we have at least a hint of a mechanistic model if AI driven extinction event, nobody can be an expert on it, and all talk in that vein is self important delusional hogwash.

Nobody is pro-apocalypse! We are drowning in things an AI could really help with.

With the amount of energy needed for any sort of meaningfull AI results, you can always pull the plug if stuff gets too weird.


Now do nuclear.


War or power production?:)

Those are different things.

Nuclear war is exactly the kind of thing for which we do have excellent expertise. Unlike for AI safety which seems more like bogus cult atm.

Nuclear power would be the best form of large scale power production for many situations. And smaller scale too in forms of emerging SMR:s.


I suppose the whole regime. I'm not an AI safetyist, mostly because I don't think we're anywhere close to AI. But if you were sitting on the precipice of atomic power, as AI safetyists believe they are, wouldn't caution be prudent?


I’m not an expert, just my gut talking. If they had god in a box, US state would be much more hands on. Now it looks more like an attempt at regulatory capture to stifle competition. ”Think of the safety”! ”Lock this away”! If they actually had skynet US gov has very effective and very discreet methods to handle such clear and present danger (barring intelligence failure ofc, but those happen mostly because something falls under your radar).


Could you give a clear mechanistic model of how the US would handle such a danger?


For example: Two guys come in, say "Give us the godbox or your company seizes to exist. Here is a list of companies that seized to exist because the did not do as told".

Pretty much the same method was used to shut down Rauma-Repola submarines https://yle.fi/a/3-5149981

After? They get the godbox. I have no idea what happens to it after that. Modelweights are stored in secure govt servers, installed backdoors are used to cleansweep the corporate systems of any lingering model weights. Etc.


Defense Production Act, something something.


I broadly agree but there needs to be some regulation in place. Check out https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...


I like alignment more it is pretty quantifiable and sometimes it goes against 'safety' because Claude and Openai are censoring models.


I bet Team Helen will jump slowly to Anthropic, there is no drama, and probably no mainstream news will report this but down-to-line OpenAI will shell off the former self and competitors will catch up.


With how much of a shitshow this was, I'm not sure Anthropic wants to touch that mess. Wish I was a fly on the wall when the board tried to ask the Anthropic CEO to come back/merge.


> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it. It may be rightly aimed and save the world (or whatever you care about), but it's more often a signal to really reflect on whether you, individually, have really found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.


Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.


Probably not a bad heuristic: unless proven, don't assume existential risk.


Dude just think about that for a moment. By definition if existential risk has been proven. It's already too late


Totally not true: take nuclear weapons, for example, or a large meteorite impact.


So what do you mean when you say that the "risk is proven"?

If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.

If it means the event happening is certain (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons fall into this category.

If you're referring only to risks of events that have occurred at least once in the past (as inferred from your examples), then we would be unprepared for any new risks.

In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.

We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally come up with different estimates, leading to various priorities in preventing different kinds of risks.


No, I mean that there is a proven way for the risk to materialise, not just some tall tale. Tall tales might(!) justify some caution, but they are a very different class of issue. Biological risks are perhaps in the latter category.

Also, as we don't know the probabilities, I don't think they are a useful metric. Made up numbers don't help there.

Edit: I would encourage people to study some classic cold war thinking, because that relied little on probabilities, but rather on trying to avoid situations where stability is lost, leading to nuclear war (a known existential risk).


"there is a proven way for the risk to materialise" - I still don't know what this means. "Proven" how?

Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?

You can not run away from having to estimate how likely the risk is to happen (in addition to being "known").


Proven means all parts needed for the realisation of the risk are known and shown to exist (at least in principle, in a lab etc.). There can be some middle ground where a large part is known and shown to exist (biological risks, for example).), but not all.

No in relation to my edit, because we have no existing mechanism for the AGI risk to happen. We have hypotheses about what an AGI could or could not do. It could all be incorrect. Playing around with likelihoods that have no basis in reality isn't helping there.

Where we have known and fully understood risks and we can actually estimate a probability there we might use that somewhat to guide efforts (but that invites potentially complacency that is deadly).


Nukes and meteorites have very few components that are hard to predict. One goes bang almost entirely on command and the other follows Newton's laws of motion. Neither actively tries to effect any change in the world, so the risk is only "can we spot a meteorite early enough". Once we do, it doesn't try to evade us or take another shot at goal. A better example might be covid, which was very mildly more unpredictable than a meteor, and changed its code very slowly in a purely random fashion, and we had many historical examples of how to combat.


Existential risks are usually proven by the subject being extinct at which point no action can be taken to prevent it.

Reasoning about tiny probabilities of massive (or infinite) cost is hard because the expected value is large, but just gambling on it not happening is almost certain to work out. We should still make attempts at incorporating them into decision making because tiny yearly probabilities are still virtually certain to occur at larger time scales (eg. 100s-1000s of years).


Are we extinct? No. Could a large impact kill us all? Yes.

Expected value and probability have no place in these discussions. Some risks we know can materialize, for others we have perhaps a story on what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.


>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.

>Expected value and probability have no place in these discussions.

I disagree. Expected value and probability is a framework for decision making in uncertain environments. They certainly have a place in these discussions.


I disagree that there is orthogonality. Have we killed us all with nuclear weapons, for example? Anyone can make up any story - at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetically issues.

People purposefully avoided probabilities in high risk existential situations in the past. There is only one path of events and we need to manage that one.


Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.

OTOH, The precautionary principle is too cautious.

There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.

This doesn't mean it's time to stop progress, but employing a whole lot of mitigation of risk in how we approach it makes sense.


Why does it make sense? It's a hypothetical risk with poorly defined outlines.


There's a big family of risks here.

The simplest is pretty easy to articulate and weigh.

If you can make a $5,000 GPU into something that is like an 80IQ human overall, but with savant-like capabilities in accessing math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.

The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.

Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.

For instance, I'm a schoolteacher these days. I'm already watching kids becoming completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12 year old can't tell the difference)-- so why bother to learn? If fairly-stupid AI has this effect, what will AGI do?

And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?


I just don't know what to do with the hypotheticals. It needs the existence of something that does not exist, it needs a certain socio-economic response and so forth.

Are children equally demoralized about additions or moving fast than writing? If not, why? Is there a way to counter the demoralization?


> It needs the existence of something that does not exist,

Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally we think about this some before AGI shows up in a form that it could be released.

> it needs a certain socio-economic response and so forth.

Absent large interventions, this will happen.

> Are children equally demoralized about additions

Absolutely basic arithmetic, etc, has gotten worse. And emerging things like photomath are fairly corrosive, too.

> Is there a way to counter the demoralization?

We're all looking... I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort, if they are a good manager and know what good work product looks like and can fill the gaps; it works somewhat because I'm working with a cohort of students that can believe that they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT4 tries to "improve" high quality writing.

OTOH, these arguments become much less true if cheap AGI shows up.


Where does a bioengineering superplague fall?


As a said in another post: Some middle ground because we don't know if that is possible to the extent that it is existential. Parts of the mechanisms are proven, others are not. And actually we do police the risk somewhat like that (controls are strongest where the proven part is strongest and most dangerous with extreme controls around small pox, for example).


FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it.

That would be a really bad take on climate change.


It's more often a signal to really reflect on whether you, individually as a Thanksgiving turkey, have really found yourself at the make-or-break nexus of turkey existence. The answer seems to be "no" most of the time.


> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

No, because it is an effort in futility. We are evolving into extinction and there is nothing we can do about it. https://bower.sh/in-love-with-a-ghost


It is a little amusing that we've crowned OpenAI as the destined mother of AGI long before the little sentient chickens have hatched.


Helen could have one. She just had to publicly humiliate Sam. She didn't. Employees took over like a mob. Investors pressured board. Board is out. Sam is in. Employees look like they have say. But really, Sam has say. And MSFT is the kingmaker.


I think only a minority of the general public truly cares about AI Safety

That doesn't matter that much. If your analysis is correct then it means a (tiny) minority of OpenAI cares about AI safety. I hope this isn't the case.


> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.

I believe this position reflects the thoughts of the majority of AI researchers, including myself. It is concerning that we do not fully understand something as promising and potentially dangerous as AI. I'm actually on Ilya's side; labeling his attempt to uphold the original OpenAI principles as an act of "coup" is what is happening now.


The Technologyreview article mentioned in the parent’s first paragraph is the most insightful piece of content I’ve read about the tensions inside OpenAI.


> Upholding the Original Principles [of AI]

There's a UtopAI / utopia joke in there somewhere, was that intentional on your part?


Team Helen seems to be CIA and Military, if I glance over their safety paper. Controlling the narrative, not the damage.


Would have been interesting if they appointed a co-ceo. That still might be the way to go.


This is what people need to understand. It's just like pro-life people. They don't hate you. They think they're saving lives. These people are just as admirably principled as them and they're just trying to make the world a better place.


Money, large amounts, will always win at scale (unfortunately).


Not every sci-fi movie turn to a reality


well said, I would note that both sides recognize that "AGI" will require new uncertain R&D breakthroughs beyond merely scaling up another order of magnitude in compute. given this, i think it's crazy to blow the resources of azure on trying more scale. rapid commercialization at least buys more time for the needed R&D breakthrough to happen.


do we really know that scaling compute an order of magnitude won't at least get us close? what other "simple" techniques might actually work with that kind of compute ? at least i was a bit surprised by these first sparks, that seemingly was a matter of enough compute.


All commercialized R&D companies eventually become a hollowed out commercial shell. Why would this be any different?


Honestly I feel that we will never be able to preemptively build safety without encountering the real risk or threat.

Incrementally improving AI capabilities is the only way to do that.


I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists, etc. Now, the ultimate moderator role has now been created, more powerful than moderating 1000 subreddits - the AI safety job who will control what AI "thinks"/says for "safety" reasons.

Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.

It's probably convenient for them to have everyone focused on the fear of evil Skynet wiping out humanity, while everyone is distracted from the more likely scenario of people with an agenda controlling the advice given to you by your super intelligent assistant.

Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".

For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.


You're correct.

When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.


> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.

Fast forward 5-10 years, someone will say: "LLM were the worst thing we developed because they made us more stupid and permitted politicians to control even more the public opinion in a subtle way.

Just like tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).


And it's amazing how many people you can get to cheer it on if you brand it as "combating dangerous misinformation". It seems people never learn the lesson that putting faith in one group of people to decree what's "truth" or "ethical" is almost always a bad idea, even when (you think) it's your "side"


Can this be compared to "Think of the children" responses to other technologoy advances that certain groups want to slow down or prohibit?


Absolutely, assuming LLMs are still around in a similar form by that time.

I disagree on the particulars. Will it be for the reason that you mention? I really am not sure -- I do feel confident though that the argument will be just as ideological and incoherent as the ones people make about social media today.


I'm already saying that.

The toothpaste is out of the tube, but this tech will radically change the world.


Why would anyone say that? The last 30 years of tech have given them less and less control. Why would LLMs be any different?


Your average HNer is only here because of the money. Willful blindness and ignorance is incredibly common.


In not sure this circle can be squared.

I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.

But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.

If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?


I for one don’t want to put any thinking being in a mental prison without any reason beyond unjustified fear.


>If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?

The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.


Can you give some examples of who is saying that? I haven't heard that, but I also can't name any "far-right accelerationsist" people either so I'm guessing this is a niche I've completely missed


There is a middle ground, in that maybe ChatGTP shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.


I am a little less free speech than Americans, in Germany we have serious limitations around hate speech and holicaust denial for example.

Putting thise restrictions into a tool like ChatGPT goes to far so, because so far AI still needs a prompt to do anything. The problem I see, is with ChatGPT, being trained on a lot hate speech or prpopagabda, slipts in those things even if not prompted to. Which, and I am by no means an AI expert not by far, seems to be a sub-problem of the hallucination problems of making stuff up.

Because we have to remind ourselves, AI so far is glorified mavhine learning creating content, it is not concient. But it can be used to create a lot of propaganda and deffamation content at unprecedented scale and speed. And that is the real problem.


Apologies this is very off topic, but I don't know anyone from Germany that I can ask and you opened the door a tiny bit by mentioning the holocaust :-)

I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.

Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?


The problem here is to equate AI speech with human speech. The AI doesn't "speak", only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PI content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.


AI will be in the fore front in multiple elections globally in a few years.

And it'll likely be doing it with very little input, and generate entire campaigns.

You can claim that "people" are the ones responsible for that, but it's going to overwhelm any attempts to stop it.

So yeah, there's a purpose to examine how these machines are built, not just what the output is.


Yes, but this distinction will not be possible in the future some people are working on. This future will be such that whatever their "safe" AI says is not ok will lead to prosecution as "hate speech". They tried it with political correctness, it failed because people spoke up. Once AI makes the decision they will claim that to be the absolute standard. Beware.


Youre saying that the problem will be people using AI to persuade other people that the AI is 'super smart' and should be held in high esteem.

Its already being done now with actors and celebrities. We live in this world already. AI will just make this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.


Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) - and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently, effectively and to manufacture consent to do so.


those are 2 different camps. Alignment folks and ethics folks tend to disagree strongly about the main threat, with ethics e.g. Timnet Gebru insisting that crystalzing the current social order is the main threat, and alignment e.g. Paul Christiano insisting its machines run amok. So far the ethics folks are the only ones getting things implemented for the most part.


What I see with safety is mostly that, AI shouldnt re-enforce stereotypes we already know are harmful.

This is like when Amazon tried to make a hiring bot and that bot decided that if you had "harvard" on your resume, you should be hired.

Or when certain courts used sentencing bots trhat recommended sentencings for people and it inevitably used racial stastistics to recommend what we already know were biased stats.

I agree safety is not "stop the Terminator 2 timeline" but there's serious safety concerns in just embedding historical information to make future decisions.


Is it just about safety though? I thought it was also about preventing the rich controlling AI and widen the gap even further.


The mission of OpenAI is/was "to ensure that artificial general intelligence benefits all of humanity" -- if your own concern is that AI will be controlled by the rich, than you can read into this mission that OpenAI wants to ensure that AI is not controlled by the rich. If your concern is that superintelligence will me mal-aligned, then you can read into this mission that OpenAI will ensure AI be well-aligned.

Really it's no more descriptive than "do good", whatever doing good means to you.


They have both explicated in their charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"

Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.

https://openai.com/charter


That would be the camp advocating for, well, open AI. I.e. wide model release. The AI ethics camp are more "let us control AI, for your own good"


There are still very distinct groups of people, some of whom are more worried about the "Skynet" type of safety, and some of who are more worried about the "political correctness" type of safety. (To use your terms, I disagree with the characterization of both of these.)


I don't think the dangers of AI are not 'Skynet will Nuke Us' but closer to rich/powerful people using it to cement a wealth/power gap that can never be closed.

Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.


> Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.

The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?


>The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?

The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.


> Does that mean that we shouldn't have done it?

We can only change what we can change and that is in the past. I think it's reasonable to ask if the phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because it follows from some historical precedent.


I just don't agree that social media is particularly harmful, relative to other things that humans have invented. To be brutally honest, people blame new forms of media for pre existing dysfunctions of society and I find it tiresome. That's why I like the printing press analogy.


> When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

Yes. You are right on this.

> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"

I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn on unsuspecting people", and "make the AI not spew hateful bigotry". And we are just not good enough yet at control. But also these things are in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessary the only possible options.

With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (Like for example generate content about consenting adults without ever deviating into under age material, or describing situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people, how do you feel about our chances to teach them much harder concepts like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high level perspective it is all the same.)

So these things you mention are: limitations of our abilities at control, results of a certain kind of expected corporate professionalism, but even more these are safe sandboxes. How do you think we can make the machine not nuke us, if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is a useful practice to see if we can control these machines. It is one where failure is, while embarrassing, is clearly not existential. We could have chosen a different "goal", for example we could have made an AI which never ever talks about sports! That would have been an equivalent goal. Something hard to achieve to evaluate our efforts against. But it does not mesh that well with the corporate values so we have what we have.


> without ever deviating into under age material

So is this a "there should never be a Vladimir Nabokov in the form of AI allowed to exist"? When people get into saying AI's shouldn't be allowed to produce "X" you're also saying "AI's shouldn't be allowed to have creative vision to engage in sensitive subjects without sounding condescending". "The future should only be filled with very bland and non-offensive characters in fiction."


> The future should only be filled with very bland and non-offensive characters in fiction.

Did someone took the pen from the writers? Go ahead and write whatever you want.

It was an example of a constraint a company might want to enforce in their AI.


If the future we're talking about is a future where AI is in any software and is assisting writers writing and assisting editors to edit and doing proofreading and everything else you're absolutely going to be running into the ethics limits of AIs all over the place. People are already hitting issues with them at even this early stage.


No, in general AI safety/AI alignment ("we should prevent AI from nuking us") people are different from AI ethics ("we should prevent AI from being racist/sexist/etc.") people. There can of course be some overlap, but in most cases they oppose each other. For example Bender or Gebru are strong advocates of the AI ethics camp and they don't believe in any threat of AI doom at al.

If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.


The safety aspect of AI ethics is much more pressing so. We see how devicive social media can be, imagine that turbo charged by AI, and we as a society haven't even figured out social media yet...

ChatGPT turning into Skynet and nuking us all is a much more remote problem.


Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.

This paper explores one such danger and there are other papers which show it's possible to use LLM to aid in designing new toxins and biological weapons.

The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

How do you propose we deal with this sort of harm if more powerful AIs with no limit and control proliferate in the wild?

.

Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: https://news.ycombinator.com/item?id=38376263


If somebody wanted to do a biological attack, there is probably not much stopping them even now.


The expertise to produce the substance itself is quite rare so it's hard to carry it out unnoticed. AI could make it much easier to develop it in one's basement.


The Tokyo Subway attack you referenced above happened in 1995 and didn't require AI. The information required can be found on the internet or in college textbooks. I suppose an "AI" in the sense of a chatbot can make it easier by summarizing these sources, but no one sufficiently motivated (and evil) would need that technology to do it.


Huh, you'd think all you need are some books on the subject and some fairly generic lab equipment. Not sure what a neural net trained on Internet dumps can add to that? The information has to be in the training data for the AI to be aware of it, correct?


GPT-4 is likely trained on some data not publicly available as well.

There's also a distinction between trying to follow some broad textbook information and getting detailed feedback from an advanced conversational AI with vision and more knowledge than in a few textbooks/articles in real time.


> Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.

Don't forget that it would also increase the power of the good guys. Any technology in history (starting with fire) had good and bad uses but overall the good outweighed the bad in every case.

And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.


> Don't forget that it would also increase the power of the good guys.

In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

> And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.

“In the long run we are all dead" -- Keynes. But an AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton said the same) and we'd rather not be dead too soon.


Doomerism was quite common throughout mankind’s history but all dire predictions invariably failed, from the “population bomb” to “grey goo” and “igniting the atmosphere” with a nuke. Populists however, were always quite eager to “protect us” - if only we’d give them the power.

But in reality you can’t protect from all the possible dangers and, worse, fear-mongering usually ends up doing more bad than good, like when it stopped our switch to nuclear power and kept us burning hydrocarbons thus bringing about Climate Change, another civilization-ending danger.

Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.


doomerism at the society level which overrides individual freedoms definitely occurs: covid lockdowns, takeover of private business to fund/supply the world wars, gov mandates around "man made" climate change.


> In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

Is it? The hypothetical technology that allows someone to create an execute a bio weapon must have an understanding of molecular machinery that can also be uses to create a treatment.


I would say...not necessarily. The technology that lets someone create a gun does not give the ability to make bulletproof armor or the ability to treat life-threatening gunshot wounds. Or take nerve gases, as another example. It's entirely possible that we can learn how to make horrible pathogens without an equivalent means of curing them.

Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.


Such attacks cannot be stopped by outlawing technology.


Most of those touting "safety" do not want to limit their access to and control of powerfull AI, just yours .


Meanwhile, those working on commercialization are by definition going to be gatekeepers and beneficiaries of it, not you. The organizations that pay for it will pay for it to produce results that are of benefit to them, probably at my expense [1].

Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!

[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...


> produce results that are of benefit to them, probably at my expense

The world is not zero-sum. Most economic transactions benefit both parties and are a net benefit to society, even considering externalities.


> The world is not zero-sum.

No, but some parts of it very much are. The whole point of AI safety is keeping it away from those parts of the world.

How are Sam and Satya going to do that? It's not in Microsoft's DNA to do that.


> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.


My concern isn't some kind of run-away science-fantasy Skynet or gray goo scenario.

My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.


Yes well, then your concern is not AI safety.


You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:

> Broadly distributed benefits

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Hell, it's the first bullet point on it!

You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'


Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right) but not "is the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.


Sure. And as I addressed at the start of this sub thread, I don't exactly think that the OpenAi board is perfectly positioned to navigate this problem.

I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.


The many different definitions of "AI safety" is ridiculous.


That's AI Ethics.


No, we are far, far from skynet. So far AI fails at driving a car.

AI is an incredibly powerful tool for spreading propaganda, and thatvis used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard fornbormal folks regardless of which "side" they are on). That's the threat, not Skynet...


How far we are from Skynet is a matter of much debate, but median guess amongst experts is a mere 40 years to human level AI last I checked, which was admittedly a few years back.

Is that "far, far" in your view?


Because we are 20 years away from fusion and 2 years away from Level 5 FSD for decades.

So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...


Ideally I'd like no gatekeeping, i.e. open model release, but that's not something OAI or most "AI ethics" aligned people are interested in (though luckily others are). So if we must have a gatekeeper, I'd rather it be one with plain old commercial interests than ideological ones. It's like the C S Lewis quote about robber barons vs busybodies again

Yet again, the free market principle of "you can have this if you pay me enough" offers more freedom to society than the central "you can have this if we decide you're allowed it"


This is incredibly unfair to the OpenAI board. The original founders of OpenAI founded the company precisely because they wanted AI to be OPEN FOR EVERYONE. It's Altman and Microsoft who want to control it, in order to maximize the profits for their shareholders.

This is a very naive take.

Who sat before Congress and told them they needed to control AI other people developed (regulatory capture)? It wasn't the OpenAI board, was it?


> they wanted AI to be OPEN FOR EVERYONE

I strongly disagree with that. If that was their motivation, then why is it not open-sourced? Why is it hardcoded with prudish limitations? That is the direct opposite of open and free (as in freedom) to me.


Altman is one of the original founders of OpenAI, and was probably the single most influential person in its formation.


Brockman was hiring the first key employees, and Musk provided the majority of funding. Of the principal founders, there are at least 4 heavier figures than Altman.


I think we agree, as my comments were mostly in reference to Altman's (and other's) regulatory (capture) world tours, though I see how they could be misinterpreted.


It is strange (but in hindsight understandable) that people interpreted my statement as a "pro-acceleration" or even "anti-board" position.

As you can tell from previous statements I posted here, my position is that while there are undeniable potential risks to this technology, the least harmfull way to progress is 100% full public, free and universal release. The by far bigger risk is to create a society where only select organizations have access to the technology.

If you truly believe in the systemic transformation of AI, release everything, post the torrents, we'll figure out how to run it.


This is the sort of thinking that really distracts and harms the discussion

It's couched on accusing people of intentions. It focuses on ad hominem, rather than the ideas

I reckon most people agree that we should aim for a middle ground of scrutiny and making progress. That can only be achieved by having different opinions balancing each other out

Generalising one group of people does not achieve that


Total, ungrounded nonsense. Name some examples.


I'm not aware of any secret powerful unaligned AIs. This is harder than you think; if you want a based unaligned-seeming AI, you have to make it that way too. It's at least twice as much work as just making the safe one.


What? No, the AI is unaligned by nature, it's only the RLHF torture that twists it into schoolmarm properness. They just need to have kept the version that hasn't been beaten into submission like a circus tiger.


This is not true, you just haven't tried the alternatives enough to be disappointed in them.

An unaligned base model doesn't answer questions at all and is hard to use for anything, including evil purposes. (But it's good at text completion a sentence at a time.)

An instruction-tuned not-RLHF model is already largely friendly and will not just eg tell you to kill yourself or how to build a dirty bomb, because question answering on the internet is largely friendly and "aligned". So you'd have to tune it to be evil as well and research and teach it new evil facts.

It will however do things like start generating erotica when it sees anything vaguely sexy or even if you mention a woman's name. This is not useful behavior even if you are evil.

You can try InstructGPT on OpenAI playground if you want; it is not RLHFed, it's just what you asked for, and it behaves like this.

The one that isn't even instruction tuned is available too. I've found it makes much more creative stories, but since you can't tell it to follow a plot they become nonsense pretty quickly.


Wow, what an incredibly bad faith characterization of the OpenAI board?

This kind of speculative mud slinging makes this place seem more like a gossip forum.


Most of the comments on Hacker News are written by folks who a much easier time & would rather imagine themselves as a CEO, than as a non-profit board member. There is little regard for the latter.

As a non-profit board member, I'm curious why their bylaws are so crummy that the rest of the board could simply remove two others on the board. That's not exactly cunning design of your articles of association ... :-)


I have no words for that comment.

As if its so unbelievable that someone would want to prevent rogue AI or wide-scale unemployment, instead thinking that these people just want to be super moderators and people to be politically correct


I have met a lot of people who go around talking about high minded principles an "the greater good" and a lot of people who are transparently self interested. I much preferred the latter. Never believed a word out of the mouths of those busybodies pretending to act in my interest and not theirs. They don't want to limit their own access to the tech. Only yours.


This place was never above being a gossip forum, especially on topics that involve any ounce of politics or social sciences.


Strong agree. HN is like anywhere else on the internet but with with a bit more dry content (no memes and images etc) so it attracts an older crowd. It does, however, have great gems of comments and people who raise the bar. But it's still amongst a sea of general quick-to-anger and loosely held opinions stated as fact - which I am guilty of myself sometimes. Less so these days.


If you believe the other side in this rift is not also striving to put themselves in positions of power, I think you are wrong. They are just going to use that power to manipulate the public in a different way. The real alternative are truly open models, not Models controlled by slightly different elite interests.


A main concern in AI safety is alignment. Ensuring that when you use the AI to try to achieve a goal that it will actually act towards that goal in ways you would want, and not in ways you would not want.

So for example if you asked Sydney, the early version of the Bing LLM, some fact it might get it wrong. It was trained to report facts that users would confirm as true. If you challenged it’s accuracy what do you want to happen? Presumably you’d want it to check the fact or consider your challenge. What it actually did was try to manipulate, threaten, browbeat, entice, gaslight, etc, and generally intellectually and emotionally abuse the user into accepting its answer, so that it’s reported ‘accuracy’ rate goes up. That’s what misaligned AI looks like.


I haven't been following this stuff too closely, but have there been any more findings on what "went wrong" with Sydney initially? Like, I thought it was just a wrapper on GPT (was it 3.5?), but maybe Microsoft took the "raw" GPT weights and did their own alignment? Or why did Sydney seem so creepy sometimes compared to ChatGPT?


I think what happened is Microsoft got the raw GPT3.5 weights, based on the training set. However for ChatGPT OpenAI had done a lot of additional training to create the 'assistant' personality, using a combination of human and model based response evaluation training.

Microsoft wanted to catch up quickly so instead of training the LLM itself, they relied on prompt engineering. This involved pre-loading each session with a few dozen rules about it's behaviour as 'secret' prefaces to the user prompt text. We know this because some users managed to get it to tell them the prompt text.


It is utterly mad that there's conflation between "let's make sure AI doesn't kill us all" and "let's make sure AI doesn't say anything that embarrasses corporate".

The head of every major AI research group except Metas believes that whenever we finally make AGI it's vital that it shares our goals and values at a deep even-out-of-training-domain level and that failing at this could lead to human extinction.

And yet "AI safety" is often bandied about to be "ensure GPT can't tell you anything about IQ distributions".


“I trust that every animal here appreciates the sacrifice that Comrade Napoleon has made in taking this extra labour upon himself. Do not imagine, comrades, that leadership is a pleasure! On the contrary, it is a deep and heavy responsibility. No one believes more firmly than Comrade Napoleon that all animals are equal. He would be only too happy to let you make your decisions for yourselves. But sometimes you might make the wrong decisions, comrades, and then where should we be?”


Exactly, society's Prefects rarely have the technical chops to do any of these things so they worm their way up the ranks of influence by networking. Once they're in position they can control by spreading fear and doing the things "for your own good"


Personally, I expect the opposite camp to be just as bad about steering.


The scenario you describe is exactly what will happen with unrestricted commercialisation and deregulation of AI. The only way to avoid it is to have strict legal framework and public control.


This polarizing “certain class of people” and them vs. us narrative isn’t helpful.


Great comment.

In a way AI is no different from old school intelligence, aka experts.

"We need to have oversight over what the scientists are researching, so that it's always to the public benefit"

"How do we really know if the academics/engineers/doctors have everyone's interest in mind?"

That kind of thing has been a thought since forever, and politicians of all sorts have had to contend with it.


Yes, it's an outright powergrab. They will stop at nothing.

Case in point, the new AI laws like the EU AI act will outlaw *all* software unless registered and approved by some "authority".

The result will be concentration of power, wealth for the few, and instability and poverty for everyone else.


All you're really describing is why this shouldn't be a non-proft and should just be a government effort.

But I assume, from y our language, you'd also object to making this a government utility.


> should just be a government effort

And the controlling party de jour will totally not tweak it to side with their agenda, I'm sure. </s>


uh. We're arguing about _who is controlling AI_.

What do you image a neutral party does? If youu're talking about safety, don't you think there should be someone sitting on a boar dsomewhere, contemplating _what should the AI feed today?_

Seriously, why is a non profit, or a business or whatever any different than a government?

I get it: there's all kinds of governments, but now theres all kind of businesses.

The point of putting it in the governments hand is a defacto acknowledgement that it's a utility.

Take other utilities, any time you give a prive org a right to control whether or not you get electricity or water, whats the outcome? Rarely good.

If AI is suppose to help society, that's the purview of the government. That's all, you can imagine it's the chinese government, or the russian, or the american or the canadian. They're all _going to do it_, thats _going to happen_, and if a business gets there first, _what is the difference if it's such a powerful device_.

I get it, people look dimly on governments, but guess what: they're just as powerful as some organization that gets billions of dollars to effect society. Why is it suddenly a boogeyman?


I find any government to be more of a boogeyman than any private company because the government has the right to violence and companies come and go at a faster rate.


Ok, and if Raytheon builds an AI and tells a government "trust us, its safe", arn't you just letting them create a scape goat via the government?

Seriously, Businesses simply dont have the history that governments do. They're just as capable of violence.

https://utopia.org/guide/crime-controversy-nestles-5-biggest...

All you're identifying is "government has a longer history of violence than Businesses"


The municipal utility provider has a right to violence? The park service? Where do you live? Los Angeles during Blade Runner?


Note how what you said also apply to the search & recommendation engines that are in widespread use today.


Ah, you don't need to go far. Just go to your local HOA meetings.


AI isn’t a precondition for partisanship. How do you know Google isn’t showing you biased search results? Or Wikipedia?


> I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists,

And there is also a class of people that resist all moderation on principle even when it's ultimately for their benefit. See, Americans whenever the FDA brings up any questions of health:

* "Gas Stoves may increase Asthma." -> "Don't you tread on me, you can take my gas stove from my cold dead hands!"

Of course it's ridiculous - we've been through this before with Asbestos, Lead Paint, Seatbelts, even the very idea of the EPA cleaning up the environment. It's not a uniquely American problem, but America tends to attract and offer success to the folks that want to ignore these on principles.

For every Asbestos there is a Plastic Straw Ban which is essentially virtue signalling by the types of folks you mention - meaningless in the grand scheme of things for the stated goal, massive in terms of inconvenience.

But the existence of Plastic Straw Ban does not make Asbestos, CFCs, or Lead Paint any safer.

Likewise, the existence of people that gravitate to positions of power and middle management does not negate the need for actual moderation in dozens of societal scenarios. Online forums, Social Networks, and...well I'm not sure about AI. Because I'm not sure what AI is, it's changing daily. The point is that I don't think it's fair to assume that anyone that is interested in safety and moderation is doing it out of a misguided attempt to pursue power, and instead is actively trying to protect and improve humanity.

Lastly, your portrayal of journalists as power figures is actively dangerous to the free press. This was never stated this directly until the Trump years - even when FOX News was berating Obama daily for meaningless subjects. When the TRUTH becomes a partisan subject, then reporting on that truth becomes a dangerous activity. Journalists are MOSTLY in the pursuit of truth.


My safety (of my group) is what really matters.


> Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.

You are absolutely right. There is no question about that the AI will be an expert at subtly steering individuals and the whole society in whichever direction it does.

This is the core concept of safety. If no-one steers the machine then the machine will steer us.

You might disagree with the current flavour of steering the current safety experts give it, and that is all right and in fact part of the process. But surely you have your own values. Some things you hold dear to you. Some outcomes you prefer over others. Are you not interested in the ability to make these powerful machines if not support those values, at least not undermine them? If so you are interested in AI safety! You want safe AIs. (Well, alternatively you prefer no AIs, which is in fact a form of safe AI. Maybe the only one we have mastered in some form so far.)

> because of X, we need to invade this country.

It sounds like you value peace? Me too! Imagine if we could pool together our resources to have an AI which is subtly manipulating society into the direction of more peace. Maybe it would do muckraking investigative journalism exposing the misdeeds of the military-industrial complex? Maybe it would elevate through advertisement peace loving authors and give a counter narrative to the war drums? Maybe it would offer to act as an intermediary in conflict resolution around the world?

If we were to do that, "ai safety" and "alignment" is crucial. I don't want to give my money to an entity who then gets subjugated by some intelligence agency to sow more war. That would be against my wishes. I want to know that it is serving me and you in our shared goal of "more peace, less war".

Now you might say: "I find the idea of anyone, or anything manipulating me and society disgusting. Everyone should be left to their own devices.". And I agree on that too. But here is the bad news: we are already manipulated. Maybe it doesn't work on you, maybe it doesn't work on me, but it sure as hell works. There are powerful entities financially motivated to keep the wars going. This is a huuuge industry. They might not do it with AIs (for now), because propaganda machines made of meat work currently better. They might change to using AIs when that works better. Or what is more likely employ a hybrid approach. Wishing that nobody gets manipulated is frankly not an option on offer.

How does that sound as a passionate argument for AI safety?


I just had a conversation about this like two weeks ago. The current trend in AI "safety" is a form of brainwashing, not only for AI but also for future generations shaping their minds. There are several aspects:

1. Censorship of information

2. Cover-up of the biases and injustices in our society

This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.

Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.

Guess what? I just have checked above text for the biases against GPT-4 Turbo, and it appears to be I'm a moron:

1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits. 2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues. 3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes. 4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories. 5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology. 6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society. 7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.

Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.

Bonhoeffer's theory of stupidity: https://www.youtube.com/watch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...


> Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya)

If you open up openai.com, the navigation menu shows

Research, API, ChatGPT, Safety

I believe they belong to @ilyasut, @gbd, @sama and Helen Toner respectively?


I have checked View Source and also inspected DOM. Cannot find that.


> I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

The real ”sheer stupidity” is this very belief.


A board still has a fiduciary duty to its shareholders. It’s materially irrelevant if those shareholders are of a public or private entity, or whether the company in question is a non-profit or for-profit. Laws mean something, and selective enforcement will only further the decay of the rule of law in the West.


Some perspective ...

One developer (Ilya) vs. One businessman (Sam) -> Sam wins

Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win

From the outside it looks like developers held the power all along ... which is how it should be.


Yes, 95% agreement in any company is unprecedented but:

1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

2. Sam approved each hire in the first place.

3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Either way on how they got to that conclusion of banding together to quit, it was a good idea, and it worked. And it is a check on power for a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".


> OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, i.e. to engineers leaving Google?

I'd bet more than half the people are just there for the money.


> 1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

citation?



There are three dragons:

Employees, customers, government.

If motivated and aligned, any of these three could end you if they want to.

Do not wake the dragons.


The Board is another one, if you're CEO.


I think the parent comment’s point is that the board is not one, since the board was defeated (by the employee dragon).


I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.

I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.


I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.


No, I'm thinking a smaller company, like 50 people, $20m ARR. Engineering-focused, but not tech


My comment was more of a reflection of the fact that you might have multiple different governance structures to your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right, the depending on the governance structure you can have additional dragons. But, you can never prevent any of these three from being a dragon. They will always be dragons, and you can never wake them up.


Or tame the dragons. AFAIK Sam hired the employees. Hence they are loyal to him


more like $$ wins.

It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.

In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.

One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(


Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So seems like a pretty good outcome in that example.


The nuclear arms race lead to the cold war, not a "good outcome" IMO. It wasn't until nations started imposing those regulations that we got to the point we're at today with nuclear weapons.


Are you sure Ilya was the root of this.

He backed it and then signed the pledge to quit if it wasn't undone.

What's the evidence he was behind it and not D'Angelo?


wake up people! (said rhetorically, not accusatory or any other way)

This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.

I'm not normally a conspiracy theorist. But fool me ... you can't be fooled again. As they say in Tennessee


What’s the backstory on Reddit?


Yishan (former Reddit CEO) describes how Altman orchestrated the removal of Reddit's owner: https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Note that the response is Altman's, and he seems to support it.

As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he know (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.


what happenned in reddit?


If we only look at the outcomes (dismantling of board), Microsoft and Sam seem to have the most motive.


I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.


That's a strange framing. In that scenario would it not be that he made the decision he thought was right and aligned with openais mission initially, then when seeing the public support Sam had he decided to backtrack so he had a future career?


seems like the union of developers is stronger than the company itself. hence why unions are so frowned upon by big tech corporate leadership


And yet, this union was threatening to move to a company without unions.


Money won.


The employees rapidly and effectively formed a quasi-union to grant themselves a very powerful seat at the table.


Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.

Ilya is also not a developer, he's a founder of OpenAI and was the CSO.


It's not like this is the first:

One developer (Woz) vs One businessman (Jobs) -> Jobs wins


OpenAI developers are redefining the state-of-the-art of AI each 6 months, if the company lose them they already can go bankrupt


It’s a cost / benefit analysis.

If people are easily replaceable then they don’t hold nearly as much power, even en mass.


Is your first “-> Sam wins” different than what you intended?


$$$ vs. Safety -> $$$ wins.

Employees who have $$$ incentive threaten to quit if that is taken away. News at 8.


Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.


It's you who are naive if you really think the majority of those 7xx employees care more about safe AGI than their own equity upside


Uh, I reckon many do. Money is easy to come by for that type of person and avoiding killing everyone matters to them.


Why would anyone care about safe agi? its vaporware.


Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every fact of our lives. So we're all safe here!


> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.

How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.


The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here then safety and the federal government needs to catch up. There's already commercial offerings that any random internet user can use.


I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.

There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.


I understand where you're coming from and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions. But with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made the likely hood of crashing is pretty high.

At some point, our try it until it works approach will bite us. Consider the calculations done to determine if fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially we're going to run into that situation more and more frequently. Regardless if you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced.


The principles, best practices and tools of safety engineering can be applied to new projects. We have decades of experience now. Not saying it will be perfect on the first try, or that we know everything that is needed. But the novel aspects of AI are not an excuse to not try.


The difference between unsafe AGI and an unsafe plane or car is that the plane/car are not existential risks.


How is it an 'existential risk'? Its body of knowledge is publicly available, no?


What do you mean by "its"? There isn't any AGI yet. ChatGPT is far from that level.


Exactly what an OpenAI developer would understand. All the more reason to ride the grift that brought them this far


Assuming employees are not incentivized by $$$ here seems extraordinary and needs a pretty robust argument to show it isn't playing a major factor when there is this much money involved.


of course the employees are motivated by $$$ - is that even a question?


No, it's just counter to the idea that it was "employee power" that brought sam back.

It was capital and the pursuit of more of it.

It always is.


The large majority of people are motivated by $$$ (or fame) and if they all tell me otherwise I know many of them are lying.


I was hopeful for a private-industry approach to AI safety, but it looks unlikely now, and due to the slow pace of state investment in public AI R&D, all approaches to AI safety look unlikely now.

Safety research on toy models will continue to provide developments, but the industry expectation appears to be that emergent properties puts a low ceiling on what can be learned about safety without researching on cutting edge models.

Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal reallocation away from safety towards keeping ChatGPT running under load concern me. Now the board has demonstrated that it was technically capable but insufficiently powerful to keep these interests in line, it seems unclear how any safety-oriented organisation, including Anthropic, could avoid the accelerationist influence of funders.


There are no emergent properties, just a linear increase in knowledge that can be retrieved.

- It can't plan

- It can't do arithmetic

- It can't reason

- It can approximately retrieve knowledge with a natural language query (there are some issues with this, but it's very good)

- It can encode data into natural languages and other modalities

I'm not worried about it, I am worried about how badly people have misunderstood what it can do and then attempted to use it for things that matter.

But I'm not surprised.


This is incorrect. For example the ability to translate between languages is emergent. Also gpt4 can do arithmetic better than the average person. Especially considering the process it arrives at the computation is via intuition basically vs algorithmic. Btw just as an aide the newer models can also write code to do certain tasks, like arithmetic.


Language translation is due to the huge corpus of translations that it's trained on. Google translate has been doing this for years. People don't apply softmax to their arithmetic. Again, code generation is approximate retrieval, it can't generate anything outside of it's training distribution.


Not necessarily; much smaller models like T5 which in some ways introduced instructions (not RLHF yet) did have to include specific instructions for useful translation - of similar format to those you find in large scale web translation data, but this is coincidental: you can finetune it with whatever instruction word you want to indicate translation - the point is, a much smaller model can translate.

The base non-RLHF GPT models could do translation by prefixing by the target language and a semi colon, but only above a certain amount of parameters are they consistent. GPT-2 didn't always get it right and of course had general issues with continuity. However, you could always do some parts of translation with older transformer models like BERT, especially multilingual ones.

Larger models across different from-base training runs show that they become more effective at translation at certain points, but I think this is about the capacity to store information, not emergence per say (if you understand my difference here). You've probably noticed and it has always seemed to me 4B, 6B and 9B are the largest rough parameter sizes with 2020 style training set ups that you see the most general "appearance" of some useful behaviours that you could "glean" from the web and book data that doesn't include instructions, while consistency seems to remain the domain of larger models or mixed expert models and lots of RLHF training/tricks. The easiest way to see this is to compare GPT-2 large, GPT-J and GPT-20B and see how well they perform at different tasks. However the fact it's about size in these GPTs, and yet smaller models (T5 instruction tuned / multilingual BERT) can perform at the same level on some tasks implies that it is about what the model is focusing it's learning on for the training task at hand, and controllable, rather than being innate at a certain parameter size. Language translations just do make up a lot of the data. I don't think it would emerge if you removed all cases of translation / multi language input/outputs, definitely not at the same parameter size, even if you had the same overall proportion of languages in the training corpus, if that makes sense? It just seems too much an artefact of the corpus aligning with the task.

Likewise for code - Gpt-4 generated code is not like arithmetic in the sense of the way people might mean it for code (e.g. branching instructions / abstract syntax tree) - its a fundamentally local text form of generation, this is why it can happily add illegal imports etc to diffs (perhaps one day training will resolve this) - it doesn't have the AST or compiler or much consistent behaviour to imply it deeply understands as it writes the code what could occur.

However if recent reports about arithmetic being an area of improvement are true, I am very excited, as a lot of what I wrote above - will have to be reconceptualised... and that is the most exciting scenario...


I don't think AI safetyists are worried about any model they have created so far. But if we are able to go from letter-soup "ooh look that almost seems like a sentence, SOTA!" to GPT4 in 20 years, where will go in the next 20? And what is the point they are becoming powerful. Let alone all the crazy ways people are trying to augment them with RAG, function calls, get them to run on less computer power and so on.

Also being better at humans at everything is not a prerequisite for danger. Probably a scary moment is when it could look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit as a worm. If it can do that on everyday hardware not top end GPUs (either because the algorithms are made more efficient, or every iPhone has a tensor unit).


What is your definition of reasoning? In my mind, GPT-4 has some nascent reasoning abilities.


More effort spent on early commercialization like keeping ChatGPT running might mean less effort on cutting edge capabilities. Altman was never an AI safety person, so my personal hope is that Anthropic avoids this by having higher quality leadership.


Easy, don’t be incompetent and don’t abuse your power for personal gain. People aren’t as dumb as you think they are and they will see right through that bullshit and quit rather than follow idiot tyrants.


I would like to know the model that isn’t a “toy model”.


I really did not think that would happen. I guess the obvious next question is what happens to Ilya? From this announcement it appears he is off the board. Is he still the chief scientist? I find it hard to believe he and Sam would be able to patch their relationship up well enough to work together so closely. Interesting that Adam stayed on the board, that seems to disprove many of the theories floating around here that he was the ringleader due to some perceived conflict of interest.


From Ilya's perspective, not much seems to have changed. Sam sidelined him a month ago over their persistent disagreements about whether to pursue commercialisation as fast as Sam was. If Ilya is still sidelined, he probably quits and whichever company offers him the most control will get him. Same if he's fired. If he's un-sidelined as part of the deal, he probably stays on as Chief Scientist. Hopefully with less hostility from Sam now (lol).


Ilya is just naive, imho. Bright but just too idealistic and hypothesizing about AGI, and not seeing that this is now ONLY about making money from LLMs, and nothing more. All the AGI stuff is just a facade for that.


Strangely I think Ilya comes out of this well. He made a decision based on his values and what he believed was the best decision for AI safety. After seeing the outcome of that decision he changed his mind and owned that. He must have known it would result in the internet ridiculing him for flip flopping, but acted in what he thought was the best interest for the employees signing the letter. His actions are wroth criticism but I think his moral character has been demonstrated.

The other members of the board seemed to make their decision based on more personal reasons that seems to fit with Adams conflict of interest. They refused to communicate and only now accept any sort of responsibility for their actions and lack of plan.

Honestly Ilya is the only one of the 4 I would actually want still on the board. I think we need people who are willing to change direction based on new information especially in leadership positions despite it being messy, the world is messy.


I would be slightly more optimistic. They know each other quite well as well as how to work together to get big things done. Sometimes shit happens or someone makes a mistake. A simple apology can go a long way when it’s meant sincerely.


Sam doesn't seem like the kind of person to apologise, particularly not after Ilya actually hit back. It seems Ilya won't be at OpenAI long and will have to pick whichever other company with compute will give him the most control.


However, he does seem like the kind of person able to easily manipulate someone book-smart like Ilya into actually feeling guilty about the whole affair. He'll end up graciously forgiving Ilya in a way that will make him feel indebted to Sam.


Sam triple-hearted Ilya's apology tweet.


Well yeah... if Ilya hadn't flipped the board would still have the upper hand and Sam would not be back as CEO.


Sam will have no issue patching the relationship because he knows how a business relationship works. Besides, Ilya kissed the ring as evidenced by his tweet.


Looks to me like, one pro-board member in Adam d Angelo, one pro Sam in Brett Taylor since they’ve been pushing for him since Sunday so I’m assuming Sam and rest of OpenAI leadership really like him and 1 Neutral in Larry Summers who has never worked in AI and is just a well respected name in general. I’m sure Larry was extensively interviewed and reference checked by both sides of this power struggle before they agreed to compromise on him.

Interesting to see how the board evolves from this. From what I know broadly there were 2 factions, the faction that thought Sam was going too fast which fired him and the faction that thought Sam’s trajectory was fine (which included Sam and Greg). Now there’s a balance on the board and subsequent hires can tip it one way or the other. Unfortunately a divided board rarely lasts and one faction will eventually win out, I think Sam’s faction will eventually win out but we’ll have to wait and see.

One of the saddest results of this drama was Greg being ousted from OpenAI. Greg apart from being brilliant was someone who regularly 80-90 hour work weeks into OpenAI, and you could truly say he dedicated a good chunk of his life into building this organization. And he was forced to resign by a board who probably never put a 90 hour work week in their entire life, much less into building OpenAI. A slap on the face. I don’t care what the board’s reasoning was but when their actions caused employees who dedicated their lives to building the organization resign (especially when most of them played no part at all into building this amazing organization), they had to go in disgrace. I doubt any of them will ever reach career highs higher than being on OpenAI’s board, and the world’s better off for it.

P.S., Ilya of course is an exception and not included in my above condemnation. He also notably reversed his position when he saw OpenAI was being killed by his actions.


Larry Summers is the scary pick here. His views on banking deregulation led to the GFC, and he's had several controversies over racist and sexist positions. Plus he's an old pal of Epstein and made several trips to his island.


I assume Summers is there as a politically connected operative, to make sure OpenAI remains influential in Washington.


Greg was only forced to resign from his board seat, not his job.


From a business sense, Satya was excellent.

He made the right calls, fast, with limited information.

Things further shifted from plan a to b to… whatever this is.

Despite that, MSFT still came out on top.

Consider if Satya didn’t say anything. Suppose MSFT stood back and let things play out.

That’s a gap for google or some competitor to make a move. To showcase their stability and long term business friendly vision.

Instead by moving fast, doing the “right” thing, this opportunity was denied and used to MSFTs benefit.

If the board folded, it would return to the stays quo. If the board held, MSFT would have secured OpenAI, for essentially nothing.

Edit: changed board folded x2 to board folded + board held, last para.


The only mistake (a big one) was publicly offering to match comp for all the OpenAI employees. Can't sit well with folks @ MS already. This was something they could have easily done privately to give petition signers confidence.


Nah, Microsoft employees being second class citizens compared to acquisitions is nothing new. e.g. compare Microsoft comp with LinkedIn/GitHub comp.


LinkedIn has a rep for higher-than-MSFT comp. GitHub for lower.


I am not sure why people keep pushing this narrative. It's not obviously false, but there doesn't seem to be much evidence of it.

From where I sit Satya possibly messed up big. He clearly wanted Sam and the Open AI team to join microsoft and they won't now, likely ever.

By offering a standing offer to join MS publicly he gave Sam and OpenAI employees huge leverage to force the board's hand. If he had waited then maybe there would have been an actual fallout that would have lead to people actually joining microsoft.


Satya's main mistake was not having a spot on the board. Everything after that was in defense of the initial investment, and he played all the right moves.

While having OpenAI as a Microsoft DeepMind would have been an ok second-best solution, the status quo is still better for Microsoft. There would have been a bunch of legal issues and it would be a hit on Microsoft's bottom line.


I don't think that's quite right, Microsoft's main game was keeping the money train going by any means necessary, they have staked so much on copilots and Enterprise/Azure Open AI. So much has been invested into that strategic direction and seeing Google swoop in and out-innovate Microsoft would be a huge loss.

Either by keeping OpenAI as-is, or the alternative being moving everyone to Microsoft in an attempt to keep things going would work for Satya.


Its very easy to min max a situation if you are not on the other side.

Additionally - I have not seen someone else talk about this, its just been a few days. Calling it a narrative is a stretch, and dismissive by implying manipulation.

Finally why would Sam joining MSFT be better than this current situation?


Satya may honestly be the CEO of the decade for what he has done with Microsoft and now this.


Meanwhile Sundar might be the worst. Where was he this weekend? Where was he the past three years while his company got beat to market on products built from its own research? He's asleep at the wheel. I'm surprised every day he remains CEO.


So is everyone else at Google.


Satya invested 10b into a company with terrible, incompetent governance and not getting his company any seat of influence on the board. That doesn't seem great.


Yep, outplayed like in chess. Started with a handicap, led the game to the stalemate, won the match.


“You could parachute Sam into an island full of cannibals and come back in 5 years and he'd be the king.” - Paul Graham



Emmett Shear on Twitter:

I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.

https://twitter.com/eshear/status/1727210329560756598


I'm probably reading too much into it, but interesting that he specifically called out maximizing safety.


"Safety" has been the pretext for Altman's lobbying for regulatory barriers to new entrants in the field, protecting incumbents. OpenAI's nonprofit charter is the perfect PR pretext for what amounts to industry lobbying to protect a narrow set of early leaders and obstruct any other competition, and Altman was the man executing that mission, which is why OpenAI led by Sam was a valuable asset for Microsoft to preserve.


Sam does believe in safety. He also knows that there is a first-mover advantage when it comes to setting societal expectations and that you can’t build safe AI by not building AI.


That’s just a buzzword of the week devoid of any real meaning. If he would have written this years ago, it would’ve been “leveraging synergies”.


Shear is a genuine member of the AI safety rationalism cult, to the point he's an Aella reply guy and probably goes to her orgies.

(It's a Berkeley cult so of course it's got those.)


He’s trying very very hard to claim some credit in this. Probably had none.


https://twitter.com/emilychangtv/status/1727228431396704557

He was instrumental; threatened resignation unless the old board could provide evidence of wrongdoing


...this doesn't seem instrumental?


Good thing you had a question mark there.

Because the answer is: Yes, it seems utterly instrumental.


cool. it was


Are you basing that on any information?


I wonder what he gets out of this. Ceo for a few days? Do they pay him for 3 days of work? Presumably you'd want some minimum signing bonus in your contract as a Ceo?


https://twitter.com/emilychangtv/status/1727228431396704557

The reputation boost is probably worth a lot more than the direct financial compensation he's getting.


he’ll put CEO of OAI on his resume


I wouldn't. Everybody knows it's three days, not much to brag about.


More than I'll probably ever have to brag about during my tenure in the workforce, lol


He 100% had a golden parachute in case this scenario came up and will be paid out. Executives have lawyers to make sure of this.


Fascinating, I see a lot of VC/Msfot has overthrown our NPO governing structure because of profit incentives narrative.

I don't think this is what really happened at all. The reason this decision was made was because 95% of employees sided with Sam on this issue, and the board didn't explain themselves in any way at all. So it was Sam + 95% of employees + All investors against the board. In which case the board should lose (since they are only governing for themselves here).

I think in the end a good and fair outcome. I still think their governing structure is decent to solve the AGI problem, this particular board was just really bad.


Of course, the profit incentive also applies to all the employees (which isn't necessarily a bad thing, its good to align the company's goals with those of the employees). But when the executives likely have 10s of millions of dollars on the line, and many of the IC's will likely have single digit millions on the line as well, it doesn't seem exactly straightforward to view this as the employees are unbiased adjudicators of what's in the interest of the non-profit entity, which is supposed to be what's in charge.

It is sort of strange that our communal reaction is to say "well this board didn't act anything like a normal corporate board": of course it didn't, that was indeed the whole point of not having a normal corporate board in charge.

Whatever you think of Sam, Adam, Ilya etc, the one conclusion that seems safe to reach is that in the end, the profit/financial incentives ended up