If nobody is willing to put their name on these statements, then there is absolutely zero reason to believe this is anything other than one of their competitors taking advantage of an opportune moment to weaken the front-runner.
This is worthless and shouldn't be on the HN homepage.
> We encourage former OpenAI employees to contact us at formerly_openai@mail2tor.com. We personally guarantee everyone's anonymity in any internal deliberations and public communications.
Sounds like they're just being intentionally protective of everyone's identities, which is understandable.
Also "shouldn't be on the HN homepage" is not really a pertinent comment, seeing as it is on the front page, meaning others (via the algorithm) do think it is a worthy submission.
This is unauthenticated, unverified, and anonymous. There is an enormous difference between journalism and this.
Anyone reading this thread right now could post random text to a website and claim to be a former employee. Anyone can make up anything at all. Is all of that content that belongs on the homepage? It would get upvoted.
Credibility matters when making accusations, and so does accountability. There is zero of either here. There are processes for actual whistleblowers; this isn't it. It's worse than "yellow journalism", and it's concerning that people don't see the difference.
> Also "shouldn't be on the HN homepage" is not really a pertinent comment, seeing as it is on the front page, meaning others (via the algorithm) do think it is a worthy submission.
I would argue that, as with a lot of articles that reach the front page, this one may be getting upvoted because folks are here for the discussion, not necessarily the featured article; it's just "another clue" as to what the heck is going on with this circus show.
That may very well be the case, but even then there's no reason to disallow it from the front page necessarily. Let readers make their own judgement about whether it's genuine. Heck, that's an interesting discussion in and of itself.
OpenAI has gone through at least one purge: Musk co-founded it but famously is no longer associated with it. There's been a huge amount of board turnover; it would be interesting to see the causes behind that.
Putting your name on something like this would make you unemployable anywhere, whether this is true or false. It makes perfect sense that nobody would do so.
I got fiercely attacked the other day because I suggested it might be OK to host grandma's chicken pot pie recipe over plain HTTP.
But then we turn around and accept an anonymous document making serious accusations, with no proof of authenticity or tamper resistance, as if it were gospel.
I seriously have no idea what the rules are anymore.
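For what it's worth, anonymity and verifiability aren't mutually exclusive. The authors could have signed the letter with a pseudonymous key: the key says nothing about who they are, but it lets readers confirm that later statements come from the same source and that the text hasn't been altered. A minimal sketch in Python using the `cryptography` package (the letter bytes and the idea of publishing the public key alongside the letter are my own illustration, not anything the authors actually did):

    # Hypothetical sketch of pseudonymous signing with Ed25519.
    # The key pair reveals nothing about the author's identity,
    # but the public key can vouch for every future statement,
    # and any edit to the signed text breaks verification.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    letter = b"Throughout our time at OpenAI, we witnessed ..."  # letter body

    # Author side: generate a throwaway identity, sign once, and
    # publish the letter, the signature, and the public key together.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    signature = private_key.sign(letter)

    # Reader side: verify the bytes against the published public key.
    try:
        public_key.verify(signature, letter)
        print("valid: same pseudonymous author, text untampered")
    except InvalidSignature:
        print("invalid: altered text or a different author")

Point being, "protecting identities" (the letter's stated goal) and "giving readers something to verify" are compatible; a published key would cost the authors nothing in anonymity.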
OpenAI has been around for 8 years and has hundreds of employees. So it would be surprising if there weren't at least a few ex-employees somewhere with chips on their shoulders, for valid or completely invalid reasons. The letter is anonymous, so we have no way of knowing whether it's 1 or 2 people or 100. As such, the letter is fairly useless.
I'd say the number of employees openly standing with Altman is a much more important data point.
Journalists who work with whistleblowers typically validate that the person in question is reliable and is who they say they are. How do we know this was written by former employees?
This, honestly, feels like it ought to be removed. It is not a reliable letter without names or some kind of verification, just a random post on a website.
These are interesting, because to an outside observer the allegations described here don't seem particularly damning, with the exception of the hostile-work-environment allegation against Greg. Yet that one significant allegation is completely irrelevant to the larger claims made in the letter.
It seems like they are making very strong statements but not really supporting them in the "details", and they didn't sign it. I'm not sure I can take anonymous allegations with minimal detail at face value.
> we witnessed a disturbing pattern of deceit and manipulation ...
Hard to know how authentic this comment is, but it's worth noting that none of these points were mentioned or hinted at in the recent explanation(s) presented to OpenAI employees: https://news.ycombinator.com/item?id=38356534, which is at least not consistent (bad pun intended).
Having said that, the recommendations to expand the scope of the investigation and to protect (but also verify) the identities sound very reasonable. Probably the first reasonable thing we've heard since Friday.
> driven by their insatiable pursuit of achieving artificial general intelligence (AGI)
> those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost
Indeed -- regardless of what actually happened, this kind of hyperbolic prose undermines their credibility. This is simply not how trustworthy adult professionals talk.
> Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.
From the outside looking in, I'm having a hard time understanding how commercializing GPT-3 and GPT-4 fails the "benefit of all humanity" test. I don't think these models are intelligent or world-changing, but offering an API for people to build on lit a fire under people's asses. Unless someone can produce a convincing argument that transparency is the wrong move, I don't get what the hubbub is about. Why would you oppose the API/sales side of the company but support the locked-down, isolated research side?
It feels like there's something we don't know. It's probably either an innocuous personal problem or a conflict of interests, but the lack of communication is making people assume the worst.
There's been a strong backlash in my community against releasing AI models, driven particularly by EA and VC rhetoric that the potential risks far outweigh the benefits.
This pushback seems quite surprising to me, since the same claims could be made about the internet, social media, YouTube, etc. - the very technologies that put this group into power/wealth.
My friends/family in India are far more pro-AI than my friends in Silicon Valley / the EU. And my artist friends seem more excited about it than my VC friends.
I think, subconsciously, a lot of people currently in power are fearful - perhaps because their power/wealth/way of life would be threatened by a faster rate of change - while those in India seem to see universal translation and the accessibility of global knowledge as an equalizing force.
These are broad generalizations, of course. But it reminds me of the backlash against nuclear power (despite the human cost of oil/gas being far higher than that of nuclear) or the CEQA/environmentally driven opposition to building more housing in California.
I don't think these individuals are ill-intentioned in their focus on AI safety, but they sit in an echo chamber where this technology has more potential to harm them (they have a lot to lose) than to benefit them (they are already at the top). So it's natural for them to feel threatened and to galvanize to slow down the rate of change.
I tend to be quite dissatisfied with the current state of the world: lots of violence (against humans and animals), oppression, and gatekeeping of knowledge driven by our colonialist history. So I bias toward a faster rate of change that could shift the balance of power from the Western military-industrial complex to the global south. AI excitement in the global south excites me!
I don't think the firing of Sam Altman was due to the commercialization of GPT-3 and GPT-4, and I don't think there has been any reliable information confirming that. Multiple statements at least partially contradict it, including:
Emmett Shear: "Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models." [1]
Brad Lightcap: "We can say definitively that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board." [2]
Also, I believe Adam D'Angelo made some pro-commercialization comments on Twitter earlier this year.
> From the outside looking in, I'm having a hard time understanding how commercializing GPT-3 and GPT-4 fails the "benefit of all humanity" test. I don't think these models are intelligent or world-changing, but offering an API for people to build on lit a fire under people's asses.
When it's controlled by the powerful, it stands mainly to benefit the powerful who control it. A current example is how they filter it: they can withhold from the public answers to specific questions that they themselves have access to.
Because at the moment ChatGPT is in its advertising phase, so you get rather cheap access to a pretty capable model.
As soon as competitors are eliminated (remember Altman's regulation push) and enough people depend on their service, they'll switch to full profit mode, including price hikes.
> But on Tuesday, Sam Altman, the chief executive of the San Francisco start-up OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful A.I. technology being created inside his company and others like Google and Microsoft.
That sounds more like "regulate everyone catching up to us, so we keep the benefit of experience because we were there first."
> he called for the United States to adopt a licensing and registration regime for AI models “above a crucial threshold of capabilities,” arguing that requiring government approval would help mitigate potential safety concerns.
> The trick, critics say, is where Altman draws that “crucial threshold.”
Let's say it's the 17th century and you've stumbled across a cadre of intimidatingly clever and charming aristocrats who believe they're on the verge of summoning an ancient Egyptian demon who will be either the salvation or the doom of humanity. It's critical that the ritual be fully understood and correctly performed if salvation is to be achieved and doom avoided.
These gentlemen then tell you that other groups are almost as near to the summoning as themselves, but that any approach other than their own is reckless and ultimately apocalyptic.
That's the headspace of the serious AI Safety people. Like the 17th-century occultists and their work, it's fanatical and zealous and not easily disprovable, because it relies on secret, mystified knowledge that outsiders can't possibly comprehend. To them, skepticism only reflects ignorance.
And yeah, it easily looks totally nutso and delusional from the outside, whether they're ultimately right or wrong.
When AI Safety talks go beyond brand protection (don't let Bing play Nazi roleplay) and into existential melodrama, you need to interpret it through this kind of lens.
Because it is not an objective goal but a subjective one. What I, a libertarian, anti-government individualist, believe will "benefit all of humanity" is very different from what a San Francisco-based, pro-government social democrat believes, and different again from what a theological authoritarian from one of the three big religions believes.
This has been one of my big issues with OpenAI from the start: the view of what will "benefit all of humanity" that they impose was and is clearly politically slanted, and IMO that is very dangerous.
But really, it's hard to take something like this seriously. It may be 100% legitimate, but most of the claims are nebulous at best - the sort of claims that, unless you're intimately involved in the day-to-day of the company, you're not going to understand. While we randoms on the internet are not the target audience, the board is not involved in the day-to-day either. So the only logical target could be the current employees, who either don't know about these situations... or turned a blind eye to them.
By shining a light on what has (reportedly) happened again, what is accomplished? Those who didn't know still don't know for sure. Those who ignored it the first time around are... what? Going to change sides? So they can be one of the 15 people remaining at the company after everyone else defects to Microsoft?
The timing is also perplexing. If things were really this terrible, why wait until after Sam et al. were fired or left to demand an investigation? Realistically, these accusations, even if true, are useless. The current employee base has already committed to leaving OpenAI if the board doesn't back down and undo everything it has done. If the board were to take this seriously and wait for the results of an investigation, which would take months, the entire employee base would already be gone.
Maybe it's an attempt to put a hit out on Sam & Crew at Microsoft. But MS really wants that AI juice. Some anonymous hearsay on the internet is not going to change their mind. A lot of these accusations can also be filed under "start-up growing pains". So MS isn't going to do anything about this.
Net-net, whether these claims are true or false, it changes nothing for anyone. The boulder of doom is already rolling toward OpenAI and the BOD.
The dirt in this screed is kind of beside the point.
Startups die by self-deception.
The lie that OpenAI was still pursuing its original mission was extremely transparent to every informed person inside and outside the company. Their attempts to distort reality were weak and useless. The chickens have come home to roost, and the company has been blown up as a result. It was only a matter of time.
You can tell the media and blogosphere are milking this debacle for everything it's worth.
The next article will be about how the employees' pets feel about Sam Altman getting ousted. There will be a picture of an aquarium with a cute little goldfish and the caption "something fishy is going on with the OpenAI board's decision to fire Sam Altman".
Do you think this account is acting in earnest? They claim that they (the Reddit user) and the board acted jointly and voiced concerns to Sam. The Reddit user makes it sound as if they are in direct contact with OpenAI leadership. Their word choice, however, comes off as junior rather than senior, like using “dude” or “bs.”
>> Their word choice, however, comes off as junior rather than senior, like using “dude” or “bs.”
Given the actions of the OpenAI board, that seems fitting, does it not? Regardless of the facts, the handling of this entire event comes off as "junior" to me.
I can believe this was written in earnest, but in the earnestness of someone who doesn't really understand how to build product, or (which would be worse) someone with standards so high they would never release anything (a profile I have seen many times in my life). So I can't take it seriously, since none of it reads as useful, or even valid, criticism.
I read through and believe they are being earnest. It's still a one-sided take, but it's a side we hadn't yet heard and it makes sense. It also reinforces today's general line of thinking vis-a-vis Microsoft's involvement (not to mention the Middle Eastern tour Sam was recently on).
All that said, even if this is 100% true, if you put yourself in the shoes of one of the board members you still have difficult decisions to make. If Sam, especially with Greg's support, believed he could run things dictatorially, then it is/was up to the board to decide how to deal with that (or not deal with it). It makes sense that the board ignored megalomaniacal behaviors for a long time because everything was up and to the right, but at some point recently they must have started feeling completely disintermediated, which became an existential threat (for them and probably for their perception of adequate corporate governance).
As they frequently say, this has the makings of a Harvard Business School case study, but let's see how everything pans out over the next couple of weeks before jumping to too many conclusions.
This post came out just after the board posted the original notice, but before the reasons for the removal became public. So it seems like it has too much factual detail, unknown at the time to outsiders, to be entirely fake.
It would be very easy to identify them from what they wrote, if what they're saying is true (they even say they were "in the room while this all went down").
I think it's real, but they're not as senior as they seem. Or, alternatively, they've taken a serious, stupid risk as senior leadership, which calls their judgement into question.
Idk, but they were posting details of the disagreements on Friday, a few hours ahead of any other sources, so I'm inclined to believe that they work at OpenAI.
This doesn't say anything that isn't already common knowledge, and it doesn't really jibe with the fact that 90% of staff signed a letter threatening to quit if they don't get Altman back.
On Friday their comments included a bunch of details that were not made public anywhere else at the time.
If they don't get Altman back, the company is toast, and they were about to get an option to sell their shares in a new funding round, so it would have been dumb for them not to sign it.
You know that any of us here could put this up in a matter of minutes with the help of ChatGPT. There is zero authenticity to such writings. I’m sure some won’t realise this in the heat of debate.
A friend of mine worked with Sam Altman and Emmett Shear and he gave me a letter to leadership to post on his behalf (he'd like to remain anonymous):
Pay me back for the lunch you ordered last week using my card. Also, you made a deal with me to pay me $1 million. Emmett, you promised to uphold this bargain. Everyone should put some pressure on them to give me the money. Also, stop pretending to not be from the Planet Zondar. We know you destroyed its sun by using concentrated Rutherfordium.
Unless a journalist verifies the author's identity, this could just as easily have been written by someone on 4chan as by a disgruntled employee.
Honestly none of these allegations are that bad. Cancelling projects, PIPing people who don’t deserve to be PIPed, surveillance of employees, rules that don’t apply to execs, etc is all standard operating procedure in tech.
"demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough."
I'm not a native speaker, so I missed that this is just "cancelling projects". Thanks for the clarification.