Poe AI is just a clear straight conflict, no question there.
On the governance matter, the thesis is a bit more shakey.
> Helen Toner and Tasha McCauley... participating in an... AI governance organization... calls into question the independence of their votes.
> I am not accusing anyone (to be clear, even the Board Directors that I consider conflicted) of having acted subject to conflicts of interest. [AKA "Just Asking Questions" technique]
> If... this act of governance was unwise, it calls into serious question the ability of these people and their organizations... to conduct governance
So they're conflicted because they're also in governance, and they shouldn't govern because they might have been conflicted.
It seems like the author's real problem isn't any specific conduct by these two board members, but more of a "you got chocolate in my peanut butter" issue.
> So they're conflicted because they're also in governance
I read it as saying that they were conflicted because they're both from the "highly ideological" Open Philanthropy; on a small board, having two people ideologically aligned seems precarious.
Not regarding the fundamental mission of the charity.
Ever seen a Catholic hospital with a Satan worshiper on the board?
If the mission of OpenAI and its reason for being created is to make sure AGI is kept in the public trust and not walled off by commercial forces then you’re not going to want people believing the opposite of that.
You don't have to actually act on your conflict of interest to be conflicted. It's enough to have the conflict of interest in the first place and that should be reason enough to resign. In fact: most board of directors realize this without being told and in case that they don't realize it there is usually a nice and handy document that they signed upon becoming a member of the board that pointedly reminds them of this fact.
I think there are a bunch of problems in the timeline, not least the ongoing mismatch between statements and filings, and the lack of anything in 2022.
It does look like governance very much played second fiddle, and the unsurprising outcome of that was that governance hasn't worked very well. I don't know who can rightfully take the blame for that, though, other than the Chair and maybe CEO. If the board wasn't fit, it was their job to fix it.
How can you have a conflict of interest with a charity?
It’s not a business. It’s not competing for business. It’s a charity.
Like if you’re on the board of a charity fighting cancer is it a conflict to be on a board of another charity fighting AIDS? Or also part of a for profit company fighting cancer?
Of course not. You’d have a conflict of interest if you had a relationship that was opposed to the charity’s mission like a tobacco company, or if you were personally profiting off your role with the charity.
The post here doesn’t articulate why these are conflicts of interest.
This thread and all the other 15 threads about all this start with the tacit assumption that OpenAI is a high growth tech company, with investors and customers and founders and so on.
"Under the tax law, a section 501(c)(3) organization is presumed to be a private foundation unless it requests, and qualifies for, a ruling or determination as a public charity. Organizations that qualify for public charity status include churches, schools, hospitals, medical research organizations, publicly-supported organizations (i.e., organizations that receive a specified portion of their total support from public sources), and certain supporting organizations."
Edit: Looking at the IRS determination letter from November 3, 2016, OpenAI was organized as a public charity under 170(b)(1)(A)(vi) "Organizations Receiving Substantial Support from a Governmental Unit or from the General Public"
Their last 990 form, filed November 15, 2021, for the calendar year 2020, shows total support over the past 5 years (2016-2020) of $133M, only $41M of which was individual donations of over 2% ($2.6M) so they easily met the 5-year public support test.
Note: Propublica has the 2021 990, but it isn't on the IRS website.
Total support includes over $70M in other income in 2018 and 2019 which is missing the required explanation in the 990's. In other words, out of the $92M in public support, $70M is unexplained other income.
Also, Open Philanthropy pledged $30 million in 2017 ($10 per year for 2017-2019) which is considered public support since they are a public charity. However that is more than the $22 in true public support that was reported. Perhaps they didn't complete the pledge.
> they asked: who on earth are Tasha McCauley and Helen Toner?
As a prominent researcher in AI safety (I discovered prompt injection) I should explain that Helen Toner is a big name in the AI safety community - she’s one of the top 20 most respected people in our community, like Rohin Shah.
The “who on earth” question is a good question about Tasha. But grouping Helen in with Tasha is just sexist. By analogy, Tasha is like Kimbal Musk, whereas Helen is like Tom Mueller.
Tasha seems unqualified but Helen is extremely qualified. Grouping them together is sexist and wrong.
I'm not sure what you are referring to with "AI safety community". Her background isn't exactly a strong "AI safety" either before joining the board if you read her CV. I don't see there sexism asking these questions.
You’re definitely correct that her CV doesn’t adequately capture her reputation. I’ll put it this way, I meet with a lot of powerful people and I was equally nervous when I met with her as when I met with three officers of the US Air Force. She holds comparable influence to a USAF Colonel.
She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).
Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.
If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/
I think it's pretty safe to assume by now that US state-side counterintelligence has full control over AI safety circles, and given that everything is a nail when you're a hammer, the spooks probably consider these people domestic terrorists and process them accordingly, including active and passive prophylaxis. I mean, what proof do you need when people like OP lose sleep and brag about meeting the spooks a handful of times like it's the most important event of their lives! I guess it must be both empowering AND embarrassing for the spooks to have this situation turn into shit.
If you want to actually understand how the world works at scale, you're going to need think tanks like these that are academics and experts whose job is to research and understand issues. Or you could just watch whatever hatchet job is on Youtube or doing a Twitter screed.
Not seeing the sexism. I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.
Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.
> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.
Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:
“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure
If you want to be the titan of an industry and do things that put you at the center of media attention, you have to expect comments of this kind and not be surprised when they happen. Whether you are a man, a woman or anybody else.
If you don't expect "not very nice" or ambivalent reactions from people, you are an amateur and you shouldn't be in the board of such a prominent company.
True. I just think it’s messed up that she is equally qualified for this specific board (given the unusual fiduciary duty definition defined in the OpenAI charter I linked above) as e.g. Adam D’Angelo, and I don’t see a bunch of creepy people [note: I edited this part because fsckboy made a good point] hating on him despite him also being part of this same exact power struggle. What does founding Quora have to do with developing safe, beneficial AGI? If anything, Adam seems like more of a token board member than Helen, in that Adam is “token rich dude from early days at Google or Facebook”.
> I don’t see a bunch of creepy people [note: I edited this part because fsckboy made a good point] hating on him despite him also being part of this same exact power struggle.
I'm not sure where I am on the creepy scale but I'm happy to hate on him because I really don't think he should be anywhere near that board. And yes, Helen Toner does have a claim. Not sure about the level of experience in the typical role of a board member but plenty of cred when it comes to the subject matter.
He does seem token in that regard but it was a known quality of tokenism. Being a founder of Quora and a CTO at Facebook has enough cache to explain being on the board of directors for another tech company.
I took Tasha and Helen being grouped together happening not due to being women but due to being relatively unknown
Because despite a sudden popularity of viewing white people as sub-human or non-human, they are in fact human, and this made that remark explicitly sexist and racist.
Being at the forefront of the development of new technologies means that you are sailing uncharted territory. It's my belief that in cases like this, previous qualifications are almost irrelevant.
Of course you have to be competent on the subject matter, work hard, iterate and have luck.
I don’t know about this tweet’s author. I mean that most of the people slandering her on Twitter are white men based on their profile pictures. But you’re right that I still shouldn’t stereotype; I’m sorry.
I agreed with you up until I looked her up. Then Tasha McCauley, who I forgot was Joseph Gordon-Levitt's wife. Shivon Zillis, Elon Musk's whatever that is.
So these three plus Mira Murati make 4 for 4 hot women governing OpenAI. I'm not a Data Scientist but that's a pattern. Not one ugly woman who has a concept of AI governance? Not one single George Elliot-looking genius?
Women are socially allowed to use artificial means to greatly improve their appearance. I'm referring to makeup and expensive hair treatments. Women from upper-middle or upper classes have even more of an advantage in using these. So if you're a thin woman, unless you were unlucky enough to be born disfigured, you're a single afternoon away of looking like a movie star.
If Sam Altman was socially allowed to wear makeup and a wig, you'd call him a heartthrob.
I noticed this too but it's easily explainable as a coincidence so calling it a pattern is a bit far stretched, especially when at least two of them have subject creds.
Honestly, maybe you're right and they're all hot, but I'm not attracted to men so I can't really say. I would be surprised if women/et al found Altman or Sutskever physically attractive but who knows maybe they are kinda cute. What do you think?
Are you claiming that physical appearance has nothing to do with politics, or that we just shouldn't comment on it?
I think it's pretty obvious that the OpenAI men aren't too attractive by most standards, as opposed to say US presidents, who are mostly sexually attractive.
They're the only board members who aren't notable. After the ousting the board consisted of a prominent CEO in D'Angelo, a prominent researcher in Sutskever, and Toner and McCauley. It's a grouping of two randos, not two women.
You have three people, one of them is at least quite well known to the outside world, or at least his / her company is well known along with an ex Facebook CTO title.
You have two people left, we have no idea what he / she is, their work are not public outside of specific domain, and no Public / PR exposure to even anyone who follows tech closely.
Those two people we group them together. And they happened to be woman. ( At least we assumed their gender ). And we are now being called sexist? Seriously?
Good question. We were the first team to demonstrate that this type of vulnerability exists in LLMs. We then made an immediate responsible disclosure to OpenAI, which is confirmed as the first disclosure of its kind by OWASP:
> As a prominent researcher in AI safety (I discovered prompt injection)...
You might want to try a little more humility when making statements like that. You might think it bolsters your credibility but for many it makes you look out of touch and egotistical.
You’re right. I’m sorry about that and have been behind on sleep during this whole debacle. You are 100% right and thank you for the kindness of letting me know. I just now tried to go back and update the comment to change it and say “my startup, Preamble, discovered prompt injection” so it’s less about me about more about our business. Unfortunately I’m past the HN comment editing time window but I wanted to write back and say that I took your feedback to heart and thank you.
They are being grouped together because they are the only two on the board with no qualifications. What is an AI safety commission? Tell engineers to make sure the AI is not bigoted?
> Tell engineers to make sure the AI is not bigoted?
That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.
AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.
For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.
For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:
It's an industry/field of study that a small group of people with a lot of money think would be neat if it existed and they could get paid for, so they willed it into existence.
It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".
Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.
IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".
The hypotheticals explored in the article linked by upwardbound don't deal with an AI acting independently. They detail what could be soon possible for small terrorist groups: flooding social and news media with false information, images, or videos that imply one or more states are planning or about to use nuclear weapons. Responses to suspected nuclear launches have to be swift (article says 3 minutes), so tainting the data at a massive scale using AI would increase the chance of an actual launch.
The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make the decisions - aren't new. States have all the capability to do them right now, without AI. But if a state did so, they would face annihilation if anyone found out what they were doing. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.
But, with AI, a small terrorist group could do it. And it'd be hard to know which ones were planning to, because they'd only need to buy the same hardware as any other small tech company.
> But if a state did so, they would face annihilation if anyone found out what they were doing.
Like what happened to China after they released Tiktok, or what happened to Russia after they used their troll farms to affect public sentiment surrounding US elections?
"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.
No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.
Great summary of several key points from the article, yes! If you’d like to check out other avenues by which AI could lead to war, check out the papers linked to from this working group I’m a part of callers DISARM:SIMC4: https://simc4.org
> But grouping Helen in with Tasha is just sexist.
No, it's not. They're grouped together because everyone knows who Sama, Greg Brockman, Ilya, and Adam D'Angelo (Quora founder / FB CTO) are, and maybe 5% knew who Helen and Tasha are. You linked to a rando twitter user making fun of her, but I've seen far more putting down Ilya for his hairline.
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.
I don’t remember which media publication, but at least one of the ones posted on HN on Friday/Saturday noted that three board members had resigned this year, and it was also mentioned in related HN threads that this is probably what has made Friday's vote possible in the first place.
Maybe that's true, but I can't read every single one of them. I don't think I saw anything in Ars's in-depth article recaping the whole drama or "the Verge"'s reporting. Though as can be expected, I skimmed most of what I read, it makes no sense to re-read the same information re-told again and again by different writers.
In any case, finding out the key fact about a situation shouldn't require reading multiple articles by different publications. It should have been highly emphasized in any publication's reporting.
Yeah it definitely paints a picture of struggling to keep a board together because AI is "hot", and so many people have conflicts.
It does seem like the whole organization was "born in conflict", starting with Elon and Sam.
Then Reid resigned because of a COI, someone whose wife helped start the "offshoot" Anthropic, and then there was Elon's employee and mother of his children, etc.
I was going to say that some reporters weren't doing their jobs this whole time, but actually there are good links in the article, like this
But yeah I agree it's weird that none of the breathless and short articles even linked to that one!!! (as far as I remember) That's a huge factor in what happened.
1. i am surprised that Sam, a known prepper, left himself vulnerable to the most traditional risk a CEO can have. presumably he had a lot of control over board composition?
2. soooo is Sam back? how exactly might satya influence a nonprofit board he has zero votes in? why would the board flipflop so quickly in the span of 24 hrs when presumably they took the original action knowing the shitstorm that would ensue? none of these things make sense given that everyone in the room is presumably smart and knows how the world works
OpenAI can just as easily turn the switch and cut off access to gpt models. OpenAI can get compute anywhere, while Microsoft can only get GPT4 at OpenAI.
MS can live without GPT models, being just less competitive. Meanwhile, OpenAI will quickly go bankrupt without Microsoft's financial support and infrastructure.
In case the marriage between OpenAI and Microsoft ends, I'm pretty sure that Google, IBM, Facebook and any over big corp will try to step in to replace Microsoft.
Who owns the 51%? Seems like if it’s the non-profit, MSFT can make any demands and suggestions they want, and the 51% perfect can say “that’s nice” and do what they want. Which is probably why MSFT owns _only_ 49%?
From who? Google, the company who's funding their competitors, or AWS, who's also likely working on their own AI modeling? OpenAI needs $$$$$ of compute. There's not a ton of companies who can provide them with the large amount of computed needed to move fast.
It's not theft, Microsoft has a license to use them! OpenAI can't turn off the machines, can't turn off hardware access to the models or weights, and can't turn off the license either. They can sue, but they'd most likely lose, and go out of business beforehand anyway.
> none of these things make sense given that everyone in the room is presumably smart and knows how the world works
The board can try to anticipate the next actions after they fire them, but predicting the final outcome of what essentially are human actions seems quite difficult.
It’s sad that OpenAI talent will be split across two companies, one controlled by Sam, and the other by Ilya. This will probably delay the development of GPT-5.
So out of 4 people who fired Sam, 3 of them have conflicting interests with OpenAI. Adam more clearly as his company is creating ChatGPT's competitor, and Tasha and Helen's employer are funding Anthropic.
The curious part was how did investors including Microsoft missed this in their DD. Or they found out but they wanted to invest in OpenAI so bad that they let go of this fact.
Yes and the killer feature of Poe for me is the easy access of several models _besides_ OpenAI. They are there too but they are also on ChatGPT. The interesting bits are the rest, like an Anthropic AI and open source model interface, and customizing bots around those.
Wow OP, this is one of the best things I've read on HN. The last thing to capture my interest this much was the actual release of ChatGPT last year. I'm also learning a lot about company boards.
> They all resigned within a few months of one another despite OpenAI looking like the rocketship of the century? Something feels a little odd about that.
Exactly what I was thinking when reading through the timeline.
A > When a Board rapidly changes in size, rarely is the remainder left well-balanced. Potential conflicts only make the balancing act harder.
+ B > It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come.
=C > Many people are saying we need more governance: maybe it turns out we need less.
How does this conclusion come from these premises?
Looks like a board of a crypto(scam): weak, unclear profiles, frequent changes, no board experience, personal connections. It's quite amazing they've seemingly stumbled upon $100b product.
Finally a board that acts to benefit all humanity as the charter demands. VCs want billions so they will try anything starting with the nonsense of “do no harm” priciple which has nothing to do with corporate governance. It is the Hippocrates oath taken by doctors. VCs are already cheating, expect more to come. They come from the ethic of move fast, break things, make billions and show no conscience about the masive harm done to all humanity. Let us see in detail how a greedy bunch will remove the fiduciaries of all humanity and make us all pay dearly for their privilege.
I think some clarification of the Profit Participation Units owned by leadership in OpenAI is sorely needed. I'm fully expecting some sleight of hand that Altman has no traditional ownership "equity", but plenty of PPU that means he has shares of the profits, the $ portion of equity, and 1/6 the board, the ownership part of equity.
Which actually isn't. Some places use other oaths, more or less derived from the Hippocratic Oath, other places don't do it at all. But hardly anyone uses the OG oath itself.
One thing became apparent from this drama. The goal of any person should not be collective good but a cult. It should rather be about establishing a religious gathering of the people and directing them to a place where you can print infinite money. Oh while you are making the cult or religion or whatever make sure you feed the bunch. This way everything and everyone will support you as you are the guiding force of that religion.
As Godel said, you can't completely understand the system while you are part of the system. So this is my take from someone detached from the VC culture in general.
Money is King. Who brings money is the true king. From this perspective, there is no significant difference between drug cartels (for ex. weed) and entrepreneurs. They both care about the people around them and can bring money. The only difference is how the story was framed. Now that cannabis is legal in some parts, what difference does it make? You can be a weed farmer/entrepreneur.
Well I genuinely value them but celebrating them as a king? I haven't seen any big/famed entrepreneur (not the small/medium businesses entrepreneur, they are true heroes who society should actually value) who came from severely disadvantaged background (no connection, money, status) and make it through upto billions. Just because someone is born privileged doesn't mean they should be heralded like a king. That is all.
Most of them have traces that goes back to social connection, high status, good education. Of course there will be some exceptions. But if we look at the general status of the population, most come from the "well-off" background.
> You should reconsider being on this site then if you think so little of entrepreneurs.
I put that example because psychologist have found that social media is even more addicting than drug like weed. And pure weed doesn't make your body addicted to it, it is just the psychological effect. Also, we have seen what social media did to the fabric of society.
Drug cartels are pretty innovative and practice a lot of the same techniques when it comes to building out one’s business. They just operate in a legal environment where being cut throat is a bit more literal than say, ousting your board chairman with no notice
Yup. And organized crime is, in general, entrepreneurship with relaxed legal constraints. A lot of the structures and activities of criminal organizations are just running a business in a much more high-risk environment, due to inability to stay within, and therefore fully leverage the legal system (i.e. your competitors can't sue you, so instead they'll shoot you up).
I mostly struggled with the fact that OpenAI's influence extends far beyond the borders of the United States. Yet, its board is constrained by the US residents, limiting the ability to cultivate empathy locally. Such a viewpoint is both difficult and rooted in exceptionalism when it comes to designing gov policies.
He stated he did not have any equity in OpenAI. While he might not shares directly in OpenAI, but he might have indirectly, or has some other economic interest agreement.
OpenAI consists of at least the following entities:
OpenAI OpCo, LLC
OpenAI Global, LLC
OpenAI Nonprofit.
OpenAI GP LLC
OpenAI, Inc.
OpenAI, L.L.C
OpenAI, LP.
OpenAI Startup Fund Management, LLC
OpenAI Startup Fund I, L.P.
OpenAI Startup Fund GP I, L.L.C.
OpenAI Ireland Ltd
A Delaware Limited Partnership doesn't have shares, so he's technically correct by that statement.
He opted against financial stake in the company, "citing his concern regarding the use of profit as an incentive in AI development," according to the WSJ.
Rather than trying to assume it's some corporate conspiracy, I'm gonna go with Occam's Razor.
Your argument has no merit. I have already stated “Aside from a small investment through a YCombinator fund (sama was formerly its president), he doesn't have any equity in the company.”
Until you have some kind of proof he’s lying, I’m not going to assume he is.
People keep trying to make this about wokeness gone mad, but it was Ilya,the white guy most involved in their biggest technological innovations, that instigated all this.
Basically, it’s a nonprofit organization, and so they picked people who are professionally on the boards of nonprofit organizations and think about these matters in terms of ideology rather than tawdry profits.
Then everybody is shocked when they fire the CEO for caring more about profit than the ideologies they have based their careers around.
The problem isn’t that they were women. The problem is Sam tried to cosplay OpenAI as a nonprofit organization trying to do good in the world rather than just make money, and was in for a shock when it turned out the board members actually took that seriously.
>People keep trying to make this about wokeness gone mad, but it was Ilya,the white guy most involved in their biggest technological innovations, that instigated all this.
there are too many uninteresting debates here about wokeness that lead nowhere, so I'm not starting one.
But it is the very definition of wokeness to evaluate this situation on the basis of a white guy's race and gender, as if that indicates anything (everything?) about his POV.
I couldn’t care less that he’s a white guy. My point is simply that the instigator of all this was the exact opposite of a “diversity hire”, so it’s illogical to blame what happened on people getting hired for diversity rather than competence. Sutskever is by all accounts highly competent at what he does. He just didn’t agree with Sam’s vision, that’s all.
On the governance matter, the thesis is a bit more shakey.
So they're conflicted because they're also in governance, and they shouldn't govern because they might have been conflicted.It seems like the author's real problem isn't any specific conduct by these two board members, but more of a "you got chocolate in my peanut butter" issue.