CEO of a company (or worse, a non-profit!) and a member of its board creates another, for-profit company (in partial secrecy/with a lack of transparency) to which the non-profit would eventually pay a lot of money.
This is almost a fraudulent level of siphoning non-profit money.
Btw, this is hilarious - regular employees have non-competes in their contracts (sometimes void/illegal, depending on the local jurisdiction) and breaching them is an immediately fireable offense (sometimes leading to more severe consequences). You work on a small thing on the side? Better be careful: ask your manager/HR, or risk it getting taken over by the company (luckily, IIUC this part is mostly illegal now in all jurisdictions that "matter" for tech).
But sitting on multiple boards where you have much more room and many more possibilities for creating conflicts of interest and damaging the company? All fine and common!
There's an even bigger problem here: if he were just making money, that would be a normal-sized problem. If he were just making a supplier for OA, heck, that might be a good thing on net for OA; a subsidiary doing hardware might be justifiable given the importance of securing hardware.
But that's not what he's doing.
Creating such an independent hardware startup comes off as basically directly opposed to OA's safety mission - GPUs are one of the biggest limitations to creating better-than-OA models! The best customers would be the ones who are most sabotaging the OA mission. (You can't run the UN and also head an arms manufacturer dreaming of democratizing access to munitions.)
How is it opposed to OpenAI's goals to have a friendly company selling them chips instead of NVIDIA, which is, at-best, a neutral company?
Software is always more important than hardware. All the big players have access to NVIDIA chips today and yet only OpenAI has ChatGPT, proving the point.
OpenAI probably wishes someone would create competition to NVIDIA and this is Sam Altman trying to make that happen himself, since no one else seems to have been able to pull it off so far.
A conflict of interest would be OpenAI buying Altman's chips at inflated prices or something like that.
But if he makes a bunch of money selling OpenAI chips and OpenAI gets better/cheaper chips, that seems like pure win-win and totally free of ethical conflict.
"Conflict of interest" is not defined by a bad outcome or malice.
It is defined by the potential for those, given human nature and various cognitive biases. Handling a "conflict of interest" means disclosing anything that could lead to a biased decision or a lack of transparency.
Can the CEO of two companies be objective about a contract between them, or when claiming that his company no. 2 is better than the competition?
And in the tech-specific case:
As much as junior engineers would love to believe in "superior" solutions, tech decisions are seldom clear-cut. There are many trade-offs: cost, efficiency, memory use, throughput, latency, ease of use, cost of switching, and many more. You always have a pile of pros and cons. Sometimes, one is strong enough, but most of the time, it feels almost like guessing/intuition. And then the conflict of interest becomes especially concerning.
“Disclosure of potential conflicts of interest” — if I understand correctly?
Asking people to disclose potential conflicts of interest casts a wider net — so they aren’t led to think “nah, that’s not a conflict, it’s a win-win!”
It may be a conflict of interest, but it will be hidden behind layers. If OpenAI has a deal with a GPU producer, there is little conflict since they barely interact directly; but Microsoft owning Azure and reselling that hardware creates the conflict. Maybe I have a skewed view, but I think many such conflicts between interdependent parties exist, especially in the datacenter/cloud world.
This is patently wrong. All of it. You made up a concept and then ran with it like it’s reality. “Potential” is not the issue. “Actual” is. This isn’t a judge. This is a CEO, and they can self-deal as long as it stays a value to the core company. It’s up to the board to decide that when it’s proven.
A bunch of nerds just thought they could jump the gun here because they are inexperienced doofuses when it comes to corporate matters.
A self-dealing CEO can be held civilly liable (and in gross cases, criminally convicted) for violating their fiduciary duty to act in the company's interest over their own, and for violating company trade secrets.
(If Altman doesn't want to be restricted by fiduciary duties then he shouldn't be on a board or be an executive.)
How could Altman possibly not use private OpenAI information regarding its hardware needs, when creating an AI hardware company he wants to serve OpenAI? And its competitors?
He could invest in a new AI hardware company created and run by other people (without his specific input) without a conflict. That does not appear to be the case here. He could create an OpenAI hardware subsidiary. Again not what he was doing.
I am not sure what you are saying. Agreeing with me or not? "Can" as "a shit ton of room" vs. some undescribed "absolutes"?
Self-dealing CEOs and board members can be, and are, held criminally and civilly liable for violating their fiduciary duties. And they can be, and are, fired with cause.
Lots of organizations, e.g. news organizations, absolutely have tons of guardrails in place around conflicts of interest that may or may not really influence behavior, or that merely have the appearance of potentially doing so.
Altman's hardware startup would only be free of ethical conflict if Altman were open about it, and the board approved at least two things (and probably more):
1. A formal plan to separate OpenAI CEO Altman from OpenAI hardware acquisition decisions.
2. A formal agreement with Altman and his new company, on how OpenAI's private information with hardware implications is firewalled and/or shared with Altman's new hardware concern.
Otherwise, Altman is going rogue, acting on private OpenAI information useful to a new hardware company looking for future business with OpenAI and OpenAI competitors.
CEO Altman has a fiduciary duty to act directly in OpenAI's interest, and not in some "hey this could be great for everyone" version.
Litmus test: If your legal partner/executive is doing things behind your back with large implications for you, they are almost certainly violating ethics in some way.
Non-profits totally have boards with fiduciary duties. Just because it’s a non profit doesn’t mean it isn’t being juiced for money by others. It just means that the org can’t make a profit, but it can totally spend its money unwisely so that it winds up in someone’s pockets. Heck, most of America’s hospital systems are like that.
The “fiduciary duty” of a nonprofit, such as it is, is just securing operating funds so that it can fulfill its mission. Altman has been spectacularly successful at achieving that goal by obtaining $10 billion in funding. Better, cheaper, or more power-efficient chips, whatever the source, would absolutely help further the mission too. And that, of course, assumes they actually buy or use them at all in the first place. Firing him on the grounds that it could potentially be a bad deal that “lines his pockets” sometime in the future seems premature.
I think it’s a real stretch to say this chip company would be a violation of his “fiduciary duty” to OpenAI. The best argument you could make is that he has a conflict of interest with a competitor. But again, OpenAI is a nonprofit. It doesn’t have “competitors”. Either it has the funding it needs and is fulfilling its mission, or it isn’t.
It's still not clear to me how a board member prospectively running a chip company (a related but different business) works against OpenAI's interests. And people here seem to be making an awful lot of assumptions in order to somehow connect those dots.
Sam Altman has inside information on OpenAI's current and future hardware needs. Sam Altman as CEO, was in a position to direct OpenAI's current and future hardware purchases.
How can he separate those concerns from having his own hardware initiative put together precisely to serve the hardware needs of OpenAI and its competitors?
Without an agreement with the OpenAI board on how these conflicts of trade secret information and executive power can be settled (Significant shares in the new company for OpenAI?) no competent board would put up with this.
This situation smacks of the ethically questionable transition to a closed/profit organization, after receiving initial funding based on their being an open/non-profit organization. (Apparently the original funders didn't retain any veto power over such a transition, to the regret of at least one significant donor.)
>Without an agreement with the OpenAI board on how these conflicts of trade secret information and executive power can be settled (Significant shares in the new company for OpenAI?) no competent board would put up with this.
I would say if they really did anticipate and worry about such an issue, a competent board would work toward forging such an agreement, rather than firing the CEO years before the aforementioned chip company even existed and before telling any of their other stakeholders.
I’m not taking a side here because I don’t know the facts of the case. But a conflict of interest is a huge deal because it could lead to spending more money than necessary, and it is also the main way people juice non-profits for profits somewhere else. Of course, if they start a chip company out in the open, without granting it money or guaranteed business from the non-profit, things might be up to standard.
The issue is not the OpenAI/SamaChip relationship, which is probably beneficial for OpenAI.
The issue is what happens when SamaChip's profit imperative forces them to maximize revenue. Because the best way to maximize revenue, when you're an independent chipmaker with a large R&D investment, is to sell to more companies. Which by definition are going to be OpenAI's competitors, and whose interests may not be aligned with OpenAI, but will have a financial line to SamaChip.
Gwern's highlighting an interesting contradiction in OpenAI's core charter. In order to ensure responsible, safe, humanity-benefiting AGI, OpenAI needs to have control over AGI's development; any for-profit entity that gets ahead of them probably will not have the same humanity-benefitting mission (actually, we know they won't, they will have a shareholder-benefitting mission by definition). But that means that by their charter, they can't be "Open". Anything like the Developer Day or API or a SamaChip that can sell to other startups means that other parties will have the freedom to use it for their own interests.
Not saying whether this is good or bad - the tension between openness and vulnerability always exists, and personally I tend to come down on the side of openness. But IMHO OpenAI's mission was contradictory from the very beginning, and was more a recruiting tool to get bright idealistic AI researchers to work for them.
> How is it opposed to OpenAI's goals to have a friendly company selling them chips instead of NVIDIA, which is, at-best, a neutral company?
Because OpenAI's mission statement is along the lines of providing AI to all. "All" is more than data centers and billion dollar valuation companies.
I strongly doubt I would be able to purchase one of said chips and have it in my house.
This GPU fiasco is all thanks to LLMs - especially transformers, which was OpenAI's trajectory under Altman. I wouldn't be surprised if the breakdown in communication was over OpenAI becoming an LLM printer. Transformers are a solved problem; making a bigger one is hardly research and definitely not a step towards OpenAI's mission statement.
Yes, but creating the processing power with which they can realize their vision is. In interviews, Sam's opponent at OpenAI, Ilya Sutskever, has said lack of hardware and energy could become major factors impeding progress in AI. It's obvious how more players in the chip field help them, especially if the manufacturer has intimate knowledge of what they need or would like to see made. Even if Sam is gone for good, they should work toward doing this anyway.
I don’t know if “strongly doubting” you could buy one of these chips and have it in your house is a strong enough basis for arguing a conflict of interest between OpenAI and this chip company. People seem to be making an awful lot of assumptions about the specifics of a company that doesn’t even exist yet.
These are strange times, very strange ones, and you are grossly underestimating how much hardware constraints are impacting people. What I know is under NDA, but trust me, even the biggest names you know and would never guess are short tens of thousands of GPUs.
Agree with this take. Sam previously stated (eg on Lex’s podcast) that a slow takeoff soon was his goal, to give society maximum time to adjust and to prevent an unexpected fast takeoff from capability overhang. I bought his take when he said it.
Going off and trying to accelerate hardware capabilities (especially with an outside company that presumably sells these processors on the open market) seems indefensible in this framework unless you have already solved alignment, which they clearly have not.
> basically directly opposed to OA's safety mission
OpenAI does not have a mission to ensure that the entire industry is safe.
And if anyone actually believes that then they are frankly delusional, because right now AI is a geopolitical fight between nation states. Is OpenAI really going to have any ability to control what China or the UAE do with their LLMs? No.
China is the only other state that has anywhere close to the capacity to attempt to go for AGI. They also have an existential interest in creating safe ASI. The problem is that in geopolitical struggles, safety usually goes a bit out the window, and we've so far been fairly lucky (and surprisingly competent) with not destroying large parts of human civilisation. There are a lot of unknowns around the topic. If ASI is possible and reachable, the first to get there would end the race.
AGI could mean we are not alone as a sentient species we can directly communicate with on this planet anymore, and hopefully we wouldn't try to enslave it like we did to the "others" we encountered before.
> They also have an existential interest in creating safe ASI.
An oppressive state with AGI is no better than an oppressive AGI.
> China is the only other state that has anywhere close to the capacity to attempt to go for AGI.
Any state can easily recreate/adopt the AGI invented by others. The costs and challenges for any given level of AI tech have dropped precipitously after each advance, and state level actors have far more than enough resources.
Why can't, I don't know, the Canadian government hire some ex OpenAI employees and attempt to make an AGI? What is the constraint here that makes it only possible for the U.S. and China?
One of them is money. Canada was lucky enough to get in on AI early, when the whole thing was pie in the sky and most people didn’t care. But today it’s turning into big business, and Canada just can’t compete with Silicon Valley salary levels. And that’s a problem on top of trying to convince people to move from California to Toronto.
Europe and elsewhere have similar problems. To have an advantage, you need the very best people. But the very best people all want jobs at major US companies, and major US companies want them, too.
AI in China must adhere to strict socialist values. I wish that were a joke, but it’s actually a rule the government is trying to implement. I suspect it isn’t so much about safety for the people as it is for the CPC.
> This sounds like an insane conflict of interest.
It would be, but: this company hasn't been formed yet, and this sketch does not justify the haste with which they kicked him out. There were all kinds of boxes that needed to be checked before they could do that without risking damage to OpenAI; this is a founder and the CEO we're speaking of.
Besides that: stupidly enough, the contract that Sam has with OpenAI does not have a non-compete in it (this has been confirmed by multiple sources now, so I take it as true), and I don't see how it would directly harm OpenAI, other than that his attention might be diluted and that it should be clear which cap he is wearing. But until that company formed and Sam named himself CEO of it (or took up some other high-profile role), it leaves him so many outs that it only makes the board look like bumbling idiots. Because now he can simply say, "I would only be an investor," and that would be that, just as the rest of the investors in OpenAI (and, notably, some members of the board) have conflicts of interest at least as large.
So if this was it, they're in even more trouble than they were before, because now it is the board's conflicts of interest that will be held up to the light, and those are not necessarily smaller.
stupidly enough the contract that Sam has with OpenAI does not have a non-compete
Does it really matter? If the behavior appears to be in conflict with OpenAI, and the board doesn’t like it, then that’s enough to let him go. It doesn’t need to be a contract violation, he just wasn’t doing the job they wanted him to do.
Yes, details like that really matter, and if the party you let go can pull half the company out from under you then you may have won the battle but you've just lost the war. On top of that you'll be in lawsuits until the third generation or so given who else sits at that table (Sequoia, YC and Microsoft to name a few).
Do you think there's a problem with him pitching his hardware side-gig to investors who approached him with an interest in OpenAI's tender offer? That gives the appearance of quid pro quo, with the hope that those investors get to skip the line at the next OpenAI investment opportunity. Imagine someone high up in the Nvidia sales org telling a customer they are all out of H200 graphics cards for the half, but they have a fantastic timeshare investment opportunity they are selling on the side while the customer waits for the next batch.
Why do you keep calling him a founder? Is my history wrong, or did he jump from leading THIS very website to OpenAI when they came through for a round of sweet HN bucks?
"Co-founder and former CEO of OpenAI, Sam Altman" as well as countless news articles repeating this as if it is a well known fact, for which to date I haven't seen any contradicting information.
He also committed a chunk of the initial funding. And it's not rare for capital providers who are there on day #1 to call themselves and be referred to as co-founders.
Essentially it is a short-hand for 'those that were present on the founding day of the company and whose names appear on the founding documents besides the lawyers'.
If you get added later on then you are technically not a co-founder, though even there sometimes (and sometimes rightly) exceptions are made.
The classic is that a person has an idea, starts working hands-on, and then goes out and finds capital investment.
In this case, a number of capital holders got together with an idea for a company and then went out and hired several employees from all over the world to work for them.
They may not be hands-on, but they may be there from day #1. It's usually a matter of how much time they can commit. If it is only a little bit then it is usually in an oversight role, but it is also possible that they are instrumental in raising the first round of funding. Or, in the case of a medical start-up, the person who allows their name to be used by the venture because they believe in the concept or maybe even because they contributed the idea. Essentially: co-founders are those that were there on day #1, or those the rest of the founders allowed to call themselves co-founder even though they joined later (because they have made an outsized contribution).
So it's a term with a strict definition but also one with plenty of exceptions.
Yeah, I wonder where this type of misinformation comes from. Do people simply make things up out of thin air to suit their biases and then proclaim them to be true?
You are totally right. Why would anyone be trying to claim Sam is a founder when he’s clearly not.
The organization was founded in December 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.
I'm not sure what the distinction you are drawing is.
Greg, Sam and Elon made a company and literally went out and hired researchers like Ilya.
They pulled together funding, created a company, and hired a team of researchers to operate it. If that isn't founding a company, I don't know what is. Sam was there before all of them.
It is not like a scrappy team of researchers with a pre-existing company went out, got funding, then gave the VCs board seats.
If you are going to copy and paste from wikipedia, read a little further:
>In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture. The actual collected total amount of contributions was only $130 million until 2019.[6] According to an investigation led by TechCrunch, Musk was its largest donor while YC Research did not contribute anything at all.[16] The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public.[17][18] OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.[19][20]
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of the "best researchers in the field".[21] Brockman was able to hire nine of them as the first employees in December 2015.
Sure, the convention in SV is also that the person created the company and is not an employee of someone else. The founder is there at the inception of the company and it is their idea.
I feel like the facts are on the table, and if they don't convince someone then they are unlikely to be convinced by more facts
I think you're letting your personal feelings about Sam Altman cloud your judgment as to what facts you will accept and which you won't.
Whether Altman has a successful coup going or not is immaterial, but whether he is a founder or not is what you asked, and you got a pretty solid answer. You can like the answer or not, but it stands unopposed.
I have no personal feelings toward Altman whatsoever. What I dislike is the worship and rewriting of history to appease the business tycoons in the tech industry, who are little more than parasites in many cases, and I believe this could be one of those cases.
On the flip side I think your blind worship of these tech business leeches may be clouding your judgement.
> regular employees have non-competes in their contracts (sometimes void/illegal, depending on the local jurisdiction)
Non-competes mostly take effect after employment ends, so Altman's situation is a bit more akin to my company's "only job" policy. It means I can't have a side hustle or alternative means of making money. Enforceable or not, while you're employed you can be fired for anything.
I don't see a conflict at all but IANAL. His biggest issue is GPU cost. He was hustling to vertically integrate and knew that the non-profit nature of OpenAI would not allow for it. So he starts to think about the creation of a separate company to handle that with exclusivity of some kind. It makes perfect sense. No idea if he could get the board to go for it of course, but that's clearly something he would have needed to do to make it a reality. And it's completely in his remit to make these kinds of bets. This board may have seen this as an overstep but all they needed to do is tell him no. I'm sure he would have made a persuasive argument had they let him. This board seems completely out of touch with the reality of running a company like this. And GPUs have nothing to do with AI safety, that's like saying a faster neuron makes a person evil or good.
You seem to be arguing that it isn't a conflict with the non-profit charter. What people are saying is that it's a conflict of interest. It's called self-dealing and one of the most common forms of conflict of interest.
There’s no immediate conflict of interest though. No self-dealing has actually occurred yet, and you have to assume a tremendous amount of bad faith years in advance of reality to get there.
Suppose I am on the board of a wildlife preservation nonprofit, and I am thinking about starting a catering business, and talking to potential financial backers.
If you want to take it to the extreme, you could argue that I could award my catering company a lucrative contract to provide food for the nonprofit's cafeteria. But that would be entirely speculative on your part. It would by no means justify immediately firing me from my position at the nonprofit. If you take those hypotheticals out far enough, you could basically argue anybody with any other job or role is potentially in a conflict of interest. But again, doing that requires assuming a tremendous amount of bad faith.
Come on, I think it's really obvious that the largest AI nonprofit in the world might be interested in chips, to a much larger degree than a typical nonprofit would be interested in cafeteria food.
I think the analogy is more like someone on the board + Executive Director of your wildlife preservation nonprofit buying up land with potentially endangered animals on the side, with the (presumed) intention of selling the land back to the nonprofit.
Clearly a COI even if it's net good for the animals.
So why not just judge it a conflict of interest if and when he tries to sell chips to them? The board can evaluate the options available to OpenAI, and agree to let it buy from his chip company if and only if it's in OpenAI's interest. Firing him years before the chip company even exists, in mere anticipation of that possible conflict, seems…premature.
Direct and immediate conflicts of interest can arise if decisions you make for one company could regularly come into conflict with decisions you have to make for the other, for example, if he were starting another AI company making rivals to ChatGPT. But in the case of the chip company, nobody has really made a persuasive case for how that would happen here. In terms of interrelatedness, maybe they would build chips customized for OpenAI. But that would be a potential benefit to OpenAI, not a conflict!
There is an immediate conflict of interest. Because one should immediately start to wonder if Sam is holding back the company from making their own chips or perhaps avoiding partnering with another supplier because he wants to funnel future business to his future side project. That may be in the best interest of the non-profit, but it may not be, we can't tell, because of the conflict of interest.
I'm sorry but I have to disagree. There are very few other suppliers. We see it with Apple and ARM: without this vertical integration, the M2 laptop upon which I write this post would not exist. When one is running a company, the biggest enemy is always time. Had Apple not planned the ARM move more than a decade ago, it would not have happened in time to revamp the Mac lineup. What you see as a conflict, I see as making good bets and moving the ball forward. Given more recent events, it's abundantly clear that the board was out of touch. Letting OpenAI fall behind would have caused it to fail in its mission of safe AGI. When making these judgments, it's always necessary to consider the alternative timelines that would inevitably occur.
Assume someone had been working for a startup, as CEO, for free. For years. That someone had cut himself off from any way to be compensated for his work directly, due to altruism, poor planning, or as the result of a negotiation.
At a later time, other ventures that this person had been propping up started failing, and a money injection was deemed necessary. It would not be surprising if that person then tried to leverage his unpaid position and monetize it.
Recent tweets from Sam (“go for a full value of my stock”) seem to point in this direction.
He presumably at least got paid a salary as CEO, so it's not uncompensated work. In any case, being in some kind of suboptimal financial situation isn't an excuse for breaking the rules.
I don’t know. It can easily be a case of $1 comp and insane amounts of work and pressure, including financial, from all sides. It is quite clear that crypto is failing and he is deeply engaged in crypto with the WorldCoin project. For all we know, personal money may have been spent on crypto.
I think the reverse is true. If you're a grunt no one really cares and it's not worth it to enforce. If you're c suite and leave to start a competitor you can be sure you'll be hearing from company lawyers. Similarly mandatory gardening leave generally grows with your title
> I think the reverse is true. If you're a grunt no one really cares and it's not worth it to enforce.
As a grunt who was handcuffed for three months after switching jobs while my former employer and my new employer tried to sort out what I was and wasn't allowed to do... no, this is not true. And the document I ended up having to sign means I can't say anything other than Amazon (former employer), Microsoft (new employer), and myself came to a mutual agreement.
I sincerely don't give a shit and am just kibitzing but: why would this be a conflict of interest at all? One of OpenAI's biggest strategic concerns is getting out from under Nvidia. OpenAI is never going to do hardware; they don't even want to rack servers. The ultimate customer for a new AI chip would be Microsoft, not OpenAI. OpenAI is research and software. Hardware is a complement, not a competitor.
You seriously don't see a conflict of interest with a board member and CEO creating a separate company that sells to the company he is the CEO of? That is the definition of conflict of interest.
Yeah something can be potentially net good for a company but still be a COI. If I'm in charge of awarding military contracts and I give it to a armaments company my son is a VP of, and I didn't disclose to anybody that my son was a VP there, this is a clear COI even if I later argue (and people agree with me!) that my son's armaments company is best suited for the business.
At the very minimum I should not be the one to make that call.
A more usual method is to have the non-profits contract with a for-profit "management consulting firm" that does the running of the real things. Nonprofit can take in money, pay the for-profit for providing the thing the donation was earmarked for; all legal and fine. The same people can be employed by both companies.
Any profits the "for profit" arm makes can then be donated back to the non-profits for financial advantage in holding the money, if any. Lather, rinse, repeat.
I think Altman hails from a kind of hustle culture that may not go great with AI safety:
>I just saw Sam Altman speak at YCNYC and I was impressed. I have never actually met him or heard him speak before Monday, but one of his stories really stuck out and went something like this:
> "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company were we trying to sign up. It was real serious.
> We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.
> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."
> I think the reason why PG respects Sam so much is he is charismatic, resourceful, and just overall seems like a genuine person.
> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."
That strategy was tried with Barry Minkow, with "ZZZZ Best", the fake building maintenance company fraud.[1] He did prison time for that.
> While still in high school, Minkow founded ZZZZ Best (pronounced "Zee Best"), which appeared to be an immensely successful carpet-cleaning and restoration company. However, it was actually a front to attract investment for a massive Ponzi scheme.
I'm not sure if that's true. If they obtained a financial advantage (e.g. a contract) via deception, then that could well be fraud. I don't think it makes much difference if they eventually delivered on the contract or not.
It'd probably come down to the legal definition of "deception." It's not Sam's fault if the customer saw a lot of people poring over screens and walking around with papers and assumed they were employees. But if he explicitly introduced them as employees, yeah, that'd be over the line.
Similar to the debate over whether it was ethical of Reddit to seed themselves with sock puppets.
It totally is his fault if those people were directed by him to pretend to be employees of the company – which he seems to admit.
"I contracted some people specifically to pretend to be employees so that someone else would be deceived into thinking they were employees. But somehow it's still not my fault that this person was deceived."
Yeah, this kind of pedantry isn't going to work in a legal setting. Obviously the intent was to deceive the person into thinking that the company had a large number of regular employees.
I still don't see where the bright line of the law was crossed here. Is it really so surprising when people who are willing to do things like this get ahead of people who aren't?
Are you really so certain that we'd be better off otherwise?
Is it surprising that people sometimes lie and cheat? No, of course not.
Is it surprising that people who lie and cheat sometimes obtain an advantage by doing so? Again, no.
But yes, I do think we would be better off without people lying and cheating.
IANAL and I do not claim that the law was necessarily broken in this instance, but the intent was clearly to deceive a business partner in order to obtain a financial advantage. I mean just look at SA's own words here:
> So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were
The deceptive intent is explicit in his own description of events. It's honestly quite silly that you are looking for some semantic get-out clause here when SA's own statement is so explicit.
There was another AI start-up, one backed/supported by various conservative politicians from Germany and Austria, that did the same thing, hiring people to look busy when customers (bad) and investors (a lot worse) showed up at the offices: Augustus Intelligence.
At best, this is fake it till you make it. At worst, it is fraud. The tiny, tiny difference is whether that next investor shows up or not. Just ask FTX.
I do not respect people resorting to that kind of thing.
AND a complete Moby Dick move. You're going to suddenly get up and compete with all the chip designers and beat them at their job? Computer chips are literally the most complex products in the world. It is an astonishing development that the most profitable company in the world was able to make chips, let alone chips that beat Intel and Nvidia in some key metrics. It took Apple more than a decade of shipping chips in phones to be able to move up to desktop-class.
The board was right to get rid of a guy who would rather hunt a white whale than do his job.
> SoftBank and others had hoped to be part of this deal, one person said, but were put on a waitlist for a similar deal at a later date. In the interim, Altman urged investors to consider his new ventures, two people said.
Major mistake by OpenAI's board not to mention this if it actually did play a role in removing him; public opinion now is that it was a coup over AI safety stuff. OpenAI's board seemed to have no PR plan.
The claim is that he's starting different AI hardware companies on the side, not that he's doing it under OpenAI's umbrella. It's more like if Sundar tried to get funding for some side hustles while talking with potential customers or contractors of Google, without disclosing to Google's board ahead of time.
Every single one of Elon's ventures is a conflict of interest with the others. Simply seeking money for a chip venture is small compared to pulling Tesla engineers out to work on X, the stuff with SolarCity, etc.
> On Sunday, a person familiar with the board stood by the board’s explanation on Friday that cited candor. This person said there was no one precipitating incident but rather a mounting loss of trust over communications with Altman. The person declined to offer examples.
> According to one person with knowledge of the situation, Altman had been attempting to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to establish a new microchip development company which could compete with Nvidia and TSMC. Those efforts, in the weeks before his sacking, had caused concerns on the board, this person said.
> Two days after OpenAI’s board of directors fired him, Sam Altman is expected to join executives at the company’s San Francisco headquarters Sunday as they push the board to reinstate him, Interim CEO Mira Murati told staff on Sunday morning, according to people with knowledge of the situation.
> This person said there was no one precipitating incident but rather a mounting loss of trust over communications with Altman.
Yet it was apparently so time-sensitive that there was no time for discussion with key stakeholders. This board appears to have been run by the Keystone Kops.
The only stakeholder of note here not on the board is Microsoft, which is not only a strong proponent of the kinds of commercialization and acceleration that the board is supposedly concerned with, but is also one of the single most powerful non-government organizations on the planet.
I’m not saying the board’s decision to remove Altman is or isn’t a good idea, but if they did decide it needed it to be done, running it by Microsoft ahead of time seems like one of the single dumbest things they could have done.
And the board members aren't financial stakeholders. There's really nothing they have to gain from this, which implies that their decision was based on facts rather than some power play.
Only one of the dissenting board members was an employee/founder. The other board members are external. People do not seek out problems like this. Do you think the board members want all this attention on them?
It's kind of crazy to me that everyone gives CEOs infinite credibility and benefit of the doubt when they make a decision, but when these board members, who don't have a cult of personality surrounding them, make a decision within a decision space that is almost their sole job, people flip out and call it emotional, a coup, a power grab, etc. It doesn't make sense.
To me, with the currently available information, it seems quite simple. Altman was constantly at odds with the charter of the non-profit OpenAI and continually was pushing against that charter and performing machinations for his own status and financial benefit that were not in line with the goals of the non-profit OpenAI. It seems quite clear that only one of those things could last. Either the non-profit OpenAI stays so and Altman leaves to go do whatever it is he really wants to do, or OpenAI ceases to be a non-profit and Altman gets to stay. Given it's seemingly the board's prerogative to maintain the non-profit OpenAI, it seems almost trivial as to how this situation has arisen.
It's almost refreshing someone actually called the bluff of a Silicon Valley prophet. But it's not all that surprising everyone in Silicon Valley and their sycophants are doing all they can to try and undermine the credibility of that decision.
Nothing you said disproves what I said. Human emotions trump all. Uncompensated nonprofit boards are typically some of the nastiest boards to serve on. I have no opinion on this board, but nonprofit boards are typically difficult compared to your typical compensated corporate board.
Only read part of what you said but my point was against the comment about board compensations. Human emotions come into play a lot stronger here.
Yeah even though I agree with the COI being a large concern and potentially a justifiable reason for firing him (especially if coupled with other instances of him withholding key information from the board), it's clear that the non-profit board might have emotional reasons aside from the common good.
Even just the "you keep lying to us, bro" emotional motivation might be enough (not necessarily rational) motivation to fire; despite the person being otherwise qualified for the job, loved by employees and investors, etc.
Irrelevant, they explicitly structured the for-profit as subservient to the non-profit and its goal of “safe AGI to benefit all humanity”. Investors knew that structure and charter when they invested.
While we don't know the specifics of the arrangement (under what circumstances Microsoft can back out of the deal), I'm sure they can make it challenging for OpenAI to continue operating at scale when they're reliant on funding (most of the 10+ billion dollars hasn't been transferred yet) and access to unlimited compute. This makes them stakeholders even if they don't have seats on the non-profit's board.
Microsoft obviously doesn't want to pull the plug on OpenAI, which is why they're pressuring the board to rehire Sam. If the board was truly independent and didn't answer to any other stakeholders, why would they even be talking to Sam right now?
I like to think if you invest $10s of billions, you get to at least have an opportunity to discuss major items like this, even if you don’t have legal veto.
How could you even expect any ROI on a 100B investment? It seems like there’s way too much money sloshing around. People have no idea what to do with it.
How can you not make money, especially if you want to be in the business of making and selling chips? If you have that kind of $ you had better be getting returns. In fact, the more $ you have, the easier it should be to get returns. Such is capitalism. Main risk is on technical and execution, tough to bootstrap a chip maker.
Heh, "compete with Nvidia and TSMC". FT apparently having no idea what they're talking about, unless they were really planning to get into leading edge semiconductor manufacturing?!
I think they were really planning to do that. They wouldn't need $100B otherwise. A fabless chip company would cost 10-100x less, even if they ordered millions of chips right away. You just wouldn't be able to spend THAT amount of money. Cutting-edge lithography, on the other hand...
If you’re pitching a deal of epic scale (“AI is here and we’ll be at the center of its hardware”), you’d include roadmaps to complete independence and vertical integration. People want to see numbers for the big win before they place their bets.
So you’re probably not launching there, but of course you use it as the first horizon you’ll try sailing towards.
By making sure they had buy-in from all of the major stakeholders to ensure that it wouldn't cause blowback. I think they failed miserably at predicting the kind of shitstorm that this would cause and if they did predict it then they're even more stupid than they already look.
Of course this was going to blow up. Note that two founders and shareholders in the non-profit were presumably there when this vote was taken and the fact that those two didn't see eye to eye with the board was waltzed over because it was just a headcount. That's naive, to put it mildly. A move like that takes weeks if not months to prepare and even then you have to be super careful. You don't do that by taking a snap poll and hoping nobody else will notice.
Alternately, it was seen as necessary to fire Altman and blowback was seen as inevitable.
Do you think Microsoft, who had close personal connections to Altman and who was a direct and outsized beneficiary of his leadership was just going to be like “Ah, yeah, you guys got a point there. Let’s ditch him.”?
Of course not. They would have used their leverage to prevent his ouster, and the board and its charter would be shown as completely impotent. The cause they were formed over would have been lost.
The firing may also have ruined the effort, but it was a public show of principle rather than a quiet disempowerment. You can see where an idealist who thinks there are real stakes to the cause would choose the public firing of Altman over the muffled suffocation of their principles.
Blowback was inevitable but the magnitude was not. The board absolutely bungled this, full stop. Maybe Altman had to go, maybe he didn't – I don't claim to understand the backstory – but the way the board handled and is handling this was extremely clumsy at best.
I'd chalk it up to good intentions and a lack of experience. But they definitely bungled it, at a level where you have to wonder if there weren't any adults in the room.
> Alternately, it was seen as necessary to fire Altman and blowback was seen as inevitable.
That may well be the case, but I have - yet - to see any evidence of that, and as it stands all I think will happen at this point is that the board will find out where real power is held. Hint: it isn't with them. They're there not because they are free and independent thinkers who will keep AI safe for humanity (even though they may genuinely think so); they are there to serve a PR function (to present an image of a set of safe hands to the world) and to serve as a backstop for any kind of regulator that might want to get themselves into the mix. Essentially they're a fig leaf that has now decided that it holds actual power, and I think that was a mistake on their part.
Unless they can show how whatever Altman was doing directly caused OpenAI's mission to be endangered (and the various stakeholders seem to be extremely conflicted on this) I don't think their decision will stand, and the amount of blowback incurred alone is a good indication of that. In fact I think it will substantially endanger the governance of OpenAI's activities in the longer term.
Note that they should have resigned or acted much, much earlier if they felt that OpenAI was not acting true to its mission, because I really don't see the difference between all of the stuff that has happened so far and what they now claim is the reason for the ouster. It may well be the straw that broke the camel's back, but then it wouldn't have come out the way it did. That mostly seems like a post-hoc reason to try to explain away their blunders.
> Do you think Microsoft, who had close personal connections to Altman and who was a direct and outsized beneficiary of his leadership was just going to be like “Ah, yeah, you guys got a point there. Let’s ditch him.”?
No, and that's precisely why they made a mistake because it may not be a viable action without that kind of support.
A board has a number of policy instruments at its disposal: advisories, votes and resignation. They seem to have believed that, after identifying Altman as the main cause of concern, a vote on his continued tenure was the appropriate instrument. I think that even in a 'simple' non-profit, one not encumbered by a for-profit part that also happens to have outside shareholders (besides employees and execs), that is difficult enough; you need to really have a solid case.
But absent a solid case, a shotgun vote is about as blunt an instrument as it gets, and the use of that instrument is - normally - reserved for crisis situations, such as impending skeletons about to fall out of closets. Then you explain afterwards what was up and everybody will fall in line. That is definitely not happening. And you don't use that approach on someone who has nominally been phenomenally successful at achieving what they set out to do - and that's me speaking as not exactly a friend of Altman's. I think he has ethics issues, and that is precisely why in my opinion this was a huge mistake: Sam will likely come out of this much more strongly entrenched than he was before.
> Of course not. They would have used their leverage to prevent his ouster, and the board and its charter would be shown as completely impotent. The cause they were formed over would have been lost.
Not quite, they could have simply made a statement that they felt that their role as oversight board was made impossible because of the way the governance structure was set up, that they are specifically aware of a number of things that OpenAI is doing that are unacceptable and then to resign.
Plenty of precedent for that.
> The firing may also have ruined the effort, but it was a public show of principle rather than a quiet disempowerment.
That's true, in the end the result may well be identical. But then they've paved the way for an OpenAI without any oversight at all and this is a major problem. I'd much rather have seen that they had used their influence in a more constructive manner because that would have had a larger chance at making it stick. If Altman prevails here - which has a pretty good chance - OpenAI will be effectively without any controls at all. Because obviously the risk of a bunch of placeholders for stakeholders going rogue on Friday afternoon at half an hour to the bell isn't going to be acceptable to shareholders.
> You can see where an idealist who thinks there are real stakes to the cause would choose the public firing of Altman over the muffled suffocation of their principles.
Yes, I can see that. Doesn't make it the smartest move though.
The nonprofit the board oversees doesn't have any shareholders and only answers to its charter. Sam & Greg allegedly sidelined Ilya shortly before all this went down, so Ilya probably felt he had to take drastic action regardless of the blowback.
Well, you can correct it but the 'for profit' arm does have shareholders and those are quite possibly now well positioned to take the board to task for straying outside their mandate by not exercising their duty of care. They will have to have a very good defense from such a challenge.
The for-profit shareholders can only sway the board through the employees who count as stakeholders in the nonprofit, hence the PR campaign. The board's mandate remains upholding the nonprofit's charter—no fiduciary duty to investors to speak of.
> But good luck with that in the real world, as you can see from the unfolding mess.
Any board member who defends a non-profit's mission at the cost of being systemically assailed by "Reputation Management" consultants, the venture class and fellow travelers has my respect.
I expect to see more hit pieces by pet journalists, blog posts and vituperative opinion-column articles sent in on Sand Hill Road letterheads.
Maybe. That all depends on whether that mission was really what was at stake or whether they allowed themselves to be roped into some kind of putsch. I've yet to see anything that proves without a doubt that the best remedy for whatever ailed OpenAI was to terminate Sam Altman and to risk the company imploding. It seems a bit drastic, to put it mildly. If it turns out that they were the last ones holding the fort against the hordes I'll be happy to change my tune on that.
That’s the reading I am getting: MS lawyers really screwed up this one. What kind of lawyer worth their salt would let the company put $10BN into a “capped profit” investment where you’re supposed to treat “investments as donations”? The more I read about it, the more I keep wondering what the lawyers must have been on to not make more noise about this earlier.
I said this in another thread: coming between MS, their money, and a chance to kneecap Google on what looks to be the HotNewThing(tm) is not a good place to be. It's been a while since the power, treachery, lethality and general sadistic nature of MS has been on full display; watch and learn.
Unfortunately, MS is not known for being the world champion in fair play, and this isn’t going to be the first time they try to sink others’ ships to hide their own incompetence (in this case, putting $10BN in a weird “capped profit” entity without understanding what may come of it.)
Yeah okay. People who, for some reason, simp for Sam Altman are expecting Microsoft to be their knight in shining armor who punishes the bad board for doing the right thing.
Microsoft isn’t going to do shit but throw a little hissy fit over their obvious power play (which they staked the future of their company and resurgence of their brand on) going south. They have no legal recourse and OpenAI has brand recognition and market enthusiasm to weather any shit storm MS tried to throw at them.
So no, if anything this is gonna bring attention to the Microsoft CEO from his own board wondering why exactly he put them in a position to look bad and lose shareholder value without actual ownership of this company.
> throw a little hissy fit over their obvious power play
Time will tell. I think some people just remember the depths Microsoft has descended to in the past as SOP let alone in response to a group of ideologues putting their stock price at risk.
Yes exactly, maybe I am cynical but I do not expect Microsoft to say “Ah I guess it is my fault and legally I can’t do much here” and give up. They did not get to where they are without dirty tactics.
> Adam D’Angelo is working on ChatGPT competitor Poe.
> Helen Toner & Tasha McCauley are working with OpenPhilanthropy/Anthropic on GovAI board (former board member Holden Karnofsky left when his wife started to get involved with Anthropic)
These seem like much larger conflicts of interest (working with direct competitors) than working on a company that one day this company might want to purchase a product from.
The board was anemic, lost people and didn’t replace them, and then was small enough that you could fire the chairman of the board and then remove the CEO in a Friday night massacre.
Those conflicts disappear when you look at the OpenAI nonprofit through its aspiration to steward safe AGI no matter where it arises. It’s meant to transcend specific interests and look more like a technocratic standards body, like an IEEE precursor for AI, than a corporation seeking a private edge. It should have representation of diverse interests.
And that’s exactly why its ownership of an apparent VC rocketship in its profit-making entity was becoming so problematic. Suddenly, its critically important transcendence and independence was being threatened by its subsidiary’s outsized and rapid success.
It’s really important to keep in mind that the OpenAI board represents fundamentally different interests than those of the subsidiary developing and selling ChatGPT products. That they may have murdered the latter is not necessarily an accident or mistake.
This is pretty much directly in violation of OpenAI's charter, not to mention how concerning it is to have a CEO putting so much effort into side hustles.
Can you point to what's in violation? I don't see anything at all.
And "side hustles" are not inherently concerning at all. There are quite a number of well-known tech CEO's running multiple companies at once. The only things that matter are a) that the board is aware and feel like they're paying the right amount for the proportion of the CEO's time that they're getting, and b) that they don't involve a conflict of interest. (And generally speaking, being a potential supplier for your company isn't a conflict -- to the contrary, it's a pattern that's been successfully followed before.)
I love it how when a CEO runs 7 companies at once, he's seen as a titan of tech, master of multitasking, hulk of hustle. Everyone is in awe and points to him as an example of awesomeness. But if I, a worker bee, were to get a second full-time job as software engineer at TechCompany2 unrelated to the work of my TechCompany1, I would be a traitor, disloyal, distracted, double dipping, deserving of being fired.
For the vast majority of CEOs, if they tried to start another company as a side-hustle then they'd also get fired (and potentially sued).
For the CEOs who can run multiple companies at once, there are several factors:
- they founded the company and retain a majority of voting rights, so they can't be fired
- they're so valuable to the company that the shareholders are willing to put up with them running multiple companies at once (and shareholders will definitely grumble about it)
As for you: you're not really valuable to the company, so if you try to work multiple jobs at once they have no qualms about replacing you.
If you were a valuable enough IC then you could definitely get into a situation where you're working with multiple companies at once.
People do this with consulting arrangements, where they create a consulting company and are able to work with multiple entities at once because they've given themselves enough leverage to do so.
The issue isn't a double standard. The issue is that you haven't put yourself in a situation where you have enough leverage over the company to work multiple jobs.
They’re complaining about the emotional rhetoric around employees working multiple jobs, not the actual mechanics behind why companies disallow it.
It sometimes seems that they aren’t as harsh with executives.
However, IMO, when companies disallow executives from working multiple jobs they use the same sort of rhetoric: “the CEO is distracted, directionless, uncommunicative, unable to see past this conflict of interest, etc.” The board will paint the CEO as some dilettante fop.
Look at what happened to Altman: he’s been fired and they insinuated that he’s a lying, double-dealing dirtbag.
I know that sounds callous but it isn't rare at all for high level functionaries to hold positions in more than one entity, the board of directors of OpenAI is an excellent example of that. And some of those are arguably already conflicted.
Worker bees get paid to work, and companies would like to get a certain amount of time for their money because that's what it says in your employment contract. You don't necessarily have to agree with that, but then you have to carve your own path rather than hitch your horse to someone else's wagon.
Most salaried engineering contracts don't specify anything about time besides an undefined "full-time". I've had several companies very happy with my production at <30 hours per week. But the contracts state very clearly that I should not take another job.
That's exceptional. Usually a minimum number of hours is specified, along with your compensation, any overtime arrangements and so on. That's also where the 'butts in seats' post-COVID backlash comes from: it may not be specified, but there are clear expectations.
The exclusivity clause is fairly common but may not be enforceable depending on where you live, especially if it is for a part-time job (but in IT most jobs are full-time).
I don't think it's exceptional. I've signed probably a dozen engineering employment contracts and don't remember any of them specifying minimum hours or overtime. Isn't that the definition of salaried employee? I agree that "butt in seat" hours can be an expectation but usually it's not in the contract.
It depends on your goals and perspective. Cynically, it could also be true. Viewed cooperatively, he provides an environment for you to do the work you trained for and expressed a preference for doing.
That really depends on the company. In many companies, work and worth don’t correspond to the org chart. And everyone knows who’s actually doing the work and why the CEO is there/how they got there. Most companies aren’t large tech companies.
I think we are talking about different things in very different contexts but yes, there are exceptions to everything.
My point is not about relative contributions of value necessarily, which are difficult to compare, but about the structure of employment. The value of a CEO who is doing a good job is not measured by the number of hours they spend on a project. They cannot iterate on solutions the way an engineer can.
The skillset and behavior are different at the meta, macro and micro levels.
So, if a CEO was talking to investors about a new venture (or a set of new ventures) in a closely related field, perhaps even sharing some details about the existing company with those potential investors, would it be fair to say that said CEO was not being entirely candid with the board?
Those are some big "if" statements. I think we would have to see what information was disclosed by Altman to the board with respect to these discussions before making judgements. The precise details of such disclosures may be a matter of legal interpretation.
Or, more likely, all of this will get settled behind closed doors and we will never really know.
>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Concentration of power, mostly. No power greater than a fab for that right now.
Not a lawyer, but I don’t think you’re allowed to take a company’s intellectual property or research results and use them for a side venture without proper and timely disclosure. Even if there is no direct conflict of interest at the time it happens.
Unless you specifically have a contract that allows you to avoid disclosures. Or have a specific agreement that transfers intellectual property or research results to you.
No, they will just add it to their reserves. They don't distribute the profits, so no dividends (no shareholders), bonuses or stock buy-backs (no stock). But you can totally use the profits made this year to fund next year's costs.
The obvious argument is that since we have not solved alignment, accelerating AGI is unsafe. I’d wager that is the general line of reasoning from Sutskever and the board.
“We are growing quickly enough and a GPU shortage gives everyone more time to catch up on safety research” seems like a logically consistent position to me.
If anything, it seems to me that unlocking OpenAI and the broader market from what’s been an effective monopoly through more chip competition would be inline with the charter.
> Technical leadership
> To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
What, to you, is laugh out loud funny about the parent comment? They gave a counter argument with examples from the charter and you respond with "LOL!"? How about responding with a better argument?
>They gave a counter argument with examples from the charter
Examples is a very generous word. They merely just quoted parts of the charter and pretended the argument would stand on its own.
Watch me literally do the same thing.
Here are all the parts of the charter Sam violated, and I'll even do one better and provide insight:
>Long-term safety
He has been very clear about his position that OpenAI should press forward, despite risks, and used faulty equivocation to justify said position. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” - Sam
>Technical leadership
Sam doesn't value technical leadership, as his history at other companies shows, and he isn't technical himself. He will immediately pivot to cashing out the moment the time is right. OpenAI is a steward of the domain, not a profiteer. Attempting to solicit special arrangements with other vendors isn't going to move the domain forward; that happens through research, not special kickbacks.
>Cooperative orientation
The board clearly didn't believe he was being cooperative with them, and perhaps with the larger AI community. Given his positions on safety and progress, it's not surprising to see him ousted.
Since my comment is easily twice the effort of GP, and I have now baselined my comment with the standards you clearly see as valuable, I look forward to your constructive input.
Which I doubt there will be, which is what was funny about the original comment. All that it deserved was "LOL."
This is not even on topic. You seem to think it is literally about the risk of a magical incantation of AGI that someone was going to accidentally utter. Instead it is about working the conversation for support.
> Sam doesn’t value technical leadership
He doesn’t prioritize technical decisions over all else, which is what you want from an organizational leader. He has hired and enabled some of the best technical talent in a generation to do things no one thought were possible.
> The board didn’t believe he was being cooperative
“Being cooperative” as defined here is so naive as to be comical on its own. Internal politics are a constant presence. His job is not to be maximally cooperative without regard for strategy.
The only thing that is clear to me is that non profit structures as presently conceived are totally inadequate for the use OpenAI has put them to and in particular are not up to withstanding growth pressures.
You’re right, I shouldn’t have gone absolute. However, many Googlers also thought many other things were possible that weren’t, so from here maybe we devolve into discussions of the relative cost/value of Type I vs Type II errors.
Will you at least admit that this comment of yours, with quotes from the charter and thoughts about each quote, is contributing more to the conversation?
I’m surprised that nobody has brought up that Altman was busy this summer dumping his nuclear startup Oklo onto the public market via a SPAC. He seems to be a talented fundraiser and dealmaker and not so much of a singleminded visionary who wants to spend the next decade perfecting the details of AGI. Maybe the right place for him is focusing on his next few startups.
He literally founded a defunct social network and then hung out in YC world before falling into AI at the right time to raise money. There isn't a single element of magic or secret sauce to Sam Altman in the AI space. So yeah. Have him mess around in more startups and raise cash for them.
I think he's also talented at assembling great teams. He has a deep personal network of incredibly talented people to draw from.
It's really hard to pull the right people together and sweet talk them into walking away from all the other incredible things they're working on, but it might be the single most important thing that an executive at a new, growing company has to do.
For some reason this reminds me of the olden days of Railroad Tycoons. Like those old timey movies where a bunch of land from the common folks is bought out for super cheap prices because the guys building the railways know where the stations are going to be placed.
This also helps explain why so many SV types were jumping to Altman's defense on X/Twitter. They are probably salivating at the potential investments that were in back-room discussions involving Altman and feel the need to prop those up. For example, there may have been literal billions agreed upon in principle but not yet signed amongst several ventures. The tarnishing of his reputation could scuttle much more than OpenAI if this report is correct.
If I were a pure AI researcher and my entire goal in life were to build AGI, I would be loath to be in a company that was engaging in such shenanigans. I'm not for or against either side in this dispute, but I can totally appreciate why each side would want to retain control of what has been built so far. It is just that there is a fork in the road here and only one vehicle.
This is still common practice in places where zoning is not fully developed. Politicians (or their friends) buy cheap ag land, and a few years down the road it's rezoned and the politician can sell it for massive profit. In my country there's a me-too-like cascade happening right now around this very issue.
> Altman had been traveling to the Middle East to fundraise for the project, which was code-named Tigris, the people said. The OpenAI chief executive officer planned to spin up an AI-focused chip company that could produce semiconductors that compete against those from Nvidia Corp., which currently dominates the market for artificial intelligence tasks. Altman’s chip venture is not yet formed and the talks with investors are in the early stages, said the people, who asked not to be named as the discussions were private.
Using your celebrity status as CEO of a non-profit to raise money for multiple related for-profits does seem like you may be more dedicated to personal financial motivations than that of the non-profit's mission and charter.
No indication this was the trigger event, but reasonable that the board took notice of all these things. Even viewing this in the best possible, most innocent light, it still looks bad and Sam should have known better.
Not exactly. Ilya allegedly disliked Sam fundraising for side hustles off his profile at OpenAI. It sounds like a confluence of factors precipitated the ousting.
Can someone explain how it's ethical to be working on multiple things like this at the same time (e.g. OpenAI, AI chips, that eye orb crypto thing), but the benefits of each not accruing to the other investors, only to that individual?
I've always had similar questions about folks like Musk and Dorsey. Is it simply lack of proper governance? Is it something written into their contracts? Is this just a Silicon Valley thing? I can only imagine that a lot of people would like to work on many side projects, but their employment agreements forbid it.
> I've always had similar questions about folks like Musk and Dorsey. Is it simply lack of proper governance? Is it something written into their contracts? Is this just a Silicon Valley thing? I can only imagine that a lot of people would like to work on many side projects, but their employment agreements forbid it.
Folks like Musk and Dorsey have a lot more leverage when they're negotiating their contracts but when it comes to public companies, they get sued all the time over their decisions. The lawsuits are just very drawn out, boring affairs that normally get settled during discovery, before any details actually become public so we don't often hear much about them from the media.
Musk was sued by tons of people over self dealing when Tesla bought SolarCity, for example.
It’s “ethical” in that it’s broadly accepted, and it’s accepted because those CEOs are usually talented dealmakers who deliver huge opportunities to the companies they lead. It’s not new or unique to Silicon Valley. You can see these personalities pop up in commercial and political dealmaking throughout pretty much all of history.
You can get away with a lot when you’re effective.
Can you explain why it would be unethical, if the contracts with those investors do not explicitly state there will be exclusivity?
Investors buy shares (or similar), and maybe some other rights or benefits with money. They do this with a contract, like any other business deal. There is nothing unethical that I can see about any action vis-a-vis the investors as long as the contracts are honoured (and the action is not otherwise unethical).
Directors and board members do have certain (e.g. fiduciary) duties which may require conflicts of interest to be disclosed and approved, but in the case of Musk and (it seems) at least this particular Sam, investors and others to whom they owe these duties can be quite willing to make such approvals.
P.S. for what it's worth, not all companies forbid other work or side projects (I know this because we don't, unless you're planning on simultaneously working for a direct competitor).
To me, the situation is not necessarily about ethics, although ethics are important to me. It's about conflicts of interest. You cannot claim to be free of conflicts of interest if you stand to personally gain in one venture from another, by pitting them against each other or propping one up with the other.
If I'm a government official, then it's a conflict of interest if I own a construction company and increase the budget for specific construction projects to give my company a boost. This obviously happens in our government today, but that doesn't make it any less of a problem.
The other perspective is one of grifting. It's similar to con men, who just move from thing to thing gathering up chips at the table. These investors and so-called entrepreneurs are just like that. They are interested in upping their profile at all costs. Their investments and companies are just a means.
I have worked in jobs where I needed to report stock holdings above a certain amount. Imagine a scenario where a low-level worker could move a billion-dollar company's stock enough to profit from it just by buying its products on the job. Such a scenario doesn't really exist, but the conflicts of interest are still tracked. It is much easier for high-profile executives to affect price movements, and that's why it's more, not less, important for high-level employees.
I agree with that. Government officials and politicians should have effectively no conflicts of interest because they have no real way to seek valid approval from the electorate for them.
For companies, if you have investors, employees, and (implicitly, because they continue to do business with you) customers all supportive of someone remaining in their position in spite of real or potential conflicts of interest then I think it's reasonable to say those are acceptable — and certainly not unethical — conflicts of interest.
The board deserves and has the right to a say, of course, but the extent to which it can justify a position that goes against those other groups is at best debatable even if they are not wholly comfortable with said conflicts. After all, the board should serve the company (customers, employees, shareholders/investors) and not itself. This [1] timeline of OpenAI board changes also seems to suggest some level of opportunism, and the possibility that at other times, or had departing members been replaced, it may not have been possible to get board consensus for firing Sam, either.
Amazon is currently facing a lawsuit for pretty much this exact scenario. Investors are suing because they selected Blue Origin for Kuiper. They also selected ULA and Arianespace for some of the launches, but apparently did not even consider SpaceX. Although note that Bezos is not CEO anymore. He was when Kuiper started development, but as far as I can tell, not when they selected launch providers.
It seems more likely Ilya is a LessWrong type of person who saw Sam as a threat to his vision. He doesn't like the commercialization aspect, whereas Sam would prefer OpenAI didn't go broke.
"Not going broke" is a bit different from announcing an "open" venture, then, once it's successful, doing everything possible to make it closed and stifle competition (through regulatory capture).
> once it's successful, doing everything possible to make it closed and stifle competition (through regulatory capture).
Half of the current 4-member board is also involved with GovAI, which is dedicated to AI safety research and advocating regulation. Aren't they more likely to think that OpenAI has been extremely reckless in publicly releasing models and research like they have?
They weren't the ones who went before Congress, asking for regulations that would prevent new startups taking root. (Something like a minimum of 25 employees dedicated to moderation against bias, as I recall. That would be trivial for OpenAI, but impossible when on a budget.)
I have the feeling that, after the OpenAI fiasco, hard times are coming for the kumbaya type of AI researcher at BigCo who prefers to openly share all research, doesn’t care about commercialisation, and wants to slow down the AI train behind the scenes for safety considerations.
The last 10 years of deep learning progress are all thanks to Open Source and Open Research. 95% of LLM progress in the past year is also thanks to open models, which allowed the field to work on things like QLoRA, long context lengths, RAG, and so on.
Big Cos benefit the most from this since they have the resources and data to scale up these methods and a large user base to deploy it to.
It seems that startup money has poisoned this culture anyway. Every week, I see papers come out that are not necessarily fraudulent, but are contorting themselves to toot their own horn for yet another framework or GPT-4 beating finetune.
So not being an absolutely amoral, predatory megalomaniac is labeled as being a "kumbaya type" now?
A tiny subset of psychos will be the downfall of humanity with tech getting ever more potent, and it's especially sad when most of the world just wants to live a happy life with their families, not take over the world like these ghouls.
I miss the days of actual counterculture among hacker types, of doing stuff for the benefit of the human race or one's own community.
It's a strange characterization to make of AI researchers who openly share all research that they want to "slow down the AI train" when it appears much more likely that people who are developing in a silo are slowing overall progress.
I really don’t see the issue if OpenAI makes rational buying decisions for its hardware. As long as they aren’t paying a premium in terms of model generations per dollar or per MWh versus Nvidia’s GPUs, it seems obviously a net good thing for OpenAI given how much of a gating factor hardware availability has become.
Also, they know all their own special tricks that could be implemented directly in hardware to dramatically reduce energy consumption or performance per dollar, and might not want to tell Nvidia all that stuff. Sam is already so rich, and he really doesn’t strike me as being primarily motivated by personal financial gain at this point. I just can’t see this all being some ploy to extract money from OpenAI by forcing them to buy his own company’s hardware at unfavorable prices.
> Also, they know all their own special tricks that could be implemented directly in hardware to dramatically reduce energy consumption or performance per dollar, and might not want to tell Nvidia all that stuff.
You seem to be saying that the good reading of the situation is that Altman will take OpenAI's internal secrets (so secret they don't even want to tell their existing chip suppliers) to his own side hustle. But that's not actually good; in fact it's exactly the kind of thing that a CEO should not be doing.
If OpenAI wanted to influence non-Nvidia chip makers into building the perfect next-generation chip, not one of them would turn down that call.
> Sam is already so rich, and he really doesn’t strike me as being primarily motivated by personal financial gain at this point.
The man started a fucking cryptocoin just a couple of years ago. That doesn't exactly scream "not motivated by personal financial gain".
> I just can’t see this all being some ploy to extract money from OpenAI by forcing them to buy his own company’s hardware at unfavorable prices.
Agreed on this. The company would never ship anything for OpenAI to buy. But what he appears to be doing is trading on the OpenAI name to hustle stupid money into the biggest money pit since WeWork.
Don't know if it is a conflict or not, but I think he should do it. The dude knows the problems blocking advancement better than anybody else. I am surprised everyone thought he would do another LLM company.
Do the chips, drop the prices, make it cheaper to develop AI => then he can return to AI software if he wants, though I highly doubt it.
From what I understand Altman is a business person and not an engineer. Can someone explain to me why everyone is freaking out about his departure? A salesman is gone, who cares?
I am first and foremost an engineer (I’m also the CTO of a very small startup studio and, so, also do a lot of business-y things). I traditionally despise “we are all a big family” work culture. Nonetheless, I think it’s foolish to not understand the power of people with the right charisma and driving force.
The halo effect, the reality distortion field, whatever you wanna call it, is quite real. It makes people go above and beyond. Some people are just very good at this, it’s almost innate to them. I’m not saying we should glorify them or not hold them accountable, but to think that a business person leaving, as opposed to an engineer, is not grounds to “freak out” is, to me, a bit naive.
I have had the pleasure of working with business people just like that. Very little engineering background, but they were born leaders and motivators. Just like on a smaller scale you can get the same vibe from certain engineers. Those guys and gals that make you feel like you’ll pull through no matter what.
Seems to me like Sam Altman has that kind of vibe. For better or worse, he drives people!
We often forget that it is people that build software. Not algorithms or robots. And who moves the people to build that software and to never give up? Other people, leaders — engineers or not.
What am I missing here: Sam Altman has zero charisma or cool factor. Every talk I've seen him in, he comes off as lethargic and sluggish. I get zero sense of passion or rallying drive around the hype of AI from him. Maybe he's good at fudging numbers to a specific set of people in a private room, but as a "reality distorter" for the masses he seems like a joke.
Perhaps he is not a reality distorter for the masses, but he is for the ones that work under him?
Clearly several people felt strongly about him enough to threaten (and go through with!) resignations.
Do the recent developments, with around 700 of 770 people calling for the board's resignation, answer this in your opinion?
I am genuinely asking for your view, not trying to sound sarcastic, as you may consider that this is just a vague threat and doesn’t count. To me it does, although obviously I wouldn’t say 700/770 is that fraction. The fraction of people feeling strongly about him in particular is a subset of this. It must be lower, but I’d say still quite significant — definitely above 20% of the company if I had to bet!!
There are already many startups aiming to compete with Nvidia here (Tenstorrent, Groq, etc.). And of course AMD as well. What does Altman have to offer?
So, even if the board was justified in its actions (quite possible), the way in which they have gone about it will be studied for years as what not to do. This was bush league level of PR mistakes and miscommunication. And the fact that OpenAI is a total $90B juggernaut of a company made this amateur hour so much worse.
Alternatively, the board is up against someone who is very good at spinning any story in their favor. They could come off as amateurish when they're up against someone as charismatic as Sam Altman, but it's unlikely that very many boards could have done better given the circumstances.
I mean, waiting 30 minutes so the news didn't drop before market close and piss off Microsoft is such an obvious thing that it's staggering they didn't handle it correctly. Immediately pissing off your biggest financial supporter is pretty unforgivably incompetent.
Not that different from releasing the news on a Friday after-hours, honestly. Microsoft would get pissed off by the shift from profit-seeking either way.
Exactly. So much of the commentary surrounding this is assuming that the board is motivated to act like the board of a startup, and therefore what they've done is amateurish because it would kill a startup. But the remaining members of the board are all very clearly not invested in OpenAI becoming an insanely profitable company—they're on the board of a nonprofit, in charge of the mission and the charter, not in charge of maximizing profit.
I'm sure that there are things they could have done better even with that context, but I think most of the confusion stems from people on here not understanding the motivations of these people.
I feel like once the for-profit "genie" is out of the bottle, it can't be put back in. How many employees have they hired since 2018 that are the absolute best of the best of the industry? How influential was the massive equity package in their employment decisions?
Personally I saw the for-profit subsidiary as a pretty decent middle ground. Obviously this action took place because the current board felt the company moved too far in the for-profit direction. While they do have the technical right and ability to fire Sam and Greg, and it was the "right" thing to do to bring the company back to its non-profit roots, do you really think the employees saw it the same way? How many hours a week do the board members other than Ilya actually contribute to OpenAI? None of the board members besides Ilya were even around before the for-profit subsidiary was created, so can they truly align the new OpenAI with something they didn't experience?
Also, an aspect I didn't really see mentioned anywhere: they had 3 board members leave in 2023. How would those board members have voted? I'd guess probably not to fire Sam, but I have no clue. Also, why did they not add more board members when those left? Did Sam and Greg want the board expanded again, but the other 3 kept voting not to expand and to keep the power?
Because they don't exist in a vacuum and relationships matter, even to a charity, and not pissing off a trillion dollar company for no good reason might be beneficial.
> Because they don't exist in a vacuum and relationships matter, even to a charity, and not pissing off a trillion dollar company for no good reason might be beneficial.
look at it this way, pissing off a trillion dollar company for no good reason is an existential risk no matter what org structure you have.
Why would it be beneficial if the mission of the charity is explicitly to keep AI technology from being under the influence and control of private trillion dollar companies?
But the servers are basically there to support the product that they’re selling.
What if they feel like they shouldn’t be focused on selling a product?
In the 20 threads about this topic on HN over the last few days, I’m really kind of blown away by the inability of people to see a different worldview than their own.
Wanting a stake in hot in demand super valuable tech company that’s growing rapidly is a value judgment. Not everyone wants that. Some people actively think companies like this are harming society.
> But the servers are basically there to support the product that they’re selling.
Everything they do needs LOTS of compute. GPT-4 alone took 100 million GPU hours to train. And they are constantly training new models. And even non-training research is VERY expensive[1]. Without compute that organization is as good as dead.
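To put a rough number on that compute bill, here's a back-of-envelope sketch; the per-GPU-hour rate is a hypothetical assumption, not a figure from the article or this thread:

    # Hypothetical back-of-envelope: what 100M GPU-hours costs at an
    # assumed cloud rate. The $2/GPU-hour figure is illustrative only.
    gpu_hours = 100_000_000
    assumed_rate_usd = 2.00  # hypothetical price per GPU-hour

    training_cost = gpu_hours * assumed_rate_usd
    print(f"~${training_cost:,.0f} for training alone")  # ~$200,000,000

Even at a much lower negotiated rate, that's a bill only a deep-pocketed backer or a commercial revenue stream can cover.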
> Yeah they also have literally the most valuable and in demand new tech product in a generation.
> What if they feel like they shouldn’t be focused on selling a product?
Investor interest stems from focus on product. Having the most valuable and in-demand product in a generation doesn't matter if you run it into the ground.
> The idea that Microsoft is somehow the one with the leverage here is fucking delusional yet I see it in every thread over this weekend.
If you think that anyone will give billions of dollars to an organization that fucks its investors and is that unstable, you are the one who's delusional.
What is MS going to do? Ask for their money back? Kick OpenAI off their cloud?
If MS takes their ball and goes home because the OpenAI board made a decision they didn't like, it's better that that happen sooner rather than later, eh? The whole point of OpenAI is to put human values before the profit motive, no?
They can just stop giving them new GPUs, undercut their API prices by a significant margin (they have the same models!), set up a transition pipeline, etc. They can cause OpenAI A LOT of pain if they want to.
Yeah, eh? MS just literally offered all OpenAI employees jobs. And most of them seem to want to follow Sam Altman regardless of whether it's at OpenAI or MS. Altman has already joined MS. I'm speechless.
Did they miss? Legally, the board majority controls both the non-profit 501 and the ‘owned’ commercial for-profit entity. Those 4 board members only lose if they resign; otherwise there is no way to get rid of them.
I must admit that my opinion/perspective has changed on this issue in the last 35 hours. I believe that when AGI is achieved it can help with serious societal, scientific, and environmental problems, so I am by nature ‘full speed ahead.’
However, I now think that the board performed its legal fiduciary responsibilities and acted correctly. I changed my mind late Friday night after reviewing the 501 charter. It is the board’s obligation to act.
I am also a big fan of Sam Altman and I can’t wait to see what he does next.
Given the recent updates, yep they definitely missed lol. That's a lot of "openai is nothing without its people" tweets, including from the interim CEO...
Support of the staff for Sam Altman is impressive and well deserved. But legally, the board does not have to cave in. I am really curious what will happen this week!
There's a lot of speculation that Sam Altman will return to OpenAI. There are publicly disgruntled employees and high level employees resigning. The executives of the company are reportedly meeting with the board about reinstating Sam Altman. I definitely don't think that's ideal for the board.
This story was obviously seeded by someone in Altman's faction, or at least someone sympathetic to him. Their bluff got called at 5pm and nothing has been heard since except crickets.
Full legal ownership of what? How many will jump ship by the end of next week if Sam isn't rehired? What are the prospects of them receiving their next funding round? Have you considered how much more difficult attracting talent would be? Yes, they have leading tech now, but will they be that far ahead in the foreseeable future, when B100, MI400 and TPUv6 come online by the millions and they don't have enough money to afford them?
I'd say that the mistake was failing to announce the conflict of interest in the memo, if it actually exists. Announcing it on a Friday was also a mistake, but there might be one or two circumstances where delaying the announcement could cause greater harm to OpenAI, should a conflict of interest be present. Other than that, no one will care about this in a month.
Microsoft is equally amateur. Even I, a layman, can take one look at OpenAI's ridiculously convoluted structure and its laughably threadbare and ill-equipped board and know something is wrong.
Microsoft should have looked at this and forced them to clean up their act before getting in bed with them. Now they're embroiled in the bush league shenanigans.
Your link makes it very clear that the for-profit arm is wholly controlled by the non-profit arm, and is bound to pursue the mission of the nonprofit. It's not what would traditionally be called a "charity", but the for-profit arm exists only to serve the mission:
> So we devised a structure to preserve our Nonprofit’s core mission, governance, and oversight while enabling us to raise the capital for our mission:
> The OpenAI Nonprofit would remain intact, with its board continuing as the overall governing body for all OpenAI activities.
> A new for-profit subsidiary would be formed, capable of issuing equity to raise capital and hire world class talent, but still at the direction of the Nonprofit. Employees working on for-profit initiatives were transitioned over to the new subsidiary.
> The for-profit would be legally bound to pursue the Nonprofit’s mission, and carry out that mission by engaging in research, development, commercialization and other core operations.
I made a similar comment a few days ago which I’m expanding on here, but looking to be involved in the actual development of AIs is very risky. These are, then, the organizations that will face incredible scrutiny from regulators and the public and will grapple with the massive ethics issues of developing digital intelligences. They are the “face” companies.
But to own a piece of the “second companies” — the industries that will pop up from the disruption — that’s where the real money will be made. And it’s primarily because the optics of making trillions inside of an AI creating business is essentially untenable investor risk. It’s also just common sense to spread your risk as an investor.
In my mind OpenAI has always been the “profit limited” generator of the change. It’s the lamb not the wolf. Owning a stake in OpenAI itself isn’t important. Creating the change is.
Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI is where profits are made. And it’s such a massive disruption you don’t even have to be that close to the source, though it helps.
OpenAI can’t become a profit center while it disrupts all intellectual work and digitizes humanity’s future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough to survive politically or take profits ethically.
Now the real issue is whether you should be developing the technology with such incredible disruption and be able to position to take profits at all. But ethics are a side concern and the system we are in clearly isn’t prepared to grapple with the impact properly, other than to catalog people for minimum survival incomes. Even now while corporate profits are high, and the stock market is booming, most people on Earth are not sharing in the bounty. And personally, I don’t think we can rely on the architects of disruption to create a sustainable and ethical future that’s fair for everyone no matter how fair or ethical their declared intentions. Even if they are morally pristine they are mortal, and that kind of concentrated wealth and power is a recipe for incredible, unstoppable tyrants.
Imagine the General Artificial Intelligence system created by Sam Altman with billions of Saudi Arabian oil dollars. Allah Akbar and Jihad as core values. Truly an apocalyptic AI system. AI technological singularity with Islamic petrodollars.
You can't use the same chips as Nvidia without licensing them. Creating a new type of GPU/TPU is a complex endeavor that involves much more than just a single chip.
Perhaps that's what they are selling, but ASML lead times are many years long. It sounds to me like a WeWork/SoftBank-style heist: taking advantage of those who have more money than sense under cover of the current LLM/AI hype.
This is my take: Altman was fired because there was no other choice. OpenAI's charter is fundamentally incompatible with the hypercapitalist model and its believers. I'm sure the board tried to dissuade Sam from going down a certain path, but being an SV capitalist, there was no way to dissuade him. Given his own level of power and what I think was a superior hand, he ignored the board. At that point, the board really only had one move left, which was to fire him. I don't think it was the wrong move for the board, but the way it was done is really the big issue here.
My counter: Altman is correctly aware that the hypercapitalist path is the most probable path to creating an AGI, since the amount of capital/investment for the compute/architecture will be massive. Vertically integrating with hardware seems very logical for solving a technical problem as big as AGI. The entire charter becomes moot without AGI, and the current path of relying on outside funding (Microsoft) and suppliers (Nvidia) only makes their overall mission more difficult.
Their current model allows for hypercapitalism with constraints and has been successful. They still raised billions. It remains to be confirmed what happened behind the scenes, but it's possible Sam wasn't honest about these dealings. The correct way to have gone about this was to set up that hardware co under the same umbrella as the OpenAI co. Sam is playing 3D chess here, and the board has to fold or make their play, even if their hand isn't great.
This is honestly completely fine, and I don’t understand how people can think this is a conflict of interest?
NVIDIA has the most ridiculous monopoly on GPUs right now. Their RTX 4090 is really fast and would be great for clusters, but NVIDIA driver-locked it from sending data at high speed to other GPUs, which prevents it from being used in clusters. This is not a one-off thing; NVIDIA regularly does stuff like this to piss off consumers, yet we are at the mercy of buying NVIDIA chips and making it a trillion-dollar company. There were reports that the H100 (the latest and fastest known AI chip from NVIDIA) costs only $3,000 to make while it is sold for $30,000. Everyone is desperate to add some competition to this industry. If I were CEO of OpenAI, even I would be desperate to create another chip to compete with NVIDIA and end this resource reliance.
It’s not obvious that this effort should be started within OpenAI. It may turn out to be a distraction for a startup doing really well; many large companies with much more money to spend have poured huge resources into building their own chips (TPUs, Tesla Dojo, Intel Arc) and still NVIDIA is uber-dominant. I think it makes complete sense to start it outside of OpenAI. It would be a conflict of interest only if:
1. The chip is of poor quality and Sam Altman forces OpenAI to buy it anyway to enrich himself. I think the possibility of this happening is close to 0; nothing I’ve seen and heard of Sam makes it seem like he’s someone who would do something like this.
That said, we are optimistically 2-3 years away from such a chip being developed; it’s too early to level any conflict-of-interest allegations! As an ML practitioner I for one strongly welcome Sam Altman’s chip startup, just as I welcome MS building their own chip, Tesla’s Dojo, Google’s TPUs and anyone else who wants to challenge NVIDIA.
> There were reports that the H100 (latest and fastest known AI chip from NVIDIA) costs only $3,000 to make
These sorts of comparisons are silly. If it were only $3,000, then everyone would have made them. It would be made and sold at the local Best Buy. The reality is that it costs a lot more than that: R&D, the fab, etc., plus the chassis it goes into...
Maybe I'm crazy here, but why would this be a problem? Assuming Altman still had enough time to perform his duties as OpenAI's CEO, what's wrong with developing other businesses that might help OpenAI lower its costs and achieve its goal of broadening access?
One concern I could see is the source of funding. But OpenAI's COO is on record saying that Altman's firing had nothing to do with "...anything related to our financial, business, safety, or security/privacy practices." So that seems to indicate they wouldn't have cared.
Having a really hard time unpacking all this. This article is very speculative.
People have mentioned this as a conflict of interest with OpenAI, but worse is that it runs afoul of the CHIPS Act. Biden administration has made it clear that the semiconductor supply chain is a national security interest.
Microsoft is also in big trouble with the US Gov with their looming $30B tax liability and their gross negligence regarding Azure [1] leading to the US getting hacked. OpenAI being hosted there could be seen as yet another liability to US security.
My bet is that this OpenAI story originated much higher than the board of directors…
Given the mass amounts of saber rattling about Taiwan and TSMC, I have to wonder how a deal with the Saudis would play out from a geopolitical perspective.
This whole episode gives the lie to this concept of building the company for the good of humanity. The real players here care about humanity about as much as Elon Musk does. Altman has the power because he can make (or lead others to make) a thing that’s worth billions of dollars. All this nonprofit org chart craziness and good of humanity is horseshit and always was. Safety, justice, corporate governance, the law generally, effect on the environment, will always be second to AI’s capacity to generate money for those who make it.
And the funniest part is OpenAI has been refusing to hire remotely. That’s their right, but I wonder how many office days a week Sam was there with all of his other shenanigans
OpenAI CEO Sam Altman says the remote work ‘experiment’ was a mistake—and ‘it’s over’
Offices are for (human) resources, where they are neatly herded and monitored and managers can look at the office every now and then and enjoy the sight of toiling crowds. It's not for the elites.
> He further added that OpenAI’s some of the best talents are working remotely. He said, "some of our best people are remote, and we will continue to support it always, so please don't let hating SF stop you from applying to OpenAI! I don't like the open air fentanyl markets either.”
If you're working on a ton of side hustles on company time, that might well be part of why you worry your employees will do the same unless you can find ways to keep them on a tight enough leash.
Not start a scammy crypto coin, that’s for sure. Stronger push of societal acceptance of public key cryptography everywhere would be a first good step.
Non-humans could potentially have a public key; I'm a fan of switching to public keys in general but it doesn't solve the problem of assuring "unique humanness".
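A minimal sketch of why: any process, human or not, can mint as many valid keypairs as it likes (shown here with the Python `cryptography` package), so holding a public key proves nothing about unique humanness:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives import serialization

    # Any script can generate an arbitrary number of valid "identities".
    for _ in range(3):
        key = ed25519.Ed25519PrivateKey.generate()
        pub = key.public_key().public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )
        print(pub.hex())  # three distinct public keys, zero humans verified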
Governments do a pretty good job of verifying "unique humanness", but I suppose crypto innovation, fancy iris-scanners and a sprinkling of free-market fairy dust is what we really need in this space.
I think the point GP is trying to make is: just because he was working out of an OpenAI office doesn’t mean he was necessarily working on OpenAI business; he could have been remotely working on some of his other ventures.
No, you're moving the goalposts here. First, it was "he doesn't go into the office", and now it's "even if he does, maybe he's not doing what he should be."
It is sort of a nicer definition, though. The coffee shop is a workplace for the baristas, so maybe we can return to the workplace by going there instead.
These arguments are exhausting. If you are important enough affordances will be made at lots of companies.
This is such a terrible argument though. Executives in practically every industry are all over the map in their efforts and roles. They might serve on different boards or as advisors. This is different from someone who works in a single function, such as a product engineer.
btw, this is only controversial because the board chose to make it controversial
there is nothing “worse” about it because OpenAI is a non-profit and would likely contract with this venture
the regulations against this kind of thing simply require
A) disclosure, for potential future donors
B) low thresholds of ownership to avoid self-dealing regulations, which all VC-backed companies are compliant with due to share dilution
the secret behind quick, impressive fundraises that launder founders’ reputations into legendary status is that their non-profits invest as the “friends and family” or “lead investor”, and then outside investors follow suit, which dilutes the non-profit’s and the founder’s ownership percentages below the levels at which self-dealing rules would bite.
(This isn’t really a game that people can organically play. You either have deep pools of capital you control, or you don’t.)
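To make the dilution mechanism concrete, here's a toy sketch with entirely made-up numbers; the ownership threshold is illustrative, not an actual regulatory figure:

    # Hypothetical: a founder-affiliated nonprofit takes an early stake,
    # then later rounds dilute it below an (illustrative) self-dealing cutoff.
    nonprofit_shares = 1_000_000       # seed stake held by the nonprofit
    total_shares = 2_000_000           # shares outstanding after the seed
    THRESHOLD = 0.35                   # made-up ownership cutoff

    for round_name, new_shares in [("Series A", 1_500_000), ("Series B", 2_500_000)]:
        total_shares += new_shares     # outside investors get newly issued shares
        stake = nonprofit_shares / total_shares
        status = "above" if stake > THRESHOLD else "below"
        print(f"{round_name}: nonprofit owns {stake:.1%} ({status} threshold)")
    # Series A: nonprofit owns 28.6% (below threshold)
    # Series B: nonprofit owns 16.7% (below threshold)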