OpenAI's misalignment and Microsoft's gain (stratechery.com)
463 points by todsacerdoti 5 months ago | 282 comments



I can’t believe people are cheerleading this move

* Tigris is DOA - partly because it would piss off the MSFT board, but mostly because the SEC would arrest Sam, assuming he's an officer at MSFT. He could maybe be a passive investor, but that's it

* People really think many OpenAI employees will give up their equity to get whatever mediocre stock grant their level at Microsoft has? And 1% raises, no bonus some years, and board-forced headcount reductions?

* Sam has even less power with the board, and the board of a $3 trillion corporation would be even more risk-averse than the OpenAI one

* there was a ton of fan fiction online yesterday about Satya forcing a move on the board. This was never really a thing. He made one of the worst investments in the history of SV in terms of retaining power to make sure your money is used correctly: $10B for zero votes and no power to change votes


We witnessed the insane value destruction over the weekend. Every OpenAI employee is aware that everything they have and everything promised to them is wiped out. Their best chance is that Sam brings them to that new company within MS. They will get the same or better deals than they had. And they will probably deliver within 6 months what OAI has now, and whatever it was that spooked Ilya into launching this coup.

This was a brilliant power move by Satya. I don't see any hope for OpenAI after this brain drain.


Yeah, just like Reuters mentioned:

"The OpenAI for-profit subsidiary was about to conduct a secondary at a $80 billion+ valuation. These 'Profit Participation Units' were going to be worth $10 million+ for key employees. Suffice it to say this is not going to happen now," chip industry newsletter SemiAnalysis said."

Insane self-own by OpenAI


That sounds like exactly the kind of thing the board of a non-profit should be preventing.


As an employee of a company, I trade my time and effort for some amount of rewards. I enter deals with the expectation of stability from the other party.

My unfounded Internet opinion: OpenAI just removed or reduced a large source of reward and has shown fundamental instability. OpenAI's success is very much tied to Employees and Compute.


If your goal is to work for a profit-sharing company, then don't work for a non-profit.


Plenty of non-profits give a lot of money to employees. There is nothing stopping non-profits from paying exorbitant sums to their employees, and executives often do get paid exorbitantly. Non-profit means they don't pay out to investors, but the structure is usually used as a grift to get people to work for less so the top people make more money and do fundraising on their pet projects.


The employees work for the for-profit part of OpenAI.


That is owned by a non-profit organization. It seems like a lot of the employees are chasing money and forgetting that it's fundamentally not trying to maximize profit. Of course, Sam seems to have perverted its mission into exactly that (serving as the latest high priest of Mammon, like Elias served Lilith)


Yeah I mean, who cares if ASI kills us all as long as a couple hundred of the most well-paid people on the planet get even more rich.

It's insane to see all these takes when we don't even know what caused the loss of trust in the first place.


No one sincerely believes they have, or will soon achieve, AGI. Nor can they believe that the CEO can push them to achieve it and forcefully release it, whereas without him around they would develop it responsibly (whatever that may mean).


Great summary.

We are very complicated creatures and things get out of control, both internally and externally. My armchair opinion is that they started to believe that all of it is so advanced and important that they lost a bit of their grip on reality. Sutskever imagining a planet covered with data centers and solar panels shows me that [0]. Every single person is limited in his/her view - I get a strange feeling when listening to him in this video. Also, they are not the only people left on this planet. Fortunately, the task of creating AI/AGI is not a task for a pack of ten trying to save us from harm. Still, it may and probably will get rough. /rant

[0] https://www.youtube.com/watch?v=9iqn1HhFJ6c


Your second paragraph is pretty ironic given your first.


> Yeah I mean, who cares if ASI kills us all as long as a couple hundred of the most well-paid people on the planet get even more rich.

creating ASI for money seems particularly asinine as the machine overlords won't care terribly much about dollars


How do you know what ASI will value?


As an employee of a Bay Area tech company, presumably, in which a mid-level IC can make as much money as a C-suite executive in some less prominent industry*


Well, they're almost certainly 'not profiting' right now.


I agree, Satya is an operator. He turned a mess into a historic opportunity for Microsoft. They'll get a significant chunk of some of the best AI talent on the planet. All the heat-seekers will go there. That, plus the IP they already have, will turbocharge Microsoft.

OpenAI, in contrast, will become more like an academic research unit at some university. Those who prefer life slow and steady will choose to stay there, making tech for effective altruists.


They make nothing open source, so I'm not sure why effective altruists would join it.

If they can't predict and contain the consequences of their own actions, how can they predict and contain their so-claimed future "AGI"?


Is there any reason to assume open source is a prerequisite for effective altruism?

Open source doesn’t necessarily imply good for humanity, for example distributing open source designs for nukes would probably be a net negative.

And even if it did, effective altruists wouldn’t need to prioritize the benefit provided by openness over all other possibilities.


I don't think relying on a proprietary license to make sure enemies can't get AI for nukes is a sane security model. Something else needs to give.


Operator?


Operator in this context refers to someone who successfully runs a business that someone else founded. Often the visionary founder is not good at the nuts and bolts of running and growing a business so they hand off the reins to someone who is. Think Steve Jobs vs Tim Cook.


It doesn’t mean that at all, it’s slang

https://www.urbandictionary.com/define.php?term=operator


For a decade, "operator" in Silicon Valley has been used exactly as the commenter above describes it.

Which creates separation from "investor" or "engineer" or "founder" or "PM" or "sales" or "finance". Somebody has to make stuff happen in organizations. And the people who are good at it (Satya is excellent) set their organizations up for unique success.

And yes, ex-special forces people roll their eyes at it. Which is appropriate! But the usage is now totally distinct.


I learned the business context before the spec ops context and honestly the former makes way more sense to me than the latter.

A business operator is like a machine operator. You're pulling levers on a machine that someone else built while optimizing and tweaking to get the best performance out of that machine as possible.


I was quite wrong about this, but I still don't think it's especially relevant that someone else built it. Would you never call Zuckerberg an operator? And when someone else built it, does that mean you had no role in building it? That would be the exception, but it would be analogous to owner/operator in general business parlance.

I think now, having tried to fill in my missing knowledge, that it comes from the same root as DevOps, which I erroneously thought was related to SpecOps. DevOps comes from IT Operations which comes from Operations Management, which yes, is like a machine operator. https://en.wikipedia.org/wiki/IT_operations https://en.wikipedia.org/wiki/Operations_management

Edit: here's a post with "Founder Operators". Which suggests that if you just heard "Operator" you might assume they're not a founder, but also that the term can be applied to those running businesses they founded: https://startupceo.com/2023/01/5-things-successful-founder-o...


It seems like underselling the successful part and overselling the part about not being the founder, but I can see it's a slang term. Thanks.

And yeah, I was wrong about it being the same term, though I did imagine a different use; I was also thinking of "smooth operator". Apparently I was unfamiliar with the term in tech.


It has a meaning in a business context apart from a slang term


> And they will probably deliver within 6 months what OAI has now, and whatever it was that spooked Ilya into launching this coup.

Do you realize how hard it is to make something like GPT4? I think all the non-AI people posting about this all over X/HN have this idea that if you move everyone over you just "remake the AI" as if this were a traditional API.

There is no way MS catches up to OAI in a short period of time. During that time the rest of the market will be pressing the accelerator as hard as possible.

I think MS is in a really, really shit spot right now.


They have access to the weights as per their agreement with OpenAI; idk if that allows them to use it as a starting point. They also will have access to several of the people who did it. It's insanely hard to do the first time, but probably just hard to do the second time, after you already know what worked before.


sure but what does "hard to do" entail in terms of timeline? in my experience nothing complex can launch in 3 months at a big corp. 6 months would be aggressive. a year seems the most likely. but where will competitors be in a year?


I wonder if that agreement also has an insurrection clause saying that if you benefit from this, you must wipe your memories clean of any of that shared IP.


I mean, if MS literally gets:

- all the code that created GPT-4

- all the weights for GPT-4

- all the people who created both of those things

then, y'know, I like their odds.

They have access to the first two already, per their licensing agreement with OAI; by the end of the week, they may very well have the third.


Value as in money, or value as in values? There are people in the deal who value more than just the former. Like people who are trying to keep OpenAI at least somewhat non-profit.


granted, now MSFT basically has another research arm like Google Brain/FAIR, but whether or not their "brain trust" can equal Yann LeCun's or whatnot, who knows. Altman and Brockman are on the MBA side of things. The ML magic was Ilya's. If they can recruit a bunch of mini-Ilyas over from OpenAI, maybe they have a chance at regaining the momentum.


Ilya has backtracked and signed the letter saying he would also leave for Microsoft if the board doesn't resign.


In one fell swoop he demonstrated he had neither conviction nor foresight. Ouch. I'm starting to believe this was just an unthought-out, emotional ego reaction by Ilya. Dude is like Walter White; he just "had to be the man".



Wtf? Isn't he on the board?


Ilya signed a letter asking 4 members of the board to resign, including Ilya himself. He even posted a public apology for his actions on X https://twitter.com/ilyasut/status/1726590052392956028

Yes, this is probably the biggest self-own in corporate history.


From Satya's tweet, Sam's new division/subsidiary is going to run more like LinkedIn or GitHub, and OpenAI has pretty explicitly just declared that they don't like making money, so I don't think the comp is gonna be an issue. And for now, if Sam wants to make product and money and Microsoft wants the same thing, then having power over the board doesn't really matter. And Microsoft has all the IP they need. That's a better deal than equity, given who is in control of OpenAI now. They're actively not focused on profit. Whether or not you think this is a good outcome for AI or mankind, Microsoft absolutely won. Plus, the more people they pull from OpenAI, the less they have to deal with OpenAI; everything is in house.

Edit: god damn - even the guy that pushed Sam out announced he wants to resign if Sam isn't brought back. What the hell.


> Edit: god damn - even the guy that pushed Sam out announced he wants to resign if Sam isn't brought back. What the hell.

It reads like an orchestrated coup against the other three members of the board. Convince the board to do something you imagine will get this kind of blowback. Board is forced to resign due to their fiduciary obligations to the non-profit. And now you control the entire board.


> fiduciary obligations to the non-profit

What fiduciary obligations does a non-profit have? Isn't the board pretty successful already at not making money?


Fiduciary isn't about money; it's about the best interests of the non-profit they are governing. If staying on the board means harm to the goals of the non-profit charter, then they have a duty to resign.


Fiduciary obligations need not be profit-seeking. They often - perhaps especially - involve keeping the lights on for the chartered company.


OAI employees have no equity. They have a profit-sharing right. The board is clearly not interested in profit.

MS is risk-averse in every way except for one, which is to blow up Google. They will set the world on fire to do that.


MS is risk-averse in every way except for one, which is to blow up Google.

To me this is exactly why I'm skeptical of Microsoft's strategy here. They seem to be convinced that their success at unseating Google is assured. Meanwhile, Google's share price has barely flinched. Also, the way this has played out just feels like desperation to keep the whole thing going. Wouldn't Microsoft want at least some clarity about why the board up and fired the CEO on the spot before doubling down and bringing him into a position of power within the company?


OpenAI is doomed; in fact, it has ceased to exist; it's an empty shell, and its resource provider is now its biggest competitor.

But I doubt MSFT will win this round.

1/ We still don't know why Sam Altman was fired; does MS know, or think they know?

2/ It will take the new team at MS a long time to recreate what they had at OpenAI (what does "MS has a perpetual license to all OpenAI IP" actually mean and entail, legally speaking?); during that time anything can happen.


Exactly. I'm very surprised Nadella would take this kind of risk. It seems extremely cavalier not to investigate what happened before quickly going all in on hiring the entire team. You risk having to do a very embarrassing about-face if something serious comes out, and it could even lead to him having to resign.


Nadella isn’t getting his info from HN threads.


Yeah, but OpenAI was Microsoft's ace in the hole. Imagine if Nadella sat on his hands and waited while OpenAI burned down. In two weeks the narrative shifts from "Microsoft locked down the best AI people in the business" to "Nadella burned $10B for nothing".

You don’t have to do a ton of diligence to know you want to keep the people you bet the farm on. If it turns out that Sam is actually a massive liability, Nadella will deal with that after this crisis passes.


When you’re in a hole stop digging


> OAI employees have no equity.

OpenAI employees have no equity? Well, then where exactly is that $86B of "existing employees' shares" coming from?

> ChatGPT creator OpenAI is in talks to sell existing employees' shares at an $86 billion valuation, Bloomberg News reported on Wednesday, citing people with knowledge of the matter.

https://www.reuters.com/technology/openai-talks-sell-shares-...

https://www.reuters.com/technology/openais-86-bln-share-sale...

A random OpenAI eng job page clearly states: "Total compensation also includes generous equity and benefits."

https://openai.com/careers/software-engineer-leverage-engine...


I believe OAI Inc employees and board members have no equity, but OAI LLC employees can have equity.


I will admit I haven't seen an OAI contract, but I have seen articles and multiple Levels.fyi posts citing about $600k equity/year (worth $0 right now, obviously)

So any idea how that translates into the profit sharing? They have no profit right now. Curious how employees get to that number


I have not seen any of the employee contracts so this is purely an educated guess which might be entirely wrong. There is a chart from Fortune [1] that spells out how the profit caps work. I have not looked at any of the documents myself, so I am interpreting only what I have consumed. My guess is that the employee equity/contracts spell out the cap structure, so perhaps the equity value is based on those caps. Assuming the profit cap for employees is real, I would assume you could not value any "equity" based on the last raise's valuation. At best you could perhaps value the max cap available for those shares; a toy version of that math is sketched below.

[1] https://fortune.com/2023/01/11/structure-openai-investment-m...
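
To make that guess concrete, here is a toy sketch of the capped-payout math. Every number is assumed for illustration: the $600k/year figure is the Levels.fyi estimate quoted upthread, and the cap multiple is hypothetical, since OpenAI's actual terms are not public.

    # Toy model of a capped profit-participation grant. All numbers assumed.
    def ppu_payout(pro_rata_profit_share: float, cap: float) -> float:
        # PPUs pay out of actual profits only up to their cap, then stop.
        return min(pro_rata_profit_share, cap)

    grant_per_year = 600_000    # assumed grant value at issue, $/yr
    cap_multiple = 10           # hypothetical cap multiple
    ceiling = grant_per_year * cap_multiple

    # However high the headline valuation goes, one year's grant can
    # never pay out more than its ceiling:
    print(ppu_payout(50_000_000, ceiling))  # -> 6000000

If something like that is right, marking PPUs to an $86B valuation overstates them: the cap, not the valuation, bounds their value.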


What equity at OAI? You mean the equity for profit sharing? Seems to me anyone who cared about their stake in the profit sharing would be fairly pissed off with the move the board made.

Investors loved Satya's investment in OAI; I'm not sure how we can qualify it as one of the worst investments in the history of SV.

How can we even compare the risk concerns of MSFT with OpenAI's? The impression we have of OpenAI is that its risk concerns are specifically about the profit drive. From a business standpoint, LLMs have huge potential to reduce costs and multiply profits.


We went from OAI employees flaunting "$1,000,000 per year salaries" to the New York Times to "what equity?" really fast.

This isn't personally against you, but they never had the $1M/year they flaunted when Sam the savior was their CEO.


I realize you have a bias against Sam Altman, but let's dig into your current statement. The NYT article you are quoting is, I believe, this one [1], which describes Ilya Sutskever making $1.8 million in salary. I am not sure exactly what you are trying to say, but from the beginning equity has not been part of the salary figure at OpenAI. Salary, as far as I know, is typically just that: cash compensation, excluding bonus and stock.

I don't know exactly how employee contracts are drawn up there, but OpenAI has been pretty vocal that all the for-profit sides have caps, which eventually lead back to 100% going into the non-profit after hitting the cap or the initial investment amount. So I am not quite clear on what you are saying. Salary is cash; equity is stock. There have always been profit caps on the stock.

My only point was that you made an argument about employees giving up their equity for "mediocre" MSFT stock. It is just a misinformed statement that I was clearing up for you. 1) MSFT has been doing amazingly as a stock. 2) Employee equity has profit caps. 3) Employees who actually care about equity profit would most likely be more interested in joining MSFT.

[1] https://web.archive.org/web/20230329233149/https://www.nytim...


I'm referencing large PPU grants in OpenAI offers, with 4-year vests; they sure made it feel like regular employees were being given a chance to join now and become millionaires via their PPUs.

If this was never true, that’s on the OpenAI team that misled their engineers

https://www.businessinsider.com/openai-recruiters-luring-goo...

Their job postings even specifically mention "generous equity". Again, if there is no equity in the contract, that's OpenAI misleading its recruits.

https://openai.com/careers/research-engineer-superalignment


> Sam has even less power with the board, and the board of a $3 trillion corporation would be even more risk-averse than the OpenAI one

This is where I see the win. Newcomer’s concerns about Altman are valid [1]. It is difficult to square the reputation he nurtured as OpenAI’s CEO with the reckless lawlessness of his crypto start-up.

Microsoft knows how to play ball with the big boys. It also knows how to constrain big personalities.

[1] https://www.newcomer.co/p/give-openais-board-some-time-the


If OpenAI is beholden to Microsoft for investment, and OpenAI's license is exclusive to Microsoft, OpenAI has nothing to offer those who remain except mission and ramen. If OpenAI slows their own research, that impairs future profit allocation potential to remaining talent. Enterprise customers will run to Azure GPT, and Microsoft will carry the research forward with their resources.

This morning at OpenAI offices will be talent asking current leadership, "What have you done for me lately?"

Edit: https://news.ycombinator.com/item?id=38348010 | https://twitter.com/karaswisher/status/1726598360277356775 (505 of 700 OpenAI employees tell the board to resign)


OpenAI employees are already quitting en masse in public on twitter.

Their pay is majority equity, so it's worthless now with a board that says it hates profits and money. OpenAI is probably worth 80% less than it was a weekend ago, so the equity pay is also worth 80% less.

Microsoft is perfectly willing to pay those typical AI salaries, because Nvidia and Google are literally doing a recruitment drive on Twitter right now. Apple and Amazon are also probably looking to scoop up the leftovers. So Microsoft has to pay properly, and Sam will demand that they do, to get the OpenAI team moved over intact. There aren't that many core engineers at OpenAI, maybe 200-300, so it is trivial for Microsoft to afford it.


Driving the narrative doesn't mean driving reality. It is clear that Sam and friends are great at manipulating the media. But this is a disaster for Microsoft and the CEO damn well knows it. It is also a personal disaster for Altman and probably not a great move for those who choose to join him.

Time will tell if the OpenAI non-profit vision of creating safe AGI for the benefit of humanity can be revitalized, but it really does look like all involved are basically acknowledging that at best they were fooling themselves into believing they were doing something on behalf of humanity. Egos and profit seeking took over and right now the ethos which they championed looks to be dying.


Why is this a disaster? They managed to acquihire the founders of a $90B company. Probably the most important company in the world until last Friday.

Seems like a huge win to me. They can write off their entire investment in OAI without blinking. MS farts out $10B of profit in about a month.


They acquired two of the founders least responsible for the actual tech. They made a huge bet on OpenAI to produce the tech, and that relationship is going down the drain. Watch the market today, next week, next month, and over the next six months, and it will tell you what I'm saying: this is a disaster for MS and they damn well know it.


> They acquired two of the founders least responsible for the actual tech

Microsoft also “has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights.”

If you’re a scientist, OpenAI is a fine place to be, though less differentiated than before. If you’re an engineer more interested in money and don’t want the risk of a start-up, Microsoft seems the obvious path forward.


Based on credits in the GPT-3 and GPT-4 papers, I think the team that follows Sam and Greg is the main driver of the tech. Ilya is an advisor, more or less.


https://twitter.com/ilyasut/status/1726590052392956028

Ilya just said he will do everything he can to reunite the company. If that’s the case the easiest way to do it is to resign and join MS


You're making the assumption that the technical folks won't follow him, and that's a pretty ridiculous bet at this point unless you've got some more data you're just not sharing.

Out of the gate the technical folks at OA had to be perfectly fine with Microsoft as a company given they knew all of the tech they were building was going to be utilized by MS carte blanche.

So now that their OA equity is staring down the barrel of being worthless, what's stopping them from getting a potentially slightly lower but still significant payday from MS directly?


The only technical person who matters here, the one who came from DeepMind and who is the world's top AI researcher, is sure as hell not going to follow him, since he's the reason Sam is gone.


You're right, I have no idea what I'm talking about, clearly people aren't going to leave and follow Sam instead of Ilya. Nobody at all... just 550 of 700 employees, nothing to see here.

https://twitter.com/karaswisher/status/1726598360277356775


> 550 of 700 employees

Including Ilya Sutskever who is (according to the posted document) among the 550 undersigned to that document.

It's pretty clear this is a fast-moving situation, and we've only been able to speculate about motivations, allegiances, and what's really going on behind the scenes.


You've nailed it. The excitement is going to be short-lived imo.


Given that 500 employees are saying "either give us sama and gdb back or we are going to MSFT", I say Nadella won hard.


That's how it appears currently, but experience has taught me to be very careful about making snap judgments in these types of fast-moving situations. Nobody seems to know yet why he was actually fired. The popular theory is that it was a disagreement about mission, but something about that narrative just feels off. Also, Nadella and Altman are both enjoying god-like reputations while the OpenAI board is being totally dismissed as clueless and as having made a stupid, impulsive decision, even though basic logic would tell you that a rationally acting person would not do that. There's a lot of room for the pendulum of public opinion to swing back the other way, and it's clear that most of the most fervent supporters of Altman and Microsoft are motivated by money rather than truth.


Most human beings are motivated by money.


> a rationally acting person would not do that.

Non-profit boards have no incentive to be rational.


Did you even research the basic facts?

Microsoft stock is up in the pre-market, because they basically got half of the OpenAI team for free.

The majority of top researchers at OpenAI are expressing solidarity with Sam and basically signalling they want to move too; just check out Twitter. That also includes the majority of the execs there.


Yes, low-volume pre-market moves on the back of a nonstop news flow always predict how things end up.


> the SEC would arrest Sam

The SEC does not have the power to arrest anyone. Their investigations are civil.


Criminal charges can be filed due to SEC investigations. For example:

https://www.sec.gov/files/litigation/admin/2021/ia-5764.pdf


The cheerleaders are the LLM AI future true believers. I imagine they are the same people that were telling us how NFTs would change the world last year.


I really don't get the comparison of NFTs to LLMs. I mean, yeah, some hype-cycle idiots have redirected both of their brain cells to shilling useless startups that'll get obsoleted by minor feature additions to Bard or whatever in a year, but who cares about them? NFTs didn't do anything but enable fraud.

LLMs do stuff that has value. I can use RFdiffusion with motif scaffolding to create a fusion protein with units from enzymes that have no crystal or cryo-EM structures, with as high as a 5-10% success rate! That's absolutely insane! I only need to print 20 genes to have a chance of getting a variant that works. A literal orders-of-magnitude improvement. And if you don't have a good multi-sequence alignment for getting a good fold from AlphaFold? The purely LM-based ESMFold can fill in the gaps. EvoDiff is out here generating functional proteins with disordered regions. Also, to bring it back to OpenAI, if I ask ChatGPT to write some code, it writes some pretty decent code. If I give it a bunch of PDFs and ask it to give me a summary, it gives me a summary. I don't buy the AGI end-of-the-world hype, so a shift that means we get more focus on developing useful tools that make my life easier, that I'm totally willing to pay $20 a month for? Yeah, I'm down. Get this cool tech and product into a form that's easy to use and useful, and keep improving it!
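
(For anyone curious what "purely LM-based" means there in practice, here is a minimal single-sequence folding sketch along the lines of the public facebookresearch/esm README; the sequence is just a placeholder, and the esmfold extras are assumed installed:)

    # Minimal ESMFold sketch: fold one sequence with no MSA search
    # and no structural templates - the language model does the work.
    import torch
    import esm  # pip install "fair-esm[esmfold]" (assumed installed)

    model = esm.pretrained.esmfold_v1()
    model = model.eval()  # add .cuda() if a GPU is available

    sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"  # placeholder

    with torch.no_grad():
        pdb_string = model.infer_pdb(sequence)  # PDB-format predicted structure

    with open("prediction.pdb", "w") as f:
        f.write(pdb_string)

The point is what's missing: no multi-sequence alignment, no templates; the language model alone supplies the evolutionary context.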


To me, this sounds very similar to the type of over-hyped, exaggerated response you'd get when someone criticized cryptocurrencies by saying they don't do anything. The response would be:

-I'm literally drinking a coffee I bought with bitcoin right now.

-I was able to send large sums of money to my grandma in another country while paying a fraction of the fees going through banks

-It's a stable store of value for people in volatile countries with unstable currency

-It's an investment available to the small timers, normally only the wealthy have these opportunities

-It lets me pay artists for their art directly and bypass the corrupt middlemen

This is a coding forum, so I have no idea what any of that biology mumbo jumbo means, but everything you mentioned about ChatGPT is conveniently missing a lot of details.

> write some code, it writes some pretty decent code.

Is it trivial code? Is it code that shows up on the first page of any search engine with the same terms?

> it gives me a summary.

Is it an accurate summary? Is it any better than just reading the first and last section of the report directly?


Dude, I'm talking about it being worth 20 bucks a month (which NFTs are not), not the hype-cycle nonsense. Just because you don't understand the scientific applications of protein folding, one of the most important problems in biology, doesn't mean that it's mumbo jumbo. Ever heard of Folding@home? Man, is Silicon Valley ridiculous sometimes, but since apparently the accomplishments of coders don't count on this coding forum if they're in fields that web developers don't understand, let's focus on consumer applications.

In terms of writing code, yeah, it's pretty simple code. I'm paying 20 bucks a month, not $200k a year. I've found it really useful for diving into open-source code bases for papers (just upload the repo and associated paper) - academics write pretty garbage code and even worse documentation. It's able to easily and correctly extend modules and explain weird uncommented and untyped code (what exactly is the xyz data structure? Oh, it's a tensor with shape blah where each dimension represents abc value. Great, saved me 2 hours of work).

For the summaries - uh, yeah, obviously the summaries are accurate and better than reading the first and last sections. Spend the 20 bucks and try it yourself, or borrow someone else's account or something. Especially useful if you're dealing with Nature papers and similar from journals that refuse to give proper respect to the methods section and shove all the information in a random way into the supplementary info. Make a knowledge base of both and ask it to connect the dots; saves plenty of time. I don't give a damn about the flowery abstract in the first part of the report or the tryhard conclusion in the last part; I want the details.

It's comical that these useless hype bros can convince folks that a genuine computational breakthrough and a pretty solid 20-dollar-a-month consumer product with actual users must be bunk because the bros are shilling it, but luckily the Baker lab doesn't seem to care. Can't wait to play around with all-atom so I don't have to model a zinc atom with a guide potential and can just model the heteroatom directly in the functional motif instead. Not sure it'll work for the use case I have in mind until I try it out and print a gene or two, of course, but I'm glad folks are building these tools to make life easier and let me engineer proteins that were out of budget 3 years ago.


You see no use case for LLMs? I've successfully used GPT-4 to transcribe hundreds of pages of PDF documents with real accuracy. That alone is worth something. Not to mention I can now literally ask questions of these pages and get cited, reasonable answers in a few seconds. This is amazing technology. How can you not see the use case?


Wow OCR. How innovative.


Accurate OCR that answers questions from source documents? Yes... very innovative. As an example, I have a real estate data company that provides zoning code analysis. Whereas before I would have to manually transcribe tables (they come in many different formats, with table columns and rows that have no standard structure), I can now tell GPT: examine these images and output my custom XML format, after giving it some examples. And... it does. I've fed it incredibly obtuse codes that took me ages to parse through, and it... does it. I'm talking about people using non-standard notation. Handwritten codes, anything. It'll crunch it.

tell me... how much would it cost to develop a system that did this with pre-GPT OCR technologies? I know the answer. Do you?
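
For the curious, a minimal sketch of that kind of few-shot image-to-XML extraction with the OpenAI Python SDK. The model name is the vision model available at the time; the file path, prompt, and XML schema are made-up illustrations, not the actual system described above.

    # Hedged sketch: few-shot image-to-XML table extraction via the OpenAI API.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("zoning_table_page.png", "rb") as f:  # hypothetical scanned page
        image_b64 = base64.b64encode(f.read()).decode()

    EXAMPLE_XML = """<zoning district="R-1">
      <standard name="min_lot_area" value="5000" unit="sqft"/>
    </zoning>"""  # hypothetical custom format, given to the model as an example

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # the vision-capable model at the time
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Examine this zoning table image and output its contents "
                         "in exactly this XML format:\n" + EXAMPLE_XML},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)  # the extracted XML

The pre-GPT alternative is an OCR engine plus a hand-written table-structure parser per layout, which is exactly the cost being pointed at here.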


Did you make anything on those NFTs?


Nope. Crypto has no value and I've consistently avoided it


Microsoft can offer more if it wishes, no?


but they can't offer the whole "we are doing this for the benefit of humanity" lark

will researchers who were lured into OpenAI under this pretense jump ship to explicitly work on extending Microsoft's tendrils into more of people's lives?

(essentially the opposite of "benefit humanity")

no doubt some will


I don't think Microsoft cares about that crowd, since now without capital they can't really do anything anyway. The rest of the crowd that wants to make bank? Might be more appealing


> without capital they can't really do anything

Not a bad moment for a rich patron to swoop in and capitalise the non-profit. If only there were a billionaire with a grudge against Altman and a historic link to the organisation…


Why don’t they have capital?


I mean, if someone else wants to give them billions of dollars to make an AGI that they think will drive humanity extinct, while not commercializing or open-sourcing the tech they do have because they're scared of extinction, then be my guest. Usually I'd say I'm happy to be proven wrong, but in this case I'd just be confused.


> People really think many OpenAI employees will give up their equity to get whatever mediocre stock grant their level at Microsoft has? And 1% raises, no bonus some years, and board-forced headcount reductions?

What long-term prospects do those employees have of raises, profit-sharing, equity, etc. at OAI if the board is willing to destroy value to maintain its non-profit goals?

I think the whole point of this piece is that OAI's entire organizational structure is built against generating a large startup valuation that would provide a large payout to employees and investors.

OAI has cash from ChatGPT revenue that it could use to offer competitive pay, but this entire situation is based around the board being uncomfortable with the decisions that led to this revenue, or with attempts to expand it.


Regardless of what anyone thinks about it - M$ was going to pay an entity they did not control $18 billion to be a player. Now they don't have to - they get it almost for nothing. Hats off to M$ - this is certainly one of the largest corporate missteps by a board in charge of such hot technology that I have ever witnessed.

The OpenAI board has taken the keys of Paradise and willingly handed them directly to the devil ;)


Nobody cares what you think about it either.


>more risk-averse than the OpenAI one

At least it's not sci-fi-risk averse ;)


HN is filled with temporarily-embarrassed billionaires (and actual billionaires) who would very much like to preserve the notion that big corporations can move with impunity and quash any threat to investment returns. Reality is not aligning with that, so they've entered their mental safe pods (with the rose-tinted windshields).


OMG this ^


> no bonus some years,

What do you mean? MS employees are getting bonuses on a yearly basis, this year included.


I'm referring to Satya's email from May saying there would be no raises and the bonus pool would be significantly reduced.

That's fine for corporate employees, but OAI employees were promised mountains of money to leave Google/Meta; they might not be as happy.


They don't have to leave OAI.

OAI is a startup. All these OAI employees who were playing up their million dollar salaries should know that startups come with risk. How many times has it been said that equity is worth nothing until (and if) it is?

In the grand scheme of the current IT economy, top of the queue for sympathy, to me, is not "people making seven-digit salaries at a startup who may have to put up with only making $500K+ at MSFT".


Just a tip for OpenAI employees that plan on leaving: this is probably one of the best opportunities you’ll ever get to start your own thing. If you’re joining a new startup make sure you’re a founder and have a good chunk of the equity. For the next few months there will be a line of investors waiting at your door to give you money at a wild valuation, take it and do what you always wanted and know that if it doesn’t work out there will be plenty of large companies ready to acquire you for much more than they’d ever be willing to pay you.


The analysis in the article is mostly very good, but I object to this observation

`The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.`

I don't see how anything that happened this weekend leads to this conclusion. Yes, it seems likely that the board's actions will result in an OpenAI with much smaller revenue and consumer traction. But the whole reason for setting up OpenAI as a non-profit was precisely ensuring that those were not the overriding goals of the company.

The only conclusion that seems warranted is "the myth that anything but a for-profit corporation is the right way to organize a for-profit company", but that is pretty obvious.


It means that those 'organisations' can never scale, and therefore can never make the titanic impacts on society they hoped to have.

No investors will touch these non-profits with a 10-foot pole now. An unaccountable board that can lead the majority of the company and its investors to revolt is radioactive for investors.

It proves that the standard shares=votes corporate structure is the only way to organize mega-scale organizations.

OpenAI will keep its goals, but it'll simply accomplish nothing. It'll probably devolve into some niche lab with no resources or GPUs to do anything significant.


Right! The Wikimedia Foundation is dead in the water, and everyone except Jimmy knows it. If only it could raise hundreds of millions in capital from investors then they could actually start delivering value and get significant market share.


Pithy response but poor comparison -- Wikipedia's startup costs were in, what, the tens of thousands of dollars? Less?

OAI is burning billions in compute/salary to create their thing, and will spend billions more before truly massive value to society could ever be wrought.

I can't think of a nonprofit structure that has ever effectively allocated that much capital aside from, like, a government.


The parent criticism was that non-profits cannot scale.


Would you say Wikipedia has had a significant impact on society?


Of course it has. Wikipedia is the first (and only) truly global source of knowledge, to a depth no other encyclopedia has ever covered before - and with an unmatched capability to react to current events.


For-profit vs. non-profit is an increasingly meaningless distinction in today's business/legal environment. It seems like more of a marketing ploy than anything else. For example, one can set up a 'socially responsible do-gooder' non-profit with the left hand, and a for-profit corporation with the right hand, and then funnel all the money that goes into the non-profit into the for-profit by making that the sole supplier to the non-profit, thus avoiding many taxes and generating robust profits. These schemes are legion - there are hundreds of examples if you go looking.

The real issue with LLMs is open-source vs. proprietary.


Hundreds of examples? Can you name one?

As someone who works at a non-profit that partners with various for-profits, I'm skeptical that the IRS would allow such sort of large-scale tax fraud to happen.


> "In this scenario, I set up my non-profit school-- and then I hire a profitable management company to run the school for me. The examples of this dodge are nearly endless... consider North Carolina businessman Baker Mitchell, who set up some non-profit charter schools and promptly had them buy and lease everything - from desks to computers to teacher training to the buildings and the land - from companies belonging to Baker Mitchell."

https://curmudgucation.blogspot.com/2016/07/for-hrc-profit-v...

As far as the IRS, this may be entirely legal due to tax loopholes pushed through by lobbyists and bought politicians, or it may take so many IRS resources to unravel that it tends to go ignored.


This non-profit structure means that 4 people can decide to do whatever they want, with no consequences, putting hundreds of jobs in danger, putting thousands of companies' futures on the line, and disrupting millions of people who rely on the service.

Because they had a difference of opinion about a DevDay presentation...?

It's just confusing to me why so many people think the board is so altruistic here. That kind of unchecked power is insane to me.


If Altman goes back, it could potentially salvage the model he helped create - maybe there needed to be some mechanism in place to validate decisions like firing the CEO. This drama was made all the more intense because no one really knows yet why they made the call. As a non-profit, some level of transparency for decisions like this seems like a super reasonable demand.


> I don't see how anything that happened this weekend leads to this conclusion.

They seem to need additional investment, but their goals are not aligned with those of most of their would-be investors.

If their goal really is the 'safe' development of AI, they are now in an objectively weaker position to pursue that goal, even if the actions of this weekend were otherwise justified.


I'm a non-native speaker and this sentence makes my head hurt. Feels like a double negative.

Does the loss of the myth mean it is not a myth anymore (so, reality), or that a for-profit is not the only right way?


I'd retitle this as "OpenAI's blunder and Microsoft's excellent damage control"

I don't think Microsoft is necessarily in a better position than it was on Thursday. If we're tallying up points:

    + More control over the full AI stack
    + Effectively they will own what was once OpenAI, but is now really OpenAI 2.0
    - OpenAI 2.0 is probably a worse business than OpenAI 1.0, which was, prior to the coup, a well-oiled machine
    + Control over OpenAI 2.0 operations, which can lead to better execution in the long term
    - Higher wage costs
    - Business disruption at OpenAI, delaying projects
    - Potential OpenAI departures to competitors like Google, Meta, Anthropic, Amazon, Apple (?)
    - Risk that OpenAI 1.0 (what's left of it) sells to any of those competitors
    - Risk that OpenAI 1.0 actually goes open source and releases GPT-4 weights


GPT-4 weights, the RLHF s/w & logs, data sources … if all of that were truly open, it would be incredible.


>Risk that OpenAI 1.0 (what's left of it) sells to any of those competitors

Who else is positioned that they could possibly do anything with it commercially? Even Microsoft is supposedly renting GPUs from Oracle (deals with the devil!) to keep up with demand.

Amazon is the only other company with the potential computational power + business acumen to strike, but they already have their own ventures. Google could, but not sure they would willingly take the reputation hit to use a non-Google model.


Seems like a pretty good list, but I think a lot depends on how much you weight each item and how many of the negatives were already "priced in" to the status quo ante when Microsoft was strategically linked to OpenAI without any formal control.


OpenAI 1.0 to become Netscape? It would be great if it happens.


“ Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft”

Called it; EEE is complete. This is old Microsoft magic. I hope younger people starting their careers are taking note. All that money Gates is giving away to buy his way into heaven came from the same tactics.


Disagree. Satya's Microsoft is more like Embrace-Extend-Share: he's running it more like an old-school conglomerate --not BillG's "one company to rule them all".

AFAICT, New Microsoft is a platform of platforms, with profit flowing down to lower-level platforms (Windows, Azure) but being made at all levels (Office, Xbox, LinkedIn) and shared with others (employees/partners) at some levels.

Satya has basically taken the Bezos insight --use your scale to build platforms that you also sell to outsiders-- and inverted it: use as many platforms as possible to build and sustain scale. This is not new, this is exactly what a conglomerate would have done 100+ years ago. But he's doing it methodically and while breaking fewer rules than Amazon or Google. Mad respect.


All the shade going at the board is legit - that said, Altman and Brockman just lost so much of their independence it's unbelievable - a sad state of affairs that it's being described as a win for them (or a good salvage). Also, everyone is pinning all the problems on the board... everybody's hands are dirty here for it even to have gotten to this point. What a mess.


... What? How is this in any way related to EEE? The OpenAI board did this to themselves.


We can't know that - this may have been orchestrated.


By someone on the board, with the approval and participation of the rest of the board. So, the board did it to themselves.

Or do you think MS fabricated evidence to falsely convince the board Sam Altman was lying to them?


It's possible that Sam was given clear parameters for removal, that there was a discussion with Microsoft about what would happen afterward, and that a decision was then made to fulfill those parameters to move things forward.


Embrace, Enhance, Extinguish for those unfamiliar.


Extend is the middle E.


It may be OpenAI's loss and Microsoft's gain, but any support that AI gets is a tragedy for humanity.

Everything is good in moderation, and with AI we are taking our efficiency past that good point. AI not only takes jobs away from creatives such as illustrators and can be used for identity theft; it also removes the reliance that people have on each other.

Society is held together because people need each other. People meet each other through needing each other, and this is what makes local communities strong. AI takes that away so that the only entities we need are big tech like Microsoft, Google, and other soulless organizations.

It is a descent into feudalism, where everyone will pay big tech for basic living necessities (at first an option, later a necessity, like the smartphone).

If a man or woman wants to live independently of big tech, it will be harder and harder now, and we are losing our sense of what it means to be human through the immoral actions of these companies.


I couldn't disagree more. The more wealth is created out of thin air by technology, the better we all live, and the better our relations with one another. Scarcity creates conflict and pain. Prosperity makes good neighbours out of enemies.

I don't care if I have to pay megacorps for a right to my modern conveniences, if that means they extend to more and more people. Monsanto can take all my money if no one ever dies of starvation again. Microsoft can take all my data if we never have to do rote tasks again.


The more wealth is created, the more we abuse wild resources and the natural ecosystem as well. If there were only humans on the planet, I would not disagree. But it is immoral to live better if it comes at the cost of destroying our natural connection with the biosphere.

I also disagree that scarcity creates conflict and pain. Scarcity limits growth, which means it limits our expansion, which is good because other creatures have to live here too.


> I also disagree that scarcity creates conflict and pain.

Every war in human history (and there have been a lot of them) was fought over control of scarce resources. Sure looks like scarcity creates conflict and pain to me.


War causes some short-term pain but nothing in comparison to the long-term pain of post-scarcity where we use up so many resources that there are none left to use -- such as depleting most fishing stocks, rainforest destruction, and global warming.

I'd take a few wars here and there over what we have now any day. Even if it means I die. It would be much better if humanity's population were limited.


>It would be much better if humanity's population were limited.

It's a shame your argument devolved into eugenics so quickly.


If you want to limit human population, with all respect, you first. Talk is cheap.


I think it's important to keep track of which particular wealth we create and how it gets distributed. If, e.g., GDP grows 2x, but instead of affording 2x more food everyone can now afford 10x more/better memes and better drones for their country's military, I don't consider it a win. In other words, "growing wealth" means better access of people (on average) to some resources, but whether it's a good thing depends on the exact resources the world gains.


It's funny, in a sad way, that you think corporations will hold up their end of the "bargain". Can't remember which YouTube video I was watching, but this stuck out: "The irony that we're automating the production of art instead of the jobs everybody hates shouldn't be lost on us."


That pseudo-profound bullshit about automating art is the new version of the old ignorant chestnut "If we can put a man on the moon, why can't we just X?" Newsflash: there are vast differences in what is accomplishable. Self-driving cars stalled out, but not for lack of trying.


Congratulations on writing a comment that had zero to do with Reaganite valleybros reneging on the social contract for more cheddar, I guess.


I find it amusing that just a few short years ago the idea was that automation/AI would replace the truck drivers, factory workers, and blue-collar jobs, while the developer, the lawyer, and the information worker were safe from this...

It seems the last mile in replacing blue-collar jobs may be more expensive and more challenging (if not impossible) than replacing information workers with AI...


I actually agree with this. All along, people thought that automation would mainly replace manual labor (which I also disagree with in many instances -- I believe people should be able to make a DECENT wage doing things, even manual things, even if other things become more expensive for people "at the top").

It seems likely that AI will either replace or augment people in the most creative fields, creating a homogeneous MUSH out of once-interesting things, making us consumerist and mindless drones that simply react like amoebas to advertising, buying junk we don't need so that the top 0.1% rule the world, pushed there by their intense narcissism and lack of empathy (like Sam Altman, Ilya Sutskever, Sundar Pichai, and Satya Nadella, who are by definition narcissists for doing what they do).


I emphatically agree, with a caveat: I work in the music business. I've seen homogenous mush before. AI and related technologies have already augmented people in the homogenous mush business, and will most certainly replace them, and this will serve a sort of mass market with mass distribution that's recognizable as 'creative fields' of a sort.

This is self-limiting. It's a sort of built-in plateau. You can't serve the whole market with anything: the closest you'll get is something like a Heinz Ketchup, a miraculously well-balanced creation with something for everyone. It's also a relatively small market segment for all its accessibility.

We cannot be made 'consumerist and mindless drones that simply react like amoebas to advertising' more than we already are, which is a LOT: whole populations are poisoned by carefully designed unhealthy food, conned by carefully designed propaganda in various contradictory directions. We're already at saturation point for that, I think. It's enough to be a really toxic situation: doesn't have to threaten to be far worse.

The backlash always happens, in the form of more indigestible things that aren't homogenous mush, whether that's the Beatles or the Sex Pistols or, say, Flamin' Hot Cheetos. The more homogenous everything is, the more market power is behind whatever backlash happens.

This is actually one argument for modern democracies and their howling levels of cultural tension… it's far more difficult to impose homogeneity on them, where other systems can be more easily forced into sameness, only to snap.


Yup. The world we are headed into, accelerated by huge leaps in technology like AI, will be a sorry, pathetic, unforgiving world that amplifies suffering and nullifies humanity.

Is that image of a child being assaulted by terrorists real or fake? To the victims, it's real; to political powers, it's fake; to megacorporations, it's money.


I can't believe I'm playing devil's advocate here because I'm generally a skeptical / pessimistic person, but what?

The world we are headed into, accelerated by huge leaps in technology like AI, will be a sorry, pathetic, unforgiving world that amplifies suffering and nullifies humanity.

Is that what has happened so far in your life? Technology has messed it up for you?


I think technology has messed up a lot of lives. Sure, we've got good healthcare and resources in the first world, but smaller communities are eroding under the global force of technology.

Don't bully people into not expressing critical thought by emphasizing that they have their basic physical needs met. We should be critical of society, even if on the surface it seems pretty good.

We also have more advanced psychological needs such as the need to depend on others, and that is what is at stake here.


I see a contradiction in your message here. On the one hand you're saying people are worse off because of technology, but then you're also saying that people don't rely on each other because they have more of their needs met. So which is it?

Wouldn't poverty rebuild these communities? I mean, hardship makes people rely on each other, right?

I don't entirely doubt what you're saying either, but I'm not so sure I see what you see.


There is no contradiction, because I do not believe that having all your physical needs met is necessarily the best end for a human being, especially if they have the capability to meet a lot of those needs themselves if they were given the tools to do so. It's like those people in the Matrix pods: they have their needs met, but they are being plugged into a machine and are not truly human.


Name a single instance or person who has all their physical needs met by an LLM.

Seriously, we're a long way off a technological utopia. Even if we had some type of AGI/ASI that was self-aware like in the movies, there's little, if any, evidence to suggest it would just work for us and make sure you're OK.


Yes, where have you been?


How has it been messed up for you? Would going back to medieval times fix it?


> Would going back to medieval times fix it?

Going back to the 90s probably would.


> The decrease in the number of people affected by hunger has happened in a period where the world population has increased by 2.2 billion - from 5,3 billion in 1990 to 7.5 billion in 2018. The share of undernourished people in the world has therefore fallen markedly in the past three decades. From 19% in 1990 to 10.8% in 2018.

https://www.theworldcounts.com/challenges/people-and-poverty...

Why don't you go ask the 189 million people who have avoided hunger since 1990 if they agree?


Was social media required to feed those people? I don't think hunger in the 90s was a technology problem.


Yes, actually. Social media and the internet made the ad industry much more effective, which in turn worked wonders in commerce, which is exactly what lifted people out of hunger.

A lot of people in southeastern Asia and China are working in industry and living ten times better than the subsistence farmers just one generation before them, exactly because the niche trinkets they make can now be sold on Instagram.


Cars bad, horses good


This pessimism may play out but I continue to fall on the optimist side.

AI tooling seems to be allowing for more time on creativity and less on rote work.

If that continues to play out, creativity inevitably drives more collaboration, as creativity cannot thrive in a silo.


Genuinely curious.

Except in the shovelware games industry and the Instagram ad scam industry, where is AI actually, currently, threatening to take away jobs?


1. Illustrators

2. Photographers (check out Photo.AI). I know a local photographer who takes nice portraits for businesses. Now people can use this service for their headshots. (You may argue that it's good for people, but at what point does it remain good if NOBODY can use their creative talents to make a living?)

3. Writing. Many websites are now using AI to write their articles. That means they hire fewer writers. Again, you can say that it makes society more efficient, and I guess it does for consumers, but those people, such as copy editors, will have to find new jobs.

You may say that new jobs will be created, but we have not seen such a versatile tool that can take away so many jobs at such a quick pace before. Moreover, will the world really be a nice place to live in if most of the creative jobs, or at least the jobs involved in producing something nice, are left to machines?

What about Hollywood and their desire to replace writers and actors? You may say that Hollywood is dead anyway, but I'm sure it's about to get a lot worse...


I think I worded my question wrong. Where, except in those industries I named, is AI actually, currently, taking jobs?


Think of all the serfs and slaves that were put out of work by the invention of the tractor.


4. Translators

5. Programmers


6. Voice actors and narrators (eventually news anchors, reporters, etc)

7. Composers, hired musicians, eventually singers, producers


There’s a real chance that one thing AI will make better—not just cheaper—is original scores for movies. Get us out of this “just match the shitty generic placeholder music we already edited the movie to” norm we’ve been in for almost two decades (which itself came about due to changes in technology!)


Software engineering for sure. Lots of SV types have a very distorted view of what the average programmer not in a high-prestige Silicon Valley company does. Especially contractors and outsourcing firms and the like? Yeah, not great for them.

Also analyst-type and data-science-type roles, since the level of reasoning plus the ability to parse structured data and write code is pretty much there. Medical scribes are already being automated: voice-to-text plus context-aware parsing. I also think areas of law like patent law (writing provisionals etc.) are probably in a situation where the tech is already better than humans at writing claims that are not going to conflict with prior art and the like, though there'll be legal barriers to adoption; a lot of the legal staff involved might be replaced even if the lawyers and agents are not.

Anyone who writes review papers or research summaries like market reports, without actively doing non-internet research like interviewing people, is going to struggle against AI-written reviews that can pull from more information than is humanly possible to parse. The same goes for accounting, preparing financial statements, etc., where "creativity" is not necessarily a good thing, though again regulations might stop that. And obviously in healthcare: doctors like radiologists and surgeons, which we've been talking about as a possibility for a long time, but it looks more possible than ever now.

Also, there are areas where it's quickly becoming a required skill set, so it's not that it's replacing people but that their skills are being obsoleted. All the molecular biologists I know who used to joke about how they picked biology because they suck with computers and hate Excel are at high risk of getting left behind right now, especially with how steep the improvement has been with protein design models like RFDiffusion. Though by the latest rumors, the vast majority of biologists involved in protein work have already started using tools like AlphaFold and ESMFold, so it does look like people are adapting.


> layoff announcements from U.S.-based employers reached more than 80,000 in May — a 20% jump from the prior month and nearly four times the level for the same month last year. Of those cuts, AI was responsible for 3,900, or roughly 5% of all jobs lost

> The job cuts come as businesses waste no time adopting advanced AI technology to automate a range of tasks — including creative work, such as writing, as well as administrative and clerical work.

Source: https://www.cbsnews.com/news/ai-job-losses-artificial-intell...


Customer service/support. Low-level legal services. Copywriting. Commercial music composition. Commercial illustration production. Junior-level software development.


Indeed, and these people should be able to do things that are useful, not just for themselves, but because interacting with humans to get these things is much better for society than EVERYONE interacting with a damn computer!


It also is removing the reliance that people have on each other.

On the other hand access to information has given me more independence, and this hasn't been a bad thing. I do rely less on others, like my parents, but I still love them and spend more time having fun with them rather than relying on them.

I do understand what you mean, it just doesn't line up as all negative to me.

I also think open source AI will destroy any feudalistic society. These companies like MS are going to have a problem when their own technology starts to destroy their value add.

Look at Adobe and some of the open-source video and graphics editing AI software; there goes one fiefdom.


> AI not only takes jobs away from creatives such as illustrators

Why do these people deserve protection from automation any more than all the millions of people who worked other jobs that were eliminated by automation up to this point?


> Why do these people deserve protection from automation any more than all the millions of people who worked other jobs that were eliminated by automation up to this point?

You got it!!! All those other people DO deserve protection from automation! But our society made a mistake and pushed them out. Many communities were destroyed by efficient automation, and guess what: efficient automation via cars and vehicles is what caused our most immense disaster yet, the climate crisis.

We made a mistake by creating so much automation. We should strive to recreate smaller communities in which more creative AND manual tasks are appreciated.


> We made a mistake by creating so much automation. We should strive to recreate smaller communities in which more creative AND manual tasks are appreciated.

Those exist?


Yes, they do. Indigenous tribes, for one... which did more manual labor but also had creative outlets as well.


> We made a mistake by creating so much automation.

So you're volunteering to become a stoop-labor agricultural serf, then?


Basically, you are creating a strawman. I said too much automation. And nothing about removing automation implies going back into serfdom. If you want to actually have a debate, say something intelligent rather than a quick quip.


> I said too much automation.

Why are you the one who gets to decide how much is "too much"?

> And nothing about removing automation implies going back into serfdom.

Sure, just because essentially every society above the level of hunter-gatherers has had some kind of peasant/serf/slave class doesn't mean that your utopia will.

But that's not the way to bet, my friend.


They don't. But if you look at predictions of the future like those in optimistic SciFi, the dream was that automation would eliminate repetitive, dirty, and dangerous jobs, eliminate scarcity of basic necessities like food, and free up every individual to pursue whatever creative endeavours they wish.

What we're getting instead is the automation of those creative endeavours, while leaving a "gap" of repetitive, mind numbing work (like labelling data or doing basic physical tasks like "picking" goods in an Amazon warehouse) that still has to be done by humans.


You're confusing "enjoying a creative endeavor" with "being able to make a living from that creative endeavor".

Hand weaving is a creative endeavor, but almost no one makes a living from hand weaving today. People still do it for fun, though.

The upside there is that most people are able to afford more than two sets of clothing (one for daily wear, one for church, weddings, funerals, etc.).


I agree with your point, but I think the honest answer to your question is that people view creative jobs as aspirational whereas the other "rote" jobs that were being automated away were ones that people would have preferred to avoid anyway.

When we're getting rid of assembly line jobs, or checkout counter staff, or data entry clerks, or any other job that we know is demeaning and boring, we can convince ourselves that the thousands/millions put out of work are an unfortunate side effect of progress. Oh sure, the loss of jobs sucks, but no one should have to do those jobs anyway, right? The next generation will be better off, surely. We're just closing the door behind us. And maybe if our economic system didn't suck so much, we would take care of these people.

But when creatives start getting replaced, well, that's different. Many people dream of moving into the creative industry, not out of it. Now it feels like the door is closing ahead of us, not behind us.


The answers may or may not make sense, depending on whether or not you find something in us that is not worth automating at all. Is there such a thing?


Because we don't like white collar people losing their jobs. Blue collar on the other hand deserve what's coming to them, as they didn't prepare themselves for what the future brings.

/s


learn to code.


learn to maintain HVAC


No kidding. It looks like the most secure jobs will be "the trades" since the environment these professionals work in is the most variable/least structured and thus least susceptible to automation by robotics.

My "Plan B" is becoming an electrician.


We can say the same for big tech as for oil, penicillin, or GMOs. What does it mean to be human when we, all humans, are the sons and daughters of big industry for-profit ventures? The OpenAI board stopped trusting Altman when he started pitching the OpenAI technology elsewhere behind their backs. At least those are the rumors I read. If OpenAI developers truly believe the AI can be weaponized and that they should not be following the leadership of for-profit ventures, they won't jump ship. We will see.


> We can say the same for big tech as for oil, penicillin, or GMOs. What does it mean to be human when we, all humans, are the sons and daughters of big industry for-profit ventures?

That is why we should be cautious about technology instead of inviting it with open arms. It is a question we should continue to ask ourselves with more wisdom, instead of relying on our mass global capitalistic system to deliver easy answers for us from the depths of the profit motive.


Anyone here who has worked with a non-profit can recognize the scenario of boards operating untethered by the sometimes more relatable profit motive.

I think what remains to be seen is who is on the right side of history. The real loser here is probably ethical AI. I know this won’t be a popular opinion around here, but it’s clear to me that with computing and AI, we may be in an echo of the Industrial Revolution where the profit motive of the 19th century led to deep human injustices like child labor and unsafe and inhumane working conditions.

Except of course that AI could have even more impact, both positive and negative, in the same way social media has.


If you think child labor or unsafe and inhumane working conditions started with the industrial revolution, your history teachers have deeply failed you. Those existed long before industrialization. Hell, for most of history education was child labor!


I was a history teacher! And you are absolutely correct that child labor of course existed prior to the industrial revolution. But the industrial revolution introduced systematized, institutionalized child labor and unsafe working conditions at exponential scale and density. Which is why it's illegal now, which we would all agree is a good thing, whether it happened before or not.


> The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.

Hmmmh?

https://en.wikipedia.org/wiki/Robert_Bosch_Stiftung

https://en.wikipedia.org/wiki/Tata_Sons

If anything it seems like the more likely lesson is that Altman manoeuvred OpenAI into a situation which was incompatible with its non-profit objectives.


The Bosch example doesn't really match:

>Although the charity is funded by owning the vast majority of shares, it has no voting rights and is involved in health and social causes unrelated to Bosch's business

There's also the example of Ikea's ownership structure, but that's just a giant tax dodge.


> The Bosch example doesn't really match

Yes, I think you're right there.


> Here’s the reality of the matter, though: whether or not you agree with the Sutskever/Shear tribe, the board’s charter and responsibility is not to make money. This is not a for-profit corporation with a fiduciary duty to its shareholders; [...] to the extent the board believes that Altman and his tribe were not “build[ing] general-purpose artificial intelligence that benefits humanity” it is empowered to fire him; they do, and so they did.

I would quibble with this slightly. They do have a right to fire, but they're doing a poor job of working in the not-for-profit's interests if they do so in a way that collapses the value of their biggest asset (the for-profit), especially when other options were potentially available: e.g. a negotiated exit, providing proper warning to their investors, etc.


I worked for a startup acquired by Microsoft and, suffice it to say, MS is a culture killer. Our open dynamism and free discussion withered under a blanket of MS management.

I don't think it's possible that the cultures of OpenAI and MS can be made compatible. MS is dreary. Highly effective at productizing, yes. But the culture that propels deep innovation -- that is not going to last.


Was that under Nadella, though?

Things have changed quite a bit, as I understand it.


Yes, Microsoft is the new trillion dollar corporation with a human face.


Yes, Nadella.


It's so weird that people think a bunch of personnel moving to MS is a win for MS.

They can't just magically recreate what OpenAI has done: the data sets, the tooling, the models, everything around it. It would take so long for MS to catch up, even if they had 100% of their people working on it tomorrow.

The rest of the market is going to benefit more than MS.


Their contract with OpenAI gives them unlimited access to the data sets, the tooling (which is all on Azure and was built out with the help of Azure engineers), the models and their weights, and basically everything around it.


OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.


> Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down,

This reads like a disingenuous strategy to get rid of the other three members (one half) of the board. A real coup, not a fake one. I know nothing about any of these people, but it seems possible Sutskever convinced the board to make a decision that he knew would have an outcome that would end in them being fiduciarily obliged to resign so that he, Altman, and Brockman could come back as the entirety of the board. And if the hiring by MS is involved, then MS would then control the board of the non-profit.


Interesting and insightful read - definitely seems like someone has been paying attention throughout.

I can't get past this snippet:

> they ultimately had no leverage because they weren’t a for-profit company with the capital to be truly independent.

Maybe I don't understand non-profits, but... they're allowed to amass a war chest for expansion and to pay for dependencies, right? They're not compelled by charter to have no capital, like some corporate equivalent of a monk -- it's just that OpenAI did not have enough capital to negotiate better terms. How is that different from any other startup that gives up ownership of its IP in exchange for investment?


This article read to me like someone trying to shoehorn "non-profits suck" into an otherwise interesting narrative.


In this case, I think it's "non-profits suck" for business analysts. How can you predict what those do-gooders will get up to if they sometimes refuse to make the numbers get bigger?


I think this mainly means they don't pay out dividends to shareholders.


It's different from other startups because other startups can promise potentially infinite returns in exchange for investment, while OpenAI had capped returns.
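
Concretely (my rough sketch, not from the article; the 100x cap was reported for OpenAI LP's first-round investors, and the exact multiple reportedly varies by round):

    conventional startup:   investor payoff = s * V                       (unbounded in V)
    capped-profit OpenAI:   investor payoff = min(s * V, 100 * invested)  (cap binds for large V)

where s is the investor's stake and V is the value ultimately distributed; anything above the cap flows back to the non-profit. An investor pricing the second contract can never value it above 100x their money, which is exactly the weaker promise described above.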


That does sound like the most notable difference.


The pre-open market doesn't see it as a big win for MSFT. The stock is still lower than it was at Friday's open.


It doesn't seem like it is a big win for MSFT. Hard to argue that MSFT is now in a better position than they were before this happened.

Best case scenario for MSFT probably would have been to negotiate Altman et al back to OpenAI with some governance changes to boost stability. Keep the OpenAI GPT-N momentum going that they have harnessed for themselves.

MSFT have managed to neutralize (for now) the threat of Altman et al creating a competitor, but this has come at the cost of souring their relationship with OpenAI, whom they will continue to depend on for AI for the next couple of years.

Big question here is what happens to OpenAI: can they keep the momentum going, and how do they deal with their sugar-daddy and compute-provider MSFT now also being a competitor? How much of the team will leave, and can they recover from that?


The thing is, Satya has played the best possible hand in the current situation. MSFT did not fall much, and a 1-2% deviation does not mean much in the long run.


Agreed - he contained the damage as best as could be done.


If Mr. Market was perfectly rational and correct, profit would cease to exist.


That's not true. Capital would still accumulate returns higher than the cost of inventory plus wages; the return would just be the same 'everywhere', and you'd have a perfect market alpha for all stocks, whether it be 1, 2, 5, or 10%. Even perfectly rational markets do not establish socialism overnight. Now maybe you could argue under a Marxist lens that exploitation would be more 'visible', causing socialism to arrive out of social rebellion faster, but that's really beside the point.

What would cease to exist would simply be speculation and arbitrage. Since all prices are perfect, you simply wouldn't be able to make more (or even less, for that matter) money than the return on capital everyone gets by buying and selling shares quickly.
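
One rough way to formalize that (my sketch, assuming a one-period, risk-neutral, no-arbitrage model, which is not spelled out in the thread): if every asset i with expected payoff E[X_i] is priced at

    P_i = E[X_i] / (1 + r)

then every asset's expected return is identical:

    E[R_i] = E[X_i] / P_i - 1 = r

Capital still earns r > 0 everywhere, but no trade can expect to beat r, so speculative and arbitrage profits vanish while the return on capital does not.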


Why? That is a gigantic statement to make with no backing provided.


MSFT is also at an all-time high, so it's natural for them to be slightly lower.


It really depends what actually happens. On paper, OpenAI's business leadership is now at MSFT, while research leadership seems to be at OpenAI. What does OpenAI need to pursue its goal? One may argue that hiring under ex-OpenAI leadership was meant to facilitate the productization of the models. Can someone who knows the actual engineering/product/research makeup of OpenAI provide substance?


It does lend credence to an emerging trend suggesting that large companies, not disruptive startups, will dominate AI development due to high costs and infrastructure needs.


"The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company."

Alternatively, we could have these companies turned into research organizations run by the government and funded by taxes, the way most research (e.g. pharmaceuticals) should be. There's more than one way to get good research done, and having it public removes many strange incentives and conflicts of interest.


Compare OpenAI's funding to the national labs.

Sandia and Los Alamos both receive about $4.5 billion per fiscal year. OpenAI is likely spending an order of magnitude more than that.


Really? OAI spending 45 billion dollars a year with fewer than 1,000 employees? Seems unlikely.


Shengjia Zhao's deleted tweet: https://i.imgur.com/yrpXvt9.png


I'm a bit confused. Does MSFT have a perpetual license to the original OpenAI LLC's IP *or* to the capped company OpenAI "Global" LLC that was specifically created for MSFT's stake? Because, if the latter, it seems like the new/abandoned/forsaken OpenAI could just fork any new IP back into its original non-microsoft-stained LLC and not be a mere tool of Microsoft moving forward.


Undoubtedly they have a perpetual license on what has been released so far: GPT-4. Not so for new innovations or tech.


So when the author states that "Microsoft just acquired OpenAI for $0", they mean, effectively, only a fixed-in-time snapshot of code that will likely be old news in about 18 months, by the time other models have caught up. Microsoft still needs to execute like mad to make this work out for them. Right now the entire thing seems to rest on the hope that enough talent bleeds out of OpenAI to make this worthwhile. They'll probably get that. But it's still a delicate game. I most wonder what breakthrough Ilya has been alluding to recently [1] and whether it'll be available under MSFT's license.

[1] https://youtu.be/Ft0gTO2K85A?si=YaawmLi8zKrFxwue&t=2303


Plenty of them can go to Google, Anthropic, Apple, Tesla, Amazon or any other attractive company to work for. By attractive I mean they'd be compensated well enough to have a nice life there.

There's not a lot to suggest everyone will just join M$ by default.


If you have:

- intellectual property as of today

- the servers that run it

- the people that wrote it

- the executive leadership that executed the play so far and know the roadmap ahead

What else do you need?


Development work on GPT5, curated input datasets, human feedback data, archives of all ChatGPT conversations, DALL-E, stats on which users are the big spenders, contracts with cheap labor to generate data and moderate abuse...


I wonder if MS is aware of the allegations against Sam Altman, which were put forth by his sister, of sexual, financial, and other abuse.


> Two years later, and the commitment to “openly share our plans and capabilities along the way” was gone; three years after that and the goal of “advanc[ing] digital intelligence” was replaced by “build[ing] general-purpose artificial intelligence”.

"Don't be evil", for example. Billions and billions were made when that phrase was erased.


I really see no reason that LLMs can't go the way of operating systems when it comes to the success of the open-source approach vs. the proprietary closed-source approach.

The argument over for-profit vs. non-profit is largely meaningless, as anyone who is paying attention knows that 'non-profits' just use different methods to distribute the revenue than for-profits do, using various smoke-and-mirrors approaches to retain their legal status. Arguably a non-profit might put more revenue back into R&D and less into kickbacks to VC investors, but that's not always the case.

Additionally, given all the concerns about "AI safety" open-source seems like the better approach, as this is a much better framework for exposing biases, whether intentional or accidental. There are many successful models for financing the open-source approach, as Linux has shown.


If the OpenAI board can't get humans to align, what hope do they have of "aligning" ML models?


people in alignment with each other is agreement.

a language model in alignment is control.

the model does not need to be aligned with the desires of people. just a person. it could be people, but getting alignment with people is…


> The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.

This is a big-picture idea that we should examine more closely. Right now, in the heat of the chaotic collapse, it's easy to conclude that the for-profit corporate structure is the only way to go. But I think we should take a "proof is in the pudding" approach and remember all the amazing things that OpenAI accomplished under its non-conventional org structure. Maybe that non-conventional org structure was a key ingredient in OpenAI's success? Sure, we now know that "long-term stability" does not seem to be a quality of this org structure, but nonetheless it seemed to have lots of other desirable qualities.


Why does it have to be all or nothing? Why not non-profit at the start, when you need to attract scientists and engineers who are in it for the challenge, and then change to for-profit when you need to attract product managers and people who will scale the company and make it sustainable?


> What gives me pause is that the goal is not an IPO, retiring to a yacht, and giving money to causes that do a better job of soothing the guilt of being fabulously rich than actually making the world a better place.

Ouch. Is that really the ideal vision of founding a SV startup?


https://news.ycombinator.com/item?id=38350637 :

"None of these companies appease China; they refuse to provide service under those conditions and/or they are IP range blocked. Microsoft does service China with Bing, for example.

You should not sell OpenAI's to China or to Microsoft, [or to China or Russia through Microsoft]

Especially after a DDoS by Sue Don and a change in billing."

[1] https://en.wikipedia.org/wiki/List_of_websites_blocked_in_ma...


So? What does it matter if China is blocked?


I can't imagine what it will be like to work at OpenAI over the next few months. A massive load of talent has just left, and the primary source of investment has just taken exactly what you do and brought it in house. Even if you wanted to stay at OpenAI, how can you reasonably believe that MS will continue providing the compute and investment necessary for you to retain leadership? It just seems impossible. It may be that in the medium term this move means OpenAI goes back to more of a research focus, because I just don't see how the MS partnership makes any sense as a long-term strategy now.


The biggest problem with artificial intelligence is the humans running it.


Seems the author is expecting OAI to continue merrily along its way working towards AGI (albeit at a stated slower pace) while MSFT is able to take Altman et al and circle the wagons on what already exists (GPT-4), squeezing it for all it's worth. While that's entirely possible, there are other outcomes, not nearly as positive, that would put MSFT at a disadvantage. It's like saying MSFT's AI hedge is built on what appears to be sand; maybe it's stable, maybe it's not.


I don't think they can just outright steal GPT-4, and they definitely won't be taking the world-class data set with them.


Unusually well written and apparently well informed article.


Ben Thompson has been a keen observer of the tech industry for quite some time. I recommend browsing the archives, or HN’s history of submissions.

https://news.ycombinator.com/from?site=stratechery.com

