Hacker News
Sam Altman says Meta offered OpenAI staffers $100M bonuses (bloomberg.com)
91 points by EvgeniyZh 27 days ago | hide | past | favorite | 123 comments




I feel like there's a lot of half-truths in the article and some of the comments here.

1. The $100M is a huge number. Maybe there was one person, working on large-scale training or finetuning, who had an offer like this, which was surely high even in base salary (say $1M+) and had a lot of stock and bonus clauses that over 4+ years could have worked out to a big number. But I don't believe that the average SWE or DE ("staffer") working on the OpenAI ChatGPT UI JavaScript would get this offer.

2. One of the comments here says "[Zuck] has mediocre talent". I worked at Facebook ~10 years ago, it was the highest concentration of talent I've ever seen in my life. I'm pretty sure it's still an impressive set of people overall.

Disclaimer: I don't work in the LLM space, not even in the US tech space anymore.


Comments aimed at Zuck’s talent always seem jealous to me. An argument can be made that he lacks a good moral compass using specific public examples but I haven’t seen any similar evidence to argue a lack of talent.

I also know many folks who’ve worked at Meta. Almost all of them are talented despite many working there regretfully.


They did “pay” $14B to the CEO of ScaleAI. $100M sounds plausible, relatively.


Is it just me, or does Mr. Wang give grifter/charlatan vibes? I get that you don't hand $14B and positions like this to 28-year-olds for nothing. He seems like a really good salesperson mostly, which sometimes gives me pause. I'm sure Meta needs excellent salespeople... but for $14B? Did Meta really need labeling/training infra? Idk, the whole deal is weird to me.


It's incredible to me that the "talent != moral values" attitude is this widespread. I know this was pre-Cambridge Analytica, but the writing was on the wall, and we see the same with each new tech wave.

Whenever I ask such people, they talk about the incredible perks, stock options, challenges. They do say they are overburdened though.

These are people who would be rich anyway, and could work anywhere, doing much more good.


They’re people who would have been on wall street in another era a lot of the time. Smart people focused on money.


> "[Zuck] has mediocre talent"

I read it as: he is not talented himself. Not about the talent he employs.


> "[Zuck] has mediocre talent"

> I read it as: he is not talented himself. Not about the talent he employs.

I know Zuck personally and this is one of the big misunderstandings to do with him. If you adjust his selector switch (just below the 3rd rib-like component on the pseudo-thorax) to "science and engineering", you'll find he's the most brilliant guy ever, like Data from Star Trek! But this mode consumes some CPU cycles normally spent on hu-man interactions, so he can come off as awkward.

A year or two back we switched it to "JW" (Jack Welch), and a sticky-fingered unix programmer spilled diet Mountain Dew all over the switch; it's been stuck there ever since, hence here we are, hence the reputation for no-talent. It's there, we just have to figure out how to get that switch jarred loose.


> it was the highest concentration of talent

What a waste of a generation


Re: Mark. I agree. It surprised me to no end that he knew the staffing and open headcount of every team. I was a manager for most of my tenure, and his interactions with Lars, Chris and others were always insightful and enlightening.


OpenAI has very few employees, so I think it's possible for the right people. Crazier things have happened.


> it was the highest concentration of talent I've ever seen in my life

And yet they don’t have much to show for it.


3B people use the products daily for an hour on average. FB products are the primary way these people communicate with their friends & families online.


And among those popular products, the main and only decent one is WhatsApp, because it was already good when they bought it and fortunately they haven't touched it much since then.


The product (practically unchanged for decades) existed long before most were employed there, I would assume. It's like claiming a great life achievement because you were employed at Coca-Cola while their soda already existed.


aside from a $1.75 trillion market cap


Such an impressive life achievement for those employees


Whether this is true or not, this is a clever move to publicize. Anyone being poached by Meta now from OpenAI will feel like asking for 100m bonuses and will possibly feel underappreciated with only a 20 or 50 million signing bonus.


This can backfire and work the other way around. Existing employees may try to renegotiate their compensation and threaten to leave.


20 mil is peanuts. Who would accept it?


I would, where do I sign


If you have to ask this question, the offer is not for you. :-)


How much for a ZJ?


Since I'm not that deep into American pop culture, I had to google:

https://www.youtube.com/watch?v=2gVhZT1tHzg

(for those who are also out of the loop)


Source: Beerfest (2006):

Barry Badrinath (down-on-his-luck man-hooker): It's $10 for a BJ, $12 for an HJ, $15 for a ZJ...
Landfill: [interrupting] What's a ZJ?
Barry Badrinath: If you have to ask, you can't afford it.


It's like twelve life incomes in the US, assuming a 40-year career at $40k/year:

    > 20000000 / (40 * 40000)
    12.5
An obscene amount of wealth.
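The back-of-the-envelope math generalizes, as a quick sketch (the $40k/year and 40-year-career figures are the thread's assumptions, not official statistics):

```python
def life_incomes(lump_sum, annual_income=40_000, career_years=40):
    """How many working lifetimes of income a lump sum represents."""
    return lump_sum / (annual_income * career_years)

# The $20M figure discussed above, and the reported $100M offer.
print(life_incomes(20_000_000))    # 12.5
print(life_incomes(100_000_000))   # 62.5
```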


> twelve life incomes in the US

Or 2 trips to the hospital


Rich people usually have good insurance though.


Insurance always takes more than it gives.


The key thing is that insurance also gets monopsony power over what they pay providers, so they can pay less than the provider would nominally charge.


In the aggregate.


people not completely out of touch with reality


Yeah, and everyone will know you did it for the money.


Isn't pretty much everyone working at OpenAI already clearly motivated by money over principle? OpenAI had a very public departure from being for-good to being for-money last year...


Everyone works for money unless you are refusing to take your salary.


Lots of people working for AI labs have other AI labs they could work for, so their decisions will be made based on differences of remuneration, expected work/role, location, and employer culture/mission.

The claim above is that OpenAI loses to other labs on most of the metrics (obviously depends on the person) and so many researchers have gone there based on higher compensation.


That's not what the phrase means: when you decide to take a vastly less lucrative offer, you're working for something other than money.


How many people do that out of the working population?


Millions take a noticeable pay cut; it suppresses wages in many fields.

It’s one of the reasons so many CEOs hype up their impact. SpaceX would’ve needed far higher compensation if engineers weren’t enthusiastic about space, etc.


Probably most people working for non-profits or any level of government.


Arguably anyone who's working on something they're "passionate" about.


Well - only if they had alternatives.


Obviously this is not the case, and you're deliberately choosing to misunderstand the point.


> OpenAI had a very public departure from being for-good to being for-money last year...

Were they ever “for good”? With Sam “let’s scam people for their retinas in exchange for crypto” Altman as CEO? I sincerely question that.

There was never a shift, the mask just fell off and they stopped pretending as much.


It was originally called "open" and run as a not-for-profit and a lot of people joined - and even joined the board - on that understanding.


I'm not sure that's an answer to the question of whether or not it was ever for good


It’s not like tech companies have a playbook for becoming “sticky” in peoples’ lives and businesses by bait and switch.

They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?


Who would have believed it in the first place? Not I.


There are options other than money and virtue signaling for why you'd work a given job.

Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.

Even getting 1 of those 3 is not a guarantee in most jobs.


While your other comment stands, there is no separating yourself from the moral implications of who you're working for.

If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.


I don't see how this counters my point.

If you worked at OpenAI post "GPT-3 is too dangerous to open source, but also we're going to keep going", you are probably someone more concerned with the optics of working on something good or world-changing than with the substance.

And realistically, most people I know who work at OpenAI and wouldn't cite the talent, the shipping culture, or something similar are people who love the idea of being able to say they're going to solve all of humanity's problems with "GPT 999, Guaranteed Societal Upheaval Edition."


> There are options other than money and virtue signaling for why you'd work a given job.

Doing good normally isn't for virtue signaling.


Working at an employer that says they're doing good isn't the same as actually doing good.

Especially when said employer is doing cartoonishly villainous stuff like bragging about how they'll need to build a doomsday bunker to protect their employees from all the great evi... er, good their ultimate goal would foist upon the wider world.


Good point. I was thinking of the "actually doing good" case. Absolutely there's a lot of empty corporate virtue signalling, and also some individuals like that. But there are still individuals who genuinely want to do good.


Are people sacrificing 40 hours of their lives every week to mega corps for anything other than money???


40?!? That's not hardcore at all!


I think most of us work for money ;)


As opposed to?


I'm really confused by this comment section. Is no one considering the people they'll have to work with, the industry, the leadership, the customers, the nature of the work itself, the skillset you'll be exercising... literally anything other than TC when selecting a job?

I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...

If they're writing up an offer with a $100M sign on bonus, it's going to a person who is making comparable compensation staying at OpenAI, and likely significantly more should OpenAI "win" at AI.

They're also people who have now been considered to be capable of influencing who will win at AI at an individual level by two major players in the space.

At that point, even if you are money-motivated, being on the winning team, when winning the race has unfathomable upside, is extremely lucrative. So it's not worth taking an offer that puts you on a less competitive team.

(In fact it might backfire, since you probably do get some jaded folks who don't believe in the upside at the end of the race anymore, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)


> even if you are money motivated, being on the winning team when winning the race has unfathomable upside

.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?


$100M doesn't just get pulled out of thin air; it's a reflection of their current compensation. Their current TC is probably around 8 figures, with a good portion that will 10x on even the most miserable timelines where OpenAI manages to reach the promised land of superintelligence...

Also at that level of IC, you have to realize there's an immense value to having been a pivotal part of the team that accomplished a milestone as earth shattering as that would be.

-

For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.

tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.


Imagine!! I would never live down the humiliation of getting a $100m signing bonus (I'd really like the opportunity to try though).


This isn't punk, nobody cares if you're a ""sellout"".


I believe the Sex Pistols were quite happy to take the man's money! Maybe hippies would have more scruples in that area.


Ehh. I think much less of people who “sellout” for like $450k TC. It’s so unnecessary at that level yet thousands of people do it. $100M is far more interesting


I think money and the promise of resources will convince enough qualified people to join Meta, but I guess it doesn't help their recruiting efforts that Zuck seems to have the most dystopian and anti-human AGI vision of all the company heads.

Of course we have good reasons to be cynical about Sam Altman or Anthropic's Dario Amodei, but at least their public statements and blog posts pretend to envision a positive future where humanity is empowered. They might bring about ruinous disruption that humanity won't recover from while trying to do that, but at least they claim to care.

What is Zuckerberg's vision? AI generated friends for which there is a "demand" (because their social networks pivoted away from connecting humans) and genAI advertising to more easily hack people's reward centers.


I think Dario has the most dystopian and anti people AI vision


What makes you say that?


He said "none of our best people have left", which means some are leaving.

And OpenAI probably had to renegotiate with those holding a $100M offer, so their costs went up.

I suppose it is karma for Zuckerberg; Meta have abused privacy so much that many dislike them and won't work for them out of principle.


> And OpenAI probably had to renegotiate with those holding a $100M offer, so their costs went up.

That sounds like the actual move here: exploding your competitor's cost structure because you're said to pay insane amounts of money for people willing to change...

On the other hand: People talk. If Meta will not pay that money that talk would probably go around...


Following this logic, if Meta had been offering this money and stopped doing so going forward, this article is pretty good cover for reining those costs in (if they wanted to).


They will just give them more equity which costs nothing


I am nearly certain that’s not how Zuckerberg thinks about equity.

You also have to publicly account for RSUs to the market just like any other expense.


This is an offer made 2 years ago to someone:

> Base salary: $250,000
> Stock: worth $1,500,000 over 4 years
> Total comp projected to cross $1M/year

https://www.linkedin.com/posts/zhengyudian_jobsearch-founder...

https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re...


Meta is doing $10 billion in buybacks every quarter. Giving out equity undoes that.


Huh? Equity costs a lot. It's diluting existing shareholders, ie Zuck.


> He said "none of our best people have left" which means some are leaving.

If you define "best" as "not willing to leave", the statement "none of our best people have left" is actually near to a tautology. :-)


This is software developing a transfer market like footballers', isn't it? We've still got a long way to go to catch up with Ronaldo.

In both cases this is driven by "tournament wages": you can't replace Ronaldo with any number of cheaper footballers, because the size of your team is limited and the important metric is beating the other team.

It's also interesting to contrast this with the "AI will replace programmers" rhetoric. It sounds like the compensation curve is going to get steeper and steeper.


The curve is getting steeper, yes. That's not a contrast to the "AI will replace programmers" rhetoric.

Steeper means: higher at the top. Lower on the bottom.

Right now, AI can do the job of the bottom large percentage of programmers better than those programmers. Look up how a disruptive S-curve works. At the end, we may be left with one programmer overseeing an AI "improving" itself. Or perhaps zero. Or perhaps one per project. We don't know yet.

A good analogue is automation. Mass-scale manufacturing jobs were replaced by a handful of higher-paid, higher-skilled jobs. Certain career classes disappeared entirely.


> you can't replace Ronaldo with any number of cheaper footballers

Of course you can. It's a team game. Having Ronaldo wearing your team's shirt doesn't guarantee a win. So a team of 11 cheaper footballers with a better plan and coaching has often beaten whatever team Ronaldo plays on. "Cheaper" != "cheap" of course; they're still immensely talented and well-paid athletes.


What I mean is there's only eleven slots on the field. You can't swap one star player for a hundred mediocre ones. It's worth seeing which industries do and don't work like that.


The "I trained a trillion parameter model" club is a very small club.


> We've still got a long way to catch up with Ronaldo.

Pretty sure Alexandr Wang just blew Ronaldo out of the water.

Before that, the WhatsApp/Instagram founders.


I think the real breakthroughs will come from some randos or some researchers. I'm not sure throwing huge amounts of money at something is always the solution; otherwise many diseases would have been dealt with already.


Yup, also somebody with a completely different perspective, not tainted by biases stemming from the wrong incentives.


Raising the bar for salaries to be so high creates a huge moat for all these massive companies. Meta and OpenAI can afford to pay $100M for 10-20 top employees, but that would consume the entire initial funding round for startups such as Safe Superintelligence from Ilya Sutskever, who raised $2 billion.


Which is what these companies want [0]. So, if you don't have a moat, build one!

[0] https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...


https://archive.ph/7lvlj

"Up to"

Still, though, as far as I know that kind of hiring bonus is unheard of. Surely DeepSeek and Google have shown that the skills of OpenAI employees are not unique, so this must be part of an effort to cripple OpenAI by poaching their best employees.


They are world-class engineers of course, but it's always been clear OpenAI's core advantage was simply access to massive amounts of capital without much expectation of a return on investment.

The ML methods they use have always been quite standard, they have been open about that. They just had the gall (or vision) to burn way more money than anyone else on scaling them up. The scale itself carries its own serious engineering challenges of course, but frankly they are not doing anything that any top-of-class CS post-grad couldn't replicate with enough budget.

It's certainly hard, but it's really not that special from an engineering standpoint. What is absolutely unprecedented is the social engineering and political acumen that allowed them to take such risks with so much money, walking that tightrope of mad ambition combined with good scientific discipline to make sure the money wouldn't be completely wasted, and the vision for what was required to make LLMs actually commercially useful (instruction tuning, "safety/censoring"...). But frankly, I really think most of the engineers themselves are fungible, and I say this as an engineer myself.


That is a common narrative but Google had LaMDA as an LLM with over 100B parameters before the ChatGPT release. There was even a Xoogler that claimed it was alive.

From my POV Google could have released a good B2C LLM before OpenAI, but it would compete with their own Ads business.


True, actually people forget that quite good LLMs existed 2-3 years before ChatGPT, from Google, Microsoft, Facebook… OpenAI itself open-sourced GPT-2 all the way back in 2019 and had a GPT-3 API service for years before ChatGPT.

The breakthrough that ChatGPT brought was not technical, but the foresight to bet on laborious human-feedback fine-tuning to make LLMs somewhat controllable and practical. All those previous LLMs were mostly as “intelligent” as the GPT-3.5 that ChatGPT was built on, but they hallucinated so much, and it was so easy to manipulate them into being horribly racist and such. They remained niche tech demos until OpenAI trained them, not with new tech really, just the right vision and lots of expensive experimentation.


Man did I get some pushback when I said this a week ago. People just really don't want to believe the sums involved here.


I thought that the report that was being screenshotted a few weeks ago on the relative movements of staff between the top AI labs[0] would make for a good companion data point. Except now that I look at it, Meta didn't even make it to the graph :-/

[0] https://www.signalfire.com/blog/signalfire-state-of-talent-r...


If I were one of the targeted, I'd probably take it as a learning opportunity. Doesn't Yann LeCun work there as the VP and Chief AI Scientist? I don't know many others doing much research other than monetizing LLMs. DeepMind by far would be my (uninformed) first choice.

As for ethics, Meta/FB is disliked but they seem pretty transparent compared to OpenAI [sic].


The job market in software seems crazy to me at the moment. It's becoming all or nothing.


Only for the top 1% of AI talent. As it is a limited pool.


I was never involved in doing ML myself, even through my CS studies. However, from the outside it looks... not that complicated? How do they justify these salaries? Where do they see it coming back to them in terms of revenue?


Most of the people pursued in these "AI talent wars" are folks deeply involved in training or developing infrastructure for training LLMs at whatever level is currently state-of-the-art. Due to the resources required for projects that can provide this sort of experience, the pool of folks with this experience is limited to those with significant clout in orgs with money to burn on LLM projects. These people are expensive to hire, and can kind of run through a loop of jumping from company to company in an upward compensation spiral.

Ie, the skills aren't particularly complicated in principle, but the conditions needed to acquire them aren't widely available, so the pool of people with the skills is limited.


It's a bit like rocket engines: training a big fat LLM is super duper expensive, like a rocket, and all else being equal you'd like to get it right the first try. Someone who has built a lot of rocket engines knows all the gotchas and where to look out for traps and gremlins; same for someone who has built a lot of giga-sized LLMs.


It both is and isn't. But finding genuinely new things that actually work better is very difficult.


Well, idk... recruiters from orgs have been really active in reaching out of late, anecdotally speaking, but I would not say I am in the top 1% of AI talent.


I can confirm first hand.


$100M for a staff member sounds crazy, but on the other hand, if they were hired before ChatGPT was released and still have stock options vesting, you might need to compensate them $100M just for the stock they'd walk away from.


Is there any reason to think he’s not just lying? His entire track record is riddled with dishonesty, about OpenAI’s mission, about the capabilities of their next AI model, about his own role and financial incentives.


I see no reason to believe anything Altman says, but food for thought:

Is it possible such a bonus, if it exists, would be contingent on Meta inventing AGI within a certain number of years, for some definition of AGI? Or possibly would have some other very ambitious performance metric for advancing the technology?


Simple hack for Sam. Hire a bunch of people for some nominal amount. They do not have to do anything and they know nothing about AI. Then let Zuck waste his wad paying out signing bonuses to fake employees.


Sam Altman is a sociopath who seems to desire only power. He plays teams against each other at the same company. He backstabs people. He lies. He betrays people. He knows how to push levers and exploit relationships and normal human behavior.

I'm disappointed how many people here are accepting it so non-critically. It could be true, but for me, it's very difficult to believe. Are OpenAI staffers really telling Sam Altman what their offers are?

From Bayes' theorem it is much simpler to assume, Sam is lying to burnish the reputation of his company, as he does every week. From a manipulation point of view, it's perfect. Meta won't contradict it, and nobody from OpenAI can contradict it. It hinders Meta's ability to negotiate because engineers will expect more. It makes OpenAI look great -- wow, everyone loves the company so much that they can't be bought off -- and of course he sneaks in a little revenge jab at the end, he just had to say that, of course, "all the good people stayed". He is disgustingly good at these double meanings, statements that appear innocuous but are actually not.


Altman doesn't get paid and does AI because he loves it. Of course he would never lie: https://youtube.com/shorts/XzqqzcpmtTw?si=vIusneDF3IkvjCEF


Being the truth engine for sites like reddit is much more valuable than money


Altman really is a generational bullshit artist. Exaggerating the value of his talent while pretending he hasn't already lost a lot of his most valuable people (he has).

It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.


So are tens of millions a common sign-on bonus for individual contributors in the AI space?


Whatever Altman says can't be trusted.


Any provable confirmation here around?


[flagged]


Please don't cross into personal attack, fulminate, or call names on HN. All that is against the site guidelines and, more importantly, the intended spirit of HN.

You may not owe $billionaire-celebrity better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html


Noted. It was a stupid comment by me. Sorry, Dang. I will do better.


Appreciated!


You answered your own question :) !


What an idiot indeed, buying useless companies like whatsapp and Instagram. Where did that take the org. His stupidity shows clearly - the arrogance to think social media could be monetized. Look at that stupidly low sum of a few hundred billion in revenue. Laughable.


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


Man, I wish I also had that sort of mediocre talent. What has he achieved in his career anyway?


Plus when you have the kind of money that Zuck has, buying your way out of everything seems to work pretty well.


This has to be rage bait lol



