Most AI researchers in industry make a lot of money, $200k+, but that is not so outrageous in the context of big tech companies.
And in fact the vast majority of AI researchers are making $20k-$30k a year, because they are graduate students.
But there is a large variance in the nonprofit world, and it's more common for a PhD in CS to earn something like $60K at research nonprofits. Other nonprofit employees - social workers, for instance - have it even worse, earning not much more than minimum wage with Masters degrees. Sadly, many nonprofits capitalize on the idealism of applicants to compensate for poor comp and grueling working conditions.
So the word 'nonprofit' really doesn't convey much information; it all depends on specifics.
Very misleading article
Of course, the average/above-average case isn't terrible. I think non-famous AI researchers are still getting hired into $400-500k/year at FAANG at "senior" level, it's just that you would be making roughly the same amount of money if you just went directly into software engineering at a FAANG out of undergrad and did above-average at promotions.
To me, that makes it not really make sense to go into AI research (especially now, when it's a hot topic and you have a 3-5 year wait to get your PhD) unless it's something you are genuinely interested in. For purely career reasons, it doesn't seem much better than just going right into software engineering.
Even so, your comment strongly indicates a rather pedantic position. If you are suggesting that the mere size of the grant somehow counters the spirit of the discussion, I’d say that’s simply incorrect, and strict adherence to some threshold like “millions” would miss the point entirely.
Good for him, but I really don't see that as a rebuttal to "wages are so low for this occupation that people really good at this choose to do other things as a job." It's a bit like saying "Well, just do what you love anyway! You could totes win the lottery as a way to keep a roof over your head!"
Well, kudos to the lottery winners! But most folks would like a paycheck they can kind of count on that covers their needs reasonably adequately without hoping for a windfall out of the blue as the solution to their problems.
And the reason it matters is because earned income encourages people to develop useful skills that the world desires (so as to presumably make the world a better place). Telling people they can actually make bank at X if they keep applying themselves is telling people "You don't have to be a pie-in-the-sky idealist to make the world a better place. You can do things the world actually values, be really good at them and get paid excellent money, so: win/win!"
The Nobel Prize and even MacArthur grant are intended to reward people after the fact for making the world a better place. Part of the point is to empower them to keep doing what they believe in and not give up and go do something else for pay that would be less beneficial to the world at large.
It may help encourage some folks to keep at the thing they believe in despite the low pay, but it's not a plan you can take to your accountant for how you will make your retirement work: "I'll just be amazingly good and then get a random windfall. It's Fine!"
I'm a freelance writer. Like a lot of writers, I work for peanuts.
I don't expect to be the next JK Rowling, but I can learn from her (she rewrote the first chapter of the first book like twenty times) and I've applied myself and gotten better. My pay has been gradually going up.
Because of how publishing works, JK Rowling's financial success is somewhere between a million-a-year salary and a random grant out of the blue. Plenty of people think writers must be nuts because most writing pays so poorly. (I have been repeatedly told to "get a real job" if I don't like being poor.)
But there's still a difference between that and your "win the lottery" framing for this discussion. It's really not the same thing.
On-topic: that's misleading; once you pay fixed costs, you have a lot more latitude for discretionary income. Not everything is the same ratio more expensive in the Bay Area.
If you’re raising a family you’ll likely need two $200k incomes to afford a house, and even then, it’ll take at least 2 years to save for the down payment.
The Bay Area is for college kids and millionaires. That’s it.
In my wife's mom's group (which had a number of dual-tech-income couples working for places like Apple, Google, Palantir, Facebook, etc.), not a single parent owned their own home. I live in a small 13-unit townhome complex, and about 8 of the units have families with young kids.
On a pragmatic level, high-income people who rent often have jobs that require a large degree of geographic flexibility to get their maximum salary. Think of your professional skillset & network as an asset that various companies can "rent" for different rates, depending on how much value it adds to them. Different companies will pay wildly different amounts for that; it doesn't make sense to buy if you get a 50% raise by moving to a different city (or just a different area of your metro region) but the costs of selling your house eat up several months' salary.
My credit union will do loans of up to 95% loan to value for $1.5M to first time homeowners or 90% loan to value up to $1.5M if you don't meet the first time qualifications.  I'm sure other volume lenders in the bay area have similar lending standards and programs.
It's worth at least looking down that path and considering the details of your current situation, rather than ruling out purchasing entirely just because you only have 10%, 15%, 18%, or whatever that isn't 20%.
I don't understand how you can be 'underwater' and yet have capital gains. Certainly, selling a house is expensive, and if you put down only 5%, the loan balance could be more than your net sales price.
See e.g. this $1.9M home (estimated payment $9,000/month, not including PMI) vs. this $4,900/month rental equivalent:
(Also note that the $1.9M home is currently assessed at $170K, so the current owners are paying < 10% the roughly $15K/year in taxes that the buyer would be.)
I will say though, might not make the most sense to pick a house in Mountain View as an example. For a couple just starting out in the South Bay that don't have two high earners and want to own a house, Fremont/Union City/Milpitas/San Jose are more realistic.
Also because of prop 13, it can make sense to lose money on buying vs. renting in the short term if you anticipate staying somewhere for a long time, because of the potentially huge property tax savings. And for basically the same reason it can make sense to lock in a mortgage payment that doesn't look like a great deal on paper in the short term.
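A hedged sketch of the Prop 13 mechanics being referenced: a new buyer's assessed value resets to the purchase price, while a long-time owner's assessed value has grown at most 2%/year from their original basis. The ~1.25% effective rate and the purchase figures below are illustrative assumptions, not exact county numbers:

```python
# Annual property tax under Prop 13: assessed value grows at most 2%/year
# from the purchase price; the effective rate varies by county (illustrative).
TAX_RATE = 0.0125

def annual_tax(purchase_price, years_held):
    assessed = purchase_price * 1.02 ** years_held  # 2% annual assessment cap
    return assessed * TAX_RATE

new_buyer = annual_tax(1_900_000, 0)    # buys today at market price
long_timer = annual_tax(150_000, 30)    # hypothetical purchase decades ago
# The long-time owner pays a small fraction of the new buyer's tax bill
# on what is now the same market-value house.
```

This gap is why locking in a purchase early can beat renting over a long horizon, even if the monthly numbers look bad at first.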
And then on the flip side, most of those folks own financial assets elsewhere - either they get stock options in their employer, or they sell those stock options and diversify among other stocks, or they buy 5 homes in the Midwest and rent them out as an absentee landlord.
The system is driven by the massive amount of money flowing into the region from all over the globe as software eats the world - this fills local municipalities with tax dollars that they can use to provide public goods, and it inflates the stock compensation that companies pay their employees, which gives them money to afford the exorbitant rents while still saving up a million or so. It's ultimately unsustainable - bad things are going to happen to Silicon Valley once software is not the engine of growth throughout the world - but that's probably a few decades off, and in the meantime people who partake in the system can afford to buy up whole city blocks in Detroit.
Parks: check Google Maps for anything that's a splotch of green and just go to it. Local parks are free, county & state parks often charge a $5 admission (frequently not enforced). Most of them are pretty good. BAHiker.com has a good guide to state & county parks, Google Maps has pictures and reviews on local parks.
Events: Google [$city events] or check out EventBrite, Johnny Funcheap, or SFGate & the Mercury News's events sections. Also look for flyers when you visit your local downtown area, or check out the local library's website (they often sponsor a lot of fun stuff like storytime for kids or Star Wars Day). Many of the peninsula towns sponsor things like classes, swim lessons, outdoor movie nights.
Socializing: the Bay Area has a thriving Meetup scene, but tbh I've never made a friend that stuck through public Meetup groups. Had much better luck with staying friends with former coworkers, and with paid activities (gyms, courses, music lessons, kid stuff, etc.) Google is your friend here; Google [$activity near me] and you're almost certainly going to find something. Because the Bay Area is both wealthy and densely populated, it's a magnet for really good instructors: aside from the aforementioned Krav Maga gym (KravZone in Sunnyvale), there are 3 world-class gymnastics gyms (West Valley, San Mateo, Airborne); a mandolin orchestra (SF Mandolin); some hardcore cyclists (have seen them riding over the mountains in a group occasionally); and many other activities.
For mom's groups: there're some that are just organized over Facebook (oftentimes by invitation - if you get plugged into the parent networks you often find out about all sorts of groups), some that are public, and some that are organized through local non-profits (eg. Blossom Birth or El Camino Hospital). I'd probably start with a for-pay class to meet people (folks are often more committed in the for-pay classes, and it draws from a social class that is more tightly networked) and then ask them if they know of other resources.
Museums & amusement parks: the big science ones are the Exploratorium and Cal Academy in SF, the Tech in San Jose. Smaller science ones include Chabot Space & Science Center in Oakland, Monterey Bay Aquarium, SF Aquarium, and Seymour Marine Discovery Center in Santa Cruz. Kids ones are CuriOdyssey in San Mateo, Happy Hollow in San Jose, Junior Museum in Palo Alto, the Discovery Museum in San Jose, and Gilroy Gardens in Gilroy. Amusement parks: Great America in Santa Clara, Raging Waters in San Jose, Children's Fairyland in Oakland, Santa Cruz Beach Boardwalk, and Six Flags in Vallejo. History & general interest: Egyptian Museum in San Jose, Hiller Aviation Museum in San Carlos, Computer History Museum in Mountain View, Maritime Museum in SF, the Hornet in Alameda, and probably a few others. There are also 3 railroads that my kid loves (Billy Jones @ Vasona Park, which also has a carousel; Roaring Camp @ Henry Cowell Park; and the seasonal Train of Lights between Fremont and Sunol), and most of the local parks have splash pads that he really enjoys.
Lived like this for over a decade. Now moving on to college.
Here's how it worked out:
$100k salary = $74k after tax, so that's about $6k per month take home. $3000 per month for rent leaves $3000 for everything else.
My wife and I have $1000 per month on our student loans. So we're down to $2000 per month. Health insurance and medication was $600 a month for the two of us, down to $1400. Groceries worked out to about $600 a month as well, down to $800. We got around via muni, so that takes off $200 per month for passes, down to $600. Electricity, water, internet, and cell came to about $200 per month. That left $400 for entertainment, incidentals, and savings.
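The arithmetic above can be laid out in a few lines (figures rounded the same way as in the comment):

```python
# Rough monthly budget from the figures above ($100k salary, two people in SF).
take_home = 6000  # ~$74k/year after tax, rounded to about $6k/month

expenses = {
    "rent": 3000,
    "student loans": 1000,
    "health insurance + meds": 600,
    "groceries": 600,
    "muni passes": 200,
    "utilities/internet/cell": 200,
}

remaining = take_home - sum(expenses.values())
print(f"Left for entertainment, incidentals, savings: ${remaining}")  # $400
```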
And that was for 2 of us. I can't imagine 5, but kudos for doing it.
I would also highly recommend paying off the student loans before having kids. My parents borrowed close to $100K to send my sister and me to college, and 4 years after graduation it was paid off completely, after I saved 80% of my income and contributed it back to them. Because of compound interest, a few extra contributions toward the principal shorten the term of the loan dramatically: the loan balance shrinks, so the amount of interest owed each month shrinks, so a greater percentage of each payment is applied to the principal, which speeds up repayment even more.
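That prepayment feedback loop is easy to simulate. The terms below (a $100k balance at 6% APR with a roughly 10-year payment, plus a hypothetical $500/month extra contribution) are illustrative, not anyone's actual loan:

```python
# Months to pay off a loan, optionally with an extra principal payment each month.
def months_to_payoff(balance, annual_rate, payment, extra=0.0):
    r = annual_rate / 12
    months = 0
    while balance > 0:
        interest = balance * r                  # interest accrued this month
        balance -= payment + extra - interest   # the rest reduces principal
        months += 1
    return months

base = months_to_payoff(100_000, 0.06, 1_110)               # ~10-year term
accelerated = months_to_payoff(100_000, 0.06, 1_110, extra=500)
# Each extra dollar of principal cuts next month's interest, so more of every
# subsequent payment hits principal; the term shrinks faster than linearly.
```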
But I know it was right around $100k, and rent is exactly $3k a month (I live in Cupertino). Not sure what other strings had to be pulled, but I'm glad I lived a decent life in SV.
The kind of ridiculous part is that we don't even qualify for financial aid or application-fee waivers at some colleges.
425k after signing bonus is probably closer to 300-350k base + annual bonus, which feels low for someone in such a distinguished position (close to a typical specialized physician or big tech software engineer with 5-10 years of experience). After finishing my masters degree in machine learning in the mid-2000s, I immediately got job offers higher than my much more talented professors. To me, the real story is that academia has set the compensation bar low for people adding so much value.
Edit: to be clear I know these salaries are high and people in my field are lucky to be making so much. But I also don’t think it’s accurate that people in the field are overpaid, which titles like OP may make it seem.
You can walk into NIPS and find a handful of other researchers who perform at that level or better in this space.
People in every field have no clue. It has nothing to do with tech people being ignorant.
Of course, part of it is being a good negotiator --and I am not the best by far.
Again, it is all about what one side needs vs. what the other side can deliver. You then add a context that includes such things as urgency, uniqueness of the problem to be solved, supply of practitioners with the required expertise as well as other dynamics (does the company need to meet a milestone ASAP?) and you have the makings of doing very well if you are in the right place, at the right time and with adequate negotiating skill.
Of course, if you close such deals you have to be able to deliver the goods, or you'd better start thinking about an alternative career.
Of course, I am talking about outliers here, that should be understood. In most of these cases it is, as I said, a right-place, right-time, right-skills circumstance, an alignment of the planets, if you will.
I've seen even crazier stuff. Back during the most heated phase of the internet bubble, savvy guys were making outrageous deals. I remember one guy who wanted to charge $500 per hour if he worked from home and $800 if he had to come to the office...to complete a Visual Basic project. No, he was not hired. Wrong place/time/skills.
If you publish a major conference paper, 7 figures is pretty easy to attain. You don't have to be in the 0.001% just the top 5%. That's hard work but it's doable.
The only thing that's remotely fungible in AI is Python-based data scientists who can do little more than port new data to existing models open sourced on GitHub. And even then they do pretty well because of the huge demand.
I don't see that changing anytime soon. I believe expecting automated systems to replace data scientists is the same level of naivete as believing we'd reach L5 autonomy in a few years in 2014.
Anthony Levandowski was making 120 million dollars a year when he left Waymo. If you're an AI expert in the valley and you're not making at least $500k, you are one job hop away from doing so if you play your cards remotely right.
The data-scientist is the new web-developer.
How does it work in practice? Let's say I just published a major conference paper. Where do I apply for my million dollar job? Who do I reach out to?
Sadly, you're really only worth what you can convince another person to believe you're worth. But then the USA is suffering through someone convincing an electoral majority of Americans he was the best and only choice to lead the country and that's just working out splendidly right?
But I've learned the hard way that money isn't everything. Still, if you're whining about being underpaid, there's something you can do about it. There are plenty of solid reasons not to, however, especially if you enjoy your current gig and they're paying you well enough that you have a promising future. Money truly isn't everything.
I also know because the day I left academia myself, long ago, was a major payday for me. Finally, I know because people like Andrej Karpathy pulled it off through excellent communication skills, intelligence, and the right background. And people like Mu Li pulled it off by associating themselves with an academic who helped found AWS AI; in his case he landed a principal engineering position straight out of school. That's the sort of position one usually only gets after one to two decades of job experience.
Can we just agree to disagree here?
The only reason they sell out is that both presenters and researchers who need to keep up with the field try to get tickets.
Also he's likely going to end up in jail because of what happened with that Uber fiasco.
That said, my exiting salary at some companies was dramatically higher than my salary when I joined because the stock had appreciated and so had my compensation. And if someone wants to hire me away that's the compensation they have to deal with, not my initial comp.
Anyway this is all a digression away from the fact that you are wildly misrepresenting salaries. You're also ignoring the two companies he sold to Google as part of his compensation. If you're interested you should google a bit about 510 Systems because the history is actually interesting, especially with the added knowledge of the founder becoming a total scumbag.
The difference in compensation here is nowhere near as bad as what happens in Hollywood.
Hollywood actors salaries (or those of sports stars, or CEOs, or authors of best sellers or pop stars) are a known thing so you don't write a 'news' story about the magnitude they have obtained. AI researchers making large sums is not widely known so is 'news' and of interest to readers.
Techniques like deep learning were out of fashion for a long time. Google was making tons of money on non-AI techniques; I remember Sergey Brin saying he was surprised by the deep learning breakthroughs in 2012 because the conventional wisdom was that "AI doesn't work".
And people who did Ph.D's under Hinton and others 10+ years ago also were not following trends.
You have to take risk to get reward. The way to do that now would also be to labor on an obscure subfield of AI that isn't certain to even work, let alone become commercially viable. (And there are plenty of people who did that on fields that looked more promising than deep learning 10+ years ago.)
There's definitely a need for new AI techniques, see this other current story:
Also, having a few NeurIPS or ICML papers is not a hiring guarantee any more. It's decidedly not 2015-17 any longer. In particular, I feel bad for the people who started their PhDs with $$$ in their eyes around that time.
Don't get me wrong, they will all be employable, but the field moves so fast that I have my doubts there will be cushy FAANG jobs in a few years for everyone capable of playing around with network architectures. It's a terrible idea to start one based on the hype of a few years ago.
- "containing nothing; not filled or occupied"
- "lacking meaning or sincerity"
- "having no value or purpose"
If one lacks that primary motivation, they certainly "lack" it and are "not filled" with the academic ethos. So "empty" is a fair term.
Now, I understand the realities as well -- seeking publications, recognition, tenure, and funding are also political activities. But this does not contradict the underlying community norm that I mention above. In fact, it supports it -- it explains why so many people endure a tough, grueling, political process despite it not being their wheelhouse.
No idea why you think this. A PhD has a contract with a salary and start and end dates.
My comment above makes an argument. You don't have to agree with it, but it should give you some idea about how I came to my conclusion.
So basically your conclusion doesn't go well with my experience. I only saw a contract with low salary and a lot of work on my end. It was worth it for me (I think?), so I accepted.
To what degree did you enjoy the process of learning, collaborating, teaching, writing, experimenting, and so on?
I'd wager you did enjoy some or many of these... otherwise, it might have been a long slog. :/
Maybe the following story can convey part of my message. You might have seen movies about a wily villain protagonist (or a flawed, tenuous partnership between several) who meticulously plans to steal some priceless artifact from some nearly-impregnable facility. What drives such people? I don't think it is purely money -- there would be alternatives that would, rationally speaking, generate more income on a risk-adjusted basis. In the case of the ninja-suit-wearing infiltrator(s), I'd argue they fundamentally enjoy the process (the preparation, the planning, the deferred gratification, the meaning). Perhaps the same is true for people who pursue and complete a Ph.D. -- some get a decent financial payout, but on average, I don't think the degree made them better off financially compared to other alternatives (e.g. holding together some rotting infrastructure with baling wire). They value the title, the activities, the identity, the community, the kind of work they do.
Seeking a job only for money doesn't really endow much meaning -- (Please, don't take this as an endorsement to go off and work for some harebrained startup when you have better options. :P) -- though I think there is plenty of meaning even in the mundane (e.g. rearranging JSON) to be found if you open yourself to experience (e.g. books with dragons about parsers).
I would like to share my views around fairness and judgement. My apologies if the numbering makes them seem formal; my intention is only to give them a rough ordering.
1. One should not be eager to criticize others.
2. One should seek to understand others.
3. However, one should be willing, intellectually, to differentiate between aspects and assess those differences.
4. It requires some care to balance 1, 2, and 3.
5. One should be honest with oneself, at least, about one's conclusions.
6. One should be comfortable with one's assessments, particularly if one has thought them through.
7. One should be willing to share these thoughts with others, because debate will improve one's thinking, scope, and articulation.
8. One should accept the consequences of what one says.
9. One should learn from what one says.
10. One should not refrain from making assessments only out of fear of being labeled "judgmental".
11. Some people criticize others because they dislike the other person judging others. This is somewhat ironic, because in some cases this criticism is premature. If one judges another without engaging to develop an understanding, I think that is unfortunate. Doing so would be acting in a way inconsistent with one's own values.
All of these "should" statements should be adjusted to the situation. For example, repeated experience, if reflected on fairly, may warrant that some particular people do not deserve the same degree, say, of "benefit of the doubt".
Presumably because technical needs of FAANG might be moving in other directions.
Could someone comment on why this might be the case, and what other fields might look relevant?
I don't think this will happen, because typing "from tensorflow.keras import *" isn't the skill that an ML PhD develops, and it is the part of the skill set that is (and will be) a commodity, along with the AutoML stuff.
Constructing a problem, handling the data and running a proper process is harder, and it's the value that will put processes that use ML at risk, and deliver differentiating value for the ones where it works.
- Would you say that you'd have got an AI researcher position without a Ph.D.?
- Also, why are NeurIPS or ICML papers not a hiring guarantee? I thought they're highly sought after.
It's difficult to get any true researcher position without a PhD. It doesn't mean that PhD has to be in AI. Research involves a lot of reading and writing papers, which a PhD is supposedly training you how to do.
That said most places will say "equivalent practical experience" and it's entirely possible to be competent in AI/ML without a PhD.
I did a PhD in space science, I now do machine learning in ecology and spent the summer working on machine learning for disaster management. The interesting jobs (to me) are where domains cross, and it's also (hint hint) much easier to get a job doing AI for X than it is doing "fundamental AI". In any case, you're often doing stuff that nobody has done before anyway, but you don't need to spend your life hunting for the new ResNet.
> Also, why are NeurIPS or ICML papers not a hiring guarantee?
What the OP probably might be implying is that everyone has a publication in NeurIPS nowadays.
I think it goes deeper than that, though: publishing in machine learning is broken. Having 10k people at one conference is not an efficient way to distribute research. You have to submit a full paper in November for a conference the next summer - pretty much only computer science does this madness.
What's interesting is how unique this attitude is. In astronomy, for example, conferences are a fun place to catch up with folks in your niche. There might be a few hundred people and probably it'll be single-track. We publish whatever journal is the most relevant and they're generally all considered equivalent. Nobody cares if you publish in ApJ vs A&A vs MNRAS, if your research is good.
There are also concerns that the quality of these venues is decreasing because the pressure to publish in them is so high.
Do you think it is possible to do that without any background in anything? I mean, could someone apply black-box frameworks without understanding them? How would they be caught?
To do machine learning research? Or work in some random domain?
> I mean could someone apply black box frameworks without understanding them. How would they be caught?
Machine learning is rapidly becoming commoditised, but lots of people still don't understand just how much effort it is to get a good dataset and to prep it.
Domain experts scoff at machine learning people who are trying to solve Big Problems using unrepresentative toy datasets, but also tend to have much higher expectations of what ML can do. Machine learning people scoff at domain experts for using outdated techniques and bad data science, but then propose ridiculous solutions that would never work in the real world (e.g. use our model, it takes a week on 8xV100s to train and you can only run it on a computer the size of a bus).
There are also a lot of people (and companies) touting machine learning as a solution to problems that don't exist.
Overfitting models is probably the most rampant crime that researchers commit.
From the second half of your comment it seems that the answer is yes?
Maybe a comparison would help: someone pretending to be an experienced iOS/Android developer without any qualifications or ability would quickly be caught, since they couldn't produce a working app or even use a compiler, and anyone can judge an app for themselves. You can't really just make it up out of whole cloth; people judge the results. You would have to start actually doing the work, and if you couldn't or didn't want to, then unless you outsourced your own job or something, the jig would be up pretty much instantly. (Unless you caught up.)
So, how about machine learning? Do you think a fraud could land and keep such a job, without any knowledge, qualifications, ability, or even interest in getting up to speed? Just, a pure, simple fraud.
What's your guess?
Fake it til you make it isn't a terrible strategy. But pure fraud? If you didn't even make an attempt to learn on the job? You'd get caught pretty fast as soon as someone started asking any kind of in depth questions about the models you were supposed to be training.
I'm not sure you could land a job knowing nothing. Maybe. Depends how hard you get interviewed and whether they know about machine learning. If you could fake a portfolio and nobody questioned it perhaps? I can see that happening in academia for sure.
There are a few problem classes where you could throw stuff into a black box and get great results out. Image classification for example. Fast.ai have made that three lines of code.
So maybe there are a bunch of applications where you could fake it, especially if you were willing to Google your way round the answers.
Would be harder in industry I think, but you find incompetent people everywhere.
>But pure fraud? If you didn't even make an attempt to learn on the job? You'd get caught pretty fast as soon as someone started asking any kind of in depth questions about the models you were supposed to be training.
That's just what I mean. It would depend on someone asking you about it, right? (As opposed to being an iOS or Android developer or running microservices on the backend: in those domains nobody has to ask you anything, it's instantly obvious if you're not building and can't build anything.)
For machine learning, who is asking these questions?
If you throw data into a black box (3 lines of code) and are incompetent, can you please tell me a bit more about where you would get found out?
Let's use your example, ecology.
I show up, I get a dataset, and I put it into tensorflow using three lines of code I copy from stackoverflow.
I lie and bullshit about the details of what I'm doing, by referencing papers from arxiv.org that I don't read, understand, or actually apply. It's just the same 3 lines of code I copied on day 1. I don't do anything on the job.
How long could I last? An hour? A day? A week? A month?
Assuming I am outputting 0 useful work. I'm not doing any machine learning. Just 0 competence, or I make something up by hand or in excel.
I am trying to understand how people are judged.
If you really wanted to you could fabricate results and in lots of cases nobody would be any the wiser unless you were supposed to be releasing software. Despite emphasis on peer review and repeatability, science relies heavily on etiquette. If you don't release code or a dataset a lot of times it's extremely difficult to repeat paper results, and that also means it's hard to disprove the work.
It's quite hard to get rid of incompetent people in academia, so I imagine you could get away with at least a year or two.
They're sought after, but the conferences have also grown huge. NeurIPS 2018 accepted around 1,000 papers! Based on a query of the DBLP dataset, there were 4,409 distinct authors who had a paper at either NeurIPS 2018 or ICML 2018 (or both). If you add in a few of the other big AI and ML conferences (AAAI, IJCAI, ICLR), the number grows to 10,995 distinct authors, again solely for the year 2018. The field is hot, but is it hot enough for ten thousand people to be automatically hired because of one paper?
There's also decreasing confidence in the big conferences' review processes, I think. NeurIPS 2014 actually ran a study to estimate how random acceptance was, by assigning some papers to two different sets of reviewers and checking how similar the decisions were, and found there was a much higher degree of luck in acceptance/rejection decisions than they had expected. I personally have more confidence in the review processes of smaller and more focused conferences (and journals!), though they don't have the same level of name recognition.
Also, NeurIPS reviewing has gone to absolute hell. I mean, peer review everywhere has problems. But I've never seen something quite this bad. At this point I think it's safe to say that most reviewers wouldn't even make it to an on-site interview for a faculty position at a research university. That's definitely nowhere near normal. You can't really blame anyone, I guess; the community is growing way too quickly for any real quality control.
Frankly, I think those conferences have outlived their usefulness as anything except marquee marketing events. I'm now mostly attending smaller and more specialized conferences.
Besides just "quality" in the general sense, one thing this has really hurt, I think, is any sense of history or continuity. There are a ton of reviewers who have basically no familiarity with the pre-2010 ML literature, and it kind of shows in both the reviews and the papers that get published. I mean I get that deep learning beats a lot of older methods on major benchmarks, but it's still not the case that literally every problem, controversy, and technique was first studied post-2010.
I also suspect that other fields have similar patterns where a few experts make a ton of money; they're just not as publicized because the media is so hyped on AI. How much do you think Jim Keller is making?
TL;DR do the PhD because you love the research. If you want the money do some internships, land a FAANG job, and try and get to senior as fast as possible.
The problem that most technical people seem to have is that they think a qualification is a licence to print money. It isn't. The majority of people who obtain qualifications have no idea how to make money (that is why they are doing the qualification rather than making money, and this effect increases with the complexity of the course of study).
So if you want to make money and your interest is AI, sure do a PhD...but at some point, you are going to have to work out how you can use those skills to sell something.
I think, at some point, there is going to be a rude awakening. Right now, we have this situation where you can tap up some VC idiot to pay your salary for a few years whilst you fuck about on some bullshit. You can get acquired by some megacorp with a ton of cash and have no accountability to shareholders while you light huge stacks of their cash on fire...this always ends. Always.
You are either a bureaucrat chasing qualifications to get no-accountability jobs or you are someone that can create value for other people. I am not knocking any kind of advanced degree (I have one). But the point of it is not money.
So even posing the question is totally misguided.
Frankly, I think this is a bubble, and don't expect to ever make anything like that much money, in academia or industry. Yes, deep learning has a lot of successes and industrial applications, but deep learning wizards are not nearly so impossible to replace or replicate. As the field settles more towards engineering than wizardry, and as expectations are recalibrated towards the realistic, it's going to stop raining cash.
When it was founded in 2015, OpenAI was originally a nonprofit, but restructured into this new arrangement earlier this year in order to be able to raise VC money: https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...
They are at least talking about AGI. If those people get anywhere close to AGI, they should start charging ten or a hundred times more, because having that kind of technology could be worth hundreds of billions or more. So even though there is no indication they are close, if there is even a remote possibility of getting there, the salaries look questionable in the context of those potential astronomical profits.
RSUs generally vest over 3-4 years, so at the end of the first 12-13 months, it’s another $100k towards total comp.
You may or may not receive additional grants in future years. If you get $400k grants every year, you might get up to $1M/year, but not immediately.
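Under the assumptions in this thread (a $400k grant vesting evenly over 4 years, refreshed with a similar grant each year), the stacking works out like this; the figures are the thread's hypotheticals, not any particular company's policy:

```python
# Vested RSU value per year when a new grant is issued annually,
# each vesting evenly over four years.
def vested_by_year(grant_value=400_000, vest_years=4, num_grants=4):
    vested = {}
    for start in range(1, num_grants + 1):          # a new grant each year
        for year in range(start, start + vest_years):
            vested[year] = vested.get(year, 0) + grant_value / vest_years
    return vested

schedule = vested_by_year()
# Year 1 vests $100k from the first grant; by year 4, four overlapping
# grants each contribute $100k, for $400k/year of vesting RSUs.
```

So the $1M/year figure only materializes after the grants have stacked for several years, which is exactly the "not immediately" caveat above.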