In practice, they're often frustrated for years by the lack of infrastructure to work on their ideas, but live with it because life is good.
I suspect AI today is like big data ten years ago: a lot of companies think they need AI, but what they actually need is a good product and a few algorithms requiring high-school-level math.
That could explain the shortage...
Exactly. Also, as soon as big data came around, nobody was doing just data anymore; everyone was doing big data, even if they had the same 10GB MySQL database they'd had for years.
AI is a bit the same. Doing any analytics? Now it's AI. Opening an Excel spreadsheet and doing a curve fit? I'm a data scientist doing AI. Doing any actual ML? It's not just learning anymore but super deep learning.
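To be fair, the "curve fit" being rebranded really is a one-liner. A minimal sketch with made-up toy data (numpy assumed):

```python
import numpy as np

# Toy data: a noisy linear trend, the kind you'd eyeball in a spreadsheet.
rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# "Doing AI": an ordinary least-squares line fit, one call in numpy.
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 1), round(intercept, 1))  # close to 2.0 and 1.0
```

That's the whole "model" in many of these rebranded projects.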
Sometimes tech reminds me of rich, bored, stay-at-home SOs who are constantly redecorating their house. Not because they need it, but because they are bored and the next trendy design looks cool anyway.
My guess is he is a consultant hired to do a typical project (i.e., help me move X into the cloud, help me re-architect our data model for big data, help me implement AI for our dog walking app) and at that point he just shows them they aren't ready for it or flat out don't need it.
It's just my guess, but it's what consultants do. The bad ones are happy to take on your project and charge you $400 an hour. The good ones, unfortunately, deal with the dilemma of turning down lucrative work in the spirit of doing what's right.
After all, if they think they are going to hit big data scale and want the tools to handle it, you aren't completely a bad guy if you help them do it right, especially if you've already advised them not to do it.
No need for clever optimizations - Moore's Law will bail you out a lot of times.
> Isn't regression always a form of curve fitting?
Sure, and that's been done for many years. I was just saying that today everyone who was doing that isn't doing curve fitting or regression analysis anymore but "AI" and "Deep Learning" (never mind whether any neurons are involved).
Neurons are just an inspiration from biology. You can call them layers of neurons or you can call them matrices and do matrix multiplication. Nothing special about neurons.
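A minimal sketch of that point, with arbitrary made-up dimensions: a "layer of neurons" is just one matrix multiply plus an elementwise nonlinearity.

```python
import numpy as np

# One "layer of neurons" is an affine map plus a nonlinearity:
# nothing biological about it, only a matrix multiplication.
rng = np.random.default_rng(42)
W = rng.normal(size=(4, 3))   # weights: 3 inputs -> 4 "neurons"
b = np.zeros(4)               # biases

def layer(x):
    return np.maximum(0.0, W @ x + b)  # ReLU(Wx + b)

x = np.array([1.0, -2.0, 0.5])
out = layer(x)
print(out.shape)  # (4,)
```

Stack a few of these and you have a "deep" network; it's all linear algebra.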
I understand your argument that people like to use buzzwords and you don't like this. But it's a general problem that applies to everything, not just ML.
Not just startups, but big established companies as well. At bigger companies not only do you have data and infrastructure issues, you also have business process and political issues. I've seen more than a few cases where fixing a business process would have a much higher ROI than a model, but it's easier and cheaper to hire a data scientist and make a big noise about it than it is to admit your business process (that includes 50+ people) is a mess and do the work to fix it.
Yep, AI is the new silver bullet that will solve everyone's problems. It seems much easier to throw a million dollars at someone with the right credentials than to make hard choices and build a better product.
Are the issues we see in many software companies w.r.t. AI/ML/DS an effect of poor role definition/team hierarchy? It seems to me that a five-person team complete with an "engineer", scientist/researcher, a couple of devs for APIs and pretty pictures, and one other "utility" role would totally kill it and create amazing things. But, I've never worked in an environment where the ML people aren't completely segregated into their environment, so I don't know.
I don't think so, unless they are getting AI/ML/DS people to do regular software features and bug fixes.
Front-end/Back-end dev roles merged into full-stack roles at places, but you don't see the same merging between AI/ML/DS roles and front/back/fullstack roles.
I think this pretty much solves 90% of the problems in the software business.
1. Startups give whatever sexy title they want to the people they can afford, which makes titles fairly useless, because they almost never can afford the talent commanding “sky-high” salaries.
2. Within large tech companies like Google and Facebook, it’s hard to immediately tell which of the many data sciencey, machine learning-y titles correspond to the truly stratospheric salaries versus the engineers that work with those roles. For some teams, like DeepMind, Google Brain or FAIR, it’s easier to tell. But for others it’s a mixed bag.
For comparison, see the fashionable term of art “quant” in the financial industry, which has similarly devolved into marketing and a bimodal distribution. As a rule of thumb, you generally can’t trust AI titles or salaries at startups unless those startups are really known for their talent; further, you can safely assume that, at companies capable of paying for top talent, the very impressive salaries belong to titles which seem the most exotic and out of reach for general engineers in the job description.
Fortunately startups don’t really need to compete for top talent as a genuine technological differentiator, they just need to engage in signaling, so this is mostly a non-problem. Startups almost never have problems usefully improved by the cutting edge of machine learning research, and can instead use off the shelf tools and existing software to accomplish the same things. Frankly, it’s exceptionally rare for a startup to even have the massive data, pipeline and munging infrastructure requisite for actual research.
As Napoleon said: titles don’t honour men, men honour titles.
In most enterprises, 90% of problems can be solved by somebody who can get phone calls answered, a $20/hour intern and Excel.
But... Consider the wide variety of solutions for taking notes. It’s a trivial problem that can be addressed in any number of ways. But it’s a problem that people spend a lot of time solving.
Coders should stop being so mercenary.
Also, how do you reconcile your statement with the fact that companies will fire someone as soon as it makes sense for them?
My solution to your "fact" is to not work for a bunch of dicks. I've never gotten fired "as soon as it makes sense" once I transitioned into development. You seem to work for a lot of dicks. Stop doing that.
This is going to be the last reply, but you've not made your case as to why I should gift my employer free money by leaving it on the table. There is exactly zero benefit to me for not getting everything I can, and quite a bit of upside. I have also found your double standard regarding the behavior of companies and the behavior of employees, and your handwaving away of that double standard, to be quite insane.
In short, you feel free to do whatever you want, and gift your employer free money. I, on the other hand, am going to take care of myself and my family by getting the maximum value I can out of the time I have to give my employer, and get paid as much as I can.
So yeah, I think that was an inherently civil relationship. Agrarian empires were the original forms of civilization. Just because there's a hierarchy doesn't make it not civilized. Hierarchy is instead what makes it civilized. Hierarchy means that everyone can relax and focus on what's in their wheelhouse.
Look, if we're talking giant corporations here, I agree with you. The company isn't going to miss another $5k/year. But if you play hardball with a $3M company, then they're only going to keep you until they can outsource your job away. Their budget is what they live and die on, and surplus profit typically gets rolled back into the business, not wasted on dividends.
You're directly affecting company viability by not being willing to leave some money on the table.
You've got the start-ups that are focused on a big exit to pay their investors and let the founders cash out. They (and their VC investors) want their employees to sacrifice/invest/commit "like a founder", but without the founder's upside.
You've also got the large businesses that will, in your own words, "keep you until they can outsource your job away" for a little more profit.
This is the reality for developers and they have slowly decided to seek their fair share to the consternation of businesses that were used to adding that value to their own bottom line.
When I hear someone saying that "coders" (or sometimes "code monkeys") shouldn't be focused on salary I hear someone who (A) doesn't respect my profession and (B) doesn't see why they shouldn't be able to exploit me.
(When it comes to my own clients, I do leave money on the table. I could justify it by saying that I do it "in the interests of a long-term relationship", but in reality I'm emotionally invested in their success and I want their projects to succeed.)
To paraphrase an old saying: "Developers go where they are wanted and stay where they're well treated."
Look at it from a developer's perspective: why should they bust their hump to deliver 20% more value only to be rewarded with a 3% increase in salary? ("Gee Chris, I would love to give more, but company policy...") Why shouldn't I go someplace that does value my efforts?
> Look at it from a developer's perspective: why should they bust their hump to deliver 20% more value only to be rewarded with a 3% increase in salary? ("Gee Chris, I would love to give more, but company policy...") Why shouldn't I go someplace that does value my efforts?
You should not bust your hump. Deploy adroit political acumen to reduce your workload. I can't remember the last time I busted my hump on a development job.
But if you want more money, you absolutely should go somewhere else. What I'm saying is that expecting your existing company to be the vehicle for that advancement is naive at best.
I requested, and got, two large raises at my last job. I was still underpaid at the end of it. I'm underpaid now, even though I got another massive raise when I switched companies.
The reality is, you get a market salary from the market, not from any one company. A company is either going to be open to paying market rates or they won't be. You have to make the decision whether to accept that. Playing hardball with a company that's not prepared to pay you market just won't get you anywhere. Find a company that's prepared to pay market.
They're going to do that anyway.
"You're directly affecting company viability by not being willing to leave some money on the table."
If my extra $5k is the difference between the company living and dying, then the company was failing anyway, and I should take as much as I can before it closes, to tide me over while I look for a new job.
I've definitely heard of this problem for Silicon Valley returnees to some smaller markets.
The point I am trying to make here is that these figures vary from industry to industry and from job to job. I could conversely have changed these numbers around and shown that it doesn't make sense to pay Conan a huge multiple of the no-name comic's pay: for example, if the revenue does not increase substantially (like from 1M to 2M), or if the salary of the person in question makes up a much larger portion of the company's overall expenses.
Those other people, with the exception of maybe Justin Gatlin, make a far more modest salary. It's less than a 1% difference in performance that leads to orders-of-magnitude differences in outcomes.
Later down the career path in Canada, I was frequently asked "you worked for guy A and B, that must've been a hell of a job?" or "how did you get there to begin with?" All of them dismissed my explanation that I just used to be on line 1 of Google for thing A.
The flip side of the coin? The moment web 2.0 became more of an in-house production at mid-to-big-sized companies that no longer needed a "hired gun" outsider, and the hype wave moved on to other things, I really hit a wall. On my next two jobs I took salaries in the $70ks, and I was contemplating selling major life assets after my last employer in Canada was unable to extend my work permit.
Things went much better after I scaled down my appetites and stopped looking for employment with companies obsessed with "rockstar hiring".
Whereas if a name in the entertainment business has 3 times the audience draw of another, you could make a pretty good stab at what their relative worth should be.
Whatever they can negotiate, in my view.
The effect of this is that all late-night talk show hosts are always subject to a direct financial comparison (though fortunately for them, their audience is sticky and belongs to them personally, not to the television network). With top AI researchers, there's no real comparison or stack ranking; it's all about passing a certain bar that is objectively evident from their past work, at least as long as AI is hot.
That's maybe half correct at best. Jay Leno could only take a small portion of his audience with him from the Tonight Show, if he chose to set up a competitor show. The same is currently true about Fallon, and likely far worse in his case today. Conan could never match his audience potential as host of the Tonight Show (pretty much no matter what he does, and certainly not on cable at TBS), because of the value of that specific platform, built up over decades and given its prominence on NBC.
A very large share of the Tonight Show audience stays with the Tonight Show, regardless of host (barring the next Johnny Carson abandoning the show, or a truly horrendous product implosion at the current show). That audience largely belongs to Comcast NBC.
If Jimmy Kimmel leaves ABC, he would be replaced. Kimmel would struggle given the limited options, ABC would simply plop the next Kimmel into his spot and move on, with a large percentage of the same audience giving the next person a try.
Craig Ferguson's audience did not go with him, as another example.
It takes mental energy to watch stand-up comedy. If it's not funny, I've wasted mental energy in watching them and trying to get the jokes, so I generally only watch the names I know.
While many stand-ups may be funny, it takes a lot of work for them to get the name-recognition required to actually draw a large crowd.
Agreed re: Colbert. It almost feels like watching a totally different comic (it kind of is: he's playing a whole different bit now, and his fake-pundit style had years to grow and develop).
Struggling to bring this back to AI...agreed we're off-topic here :)
But what about your analysis based on unit economics? How does that tie in? (Because it is of course relevant.) If you hire lots of superstars at market price but your unit economics don't allow the operation to be profitable, you will eventually have to shut the operation down. Goodbye, superstars and whole operation!
I think that in the real world, paradoxically, math skills are easier to find than solid software design and development abilities. It may stem from the fact that the first is taught quite well in school, while the second is more about individual learning and sometimes a contrarian stance toward the system (reflected in even a somewhat childish "I'll learn Haskell because OOP and Java suck!"), which is harder to find and therefore more valuable.
A $400k salary is nothing if it makes it easier to unlock $XXm in funding.
However, for all the reasons that the article may be wrong:
- the shortage is temporary
- the shortage is overblown
- the shortage is illusory
- the shortage doesn't apply, etc.
Please compare with the situation for other high earners (CEOs, entertainers, and bankers). Can the same arguments be made for them? (Conan O'Brien is funny, but is he 30x funnier than the person at my local comedy club?)
The work of an AI researcher is mechanically reproduced so a 1% benefit can be enormous. It could be the case that the competition isn't for more of them, but for the best of them.
And it's not just that. Entertainers acquire a fan base. You cannot grow your earnings without engaging with and growing your fan base. You can make a seven-figure income as a celebrity entertainer when 300,000 people are willing to throw $10 your way every year. (The other 2/3 goes to support staff and overhead.)
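The arithmetic above, spelled out:

```python
# The fan-base arithmetic: 300,000 fans at $10/year,
# with roughly two-thirds going to support staff and overhead.
fans = 300_000
spend_per_fan = 10
gross = fans * spend_per_fan          # $3,000,000 total
entertainer_share = gross / 3         # the remaining third
print(int(entertainer_share))         # prints 1000000, a seven-figure income
```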
That's not a matter of shortage. It's a matter of competition. At a certain point, people cannot spare another moment to follow another person, and have exhausted their entertainment budgets. The top names aren't the best. They're the most reliable.
CEOs and bankers have an uncommon skill set, for business management and financial management, but they don't have anything that can't be easily replicated by people in the same business who want to move up into the higher-paid positions. Those high salaries are partially from prestige competition. The CEO of a 20,000-person company doesn't necessarily have more skill than the CEO of a 2,000-person company. The manager of a $1 billion fund isn't necessarily more skilled than the manager of a $10 million fund. They just work for people who can afford to pay more. Often, they just know more influential people and were in the right place at the right time.
AI research, on the other hand, depends on some serious skill. While there are a lot of people out there that can write dumb programs, and fewer that can write programs that can handle every foreseeable situation, there are rare individuals that can write programs able to do things the programmer never anticipated. It's like a comedian that can write a superjoke that gets a laugh from everyone, every time, forever. Or a banker that can get 15% returns every year, without fail, for 50 years. Or a CEO that grows earnings by 3.5% every quarter, and always meets expectations. Building a robot that can catch a thrown ball is a feat of that magnitude. It's absolutely incredible that we have any people able to do that.
There are plenty of software developers out there that can move into AI if they had to, but it would take some time for them to get up to speed on the state of the art, and it would take entire teams of them to produce the same level of benefit as one current AI specialist. Companies that see a path to monetization for AI are looking to find and hire the Carmack of AI and get there first, rather than 100 people able to constantly surf behind that leading wave by about six months.
The shortage is temporary because software folks are good at following the smell of money. The shortage is overblown, because this research was already happening before the truckloads of money pulled up to the dock. It is not illusory, because the previous lack of funding for AI has produced relatively few experts. But it will probably turn into a glut later on, because the people offering the money will eventually learn that AI research can't be rushed in the manner to which they are accustomed, and will abandon all those they enticed into the field.
Fortunately, the applied DS jobs don't really require a PhD in machine learning. A master's in CS, stats, etc is usually plenty.
Also the dot-com crash ...
Wasn't wage-fixing a Silicon Valley thing?
At least from my recollection, JS/2S base salaries approach that, but they have very significant performance bonuses that easily match big tech.
I had a hard time making the decision when I chose an engineering PhD over a CS one, but 6 years ago machine learning hadn’t taken off like it has now, and engineering / hard science prospects seemed brighter at the time. If I had known it was going to become this big and this interesting, I definitely would have gone for CS instead.
I think you're looking at it the wrong way, because that's exactly why there's so much demand for CS PhDs right now.
Rewind to the mid-2000s, when the CS postdocs/PhDs graduating over the past couple of years were choosing to major in CS. They were warned that everything was being outsourced to India and advised to choose a "real" engineering field, or perhaps finance/physics/math. It's hard to imagine today, but a lot of smaller colleges and universities were killing CS majors back in the mid-00s!
So not only is there a shortage of CS PhDs in the pipeline, but the ones that made it through came in with a burning passion for the science (as opposed to the money/hotness). This combination of input bias and restricted supply is what makes the current labor market so damn hot.
In 3 years, that will invert, and some major struggling to justify its existence because it's hard but has "no future" will blow up. Rinse and repeat.
Doesn't intuition to resolve a domain specific problem require domain specific knowledge?
I've done operational improvement work across supply chain/inventory management, marketing, digital analytics, ecommerce, healthcare revenue cycle management, and call center operations. In almost every case I started with little if any direct domain knowledge. The intuition that was valuable for my work was around systems-oriented thinking applied to business processes and being able to quickly suss out weak or suspiciously opaque areas of the system. The necessary domain knowledge to do so was always picked up from domain experts as I went along.
In fact, taking a naive approach on domain knowledge has always worked in my favor to uncover invalid assumptions that those with domain knowledge just accepted without question.
That said, you're spot on that the core of what I do is data consulting, although most of it falls under a domain-specific name and has been W2, internal consulting roles. I'm actually in the process of switching my full time role to a less demanding one so I can focus on ramping up my actual consulting business.
"Designing AI systems requires a hard-to-come-by blend of high-level mathematics and statistical understanding, a grounding in data science and computer programming".
EDIT: Improved readability
Not even Calculus or Linear Algebra? Do they take Discrete Math?
I assume parent meant "beyond the standard discrete/calc required of any reasonable cs major".
We must take Calculus 1 and 2, Discrete Math, Probability and Statistics 1, and Linear Algebra.
We can get a math minor if we take 3 additional math courses, which is what I'm doing with Combinatorics, Graph Theory, and some other math course.
Of course, as a physicist myself I fervently believe this is good and well, and the path towards enlightenment for all mankind etc.
I was generally on an upward climb on the ladder of abstraction (Electronics Tech -> EE -> CS -> Math), but the early engineering bent meant that I needed a few physics classes, and despite settling on the math degree, I still think the handful of physics classes I took were some of the best education I've had. It's an interesting confluence of abstract reasoning, practical concerns, model-building, and problem solving. It's not as if choosing math made that confluence unavailable to me, or that I really regret it, but I do sometimes think the particular balance a good physics program strikes might have been better for me.
The engineering / science math was pretty much a matter of looking at the problem, guessing its "form," and applying a known technique based on that form. For instance, "this looks like integration by parts." Eventually you'll be shoved out into the world where there are problems for which there is no known solution, and you have to create your own techniques.
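For instance, the "this looks like integration by parts" reflex in action, on a textbook example:

```latex
\int x e^{x}\,dx \;=\; x e^{x} - \int e^{x}\,dx \;=\; (x - 1)e^{x} + C
```

You spot the product of a polynomial and an exponential, reach for the known technique, and it falls out mechanically. The problems without a recognizable form are where the real work starts.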
The more advanced courses did two things. First, they set aside problems and advanced towards proofs. This is really where math came alive for me. Proofs are so much more varied that you have to abandon the security of a bag full of known tricks. The other thing is that the derivations get longer, so you have to develop a longer train of thought, if you will.
Courses that were higher-level were like:
I don't know what new goodies there are, but I'd love to dive back into it again.
Wait, really? I'm just starting out as a research scientist and my pay is nowhere that high. Am I missing something?
* PhD in deep learning / ML with papers at NIPS/ICML
* Working for BigCo (FAANG)
* Half of that is stock, annually.
* In the bay area or a big city
If you don't hit all those items, then your pay is likely more modest. If you are in academia, it is still likely laughable.
Ah ha. I read it as saying people are getting 300k in cash. It makes more sense now. For me, I'll have to wait several years until my RSUs fully vest. Let's hope I don't get fired in the meantime :)
I think self-driving tech has matured past the phase where only a few key university departments have most of the intellectual capital, but I still hear mid-$200k figures for freshly minted PhDs from the right schools.
You get what you negotiate.
People who just know TF, but couldn't implement it themselves and don't know what it does and doesn't do well, aren't the kind of people attracting 7 figures.
There's a huge amount of advanced statistics, math, and "intuition" that takes years working with data to build. Abstractions like TF (etc.etc.) make applying existing solutions to existing problems more tractable, but the real gold-rush is happening around the new/relatively-unsolved problems.
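As a rough illustration of the gap between using an abstraction and understanding it, here is the kind of thing a framework like TF automates, written by hand (a toy least-squares fit via gradient descent; all numbers made up):

```python
import numpy as np

# Synthetic regression problem with a known answer.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Hand-written gradient descent on mean squared error.
# A framework would derive this gradient for you; knowing
# what it's doing means being able to write it yourself.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # d/dw of mean((Xw - y)^2)
    w -= lr * grad

print(np.round(w, 2))  # should approach true_w
```

The abstraction makes the mechanical part easy; the unsolved problems are the ones where no existing recipe applies.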
This situation is a pure win for smart people who put their minds to the task at hand. Buzzwordy or not, AI/ML/Newfangled Regression has more than enough wins with speech recognition, game-playing, image recognition, and recommendations to have a strong future.
Or, if you insist on negativity and you prefer to expend your time kvetching about whether Famous CS Person X or Famous CS Person Y should make the most money (Fantasy Data Science?), then more for me getting $h!+ done whilst you bicker. I mean I scratch my head that Mark Wahlberg is the highest paid actor in Hollywood, but hey, good for him in my book. Too bad his burger shop is crap.
If you do research in the same area as Google's research department throughout your graduate degree, net research internships at [insert fancy company with AI component here] every summer, and get a PhD, $300k seems completely reasonable.
Personally, I think we're in a SaaS bubble and an AI bubble. "AI" isn't nearly as developed or useful as the amount of VC money being dumped into it.
To earn that kind of money as a plumber, you would need to either work your way up over the course of roughly 15 to 20 years at a very generous plumbing company, or be an independent plumber with next-level marketing skills.
The average plumber, pipefitter, or steamfitter just coming out of their training is not going to get anywhere near $300,000 a year.
TLDR: Chance favored the prepared skill set. No fancy degree required, just results.
PS Before that, I blew a boring blind-allocated gig at Google insisting that GPUs were about to play a huge role there a year before they acquired DNNResearch. I even tried to join the very beginnings of the Google Brain team but they didn't have any openings or the budget/willingness to make one for me.
I'd be surprised if the take-home on $150/hr is even close to $60/hr.
Are you just making up numbers? You've asked for sources, other commenters (including myself) have provided several, and the sources disagree with what you're stating spectacularly. I'm baffled by how confident you're being about something that, as far as everyone can currently tell in this thread, is plainly incorrect.
...how many plumbers do you personally know? The reason everyone in this thread is pushing back against this is because the suggestion is ludicrous. It doesn't make sense in terms of market dynamics - the barrier to becoming a plumber is significantly lower than the barrier to becoming a software engineer. That's not intended to demean the profession, it's just true - you do not necessarily need any education (and the profession embraces this far more than tech does, which is itself progressive on that point) and requisite domain knowledge is not as extensive or as rapidly changing as software engineering. You need technical knowledge, but unless you're taking the most complex plumbing jobs and the most mundane engineering work, they're simply not comparable.
Some plumbers do well, particularly if they own their own business and are thereby successful entrepreneurs. But you can't judge the typical outcome of a professional career by the entrepreneurs who use it as a basis for their business. The modal plumber doesn't earn anything resembling $300k/year - according to the BLS, there isn't a single state where the average is even $100k. Where are you getting your data, from how much they're billing you when you have someone fix a problem in your house?
When the data disagrees with your mental model, consider re-evaluating your mental model.
The amount a client pays per hour for labor != the takehome pay of the person whose time you're ostensibly being charged for.
Revenue is not profit, and "price per hour for labor" != "hourly rate I pay my laborers". This is even true in owner-operator businesses. Furthermore, in bursty markets, "annual income amortized over career" != "my hourly rate multiplied by 40-60 multiplied by the number of weeks I want to work".
The numbers from my links are INCOME, not "what the business charged the client".
Why keep insisting on proxy variables and anecdotes?
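A back-of-envelope version of that distinction, with purely illustrative assumed numbers (the billable fraction and overhead here are guesses, not data):

```python
# Back-of-envelope, with made-up illustrative numbers: why a $150/hr
# billing rate is nowhere near a $150/hr take-home for the tradesperson.
billed_rate = 150.0          # what the client pays per hour
billable_fraction = 0.5      # travel, quoting, slow weeks (assumed)
overhead_fraction = 0.4      # truck, tools, insurance, office (assumed)

hours_per_year = 40 * 48
gross = billed_rate * hours_per_year * billable_fraction
net = gross * (1 - overhead_fraction)
print(net / hours_per_year)  # effective hourly take-home before tax
```

Under these assumptions the effective rate lands around $45/hr, well below the billed $150.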
That's the entire point of VC though - VCs don't put money into regular roads or into normal houses, as these things have already got their usefulness determined and we know they're darn useful. VCs pump money into things that look like they could use a few billion dollars to get developed and ready for the mass market. AI fits that perfectly according to your description.
VCs who put money into things that are already proven are VCs who are missing out on the Next Big Things. "AI" will become fully developed largely using the money that VC is putting into it.
My issue is that when I hear that a startup is AI-focused, that means they're just implementing something with machine learning (and possibly unnecessarily). These startups generate a lot of hype, which I believe is unwarranted.
I imagine Google researchers are working towards advancing AI techniques and knowledge more generally, not trying to disrupt some industry with existing machine learning techniques.
Any job that requires significant expertise will command salaries at the top of the curve. Corporate litigation and IP attorneys command 400k-2m salaries, yet we don't get articles about IP lawyer bubbles, because the competition is vicious.
More interesting highly paid salaries to debate:
*Not in America? Yep, that's different.
(1) Calculus? I did well in courses in calculus, advanced calculus, advanced calculus for applications, general topology, modern analysis, real analysis, measure theory, functional analysis, and lots of applications to US national security and business. Taught calculus in a good university. Published peer-reviewed original research in calculus. Studied a lot more in calculus -- exterior algebra, numerical methods, the Navier Stokes equations, ordinary differential equations, deterministic optimal control, optimization, etc.
(2) Linear algebra. Worked in it for numerical methods, various approaches to curve fitting, multi-variate statistics. Did undergraduate honors paper on group representations that is just more linear algebra. Programmed a lot with linear algebra. Worked carefully through some of the best books, e.g., one by E. Nering, student of E. Artin, and Halmos, assistant to von Neumann. Did a lot in linear algebra as part of optimization. Same for the FFT. Same for Markov processes. Reinvented k-D trees and associated cutting plane tree back tracking for nearest neighbors. First actual course in linear algebra was an "advanced, second" course from world expert R. Horn -- found very little new and led the class by wide margins on all measures. Using linear algebra and LINPACK for a small part of current startup.
(3) Crucial tools in the applications desired by AL, ML, data science need a lot in probability, stochastic processes, statistics, and optimization. Have excellent backgrounds in all of those.
(4) Ph.D. research in stochastic optimal control, a grand example of a machine doing some learning as it exploits the history of the stochastic process driving the system to be controlled.
(5) Software. Programmed in lots of languages for lots of operating systems for lots of applications, especially for US national security.
So, it looks like (1)-(5) would be good qualifications for the "shortage" of people for AI/ML?
But, I sent over 1000 resumes; my resume is on several public resume collections; I've applied to Google, Microsoft, and many others. I've never been arrested or charged with a crime other than minor traffic violations. I have not been convicted of a traffic violation in over 10 years. I've never used illegal drugs or used legal drugs illegally. I'm a native born, US citizen in the US. I have no handicaps and no serious medical problems. I've done good work in applied math and computing at GE, FedEx, and, in AI, at IBM's Watson lab.
Result: I don't get phone calls from recruiters. Basically I'm 100% totally unemployable at anything above manual work at minimum wage.
So, I'm doing my own startup based on computing and some applied math I derived. To users, my work will be just a Web site. But the site is an excellent solution, and the first even good one, for a problem pressing for about 90% of everyone in the world with access to the Internet. I've designed the Web site and server farm and written the code. The code is 100,000 lines of typing based on Microsoft's .NET. The 100,000 lines have lots of comments and about 24,000 programming language statements. The code appears to run as intended and to be ready for at least early production.
Still, with that background, I'm 100% unemployable, at ANYTHING above minimum wage. This situation has cost me my chances of owning a house, getting married, having children, and my savings and inheritance.
I suspect it has something to do with the way you (a) look, or (b) socially interact with people. I don't mean this in an offensive way, but if you look / sound like Donald Knuth you probably won't get a job because you're not Donald Knuth. You're just some quirky eccentric person that probably won't fit in because you can't make small talk around a water fountain or you don't have an interesting story to tell after a weekend because you probably don't actually do anything on weekends that's fun. I don't know, it could be a hundred reasons. I bet it's got absolutely nothing to do with how smart you are, what you know, or who you studied under. I mean, for starters, why not just go back to academia? Are they rejecting you too?
At some point, if you're being rejected for 1000 jobs, I think you need to have an honest talk with yourself and ask whether the problem is you or whether the problem is the job market. You're a smart guy, I'm sure you can run the math and answer that question.
Best of luck!
Again, once again, over again, yet again, one more time, the qualifications (1)-(5) on a resume are terrific for AI/ML and innovative applications but get no responses at all.
Lesson: The claim that there is a shortage of people with good qualifications is total BS.
So, I'm starting my own business with some crucial, core original applied math. The math is difficult to duplicate or equal, especially by people who don't value (1)-(5) above.
Each job requires someone to create it. For the special case of a company founder, he creates his own. I'm no longer hoping someone will create a job for me and, as a company founder, am creating my own.
For my Web site, it's enough for lots of people to like the site. I hope and believe that a lot of people will like the site. For the revenue, it's enough for the advertisers that my Web site delivers lots of clicks from users with good demographics, and I hope and believe that will happen. And, except for trivialities, those two are enough.
Point: Back to the "lesson" above, there's no shortage.
Why are people claiming a shortage when there isn't one? There is a standard list of reasons, and there may be reasons not on the list. I don't have information enough to select the reasons in this case.
Perhaps start by having a friend with a steady gig look at your resume? Perhaps forward it to a recruiter? Or if by the time you read this you are filthy rich, congratulations!
Moreover, the article also claims there's a pipeline for the folks (below the PhD level? The writing is a little unclear) that's coming out: " At the current education rate, an influx of new experts will start to moderate salaries in three to four years, he says."
So who is making educational choices today that will leave them well-positioned to cash in? Probably nobody. Rather, as with all talent shortages, the people who win are those who had the luck or foresight to have already studied something that happened to get big, while everyone else rushes to catch up and floods the market.
Everything I did then turned out to be strongly related to techniques like Naive Bayes, Logistic Regression, Principal Component Analysis, Boosting, Bagging, and a bunch of other techniques that get reinvented over and over again. Once I mapped them over to their ML incarnations, they were familiar territory going forward.
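To make the "same math, new branding" point concrete, here's a minimal sketch (toy data, learning rate, and iteration count are all made up for illustration): ordinary maximum-likelihood logistic regression fit by gradient descent in plain NumPy -- the decades-old statistics procedure that now gets filed under "ML".

```python
import numpy as np

# Hypothetical toy data: one feature, binary label derived from it plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the mean log-loss: "AI" by today's branding,
# maximum-likelihood logistic regression by any statistician's.
w, b = np.zeros(1), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of mean log-loss w.r.t. w
    grad_b = np.mean(p - y)          # gradient w.r.t. the intercept
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Fraction of the training labels the fitted curve classifies correctly.
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(w, b, accuracy)
```

The point of the sketch is that nothing here is new: it's curve fitting, the curve just happens to be a sigmoid.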
I know quite a few folks at Apple and Google without PhDs in machine learning who are now making damn good money working on ML in more research oriented capacities.
Thing is, general AI, as developed by Google et al., is usually only a starting point. There's a lot of engineering effort required to make a valuable solution out of it.
That said, I think it's a bit unfair to imply that the only thing required of people to participate in this particular frenzy would be to learn a little probability and statistics. I've been working on a system that learns from an enormous set of medical imaging studies for the purposes of analyzing same, and the technical ML and domain knowledge you need to bring to that party is actually pretty humbling. You're not gonna do this stuff with a little statistics and some H1B's. At least, not in a fashion that any medical practitioner will take seriously.
I think in a few years, you'll have lots of lower level people who will know enough to provide meaningful contributions. Of course, the flip side of that is that their salaries will be significantly lower.
Alternatively, Dwayne Johnson (whom I happen to enjoy more than Wahlberg). He doesn't do high-brow entertainment (going back to his wrestling days). Simple, fun entertainment for a super broad audience never goes out of style. Your average person is pretty happy to forget their 9-to-5 and their troubles, and go see a big movie for escapism. That will never cease to be true. Arnold Schwarzenegger filled that role when I was a kid. That's not a criticism at all; it serves just as legitimate a purpose as what people who prefer to watch TED Talks instead of the next Rock action flick get. My father had a basic saying that I didn't appreciate until I was much older; whenever I would criticize something that seemed a bit neanderthal-like (so to speak), he'd respond with: it takes all types. What he meant, broadly, was that the world functions courtesy of a wild variety of all types of people. Would it function better if everyone were elitist and brilliant? I don't think so.
Miranda Sings, Jenna Marbles, PewDiePie, minecraft videos, twitch game streaming, et al., same fundamental as Wahlberg and The Rock.
The entire idea that it's "fair" to pay someone $400k a year to do a job but "unfair" to pay them $350k to do the same job is silly.
Innovation and differentiation is only one side of the ledger. There's also a strong desire to keep the competition away from the talent.
Some other resources I have bookmarked:
- Convolutional Neural Networks for Visual Recognition Youtube playlist 
- Deep Learning for Self-Driving Cars 
- Natural Language Processing with Deep Learning