Since "machine learning" and "artificial intelligence" are now mostly marketing slang and hence don't convey anything of substance anymore, it makes sense that job seekers want to get in on this new, highly paid trend.
Of course, that attracts institutions that sell matching certificates and magical one-click solutions. But they conveniently forget to mention that to use ML/AI tools well, one needs years of experience in stochastics... The average person still confuses mean and median, despite both being included and explained in Excel for 10+ years.
I predict that in 5 years, we'll have discrimination lawsuits by people who took a weekend AI course and feel offended that the math PhDs earn so much more, despite both of them working in AI.
When I was learning these concepts, I got lost in the terminology. I think most people have an intuitive understanding of them if they are properly motivated by an example. E.g., with a spam email detection system, you don't want to send real emails to spam, so you want high precision (the emails you predict are spam had better be spam!).
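To make the terms concrete, here's a minimal sketch (the toy labels are invented for illustration) computing precision and recall for such a spam filter by hand:

```python
# Toy spam-filter evaluation. 1 = spam, 0 = legitimate email.
y_true = [1, 1, 0, 0, 1, 0, 0, 1]  # what the emails actually were
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]  # what the model predicted

# Count the three outcomes that matter for precision and recall.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # spam caught
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # real mail flagged as spam
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # spam that slipped through

precision = tp / (tp + fp)  # of everything we flagged, how much was really spam?
recall = tp / (tp + fn)     # of all the real spam, how much did we catch?
print(precision, recall)
```

High precision means few legitimate emails end up in the spam folder (few false positives); high recall means little spam slips into the inbox (few false negatives). The spam-filter case is exactly where you'd trade some recall for precision.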
ML today is what a website backed by a database was in the mid-nineties. This kind of CRUD website is considered boring today, but making one used to be taught in an MIT course (https://philip.greenspun.com/teaching/one-term-web)
There will soon be ML frameworks that are as easy to get started with as a basic web app, and most people will be using one of them.
Yes! Most developers today don't think about hash joins, write-ahead logs, or B-tree indexes, but we easily download PostgreSQL, tweak some settings, and get a functioning database. Of course, as companies scale they will end up employing experts to tune their databases for maximum performance, but they work well out of the box. Good machine learning tools should be like this too: good performance out of the box and easy to use for non-experts, but highly tunable and configurable for advanced use cases.
With pure CRUD, you don't (and since the early 80s, didn't) need any code at all; no-code CRUD generators have existed since then. It is always CRUD plus something, where that something is human-facing, usually custom, and requires programming. For ML, that something might be easy or impossibly hard: it is already quite easy for someone who knows almost nothing about the theory to use ML that does things that could not be done 15 years ago. However, depending on what the 'something' is in the ML case, it might be impossible for even near-experts to fill in; a genuine expert may be needed at that point. This is not the case with CRUD, where most 'somethings', albeit usually not codeless, can be handled by very mediocre coders. My point: the gap between 'works with a few clicks' and 'custom code' is usually larger with ML, in terms of the expertise needed to bridge it.
Huggingface is close, for NLP algorithms. Add an abstraction layer, or plug-and-play integrations with web app builders, and it becomes very accessible.
Colab et al. allow very complex methods to be run by rank amateurs, which gives people a self-learning path toward more sophisticated uses.
CogView, DALL-E, and CLIP are revolutionary for image production, and video is close. Music transformers, synthetic voices, and other content can be thrown together to produce brand new styles of art.
Between the ever more general capabilities of large text models and the increasing mastery of media synthesis, AI is on the threshold of making the world really weird, really fast. I hope the next 10 years feel like the 90s, with these technologies maturing and expanding our horizons in computing and entertainment.
I think the notion that “3D printing will change how we put stuff together” hasn’t really manifested in meaningful ways.
I mean, you can 3D print cakes and buildings. But you could also just, you know, dumb-stack stuff and get the same result.
In the same vein, there are many examples of machine learning solutions searching high and low for problems to solve, when the problem could really be addressed by much simpler means.
Anecdotally, I once heard a talk about a local government that thought it needed AI to solve housing allocation. What they later found was that, for historical reasons, some applications had to go through an unnecessary number of hoops before being accepted. Policy changes alone eliminated the bottleneck. I wish I could find the case, if it's published anywhere.
3D printing is a very apt comparison to ML, but for the exact opposite reason.
It didn't replace the entirety of production like some people predicted, but it has become an extremely valuable tool for quick, low-cost prototyping and small production runs that otherwise wouldn't be economical because of the huge setup costs of traditional manufacturing (like injection molding).
I'm fairly certain that ML will go the same way, transforming from the magic bullet we currently want it to be into just another tool in the kit.
As with 3D printing, this process will take some time, with some people applying it far too much while others won't even consider it at all.
Eventually we will settle on a sweet spot where most people understand when to use it and when not to.