1) Using containers, the SaaS software can be installed on the client's cluster and managed by Kubernetes operators. Hence the cost of training and storage is borne by the clients themselves, not by the SaaS company.
2) The use of AutoML should increase the productivity of the startup's employees (especially for the ongoing retraining of models, deployment, monitoring, etc.).
The one problem that will always remain is new data and edge cases in the data. I believe this will be the major obstacle for the next five years.
I would also expect the number of models to explode (assuming they are trained and deployed via AutoML). A case in point is Uber, with models per city and time of day (thousands of models in production).
I would also argue that most businesses work with tabular data, where classical models (XGBoost, etc.) can reach the same performance as deep learning, if not better.
> Case in point is Uber with models per city/ time of day (with 1000's of models in production).
That is a good case in point, because one of the arguments in the article is that AI is expensive.
At what point does 40,000 compute-hours and a few million dollars spent on hundreds of city models become a better use of time and money than an afternoon noodling with ARIMA or some Fourier analysis on a $5,000 workstation?
Perhaps -- perhaps -- at Uber's scale, eking out a tenth of a percent is worth the time and money. But the rest of us schmoes can do pretty well with an SQL query, some R or Python or Julia, and a generous dollop of good old-fashioned all-American hubris.
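For what "an afternoon of Fourier analysis" looks like in practice, here is a minimal sketch: recovering the dominant cycle in a synthetic hourly demand series with numpy's FFT. The data and the 24-hour seasonality are made up for illustration.

```python
# Find the dominant period in a noisy hourly series via the FFT.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 90)                       # 90 days of hourly samples
demand = (100
          + 20 * np.sin(2 * np.pi * hours / 24)  # hidden daily cycle
          + rng.normal(0, 5, hours.size))        # noise

# Remove the mean so the zero-frequency bin doesn't dominate.
spectrum = np.abs(np.fft.rfft(demand - demand.mean()))
freqs = np.fft.rfftfreq(hours.size, d=1.0)       # cycles per hour
dominant_period = 1.0 / freqs[spectrum.argmax()]
print(f"dominant period: {dominant_period:.1f} hours")  # → 24.0 hours
```

A dozen lines, no training cluster, and it correctly pulls the daily cycle out of the noise — which is roughly the workstation-sized alternative being argued for.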