This is sadly so consistent with what I'm seeing at a big corporation. We are working so hard to make a centralized ML platform, get our data up to par, etc. but so many ML projects either have no chance of succeeding or have so little business value that they're not worth pursuing. Everyone on the development team for the project I'm working on is silently in agreement that our model would be better off being replaced by a well-managed rules engine, but every time we bring up these concerns, they're effectively disregarded.
There are obviously places in my company where ML is making an enormous impact, it's just not something that's fit for every single place where decisions need to be made. Sometimes doing some analysis to inform blunt rules works just as well - without the overhead of ML model management.
> or have so little business value that they're not worth pursuing
It seems that I'm inverted from you. The Machine part of Machine Learning is likely of high business value, but the Learning part (on our users' side) would be the easier and better solution.
We do a lot of hardware stuff and our customers are, well, let's just say they could use some re-training. Think not putting ink in the printer and then complaining about it. Only much more expensive. Because the details get murky (and legal-y and regulation-y) very quickly, we're forced to do ML on the products to 'assist' our users [0]. But in the end, the easiest solution is to have better users.
[0] Yes, UX, training, education, etc. We've tried, spent a lot of money on it. It doesn't help.
> Everyone on the development team for the project I'm working on is silently in agreement that our model would be better off being replaced by a well-managed rules engine
That was one of the better insights from our team. We should measure the value-add of ML against a baseline such as a simple rules engine, not against 0. In some cases that looked appealing ('lots of value by predicting Y better'), it turned out that a simple Excel sort would get us 90-98% of the value starting tomorrow. Spending a few weeks/months of an ML team's time then only makes sense if the business case for getting from 95% to 98% is big enough in itself. Hint: in many cases it isn't.
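To make the point concrete, here's a minimal sketch of that kind of baseline comparison. Everything in it is invented for illustration: the toy orders, the "Excel sort"-style rule, and the stand-in for the ML model. The idea is just that you score both against the same target and ask whether the gap pays for the ML work.

```python
# Hypothetical sketch: measure a model's value-add against a simple
# rules baseline instead of against doing nothing at all.

def rules_baseline(order):
    # "Excel sort"-style rule: flag the largest orders as high priority.
    return order["amount"] > 1000

def fancy_model(order):
    # Stand-in for an ML model; here just a slightly smarter rule.
    return order["amount"] > 1000 or order["repeat_customer"]

def value_captured(predict, orders):
    # Fraction of truly high-priority orders the predictor catches.
    hits = sum(1 for o in orders if o["high_priority"] and predict(o))
    total = sum(1 for o in orders if o["high_priority"])
    return hits / total

orders = [
    {"amount": 1500, "repeat_customer": False, "high_priority": True},
    {"amount": 2000, "repeat_customer": True,  "high_priority": True},
    {"amount": 300,  "repeat_customer": True,  "high_priority": True},
    {"amount": 200,  "repeat_customer": False, "high_priority": False},
    {"amount": 50,   "repeat_customer": False, "high_priority": False},
]

baseline_score = value_captured(rules_baseline, orders)  # catches 2 of 3
model_score = value_captured(fancy_model, orders)        # catches 3 of 3
```

The business question is then about the gap between the two scores, not about `model_score` alone.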
I think part of the problem here is that ML development is extraordinarily more expensive than traditional dev.
I don't generally need to develop my own deployment infrastructure for every new project. However, I've yet to see an ML team or company consistently use the same toolchain across two projects. The same pattern repeats across data processing, model development, and inference.
Oddly, adding more scientists appears to increase costs super-linearly, with the net effect being either duplicated effort or an exhaustive search across possible solutions.
Being mostly disconnected from the fruits of your labor while being incentivized to turn your resume into buzzword bingo causes bad technology choices that hurt the organization, what a surprise.