I believe the parent post means that, with current simulation-based tools and the large amounts of data generated by manufacturing processes, one can work directly with abstract machine learning models instead of building physical models or approximations of them, thus dropping the mathematical baggage of optimization/control theory in favor of a black-box, general-purpose approach.
I disagree: we have very few guarantees for machine learning algorithms, relative to well-known control approximations with good bounds. Moreover, I think it's quite dangerous to deploy such models in industrial processes without extensive testing, which, to my knowledge, is rarely done even by experts, much less by people only recently coming into the field. Conversely, control theory forces you to think about these things constantly, which I believe makes it harder to screw up, since the models are also highly interpretable and can be understood by people.
This is definitely not the case for high-dimensional models: what does the weight on the 3rd edge into the 15th node of the 23rd layer of that 500-layer deep net mean? Is it screwing us over? Do we have optimality guarantees?
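To make the interpretability contrast concrete, here is a minimal sketch (my own illustration, not from the thread): a discrete-time LQR controller for a double integrator, where every gain entry has a physical reading, next to a weight pulled from a randomly initialized deep net, which has none on its own. The system matrices and layer sizes are arbitrary choices for the example.

```python
import numpy as np

# Double integrator: state x = [position, velocity], input u = acceleration.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.eye(2)           # penalize position and velocity error equally
R = np.array([[1.0]])   # penalize control effort

# Iterate the discrete-time Riccati recursion to a fixed point.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

# Every entry of K is interpretable: u = -K @ x, so K[0, 0] is
# "acceleration commanded per meter of position error" and K[0, 1] is
# "acceleration commanded per m/s of velocity error".
print("LQR gains:", K)

# Contrast: a single weight deep inside a randomly initialized 500-layer
# net (hypothetical sizes) carries no unit or standalone meaning.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(500)]
w = layers[22][14, 2]   # "3rd edge into the 15th node of the 23rd layer"
print("some deep weight:", w)
```

The point is not that LQR beats deep nets, only that each number in the controller maps back to a physical quantity you can sanity-check, while the net's individual weights only acquire meaning through the whole composition.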