I disagree with the above, but I think I can shed light on what they might mean. Control theory (which is used in most manufacturing processes) usually requires quite a bit of background knowledge about the processes at hand, along with fairly powerful mathematical and physical tools: first to approximate and model those processes, and then to build systems that use these models to perform the desired task.
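
To make that concrete, here's a minimal sketch of the classical route: you write down an explicit physical model of the plant and design a controller against it. The plant equation, parameters, and PID gains below are all invented for illustration.

  # Hypothetical first-order plant (a heater): dT/dt = -a*T + b*u,
  # controlled by a hand-tuned PID loop. All constants are made up.
  a, b, dt = 0.1, 0.5, 0.1            # assumed plant parameters, time step
  Kp, Ki, Kd = 2.0, 0.5, 0.1          # hand-tuned PID gains

  setpoint, T = 50.0, 20.0            # target and initial temperature
  integral, prev_err = 0.0, setpoint - T

  for _ in range(2000):               # simulate 200 seconds
      err = setpoint - T
      integral += err * dt
      deriv = (err - prev_err) / dt
      u = Kp * err + Ki * integral + Kd * deriv   # control input
      T += (-a * T + b * u) * dt                  # Euler step of the model
      prev_err = err

  print(f"temperature after 200s: {T:.2f}")       # settles near 50

The point is that the model is sitting right there in the code, so you can reason about stability and tune the gains against something you actually understand.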

I believe the parent post means that with current simulation-based tools and the large amounts of data generated by manufacturing processes, one can work directly with abstract machine learning models instead of creating physical models or approximations thereof, thus disposing of the mathematical baggage of optimization and control theory in favor of a black-box, general-purpose approach.
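
As a sketch (with synthetic stand-in data and an arbitrary network size, both hypothetical), that pitch looks something like this: log input/output pairs from the running process and fit a generic regressor, with no physics written down anywhere.

  import numpy as np
  from sklearn.neural_network import MLPRegressor

  # Stand-in for logged process data: (temperature, input) -> next temp.
  # Real data would come from the plant historian; this is synthetic.
  rng = np.random.default_rng(0)
  T = rng.uniform(20.0, 80.0, size=5000)
  u = rng.uniform(0.0, 20.0, size=5000)
  T_next = T + (-0.1 * T + 0.5 * u) * 0.1   # secretly the same physics

  # Fit a generic black-box model; no process knowledge encoded anywhere.
  model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
  model.fit(np.column_stack([T, u]), T_next)

  print(model.predict([[50.0, 10.0]]))      # predicted next temperature

Note that there is no model equation anywhere you could analyze for stability or robustness; the "model" is just the fitted weights.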

I disagree, since we have very few guarantees about machine learning algorithms relative to well-known control approximations with good bounds. Additionally, I think it's quite dangerous to be toying with such models in industrial processes without extensive testing, which, to my knowledge, is rarely done in most settings by experts, much less by people only recently coming into the field. Conversely, you're forced to consistently think about these things in control theory, which I believe makes it harder to screw up, since the models are also highly interpretable and can be understood by people.

This is definitely not the case in high-dimensional models: what does the 3rd edge connected to the 15th node in the 23rd layer of that 500-layer deep net mean? Is it screwing us over? Do we have optimality guarantees?
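
Continuing the sketch above (indices hypothetical, 0-indexed), that question is literal: the edge in question is just one entry of a weight matrix.

  # One entry of a hidden-layer weight matrix from the fitted net above,
  # roughly "the 3rd edge into the 15th unit of a hidden layer":
  w = model.coefs_[1][2, 14]
  print(w)   # a bare float: no units, no physical meaning in isolation

Compare that with the control sketch earlier, where every constant is a gain or a time constant you can point at.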



