
Incorporating priors from physics into hybrid DNN-blackbox + traditional models makes a lot of sense for these kinds of applications. It also makes sense that regularizing the DNN blackbox to keep it "smooth enough" (i.e., ensuring the change in output relative to the change in input stays below some threshold, a Lipschitz-type bound) helps make these complicated models more stable.
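
For what it's worth, here is a minimal sketch of one common way to impose that kind of smoothness: a Jacobian-norm penalty added to the training loss. I don't know that this is what the authors actually do (spectral normalization is another popular option), and names like lip_bound here are made up:

    import torch

    def smoothness_penalty(model, x, lip_bound=1.0):
        # Soft Lipschitz-style constraint: penalize input sensitivity
        # wherever the norm of the gradient of the (summed) outputs
        # w.r.t. the inputs exceeds a chosen bound.
        x = x.clone().requires_grad_(True)
        y = model(x)
        (grad,) = torch.autograd.grad(y.sum(), x, create_graph=True)
        sensitivity = grad.norm(dim=-1)   # per-sample ||dy/dx||
        return torch.relu(sensitivity - lip_bound).pow(2).mean()

    # In training: loss = task_loss + lam * smoothness_penalty(model, x)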

However, I don't quite understand how the authors are encoding "domain invariance" with "a domain adversarially invariant meta-learning algorithm." If any of the authors are on HN, a more concrete explanation of such "domain invariance encoding" would be greatly appreciated!

Finally, I have to say: The field of deep learning and AI is going to benefit enormously from the involvement of more people with strong backgrounds in physics, especially the theorists who have invested years or decades of their lives thinking about and figuring out how to model complicated physical systems.




Without reading the papers it’s hard to know for sure, but we can break down the terms and make a decent guess.

Domain invariance is the ability of a model to perform consistently across multiple problem domains (e.g., wind conditions, type of drone, etc.).

Adversarial training means teaching the system to handle different domains by deliberately feeding it the hardest possible examples of those domains. These difficult examples are typically found by inspecting the gradients of the main model to see what would hurt it most, then training the model to behave well on exactly those problematic cases. The math is similar to GANs, where multiple neural nets effectively fight against each other to achieve some desired outcome.
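
One concrete, widely used instance of this is the gradient-reversal trick from domain-adversarial training (DANN). Whether the paper does exactly this I can't say; the layer sizes and the "wind condition" labels below are invented for illustration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; flips the gradient's sign on
        # the backward pass, so whatever feeds this layer is trained
        # to *fool* the domain classifier attached after it.
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -grad_out

    features = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
    task_head = nn.Linear(64, 1)     # the main prediction
    domain_head = nn.Linear(64, 4)   # e.g. 4 hypothetical wind conditions

    def loss_fn(x, y, domain_label):
        z = features(x)
        task_loss = F.mse_loss(task_head(z), y)
        # The domain head learns to identify the domain, while the
        # reversed gradient pushes the features toward domain invariance.
        dom_loss = F.cross_entropy(domain_head(GradReverse.apply(z)),
                                   domain_label)
        return task_loss + dom_loss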

Meta-learning is where you use ML to learn something about how to train your model. This can take many forms. Sometimes this means learning how an optimizer should work. Sometimes it’s a lot like fine-tuning where the meta-learner learns a kind of base model that can easily be adapted to new situations. I’d guess in this situation it’s the latter.
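
As a toy illustration of that latter flavor, here is a Reptile-style sketch (a simple first-order cousin of MAML); the paper's actual algorithm is presumably more sophisticated, and the hyperparameters here are arbitrary:

    import copy
    import torch
    import torch.nn.functional as F

    def reptile_step(model, tasks, inner_lr=1e-2, meta_lr=1e-1, inner_steps=5):
        # For each task (domain), fine-tune a throwaway copy for a few
        # steps, then nudge the shared initialization toward the adapted
        # weights.  Repeated over many tasks, the base model becomes one
        # that adapts to a *new* domain in only a few gradient steps.
        for task_x, task_y in tasks:
            clone = copy.deepcopy(model)
            opt = torch.optim.SGD(clone.parameters(), lr=inner_lr)
            for _ in range(inner_steps):
                opt.zero_grad()
                F.mse_loss(clone(task_x), task_y).backward()
                opt.step()
            with torch.no_grad():
                for p, q in zip(model.parameters(), clone.parameters()):
                    p += meta_lr * (q - p)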


That was my initial impression too, but the authors' use of the term "invariance" throws me off -- it makes me think that maybe they have found a way to identify and model mathematical invariants across different domains...?


> Incorporating priors from physics into hybrid DNN-blackbox + traditional models

do you have any good references where you think this is done particularly well?


Take a look at the blog of Chris Rackauckas, a big proponent of such hybrid modeling, who uses the term "scientific machine learning" to refer to the notion: http://www.stochasticlifestyle.com

For example, see http://www.stochasticlifestyle.com/the-use-and-practice-of-s...



