Neural-control family: what deep learning and control enables in the real world (gshi.me)
126 points by sebg 4 days ago | 18 comments





Incorporating priors from physics into hybrid DNN-blackbox + traditional models makes a lot of sense for these kinds of applications. It also makes sense that regularizing the DNN blackbox to make sure it's "smooth enough" (i.e., ensuring the change in output relative to the change in input stays below some threshold, a Lipschitz-style constraint) helps make these complicated models more stable.
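
Roughly, the combination looks like this (a minimal PyTorch sketch of the general idea, not the authors' code; the state dimensions and the gravity-only physics term are made up for illustration):

    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class HybridDynamics(nn.Module):
        """Known physics plus a smoothness-constrained DNN residual.

        Spectral normalization caps each layer's spectral norm at 1; with
        1-Lipschitz activations (Tanh), the residual's overall Lipschitz
        constant is bounded, i.e. the change in output relative to the
        change in input stays below a threshold.
        """

        def __init__(self, state_dim=6, mass=1.0, g=9.81):
            super().__init__()
            self.mass, self.g = mass, g
            self.residual = nn.Sequential(
                spectral_norm(nn.Linear(state_dim, 64)), nn.Tanh(),
                spectral_norm(nn.Linear(64, 64)), nn.Tanh(),
                spectral_norm(nn.Linear(64, 3)),
            )

        def forward(self, state):
            # Prior from physics: gravity on the vehicle (placeholder model).
            f_physics = torch.tensor([0.0, 0.0, -self.mass * self.g])
            # Learned residual for unmodeled effects (e.g. aerodynamic drag).
            return f_physics + self.residual(state)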

However, I don't quite understand how the authors are encoding "domain invariance" with "a domain adversarially invariant meta-learning algorithm." I'm not sure what that means. If any of the authors are on HN, a more concrete explanation of such "domain invariance encoding" would be greatly appreciated!

Finally, I have to say: The field of deep learning and AI is going to benefit enormously from the involvement of more people with strong backgrounds in physics, especially the theorists who have invested many years or decades of their lives thinking about and figuring out how to model complicated physical systems.


Without reading the papers it’s hard to know for sure, but we can break down the terms and make a decent guess.

Domain invariance is a model's ability to deal with multiple problem domains (e.g. wind conditions, type of drone, etc.).

Adversarial training means teaching the system to handle different domains by deliberately giving it the hardest possible examples of each. These difficult examples are typically found by looking at the gradients of the main algorithm to see what would cause it the most problems, then giving it that problematic domain and forcing it to behave well. The math is similar to GANs, where multiple neural nets effectively fight each other to achieve some desired outcome.
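
In code, the standard trick for the "adversarially invariant" part is a gradient-reversal layer (the DANN recipe; I'm guessing this is close to what they do, but the dimensions and heads below are made up):

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips (and scales) the gradient on
        the backward pass, so the shared encoder learns features that fool
        the domain classifier while still serving the main task."""

        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    features = nn.Sequential(nn.Linear(10, 64), nn.ReLU())  # shared encoder
    task_head = nn.Linear(64, 1)     # e.g. predict residual wind force
    domain_head = nn.Linear(64, 4)   # e.g. classify among 4 wind conditions

    def losses(x, y_task, y_domain, lam=0.5):
        z = features(x)
        task_loss = nn.functional.mse_loss(task_head(z), y_task)
        # Reversed gradients push z toward being domain-invariant.
        domain_loss = nn.functional.cross_entropy(
            domain_head(GradReverse.apply(z, lam)), y_domain)
        return task_loss + domain_loss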

Meta-learning is where you use ML to learn something about how to train your model. This can take many forms. Sometimes this means learning how an optimizer should work. Sometimes it’s a lot like fine-tuning where the meta-learner learns a kind of base model that can easily be adapted to new situations. I’d guess in this situation it’s the latter.
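
A minimal sketch of that latter flavor (MAML-style; the single explicit weight matrix and the task structure are hypothetical, just to keep the inner loop simple):

    import torch

    # One explicit weight matrix keeps the inner loop simple; a real model
    # would need a functional forward pass over all of its parameters.
    w = torch.randn(3, 1, requires_grad=True)
    meta_opt = torch.optim.SGD([w], lr=1e-2)

    def model(x, w):
        return x @ w

    def maml_step(tasks, inner_lr=0.1):
        """tasks: list of (x_support, y_support, x_query, y_query) tuples."""
        meta_loss = 0.0
        for xs, ys, xq, yq in tasks:
            # Inner loop: one gradient step of adaptation on the support set.
            inner_loss = ((model(xs, w) - ys) ** 2).mean()
            (grad,) = torch.autograd.grad(inner_loss, w, create_graph=True)
            w_adapted = w - inner_lr * grad
            # Outer loss: how well the *adapted* weights do on the query set.
            meta_loss = meta_loss + ((model(xq, w_adapted) - yq) ** 2).mean()
        meta_opt.zero_grad()
        meta_loss.backward()  # differentiates through the inner update
        meta_opt.step()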


That was my initial impression too, but the authors' use of the term "invariance" throws me off -- it makes me think that maybe they have found a way to identify and model mathematical invariants across different domains...?

> Incorporating priors from physics into hybrid DNN-blackbox + traditional models

do you have any good references where you think this is done particularly well?


Take a look at the blog of Chris Rackauckas, a big proponent of such hybrid modeling, who uses the term "scientific machine learning" to refer to the notion: http://www.stochasticlifestyle.com

For example, see http://www.stochasticlifestyle.com/the-use-and-practice-of-s...


This depends on which "real world" you're talking about.

Doing what feedback-driven control systems do, but even better, is a nice and impressive application. It seems most useful for cases like the one described here: swarms of flying drones. Flight has generally already yielded to various control systems; autopilots work because the skies are mostly empty, so your system working according to your predictions is all that matters. A drone swarm is much more complicated but is still under the system's control.

It's worth saying that the "real world" where a lot of robots fail has different challenges. Whether you're talking self-driving cars, robot dogs accompanying troops, or wheeled delivery robots in hospitals, the problem is figuring out both what you're looking at and how to respond to it. And nearly anything can show up and require a unique response, so progress here never quite seems to be enough. Better physics and better cooperation between controlled elements don't seem that useful for that, so this approach might not help in that "real world".


That is why DNNs will not solve the general AI problem. They do not cover the full decision space, they do not cover the true feasible region, and they make huge (and unpredictably wrong) assumptions about interpolation and extrapolation.

In controlled environments with well-known responses they can probably work (though I'm not sure why you wouldn't just go with traditional control approaches there), but I really don't see DNNs working outside that.


Agents that are embodied and can perform interventions in their environment have a chance to go beyond correlation learning. They can formulate hypotheses and try them out in the real world, like a scientist.

Thing is, you don't have to formulate hypotheses for things we have already figured out. We humans have established, for example, that momentum balance holds, so when we make trajectory predictions, this (differential) equality is a hard constraint in our calculations.

A DNN cannot guarantee that its predictions respect momentum balance. With proper training you can only reduce the probability of violating it.
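
Concretely, the best a vanilla DNN can do is a soft penalty, something like this (hypothetical two-body example, not from the article):

    import torch
    import torch.nn.functional as F

    def loss_with_momentum_penalty(pred_v1, pred_v2, target_v1, target_v2,
                                   v1, v2, m1, m2, lam=1.0):
        # Ordinary task loss on predicted post-collision velocities.
        task = F.mse_loss(pred_v1, target_v1) + F.mse_loss(pred_v2, target_v2)
        # Soft penalty: m1*v1' + m2*v2' should equal m1*v1 + m2*v2. Training
        # shrinks the expected violation, but unlike a hard architectural
        # constraint it guarantees nothing at test time.
        violation = (m1 * pred_v1 + m2 * pred_v2) - (m1 * v1 + m2 * v2)
        return task + lam * violation.pow(2).mean()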

It is plainly the wrong tool for the task.


I've been watching Steve Brunton's lab closely as they discover dynamical/control systems via NNs/autoencoders. His videos really helped me understand what happens under the hood when finding sparse solutions to chaotic systems: https://www.youtube.com/watch?v=KmQkDgu-Qp0
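
For anyone curious, the core of that lab's SINDy idea is surprisingly compact: sequentially thresholded least squares over a library of candidate terms. A rough NumPy sketch (2-state system, polynomial library up to degree 2; the variable names are mine):

    import numpy as np

    def sindy(X, dXdt, threshold=0.1, n_iter=10):
        """Sparse identification of nonlinear dynamics, in miniature:
        regress measured derivatives onto a library of candidate terms,
        then repeatedly zero out small coefficients and re-fit."""
        x, y = X[:, 0], X[:, 1]
        Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(n_iter):  # sequentially thresholded least squares
            Xi[np.abs(Xi) < threshold] = 0.0
            for k in range(dXdt.shape[1]):
                big = np.abs(Xi[:, k]) >= threshold
                if big.any():
                    Xi[big, k] = np.linalg.lstsq(
                        Theta[:, big], dXdt[:, k], rcond=None)[0]
        return Xi  # nonzero rows identify the active terms in each equation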

What bugs me about most sci-fi is that the robots have bad aim. Watching these neural control videos, it becomes pretty clear that the robots will kill the people in a fictional sci-fi setting from miles away before our protagonist even knows they're there.

How about this: in movies, the AI formulates good, semantically meaningful phrases, but its text-to-speech still sounds like a tin-can voice. In reality, TTS is easy and phrasing is hard.

Yeah, R2D2 is the epitome of this. He(?)'s apparently fully sentient and can comprehend a bunch of languages but only communicates in beeps and whistles. The only way it makes sense is if he's not actually incapable of speech, he's just an eccentric who refuses to use "human speak."

Oh, there are so many ways in which sci-fi robots are annoying. Probably the worst overall category for me is the "robot = metal man" trope, where a robot is just meant to be a person in a clanky metal suit. It shows up in so many forms: the "battery going flat = tearjerker death scene" thing, as if people who build sentient robots somehow don't know about flash drives; the "robot can only be in one place at one time" thing, as if wifi isn't a thing. I could go on for a while.

Surprisingly?! You just have less need for false positives...

Hasn't Andrew Ng been doing this since 2008?

https://www.youtube.com/watch?v=M-QUkgk3HyE


Wow, that video is an amazing time capsule of top AI researchers working together! But aside from controlling helicopters, the research is very different. That work was based on apprenticeship learning, where a human provides the ground truth and the algorithm learns to mimic it. This paper learns general control systems, and critically, provably stable control systems, without human involvement.

Interesting part about taking advantage of invariances. There is more to this article than I can digest on Thanksgiving. Bookmarked for later.


