
This depends on which "real world" you're talking about.

Doing what feedback-driven control systems do, but better, is a nice and impressive application. It seems most useful for applications like the one being described: swarms of flying drones. Flying in general has already yielded to various control systems; autopilots work because the skies are mostly empty, so your system behaving according to your predictions is all that matters. A drone swarm is much more complicated, but it is still under the system's control.
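To make "feedback-driven control system" concrete, here's a minimal sketch in Python. The gains and the point-mass "altitude" plant are made-up illustration values, not anything from the article; it's just the shape of the loop an autopilot builds on:

    # Minimal PID loop holding a toy 1-D "altitude" plant at a setpoint.
    # Gains and plant are illustrative, not tuned for any real vehicle.
    def pid_step(error, prev_error, integral, kp=2.0, ki=0.5, kd=1.0, dt=0.05):
        integral += error * dt
        derivative = (error - prev_error) / dt
        return kp * error + ki * integral + kd * derivative, integral

    altitude, velocity, integral, prev_error = 0.0, 0.0, 0.0, 0.0
    setpoint, dt = 10.0, 0.05
    for _ in range(400):
        error = setpoint - altitude
        thrust, integral = pid_step(error, prev_error, integral, dt=dt)
        prev_error = error
        velocity += (thrust - 9.81) * dt   # toy point-mass dynamics with gravity
        altitude += velocity * dt
    print(f"altitude after 20 s: {altitude:.2f} m")  # settles near the 10 m setpoint

The point is that the feedback loop corrects its own prediction errors continuously, which is exactly why it works so well when the environment stays within the model's assumptions.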

It's worth saying that the "real world" where a lot of robots fail has different challenges. Whether you're talking about self-driving cars, robot dogs accompanying troops, or wheeled delivery robots in hospitals, the problem is figuring out both what you're looking at and how to respond to it. And here nearly anything can show up and demand a unique response, so progress never quite gets there. Better physics and better cooperation between controlled elements don't seem that useful for that problem, so this approach might not help in this "real world".




That is why DNNs will not solve the general AI problem. They do not cover the full decision space, they do not cover the true feasible region, and they make huge (and unpredictably wrong) assumptions about interpolation and extrapolation.

In controlled environments with well-known responses they can probably work (though I'm not sure why you wouldn't go with traditional control approaches there), but I really don't see DNNs working outside that.
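For what it's worth, "traditional control approach" here would typically mean something like LQR, where the guarantees come from an explicit model. A minimal sketch, assuming a toy double-integrator plant and illustrative cost weights:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Discrete-time LQR for an assumed double-integrator (position, velocity).
    # Unlike a DNN policy, the gain K carries closed-loop guarantees for this model.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    B = np.array([[0.0], [dt]])             # control input (acceleration)
    Q = np.diag([10.0, 1.0])                # state cost weights (illustrative)
    R = np.array([[0.1]])                   # control cost weight

    P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
    K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)  # optimal feedback gain

    x = np.array([[5.0], [0.0]])            # start 5 units from the origin
    for _ in range(100):
        u = -K @ x                          # state feedback
        x = A @ x + B @ u
    print(x.ravel())                        # -> close to [0, 0]

The whole controller is a 1x2 gain matrix; when the plant model really holds, that's hard to beat with a learned policy.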


Agents that are embodied and can perform interventions in their environment have a chance to go beyond correlation learning. They can formulate hypotheses and try them out in the real world, like a scientist.
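A toy version of that point, with made-up numbers: two observed variables can be strongly correlated through a hidden common cause, and only an intervention (the agent forcing one variable itself) reveals that neither drives the other:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hidden common cause: sunlight drives both ice-cream sales and sunburn.
    sun = rng.normal(size=n)
    ice_cream = sun + 0.3 * rng.normal(size=n)
    sunburn = sun + 0.3 * rng.normal(size=n)
    print(np.corrcoef(ice_cream, sunburn)[0, 1])   # strong correlation, ~0.9

    # Intervention: the agent *sets* ice-cream sales itself, severing the
    # link to the hidden cause (a do-operation).
    ice_cream_do = rng.normal(size=n)               # forced by the agent
    sunburn_after = sun + 0.3 * rng.normal(size=n)  # mechanism unchanged
    print(np.corrcoef(ice_cream_do, sunburn_after)[0, 1])  # ~0: no causal effect

A passive learner sees only the first number; only the agent that can act gets to see the second.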


The thing is, you don't have to formulate hypotheses for things we have already figured out. We humans have already established, for example, that momentum balance holds, so when we make trajectory predictions, this (differential) equality is a hard constraint in our calculations.

A DNN cannot guarantee that its predictions respect momentum balance. Proper training can only reduce the probability of violating it.
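A toy sketch of the difference (two-body collision, untrained random weights standing in for the DNN): a net that predicts both post-collision velocities directly will generically violate momentum balance, while one that predicts a single exchanged impulse conserves it exactly, by construction:

    import numpy as np

    rng = np.random.default_rng(1)
    m1, m2, v1, v2 = 1.0, 2.0, 3.0, -1.0   # toy two-body collision
    x = np.array([m1, m2, v1, v2])
    p_before = m1 * v1 + m2 * v2

    # Free-form "net": predicts both post-collision velocities directly.
    # Training would shrink the violation below, but never guarantee zero.
    W_free = rng.normal(size=(2, 4))
    v1_hat, v2_hat = W_free @ x
    print(abs(m1 * v1_hat + m2 * v2_hat - p_before))  # nonzero: momentum violated

    # Hard-constrained parameterization: the net predicts one exchanged
    # impulse J; equal and opposite application conserves momentum identically.
    w_impulse = rng.normal(size=4)
    J = w_impulse @ x
    v1_c, v2_c = v1 + J / m1, v2 - J / m2
    print(abs(m1 * v1_c + m2 * v2_c - p_before))      # 0 up to float rounding

Training can only make the first number small; the second parameterization makes it identically zero, which is the difference between a soft penalty and a hard constraint.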

It is plainly the wrong tool for the task.



