Every Good Regulator of a System Must Be a Model of That System (1970) [pdf] (vub.ac.be)
110 points by tischler on Mar 8, 2018 | 28 comments

It’d be nice to list the names of the books, rather than just links. I’m curious about your suggestions, but don’t necessarily want to visit Amazon.

The books are "Have Fun at Work" and "Friends in High Places," both by William L. Livingston.

Do you know any other books on the same topic? I'm not sure what good keywords for this would be. Ebooks preferably. Looks interesting.

Thanks! Many of the articles on pangaro.com were quite interesting.

I don’t see how the last sentence (the brain must model the environment) follows from the admission on the previous page that a regulator can skip the model by taking on unnecessary complexity. It seems there is a built-in assumption that the brain has no unnecessary complexity, if I am following correctly. I wouldn’t be so sure about that! (Although, I should add, the idea that the brain models its environment sounds intuitively true beyond question... I’m just trying to follow the arguments put forth in the paper itself.)

>> modelling might in fact be a necessary part of regulation.

That seems false. PID controllers are used all over, with no models. Very simple, very effective.

The fact that PID loops work for many systems is a consequence of the dynamics of those systems, for example that the integral of velocity is position and the derivative of velocity is acceleration. A vast array of phenomena can be modeled as 2nd order linear ODEs.

Which just means that for many things your brain doesn't need a new model, only to parametrize one of the most common ones.

One way to look at a PID controller is that it uses a simple model that is correct in a lot of situations.

Yeah, tuning a PID controller is in fact modeling the dynamics of the system. You can't really expect an untuned PID controller to just work.
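A minimal sketch of that point: a discrete PID loop driving a hypothetical first-order plant (the plant, gains, and simulation horizon below are all illustrative, not from the paper). The gains kp/ki/kd are where the "model" hides: they encode assumptions about the plant's dynamics, and with all gains zeroed the controller does nothing.

```python
# Discrete PID controller on a toy first-order plant dx/dt = u - x.
# The tuned gains implicitly model the plant; untuned gains do not regulate.

def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    x = 0.0                # plant state (first-order lag)
    state = (0.0, 0.0)     # (integral, previous error)
    for _ in range(steps):
        error = setpoint - x
        u, state = pid_step(error, state, kp, ki, kd, dt)
        x += (u - x) * dt  # Euler step of the toy plant
    return x

print(simulate(kp=5.0, ki=2.0, kd=0.1))  # settles near the setpoint 1.0
print(simulate(kp=0.0, ki=0.0, kd=0.0))  # untuned: output stays at 0.0
```

The tuned run converges because the gains happen to stabilize this particular second-order closed loop; swap in a plant with different dynamics and the same gains can oscillate or diverge, which is the sense in which tuning is modeling.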

As a bad physicist ("physics is the art of approximation", some prof said during my studies): Indeed, you only have to model the relevant aspect(s) of the system, which may be completely unrecognizable as the system.

I guess it's somewhat more true in programming, because if you have only ten lines, can you even call it modeling?

That said, I like to say that the best kind of code teaches you something new about the problem it's solving. Some invariant, widely usable simplification, etc. Still not the same as modeling the system as such, though.

>because if you have only ten lines, can you even call it modeling?

Ten? Do you even Perl?

Yes. That sentence is generally untrue if the context is not specified.

Either the authors made a rudimentary error or they specified the context. If you read on, they define the problem and discuss error-controlled and cause-controlled regulation.

linear model

This is interesting in context of so-called model-free and model-based reinforcement learning.

I tacitly assumed that "model-free" really meant "the model has been marginalized out", i.e. averaged over rather than treated as a function to be estimated/learned through regression. Is that not the case?
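For what it's worth, "model-free" in RL usually means something blunter: no transition or reward function is estimated at all; values are updated directly from sampled transitions. A minimal sketch, using tabular Q-learning on a hypothetical two-state toy environment (all names and the environment itself are illustrative):

```python
import random

# Tabular Q-learning: "model-free" in that no transition/reward model
# is ever represented; Q-values are updated from raw samples alone.
random.seed(0)

n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def env_step(s, a):
    # Toy environment: action 1 moves to state 1, which pays reward 1;
    # action 0 moves to state 0 for no reward.
    s2 = 1 if a == 1 else 0
    return s2, (1.0 if s2 == 1 else 0.0)

s = 0
for _ in range(5000):
    if random.random() < eps:                      # explore
        a = random.randrange(n_actions)
    else:                                          # exploit
        a = max(range(n_actions), key=lambda i: Q[s][i])
    s2, r = env_step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

print(Q)  # action 1 ends up preferred in both states
```

The learned policy is correct even though the agent never wrote down the (trivial) transition dynamics, which is the usual operational meaning of "model-free" rather than "marginalized out".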

It feels like people keep rediscovering the implications of Turing-completeness in different domains.

I don't think anyone doubts that being able to regulate a Turing machine would require a system that is Turing complete. But this seems very far from saying that regulating any system requires a complete (isomorphic) model of it. Or did you mean something else?

A surprisingly large number of things that we normally don't think of as computers are Turing-complete. It's a very low bar of complexity. And regulating requires modeling requires simulation. That's what Turing's result says: to know whether a program halts you must run it.
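A tiny sketch of that asymmetry (the `Countdown` "program" and the step-budget interface below are toy illustrations, not from Turing's paper): simulating a program under a step budget can confirm halting, but exhausting the budget proves nothing.

```python
class Countdown:
    """Toy 'program': counts down from n, halts at zero."""
    def __init__(self, n):
        self.start = n
    def step(self, state):
        return None if state <= 0 else state - 1

def runs_within(program, budget):
    # Simulate for at most `budget` steps. Simulation can confirm
    # halting, but running out of budget leaves the question open:
    # the program may halt later, or never.
    state = program.start
    for _ in range(budget):
        state = program.step(state)
        if state is None:    # convention: None means halted
            return True
    return None              # unknown

print(runs_within(Countdown(3), 10))   # True: halting confirmed
print(runs_within(Countdown(100), 5))  # None: inconclusive
```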

A surprisingly large number of things can be modeled as Hamiltonian systems -- in fact, all things -- but this does not imply that Liouville's theorem can be usefully applied universally (or even frequently). The reason is that the subset of variables we actually have access to and care about are not Hamiltonian.

Likewise, observing that some microscopic piece of a system has operations that can be mapped on to a Turing machine does not mean that the output of the Turing machine controls the variables we care about.

Additionally, we prove constraints about the outputs of particular software (executed on Turing machines) all the time. Noting that some piece of a system is isomorphic to a Turing machine does not actually mean it will be fed arbitrary instructions.

Would you mind expanding your comment?

Don't know why the parent was downvoted...

If one can't make the connection between Turing completeness/automata theory and neuro-cognitive function... think more.

LOL. The second paragraph in the abstract starts with "m this paper a theorem is presented […]"

Probably an OCR error.

Especially likely, since the Introduction section contains "i(lea" instead of "idea".
