For instance, a standard thermostat keeps your house at 20 +/- 2 degrees C. In order to keep it at 20.00000000 exactly, it might need to model every air molecule bouncing around.
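(That +/- 2 band is just bang-bang control with hysteresis; a minimal sketch in Python, with the setpoint and deadband taken from the example above and everything else made up:)

    # Bang-bang thermostat: the only "model" it carries is a setpoint
    # and a deadband, not the physics of every air molecule.
    def thermostat_step(temp_c, heater_on, setpoint=20.0, deadband=2.0):
        if temp_c < setpoint - deadband:
            return True   # too cold: turn the heater on
        if temp_c > setpoint + deadband:
            return False  # too warm: turn the heater off
        return heater_on  # inside the band: keep the current state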
The colloquial restatement of the theorem suggests a regulator "is a model of [a] system" if the regulator's action is a function of the system's state and nothing else. By that standard, even a constant regulator with only one state meets the criterion for being a "model". The only possibility excluded is that the regulator has extra details that don't correspond to states of the system.
In your thermostat example, the theorem doesn't even begin to say that, even though that's what the title suggests, and what everyone wants to believe. In fact, the theorem merely says a thermostat that only depends on the temperature of the room is simpler than one that also cares if the walls are blue.
So, it may be a correct proof, but it's a useless one.
But you didn't need my comment for that: the fact that there is an entire engineering discipline thriving on doing exactly what the article claims to be impossible should suffice.
That said, I think I agree with Baez about the actual mathematical content of the paper. That is, the proof is fine, but the way they try to formalize "model" seems so weak as to be useless. The definition used in the proof is that R is a model of S if the state of S determines the state of R. There's nothing about preserving the fidelity of those states.
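To make that concrete (a toy sketch of my own, not the paper's notation): any function of S's state satisfies the definition, including one that throws all the information away.

    # The criterion only requires R's state to be *some* function of
    # S's state. Both of these qualify as "models" of S:
    h_constant = lambda s: 0   # one-state regulator; ignores S entirely
    h_replica  = lambda s: s   # copies S exactly
    # Nothing in the definition about fidelity distinguishes the two.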
edit: Maybe the clearest way I can put this confusion is that a regulator based on measurement of the system it's regulating is a "model" per the paper. The authors chose a formalism, apparently more common at the time, that describes regulation as a function of "disturbances"/inputs only. Through this lens their definition of model makes more sense to me, but only in those cases where there is no measurement or feedback.
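A sketch of that distinction, with made-up names and numbers rather than the paper's setup:

    # Feedforward (the paper's lens): the regulator sees only the
    # disturbance, so any knowledge of the system must live inside it.
    def feedforward_heat(outdoor_temp_c, heat_loss_per_deg=0.1):
        return heat_loss_per_deg * max(0.0, 20.0 - outdoor_temp_c)

    # Feedback: the regulator sees a measurement of the system itself,
    # so far less of the system needs to be "modelled" internally.
    def feedback_heat(room_temp_c, gain=0.5):
        return gain * max(0.0, 20.0 - room_temp_c)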
I think you're mistaking the term "model" for "replica". You don't need a replica, but you do need a model for your relevant tasks.
If the heater needs 30 mins to warm up before it starts pumping heat, you'll need to model that. If it overheats after running for 3 hrs straight, you'll need to model that too. You don't need every molecule of the physical item replicated, but you do need to model all of the relevant behaviors.
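Something like this, say (the 30 min and 3 hr figures are from the example above; the rest is invented):

    # Model only the relevant behaviors: warm-up lag and overheat cutoff.
    WARMUP_MIN = 30       # no useful heat for the first 30 minutes
    MAX_RUN_MIN = 180     # overheats after 3 hours of continuous running

    def heater_output(minutes_running):
        if minutes_running < WARMUP_MIN:
            return 0.0    # still warming up
        if minutes_running > MAX_RUN_MIN:
            return 0.0    # overheat cutoff has tripped
        return 1.0        # full output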
I can think of a couple of places where it could have avoided accidents (one tragic and one expensive). The 1974 DC-10 crash near Paris was supposed to be impossible because the airplane could not be pressurized unless the cargo door was properly latched, but the mechanism depended on the position of the handle rather than on the latching pins. At Three Mile Island, the operators were trained to use the pressurizer water level as the primary indicator of the state of the system, and turned off the emergency cooling feed as a consequence.
~ W. Ross Ashby, "An Introduction to Cybernetics", http://pespmc1.vub.ac.be/ASHBBOOK.html
I worked at a place that had some sort of just-in-time heater for the sinks in the bathroom. One of them oscillated: too hot, too cold, too hot, too cold, etc. Eventually someone adjusted some setting and it worked properly. That setting was (part of) the "model" of the heater+sink+bathroom system.
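Pure speculation about what that setting was, but the classic failure mode looks like this: the control gain is too high for the delay between the heater and the tap, and turning it down stops the oscillation.

    # Proportional control acting on a delayed measurement (hot water
    # takes a moment to reach the tap). Try simulate(2.0): it bounces
    # between too hot and too cold forever. simulate(0.5) settles.
    def simulate(gain, steps=20, target=40.0):
        temp, measured = 20.0, 20.0
        for _ in range(steps):
            temp += gain * (target - measured)  # act on a stale reading
            measured = temp                     # reading catches up late
        return temp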
For the latter, there's an entire field devoted to this problem called "system identification"; one popular textbook seems to be online nowadays at http://user.it.uu.se/~ts/sysidbook.pdf
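For a taste of what the field does (a toy example, not the book's notation): record inputs and outputs, posit a simple model structure, and fit its parameters by least squares.

    import numpy as np

    # Fit a first-order model  y[t] = a*y[t-1] + b*u[t-1]  from data.
    rng = np.random.default_rng(0)
    a_true, b_true = 0.9, 0.5
    u = rng.standard_normal(200)        # input we applied
    y = np.zeros(201)                   # output we measured
    for t in range(200):
        y[t+1] = a_true*y[t] + b_true*u[t] + 0.01*rng.standard_normal()

    X = np.column_stack([y[:-1], u])    # regressors: y[t-1] and u[t-1]
    a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    print(a_hat, b_hat)                 # recovers ~0.9 and ~0.5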
Even more interesting is that this idea from control theory completely transposes to government regulation, as in: it'd be really good if regulators had the faintest idea what they're doing (aka, in a perfect world, they would have a somewhat decent model of the system they're trying to regulate).
Except ... that's very probably not the case ... (I'm looking at you, economists).
I'm not sure I agree. From what I've seen, it's usually politicians that ignore unintended consequences and incentives that lead to poor outcomes. Not sure what that has to do with economists ... they don't run the country.
(edit: Ha! I just realized it's the same Principia Cybernetica Web!)
I think it's still cargo-cult control theory. You can't just make your team as diverse as your market and expect results to magically improve.
What I'm not seeing is how the "requisite variety" that control theory talks about relates to "diversity" in modern usage (which for practical purposes is coextensive with the political demand of hiring fewer members of certain demographics).
Let's use a less loaded analogy: intellectual diversity. The Manhattan Project, which needed to adapt to a most complex and adversarial environment (a world war), hired a lot of STEM scientists, in particular mathematicians, physicists and chemists, and mostly from a few top universities; in other words, a highly homogeneous group educated in the same few core subjects (linear algebra, real and complex analysis, Newtonian and quantum physics, special and general relativity). I do not believe that the Manhattan Project would have been better off hiring fewer STEM graduates and more psychologists, historians, yoga teachers and so forth. How would modern management science convince me that I'm wrong?
The heart of the issue is the conflation of "diversity" (a political concept) with control theory's "requisite variety". The two are related, but not the same.
I partly agree that it's cargo-culting, but that's not the whole story: the Melanesians after whom the Cargo Cult is modelled would probably not have tried to fire / doxx / cancel anyone who wasn't on board with the belief that building an airplane runway will by itself bring desirable Western goods.
What was being discussed was the idea that a company or organization that wants to appeal to the populace at large should have a workforce representative of that population at large. I think that’s debatable, but you can imagine that having someone on staff who can say “this is offensive (to my group of people)” can be useful to an organization. In this case, “I find this offensive” would not be a political or cultural message, but an actionable piece of data that says “For our goal of appealing to the populace at large, this statement/product may have lower, or negative, utility in appealing to this part of the population we’d like to like us/like to sell widgets to.”
What does this mean?
When the system becomes very complex, this involves simulation. Think of the game of chess as a system you try to regulate towards a win. The only working solution is to simulate gameplay, doing sequences of moves and evaluating their outcomes (search). A chess computer replicates the system it regulates.
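In code, that replication is explicit: a generic minimax searcher (sketched below, not any particular engine) has to be handed the game's own move generator and transition function to work at all.

    # Minimax search: the regulator contains the game's rules (moves()
    # and apply_move()) so it can simulate play forward and pick the
    # best line for whoever is to move.
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        scores = [minimax(apply_move(state, m), depth - 1,
                          not maximizing, moves, apply_move, evaluate)
                  for m in legal]
        return max(scores) if maximizing else min(scores)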