It troubles me that Chesterton charges the man who would tear down the fence with a duty: "Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
I see the charm of Chesterton's favourite rhetorical judo. But we have learned many hard lessons about the cost of losing the documentation. We should charge the man with the duty of looking up the purpose of the fence in the archive, and finding the date that the purpose ceased to hold.
We should go further and invent the conservatism of the archive. When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.
Write down your rules. Write down why you have chosen them. Write down what your critics say will go wrong. Write down what your critics say we should do instead. Keep it all safe in the archive for 100 years.
When things don't go according to plan, dig through the archive. Did you stick to your rules? Really? In a way that is faithful to the reasons why they were supposed to work? What about the critics? Did things go wrong in the way that they predicted, or in some other way?
If the critics predicted the exact way that things would go wrong, they win. Dig out their suggestions and give them a try. If the critics predicted different screw-ups than actually happened, cry. Nobody knows anything. But at least you have an archive. What it was like. What people thought. How it actually turned out. That is a basis for working out what to do next.
> We should go further and invent the conservatism of the archive. When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.
> Write down your rules. Write down why you have chosen them. Write down what your critics say will go wrong. Write down what your critics say we should do instead.
This is a lot less valuable than you're implying. The problem with this approach is that practices succeed or fail independently of why people believe they succeed or fail. The archive can only tell you what someone thought the benefit of a practice was; it can't tell you what the benefit actually was.
On the other hand some practices should have explicit rationale. For example I’ve learned that tests should contain some associated rationale for what the case is covering. Otherwise the next person to update the test risks eliminating coverage somewhere.
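As a sketch of what that can look like in practice (function and scenarios entirely hypothetical), each test case carries a comment explaining which behavior it guards:

```python
# Hypothetical example: a helper that strips a UTF-8 byte-order mark.
def strip_bom(text: str) -> str:
    """Remove a leading UTF-8 BOM, if present."""
    return text[1:] if text.startswith("\ufeff") else text

def test_bom_is_removed():
    # Rationale: files exported from some legacy Windows tools arrive
    # with a BOM; downstream parsing fails if it is not stripped.
    assert strip_bom("\ufeffhello") == "hello"

def test_plain_text_is_unchanged():
    # Rationale: looks redundant, but it guards the early-return branch;
    # without it, a rewrite could silently truncate non-BOM input.
    assert strip_bom("hello") == "hello"
```

Without the rationale comments, the second test looks deletable; with them, the next editor knows exactly what coverage would be lost.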
Most practices do have explicit rationales. For example, the practice of touching or knocking on wood after mentioning a hypothetical bad thing is explicitly justified by reference to the practice's power to prevent the mentioned misfortune from coming true. This is the norm; most behaviors have explicit rationales, whether or not those rationales justify the behaviors.
But as the example shows, a practice's rationale is generally not related to the function served by the practice. Even if a practice is adopted for straightforward reasons and more or less fulfills its intended purpose with few side effects, over time other practices will come to depend on that one in unclear ways. Where should that be written down?
You're right about superstitions but I think it's disingenuous to take a statement about engineering practices and expand it to include all statements ever made about how to do anything.
It's interesting to think of evo-psych as essentially the discipline of trying to explain the underlying rationale for the "Chesterton's Fences" within our own subconscious/preconscious behaviors.
I don't know that it's any worse off than anthropology, or any of the other social sciences.
There is certainly the temptation to engage in the naturalistic fallacy (i.e., "because something is natural, it is therefore necessarily moral").
At the base of it, evo-psych rests on the premise that for a behavior to persist, it must have a purpose. Asking what that purpose is, without ascribing a value judgement, seems like a reasonable avenue of inquiry.
> At the base of it, evo-psych rests on the premise that for a behavior to persist, it must have a purpose. Asking what that purpose is, without ascribing a value judgement, seems like a reasonable avenue of inquiry.
This conflicts with the broader understanding of evolution in general, which treats "selection" (changes that have a purpose) and "genetic drift" (changes that have no purpose) separately.
A more accurate framing would distinguish changes that improve the chances of survival and procreation from changes that don't, but that persist because they at least don't decrease those chances.
> There is certainly the temptation to engage in the naturalistic fallacy (i.e., "because something is natural, it is therefore necessarily moral").
My point was more that with evo psych, you can additionally declare almost any behaviour to be "natural" by coming up with some convoluted reasoning why it confers an evolutionary advantage.
I certainly see no problem with making it easier for a would be innovator/destroyer to do their homework by making such justifications readily available.
But yes, the intention here is really to communicate the importance of a certain mindset.
I agree with everything you've said except the part where you imply Chesterton was wrong to charge the demolition-proposer with a duty to do research. That duty absolutely still exists. In the situation described, even if people putting up fences have a duty to record why, the time for that to happen is long past and there's no utility in complaining about it. It's natural not to bring it up, especially if you assume that the records do exist in some dusty church or town hall attic.
I think one of the less appreciated corollaries of Chesterton's line of thought is that we should be wary of building fences in the first place. In programming, I think that abstraction should be avoided until the need for it arises. On the first pass (or two or three), a program should be written clearly and simply, and do only what it needs to. Working with many abstractions in software can be very difficult later, especially if it turns out the original abstractions were the wrong ones in hindsight. Because later, Chesterton's Fence will apply, and people will try to shoehorn their new work into the existing abstractions instead of taking a step back and realizing that the abstractions would never exist in that form if they were designed today.
There's no silver bullet with anything in life, but something to keep in mind.
> I think one of the less appreciated corollaries of Chesterton's line of thought is that we should be wary of building fences in the first place
Would that change anything?
Assume the original author did think about whether it was necessary, and they thought it was. You now have to think about whether the fence is still necessary.
Now assume the original author didn't think. You still have to think about whether the fence is necessary.
Would it change anything? Uh...of course? If you build fewer fences it will result in fewer fences. I'm not really sure what confuses you.
Unless your point is whether not building as many fences now will save you from Chesterton's reasoning for any fences already built, then well of course not.
Changing the metaphor, I'm just saying that you should consider stretching to avoid future injury. Of course that doesn't save you from dealing with injuries that have resulted due to a lack of stretching in the past.
> If you build fewer fences it will result in fewer fences.
Chesterton's Fence only applies to removing existing fences, so it obviously only applies to fences that already exist. It doesn't say anything about whether or not you should build the fence in the first place.
> Chesterton's Fence only applies to removing existing fences, so it obviously only applies to fences that already exist. It doesn't say anything about whether or not you should build the fence in the first place.
I'm also really having trouble understanding why this is so complicated for you. Chesterton's fence says you shouldn't remove a fence until you understand why it's there. Ergo the removal of a fence requires effort. Ergo you should not put up fences unless they provide actual value, because they will require effort to remove later.
Regardless, Chesterton's fence is a principle with whatever wisdom one chooses to draw from it. I guess if you disagree with me it doesn't really matter.
Software engineering has been blessed, over the past several decades, by exponential increases in available resources (Moore's Law et similia). This has somewhat offset the inherent laziness the author speaks of in his essay: while it requires effort to implement a feature (or an abstraction, in your case), the subsequent increase in processing power rapidly makes the performance penalty 'free'. Now that Moore's Law seems to be slowing down (though maybe that was just Intel's monopoly position in CPUs showing, as AMD's recent releases and the unending growth of GPU and ARM performance suggest), implementing additional features that carry a performance penalty will entail living with that slowdown in perpetuity, and therefore will (hopefully) be taken into account.
I'm not saying that people will revert to writing assembler on the metal, but perhaps the whole trend of VMs inside VMs in Docker containers on compartmentalised OSes will slow down.
Honestly I didn't even consider the abstractions as a problem due to performance penalties in the vast majority of cases. I consider the abstractions a problem due to their cognitive penalty and the fact that you often need to put a square peg in a round hole (which requires a cognitive load both initially and in maintenance) due to incorrect abstractions. The fact that automatic optimizations (e.g. compilers) may get rid of these abstractions' performance costs doesn't mean their cognitive costs are removed.
I agree we shouldn't necessarily be writing in assembler, but we should be cautious of over-generalization. A math professor I had once said you should study problems in the right generality and no more. I think software is the same. Abstractions have a cognitive cost, and those costs are often not apparent until the abstractions are taken together. Keeping a design simple and focused is a way to avoid those costs.
> It troubles me that Chesterton charges the man who would tear down the fence with a duty: "Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it." ...When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.
I don't think those are mutually exclusive. As a maintainer, I insist that commit messages which fix things contain loads of information about the problem they're fixing; and in my own commit messages I often include references to other possible approaches and why this one was chosen, to make the "Chesterton's Fence" challenge easier to deal with.
But if you find a bit of code doing something that looks unnecessarily conservative, and the original commit message from 10 years ago doesn't explain why, it's still a risk to remove the checks; and it's still right to insist that an attempt be made to go back and find out why things were done that way in the first place.
In a sense, the second is the reason I do the first. The pain of trying to reconstruct from 10-year-old email conversations the purpose of a given line of code is what motivates me to insist that commit messages contain all that information in the first place.
Similarly, the difficulty of going back to find out why a law was enacted in the first place should be a motivation for people to insist that we store that information for the future.
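A commit message written in that spirit might look something like this (project, failure mode, and alternatives all hypothetical):

```text
fix(resolver): retry DNS lookups on transient SERVFAIL

Problem: intermittent startup failures on hosts whose local resolver
returns SERVFAIL while the network is still coming up.

Why this approach: a bounded retry with backoff keeps startup latency
predictable. Alternatives considered and rejected:
  - cache the last good answer: stale records pointed clients at
    decommissioned hosts;
  - raise the resolver timeout: slows every lookup, not just the
    failing ones.
```

Ten years later, anyone staring at the retry loop knows both why it exists and which "obvious simplifications" were already tried.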
Just recording those things isn't sufficient. What if the person hasn't read the archive? What if they don't have the capacity to understand what they are reading in the archive?
It's an old saying, "I can explain it to you, but I can't understand it for you." And that's what Chesterton is ultimately driving at: don't reform something if you haven't taken the time to understand why it is the way it is. Documentation helps, but is not sufficient.
> We should charge the man with the duty of looking up the purpose of the fence in the archive
A fine sentiment, but it still falls short of preserving and benefiting from the wisdom of yore.
In the eternal battle of the new & better versus the old & tried & tested, the old & tried & tested may have been well reasoned, formulated, and described in the past.
But then again, it might never have been reasoned through; instead it might have evolved, might have undergone a process of natural selection, might have won the long-term battle of ideas without being understood. The reasons might not have been explicitly known at any point in the past.
A good idea doesn't necessarily appear good, even on closer examination. There are some counter-intuitively good ideas; for example, using random, stochastic processes[1] can yield better results than strict plans in the face of partial information. For another example, we humans are pretty bad at understanding exponential and other non-linear processes (cue the present coronavirus worries), and any reasoning based on estimating several interacting non-linear effects tends to diverge wildly from reality. Not only is our human ability to rationalize limited; at times we aren't even aware that our ideas are suboptimal, and that there are whole classes of possible better solutions. Conversely, there are several nineteenth- and twentieth-century ideologies, programs, projects, etc. that were well grounded in reasoning, but turned out to lead to such terrible results that we decided never to try them again.
There is a slow- but constantly-running natural selection of ideas, customs, traditions, organizational schemas, cultural trends, etc. The better ones tend to become more successful, and establish themselves as part of the culture. Some of the selected ideas won't be fully reasoned through or even understood at that time, and that is fine. Just like we make technical & business decisions with imperfect, partial information, we need to be able to handle the past customs, ideas etc., with partial, or even missing, reasoning behind them. Having been successful in the longer term is also a pretty good indicator of their qualities.
--
[1] in the context of a hunter-gatherer society, using supernatural divination for making hunting plans has no rational basis, but in certain natural settings it yields optimal results due to the essentially random nature of the divination
It's a fun essay, but because of the way it's written, a reader could easily confuse "second order thinking" with "understand why a decision was made before planning on changing it." I think the author distinguishes them, but the reader might not notice the distinction because of how it's written.
Here's how I would distinguish these two ideas (hopefully the author would agree, but no guarantees):
* Second-order thinking is the ability to understand the impacts of the impacts of a change. If you're playing a chess game, looking only one move ahead (just the immediate impacts) means you'll lose against even a mediocre player. The better you can foresee later impacts, the better your decisions are likely to be. In the real world, there isn't a limited number of moves and no one knows the full state, so you can't really look ahead multiple stages for all possibilities. Nevertheless, trying is really important. The article's conclusion says: "The first step before modifying an aspect of a system is to understand it. Observe it in full. Note how it interconnects with other aspects, including ones that might not be linked to you personally. Learn how it works, and then propose your change."
* "Chesterton's Fence" is a useful rule-of-thumb to get at least a sliver of second-order thinking. Basically, if someone else did something, make sure you understand why they bothered to do it before you undo it. That exercise will help give you a bigger picture & may reveal something important that you hadn't considered.
At least, I think those are some of the points the author is trying to get across. If I've totally misunderstood things, I'm sure someone here will correct me :-).
This is exactly what I was thinking. I was intrigued by this idea of second order thinking, but after finishing the article I didn't think that I had seen an example of it.
Someone was asking the other day what other readers use git history for over the long term.
We use it for differentiating Chesterton's Fence from cargo culting and arbitrary decisions.
You go back and find out from context that code was there to fix a bug or implement a requirement that no longer exists, you remove the code. You find out the person just liked to write code this way, you remove the code.
You find out that it was to solve a problem on FreeBSD or old Docker versions or for your biggest customer, you leave the code, or reimplement the fix.
Nobody will remember why they did something, but you can often figure it out from the shape of the commit and its siblings.
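A sketch of that kind of archaeology with stock git commands (repository, file, and commit messages all hypothetical; a throwaway repo stands in for a real history):

```shell
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

# Simulate a history: a plain client, then a mysterious workaround.
echo 'fetch(url)' > client.py
git add client.py
git commit -qm 'initial client'

printf 'retries = 5  # resolver flakes\nfetch(url)\n' > client.py
git add client.py
git commit -qm 'work around resolver flakiness seen on FreeBSD 11'

# The "pickaxe": which commit introduced the suspicious retries knob,
# and what did its message say?
git log -S 'retries' --format='%h %s' -- client.py

# Line-level attribution for the guard itself:
git blame -L 1,1 client.py
```

`git log -S` finds the commits that changed the number of occurrences of a string, which is usually the fastest way to locate the commit that erected a particular fence, and its message and siblings tell you whether the fence was a bug fix, a customer requirement, or just a habit.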
It's worth mentioning that cargo culting and arbitrary decision-making aren't necessarily bad from the perspective of the Chesterton's Fence thought experiment.
It may not matter if people who left the fence alone even knew why they were doing it. What matters is whether the fence serves an important purpose. Of course, there is wisdom in finding out exact reasons, but sometimes the reasons aren't consciously chosen. There are such things as emergent and unintended benefits.
Cargo-culting can sometimes be okay. I normally have to work on large legacy code bases, and sometimes it is simply better to copy something similar in the codebase in terms of functionality and follow the structure of what went before. Without documentation it is sometimes the only way to understand the codebase, especially when people have (ab)used reflection or dependency injection.
However, a number of readers were irritated and left unconvinced.
Chesterton's Fence sounds like Pascal's Wager - it's perfectly convincing and rational until you realise it's misleading and oversimplified.
Of course you shouldn't get rid of things just because you want to change them. But unless a fence-builder left an explicit record explaining why the fence is there, your belated estimate of their reasons is at least as likely to be wrong as your desire to remove the fence.
And that doesn't matter, because the original reasons are irrelevant.
The fence may have been built for a good reason, and the reason may no longer apply. It may not have been built for a good reason, but a good reason now exists.
Skip straight to the problem, ask stakeholders (fenceholders?) if you can find them, run predictive modelling, maybe some A/B testing if it's Schrodinger's Fence, make a backup just in case, and then do what you need to do.
The original fence builder is likely long gone and may not want to talk to you anyway.
(Although if it all goes horribly wrong, maybe try looking on LinkedIn?)
Is that really what second-order thinking is? I wanted to see some examples of successfully thinking through consequences of consequences, but the examples seem to go only one level deep. For example, in the startup-CFO example, I would think that old employees jumping ship when snacks are made paid is a first-order impact: the CFO simply did not think through all the implications. Second-order thinking, I would think, is thinking about the impact of the savings itself, and realizing that it might not be worth it.
From the CFO’s perspective, the cost-savings is the first-order impact. Then unhappiness and possible attrition is a second-order impact.
Side story: Was at a place a long time ago where free soft drinks were made no longer available. First day of the new policy, email to all staff announcing the change. An hour later, one of the stronger engineers sends an email to all local staff announcing he’s thirsty and going to the supermarket to buy some soft drinks. Invites all engineers along in case anyone else is thirsty. Several respond (on thread) that they are also thirsty. About 90 minutes later, emailed status report of the successful 5-person mission to the supermarket is sent and proposing a rotating schedule of supermarket trips. The next day, email announcing the return of free soft drinks is read by all. :)
The thing about second-order thinking is that once you recognize it, you start seeing the lack of it everywhere. And, congratulations, you're now struck by the urge to fix it somehow.
This sucks, by the way. You'd probably be happier and have more friends if you didn't notice. Once the screaming stops, somebody has to tip the pee out of the shoe. And that's you now, peeman.
Does the author really think that the creators of what he calls 'supposedly hierarchy-free companies' don't meet Chesterton's bar? That they swept away hierarchy without having first thought about why hierarchy exists in the first place?
Or does he perhaps suspect that they haven't thought about it enough? After all, they can't have truly understood why hierarchies were first instituted, because they have failed to see the benefits he so obviously believes hierarchical organization has.
See, this is the problem with Chesterton's fence. You tell the reformer to go away and think about why the fence is there, and then she comes back and tells you the fence was put there to keep her out. And you say 'ah, poor child, you still don't really understand why the fence is there.'
Sometimes the burden of proof needs to be the other way round.
We're ripping out that fence. And if someone wants to keep it, they need to go away and think and, if they can tell us why that fence needs to continue to exist, only then can they stop us.
Whoops. You just removed the fence which was put in place thousands of years ago to contain radioactive waste with a million-year half-life. You couldn't read the warning signs since they were written in an ancient long-forgotten script. So you assumed it must be unimportant.
One alternative would be for you to follow the prescription, "find out why the fence was put there, then I may allow you to remove it". Investigate what's on the other side of the fence, without tearing it down (this might involve experimentally re-discovering the theory of radiation, depending on the current state of mankind's knowledge). Then go back to the authority and report that the fence seems to be there to contain radioactive waste with a million-year half-life. Armed with this information, the authority will be in a better position to determine whether or not to allow you to tear down the fence.
> "Take the case of supposedly hierarchy-free companies...."
>
> "Someone needs to make decisions and be held responsible for their consequences. During times of stress or disorganization, people naturally tend to look to leaders for direction."
it's ironic that the author misses the first-order purpose of hierarchies, which is to solve the coordination problem, not simply to have a leader barking orders.
we've also (re-)learned a lot about the second-order effects of allowing power to centralize in organizations as a result. yes, hierarchies can be useful, and yes, hierarchies are highly problematic. so yes, we should be looking at better models of coordination.
seems like they didn't think through any order of effects before making control hierarchies one of their central examples.
There seems to be a bit of confusion regarding what “second order thinking” is (awareness or consideration of the consequences of consequences) and an instance of it (removing a fence without understanding why it had been erected in the first place). The latter is an instance of the former (or rather, absence thereof) but the former has far greater ramifications and applications than those that are (currently) being discussed in these comments, which seem to be concentrating either on software engineering aspects (abstractions, to name one) or on the idea of banishing hierarchy from firms. That’s all fine and well, but it hardly begins to scratch the surface of what the author presumably thought this principle encompasses (and with whom I agree).
Anybody interested in second-order thinking and decision-making could do far worse than obtain a copy of Dietrich Dörner's excellent The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (1997), wherein the author explores "patterns of thought that, while appropriate in an older, simpler world, prove disastrous for the complex world we live in now". One important aspect of those errant heuristics is the failure to consider consequences of consequences. I highly recommend it.
> what “second order thinking” is (awareness or consideration of the consequences of consequences)
This is a valid interpretation of the term. Another, more specific interpretation is "including other minds in your mental model," e.g., when you're imagining the results of your actions, you don't just include actions and reactions of a mechanical nature. You also consider other agents, their values, and their resources.
This is usually put forward as a conservative parable. I'm not sure it necessarily takes a position there. It does not assert that the fence should not be removed, just that one should make the effort of attempting to understand its context prior to doing so.
It privileges status-quo-bias over novelty-bias. Both biases are bad (neither is a substitute for reason), but Chesterton's fence presumes that status-quo-bias is less harmful.
Which is fine if you like the status quo.
Fundamentally, conservatism says 'things are generally fine, change will probably make things worse'. Progressivism says 'things could be better, change is necessary to make things improve.'
Chesterton's fence is preserved by default because Chesterton assumes it's not doing any harm.
I don't believe this is a true understanding of Chesterton.
He doesn't believe either that the status quo is good or that progress will probably make things worse.
The issue is being ignorant of why things are the way they are. One point the author makes is that people are lazy, so the fence wouldn't have been built without a good reason. More subtly, human emotions and ideals could have evolved into the current situation.
If you don't know the reason for the current situation, then your new "solution" is lacking knowledge and you should have realized that.
How deep do you want to go on 'the reason' for the current situation? What kinds of 'causes' do you admit as constituting a valid understanding of the situation?
Why is the question we're interested in only 'Why was the fence built?'
What about: 'Why is the fence still there?' or 'Why do we keep maintaining this fence?' or 'Why wasn't a wall built?' or 'Why are you so invested in the continued existence of this fence?'
What makes you think anybody is only interested in "Why was the fence built?" and not these other questions?
I think these are all good question that Chesterton thinks you should be asking. The point is that the one who tears down fences or ceases maintaining fences without asking these questions is a fool.
again, you’re putting the burden of proof only on the person who wants to change.
Why isn’t it also an equal obligation on those who want to maintain the fence to justify that? Keeping the fence is not free of costs.
Implicit in Chesterton’s parable is the idea that those who preserve things have understanding, while those who would change things do not. That is why it is seen as a conservative parable.
The point is that you can't make an informed decision about whether or not the status quo is a good idea unless you understand the reasons behind the status quo. If you don't understand the reasons behind the status quo, your opinion is worth diddly squat one way or the other. That applies equally whether you are inclined to keep the status quo or reject the status quo.
So, to me at least, the parable isn't about privileging the status quo but about making informed decisions. Someone who assumes the status quo is without merit is not making an informed decision.
Indeed, I've always found it a little bizarre that anyone objects. Do they really think the best strategy is not to find out why the fence is there before removing it?
The issue is when nobody really knows... the fence has just been there as long as anybody remembers. Then people start to make up justifications. Maybe the fence protects against trolls? Maybe the fence is important for the moral character of the population, and if it is removed people will think there are no limits to anything and will soon descend into murder and cannibalism. Better safe than sorry!
The fact that people might come up with silly/nonsensical/bad justifications for the fence doesn't seem to undermine the importance of doing due diligence to determine whether there is a good reason for the fence in the first place.
Due diligence is great, but my objection is to the default stance of keeping the fence if you can't explain why it is there. The fence becomes onions in the varnish, or cargo cult. At least in software development it is poison to have too many "we can't change that area of the code, nobody understands what it does" zones. Sometimes you just have to tear down the fence and observe the consequences.
I guess I don't read the parable as insisting that we have to keep the fence if we can't find the reason for it after due diligence. It simply doesn't try to speak to that case.
What are you, some kind of robot? (If so, awesome!) This is natural language, you can't treat it like a statement in predicate logic.
It's very common in natural language to state general principles which apply most of the time, without intending them to be iron-clad rules that apply in all circumstances without exception.
For example, if there is a "Stay off the grass" sign, you'd avoid going on the grass, but you'd still go on if the need was important enough.
The case of being unable to find the reason is an exceptional circumstance in which the principle does not apply, at least in quite the same way.
We keep knocking down fences and it keeps being great, the most recent being the millennia-long prohibition on gay marriage.
People put up fences for all sorts of dumb reasons. If you see a fence that seems to cause more harm than good, knock it down. In the rare situation where this leads to surprising problems, put the fence back up.
In software, most WTFs are just bad engineering. You should figure out what the code does and how you can do it better, not take a step back and contemplate the mind of its author.
No, Chesterton's Fence says for you to leave and think about why someone put it up originally, and figure out what was good about the fence. What you should instead do is look at the fence and the dangers on the other side and see if it seems worth it.
A lot of 'fences' in engineering are there for no good reason at all, so blocking until you can come up with some good reason for the fence is a waste of time.
Chesterton would have you believe that the ancients had great wisdom, and that you should try to find that wisdom before you make changes to anything. The reality is, the ancients knew even less than you do, reasoned worse about problems, and used inferior systems for generating solutions. Progress! It's real!
To remind what Chesterton said:
"The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"
> No, Chesterton's Fence says for you to leave and think about why someone put it up originally, and figure out what was good about the fence
I don't see as much difference between that and "figure out what the code does" as you do. I agree they have some differences, but ultimately code's functionality and the reasons why that functionality exists in its current location are strongly related. The "go away first" part is a parable; I don't think you have to actually go away.
Overall I'm somewhere between you and the article's author. I think your ignorant ancients argument has some merit, but only in certain cases. For instance, I don't see why the author is assuming people who promote flat hierarchy companies never actually considered what the function of the hierarchy was. This seems pretty ironic given that his whole point is not to assume that builders haven't engaged their brains. Also, as with ignorant ancients, it's possible that these hierarchies evolved, which could mean they're functional but not optimal, and may in fact no longer be required.
On the technical side, I think we've all sat down to rewrite some code that looks too complex, only to have the reasons for the complexity become apparent as we research the subject. This might just be a case for better documentation.
After learning the concept of first-order thinking, I have noticed how often mediocre managers fall into its traps.
They cut a cost without realizing it will make everyone waste more time. They make decisions that only make sense if no competitors react. They solve problems in ways that end up causing other problems. It is impressive how this simple ability is such a strong predictor of performance.
One interesting book I have read is Prisoners of Geography, and it really made me think differently about geopolitics. Natural resources, trade routes, and defensive barriers have a huge impact on a country's geopolitical strategy.
It made me go from “Why the hell are they doing that!?” to “reality is probably more complex”.
I have always associated second-order thinking with the role of a systems analyst. Systems analysis is a discipline that deals in systems and their response to changes in input or environment. The notion of second-order effects is essentially the recursion of a system's outputs perturbing its own inputs, leading to "second-order" outputs.
I have always read Chesterton's Fence as an allegory for making oneself aware of these effects prior to changing a system.
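The feedback loop described above can be sketched in a few lines. This is a toy model with an invented feedback gain, just to show how output perturbing input produces an effect a first-order view would miss:

```python
# Toy illustration (hypothetical numbers): a system whose output feeds
# back into its own input, producing second-order effects.

def step(state, external_input, feedback_gain=0.3):
    """One update: the new output depends on the external input plus a
    fraction of the previous output fed back in."""
    return external_input + feedback_gain * state

# First-order view: the response to a unit input is just 1.0.
# With feedback, the output keeps perturbing the input and instead
# settles near the fixed point 1 / (1 - 0.3), about 1.43.
state = 0.0
for _ in range(20):
    state = step(state, 1.0)
print(round(state, 3))  # ≈ 1.429
```

The gap between 1.0 and 1.43 is exactly the "second-order" contribution the comment describes: the system's own response, recursively re-entering it as input.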
I always thought of it in the sense of a Taylor expansion.
First-order thinking suggests that every system reacts linearly according to whatever the dominant factor is (the way you would basically expect). For example, increase prices and your sales will decrease.
Second order thinking suggests including the next-more-complicated effect in your model; effects that are negligible for small perturbations have to be taken into account. Increase prices and your product gets perceived as more exclusive and higher quality, and despite the valid decrease that the first-order effect predicts, sales go up.
I agree, that is a perfectly valid example. You've perturbed the system (raised prices) and there is the "expected" input and the "input that arises from the perturbation."
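The price example from the exchange above can be made concrete. This is a minimal sketch with made-up coefficients (base demand, elasticity, and a "prestige" term are all invented for illustration), not a real pricing model:

```python
# Hypothetical demand curves illustrating first- vs second-order effects
# of a price increase. All coefficients are made up for illustration.

def demand_first_order(price, base=100.0, elasticity=2.0):
    """First-order model: sales fall linearly as price rises."""
    return base - elasticity * price

def demand_second_order(price, base=100.0, elasticity=2.0, prestige=0.05):
    """Second-order model: add a quadratic 'perceived exclusivity' term
    that is negligible for small price changes but dominates for large ones."""
    return base - elasticity * price + prestige * price ** 2

# For a small perturbation the two models nearly agree...
print(demand_first_order(10), demand_second_order(10))   # 80.0 vs 85.0
# ...but for a large one the second-order term outweighs the linear decline.
print(demand_first_order(45), demand_second_order(45))   # 10.0 vs 111.25
```

This mirrors the Taylor-expansion framing: the linear term is a fine approximation near the operating point, and second-order thinking is deciding when you've moved far enough that the next term in the expansion matters.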
As suicide and depression are on the rise, I think we've removed many fences without even knowing it. Social customs replaced by technology. Lessons not passed on, but ill-gotten through TV. Assuming we know better than all the customs of human history. Jamming everyone, whether round or square, into the same hexagonal hole.
Proverbs 22:28
"Do not move an ancient boundary stone which your fathers have set."
That's a valid concern. Although, I'm more inclined to think it's linked to our leveling up in the Maslow hierarchy. Caring about the next meal is replaced by caring about other concerns that are less controllable.
Also, regarding the social customs, the point is that groups of people are an organism that has to adapt to the environment and that environment has changed. Social customs hold a small town together, but it's absolutely unreasonable to depend on them in a city, where you're living next to strangers that change all the time.
Why live in a place where you don't know your neighbors, then? Isn't that depressing in and of itself? In an ideal world, your friends would become your neighbors and people would move closer to the people they love, to make "love your neighbor" easier.
Scarcity and anisotropic distribution of resources? Dynamism of your lifetime requirements? Also, some age-old sentiment encoded in the "keep your friends close and your enemies closer" adage? Those would be my 3 guesses for the most significant contributors.
Also, I would think scarcity, in this climate of supply-chain interruption, would favor the spread-out places, where my supply chain is in my back yard if I'm a farmer, whereas the city needs to funnel much into one greedy little area.
And when you say "resources", again, you must not be talking about love, which is what most people truly want when they earn their money. As with my earlier sentiment of wanting to live near friends: living near them makes the "love" resource much more plentiful.
Perhaps city folks just don't understand it.
"But if we have food and clothing, we will be content with these" 1 Timothy 6:8
He goes on in verses 9-11:
"9 Those who want to be rich, however, fall into temptation and become ensnared by many foolish and harmful desires that plunge them into ruin and destruction. 10 For the love of money is the root of all kinds of evil. By craving it, some have wandered away from the faith and pierced themselves with many sorrows. 11 But you, O man of God, flee from these things and pursue righteousness, godliness, faith, love, perseverance, and gentleness."
There is something greater than the age. The new dynamism is one of taking the lessons learned in the cities back to the homelands and distributing the gold (like slaying a dragon). Sure, I went to the city when I was cutting my teeth, but now I bring business back to my hometown. In a remote world (and in a coronavirus world), this makes more sense.