I manage a highly complex AI service and make config changes from time to time, and part of that duty is config cleanup -- removal and simplification. This is a necessary job: in the past I have found copy-pasta where settings overrides for one environment were incorrectly copied to a similar one, leading to a bad customer experience. As a personal policy, before submitting the cleanup PR, I go back in git history to when the config was introduced to understand why it was done the way it was, in case I am missing something. I call this "paying the Chesterton Tax."
Usually it's fine, though once in a while the original author gets annoyed at second-guessing of work long after it was considered settled. One recent curious example was when I found a specific setting applied to one out of a set of many similar subservices; when I asked around for why only one of them got it, when the current data suggested all subservices would benefit, a senior "architect" got annoyed with me for asking questions. As best I can tell, the project was shelved after a few months: the architect solved the primary issue, got bored with it, and delegated the polishing touches to a junior engineer who went on maternity leave shortly after. Which is a fine enough explanation for me to proceed, but nobody's eager to admit such social causes.
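A minimal sketch of that kind of git archaeology (the repo, file, and setting names here are all hypothetical): `git log -S` surfaces only the commits in which a given string was introduced or removed, which is usually the fastest way to find the commit that created a config entry.

```python
import os
import subprocess
import tempfile

def run(args, cwd):
    """Run a command in the given directory and return its stdout."""
    return subprocess.run(args, cwd=cwd, capture_output=True,
                          text=True, check=True).stdout

repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)
ident = ["-c", "user.name=demo", "-c", "user.email=demo@example.com"]

# First commit introduces the setting we will later investigate.
with open(os.path.join(repo, "service.conf"), "w") as f:
    f.write("retry_limit = 8\n")
run(["git", "add", "service.conf"], repo)
run(["git", *ident, "commit", "-q", "-m",
     "Raise retry_limit for flaky upstream"], repo)

# A later commit touches the same file without changing that setting.
with open(os.path.join(repo, "service.conf"), "a") as f:
    f.write("timeout_ms = 500\n")
run(["git", "add", "service.conf"], repo)
run(["git", *ident, "commit", "-q", "-m", "Add timeout"], repo)

# 'git log -S' lists only commits that added or removed the string,
# skipping commits that merely touched the file around it.
history = run(["git", "log", "-S", "retry_limit", "--format=%s"], repo)
print(history.strip())
```

From there, the commit message (and the linked PR or ticket, if the breadcrumbs exist) is where the Chesterton Tax gets paid.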
> ...I go back in git history to when the config was introduced to understand why it was done the way it was, in case I am missing something. I call this "paying the Chesterton Tax."
Maintaining code I wrote 15-20 years ago, I find myself doing this with my own commits. I have to get back into the mental place I was in when I wrote the original code, and then it usually becomes clear whether it was an expedient hack or a thought-out solution for a corner case.
The worst feeling is "cleaning up" some code, discovering the edge case under which the "simple" case fails, then re-implementing the same solution.
Whenever I am in that expedient hack mode I've started commenting to explain my dilemma. My most valuable comments typically come out at that time.
Such and such can't be done right now because of such and such.
Often it doesn't matter if I ever resolve those comments or solve that problem I hacked through. The value comes from avoiding the rabbit hole next time.
If there were one lesson I could forcibly transfer into juniors via some kind of mind control it would be the lesson of reading my own code from 20 years ago and wondering what idiot wrote that and why didn't they explain XYZ better in the docs.
This is a good reminder to add code comments explaining why code is written a particular way. Also helpful is including breadcrumbs in comments and commits to relevant bug reports, docs, or commits.
You are external to the AI service; if you break the AI service you are still around to fix it. The consequences are relatively minor and easily remedied. Chesterton's fence is about maintaining society: if you break that, you may not be around to fix it, so it stays broken; the consequence is major and not easily remedied. To force a software analogy, it would be like working on your autopilot software while you're flying on the same autopilot software - you're going to want to be very, very careful when making changes.
As a senior engineer, I struggle sometimes with people critiquing my work. For me, it all comes down to pride and ego. It’s been especially hard the last year, as I’ve been vying for a promotion. I need to work on humbly and joyfully accepting feedback from others.
The issue with Chesterton's fence is something of a meta-problem in conversation and politics in general. It is the ultimate thought-terminating cliche. If you accept it as given, it allows you to build walls faster than you can investigate their removal, and by the time you have investigated it, they are building walls around your original position. It's an argument for inertia, fine in its basis, but over-applied it becomes "Chesterton's big fucking infinite barrier" instead of a simple fence.
I doubt the conclusion Chesterton wanted anyone to draw was "consider carefully before breaking down a wall, but build all the walls you want without due diligence". The original context of the quote was an essay about reform. This is ultimately a thesis about conservatism in the old school sense. I think he wants you to think hard about the reasons for doing anything. I also don't think he'd refuse to tear down a wall if it presented an urgent threat—metaphorical wall or not. He's not a wall maximizing AI.
And, fortunately, I think you're the only one I've heard of who has interpreted it in this other, interesting way.
I don’t think it’s particularly uncommon to find bureaucrats who use “we need to do more research from first principles” to frustrate progress of any form.
I can totally see how a bad faith interpretation could lead to “Chesterton’s big fucking infinite barrier”. And of course bad faith interpretations abound in politics.
It reminds me of people I’ve worked with who will always bring up edge cases in any discussion. They use their “ability” to identify edge cases to shut down conversation and present themselves as the smartest in the room. If they can think of ways this design might fail, of course they must be the most qualified to do the design, right? It really has a chilling effect on teams and has, in my experience, led to really bad technical decisions.
I fully agree about the potential for bad faith interpretations of Chesterton’s fence, and I’ve encountered them a lot.
But regarding edge cases, I’ve come to see that type of person in a different light. The way I see it, edge cases pressure test the design. Some edge cases we just don’t care about, and when someone raises them, we put them on a published list of non-goals. This acknowledges that they exist and shuts down future naysayers. But sometimes an edge case ends up mattering a lot; enough to change the direction of the design. And for that reason, I welcome the edge case hunters. If managed, it becomes a valuable source of feedback and strengthens the overall design.
But to your point, this type of individual can sidetrack the team if not managed. Having processes in place and tactics to incorporate the concern without getting stuck on it is critical, and not always easy.
In this regard, I think it’s a bit different than the fence. The fence is a barrier that already exists, while the pessimistic edge case finder is trying to build fences that don’t yet exist.
Wait, are you saying that if Alice learns of an edge case by having it pointed out to her by Bob, then Alice is a priori unqualified to analyze whether the edge case is important?
Ah, the nitpicking contest. I know it well :( My solution is to bluntly and quickly ask for a likelihood estimation. If the nitpicker can't tell, they have to find out for the next discussion. Until then it is shelved. If it is below 80% it is shelved until it becomes more salient.
That led to a few interesting developments. 1) Participants state their edge case and immediately shelve it themselves. 2) Or they come prepared with good reasoning why case X is important and must be treated now. Discussion itself became more constructive, too. In any case, a win!
I think Chesterton's fence is awful. It's cited as a nugget of infinite wisdom, and it's the opposite.
Anyone who's ever worked in a corporation knows how strong inertia is. Chesterton's way of thinking is the default. Cruft accrues because people are so averse to removing things from the code base. What if someone needs this fence? Just to be on the safe side, I'll leave it here. Little upside to removing it, lots of possible downsides. That's how people think.
And this essayist comes and writes it down as some great insight, and people can point to it and get legitimacy.
Everyone, have guts. Be brave and call a spade a spade: Chesterton's fence is the lazy guy's rationalization to leave things as they are. And to look wise doing that.
It is annoying because it seems to have been watered down to “have the correct amount of prudence” at this point. Which is definitionally correct. But we don’t need an analogy to get there.
It’s effectively the precautionary principle, and it’s bad epistemology for the same reason the precautionary principle is bad epistemology.
You’re imagining a danger whose reality you by definition cannot explain to anyone, and then you’re arguing that some very specific action should not be taken because it might trigger the imaginary danger.
Chesterton is talking about cultural systems which have evolved into their current form over a very long expanse of time. They are the result of a very long dynamic process.
It’s easy to look at the end result of that process and assume that one can easily move things around. But that’s not how the system came to be in the first place.
Maybe the new changes would be inconsequential. Maybe they would even make things run better. Or maybe they would be just an evolutionary dead end, eventually discarded by natural selection.
The point is that one needs to be humble and understand not just the current state of the system but also how it gradually came to be formed over time. Only then should one start proposing changes, carefully.
I think that Chesterton's fence is a useful guide when it comes to some incomprehensible systems, including our own biochemistry and physiology.
Once upon a time it was thought that some organs are useless (thymus in older age, appendix pretty much always). This has been revised. Nature often introduced hard-to-understand systems in order to cope with something we don't even realize is a problem.
Everybody who is listening to what you say and countering with, "sure that's possible in an extreme bad faith scenario" is falling victim to a black and white thinking fallacy.
If it's possible in the extreme, it's possible in a fine grained gradient between the two extremes. The danger isn't the infinite barrier, which I believe you posed as a thought experiment. The danger is death by a thousand cuts of new ideas that are so important that they could make the relevant concerns irrelevant.
How is it a thought-terminating cliche? If anything, it's a thought-initiating cliche. It compels you to consider the actual trade-off between keeping the barrier and removing it, instead of just dismissing the status quo as "stupid old shit that makes no sense, so we don't have to assign it any value". Yes, it's inertia, and inertia is good, at least in reasonable amounts. It allows for permanence, planning, prediction, order. Of course, overdoing it is bad; that's true for any principle.
> and by the time you have investigated it, they are building walls around your original position
Who are "they"? It seems that you are trying to blame a general principle on some kind of fight you personally are having and not winning. But that's not the point of the principle.
The thought experiment assumes that people aren't building walls or doing things for no reason. The main issue with tech bros using it is that Chesterton was aiming it at people who were trying to change social norms that had been around for centuries, built on the cumulative knowledge and wisdom of millions of people.
It wasn't supposed to be used for some two-year-old startup's SOPs, software architecture, or business decisions. The argument assumes you already have a system that has been working for a long time and was well thought out.
The criteria for removal are clear on paper, but impossible to meet, so the effect is the same.
It is impossible to meet because the reason why the proverbial fence was put up is, more often than not, a mixture of misunderstandings, logical fallacies and political motives from both the one who put it up, as well as other stakeholders involved, and most importantly, none of those aspects are documented and none of the people are around anymore. You'd need nothing less than a time machine to understand why the fence was put up.
As with all things in life, it's a matter of accumulating evidence up to an acceptance criterion. That criterion can never be 100% certainty, because nothing can be 100% certain.
The difficulty lies in agreeing on what that point should be, and on the magnitude of any particular piece of information as evidence for or against any particular hypothesis.
"The fence was a mistake" is a completely legitimate conclusion here. The principle exists to prevent us from jumping immediately to that conclusion out of convenience, arrogance, or ignorance. It should not be interpreted as preventing us from ever reaching that conclusion after some careful consideration.
"impossible to meet", "more often than not", "none of those aspects are documented": These are huge generalizations, and I very much doubt that they are generally true, or that anyone--including you--has done any kind of investigation into how general these conditions are.
You are the one adding the criterion that you need to know EXACTLY and fully why the fence was built. There is obviously a spot between “I have no idea why the fence was built” and “I know the exact intricate workings of all the minds that were involved in the decision to put up the fence”.
> The criteria for removal are clear on paper, but impossible to meet, so the effect is the same.
These arguments are exhausting, especially when we are talking about political issues.
Do you really believe that it is literally impossible to figure out that some legislative "fence" is really a moat protecting entrenched interests of the lobbyists who helped write the legislation?
Not OP, but I’ve encountered this in situations where legacy code involved in mission critical business functionality was nearly impossible to remove due to the potential risk of unforeseen impact.
In other words, if the potential impact of an unforeseen breakage is high enough (costs us or the customer $$$), it’s not worth risking the change even if we can find absolutely no good reason for the current behavior to exist.
Example: complex spaghetti that touches billing calculations. These things are better addressed by a full rewrite/redesign followed by a long period of running both things in parallel until we’re confident that we didn’t miss anything. Maybe this is just a more complex way of removing the fence, but I think it’s more like moving away from the ground the fence is built on.
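A parallel run like the one described above can be sketched in a few lines (everything here is hypothetical: the per-GB rate, the function names, the idea that billing is a pure function of usage). The pattern is to keep serving the legacy answer while computing the rewrite's answer alongside it and recording every disagreement for review:

```python
from decimal import Decimal

RATE = Decimal("0.09")  # hypothetical per-GB rate
mismatches = []         # disagreements collected during the soak period

def legacy_bill(usage_gb: Decimal) -> Decimal:
    # Stand-in for the tangled spaghetti nobody dares to touch.
    return (usage_gb * RATE).quantize(Decimal("0.01"))

def rewritten_bill(usage_gb: Decimal) -> Decimal:
    # Stand-in for the clean reimplementation we want to trust.
    return (usage_gb * RATE).quantize(Decimal("0.01"))

def shadow_compare(usage_gb: Decimal) -> Decimal:
    """Serve the legacy answer; log any disagreement for later review."""
    old, new = legacy_bill(usage_gb), rewritten_bill(usage_gb)
    if old != new:
        mismatches.append((usage_gb, old, new))
    return old  # customers keep seeing the old behavior the whole time

bills = [shadow_compare(Decimal(g)) for g in ("10", "250", "0")]
print(bills, len(mismatches))
```

Only once the mismatch log stays empty for long enough do you flip which function's result is served, which is exactly the "moving away from the ground the fence is built on" move.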
Sounds like you did the right thing by not changing the code! The rule worked
What would have been the advantage, in this situation, of ignoring the rule? What problem here is the Chesterton's Fence rule causing?
EDIT: (responding to below: your issue has nothing whatsoever to do with Chesterton's Fence. Making careful code changes to test the effect of removal is a great way to apply the rule. Building a separate system and testing in parallel would be an even cooler way to apply the rule)
The benefit of rearchitecting the code would have been high. As it stood, the status quo was blocking progress on other initiatives and making it difficult to meet the needs of customers. If we had ignored the fence and nothing broke, we could have immediately made major improvements that our customers had been begging us for.
The reality is that we don’t know if it worked. It’s possible that our conclusion that this had no reason to exist as written was accurate, and that doing nothing didn’t actually prevent anything bad from happening. It’s also possible that we saved ourselves from something we didn’t understand.
I’m not disagreeing with the premise of the fence, just pointing out that at times, even doing all of the due diligence to understand the fence isn’t enough to remove it.
Edit: I can't really respond to your response in its current form without these comments getting really difficult to understand. I disagree that this has nothing to do with Chesterton's fence. It's essentially a failure mode that can occur when applying the ideas behind the principle, i.e. there are times when learning everything we can about a barrier and believing with a high degree of confidence that the barrier isn't needed still isn't enough to remove it, due to other factors. This points to the fact that this is a guideline, not a law.
> I’m not disagreeing with the premise of the fence, just pointing out that at times, even doing all of the due diligence to understand the fence isn’t enough to remove it.
I don't understand this. Implementing billing from scratch and running it in parallel to the old code is a form of doing due diligence. I.e. it is an application of Chesterton's fence. It might be an expensive application but it is one.
To me, this anecdote shows the value of documenting the why of software. Some time ago I read that code should be added to systems in such a way that it is simple to remove again. This discussion deepened that insight for me. This is preparing for Chesterton's fence.
> Maybe this is just a more complex way of removing the fence, but I think it’s more like moving away from the ground the fence is built on.
In other words, it's still learning from the cautionary tale of the fence, but the end result isn't a classic removal or non-removal of the fence itself. And it's not as if the idea has hard and fast rules :)
I agree regarding the value of documentation. None of us involved in the project were at the company when the code was written, and so we were left with a dangerous task.
What if you investigate why the fence is there and find nothing or a list of contradictory or nonsensical reasons? This is incredibly common in real life.
If you must understand something to remove it you do end up with a lot of things that can never be removed. It’s a big reason that laws and regulations build forever.
> What if you investigate why the fence is there and find nothing or a list of contradictory or nonsensical reasons? This is incredibly common in real life.
Congratulations, you can now opt to remove the fence. This is also discussed in the article.
Chesterton's fence is philosophical, and as always, there will be a value proposition to consider. E.g. building a new beltway that would connect two previously unconnected cities, or developing trade routes, completely overrides the value of a fence used to keep wildlife away.
Nonsense, the response you'll get is to spend more time investigating:
"To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think HARDER and investigate better.”"
I actually like the principle as a tool for critical thinking, but like the precautionary principle which can also be a tool for critical thinking it has a tendency in practice to be converted into a thought-stopping and action-stopping cliche.
> find nothing or a list of contradictory or nonsensical reasons
Then you look at implementing process changes in the company at a higher level, there is 100% something absolutely wrong, but it may not be a problem with a physical system, but a human interaction one.
It is both a thought terminating cliche and a reminder to think more deeply. Clearly the original intent was the latter.
But many people who invoke Chesterton’s fence do so with the intent to lobby against changing something. It’s raised not as a caution, but as a barrier. The concept gets the most airtime in political circles, and I think this has led many people to misunderstand the original point.
This is not a straw man as far as I’m concerned, and while I agree that the original intent was about careful reflection, this is often not how people engage with it.
A predecessor seemed to have a series of processes, each of which seemed to be dealing with the errors created by the previous process. However, their notes made it clear that they thought that the mistakes had an external cause, blaming 'the system'. I felt that I could remove all of the steps and replace them with a correct first step. As an advocate of Chesterton's fence, I found it hard to simply accept that my predecessor had built a fence around a phantom, but it seems to be the case. By not finding any reason for the fence I kept hunting for the same phantom. This is the limitation of the model, the erector of the fence may have been wrong.
Seems like you applied the principle correctly here. When you first saw the fence, you did your best to find a reason for its presence. You only concluded that the fence had no purpose after you ruled out sensible alternatives.
The principle is not that all fences necessarily have a purpose for their presence, it's that we should assume by default that any fence with an unknown purpose has or had a legitimate purpose at some point, until we are confident otherwise.
I also applied another mental model from my experience. There seemed to be a lot more steps in the process than were necessary. The solution seemed too complex for the problem.
It seems like this worked correctly: you found notes describing their reasoning, but did not find their argument conclusive or persuasive. Not every fence exists for good _reasons_; Chesterton's Fence simply recommends identifying what those original reasons were, so you can evaluate their legitimacy before proceeding.
Their predecessor also did a good job realising that it might be hard for GP to apply Chesterton's fence because there wasn't an obvious reason, and leaving notes documenting that (lack of good) reason.
Somewhat related-- I remember tracking a bug to a poorly documented commit in a codebase. The dev had changed the memory allocation routine to multiply the requested size by 8 in order to fix "a crasher."
Removing that "fix" immediately triggered a series of crashers that led like breadcrumbs to the part of the code which contained the actual bugs being papered over.
Instead of Chesterton's fence, this is something like "Chad's bandaid." If you immediately strip these away the real problems stick out like a sore thumb and you can properly fix them.
It is a step at the end of a decade+ old system. My predecessor was only there for a year. I can't imagine how anything could have changed in this case. Upstream is a legacy product that runs without updates from the vendor. I have a note from my predecessor, from when he left, saying that he couldn't get anything changed at the vendor because there is no one left there who knows how to change anything on this product.
But I agree, as a generalisation the situation could have changed.
I think this parable is simply about identifying the reason for something. It says nothing about if that thing is wrong. It's about simply getting to the point where you can begin to question the reason.
Interpreted properly, it's an argument in favor of curiosity. There's always a temptation to cut corners, to react without investigating.
In casual conversation, nobody has to do homework, but you should be wary of arguments you can make off the top of your head.
Ironically, bringing up Chesterton's fence and not doing any research is a rather common move. But I try to value curiosity even when I'm not actually curious enough to do research at the moment.
Second-order thinking is the practice of not just considering the consequences of our decisions but also the consequences of those consequences. Everyone can manage first-order thinking, which is just considering the immediate anticipated result of an action. It’s simple and quick, usually requiring little effort. By comparison, second-order thinking is more complex and time-consuming.
This seems like an arbitrary distinction. Just any consequence is the product of a chain of events. Throw a rock to break a window? That can be roughly described by the sequence "flex muscles, aim at window, impart velocity to rock, hit window, break window" so you could say really simple things are "second order thinking".
Sure, the point "there's a benefit to thinking ahead" remains but the article seems to dress up simple things as deep epiphanies.
Edit: Also, Chesterton's fence itself is a deeper point. It's the "second order thinking" buzzterm that seems misplaced here.
Yes, it's like a dumb person trying to rationalize why a smart person made better decisions. The actual answer is that they considered more consequences, predicted consequences more accurately, and further into the future. But a dumb person not considering the infinitude of that situation might say "ahh it was the thing after the thing, if I just predicted one more step, I would have made the same decision".
When a person or thing is more intelligent than you, there's no way to distinguish one step ahead from n steps ahead. All you know is that they are making higher quality decisions. If you're doing second order thinking, you will be outsmarted by people doing third order thinking, and so on.
There is a lot of misinterpretation of this concept, which tends to reduce the problem to a "conservative vs progressive" debate.
A modern equivalent of his thought is the Lindy Effect [1], where every day a non-perishable thing survives (an idea, a technology, or in the original example, a Broadway show) adds another day in its expected survival.
Simply put, the longer a thing lasts, the longer it is expected to last. The fence is his metaphorical way of saying that whatever may be holding back a change (thus, a fence) may have its reasons, and its strength is positively correlated with its lifespan.
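That quantitative claim can be checked with a quick simulation (the Pareto tail parameter and sample size here are arbitrary choices, just for illustration): for power-law lifetimes, the expected remaining life grows roughly linearly with the age already survived.

```python
import random

random.seed(42)

# Pareto(alpha=3) lifetimes. Theory: E[T - t | T > t] = t / (alpha - 1) = t / 2,
# i.e. expected remaining life grows linearly with the age already survived.
ALPHA = 3.0
lifetimes = [random.paretovariate(ALPHA) for _ in range(200_000)]

def mean_remaining_life(age):
    """Average remaining lifetime among things that have survived to 'age'."""
    survivors = [t - age for t in lifetimes if t > age]
    return sum(survivors) / len(survivors)

for age in (2, 4, 8):
    print(age, round(mean_remaining_life(age), 2))  # grows roughly linearly
```

For exponential (memoryless) lifetimes, by contrast, the remaining life would be flat regardless of age, which is why the Lindy effect is specifically a heavy-tail phenomenon.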
A very similar thing applies to taboos in society. Various things were taboo in the past, often in various cultures, while recently people have made efforts to get rid of them. If we don't understand why these taboos existed in the first place, we may make a serious mistake.
On the contrary, I am convinced many taboos that persisted for millennia had very practical reasons. The world has changed dramatically since industrialization, therefore many of those reasons don't apply anymore. But that doesn't change the value propositions of those taboos in the dark ages or the stone age.
Could you please stop posting unsubstantive comments and flamebait? You've already been doing it repeatedly with this account. It's not what this site is for, and destroys what it is for.
Religious flamewar, btw, is particularly off topic here and especially easy to avoid starting.
I always liked the framing that Charles Darwin basically worked with the current forms of life as one huge Chesterton's wall: This is what species look like -- how the hell did they get that way and why?
Another approach is to temporarily remove the fence and the gate and monitor the effects. This is far less work than doing an organization-wide audit to first determine why the fence and gate exist.
Chesterton's Fence leads to the ossification of large organizations. Determining why the fence exists can be so much work that it's easier to just leave it be and move on or go and work somewhere nimble.
Can I handle it failing? Sure, go ahead. But there are so many variables that could be involved, not uncommonly including temporal ones. I don't think simply monitoring for a period and calling it healthy offers any guarantee.
I do very much think you are right though. Being too risk averse will grind everything to a halt.
Your whole process has to be designed around avoiding these issues. Allow failures, fix continuously and _quickly_, don't repeat mistakes.
Ossification of systems is mostly due to zero order thinking - a paucity of examination into the purpose of established systems. If someone with the agency to change a system gets to 2nd order thinking in their decision making, that is a blessing to everyone.
Even better: instrument the fence without removing it. You get the same knowledge without the chance of breaking stuff. If it's code, put it under an anti-feature flag.
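As a sketch of what "instrumenting the fence" might look like (the branch, the field names, and the 5% adjustment are all made up), it can be as simple as counting and logging every time the mysterious path actually fires, then deciding its fate after a soak period:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fence")

legacy_hits = 0  # how often the fence turns out to be load-bearing

def mysterious_legacy_path(order):
    """The 'fence': an unexplained branch, left in place but instrumented."""
    global legacy_hits
    legacy_hits += 1
    log.info("legacy branch taken for order %s", order["id"])
    return order["total"] * 0.95  # the unexplained adjustment (hypothetical)

def process(order):
    # Instead of deleting the branch, count how often it actually fires.
    if order.get("region") == "EU":
        return mysterious_legacy_path(order)
    return order["total"]

orders = [{"id": 1, "region": "US", "total": 100.0},
          {"id": 2, "region": "EU", "total": 100.0}]
results = [process(o) for o in orders]
print(results, legacy_hits)
```

If the counter stays at zero for months of real traffic, you have evidence (not proof, given seasonal code paths) that the fence guards nothing; if it fires, the logs tell you exactly which inputs depend on it.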
Chesterton's fence might not be ideal for a fast-moving startup. Perhaps it is a better strategy to question the wisdom of anything that has existed for only a few months or years. But if we are talking about society-wide or nation-wide changes, Chesterton's fence seems to be quite relevant. It is rather easy to destroy a society, but it is so damn hard to create a prospering society.
That approach can work but requires a person with enough experience with the systems+domain in question to know when it's a reasonable risk. Just last week we had an example where they didn't have enough experience and there was an enormous effort to clean up all of the transactions that posted incorrectly.
It may be that the fence is there for a catastrophic black swan, so this method may not be suitable.
The most interesting remedy is continually developing domain experts in both roads and fences within the organization who can reliably intuit the likely purposes of the fence when they come upon it.
Indeed, and here[0] I apply Chesterton's Fence to the issue of widespread unlicensed operation of gasoline lines:
> Gasoline is a dangerous and volatile substance. There are numerous incidents of people being harmed by incorrect use - not just the operator but also bystanders - and millions of dollars in facility damage occurring due to insufficient training. There is a reason why Oregon requires Class C UST Operators and above have training regarding emergencies. We should require more training, not less.
Industrial substances need high standards. Within this calendar year we have been reminded of this repeatedly: train fires and derailments, the OceanGate submersible, the recent collapse of a rail bridge carrying hazardous materials.
>It's important that we treat this substance with respect. Licensed operators should be the only ones handling it routinely. But of course, there's no surprise that Big Oil would like to socialize the risks and privatize the profits, speaking nothing of the job losses this will cause.
On the subject of habit change, I think the author makes some interesting points, but there’s a failure mode lurking here.
Sometimes unhealthy habits did form for a reason, but trying to understand those reasons isn’t always necessary or even helpful, and can seriously hold a person back from making positive life changes. Some habits are inherited through conditioning, and the original factors are long lost memories.
I think the potential failure mode is a kind of Learned Helplessness. Here’s this thing that I do, and I don’t understand why no matter how hard I try. If I wait to change until I understand, I might never change.
In these cases I think it’s most important to evaluate whether the habit is indeed something that I don’t want, and leave it at that while I go about making the effort to change it.
Time and time again, I’ve found that the thing stopping me from doing the thing that I know I should want to do - is thinking about it too much. It’s possible to get trapped in thought processes that go nowhere because the habit itself doesn’t actually make rational sense.
It’s often not until I’ve started to make a change that I realize why I was doing the old thing in the first place.
When the fences are internal/psychological, I think there’s more latitude and just jumping in and trying new things can be incredibly useful.
Understanding the function of unhealthy habits can absolutely be helpful, and habit replacement is most effective when you understand what you’re replacing, but don’t let a lack of understanding be a fence of its own.
The examples are as practically silly as the principle itself, which can't help you decide, since inaction also has risks and second-order and other effects (cue the historical example of a natural disaster made worse because some fences stood in the way).
> The original employees who helped the company grow initially notice the change and realize things are not how they were before. Of course they can afford to buy their own sodas. But suddenly having to is just an unmissable sign that the company’s culture is changing, which can be enough to prompt the most talented people to jump ship. Attempting to save a relatively small amount of money ends up costing far more in employee turnover. The new CFO didn’t consider why that fence was up in the first place.
What if the change the original employees notice is the hiring of the new CFO? Should the entrepreneur not have done that? Has he even thought of that? In what world can you be so precise in your understanding of any organization as to tie some stupid snacks to turnover in your group of most talented people?
I think the Chesterton's fence parable falls short of pointing out the logical conclusion. The fence should have a plaque on it explaining exactly why it was erected and giving the reader enough information to decide if it's still necessary.
And when we build new things, we should also leave a trail of explanations for our decisions.
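One way to leave that trail in software is to make the "plaque" a required part of the fence itself. Below is a minimal sketch, in Python, of a hypothetical config layer where every override must carry its own rationale and date; all names here (`Override`, `validate`) are illustrative assumptions, not any particular tool's API:

```python
# Sketch of a "plaque" convention: each config override records why it
# exists and when it was added, so a future cleaner can pay the
# Chesterton Tax without a git-history archaeology dig.

from dataclasses import dataclass


@dataclass(frozen=True)
class Override:
    key: str
    value: object
    reason: str   # why this fence was erected
    added: str    # when, so staleness is visible at a glance


def validate(overrides):
    """Reject any override that lacks an articulated justification."""
    missing = [o.key for o in overrides if not o.reason.strip()]
    if missing:
        raise ValueError(f"undocumented overrides: {missing}")
    return True


overrides = [
    Override("retry_limit", 7,
             "upstream flaps under load during batch window", "2021-04-02"),
]
assert validate(overrides)
```

Enforcing this in review or CI means a fence with no plaque never gets built in the first place, which sidesteps the whole parable.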
> the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Criminal law often does the opposite. There seems to be the illusion that “to understand all is to forgive all”. And so prosecutors and judges often feel like they should not understand the perpetrator too much, lest they cannot punish him.
It’s a remnant of “guilt” being defined as “acting in one way when you could have acted in another”. So it becomes imperative to prove that the perpetrator really “could have“ acted in another way, but inexplicably chose not to.
In other words, the people who put up the fence must have been stupid (ideally not human) for us to be able to say that it should be taken down.
Chesterton's fence is about the laziness of reformers that fail to understand at a minimum why a fence (rule) was placed there in the first place. It is noted that all humans are lazy and only some are reformers.
The core problem is that society maintains itself, so if you break society it's very hard or impossible to fix, because the broken thing must now fix itself, and sometimes it can't. At best there is another functioning society waiting in the wings that can take over; at worst it'll take a very long time for civilization to relearn what it once knew.
If the rule was and remains effective, then the negative consequences the rule exists to prevent will be absent; and if the reason is forgotten, then those consequences will have been absent for a long time. The more effective the rule, the more absent the data. This makes it a bad idea to use absence of evidence in support of removing rules, especially since such a heuristic would support removing the most effective and longest-standing rules first - which is a terrible idea.
There can be enormous lags and noise in consequences. The negative consequences may not be felt for generations, long after the reformers have died, by which point not only has the reason the fence was placed been forgotten, but so has the reason it was later removed. Cultural behaviours have evolved alongside people; evolutionary pushes are weak and noisy, but given enough time they can yield pretty good results.
I think a core part of traditionalism is that wisdom that is built up over generations can be greater than what a single smart person can learn in their lifetime, and much more than what average people will on average learn in their lifetimes. Trying to optimize society on what a single person can learn in their lifetime would be like doing machine learning using at most a single epoch.
Having worked for a "flat" organization that grew past 200 employees (cf. Dunbar's number) and was on its way to a thousand, hierarchy did indeed form and it was often like a shogunate, although without the peasants.
But orgs with clear org charts often have hidden political hierarchies as well. It would be nice if you could count on your supervisor and expect your supervisees to do as you ask, but a formal hierarchy is only effective if it's actually enforced and other hierarchies are suppressed. In practice, the formal hierarchy can become the tool used to enforce the goals of the alternative hierarchy.
Chesterton's fence makes sense in a deliberative process but old fences are often just in the way. In a lot of ways, the default position on this front is mere obstructionism.
The essay is called “The Drift from Domesticity”. If you’d rather not be downvoted in this discussion my recommendation would be to read the damn thing!
Every successful open source project needs at least one participant that is willing to tirelessly point out why things are the way they are when others enthusiastically propose that they will change everything. Understanding an existing system is much harder than starting a new project.
I have observed that this also applies to open standards as well...
Not at all what I was saying. I said it ends up being used to justify never refactoring.
You'll get a situation where there's some code and nobody knows why it's there even after investigation. Can we remove it? No of course not - do you not know about Chesterton's fence??!
There's actually a second reason why this advice is bad. It's basically victim blaming. The onus is on the person leaving a weird fence around to explain why it shouldn't be removed; not on the person finding a weird fence to have to guess a reason. You can say "well, bad people exist; you'd better assume there's a reason", but that's the same logic that leads to "bad people exist; don't wear attractive clothing".
It's actually good advice if the only thing you care about is not being raped. But people quite reasonably care about other things (like enjoying life).
Similarly, Chesterton's fence is not bad advice if the only thing you care about is not breaking your code. But people quite reasonably care about other things (like maintainability).
This all makes it the worst kind of advice - technically correct but unwise.
Hmm, I take the exact opposite conclusion from what you've said. Don't take it out until you can demonstrate with high confidence that it won't break things in production. Lower maintainability is probably going to be less harmful than breakage (not to mention choosing the hills you're willing to die on).
But then again it depends on the kind of company. Small and scrappy? Move fast and break things. Big and established? Be cautious.
Of course it does. But this is nothing new. Rich young people have been doing it for centuries.
They make their short term gains that they congratulate their cleverness for, then suffer the consequences, and those that survive are the ones that re-learn the lessons of old.
But on the other hand, it's sometimes the only way to get the old method out of the way so that new efficiencies can be gained with the technologies available today. Of course most will fail, but some will survive.
Not arguing that it's a new thing, but there is always another way.
Making the problem bigger is an interesting way to go about it. I will grant that doing so does expand the number of people impacted, and therefore the number of people interested in solving the problem is also bigger, as well, perhaps, as the apparent pay off for solving it. But ultimately, the solution is going to be something that could have been applied by the same people who expanded the size of the problem.
Chesterton's Fence doesn't conflict with disruption -- it conflicts with uneducated disruption -- disruption that doesn't understand what it's even disrupting.
Very insightful. IMO this is exactly what leads many engineering teams to want to rewrite a system from the ground up instead of trying to fix existing systems. They underestimate the complexity and assume the problems with the current system were the result of poor skill, rather than confronting complexity and nuance they also aren't expecting. So they promise a rewrite in a couple months, a couple months turns into six, they get pressure from management to wrap it up and ship the new thing, so they rush and they do. Then that system sits there for a couple years until a new batch of engineers comes along and argues that the system sucks and the answer is a rewrite from the ground up. Rinse and repeat.
Well, sometimes you have to tear down the fences to bring about meaningful change based on first principles. Otherwise, we are only empowering the gatekeepers who have all the incentive in the world to persist with the status quo.
Tearing down fences without pausing long enough to at least figure out why they are there is how you get gatekeepers to begin with.
It's entirely possible that the fence was put there for an extremely valid reason, or that you must mitigate a separate issue before removing it. This assessment can be completed using a first-principles approach, and the fence can be removed afterwards. Removing the fence and then learning why it was there can be a painful experience for you and those around you.
The reason the fence exists should be written down, ideally as close to the fence as possible. It needs to be easy to determine why the fence is there.
If someone puts up a fence without explaining why it's there, they deserve to have their fence torn down.
I don’t understand this word “deserve”. It’s not about who gets the blame when the coyotes show up at night and kill all your chickens. It’s about making sure you don’t tear down fences that are, unbeknownst to you, protecting your chickens.
> If someone puts up a fence without explaining why it's there, they deserve to have their fence torn down.
Uhh no, because your punishment rampage will be putting other things in danger.
First, understand why the fence is there. THEN tear it down if it makes sense to do so. That's the whole point, and it was thoroughly explained in the article.
This in no way contradicts Chesterton's point, in fact he explicitly brings it up. He is not calling for an end to fence demolition, he is rather calling for common sense investigation into why the fence is there.
Chesterton will have to contend with our time-honored tradition of tearing down fences. It might be the case that any particular fence is important, but being a culture that tends to tear down fences has the second-order effect of being more adaptable.
> Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. […the lamppost is taken down but it turns out the lamppost was good…]
Poorly placed lampposts waste electricity and attract bugs (making the area worse off, and messing with their biology). They also have a maintenance cost. We should make a habit of tearing down lampposts. We should at least not maintain lampposts if there’s no articulated reason to have them there.
The robed figure and the town share in the blame. The robed figure should, if he wants us to keep up the lamppost, be able to present a pointed argument for keeping it. On the other hand, the town shouldn't rely on some old robed figure to go around cryptically warning of the importance of lampposts; the town should have an office of public works that documents why the lamppost was built. Angry mobs are just a dumb way of making civic infrastructure decisions. By having a well-exercised, well-documented process for tearing down lampposts, the town will completely circumvent the problem!
> Take the case of supposedly hierarchy-free companies. Someone came along and figured that having management and an overall hierarchy is an imperfect system.
[…]
> Without a formal hierarchy, people often form an invisible one, which is far more complex to navigate and can lead to the most charismatic or domineering individual taking control, rather than the most qualified.
[…]
> It is certainly admirable that hierarchy-free companies are taking the enormous risk inherent in breaking the mold and trying something new. However, their approach ignores Chesterton’s Fence and doesn’t address why hierarchies exist within companies in the first place.
But there are tons of companies, we don’t have the choice of removing the fence or not. It is more like, we have a blueprint of a farm, and it includes a fence, which some suspect might be unnecessary, maybe even harmful. So let’s try a batch of farms without that fence. Then, document whether or not it worked out in the form of case-studies. Bam, the fence is no longer mysterious.
These second order effects are often too hard to guess at from first principles. Let people try tearing them down, and see what happens. We don’t need fence protection services, we need a strong middle class and safety net so that people can try building that fenceless company, fail, and land on their feet.
We see this in governance too, the US was set up to run 50 permutations of an experiment. Just have each state try their thing, and then observe that Massachusetts’s plan worked out best and copy them.
> Chesterton will have to contend with our time honored tradition of tearing down fences.
The point is not to prevent teardown of fences. It's to know why they were set up in the first place. If a fence was put there for no reason other than to spend what's in the budget, then there is no barrier to removing it. But if it was there for a good reason, then you need to prepare a counter-argument for why it should be removed.
Fences don’t maintain themselves, and I don’t think we should give either side a pass. The town should have a general tradition of tearing down unjustified fences and keeping documentation for the justification of fences. It shouldn’t be assumed there was a good reason, the party installing the fence should have presented an argument for the fence in the first place.
Chesterton’s argument is that we should assume there was a reason in the absence of an argument for the fence. This is only the case if there’s a proliferation of necessary, unjustified fences. We should not let that sort of situation emerge in the first place.
You don't seem to grasp the thrust of the article. Chesterton does not set up two sides. And I don't know what you mean by "necessary, unjustified fences." Those two words stand in juxtaposition. If a fence is necessary, it can be justified. If it's not necessary, it cannot be justified.
If the standard is that we maintain fences that don't have an articulated justification, we will end up with fences that are necessary but lack one. They may be justifiable, but without that justification articulated, we don't really have an easy way of telling which ones are.
Chesterton finds himself surrounded by fences which have no articulated justification, but which might be necessary/justifiable. This is a predicament of his own making. If he and everyone else in the town always bulldozed any fence they came across which doesn’t have a justification, people who want fences would start writing down why they’d put them there.
This would be helpful: not only would it tell us which fences shouldn't be torn down, it would also tell us which fences we should actively maintain.
> Chesterton finds himself surrounded by fences which have no articulated justification, but which might be necessary/justifiable. This is a predicament of his own making. If he and everyone else in the town always bulldozed any fence they came across which doesn’t have a justification, people who want fences would start writing down why they’d put them there.
I think this falls down if the consequences of bulldozing a fence are high and the original builders of the fence are no longer around.
The problem isn't Chesterton's own making. The problem Chesterton is trying to solve is when, for whatever reason, most likely due to many generations passing, the purpose of the fence isn't written down.
That also ignores the fact that some fences are built through an emergent and collaborative process, and so no one is such a co-creator to write down why it is there.
> but being a culture that tends to tear-down fences has the second-order effect of being more adaptable.
By killing many. In biology, species with higher rates of genetic diversity in their offspring sacrifice most of their offspring. Trees, for example, fall into that category: their seeds have more genetic diversity than the children of mammals, and a single tree can produce tens of thousands of seeds in one year. Environmental conditions can change drastically every few meters for those seeds, so it is a successful strategy, but most seeds never grow into mature trees. Do we want to model human cultures on that strategy?
When I first heard this, it came up in an engineering discussion, where pulling out + replacing the thing was much lower cost than figuring out exactly what the person that implemented it was smoking.
Since the cost of removing + reinstalling a gate is also much lower than the cost of Chesterton's proposed historical wankery, I assumed I was getting an all-clear to do my job.
I've also encountered this sort of wrong-think when trying to deal with planning commissions in the SF Bay Area.
Anyway, the Simple Sabotage Field Manual goes into more detail if you'd like to implement Chesterton's Fence. It worked well in WWII, so I guess they were on to something. Page 28, points 3, 4, 6, 7, and 8 are all good general ways to put the article into practice. If you're in a management position, the next section, points 11-13, offers good approaches as well. However, the entire 36-page book is worth a careful read.
The point is to take a pause to consider it. People are generally quick to throw things out or make commitments without thinking them through; I see it all the time.
It sounds like your team considered it - we can’t see an obvious why, it’s much easier to replace, and if there’s a second order consequence we will learn
No, that is not the point. You are putting words into Chesterton's mouth.
In his scenario, 100% of the people entrusted to road and fence maintenance considered keeping the fence and concluded it should be torn down. The people that installed it didn't think it was important enough to document its reason for existing.
The only person advocating for its continued existence is a philosopher with no apparent expertise in fence maintenance, or any specific knowledge about the fence in question. He is demanding the proponents of tearing down the fence produce evidence from the known-lost historical record, and then to use that evidence (and only it) to litigate for removal of the fence.
In Chesterton's fable, the people entrusted with road and fence maintenance couldn't find a reason for the fence while living in a world with the fence. Chesterton argues to try to look at it from the world without the fence. If that has been done, fine, tear it down. If the total risk is minuscule, bulldoze it immediately. But be aware that the risk for a VC-funded startup might be tiny compared to the risk of changing the rules of society.
Wow. I'm not certain whether you're just being facetious, but you...probably shouldn't accuse others of putting words into Chesterton's mouth. You're suffocating him with what you've stuffed in there.