Hacker News

It troubles me that Chesterton charges the man who would tear down the fence with a duty: "Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

I see the charm of Chesterton's favourite rhetorical judo. But we have learned many hard lessons about the cost of losing the documentation. We should charge the man with the duty of looking up the purpose of the fence in the archive, and finding the date that the purpose ceased to hold.

We should go further and invent the conservatism of the archive. When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.

Write down your rules. Write down why you have chosen them. Write down what your critics say will go wrong. Write down what your critics say we should do instead. Keep it all safe in the archive for 100 years.

When things don't go according to plan, dig through the archive. Did you stick to your rules? Really? In a way that is faithful to the reasons why they were supposed to work? What about the critics? Did things go wrong in the way that they predicted, or in some other way?

If the critics predicted the exact way that things would go wrong, they win. Dig out their suggestions and give them a try. If the critics predicted different screw-ups than actually happened, cry. Nobody knows anything. But at least you have an archive. What it was like. What people thought. How it actually turned out. That is a basis for working out what to do next.




> We should go further and invent the conservatism of the archive. When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.

> Write down your rules. Write down why you have chosen them. Write down what your critics say will go wrong. Write down what your critics say we should do instead

This is a lot less valuable than you're implying. The problem with this approach is that practices succeed or fail independently of why people believe they succeed or fail. The archive can only tell you what someone thought the benefit of a practice was; it can't tell you what the benefit actually was.


On the other hand some practices should have explicit rationale. For example I’ve learned that tests should contain some associated rationale for what the case is covering. Otherwise the next person to update the test risks eliminating coverage somewhere.
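A minimal sketch of that practice in Python (the `parse_port` function and the bug it mentions are hypothetical, just to show the shape): the comment inside the test records *why* the case exists, so whoever edits it next knows what coverage they would be losing.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from user-supplied text."""
    port = int(value)
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


def test_rejects_port_zero():
    # Rationale: port 0 tells the OS to pick an arbitrary free port.
    # Suppose a config with port 0 once slipped through and services
    # bound to random ports; this case pins down the explicit rejection.
    # Removing this test would silently drop that coverage.
    try:
        parse_port("0")
    except ValueError:
        pass
    else:
        raise AssertionError("port 0 should have been rejected")


test_rejects_port_zero()
```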


Most practices do have explicit rationale. For example, the practice of touching or knocking on wood after mentioning a hypothetical bad thing is explicitly justified by reference to the practice's power to prevent the mentioned misfortune from becoming true. This is the norm; mostly behaviors have explicit rationales whether or not those rationales justify the behaviors.

But as the example shows, a practice's rationale is generally not related to the function served by the practice. Even if a practice is adopted for straightforward reasons and more or less fulfills its intended purpose with few side effects, over time other practices will come to depend on that one in unclear ways. Where should that be written down?


You're right about superstitions but I think it's disingenuous to take a statement about engineering practices and expand it to include all statements ever made about how to do anything.


It's interesting to think of evo-psych as essentially the discipline of trying to explain the underlying rationale for the "Chesterton's fences" within our own subconscious/preconscious behaviors.


In my experience, evo-psych is essentially the discipline of trying to turn your favourite cultural practice into a "fact of nature".


I don't know that it's any worse off than anthropology, or any of the other social sciences.

There is certainly the temptation to engage in the naturalistic fallacy (i.e. assuming that because something is natural, it is therefore necessarily moral).

At the base of it, evo-psych rests on the premise that for a behavior to persist, it must have a purpose. Asking what that purpose is, without ascribing a value judgement, seems like a reasonable avenue of inquiry.


> At the base of it, evo-psych rests on the premise that for a behavior to persist, it must have a purpose. Asking what that purpose is, without ascribing a value judgement, seems like a reasonable avenue of inquiry.

This conflicts with the broader understanding of evolution in general, which treats "selection" (changes that have a purpose) and "genetic drift" (changes that have no purpose) separately.


"Purpose" is also the wrong word, isn't it?

A more correct way would be the changes that improve the chances of survival and procreation, and changes that don't but persist because they at least don't decrease those chances.


> There is certainly the temptation to engage in the naturalistic fallacy (i.e. "just because something is natural that it is therefore necessarily moral").

My point was more that with evo psych, you can additionally declare almost any behaviour to be "natural" by coming up with some convoluted reasoning why it confers an evolutionary advantage.


This fable is about someone who wants to destroy something he doesn't understand.

Saying that he shouldn't have to understand it, that there should always be a good explanation available somewhere, rather completely misses the point.


I certainly see no problem with making it easier for a would-be innovator/destroyer to do their homework by making such justifications readily available.

But yes, the intention here is really to communicate the importance of a certain mindset.


I agree with everything you've said except the part where you imply Chesterton was wrong to charge the demolition-proposer with a duty to do research. That duty absolutely still also exists. In the situation described, even if people putting up fences have a duty to record why, the time for that to happen is long past and there's no utility in complaining about it. It's natural not to bring it up, especially if you assume that the records do exist in some dusty church or town hall attic.


I think one of the less appreciated corollaries of Chesterton's line of thought is that we should be wary of building fences in the first place. In programming, abstraction should be avoided until the need for it arises. On the first pass (or two or three), a program should be written clearly and simply, doing only what it needs to. Working with many abstractions can become very difficult later, and especially so if the original abstractions turn out to be the wrong ones in hindsight. Because later, Chesterton's Fence will apply, and people will try to shoehorn their new work into the existing abstractions instead of taking a step back and realizing those abstractions would never exist in that form if they were designed today.

There's no silver bullet with anything in life, but something to keep in mind.


> I think one of the less appreciated corollaries of Chesterton's line of thought, is that we should be wary of building fences in the first place

Would that change anything?

Assume the original author did think about whether it was necessary, and they thought it was. You now have to think about whether the fence is still necessary.

Now assume the original author didn't think. You still have to think about whether the fence is necessary.

Nothing has changed.


Would it change anything? Uh...of course? If you build fewer fences it will result in fewer fences. I'm not really sure what confuses you.

Unless your point is whether not building as many fences now will save you from Chesterton's reasoning for any fences already built, then well of course not.

Changing the metaphor, I'm just saying that you should consider stretching to avoid future injury. Of course that doesn't save you from dealing with injuries that have resulted due to a lack of stretching in the past.


> If you build fewer fences it will result in fewer fences.

Chesterton's Fence only applies to removing existing fences, so it obviously only applies to fences that already exist. It doesn't say anything about whether or not you should build the fence in the first place.


> Chesterton's Fence only applies to removing existing fences, so it obviously only applies to fences that already exist. It doesn't say anything about whether or not you should build the fence in the first place.

I'm really having trouble seeing why this is so hard to follow. Chesterton's Fence says you shouldn't remove a fence until you understand why it's there. Ergo the removal of a fence requires effort. Ergo you should not put up fences unless they provide actual value, because they will require effort to remove later.

Regardless, Chesterton's fence is a principle with whatever wisdom one chooses to draw from it. I guess if you disagree with me it doesn't really matter.


Software engineering has been blessed, over the course of the past several decades, by exponential increases in available resources (Moore's Law et similia). This has somewhat offset the inherent laziness that the author speaks of in his essay: while it requires effort to implement a feature (or an abstraction, in your case), the subsequent increase in processing power rapidly makes the performance penalty essentially 'free'. Now that Moore's Law seems to be slowing down somewhat (though maybe that was just Intel's monopoly position in CPUs showing, as AMD's recent releases and the unending growth of GPU and ARM performance suggest), implementing additional features that carry a performance penalty will entail living with that slowdown in perpetuity, and therefore will (hopefully) be taken into account.

I'm not saying that people will revert to writing assembler on the metal, but perhaps the whole trend of VMs inside VMs inside containers on compartmentalised OSes will slow down.


Honestly I didn't even consider the abstractions as a problem due to performance penalties in the vast majority of cases. I consider the abstractions a problem due to their cognitive penalty and the fact that you often need to put a square peg in a round hole (which requires a cognitive load both initially and in maintenance) due to incorrect abstractions. The fact that automatic optimizations (e.g. compilers) may get rid of these abstractions' performance costs doesn't mean their cognitive costs are removed.

I agree we shouldn't necessarily be writing in assembler, but we should be cautious of over-generalization. A math professor I had once said you should study problems in the right generality and no more. I think software is the same. Abstractions have a cognitive cost, and those costs are often not apparent until the abstractions are taken together. Keeping a design simple and focused is a way to avoid those costs.
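A toy Python illustration of that cognitive cost (all names here are invented for the example): the direct version can be read at a glance, while the prematurely general version introduces a class hierarchy and a configurable separator that nothing has asked for yet — the square-peg/round-hole shoehorning starts the moment a second format actually appears and doesn't fit the hierarchy.

```python
# Direct: does exactly what the program needs today.
def load_config(path):
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)


# Prematurely general: a strategy hierarchy for a single format.
class ConfigSource:
    def load(self):
        raise NotImplementedError


class KeyValueFileSource(ConfigSource):
    def __init__(self, path, separator="="):
        self.path = path
        self.separator = separator

    def load(self):
        # Same behavior as load_config, spread across a class and
        # a parameter that no caller currently varies.
        with open(self.path) as f:
            return dict(
                line.strip().split(self.separator, 1)
                for line in f
                if self.separator in line
            )
```

Both produce the same result; only the second demands that every future reader first understand an abstraction before they can see what the code does.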


> It troubles me that Chesterton charges the man who would tear down the fence with a duty: "Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it." ...When we contemplate changing the rules of society, the archive needs to contain more than just the justification of the new rules, analogous to the purpose of the fence. The defeated opposition must also have their place in the records.

I don't think those are mutually exclusive. As a maintainer, I insist that commit messages which fix things contain loads of information about the problem they're fixing; and in my own commit messages I often include references to other possible approaches and why this one was chosen, to make the "Chesterton's Fence" challenge easier to deal with.
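A commit message in that spirit might look roughly like this (every detail below is hypothetical, just to illustrate the shape: the problem, the chosen approach, and the rejected alternatives all recorded where a future fence-remover will find them):

```text
cache: serialize writes behind a single mutex

Problem: concurrent writers could interleave partial entries,
corrupting the cache under heavy load.

Why this approach: a single mutex is the smallest change that
fixes the race; contention is negligible because writes are rare.

Alternatives considered: sharding the cache per thread (rejected:
complicates invalidation) and an append-only log (rejected for
now: a much larger refactor; revisit if contention appears).
```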

But if you find a bit of code doing something that looks unnecessarily conservative, and the original commit message from 10 years ago doesn't explain why, it's still a risk to remove the checks; and it's still right to insist that an attempt be made to go back and find out why things were done that way in the first place.

In a sense, the second is the reason I do the first. The pain of trying to reconstruct from 10-year-old email conversations the purpose of a given line of code is what motivates me to insist that commit messages contain all that information in the first place.

Similarly, the difficulty of going back to find out why a law was enacted in the first place should be a motivation for people to insist that we store that information for the future.


Just recording those things isn't sufficient. What if the person hasn't read the archive? What if they don't have the capacity to understand what they are reading in the archive?

It's an old saying, "I can explain it to you, but I can't understand it for you." And that's what Chesterton is ultimately driving at: don't reform something if you haven't taken the time to understand why it is the way it is. Documentation helps, but is not sufficient.


> We should charge the man with the duty of looking up the purpose of the fence in the archive

A fine sentiment, but it still falls short of preserving and benefiting from the wisdom of yore.

In the eternal battle of the new & better versus the old & tried & tested, the old & tried & tested may have been well reasoned, formulated, and described in the past.

But then again it might never have been reasoned through; instead it might have evolved, might have undergone a process of natural selection, might have won the long-term battle of ideas without ever being understood. The reasons might not have been explicitly known at any point in the past.

A good idea doesn't necessarily appear good, even on closer examination. There are some counter-intuitively good ideas; for example, using random, stochastic processes[1] can yield better results than strict plans in the face of partial information. For another example, we humans are pretty bad at understanding exponential and other non-linear processes (cue the present Coronavirus worries), and any reasoning based on estimating several interacting non-linear effects tends to diverge wildly from reality. Not only is our human ability to rationalize limited; at times we aren't even aware that our ideas were suboptimal, and that there were whole classes of possible better solutions. Conversely, there are several 19th- and 20th-century ideologies, programs, projects etc. that were well grounded in reasoning, but turned out to lead to such terrible results that we decided never to try them again.

There is a slow- but constantly-running natural selection of ideas, customs, traditions, organizational schemas, cultural trends, etc. The better ones tend to become more successful, and establish themselves as part of the culture. Some of the selected ideas won't be fully reasoned through or even understood at that time, and that is fine. Just like we make technical & business decisions with imperfect, partial information, we need to be able to handle the past customs, ideas etc., with partial, or even missing, reasoning behind them. Having been successful in the longer term is also a pretty good indicator of their qualities.

--

[1] In the context of a hunter-gatherer society, using supernatural divination to make hunting plans has no rational basis, but in certain natural settings it yields near-optimal results due to the essentially random nature of the divination.


That sounds like a lot of work. I don't have a model of when it would be (more) useful to go through all that (than to not).


Whoosh.



