Hacker News
Presumption of stupidity (aaronkharris.com)
510 points by garry on Aug 10, 2015 | 178 comments

Chesterton's Fence:

> In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This philosophy is one of the cornerstones of my engineering practice. IFF you can describe why something is there - without using shortcuts/crutches like 'they're dumb' - then we may be able to consider changing it. Otherwise, it's dangerous and risky to fool with it. Too many times have I seen something absurd and painted myself into a corner trying to 'fix' it, when there really was a bizarre edge case that it covered. (And in plants, you usually only find these edge cases after enormously expensive production loss events!)

And it's a fantastic principle in general, since it's like the technical equivalent of the Principle of Charity.

( http://philosophy.lander.edu/oriental/charity.html )

I worked in a similar setting, writing safety-critical code. Any time we found code so dumb that it couldn't possibly be correct, with no documentation of why, we still ran a blame to see when it was written; if the blame was older than the current source control system, we ran a blame in the previous one (this could be 10-year-old code). Once we found when it was written, we looked at any commit messages. If nothing obvious was written there, we looked at general code movement around the time of the commit, at documents changed around that time, at issue tracker tickets filed and closed around that time, at test cases written around that time, and at test reports with different results before and after. In essence, we did everything we could to understand how the person thought when writing the code. This had always been a code base with very rigorous processes around it, so explaining things by "he probably was lazy and let it slip by" was not a valid excuse before all other reasons had been investigated.

Obvious "errors" like if(false) is a stupid example of this, why would someone commit if(false)? Is it supposed to be if(true)? Is it for debugging only or is it code that should actually be removed? Maybe it's used for other reasons by a preprocessing-tool that you are not aware of?

This is a great story/summary, and it underlines the value of being able to "hear from" the code's original author (either by contacting them, or preferably by referring to high-quality documentation saying "this looks stupid, here's why").

I've had the joy of seeing code which broke completely if you removed the line while(false){}. Some hideous synchronization bug was solved by this line, and while it obviously wasn't the right solution, simply deleting it produced bad outputs. Profilers and processing tools are, as you say, other likely causes of dumb-looking code choices.

The stupider the "error", the more suspicion you should assign to its presence in any half-decent code.

While that's entirely true, it's also true that any code covering an obscure corner case ought to document that corner case and why the code is needed. And similarly, any policy or law should be completely clear about what problem it wants to solve. That would make it far easier to maintain.

Documentation is essentially the proactive version of the idea for which the principle of charity is the reactive version.

Assume that those who came before you had good reasons for their actions, and assume that those after you will be unable to identify any motivation you don't state explicitly.

I'd be curious to see a strong legislative version of this - enshrining the spirit of the law in the text, and giving courts explicit rights to strike down laws which no longer fulfill their original intent. Done well, it's the sort of change which could have worked wonders on our legal system's constant failure to adapt to technological change. The Aereo suit, for instance, shouldn't have happened under any kind of intent-based legal system.

> Assume that those who came before you had good reasons for their actions

There's a limit to that assumption, though. I'm always inclined to assume that people had reasons, and that they looked good at the time, but without knowing what those reasons are, there's no way to know if the reasons are as good now as they were then, even if you make the charitable assumption that they were good reasons at the time.

I would be a big fan of the idea that legislation had built-in turnover clauses, for instance, that required renewal every N years (for a value of N not much larger than the turnover rate of legislators). Which then means if you want something to persist, you would have to document your rationale for posterity, and convincingly argue that that rationale still applies.

> I'd be curious to see a strong legislative version of this - enshrining the spirit of the law in the text, and giving courts explicit rights to strike down laws which no longer fulfill their original intent.

I agree completely. Laws should state up front that "the purpose of this law is to ...", and for that matter explicitly state any other relevant considerations or side effects and whether they're considered beneficial, undesired, or simply neutral. That would mean there would have to be at least a pretense of a sensible motive, and that interpretations that don't serve that motive could be thrown out.

> The Aereo suit, for instance, shouldn't have happened under any kind of intent-based legal system.

It still could have, depending on the intent. The intent of copyright law, for instance, is supposed to be "we want more works produced, but we also want more works to enrich the public domain, so there's a tradeoff". The intent was never about authors and what they want; that's a means to an end. However, that rarely seems to be reflected in deliberations.

> I would be a big fan of the idea that legislation had built-in turnover clauses, for instance, that required renewal every N years (for a value of N not much larger than the turnover rate of legislators). Which then means if you want something to persist, you would have to document your rationale for posterity, and convincingly argue that that rationale still applies.

Let's be honest. These things would mostly be just bundled up and passed all at once. Or, alternatively, they would be used as leverage like the budget currently is. Actually, it would pretty much be exactly like the budget at this point. It either sails through with no issues, or it begins months of partisan bickering.

EDIT: Also, could you imagine the kind of flex we might see on major laws? I can only imagine large sets of laws sunsetting every time the Congress majority changes... Hey, at least it will create a whole industry centered around these adjustments! That's job creation right there! And, as always, the lawyers will be making their money.

This would require lawmakers to agree on the intent of legislation in addition to its effects, which would be categorically harder because different people may favor a policy for different reasons.

It might be simpler to just sunset every law by default after some period and require lawmakers to periodically renew them - if a majority can't form to support renewal, then it's reasonable to assume the original motivation lacks support. It's also a handy way of nudging updates to the laws to keep up with current times.

> This would require lawmakers to agree on the intent of legislation in addition to its effects, which would be categorically harder because different people may favor a policy for different reasons.

While that doesn't seem like the main intent of such an approach, it certainly sounds like a beneficial side effect. New legislation is not a thing that should occur particularly often, or lightly.

> It's also a handy way of nudging updates to the laws to keep up with current times.

Agreed. (You'd also need to have some requirement to prevent any omnibus renewal legislation.)

This is kind of irrelevant though. Sure it should be documented, and sure the fence should have a nice notice of purpose posted right at the gate for everyone to see, but life doesn't work like that.

I'm not sure I'd go as far as irrelevant - rather, I would say that documentation is what you do to keep other people from needing the principle of charity. They're two directions on solving the same problem. As a maintainer you should assume good intentions, but as a creator you should work to clarify your intent.

Agreed, but irrelevant to this particular conversation, which is about how to handle a "fence" in the road where there is no obvious reason evident.

Chesterton's Fence should be one of the core principles of software development, and it calls to mind the famous "Never rewrite software" article.

If you're refactoring, you can use unit tests and proofs of equivalence to demonstrate that your changes don't pose a threat. If you're rewriting a whole codebase, you're likely to wander down the same paths of "wait, the simple way doesn't work" that the person before you took.
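As a minimal sketch of that first approach (both implementations here are invented for illustration), a randomized equivalence check can give some confidence that a rewrite preserves the old behavior, quirks included:

```python
import random

def old_impl(xs):
    # original, quirky-looking version: silently skips None entries
    total = 0
    for x in xs:
        if x is not None:
            total += x
    return total

def new_impl(xs):
    # proposed cleaner rewrite; must preserve the None-skipping quirk
    return sum(x for x in xs if x is not None)

# fuzz the two implementations against each other
rng = random.Random(0)
for _ in range(1000):
    case = [rng.choice([None, rng.randint(-9, 9)]) for _ in range(8)]
    assert old_impl(case) == new_impl(case)
```

This doesn't prove equivalence, but it catches the common case where the "obviously simpler" version quietly drops an edge case the original handled.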

Even bafflingly small changes like removing always-false tests have been known to break code, which ought to inspire real hesitation any time you see "stupid" code written by a smart person.

I hope we can all agree that having code relying on a compiler bug or something like that (without explicitly documenting it directly in the source code) is stupid.

> Even bafflingly small changes like removing always-false tests have been known to break code, which ought to inspire real hesitation any time you see "stupid" code written by a smart person.

Yes, but really, if you write stupid-looking code because of arcane reasons you really owe the rest of humanity at least


This is the main purpose of comments.

> Even bafflingly small changes like removing always-false tests have been known to break code

Okay, how is this even possible? Reflection/pre-processing magic?

Another comment in this post:

>I've had the joy of seeing code which broke completely if you removed the line while(false){}. Some hideous synchronization bug was solved by this line, and while it obviously wasn't the right solution, simply deleting it produced bad outputs. Profilers and processing tools are, as you say, other likely causes of dumb-looking code choices.


I can also imagine methods with side-effects. The test is always false but a side-effect is necessary.
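A minimal sketch of that side-effect case (the names are invented for illustration): the condition is always false, so the branch never runs, but deleting the test would also delete the call that does the real work:

```python
audit_log = []

def record_and_check(x):
    # the branch guarded by this call never runs --
    # the call's real (hidden) purpose is the append below
    audit_log.append(x)
    return False

for x in range(3):
    if record_and_check(x):
        print("never reached")

# audit_log is now [0, 1, 2]; delete the "always false" test
# and you silently lose the logging side effect too.
```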

Compiler codegen bugs.

I've had a few compiler bugs where adding an if(false) or changing the order of an if test 'fixes' things. And the complement: changing the code exposes a compiler bug. And also: a new, improved compiler results in code that's broken.

Another issue I run into a fair amount is dealing with hardware/naked interrupts, which often involves a lot of subtle timing and read/write access issues. Sometimes this happens in code you don't own.

    def stub():
        while(False):
            print("smart stuff")

    print("hello, world")
If you remove the `while(False)` loop, the working code will stop working.

Oh, come on, that's an obvious one. Obviously, "removing the while(False) loop" in this case means replacing it with an empty return:

  def stub():
      return

  print("hello, world")
There, it works.

That said, boo Python for not admitting empty bodies. It needlessly adds a special case to its syntax. I suspect this comes from the implementation of its lexer.

The question I answered was: "How is this even possible?" (emphasis in the original). The implication seems to be that it should not be possible for the removal of a `while(False)` statement to break working code, presumably based on the assumption that it can't do anything. Yet that is a false assumption, and lots of bugs are based on assumptions that seem true yet, strictly speaking, don't have to be. I provided an existence proof that it IS possible and one minimal example that the assumption that such code literally can't play any role in whether code works or not is incorrect.

The post I was replying to said `while(false){}`. It did not say `while(False)`.

It was implied that we were talking about a C based language. You proved nothing. Others did.

The keyword pass is used in place of an empty body. Explicit is better than implicit, etc.
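For reference, the idiomatic spelling of that empty body:

```python
def stub():
    pass  # explicit no-op body; omitting the body entirely is a SyntaxError

stub()  # does nothing, returns None
```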


I'm going to dispute the "iff". Chesterton's fence is a useful idea to keep in mind, but it is flawed in that it is excessively conservative. It demands that you prove a negative, which is not reasonable in general. (How do you prove that that apparently useless doohickey isn't staving off Cthulhu's wrath?)

It's important to try to understand why something exists, but it is also important to understand that the reason often is that it was either unintentional or silly, but you won't be able to prove it because it isn't documented and the person responsible is gone/unknown.

Chesterton's fence doesn't require you to prove that the doohickey isn't staving off Cthulhu's wrath, but merely to know that that is the purpose.

To be sure, there are limits to one's sanity when over-applying this principle, but I also would apply it far past where many would stop. A simple explanation that doesn't quite fit needs yet more inquiry. Normally, the application pre-limits the number of things that may go wrong, and you can use that to save some effort. For example, I may not know whether a pressure sensor's wiring diagram is correct enough to squelch Cthulhu's call, but since that has no bearing on whether the vacuum system's gases are pure, I also don't care. (That's the sort of thing for HR and Operations to worry about.)

Usually this meant digging in deep, questioning really basic stuff like "Are we sure this model is normally open or closed? Did the manufacturer forget to tell us?", and normally there would be clues or evidence to hint at the next round of questions. Eventually you'll have enough information that the evidence fits the questions, and there's no clear line of inquiry left. Life experience and sheer volume of work teach one the limits of inquiry (old engineers can be shockingly good at this, to the point of appearing sloppy :)

I once spent 6 hours overnight troubleshooting a confusing gas non-leak that ended up being the result of a default setting changing on a valve being replaced off-the-record by not-the-usual-guy. It gave me the confidence that this process does eventually get to the bottom of it, but it was a long, meandering path from miswired panels to out of date schematics in the wrong language to noting how clean the part was to know someone replaced it. All to dig out the missing tribal and undocumented information, proving the PLC was actually correct to interlock the whole machine out. (It's like knowing your program will halt - you can't prove it, but you can still be damn sure it will. At least until it doesn't ;)

Actually, negatives are often easily provable.

For example, I could prove that I don't have a dragon in my pocket without breaking a sweat.

I'm sorry, but I'm pretty convinced there's a phase-shifted dragon in your pocket, untouchable and invisible unless painted by a properly configured tachyon beam...

... which I'll happily sell you for just $2000. Remember, phase-shifted dragons are dangerous!

A corollary is that actual bugs tend to become the sturdiest, longest lasting fragments of code. Because they're by definition incorrect, so no one can really fully and correctly "describe why they're there", and thus people tend to feel afraid to fix the code ("dangerous and risky to fool with it"), because maybe that's not a bug? (and thus "someone might punish me later for touching this code")

This is an awesome comment, thank you for sharing. Do you know of any other resources/materials like this? (Specifically would love to learn more about these kind of principles as related to mechanical engineering or a factory/process setting)

Foolishly, I don't really. I made the connection in some other thread on HN, and it's stuck with me. My advice is perhaps terrible: find yourself an old engineer and just do whatever it takes to be their friend and follow 'em like a puppy. It's sorta weird how effective it is to just be around people like that (and I think some PE licenses actually require apprenticeships for that reason).

But I do have two formative things: Feynman's lecture on Cargo Cult Science is excellent, with this gem:

"The first principle is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that... I'm talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen." - Feynman [0]

I can't tell you how many times this has saved me in the field. Replace the word 'scientist' with '<YOUR DISCIPLINE HERE>' as needed. (And I think moreso for engineers than scientists, since safety absolutely requires this mentality.)

And I really like the ideas in The Pragmatic Programmer [1]. More than anything specific, the idea of not being married to any particular concept or solution, but rather pragmatically choosing what is correct for that circumstance, helps a lot. The book is opinionated, but reasonable, and copying that pleasant tone is a fast way to make allies when solving problems in a group. And it's a pretty good book to boot.

[0] http://neurotheory.columbia.edu/~ken/cargo_cult.html (and is an HN favorite, to be sure)

[1] https://pragprog.com/the-pragmatic-programmer (another HN favorite, though not always brought up in terms of engineering)

Taken together, Feynman's "bending over backwards" and the "principle of charity" constitute a philosophy along the lines of "Be conservative in what you send, be liberal in what you accept". It's recently become fashionable to blame this principle for some sort of perceived flaw in the Web, but I wonder if a world where web browsers failed to render an entire page on so much as a misplaced </p> would really have been better. After all, it would be a shame if we lived in a world where people only listened to your opinions if you formatted them a certain way, wouldn't it?

Often, I find requiring to format my ideas in a certain way has revealed problems with those ideas I'd not yet considered. For instance, if a theist was required to frame God in the context of provable science, they would realize how many holes exist in their logic.

An amusing anecdote is the onion in the varnish: http://c2.com/cgi/wiki?OnionInTheVarnish

The principle of charity sounds extremely questionable; what is the purported benefit of suspending logical thought for the purposes of understanding something that may be wrong? I want to understand the idea logically, so I don't want to substitute in some second-rate folk understanding while I'm considering it.

The principle of charity is basically a way to approach new ideas arising from mindsets different than yours.

The 'assumption of truth' step doesn't require us to believe anyone who walks up and says "A ^ ~A". Rather, it says that you shouldn't immediately reject a philosophy because it disagrees with your prior notions.

The 'resolve contradictions' similarly doesn't mean that we have to overlook flawed claims. It's a tool to help learn an idea even if you don't get a perfect presentation of it. There may be a valuable concept available even if the person explaining it to you can't properly work through every detail.

As an example, most people who could talk to you about evolution can't explain the development of eyes, but that doesn't mean you should assume evolution is bunk when you hear about it. The principle of charity says "there might be a good reason here, whether or not they can provide it".

In short, the principle is an attempt to get value out of ideas you're presented with. That might mean you're engaging with a different idea than what you were actually presented with, but it gets you more insight. It's not something you want to use when faced with a concrete form of the document - it's not a reason to pass a bad-but-well-intentioned law.

It's not questionable, just good practice. And my experience tells me it's a Best Practice. `Bartweiss gives a good overview of it, but I'll share some advice I got from an engineering greybeard (which I most definitely am not).

He had a great story to go with this, but the TLDR is that the PoC is the safe choice when applied to engineering: give a true, best-faith effort to prove the opposing view correct. (And best-faith is important: you can not approach it with bias, lest you waste everyone's time.) If you're right, then you'll find an irrefutable flaw in the engineering, and if you're wrong you've learned something new.

If it sounds like a lot of effort, it is.

But this is also utterly win-win. It's a way of ensuring that you've been careful: avoiding the humiliation of hubris is great if a detail of the implementation was missed at first glance. And when you fail to make the opposing view work, you inevitably understand the problem more fully; it also puts you in the excellent position of graciously being on the same side as your opponent, so you can sway your now-ally with their own reasoning. Either way, your goal must be to solve the problem, not merely to "win" political points.

Of course, there are crackpot theories and stupid ideas and foolish plans which should be dismissed with prejudice. But hopefully you're working in a professional environment where your coworkers really are trying their best to succeed. And even then, a serious engineer will still give the stupid ideas at least some (small) time of day, as you must have a reason for all decisions, even dismissals; as you become experienced, you'll be able to properly dismiss these faster and more precisely, but you'll still need to go through the process. That process is what separates the people who loudly proclaim they are smart and right from those who would testify in court that they are right.

(Incidentally, this is part of the reason why I get immensely frustrated with "idea people", as it takes much more work to flesh out their half-assed ideas into full-assed ideas. Non-engineers don't get that there's a huge amount of effort to constantly take everything and everyone seriously.)

The principle of charity asks you to grapple with the best possible version of an argument. If the one making the argument makes an obvious mistake, correct it and take on the corrected argument rather than the flawed one. Nowhere is logical thought suspended. The goal should be truth, so why fixate on small mistakes instead of fixing them and engaging with a sounder argument?

"I don't understand it" is an admission of ignorance, not wise authority justifying condemnation. Few seem to understand this.

If you can't concoct a strong* argument favoring the opposing view, you don't understand the issue well enough.

*Edit: as suggested, it's "strong" argument, not necessarily "convincing". If you can make a convincing argument for the other side but not for your own, perhaps you need to reconsider your stance.

> admission of ignorance

And although "ignorance" has a negative connotation, I argue it's a neutral thing unless otherwise modified, ie. as opposed to "willful ignorance".

The healthcare field uses the gentler term "knowledge deficit" regarding patients; I've taken to using that term in engineering contexts as well.

This is the most concise expression I've heard of something I've felt was a fundamental truth for a long time, thank you!

slight caveat: I think you can understand an issue very well and be able to concoct a strong argument in favor of the opposing view, which is nonetheless not very convincing. (Sometimes you may also be able to concoct a strong but unconvincing argument in favor of your own view. Some issues are like that -- the best arguments on both sides still aren't really convincing.)

I can't make a strong argument for the Earth being flat, does that mean I don't understand the question well enough?

I can't come up with a single coherent good argument against gay marriage, does that mean I don't understand the question?

I think your rule is very useful as a guide line, but please don't make it a rule. There really are policies and ideas that are one sided.

Yes, I do indeed believe that means you do not understand those "questions". As such, your insistence that major issues deeply dividing society are solely one-sided, and the establishment of policies based on that one-sided view, lead to what I shall colloquially refer to as "civil unrest". Don't dismiss the guideline simply because it doesn't lead you where you want to go.

> I can't make a strong argument for the Earth being flat, does that mean I don't understand the question well enough?


Have you never used a flat map for navigation? Did the map deviate from what you observed on the earth by enough to notice? The curvature of the earth is very small, so it is trivial to come up with a world-view in which it would be "obvious" that the earth is flat.

Lest one dismiss the notion that anyone would willingly adhere to a "flat Earth world view", remember that most people are baffled by airliner flight paths as drawn on flat world maps.

So make a strong argument for the Earth being flat; right now you have only made a strong argument for why people perceive the earth as flat.

"In the early days of civilization, the general feeling was that the earth was flat. This was not because people were stupid, or because they were intent on believing silly things. They felt it was flat on the basis of sound evidence. It was not just a matter of "That's how it looks," because the earth does not look flat. It looks chaotically bumpy, with hills, valleys, ravines, cliffs, and so on.

Of course there are plains where, over limited areas, the earth's surface does look fairly flat. One of those plains is in the Tigris-Euphrates area, where the first historical civilization (one with writing) developed, that of the Sumerians.

Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.

Another way of looking at it is to ask what is the "curvature" of the earth's surface. Over a considerable length, how much does the surface deviate (on the average) from perfect flatness? The flat-earth theory would make it seem that the surface doesn't deviate from flatness at all, that its curvature is 0 to the mile.

Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn't. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That's why the theory lasted so long."

- Isaac Asimov, The Relativity of Wrong, http://chem.tufts.edu/answersinscience/relativityofwrong.htm (read the whole thing.)

In many ways, it's really just a matter of measurement power. An illuminating illustration of this is the question of how far away you need to be from another person before they disappear over the horizon [0]. Turns out it's like 6 miles. I have terrible eyesight, so without telescopic optics, I'd never have been able to measure this.

Rather, I'd have to rely on local measurements. And that'd nail me too: the Earth only curves about 12cm / km. So if I could only resolve a local rise-over-run of 1/1000, I wouldn't be able to reject the null hypothesis that the Earth is flat. (But if I could manage an order of magnitude better, I could!) And given that hills and such are all kinds of lumpy, and large bodies of water are rarely still, getting even that level of resolution without advanced optics would be difficult. (Though if you can be sure you've got a straight enough stick...)
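As a back-of-the-envelope check on those figures (the Earth radius and eye height below are assumed round numbers, not from the thread), the horizon distance follows from d ≈ √(2Rh):

```python
import math

R = 6_371_000.0   # mean Earth radius in metres (assumed value)
h = 1.7           # eye height of a standing observer in metres (assumed)

# distance to the horizon for one observer: d ~ sqrt(2 * R * h)
d_one = math.sqrt(2 * R * h)           # roughly 4.7 km

# two observers of equal height lose sight of each other at ~2x that
d_two_miles = 2 * d_one / 1609.34      # roughly 5.8 miles, i.e. "like 6 miles"

# drop of the curved surface below a tangent line over one kilometre
drop_per_km = 1000.0 ** 2 / (2 * R)    # on the order of 10 cm per km
```

So the "6 miles" figure checks out for ordinary standing observers, and the per-kilometre drop really is down in the centimetre range that local measurement would struggle to resolve.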

So I think it really comes down to how well you can prove or measure anything. Once we had telescopes, there really wasn't too much confusion about the spherical nature of the planet. (And people had suspected for a very long time the earth was - at least in some way - round. Eclipses give that away a bit.) But the details really give us the resolving power to be sure. That and it helps to get away from local measurements - get up really high, and it becomes easier to tell (and IIRC some early experiments measuring the size of the Earth took advantage of really large height differences).

After all, Newton was right, too. But add a few extra zeroes to the solutions, and we start seeing some deviation from our relative measurements...

[0] http://mathcentral.uregina.ca/QQ/database/QQ.09.02/shirley3....

Of late I've been taken by the concept that someone who grew up on the far side of the Moon, and never travelled very far, would - due to the Moon's rotation period equalling its revolution period - be completely unaware of the existence of the Earth. They'd see the Sun rise and set in a roughly 700-hour cycle, along with the rest of the Universe. The notion that a brightly reflective body, significantly larger than their own spheroid and covered with billions of intelligent (?) beings, existed a mere 238,900 miles away would be absolutely preposterous ... at least until said resident travelled far enough to peek around the horizon and see a most mind-blowing sight.

I mention that to set the premise that one can be remarkably unaware of a plain truth just around the corner. The strong argument for Earth being flat is little different from the strong argument for Earth being perceived as flat. The flaw, obviously, is the objective difference between fact & perception.

I go through the trouble of writing this to note that while you're pointing fingers at the difference between being and perceiving, you are yourself holding the mistaken notion that Earth is a lumpy sphere, when Earth is, in objective reality, a very long and slightly bent 4-dimensional _spring_ shape, of which we see just a 3-d cross-section, which looks spherical to us lower-dimensional beings.

While making a strong argument, be humble - your perception may be objectively wrong, misguided, or incomplete as well.

The arguments are not meaningfully different. For each individual, their perception is indistinguishable (to them) from reality. It could be trivially changed:

"I use a flat map to navigate. I perceive no difference between the map and reality. Therefore reality is like the map."

For things that are very strongly one-sided you almost certainly aren't going to make an argument that convinces yourself, but if it's not at least as good as the arguments used by people that disagree with you, you are doing yourself an intellectual disservice.

The primary issue with Chesterton's long-revered fence is that it puts the entire burden of proof on the person who desires to take it down.

What if the fence was put there for purposes of adverse possession? Those who would defend the fence, absent any documentation, would ascribe the most noble uses to it, as their desire is to maintain the status quo, or at least the status pro se. They would, to use Chesterton's term, 'go gaily up to it' and defend it with their lives, saying "clearly this fence always existed, and thus should always be."

Chesterton's 'modern reformer' is a strawman as bald-faced as any, and it's all too easy to use "Chesterton's Fence" as a defense of mindless conservatism.

> What if the fence was put there for purposes of adverse possession?

Answering that question is the very reason for pausing to investigate its purpose. Questioning past purposes before reforming is not equivalent to "defending the fence absent any documentation." Your own characterization of this "mindless conservatism" is itself a straw-man of the behavior Chesterton's fence recommends.

The alternative is an endless cycle of deconstruction that impedes any progress, as every possible step or rationale for it brings a possible mask to power. The Nietzschean project, though fun, has produced nothing besides criticism, and would never succeed in taking any kind of action about said fence.

> Answering that question is the very reason for pausing to investigate its purpose.

So you answer the question and determine the fence is there because of adverse possession. And then everyone else you present your findings to says, "that's not a good reason, so clearly you're not using Chesterton's fence and we shouldn't make any changes." See the issue?

Chesterton's fence and/or the Principle of Charity only have the potential to lead you to a good answer if the other party is putting in a good faith effort. But as soon as that's not the case, these cognitive rules of thumb basically become tools of oppression. And if you've ever watched congressional testimony where the parties are forbidden by rule from criticizing each other, it's easy to see that lots of people purposely take advantage of this.

Also, in the context of business the Principle of Charity means assuming that people are acting in their self-interest, but in the context of politics or whatever it means assuming that people aren't acting in their self-interest. As a rule of thumb for improving your thinking, these sorts of ideas probably make sense, especially for things like entrepreneurship or software engineering, but at an epistemological level the phrase "not even wrong" comes to mind.

> So you answer the question and determine the fence is there because of adverse possession. And then everyone else you present your findings to says, "that's not a good reason, so clearly you're not using Chesterton's fence and we shouldn't make any changes." See the issue?

I must confess that I don't. I guess this depends on why you are doing this: if your goal is to convince other people that you should remove the gate, you must accept that it is a possibility that other people may not find your argument convincing. That is their prerogative, right? If you need their permission to remove the gate, there is nothing you can really do without convincing them anyway.

If, however, you are doing this for your own benefit, it may help you understand the system and propose/tweak your own solution to the problem better.

Or maybe I don't understand what you are saying here.

> the entire burden of proof on the person who desires to take it down.

That's exactly the point though - if a person can successfully defend the existence of a fence against your insistence for change, they've not demonstrated that the fence is required beyond all doubt; they've demonstrated that YOU are not capable enough to be the one that changes it, since your knowledge of the domain can't even defeat a "silly" insistence on maintaining the status quo.

Isn't it just a nice little parable intended to clearly transmit the idea?

Parables are the lowest form of argument. A valuable principle should be stated directly. It is absolutely worth making the caveats explicit, because there will always be people who try to enforce the exact letter of the principle.

Implicit in my comment was that it would be strange/tiresome to present it as a solid argument.

I more or less don't believe that principles are actually possible to act on in a principled manner. If everyone all at once took a deep breath and started calling the things they call principles ideals, we could probably avoid a lot of bad discussions (because implicit in that labeling is the observation that compromise sometimes happens).

I've seen this very often here when the topic of regulation comes up. Some think that any regulation of any kind is literally evil and exists for no reason other than to get in the way of "innovation". They demand to repeal them (or just ignore them, breaking the law) without understanding what they are for, just because they don't fit a particular business model.

Anyone interested in the necessity of understanding history for philosophical and moral thinking ought to read MacIntyre's Whose Justice, Which Rationality. It argues that a "history-constituted tradition" is the inevitable system in which we make intellectual progress, and analyses the major movements in philosophy as support.


I ran across a slightly more polemical phrasing of this principle once, while googling for something unrelated. I've long kept it in a file on my work desktop as a sort of reminder to myself:

"There are often very valid (and even necessary) reasons for why certain things are done in certain ways; these reasons often become clear to us only after we have more deeply investigated into the full details of how things work, whereas our first, less than fully informed reaction may be to regard them as silly, or to attribute them to the wrong cause or agent, or to entirely misunderstand them." (John H Meyers)

This statement worries me. It's equivalent to small-c conservatism and sounds suspiciously like "You don't know enough to have an opinion" - which is perfectly fine when it's reality-based, but can also be an easy excuse for enforcing groupthink when it's not.

There's also the implication that alternatives have already been tried. But what if they haven't? How are you supposed to tell?

Clearly you can't, unless the alternatives have been documented historically. If there's no documentation you're firmly in the land of convention, tradition, and opinion - not effectiveness.

I'd prefer a model that assumes running a start-up is like running an experiment on a market. You probe the market with a complete package of technology, marketing, networking, and funding. Then you assess the results. And then - the hard part - you try to work out which parts of the package aren't working, and design a new experiment.

You absolutely should examine competing models for validity, and you absolutely shouldn't assume they're wrong because you're just that awesome.

But assuming they're a better market fit without reality-testing and evidence makes as little sense as assuming they're bad at what they're doing.

If that were true, there would be no room to innovate at all.

I didn't think about it that way -- valid criticism, and one that is strongly apparent when presented without context. In context (https://groups.google.com/d/msg/comp.sys.mac.apps/uWq5nUOa-1...) it was more about someone coming to a big software system and going "ooh, this decision is stupid" without having done that reality-testing themselves. So I interpret it more as a "don't make snap judgements, analyse the situation carefully" than anything else.

Understanding is also important when, instead of getting rid of something or not copying some tactic, you want to copy it. I was once in a cheap cinema where, instead of the heavy red curtains, they had some structures painted red on the walls. My guess is that the curtains are there for better sound quality; the sound in this cinema was really bad because of reverberation.

The only problem is that it's sometimes not possible anymore to find out why something is the way it is.

Might depend a lot upon who erected the fence in the first place: is it a fence in your mind, erected after your considered opinion has decided the path it wants to take? By all means, tear the fence down in your mind first; you might get an inkling as to why the fence exists in the first place. Of course, if the law or institution emanates from the people who think others' thoughts, you may be able to figure out faster whether you need the fence there or not.

Problem: sometimes (often?) the reasons for the fence are so old and so far from relevant today that a cautious reformer cannot reasonably figure out why it's there (unless he first becomes an historian). If we insist he first understands the original reason, the result is that the fence remains, at much cost.

If you've ever worked on legacy software, you know what I mean.

Thanks for this.

There's a very similar notion in negotiation; that you should understand why your opponent is willing to grant a specific position before you accept it.

The same presumption happens for people, too. Developers tend to assume that the people that wrote the terribly messy code that you inherited were incompetent. I think a much more productive and healthy attitude is to assume that everyone was doing the best they could, given their resources, knowledge, and deadlines at the time.

That might be a false assumption (look, some people just don't care) but you gain very little by complaining and getting mad at things that already happened.

We love to complain about things our predecessors did wrong, but often, we don't do those things either :)


One example I see all the time is my more liberal/left-leaning friends assuming that their conservative adversaries (especially on the internet) are motivated solely by stupidity.

This trips them up when they are confronted with evidence of intelligence and solid reasoning from the people they had presumed to be capable of neither.

The reverse also happens of course, with the conservative person characterising their adversary as the stereotypical "Dumb Hippy".

Also happens with religion.

There is no similarity between that and religion. Religion by definition is an unjustified belief in the supernatural - faith. There is no reason or logical backing for it. It is a malfunction of your reasoning system - perhaps heavy cognitive dissonance.

Here's the thing. Both with the religion argument and in general: even if you're right, your logic is unassailable, and the other person is provably wrong, you will still benefit from not assuming stupidity/broken thinking/etc, and instead seeking to understand the other person's position as thoroughly as possible.

At one extreme, you may discover a flaw in your reasoning. Even if not, you may find that while not entirely sound, there are valuable aspects to the other person's point of view, which can make your world-view more nuanced. Or perhaps you will simply confirm your initial perspective... but even then, you're much more likely to inspire the other person to think on the subject than if you come in with the perspective that they're provably wrong (and perhaps a bit pitiable for it). By seeking to really understand someone, you inspire them to do the same for you.

GP said: "assuming that [others] are motivated solely by stupidity."

You said: "by definition is an unjustified belief ... no reason or logical backing ... malfunction of your reasoning system"

exactly proving the point. You've made a blanket statement regarding anyone who has come to a different conclusion from you regarding the supernatural. It must be stupidity! It can't possibly be justified, or even clear the bar of not-totally-irrational! There's no room to even try to have a conversation, because "by definition" people on the other side are stupid.

No not stupid. I see religion more like a disease. Not many turn religious as adults. 99.99% have it passed down by their parents and their extended family and the culture they live in. They live life without questioning it and can't even see past it. Some people just are lucky to not have it imposed on them or wake up later in their life.

> 99.99% have it passed down by their parents and their extended family and the culture they live in. They live life without questioning it and can't even see past it.

I was raised in a religion that emphasizes understanding the belief system and arguments for it (the focus on valuing the truth was what caused me to ultimately leave, btw.). So I do have first-hand experience of what it means to believe in something for perfectly good reason with strong, consistent arguments. And I have to tell you, while most religious people probably believe for reasons like you just described, I've seen and talked to many self-proclaimed atheists and many refuse to believe for exactly the same reasons - they were raised as atheists. Or they discovered that atheism is what cool people do, or didn't want to stand out from the crowd.

People who don't believe in the supernatural aren't inherently smarter than those who do. For many (I suspect most) of them, atheism is a blind faith, the same way Christianity is for their parents.

> "They live life without questioning it and can't even see past it"

I have met very few people like this.

And, again, you are making the OP's point. You have it all figured it out -- you know how wrong everyone who disagrees with you is. Whether you call it "stupid" or "a disease" or "irrational", the point is that you're not attempting to learn from them or understand them, you're simply being dismissive of everyone who concludes differently from you.

Maybe I misunderstand the OP's point, but I specifically took issue with this very topic, at the expense of my karma apparently, because religion is different.

Religion is 'solved' in the sense that religions are man-made belief systems, all of which proclaim to be true, and most holding mutually exclusive claims between each other.

It is not debatable in a way left-right politics and their backers' stances and reasonings are. I'm not hating on them because it is not their fault. I feel for those trapped, I really do.

Given time, societies turn (and have already turned in large parts) away from it through education and the understanding of the universe and the human condition via the scientific method. There is no middle ground in this specific instance.

You would not entertain an alchemist or an astrologist either. Religion will soon join them in that regard in the collective consciousness.

> You would not entertain an alchemist or an astrologist either. Religion will soon join them in that regard in the collective consciousness.

And why do you think people won't entertain an alchemist or an astrologist? Why do you think religion will become similar to those disciplines?

It's not because people are getting smarter. It's because alchemy is not popular, science is! Most people don't understand a thing about either, but they have a firm opinion. That's just blind faith, the only thing that changes over the centuries is the object of faith.

It's completely orthogonal to the discussion about which is right and which is wrong. Most people don't know and don't care, as long as they believe the same thing their peers do, so I wouldn't be quick to judge one group, because the other is not smarter, it just sticks to the currently popular belief.

keep in mind that the article was about people who are not domain experts, seeing things they don't think make sense within a domain, and deciding those things are stupid prior to gaining proper understanding.

I feel fairly confident not entertaining someone who claims to be able to turn a lead brick into gold in their living room, because I know enough about physics and chemistry to understand the energy difference between lead and gold. But I would entertain someone who had nuclear-lab-grade equipment who claimed to be able to convert a small number of atoms. They might actually be full of crap, but my domain knowledge is not specific enough to be able to say.

When it comes to religion, I think the same thing with non-experts happens, and you're doing it. You claim to know all religions are "man made belief systems", "unjustified", with "no logical backing" -- claims that a domain expert can make regarding perhaps a few religions they're intimately familiar with, but not that anyone can actually make for all religions. How many religions have you studied deeply enough to really be able to make that claim? If Christianity is on your list, have you ever read the Didache, or De Principiis? If Mormonism is on your list, do you know about the Seer Stone or Elijah Abel? If Judaism is on your list, are you familiar with the Midrash or Siddur? These aren't particularly obscure references; they're things anyone well-enough versed in any of these religions to say "it's bogus because..." should know about.

If you don't know about those things, but you're sure each of those religions are "man-made", "unjustified", with "no logical backing", you should consider where your certainty comes from and how certain it really is.

Religions that claim to be correct can be disproved by refuting just one of their central claims. It is unnecessary to know everything about all of their other claims.

This requires you to be a sufficient domain expert to be able to:

- identify the central claims of a given religion

- identify the version of a central claim which is actually necessary for the religion

- determine what it would take to "refute" one of those claims sufficiently (rather than, for example, merely calling into question one of those claims, or refuting only one variant of a claim which has other variants.)

I contend that nobody on earth knows enough about "all religions" to be able to make such a claim. I further contend that, of the religions I mentioned above, if you didn't at least recognize the details I named, it's reasonably likely that your expertise is not deep enough to be able to follow those steps. For example, if you have never heard of The Didache, your awareness of Christianity is probably focused on only a small subgroup, whose "central claims" don't necessarily correspond to the claims of the broader religion; refuting one of that subgroup's claims may or may not have any relevance for someone from another subgroup.

I would contend in response that it's unnecessary to play whack-a-mole with countless subtle variations on the same theme if there are refutable core claims shared by a plurality of those variations. For example, the idea of a young earth is easily refuted, as are some sects' claims about the nature of human gender. Other broadly held claims, such as an unerring divine origin of scripture, lack any evidence, and thus can be dismissed, if not strictly refuted.

With regard to identifying essential claims, I'd propose that any claim that is believed by a good number of the sect's adherents is an essential claim, as refuting it invalidates those particular adherents' beliefs.

Beyond that, the question is what epistemological value can be derived from the remaining religions? What reason can be given to accept them? The burden of proof should be on them to demonstrate that they together, or one of them alone, should be accepted as accurately describing reality.

P.S. Thanks for the interesting discussion.

> "it's unnecessary to play whack-a-mole with countless subtle variations"

Sure -- but unless you're reasonably well-educated on a particular religion, how do you know if the variations are subtle or substantial? How do you know if they're "broadly held", particularly among that religion's scholars? (You brought up "unerring divine origin of scripture", which is a common belief of Fundamentalist Christians; do you know how broadly that belief is held by non-Fundamentalists? Do you know what the other common views are among the more populous Christian and Jewish sects regarding the same scriptures? Are you aware that Mormons believe the Bible has been corrupted?)

> "any claim that is believed by a good number of the sect's adherents is an essential claim, as refuting it invalidates those particular adherents' beliefs"

In practice, it doesn't work that way. Every religion I've studied carefully has a lot of inessential beliefs which are nonetheless widely held -- beliefs which, if they were overturned, would not in any way shake the faith of that religion's adherents.

It's common for outsiders to misidentify how popular certain beliefs are, how strongly they are held, and how essential they are. It's also common for outsiders to believe they've refuted something, when in reality they've stated some fact that has been known and accepted for centuries and which is actually the tip of the iceberg for scholarly study within a given religion.

That's why I suggest that, if you haven't heard of some of the specifics I named above, you're probably not capable of actually refuting those religions. It's not that those things are critical, so much as that to identify and address the actual core beliefs in relevant ways requires a depth of knowledge which would also expose you to topics like the Didache, the Seer Stone, or the Midrash. (And I wouldn't dream of making any sort of claim about the refutability of "all" African tribal religions, or "all" eastern religions, because I don't have that sort of depth of knowledge. I can say that I don't presently believe any of them, but that's a much weaker claim than "they're all irrational and a disease".)

> "The burden of proof should be ... to demonstrate that they ... accurately describing reality"

This ties us back to my original contribution to this discussion. How does one demonstrate that their position accurately describes reality, to an audience that thinks they are "motivated solely by stupidity"? How does one demonstrate there is value to be found in their belief system to someone who issues blanket dismissals? How does either party get value out of a conversation if one of them believes it's a monologue -- an opportunity to tell their stupid opponent how wrong they are, with no expectation that they might learn something?

If you begin with the assumption that someone else is stupid and you don't need to listen carefully to them, whether you're talking about their religion or politics or software, then it's unlikely you'll go through the sort of critical thinking process the original article described (and likely you'll "get lazy about challenging your own assumptions".)

Honestly, friend, you know very little on the subject of religion if you say 99.99% live without questioning.

But let's just say that some of the greatest philosophers were also believers in some sort of deity. As were some of the greatest scientists.

Sounds like love. Or friendship.

Some of the worst code I've had to work with in my nearly 10-year-long career so far was written by one of the smartest programmers I've met. His problem was that everything became an exercise in designing this all-encompassing, abstracted-to-hell-and-back web of interfaces, services, aggregates, domains, etc.

Indicative of this was a request to take a table of data that already existed (on a website) and add a button to export a CSV of that data. For anyone familiar with .NET (and I assume most other languages), if you've already got the data in the format you want to export, this is literally a 10-minute task, including a unit test or two for the functionality. His quote was something on the order of 18 hours, which included time to write a set of TableDataExportService methods that would support a whole host of file formats in the future.

I've consumed enough CSV data to be wary of CSV data that was generated in ten minutes - including the unit tests. Are you sure you already have the data 'in the right format', for example? Maybe the data that's presented on the page has already been formatted for localization, so when you use that in your CSV export you put out dates in US or European format depending on the user's preferences, creating some hard-to-track-down integration bugs later. Or maybe it only includes the display name for the status code, not the status code itself, so when three months later you change the display name from 'cancelled' to 'removed', all your clients' Excel macros break.

And once you've done that ten minute job on this page, how long will it take you to add it to the other 25 pages which also have tables of data that need CSV export? And when the table format changes to add another column, does the next developer also have to adjust your CSV output code?

Sure, YAGNI, but... there's no excuse to just throw a bunch of CSV-export logic inline into a page that previously had shown no interest in knowing how to format CSV files. Take a little longer, think about where to put the logic. There's a middle ground here.
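The pitfalls described above (localized dates, display labels instead of stable codes) are easy to sketch. The thread is about .NET, but the idea is language-agnostic; here is a minimal Python illustration, with hypothetical field names and rows, showing an export built from raw domain values rather than from the already-rendered page:

```python
import csv
import io
from datetime import date

# Hypothetical rows backing a web page: each row keeps both the raw value
# (stable, machine-readable) and the display value (localized, and liable
# to change when the UI copy changes).
rows = [
    {"order_date": date(2015, 8, 10), "status_code": "CANCELLED", "status_label": "Cancelled"},
    {"order_date": date(2015, 8, 11), "status_code": "SHIPPED",   "status_label": "Shipped"},
]

def export_csv(rows):
    """Export raw values, not the localized strings the page displays."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["order_date", "status_code"])
    for row in rows:
        # ISO 8601 dates and stable status codes survive locale settings
        # and display-label renames; 'status_label' is deliberately omitted.
        writer.writerow([row["order_date"].isoformat(), row["status_code"]])
    return buf.getvalue()

print(export_csv(rows))
```

Whether that needs a full `TableDataExportService` is another question, but even the ten-minute version has to decide which representation of the data it exports.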

I started on a project filled entirely with 'senior people' once, and was really pumped about the prospect of doing Serious Programming with serious people, instead of doing a bunch of mickey mouse crap.

Six months later everything was going horribly wrong because there was not a single one of us who was willing to solve a simple problem with simple code. Everybody was engineering the hell out of every single 'solution' and the code was impossible to read.

From this I learned a couple of things. One was that I had not learned as much from my Second System Syndrome as I thought I had. The second was that every project benefits from having people who are entertained by solving 'mundane' problems, to whom you can assign all concerns that are not part of the information architecture.

But the most important is that the best solution is -never- the one that is dazzlingly brilliant. It's often the one that's subtly clever (everyone agrees "that works", but some can wax poetic about how great it is at satisfying the concern), but sometimes it's the one that's dead simple.

Few solutions are easier to replace than the dead simple one.

I don't mean to advocate against YAGNI or writing simple code. I can't speak authoritatively to your specific example, but what I am suggesting is that you default your assumptions to the view that this developer had some reason for the choices that were made.

Maybe this client was notorious for asking for CSV export but really meant CSV, XLS, XLSX, PDF? Maybe the build and release infrastructure is so slow that any change - no matter how small - needs 3 days to be built, tested, and deployed? Maybe the complexity makes sense in other areas of the system and they decided to adopt patterns across the codebase to aid in teaching/onboarding new developers?

Just to be clear, maybe you will find that the reasons are totally invalid and this is gold-plated-abstracted-to-hell code. (It sounds like you probably will)

But if you assume from the start that everything is an over-complicated pile of junk and you could rewrite it in a day, I think you will find yourself jaded and unhappy with your environment.


Software should be implemented as simply as possible, and then refactored as necessary when new functionality is required (the only exception is things you are very sure are going to be needed, e.g. password reset functionality on a password dialog).

Anyhow that is my development philosophy.

Yeah this was some of the most anti-YAGNI stuff I've seen.

Which is not meant to detract from the fact that the code was great 99% of the time. It just took 10x longer than it should have and cost 10x as much and half of it was never used.

"A great tailor does little cutting" ;)

> Developers tend to assume that the people that wrote the terribly messy code that you inherited were incompetent.

Don't over-generalize.

There is messy code and there is messy code. Changing shared data without synchronization is incompetence, but having a 3-page-long for-loop is not. In reality though, once you've cleaned stables a dozen times, spaghetti code is the red flag of incompetence, and the correlation is there.
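For what it's worth, the "shared data without synchronization" failure mode is often a one-line fix once spotted. A minimal Python sketch (the counter, thread count, and iteration count are made up for illustration):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # The incompetent version: an unsynchronized read-modify-write on
    # shared data. Under concurrent callers, increments can be lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The fix: serialize the read-modify-write with a lock.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; often less if unsafe_increment were used
```

The point of the contrast in the comment above stands: this kind of bug signals a gap in fundamentals, whereas a long but correct loop is just style.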

> once you've cleaned stables a dozen times spaghetti code is the red flag of incompetence and the correlation is there.

Question is, where's the incompetence?

Yup, like any profession, there are developers out there who just suck.

But the same is true of middle and senior company management. Can you honestly say you've never cut a corner because some incompetent manager mismanaged the schedule or customer expectations and then forced you to compromise quality in order to meet an absurd date?

And in any organization with that type of management, those issues are rarely isolated. Any one manager can do a little damage, but when an organization is dysfunctional, well-meaning coders on the ground may be forced to compromise code quality time and time again in order to deal with unrealistic schedules.

Think of it like a professional sports team. Sometimes, yeah, the players just suck. But a bad coach or general manager can have an outsized effect on an organization.

Now, I would never claim that all bad code is a product of bad management. Again, some folks just suck at their job. But speaking as a guy in middle management, I'd be willing to believe at least half of the bad code lying around in the real world is a product of an incompetent management structure.

Getting back to the article, that would represent the kind of hidden incentive that would cause an outsider to assume developer incompetence, even though the reality is very much different.

I absolutely agree that there exists terrible code and that there are incompetent individuals or folks that simply don't care. But if one assumes that as the default response, I don't think that is very helpful.

Even in your example of shared data/synchronization - what if the original developers were told that the code would never need to be thread-safe? Or what if there was a constraint that required this trade-off to be made? It's not so black and white.

Hell, even incompetent-looking code (like managing shared resources poorly) can often be explained by a small script done by an amateur turning into a side business and then a full-fledged company. Experience and knowledge of an industry beyond software is often many times more valuable than knowledge of parallel programming. It doesn't mean the code wasn't valuable or good enough at the time; it just no longer meets the rigor and demands, and that's why you're employed to work on it. To call everything shitty that's below your own standards, level of education, and experience is a naive view of how software actually exists in the world. Especially when the standard advice given to "idea people" is to learn to code their idea themselves.

It is helpful. It gives you the confidence to change things. It allows you to feel productive. And it's true often enough that it's a reasonable default, IME.

That seems like a dangerous way to feel confident in your own work and abilities to me. Pinning your self worth to comparisons with others is not so helpful, long term, as you might think.

Over-generalizer, know thyself.

I believe that the error the article describes comes from the family of "inside view" errors (see https://en.wikipedia.org/wiki/Reference_class_forecasting). Taking an outside view instead is a very widely applicable principle.

The issue is that, even allowing for good intentions, the fact tends to remain that the code is messy, undertested, and underspecified.

I've worked on a legacy codebase where the developer admitted that the code was a hack job, got guilty-defensive about it, and then refused to schedule any time to help make progress.

You just want to shake them by the shoulders and be like "Look, it's fine that you did the best you could, but you need to either help clean up the mess or get out of the way."

Chesterton's Fence is wise in moderation--otherwise, it ends up as technological hoarding.

Sure - at the end of the day even the best intentions can result in poor-quality output. I've just found that my own personal outlook is much better if I stop complaining about how some other person messed up and focus on where the code is currently and how we can solve the problems at hand given our new knowledge/resources/constraints.

Oh, on that I agree. I'm just very frustrated with, having taken those steps, being blocked because ~reasons~.

There's also the fact that decisions can be very hard to un-make (even in something as malleable as code). This can lead to a bunch of slightly bad/messy decisions all coming together into something that is unimaginably messy/obtuse.

You make a very valid point! Thx :-)

To generalize: competitors in a market usually behave rationally, and what looks like "stupid" behavior from afar may actually be driven by unseen incentives.

Which suggests a test for your understanding of a market: can you map out the incentives and explain why what looks like apparently-irrational behavior is happening?

For example, in healthcare, we waste 30%+ of the $3T we spend each year. Much of that waste is due to hospital readmissions for an ongoing condition like heart failure. Startups sometimes try to fix this by developing a special machine learning algorithm to predict readmissions and apply an intervention. But even when the technology succeeds, the business fails: hospitals charge for readmissions, so there's an active disincentive for the hospital to buy the product. (That is now changing with ACOs, and a change in incentives is an opportunity for new companies.)

The hospital admission/readmission policy (or lack thereof) is exactly the type of thinking this article is addressing. ER/urgent care doctors have a list of criteria for hospital admissions; the symptoms are not always present at the time but may come back later. If they use more caution than required, a hundred-dollar medical bill may turn into a several-thousand-dollar hospital stay over a minor issue. Doctors are dealing with imperfect information when making a diagnosis, so saying that they need an "algorithm" trivializes a complex problem. There is room for a lot of improvement in terms of helping medical staff make better decisions with more complete information, but I am uneasy that some of the new regulations in this area will end up handcuffing doctors in ways that will not always be in the best interests of the patients.

Note: I'm not medical staff. Many years ago I processed claims for an insurance company and had some long and heated discussions about this topic with hospital staff. If some of these new regs are like what my oversized insurance company had, it will not end well.

It's not just ACOs. CMS is also changing the reimbursement structure to penalize hospitals that have unnecessary readmissions.


I don't understand your comment. You say "we waste 30%", but then say "hospitals charge". Who's the "we" who's wasting 30%, and why are startups trying to sell something to prevent X to the people who make money from X, rather than to the people who lose money from X?

"We" referred to US healthcare spending [1], and the party with the right incentive for this problem is the payers: Medicare, Medicaid, and private insurance like Aetna, Blue Shield, United Healthcare, etc.

While the payers have the incentive to reduce readmissions (saves them money, leading to lower insurance premiums for you), they don't usually have the access to do so -- they're not the one seeing the patient or prescribing medicine.

The payers could, of course, try to change the way that THEY reimburse the hospital to align the incentives. For Medicare and Medicaid, that requires a law--which is why the Affordable Care Act is creating opportunities for new startups as it rolls out. For private insurance, I think they'd like to change reimbursement, but they have relatively little market power compared to healthcare providers: http://rockhealth.com/wp-content/uploads/2012/12/Kocher-et-a...

[1] Completely different set of incentives (and thus problems) in other healthcare systems.

That clarifies things, thank you.

A readmission is basically the result of sending someone home early or failing to care properly for them on an outpatient basis after they leave the hospital. Sending someone to the hospital twice when you could send them once is wasteful.

Ah, thanks. Wasteful unless you are the hospital, indeed.

There's no actual stupidity there - just conflicting unstated goals.

There's an inevitable conflict between "Make as much money out of patients and the entire health system as possible" and "Care for patients as efficiently and effectively as possible."

Agree. Same can be said about countries. Laws/incentives vary drastically and change what the "correct" strategy is from country to country.

An excellent keynote presentation showing this in the public-home-maintenance business in the UK was presented by John Seddon at https://www.youtube.com/watch?v=hbNsQFd8DQw

I have never disparaged my competitors, but I'm a small fry. I can say that during four years at Microsoft (1996-2000, Development Tools group) I never heard the products of competitors disparaged that way. In fact, there was a weekly presentation of competing products and invariably the interest was in where we were lacking, not what was bad about them.

Likewise when customers came to visit us at trade shows my boss would sit politely through their compliments, then immediately jump to the question "So what don't you like about our product?"

Fast forward to today. I'm friends with top people at a Really Big Guitar Company and a Huge Amplifier company. Even in private, these C-level execs show nothing but respect for products of their competitors. They are not ashamed to own and even personally use said products (especially vintage ones).

It seems to me that dissing your competitors even privately can make you dangerously blind to the challenges they pose to you, set a bad example for your employees, and also restrict your job prospects should you decide to work for a competitor one day.

Interesting - thank you for sharing this.

This is a great thing to consider, and I think this presumption of stupidity bleeds over into other areas of life too.

Developers: inherited code is considered guilty until proven innocent. Or maybe more accurately, guilty until you've rewritten it. Surely the old developer had no idea what they were doing.

The "other faction": Democrats/Republicans, different religions, rich vs. poor people... most generalizations about the faction you don't belong to start off with thinking "they're so stupid". "Look at those Republicans/Democrats. Can't they see that Trump/Obama is just lying through his teeth?"

Bad actors: The presumption of stupidity carries over into the way people think about computer hackers and terrorists and the like. You'll see stories about how "those terrorists are learning how to use cell phones to detonate bombs!" or how "criminals are migrating online to prey on people with phishing attacks!" The underlying assumption is that they're stupid, but getting (dangerously) smarter.

I think we'd make a lot more headway in most areas by assuming our competitors, detractors, and wrong-doers are probably already pretty smart.

> Inherited code is considered guilty until proven innocent. Or maybe more accurately, guilty until you've rewritten it.

I'm facing this dilemma right now, and I find it a whole lot more nuanced. The previous dev definitely had the right ideas, but he also produced XXXXX lines of code. It would take me years to wrap my head around the reasons for all these lines. Is this a bug or a feature? It would take me weeks to rewrite it and drop 75% of the corner cases for which I have no test case and no clue why they were coded. Then I'd become the grumpy old dev who says "no" to 95% of feature requests to keep things simple. At the risk of my job.

Often the reason can also be evolution. Where the code started off doing something specific - but then as it evolved, more features were added, approaches changed, the code evolved to do something totally different.

Also "starters" - people who are good at getting something shippable fast and iterating on features are not the same people as the ones who write truly great code - simply b/c they're driven by different things.

But it's very situational.

You might find value in exploring the idea of Characterization Tests or Software Seams.

or Working Effectively with Legacy Code.
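For anyone unfamiliar with the characterization-test idea mentioned above, here is a minimal sketch. The `parse_price` function is a hypothetical stand-in for whatever inherited code you're afraid to touch; the point is that the tests pin down what the code *does* today, not what you think it *should* do:

```python
def parse_price(raw):
    # Imagine this is the inherited legacy code: quirky, but in production.
    raw = raw.strip().lstrip("$")
    if raw == "":
        return 0.0  # surprising? maybe -- but it's current behavior
    return round(float(raw.replace(",", "")), 2)

def test_characterization():
    # Characterization tests assert observed behavior, edge cases included,
    # so a later refactor can be checked against the status quo.
    assert parse_price("$1,234.567") == 1234.57
    assert parse_price("  $0.10 ") == 0.1
    assert parse_price("") == 0.0  # document the weird edge case, don't "fix" it yet

test_characterization()
print("all characterization tests pass")
```

Once these pass, you can refactor (or rewrite) with some confidence that you haven't silently dropped one of those 75% of corner cases.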

A lot of people seem to be unaware that companies survive by satisficing. That is, you don't have to do everything "right" to succeed in business. You just have to do most things well enough to not fail (don't break the law, don't forget to file your paperwork, pay your bills, etc.), and a few (one?) things outstandingly enough to win customers.

We've been brought up in school to think we have to get nearly every answer right on the test in order to get a good grade (and get more than half of the answers right just to not flunk out). In the real world, getting one right answer, and not screwing the rest up too badly is often enough (and sometimes only barely achievable!).

So maybe your competitor did something "stupid" because they're stupid, or maybe it's because that thing doesn't actually matter that much, and they're focused on doing something else incredibly well instead.

An important line in the piece:

"Of course, just because you presume intelligence doesn't mean that every decision made was smart."

I'd rephrase as follows: it's unwise to assume stupidity on the part of your competition, but it's very wise to allow the possibility of stupidity.

With the corollary that if there's an inexpensive way to capitalise on that stupidity if it exists, it's probably worth trying, just in case the thing that's walking like a duck and quacking like a duck is in fact a duck.

As a tangent to that - the chances that assumptions of stupidity are correct go up in direct proportion to your level of domain knowledge.

I see a lot of non-film people say "the movie industry does $FOO and that's really stupid", for example, and 95% of the time, they're wrong and there are good reasons for doing $FOO.

However, I also see people who know the film world (including me) say "a lot of / most filmmakers do $BAR and it's dumb" - and $BAR has a considerably higher chance of actually being a dumb, common mistake.

A little more succinct might be: "It's unwise to assume stupidity on the part of your competition, but don't rule out stupidity."

I've had the experience that things people identify as "stupid decisions" are often just "economical decisions."

For example, a company I worked for had the best technology, but bad UI and the competitors had good UI, but their tech was old and inaccurate.

For years we thought they were imbeciles, because they didn't update their tech and we would smash them in the future, because they cannot catch up with us.

But in the end the customers bought the software with the better UI and didn't look behind the scenes.

So their decision was logical. Why pour money and time into parts of the software no one wants to pay for?

This sounds like an instance of the fundamental attribution error [1]. It's a known human cognitive bias to blame others' failings on internal characteristics while seeing your own situation as more of a product of external influences.

[1] https://en.wikipedia.org/wiki/Fundamental_attribution_error

Also relevant is the reversed phrase, "Never attribute to stupidity that which can be adequately explained by X (some other reasonable cause)."

The important thing in analyzing a competitor's behavior is to understand the incentives motivating that behavior.

A common example in startupland is a company whose senior management has short term incentives that reward a fast exit over long term growth. That company may very well behave in ways that appear dumb to competitors with a long term focus. But if the "seasoned" CEO and his cronies get their compensation even in a mediocre deal, why bother trying to build a company for the ages when they can cash out, rest a bit and land in a similar situation at the next gig?

When I read the essay, I thought of P Thiel's question of self-reflection that analyzes in the reverse direction:

"What important truth do very few people agree with you on?"[1]

I interpret "truth" to really be a highly-opinionated belief rather than something like "2+2=4". In other words, what factors do you believe in that would make the business model successful that outsiders would dismiss as insane or stupid?

(On a trivia-related note: I notice the blog has the title "stupitidy" instead of "stupidity," so I'm not sure if there's an inside joke I missed.)


Founders presume the stupidity of the competition because they're arrogant. Silicon Valley, with its notion of creative destruction and disrupting the establishment, encourages arrogance. We're blinded by the notion that new always trumps old, so we never consider that the established industry has reasons behind how it runs.

I agree, but I think this is what makes SV wonderful. You have to be somewhat naive to think your approach is going to be better than the status quo, particularly when your competition is well established, and probably well capitalized.

The reason big companies can't turn on a dime is that they have a lot of people who work for them, and an existing customer base. It's difficult to align all of those people on a new, and potentially better, way of solving a problem. It's not that they are collectively "idiots"; it's that they have an established business with an established track record. This is quintessentially the Innovator's Dilemma. Technology markets almost always move downmarket to the cheaper solution, which at first looks like a "toy".

Yes! I wish more people would actually read that book instead of throwing the word "disruption" around vaguely. After reading it, you'll realize the established business almost always has the advantage, and startups only succeed in certain environments.

One thing I learned the hard way is that if you are on the right track, your competitors are probably barking up the same tree and are further along than you would think based on what is public.

For instance there was a period of many years where both Google and Bing image search were embarrassingly bad and I was able to build something far better for a certain range of queries.

It took me a year to build out my system but in that year, Bing and Google both improved dramatically, so my demo comparing results with them was no longer impressive at all.

Technology has some phase changes, where it suddenly changes from "doing X is extremely hard" to "hey, X is actually very easy to do!" without any ado, and no obvious reasons.

There are too many histories where after a long time of nothing happening, everybody suddenly starts working on the same problem, without any kind of coordination.

Great advice.

I do think that you should try to think about how you might try to solve something before looking at what your competitors do. The reason being that it's easy to trap our minds into thinking that there are no other solutions unless they fit into a similar box of what's already working. Naïveté combined with thinking for yourself can often be a powerful reason why many startups succeed.

If your solution ends up looking similar, at least it was likely derived from first principles vs. the path of least resistance: blind copying.

I tend to err on the side of caution with stuff like this, for instance when inheriting a code-base I assume the previous author actually knew what he/she was doing. But sometimes (not often) that can work against you as well. For instance when after spending sufficient time with said codebase you realize the original writer was entirely out of their depth and this was likely the first time they'd attempted to write something this complex.

But more often than not it is the presumption of intelligence that pays off.

I couldn't disagree more. When Mark Zuckerberg turned down $1 billion from Yahoo[1] at 22, when FB was two years old, because they were "stupid and didn't get it, so they obviously weren't valuing the company properly," he was right.

The direct quote is:

>Thiel described the argument Zuckerberg finally came down on like this: "[Yahoo] had no definitive idea about the future. They did not properly value things that did not yet exist so they were therefore undervaluing the business."

Yahoo's market capitalization in July 2006 was $42.51 billion. A 22 year-old presumed they were stupid, and he was right. [2]

Today FB has a market cap of $264.91B and Yahoo? Down to $35 billion after 9 years of growth.


[1] http://www.inc.com/allison-fass/peter-thiel-mark-zuckerberg-...

[2] by the way to get the market valuation at the time, I did this search: http://www.wolframalpha.com/input/?i=what+was+yahoo%27s+mark... I can't believe it worked! I used wolframalpha because this is the kind of search they promise they can answer - and they were right, they actually delivered. Nobody else on the face of the planet does this, and it shouldn't even be possible. But it is. If you think something is possible, JUST DO IT. If you think your competitors are stupid (compared to what you think you can do), you're probably right. (or you wouldn't have that thought.)

> If you think your competitors are stupid (compared to what you think you can do), you're probably right. (or you wouldn't have that thought.)

No, you are probably not the next Zuckerberg, or Jobs, or Page. If you don't understand why they're behaving a certain way, it's much more likely that you are the one who doesn't understand the market.

yaur, I'll simply have to disagree with you. If you ask most people "why is Nike stupid, why are their running shoes completely wrong" nobody would give you an answer, nor do I have an answer. Anyone (including me) would say: "They're not stupid, they put billions of dollars of research into engineering and have a great understanding, their shoes are very comfortable and that's why they can charge $100+ for them."

... except for the 50 people in the world who know something that Nike doesn't. Those 50 people who do have an answer to that question are the people who can topple them by doing something better.

Nobody delusional can explain something clearly and answer all your questions about it; if someone clearly explains something and you think they're delusional for it, likely you're just not qualified to judge. You may be the kind of person who, if you got sent back 1,000 years into the past, would learn to ride a horse and learn some trade, because it's "obvious" that recreating anything from the future is impossible, or someone would have done it already. Nothing wrong with that. It just doesn't apply to startups.

That advice in the context of startups is simply completely wrong, irrelevant, and inappropriate. Couldn't disagree with it more.

Yes, but what's amazing is that it's obvious how to calculate it: you look at a chart, mouse over the price at the time, figure out the number of shares at the time (mostly affected by any splits or new shares), and multiply. Instead of actually calculating it, though, I asked a robot to, using human language.

Finding someone who happened to ask the exact same question by doing an Internet search is a different problem entirely. Wolfram Alpha can instantly answer questions that require certain levels of deduction or calculation, and which have never been asked in the history of the world. Google can do that too: for mathematical equations like 453442433 * 234523432 + 3485478347434879. Applying this to questions of a general type that require real-world data is miraculous.
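The calculation described above fits in a few lines. All the numbers below are made up for illustration; the only real idea is price-at-the-time times shares-at-the-time, with shares adjusted for any splits since:

```python
def market_cap(price_then, shares_now, split_factor_since=1.0):
    """Rough point-in-time market cap: shares outstanding back then
    are approximated as shares today divided by splits since then."""
    shares_then = shares_now / split_factor_since
    return price_then * shares_then

# Hypothetical: stock traded at $30, 2.8B shares today, one 2-for-1 split since.
cap = market_cap(price_then=30.0, shares_now=2.8e9, split_factor_since=2.0)
print(f"${cap / 1e9:.1f}B")  # prints $42.0B
```

This ignores new share issuance and buybacks, which is why the chart-plus-filings approach (or asking Wolfram Alpha) gives a better answer.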

I think this is the classic problem of advice-giving that's so prevalent in the startup community today. It won't be too long until we're all praising a tweet or article claiming that only those who brashly challenge the status quo, assuming the entrenched players are vulnerable, bogged down with legacy issues, and fat and lazy on an existing revenue stream, are ripe for the spoils of disruption. Those who sit back and say "Well, maybe there's a reason they do things this way" aren't bold enough and won't be the recipients of those spoils.

Not that this article is bad. It's just datapoint 107 that a founder has to reconcile with all the other competing advice.

Many times throughout the Entrepreneurial Thought Leaders podcast[1], a founder expresses the sentiment, "If I had known how hard it was going to be, I might not have started the company." The difficulty discounting has also been cited as the reason why many successful founders are young and inexperienced. So while hubris can lead to disastrous consequences for engineers, it could be viewed as an asset for inciting action.

[1] http://ecorner.stanford.edu/podcasts.html

Not much else to add. Presume a larger, smarter, better-funded team is working stealthily in another office somewhere to kick our ass... anything else is complacency. Worse, following that path leads to hubris at some juncture: excuses rationalizing cutting the wrong corners or shortchanging the customer, which could prove fatal in a game of inches in the marketplace. There are at least a quadrillion ways to fail, and 99.997% of them will be my doing. Rational paranoia is healthy, because your product/service needs to be so well-regarded by people other than the team or supporters that it demoralizes potential adversaries into not wanting to compete. Even then, it still may not currently be as good as a competitor's in other key areas of focus.

(Btw, the Thiel view of not picking fights you can't dominate and Buffett's sticking to defensible business models is a good mindset to calibrate a venture's success per risk gut perception. And with timing, team and execution you might just make something that hits.)

Great article, taking the thinking further...

Everybody watches their competitors, it's entirely natural. It's solid advice to study them, and try to stay/get ahead of them where possible. This doesn't just apply to app features, but every facet of the business across many disciplines (sales/marketing/development/back office etc).

On the other hand, building a business based solely on a competitor's business decisions and not doing your own homework is the path to madness. We might take inspiration from our competitors, but we always check in with our customers next to make sure they actually want the feature. It's also our job to get feedback on not just what we're doing but also how we're planning to do it, as our users might have unique business requirements that our competitor's users do not.

I think you can generalise one more step up. Markets that look horribly inefficient may well not be.

Or maybe they are ;) How to find out? For some people, the only way to confirm is to try and be tested.

People may have a sense of superiority, especially smart ones. Chess world champion (and genius) Bobby Fischer once said: "My opponents make good moves too. Sometimes I don't take these things into consideration".

This mindset can be seen in other systems of thought. There are a large number of species known to man, yet we somehow think that we are the pinnacle of the tree of life, despite the fact that this is statistically highly unlikely when taking into account pure numbers alone. Considering the dimensions now accessible to us that were completely unknown even 500 years ago, it doesn't seem a large leap at all to posit that there are other dimensions we are currently unaware of that contain life forms in the same tree as ours that are far more advanced and perhaps even invisible to us.

Love the (deliberate? ironic?) mis-spelling of stupidity in the title/url.

Wish I could claim to have been either deliberate or ironic...but I actually got careless. Thanks for pointing it out!

The title is correct but the URL is misspelled. I assume they fixed the title after publishing so in that case the URL won't change. Clearly a typo in the original.

This is so true. As an investor, I hear a lot of pitches where the founders say their competitive advantage is that they execute better (the flip side of believing everyone else is dumb is believing you're especially smart). Do you know who else claims they "execute better"? Everyone. That kind of attitude usually reveals that a founder doesn't have a real sense of what makes their company special and defensible, and is a bit of a yellow flag for investors -- well, at least for me.

Working in academia, I've experienced this too. I've read work in my field of research that I dismissed as bad or not worthwhile, simply because I didn't fully understand it. Too complicated, strange background assumptions, not well motivated, etcetera.

Then later, while developing my own work, I find that I end up with the same complications, that I'm forced to make the same background assumptions, and I have the same difficulty in motivating my choices.

Uh, I was thinking this article is more business-side than operational? Simply put: a business exists as long as it stays afloat, and fences are often what keeps it from going under, even if they appear stupid from the outside. Many times, fences are the only common ground between sellers and buyers. Removing fences is, a lot of the time, pretty stupid: you geniuses operate at a loss and survive on artificial money or VCs only for as long as you're allowed to.

This is an excellent observation. When you encounter a suboptimal system, there's a substantial chance that it either produces some unnoticed benefit or results from some coordination problem that can't be overcome by "just not doing that".

In either case, successful solutions have to work around the gap in the system rather than simply charging into it.

Despite the flaws in the rational model of economics and the efficient market hypothesis more generally, I have always been fond of the more humble, observant posture it gives us in considering others' behavior. The flaws in the rational model are well-known. But as a presumption, it certainly works better than its opposite.

I am confident of the value I deliver but I don't call my competitors stupid. One person being correct or a winner doesn't mean others are stupid. Maybe some people can deliver value where the competition doesn't. Could be due to some leverage or insight or creativity.

This is probably not a correctable problem. People don't start businesses if they think the competition is highly competitive and intelligent. They start a business if they perceive a weakness in the market, or believe they have a unique capacity to succeed.

This works well when considering people in your own field, you know what it took for you to get there and you can assume they've had similar experiences.

However when you are considering the general public it is best to presume stupidity and design with that in mind.

I've experienced this myself.

Sometimes what our competitors were doing was stupid, and we ate their lunch.

Sometimes what our competitors were doing was the only way to really run things, and we had to adapt to follow them.

It is valuable to determine what technology, not available in the past, would cause a reasonable insider to change their decisions if they could implement that technology.

Once again, xkcd explains all.


Really? Is that the way some people reason about their competitors? Something like: "Oh, I guess they're just stupid and we're so smart."

Who in their right mind would do such a thing. Even if the competitors really are stupid (who knows?), it doesn't give you any advantage to assume that.

No, it isn't that way. It mostly ends up as a consequence of initially over-estimating the competitor. So your mind looks for their tiniest shortcomings to make sure that they do things wrong too.

Once it finds such a thing, the mind subconsciously registers the competitor as weaker than they actually are, to redeem itself from the uncomfortable prick of an all-powerful competitor. It doesn't happen to everyone, though. It takes a bit of rationality to overcome such conclusions.

I personally know people who'd read and stress negative reviews of their competitors' products, and eventually conclude that their competitors are not really thoughtful in their decision making.

TL;DR - Yes.

Ah, metaness!

So can we assume people who assume other people are stupid, are stupid? Or is there some benefit from doing things that way that might not be apparent at first sight?

One possibility that comes to mind is the substitution of parallelism for serial processing, or put another way, letting the world be its own model. Instead of one startup spending a lot of time thinking and researching (and maybe missing the market window if there was one), let ten startups just assume and go for it. Maybe nine will be wrong and fail and one will be right and succeed.

> Instead of one startup spending a lot of time thinking and researching (and maybe missing the market window if there was one), let ten startups just assume and go for it. Maybe nine will be wrong and fail and one will be right and succeed.

But that'd be something like evolution or anthropic computing. In order to find a solution to a hard problem, write down something. If it is not a solution, kill yourself. Conditioned on looking at anything at all, you look at the correct solution.

Or we could use the distinct advantage of humans and think about it properly.

Those folks are just being introspective, that's all..

Agree with the first 3 paragraphs so much, well put.

see also: Sarah Silverman's bit on scientology and things that sound weird.

A nice article for manipulating your competitors out of doing a market survey!
