Hacker News
Second-Order Thinking: What Smart People Use to Outperform (2016) (fs.blog)
397 points by arunc 47 days ago | 139 comments



I think this article is saying:

  - 1st order thinkers primarily see causes and *direct* effects.
  - 2nd order thinkers frequently see causes and their *indirect* effects.
I guess that seems like a reasonable idea.

For what it's worth, the truly exceptional people I've met in life had a different quality.

* When most people are presented with a difficult/challenging problem, they soon give up.

* The most exceptional people that I've met just kept hammering away after the rest of us had stopped. Most of the time, they failed, but if you have some aptitude and you keep hammering, you have a better chance to make breakthroughs that the rest of us don't make.

Just as an example, I had worked for a company that used X-ray crystallography as a tool for drug-development. I would be in meetings with crystallographers where we discussed the technical problems they were having in trying to grow crystals. The crystallographers were all smart and talented, but when we had group meetings, there was only one guy who would float suggestion-after-suggestion-after-suggestion, long after everyone else had run out of ideas. I don't think he was any "smarter" than anyone else in the room, but he just could not shut himself off. He was relentless. He went on to make some important contributions to the field.


You have to split the title, which is marketing mumbo jumbo, from the article, which is fundamentally talking about decision making.

Smart decision making and persistent work are the X and Y axes of achievement.

Decision making (which broadly includes subjects like efficiency, policy, systems, problem solving, and design thinking) is the heart of efficiency gains for large organizations (not necessarily individuals, although there really shouldn't be a separation between the two).

Generally speaking, decision making falls into two categories: prioritization and policy. Prioritization is deciding what is best to do out of the given options, while policy (systems building) is building ecosystems that enable activity or work output.

2nd order thinking is a necessary requirement for policy/systems building. However, there are many more skills needed to enact good policy decisions, so this is just scratching the surface of the subject.


I humbly presume that you are familiar with the topic at hand, and so, if you please, can you elaborate more on the 'scratching the surface' part? I'm genuinely curious about other aspects of policy/systems building, as it were. Pointers to blog posts or books would be enough, as well. Thanks.


The Seven Habits of Highly Effective People.

The Peter Principle

Books or articles on "planning backwards." Gantt charts get used a lot in, for example, construction. You need certain things to precede certain other things. You set a goal and end date and then start asking "What has to happen just before that? And just before that?" It is the reverse of asking "So, then what?"


Backward state-space planning in classical AI/automated planning
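The "planning backwards" idea described above can be sketched in a few lines. This is a minimal illustration, not a real planner; the task names and dependencies are invented:

```python
# Planning backwards: start from the goal and repeatedly ask
# "what has to happen just before that?" until only tasks with
# no prerequisites remain. The recursion walks backwards; the
# resulting list reads forwards, in execution order.

def plan_backwards(goal, prereqs):
    """Return tasks in execution order by walking back from the goal."""
    order = []
    seen = set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for dep in prereqs.get(task, []):  # everything that must precede `task`
            visit(dep)
        order.append(task)

    visit(goal)
    return order

# Hypothetical construction example, in the spirit of a Gantt chart:
prereqs = {
    "move in":          ["paint walls", "install plumbing"],
    "paint walls":      ["frame walls"],
    "install plumbing": ["frame walls"],
    "frame walls":      ["pour foundation"],
    "pour foundation":  [],
}

plan = plan_backwards("move in", prereqs)
```

Asking "So, then what?" traverses the same graph in the opposite direction; backward planning just anchors the search at the goal.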


Sorry for the late reply.

Many people have recommended it, and I agree: Thinking in Systems is a great primer.

Here's an excerpt from the intro of that book:

> So, what is a system? A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.

> The system, to a large extent, causes its own behavior! An outside event may unleash that behavior, but the same outside event applied to a different system is likely to produce a different result.

> Think for a moment about the implications of that idea:

> • Political leaders don’t cause recessions or economic booms. Ups and downs are inherent in the structure of the market economy.

> • Competitors rarely cause a company to lose market share. They may be there to scoop up the advantage, but the losing company creates its losses at least in part through its own business policies.

> • The oil-exporting nations are not solely responsible for oil price rises. Their actions alone could not trigger global price rises and economic chaos if the oil consumption, pricing, and investment policies of the oil-importing nations had not built economies that are vulnerable to supply interruptions.

> • The flu virus does not attack you; you set up the conditions for it to flourish within you.

> • Drug addiction is not the failing of an individual and no one person, no matter how tough, no matter how loving, can cure a drug addict—not even the addict. It is only through understanding addiction as part of a larger set of influences and societal issues that one can begin to address it.
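A toy simulation makes the excerpt's point concrete (the feedback numbers here are invented): the same outside shock produces different behavior depending on the system's own structure.

```python
# Two systems receive the identical one-time shock. What happens next
# is determined by each system's internal feedback, not by the shock.

def simulate(feedback, shock=1.0, steps=5):
    """x' = feedback * x each step, after a one-time outside shock."""
    x = shock
    path = [x]
    for _ in range(steps):
        x = feedback * x
        path.append(round(x, 3))
    return path

damped    = simulate(feedback=0.5)   # self-correcting system: shock dies out
amplified = simulate(feedback=1.5)   # self-reinforcing system: shock grows
```

Same event, opposite trajectories; in Meadows' terms, the system causes its own behavior.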

Once you start to understand the principle, you'll see systems everywhere. Good case studies in business are studies of systems. Annual reports are windows into the systems companies build to run effectively.

Everything is a system, really. Losing weight is no longer an extended act of willpower, it's building a system into your schedule and habits that produces the outcome of less weight (well, to be fair it's a bit of both). Writing a book is a series of behaviors sustained over time, induced through scheduling.

A lot of traits of people are systems induced, and instead of blaming a culture for a negative trait, a system (usually just looking at income level) can explain a lot of it. For example - why do Chinese people copy western IP so much? Before getting into "it's part of chinese culture", which is a slippery slope, consider how much can be explained by systems - just look at the national income average and compare education levels to see how little skilled labor they actually have to work with, and how much economic growth the country needs.

In Thinking, Fast and Slow, there's a great little section that talks about professors wanting to write a book and budgeting 2 years, only to find out later that it took anyone in the history of the school 8-10 years to write a book of similar scope. They indeed ended up taking 10 years to write the book. There's a system hidden in there too.


Thanks a lot!


+1, really would love to learn more


I'm reading Gerald Weinberg's general systems book and it talks about second order thinking. Also, Thinking in Systems by Meadows is a great primer.


Agreed with ignoramous' reply - interested in reading more literature here around what you're describing.


It's worth pointing out that blindly hammering away on the wrong problem is a failure mode for many intelligent people.

The key is not just sticking to things, but also having enough taste to know when to drop a problem.


> blindly hammering away on the wrong problem is a failure mode for many intelligent people. The key is not just sticking to things, but also having enough taste to know when to drop a problem.

Agreed. In my original comment, I was trying to imply that those exceptional people did (as you say) have enough "taste" to know when to stop, but I don't think I made that aspect clear.

The crystallographer I was talking about definitely failed more often than not, but I was always struck by his ability to keep floating reasonable ideas after the rest of us had reached what we thought was an intellectual cul-de-sac.

If you have ever read the transcript of Richard Feynman's speech "There's plenty of room at the bottom" [1] you get the same feeling--here's an incredibly bright person who can't seem to stop when he's told that "x is impossible."

I love this part of Feynman's speech: "The reason the electron microscope is so poor is that the f- value of the lenses is only 1 part to 1,000; you don't have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be axially symmetrical? Why must the field be stationary? Can't we have pulsed electron beams in fields moving up along with the electrons? Must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?"

[1] https://www.zyvex.com/nanotech/feynman.html


I think the deeper explanation here is that unusually clever people know how to locate the unquestioned assumptions and have no problem throwing them out. Or put another way, they ask questions of the form, "I know this sounds dumb at first, but hear me out. Why don't we..."

Most problems come with a slew of constraints we take for granted, and then a set of constraints we consciously impose on the solution because we think it helps. Most reasonably bright people try lifting those conscious constraints, but rarely touch the less apparent ones.

I think that more neatly explains what you both are going for with "gives up too soon" (not identifying all constraints), "hammering away" (also investigating non-obvious constraints), yet "have enough taste to stop" (there are no more constraints to lift).


Couldn't agree more here.

I've been encountering this effect in rapid succession in the last couple of years as a new parent.

First few weeks: Everything is new and you have no firm assumptions other than what you've observed from the outside of other families, so you try to build a new model on how your child behaves, how you should react and the routines needed to function as a family.

A month in, and every other month going forward for the next year: Everything you think you know about your child has changed, and all (most) assumptions about what works when comforting, putting to sleep, and feeding go out the window, and you start over building a new model.

Of course everything doesn't change, but this scenario of developmental changes has really ingrained in me how easy it is to make assumptions (deliberately and not) and treat them as static truths. It has made me go back and reexamine everything from old wives' tales I learned through osmosis as a child, to my politics, to technical decisions in my work.

Whether you consider Jobs, Musk, Wozniak, Brin, Page, etc. creators/innovators or something else, I think a lot if not most successful ventures have come from reevaluating assumed truths, whether about the market, the state of technology, or paradigms. I'm not saying you should throw out everything old, but merely that learning from history should be a scheduled process, not a one-off.

Sorry for the unwarranted rant, but your comment just resonated with my own experiences.


Musk, in particular, appears to me to be refreshingly naive when confronted with a problem. His suggestions sometimes sound like something a kid would say, and that's not a bad thing.


I wonder if there is a way to train yourself to have the intuition for quickly lifting those less apparent mental constraints in all situations.

Anyone have any thoughts or resources on achieving this? I recall a dual n-back game posted from a website called gwern that supposedly helps speed up your recall time or increase how many items you can recall.


Doing mathematics is probably one way to train. Especially the counter-intuitive stuff, like higher dimensions, statistics, weird geometry, and such.


Sometimes. Forward motion is always important. In incident command scenarios, I’ve seen situations where people will sit around and wait... you need to move forward at all times or you get used to nothing happening.


It's worth pointing out that blindly hammering away on the wrong problem is a failure mode for many intelligent people.

Also, being right about something, but at the wrong time. That's the worst failure mode of all IMO.


I somewhat disagree with this idea -- it's important to drop unrewarding lines of enquiry, but more because you've reached the point where it becomes clear the answer would be boring than because it won't pay the bills. I mean, you have to get with the program sometimes and drop an interesting problem because you can't afford to continue working on it, but at that point it's not really dropped. It's just become an itch you can't scratch. If the problem is interesting enough, the pursuit is its own reward.


Agreed. I have seen this plenty of times.


>there was only one guy who would float suggestion-after-suggestion-after-suggestion, long after everyone else had run out of ideas. I don't think he was any "smarter" than anyone else in the room, but he just could not shut himself off. He was relentless. He went on to make some important contributions to the field

Whilst I admire determination, enthusiasm and persistence, if none of the ideas had come to fruition or contributed anything meaningful, it could have been misconstrued as a Gish gallop. It probably requires second-order thinking just to manage such a scenario, which is not uncommon.


What I prefer, rather than blunt endurance, is efficiency in testing hypotheses. A radar for aesthetics.


Persistence is it. Quitting, by definition, takes you out of the pool of potential winners.


Quitting a losing effort permits you to place bets on an effort more likely to win. There is a huge opportunity cost in using bad judgement to continue a failing project.


It has to be intelligent persistence, though. Only persist if you firmly believe that the potential reward outweighs the cost of continued effort.

Quitting a hopeless avenue of exploration frees you up to pursue greater chances of success elsewhere. It all depends on context.


This isn't second order thinking. Deciding to eat a salad when you're hungry instead of a candy bar because you value health over immediate satisfaction is discipline, not intelligence. You're still thinking linearly about the costs and benefits of the decision, you're just placing more emphasis on the longer term costs and benefits. Second order thinking refers to holistically looking at a situation to see emergent behaviors, which allows the intelligent to see things that are not intuitive. Second order thinking would be choosing a salad when you're hungry so that salad becomes associated with relieving hunger, thus making it easier to choose salads going forward, which will create a positive feedback loop keeping you on your diet. It's the self-interaction of the decision that makes it qualitatively more complex than first order thinking.
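That self-interaction can be sketched as a toy model (all numbers here are invented): today's choice changes the odds of tomorrow's choice, which a one-shot cost/benefit comparison never captures.

```python
# Toy model of the self-reinforcing loop described above: each time the
# salad is chosen, the "hungry -> salad" association strengthens, making
# the next choice easier. The decision feeds back into future decisions.

def run_diet(days, preference, boost=0.1):
    """preference: score in [0, 1]; >= 0.5 means the salad gets chosen."""
    history = []
    for _ in range(days):
        chose_salad = preference >= 0.5      # deterministic threshold for the sketch
        if chose_salad:
            preference = min(1.0, preference + boost)   # habit reinforces itself
        else:
            preference = max(0.0, preference - boost)   # skipping erodes the habit
        history.append(chose_salad)
    return history, preference

# A first-order view compares only today's salad vs. candy bar; the
# second-order view is that today's choice shifts tomorrow's odds.
history, final = run_diet(days=10, preference=0.6)
```

Starting even slightly above the threshold, the loop locks in the habit; starting slightly below, it decays, which is the qualitative difference the comment is pointing at.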


>This isn't second order thinking. Deciding to eat a salad when you're hungry instead of a candy bar because you value health over immediate satisfaction is discipline, not intelligence.

Nope, it is. Instead of sticking to a solution that only solves the immediate problem (hunger, first order), he changed the solution based on the adverse side-effects of going with the first solution (hunger solved, but health worsened, second order).

>Second order thinking would be choosing a salad when you're hungry so that salad becomes associated with relieving hunger, thus making it easier to choose salads going forward, which will create a positive feedback loop keeping you on your diet.

Nope, that's motivation engineering, which is orthogonal to second order thinking.


"Nope, it is. Instead of sticking to a solution that only solves the immediate problem (hunger, first order), he changed the solution based on the adverse side-effects of going with the first solution (hunger solved, but health worsened, second order)."

Yup. This just happens to be something that we get pounded into us (not without merit) a lot, so it's something that people understand fairly well.

Consequently, as a gateway into understanding higher-order thinking, it's really poor, because this "knowledge" has all but bypassed rational thinking at this point due to being repeated for us to the point that it's been internalized.

I'd say this is why the trivial examples are also uninteresting to people. If it's something obvious enough that everyone has figured it out, making it an accessible example, it also doesn't seem like higher order thinking.


Well, here's a non-trivial example, but still easy to follow, that describes second order effects quite well:

https://en.wikipedia.org/wiki/Cobra_effect


But he gave thought to leaving the chocolate and taking the salad. How do you think this got triggered in his brain? Discipline comes after you have formed rules by seeing their effects, either yourself or by learning from others.

Second-order thinking is abstract: sometimes a thing that is obvious to you might not be obvious to others. E.g., that binary search is not always a good option can be obvious to you (here you are an intelligent person with acquired knowledge), but for a novice to the concept, their second-order thinking would be to find out what binary search is and then why it is not always good to use it.

I think second-order thinking is trying to convey that you should look beyond what looks good at first glimpse and try to connect the dots. I haven't studied it much, but that's what I get from it.
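The binary search point can be made concrete with a small sketch (the data is made up): for a single lookup in unsorted data, sorting first just to enable binary search costs more total work than one linear scan.

```python
# Binary search is only a win under the right conditions. Here the data
# is unsorted and searched exactly once, so the O(n log n) sort needed
# to enable the O(log n) search outweighs a single O(n) scan.
import bisect

data = [42, 7, 19, 3, 88, 5]           # unsorted, searched only once

# First-order reaction: "binary search is fast" -> sort, then bisect.
sorted_data = sorted(data)              # O(n log n) just to enable the search
i = bisect.bisect_left(sorted_data, 19)
found_binary = i < len(sorted_data) and sorted_data[i] == 19

# Second look: one linear scan does the same job with less total work.
found_linear = 19 in data
```

Both approaches find the element; the second-order question is what the search costs in context (how often you search, whether the data is already sorted).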


I agree with the previous poster that the example given is pretty bad, even if choosing a most trivial case was the goal.

Perhaps it is a good idea to associate health and intelligence, and to achieve that it would require higher order thinking. Still skeptical though. Intelligent people probably lead statistics in being healthy, but extrapolating a correlation would be primitive first order thinking, no?


>You're still thinking linearly about the costs and benefits of the decision, you're just placing more emphasis on the longer term costs and benefits. Second order thinking refers to holistically looking at a situation to see emergent behaviors, which allows the intelligent to see things that are not intuitive.

It seems to me that "holistically looking at a situation to see emergent behaviors" is still just linear thinking. What's being suggested is a "line of thought" that is longer than the average, but if your modus operandi is to make positive feedback loops for as many things as possible in your life, and/or be holistic, then it's no different from the first way of thinking that you illustrated. Whether or not someone thinks either way could simply depend on the degree to which that person is an extrovert/introvert.

Sorry, not trying to seem snarky or anything. The dichotomy you presented here just seemed interesting to me, maybe because I'm the type of person who is predisposed towards finding patterns in things.


The first thing I thought of in that salad example was to understand why I am hungry. Am I hungry because I’m really just thirsty and need water? And am I not drinking enough water because I chose coffee twice in a row earlier this morning? It’s more like I evaluate in historical choices and need to give more thought to upcoming choices.


This closely mirrors my own thesis on genius.

There are first level, direct thinkers. This is the lower 50% of all people, although everyone is capable of it. It is direct and without irony, sarcasm, or self reflection. Think slapstick comedy, or superhero movies.

The second level is in 4th wall breaking, analogy, simile, anything which takes "getting" the joke beyond the purely visual. Most people who consider themselves "smart" fall here. About the 50th to 90th percentile of intelligence/creativity.

Third level is hard to describe with words by it's nature. It's all about indirection. Not making an analogy, but taking the analogy for granted and riffing off of that. Most truly genius artists, comedians, musicians, scientists, etc. live at this level. Think the comedy of someone like Dave Chappelle, the music of Bob Dylan, or scientists like Stephen Hawking.

The fourth level is unattainable for humans as a constant state. The very best third level people can just barely get glimpses of it, and bring those glimpses back down for us to see. These are our all time great works of art, and generational scientific breakthroughs. Picasso, Bach, and Einstein are the archetypes here. But what we (the second level masses) are able to see is just a projection, like a 3 dimensional representation of a 4 dimensional shape. The effect is still mind-blowing, but unless you're at that 3rd level it's impossible to really conceive its true nature. Genius is in the ability to translate those brief fleeting glimpses of the 4th level by a 3rd level person into something intelligible by the 2nd and 1st.


My favourite way of illustrating the difference between 1st order and 2nd order thinking is actually a quote by Frederik Pohl:

"A good science fiction story should be able to predict not the automobile but the traffic jam."


A cleric in 1200 might have said that the masses could never read, and that illiteracy will always be a feature of our society and that it is natural. Yet, here we are today, all reading. Maybe Einstein’s talents can really elude nearly everyone, but don’t short change your fellow human. If you do, you will be all the worse off for it.


> It is direct and without irony, sarcasm, or self reflection. Think slapstick comedy, or superhero movies.

RDJ/Tony Stark would like a word with you.


Irony, sarcasm, and self reflection at the level of the movie, not the character.

RDJ/Tony Stark's "irony, sarcasm, and self reflection" are still a flat depiction of an ironic, sarcastic, etc. character.

Not irony, sarcasm, etc. superimposed on the design/presence of the character itself within the movie.


Most modern superhero movies are indeed first level, bottom-of-the-barrel content: https://www.youtube.com/watch?v=7vfqkvwW2fs


You should check out the concept of the "SOLO Hierarchy" by Raymond Lister.


1st Level - Rick & Morty

2nd Level - Bojack Horseman

3rd Level - Moral Orel

4th Level - Xavier: Renegade Angel


The lack of second order thinking seems to be the essence of what ails modern medicine.

"I have an infection!"

"Time for antibiotics!"

Don't bother to ask "And then what?"

Some populations are given antibiotics so regularly that it isn't uncommon for them to develop antibiotic-resistant infections and even lose their colon to E. coli.

We are given drugs with a multi page handout covering side effects, the doctors take credit for short term improvement -- "Look! Your latest infection got better!" -- then blame your condition for long term decline. No one stops to wonder if it's the drugs -- even though in some cases we absolutely know the drugs caused X.


Second order thinking was taken further by Hans Jonas [1], a philosopher known for his contribution to environmentalism. In his book "The Imperative of Responsibility" [2] he argues that because of our technological breakthroughs (e.g. medicine), it's so easy to make impacts beyond your control that thinking through whatever you do has to become the core of a new ethic.

[1] https://en.wikipedia.org/wiki/Hans_Jonas

[2] https://www.press.uchicago.edu/ucp/books/book/chicago/I/bo59...


I'm an environmental studies major. It's been very helpful in trying to develop mental models for thinking about my health issues.


This is interesting: can you provide some examples?


I've had a class in Hydrology and also read "Salt Dreams" (about water in Southern California) and a book about the history of water development in Fresno County.

Modern people often seem to have mental models of the world where a river has a set place, similar to a road. This is not accurate.

Water percolates down into the soil. Rivers meander. The amount and type of plants impacts rain patterns.

Water is constantly moving and cycles through the air and moves through the ground, both largely invisible to the naked eye, but critical to the pieces we can readily see. Plus, its movement occurs in a larger context, such as the Moon causing ocean tides.

The human body's relationship to water is similar. It isn't just limited to how much you drink, how much you sweat and how much you pee.

Like the Earth's crust, your skin takes in water. Also, for example, you can get hydrated with a bath, especially a salt water bath. This is helpful to understand if you are so sick that you are having trouble taking anything by mouth.

My condition predisposes me to retain fluids. I've made progress on reducing the chronic bloat in part because I have mental models for how water moves through the environment.

Invasive species disrupt ecosystems. This is not unlike infection.

Trying to fix a thing like that is far more complicated than our current medical mental model of just giving out antibiotics for infection.

On some island, someone wanted to eradicate mosquitoes. They poisoned the mosquitoes, which all died. Small animals ate the poisoned mosquitoes and they died too. This went through several layers, almost like the Biblical plagues (deconstructed and analyzed by modern science on some TV show).

Those sorts of things influence my concept of the body's relationship to disease. I don't accept the mental model that "Well, just add antibiotics to the mix and voila! All better now!"

We know antibiotics kill gut flora. We know gut flora are critical to our digestion and even help provide certain nutrients. We know that killing gut flora not only screws up digestion, it also promotes the overgrowth of invaders like E. coli.

Yet we currently do not have a policy of making sure to remediate what antibiotics do to the body. We assume the gut will just repair itself over time without active intervention.

Some individuals know to at least eat yogurt, but doctors do not give out handouts with recommendations for "The Post Antibiotics Diet."

And then we kind of shrug if someone "mysteriously" develops serious and chronic gut issues. Broader society also seems to fail to understand the gut as the foundation of the immune system.


A significant amount of medical academia is definitely concerned with upstream causes, especially since a great deal of overall mortality these days is from cancers and heart disease, where impeding the upstream causes can have a large effect on overall mortality rates. This in some ways contrasts with a perspective where people fall ill primarily to infections and it is primarily the role of medicine to identify and cure those.

Even when an upstream cause for an issue is infective (for example, the young adult deaths from rheumatic fever in my country, New Zealand), there are many further upstream causes, such as the level of insulation in housing, socioeconomic deprivation, the level of access to primary care physicians, access to transport, etc.

A crucial issue is whether the healthcare and economic systems are geared to encourage or discourage addressing the most strongly associated upstream causes that are identified. In countries with established government-funded healthcare systems, options like public health campaigns and legislation are considered as tools that can potentially be brought to bear on these issues. In other countries where government-funded healthcare is not a norm, these options are less emphasised. In a system oriented towards private healthcare, the approach often leans more reactive than proactive, e.g. change is likely to be initiated only as the response to a costly court ruling.


There is a lot of interesting research these days. I'm quite interested in microbiome articles, among other things. But the practice of medicine has changed within my lifetime from doctors being purveyors of wisdom to doctors being technicians with fancy testing apparatus and, all too often, insufficient context.

Doctors used to be some of the best educated people in a community and you tended to see the same doctor for many years. They probably also saw your extended family. It wasn't uncommon for all the children and their cousins to be seen and treated at the same time, thereby putting a stop to an infection instead of it being passed round and round like seems to be fairly common these days.

I'm quite fond of this scene from Doc Hollywood:

https://www.youtube.com/watch?v=xMjEmE1YLSU


Or if someone is an overweight, depressed shut-in, the treatment will be antidepressants and gastric bypass - when what they really need is a therapist/PT/buddy who plays sports and drags them out of the house to play squash. The medical system is not currently equipped to handle solutions like that.

The inability to account for second-order effects seems to be a chronic failure of the large-scale systems we build. I wonder if it's because specialization means that nobody understands or is in charge of the big picture. An old-fashioned generalist "village doctor" would understand the context of your existence because they were also in your social circle, while a modern GP's role in your life is highly abstracted. Or take capitalism - if you're mono-focused on making Widgets, then you neither know nor care that Widgets are causing trouble somewhere else in the world, and you will fight viciously against anyone telling you to make fewer Widgets.


In tribal culture, a Medicine Man is both a spiritual role and a medical role. There is no separation of your physical health from the larger picture of your life.

Even in modern medicine, physicians were historically de facto "village wise men." They largely have abandoned that role these days and it frequently goes bad places.


Not just modern medicine. Most government programs originate out of first order thinking. Most corporate plans, too, for that matter.


I thought it was the 300,000,000 dollars the American Medical Association has contributed to politicians.


This reminds me of Frédéric Bastiat and his essay "That Which is Seen, and That Which is Not Seen". Nicely formatted at http://bastiat.org/en/twisatwins.html

The article notes that "It’s often easier to identify when people didn’t adequately consider the second and subsequent order impacts." That failure to consider the second order impacts is what Bastiat refers to as "That Which is Not Seen"


That is such a fantastic article. It’s timeless: it was published in 1850, but it remains more relevant than ever.


I think people are generally good at 2nd order thinking, and thinking within complex systems with multiple causes, effects and subsequent effects. Especially so if we're immersed and experienced in a field.

We are bad at thinking this way in groups, relatively. We're especially bad when these groups are political. If we're deciding on arming rebels, the political dynamics are the 2nd order effects that dominate thinking, not the war... especially if it's a small foreign war that's unlikely to reach home.

The rebel field commander has no problem recognising these strategic dynamics.


>> I think people are generally good at 2nd order thinking

>> We are bad at thinking this way in groups, relatively.

Wow! This seems to answer a long-standing question I have had. I think I can now work out the mathematics of this, given appropriate amount of time. I think this is it! Thanks.


good luck to you.


I don't know that this is true.

I meet quite a lot of utopians (who are 1st order thinkers!), who assume the stated goal of a policy will be its effects.

Perhaps this is about knowledge. In domestic situations, where you know how to causally infer more distant effects, 2nd order is easier.

In politics, 2nd order is hard because people know very little and need to know very little (in their daily lives).


Politics is the poster entity for this.

Stupid 1st order: Making jails as bad as possible will deter criminals which will make everyone safer.

Good 2nd order: That doesn't work. How about we try rehabilitation and cutting the social and economic causes of crime?

Evil 2nd order: You're a snowflake. Of course it works. [I know it's nonsense, but I'm going to lie to you for profit.] How are the shares in my prison services company doing today?

Stupid 1st order: See! The authority agrees! We're right!

3rd order: So... how do we deal with the evil 2nd order people?

Everyone else: Why are you creating imaginary problems? Let's get back to the real issues!


But the “stupid first order” proposition is correct. If our policy was to brutally torture people for even the most minor infractions, and we enforced it very well, no one would ever commit a crime unless they were extremely confident that they could get away with it.

The problem is that this “cure” would be much worse than the disease.


Sometimes I wonder if people are actually really good at thinking in groups but the thought process is just too terrifying and different from what we could understand as individuals


That is quite a lot of words to say "think a few steps ahead".


+ Look beyond the obvious and conventional. What it's saying is that second order thinkers see unconventional things that might sound absurd to conventional first order thinkers. And they do it with good precision; otherwise, they might end up with failed results too.


I am not against "higher-order thinking" but in my experience it often leads to wishful thinking and self-deception. Especially political arguments are prone to disagreements on higher order effects (as opposed to effects that are directly observable and measurable).

For example, take basic income. Opponents of basic income argue that it will lead to inflation (a second-order effect) that cancels out any benefit. Proponents of basic income argue that it will lead to a higher velocity of money (also a second-order effect) and so have a positive economic effect.

Often the magnitude of second-order effects is not clear (often they are large sums of small numbers, and somewhat non-linear), and this leads to disagreements even among reasonable people. It's very easy to see a 2nd order effect of one type and not another. So I would be cautious about relying on it too much.

(This is also partly a reason why I am in favor of empiricism as opposed to rationalism - humans often have bad intuitions when we just think about things.)


Yes, there is this common reverse problem:

If we allow X to speak, his ideas will spread and Y will be harmed. So banning X is best.

But why not:

Banning X will cause a back reaction in X supporters and spread X's message leading Y to be harmed.

Or any other of a number of possibilities.

There is a fallacy of "hypothetical higher order danger justifies first order violence (/action)" -- where the higher order thinking is mostly ideological fantasy.


Often that fallacy is argued on the basis of artificially reduced complexity.

"Some first order actions can be justified by higher order danger" is often transliterated as "this first order action is justified by higher order danger" precisely because it drops all the subtlety and pretense for higher-order reasoning that the original speaker intended as an unspoken follow-up.

People often seem to allow others only the luxury of starting a thought, but never finishing it.


A good book on the topic is "Thinking in Systems" by Donella Meadows [1]. It's a catalog of common system patterns and how they behave to give you some tools to answer the "then what?".

[1] https://www.amazon.com/Thinking-Systems-Donella-H-Meadows/dp...


Ordinary people don't like when they are getting asked "And then what?"

Socrates kept asking questions. That was his defining characteristic. We know how that went...


I think the story of Socrates has repeated itself multiple times over the course of history.

Very often men who were ahead of their time were prosecuted for being right.


1st order thinking dominates our politics and economics. And worse, there's no incentive for 2nd order thinking from our politicians, since the electorate doesn't go beyond 1st order. Further, pre-existing assumptions are usually backed by 1st order thinking, but not 2nd.

Not a bad little book promoting 2nd order thinking in economics: https://fee.org/resources/economics-in-one-lesson/


> 1st order thinking dominates our politics and economics.

At least in U.S. politics, we have a surplus of second order thinking. We can't get mundane immediate compromises made because people are worried about identities, factions, and trust issues, which are all second order concerns.

For instance, it's not controversial to think that H1B visas are a mess, but legislation on specific immigration reform tends to get stymied by the desire for comprehensive immigration reform. Why not just fix one immigration problem? Because immigration proponents and restrictionists do not trust each other that the piecemeal reforms they want will be passed the next time around.

The same pattern comes up in plenty of other controversial issues such as abortion, tax reform, and lately judicial appointments.


I think it's a tactic.

Politicians use whatever order thinking is most convenient to promote their ideas.

nth-order thinking is great for debate because you can use increasingly tenuous connections to justify your position.


> The ability to think through problems to the second, third, and nth order—or what we will call second-order thinking for short—is a powerful tool that supercharges your thinking.

Clearly it should be called higher-order thinking.


This is already a similar concept -

https://en.wikipedia.org/wiki/Higher-order_thinking

I would be quite happy to see these kinds of concepts rolled into one.

Looking back, most times I heard "higher order thinking" at school, I feel like they meant "thinking at a higher level of abstraction" (as I would describe it now).

You could make an argument that the kind of "nth derivative thinking" identified here is similar though.


As an interesting aside, the n-th partial derivatives form an n-cube:

For m variables, there are m first orders, m^2 second orders, etc. and most proofs about, eg convexity, amount to a proof over that shape.

The reason that n-th order reasoning about an m-dimensional problem is difficult is that it requires reasoning using m^n objects.

If you figure we can remember about ten things intuitively, that’s just enough for three dimensions and two derivatives — enough to do motion analysis.

But it’s leagues away from being able to do the second order of a ten-dimensional problem intuitively.
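The counting argument above can be sketched in a few lines (a toy sketch; the function names are mine, and "distinct" uses the symmetry of mixed partials):

```python
from itertools import combinations_with_replacement

def ordered_partials(m, n):
    # Naive count: every sequence of n picks from m variables is a derivative.
    return m ** n

def distinct_partials(m, n):
    # By symmetry of mixed partials, only the multiset of variables matters.
    return len(list(combinations_with_replacement(range(m), n)))

# Three dimensions, two derivatives: 9 ordered entries, 6 distinct ones
# (the upper triangle of the Hessian) -- small enough to hold in your head.
print(ordered_partials(3, 2))    # 9
print(distinct_partials(3, 2))   # 6

# Ten dimensions, second order: 100 ordered entries -- no longer intuitive.
print(ordered_partials(10, 2))   # 100
```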


Just "planning" seems adequate to me.


If the author called it planning then everyone would realize that they had already heard of it...

People might also realize that with every additional step of reasoning the chance that you got it right decays exponentially.
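That decay is easy to make concrete (my numbers, purely illustrative):

```python
# If each inference step is right with probability p, a chain of n
# independent steps is right with probability p**n.
def chain_confidence(p, n):
    return p ** n

for n in range(1, 6):
    print(n, round(chain_confidence(0.9, n), 3))
# Even at 90% per step, a five-step chain of reasoning is right only
# about 59% of the time.
```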


That's why people make multiple plans and backup plans


Plans for the most part execute the 'happy path'. That's why everything's late and over budget. Contingency planning captures 2nd order effects, and that's about as deep as it ever goes.


"For example, consider a country that, wanting to inspire regime change in another country, funds and provides weapons to a group of “moderate rebels.” Only it turns out that those moderate rebels will become powerful and then go to war with the sponsoring country for decades. Whoops."

Unless you want to make it look like a beneficial regime change but your actual wish is long term destabilization of a region ;)


> long term destabilization of a region

That's not the final order thought--it doesn't show 'what's in it for me?' There's got to be another 'why?'


To stimulate sponsoring country's economy and cement electoral choices in the face of a common threat.


Indeed, but you can hide behind your 1st order thought, play dumb and get the vote of the 1st order thinkers.


Just make sure you don't end up in 'paralysis by analysis', though.

Sometimes, the bullish people who don't overthink things end up in a far better situation (via a rollercoaster) than if they had just sat there and pondered all the consequences.


I love to do this, keep iterating on n-th order thoughts, figuring out emergent behavior etc etc (systems mapping is actually a quite useful technique to do this) but at some point you need to scale it back down to the 1st order to actually get things done.

It's fun to fantasize where the industry will be heading 10-30 years from now, but figuring out what needs to be done tomorrow is more important.


Honest question, just for the sake of respectful debate, not trying to be obnoxious:

Can you actually derive emergent properties from nth-order thinking? I think they're both fascinating topics, but when I think about emergent properties of human biology, I struggle to imagine how I could have derived them from nth-order logic. To me, emergent properties seem like something you have to observe first, not something that you can reach through any kind of reasoning. Emergent properties are so awe-inspiring precisely because you can't predict them when you go down a level or two.


I see emergent behavior as something that we didn't think of before (maybe because we didn't think about it enough), rather than something that is inherently impossible to predict.

Did Mark Zuckerberg know in 2003 that his dorm room creation would affect elections? Probably not, but I also wouldn't say it was impossible to predict.

A complex system is chaotic for sure, which definitely makes it hard to predict emergent behavior, but it's also not completely random. I wouldn't claim that you can figure out all emergent behavior beforehand but you can definitely reason about some emergent behavior just by doing some nth-order thinking and systems mapping.

One example that I thought of recently: Today most innovation with food is about innovating on logistics. It's all about getting your food faster and cheaper to your door. Let's say, in 5-10 years from now we live in a world where on-demand groceries are instant and cheaper than anything else, maybe even powered by autonomous drones or what not.

Would we still be going to restaurants and grocery stores? One emergent behavior may be that we get everything delivered with a snap of our fingers straight from warehouses and delivery kitchens. So is it wise to put all our money into Instacart and Doordash now that we know this? Maybe not, because what's the next step beyond delivery? What if there's a new technology to print your food, 20-30 years from now? Will Instacart and Doordash still be relevant in a world like that?

Emergent behavior would then be that innovation will be more about what you eat than when you get it (the innovation vector that we focus so much on today). The company that learns your taste profiles and can figure out exactly what you want/need will probably win in that scenario, so maybe companies that are solving logistics today (Instacart/Doordash) are not the companies that will win in the end (Google has a better shot when it comes to AI and data collection).


I doubt that anyone frequently predicts emergent properties precisely, but I still think that thoughtfully considering them is beneficial. There is a line in "Clean Architecture" to the effect that every architectural decision is akin to a shot in pool: you aren't merely trying to sink the ball, you're also trying to line up the next shot. This is true of decisions generally, and I think it's infinitely easier to do if you're thinking of navigating complex, dynamic systems rather than simply imagining "I'll do A because A –> B."


You might like this video that shows emergent behaviour that I find fascinating: https://youtu.be/gaFKqOBTj9w


The best way to exercise second order thinking is to play turn-based games like chess or even Magic: The Gathering. You can’t be good at those games without considering actions several turns ahead and aggregating a tremendous amount of knowledge about the game, and you get almost immediate feedback on how good your thought process actually was.



I don't think playing games makes you smarter. I think it practices a certain kind of thinking, and practicing thinking makes you better at that kind of thinking.


turn based games like chess or even magic the gathering

Risk -> Risk 2210.


Temporal thinking is something that needs to be exercised. It's relatively easy to think 'what if', 'and then what' but much harder to think these through when each next level overwrites the previous one, it's not a matter of adding up the effects of static layers. Successive effects invalidate former ones to varying degrees. It's really just running thought experiments, but running them deeper and simulating for longer time periods.

The most visual demonstration of most of these talking points is in the game of Go. Being able to 'read' the board is seeing the good/bad outcome of a world of possibilities.

The other really good comment made was about quantifying second order effects. Being able to imagine higher order effects without being able to estimate them leads to analysis paralysis.


So you're successfully applying 2nd order thinking in your job. Others don't do it. You're working towards your goal, achieving your 2nd order thinking successes.

But the rest of your colleagues don't understand what you're doing. They don't see your vision. They live in the 1st order thinking world. To them your choices don't make sense. They don't understand how the successes came from the 2nd order choices.

They'll say: "Why are you eating a salad? You're hungry! Take a chocolate bar! It has much more calories so it will better solve your problem."

You try to explain. They don't get it. They wave it off like some ridiculous story.

How can you effectively sell your 2nd order thinking ideas to 1st order thinking people?


Another way to look at second-order thinking is whether someone can understand C pointers intuitively. Joel Spolsky once wrote that understanding pointers is an aptitude, and I believe he was right. It is the same concept as second-order thinking: being able to follow several layers of indirection. Recursion and regular expressions fall into this group as well. I find otherwise very bright individuals who don't have something in their brain that allows them to grasp these concepts. I think it is how the brain is wired, like good hand-eye coordination or fast memory recall (aka being witty). Just because your brain is good at one of those traits doesn't mean you are good at all of them.
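For what it's worth, the "layers of indirection" idea can be mimicked even without C (a playful sketch; the chain and names here are invented):

```python
# A name that refers to a name that refers to a name... the analogue of
# following a chain of pointers until you reach an actual value.
chain = {"x": "y", "y": "z", "z": 42}

def deref(env, name):
    # Follow references until we hit something that is not itself a name.
    while isinstance(name, str) and name in env:
        name = env[name]
    return name

print(deref(chain, "x"))  # 42, after three hops of indirection
```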


This is closely related to 'Chaos theory' too. There is a great correlation.

"Small differences in initial conditions yield widely diverging outcomes for dynamical systems"

My interpretation is that second order thinkers see unconventional aspects of how a dynamic system behaves, so they can fine-tune it initially and intervene when it's moving in the wrong direction, driving it toward the results they desire.

> What do the consequences look like in 10 minutes? 10 months? 10 years?

> Extraordinary performance comes from seeing things that other people can’t see.

Although experience plays a key role, second order thinkers still have to reason about things they haven't yet experienced, based on the patterns they can recognize.


And here I was using regular thinking like an idiot.


D'oh!


The first quote by Ray Dalio is from his most excellent book https://www.amazon.com/Principles-Life-Work-Ray-Dalio/dp/150...

I've listen to the section for personal principles on audible 5 times now, and I will probably keep repeating it at least a couple times a year.


This definitely explains why we have things like the drug war (and other idiotic pseudo wars) and can't even learn the second and third order effects of such prohibition after the mass experiment with alcohol prohibition in the twenties and thirties. In fact, it explains a lot if not most bad policy decisions. I'm convinced regular people are simply incapable or unwilling to consider second, third, and higher order effects of their actions, and politicians especially take advantage of this to push through atrocious policies. Healthcare policy is another that comes to mind. Retirement saving. Lack of infrastructure spending. Anti-vaccine idiots. Etc. Etc. I lean towards people being too incompetent and stupid rather than unwilling, as there is plenty of evidence for that (three quarters of people are purported to not be able to name the three branches of government), but who knows? The end result is the same. Stupid thinking driving forward a stupid populace. I didn't know there was a term for what the stupid were missing till now.


While I agree with most of this, this can also lead to a kind of "analysis paralysis". I've seen a good number of "first order thinkers" succeed because they just plow through stuff, and when the indirect consequences appear that they didn't originally think about, they plow through that, too.


I call it S1 (simple 1, arrogant and ignorant) vs S2 (simple 2, which is elegant and informed), but I’ve only found it possible to attain S2 after navigating the complexity between these two points, which implies that I’m often guilty of S1 thinking as the genesis of getting to S2, if I have the sand for it.


And S3 is where you store all your memories.


And S10 is a smartphone. I think I’m seeing the matrix.


This article reminds me of tree searches. One of the results we've found in various AI-related investigations into game playing is that a good policy selects the action which maximizes the value of a future state. A good value for an action corresponds with where that action would eventually leave the agent. The article points out that a person will get better results by expanding the tree to a depth beyond one. This is usually true. Interestingly, humans operate in real time, so we also experience contexts in which this is not true. Expansion takes precious time. Thus, one of the cognitive biases which sometimes steers us away from more optimal answers.
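A minimal sketch of that depth-beyond-one idea (the toy tree and its values are invented for illustration; this is plain textbook minimax, not any particular system):

```python
# Depth-limited minimax over a toy game tree.
def minimax(state, depth, maximizing, children, value):
    kids = children(state)
    if depth == 0 or not kids:
        return value(state)
    results = [minimax(k, depth - 1, not maximizing, children, value)
               for k in kids]
    return max(results) if maximizing else min(results)

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
values = {"root": 0, "a": 3, "b": 5, "a1": 10, "a2": -2, "b1": 4, "b2": 6}
children = lambda s: tree.get(s, [])
value = lambda s: values[s]

# First-order view (depth 1): "b" looks best on immediate value alone.
print(minimax("root", 1, True, children, value))  # 5
# Expanding one ply deeper, an adversarial reply changes the estimate.
print(minimax("root", 2, True, children, value))  # 4
```

Same position, different answers at different depths: exactly the gap between first- and second-order evaluation the comment describes.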


This is interesting. I'm pretty sure we all do this to a large extent. One thing I noticed when everything was going sideways: I learnt that in the absence of a good viable outcome, doing nothing was a very effective strategy.

Not without its risks either, but in some cases my act of doing nothing allowed what I had done to progress (where I knew it would not have a detrimental impact and couldn't see any path forward that was effective). I just circled the wagons and waited. Surprisingly effective.


3rd order thinkers see direct causes, indirect causes, direct effects and indirect effects, which establishes the need for setting up an internal feedback loop to strengthen the causes classified as direct (coming from within) and to power the process of distinguishing between direct and indirect causes. Without this feedback loop the experience of indirect causes results in an experience of loss of agency.

1st order thinkers know what they do, 2nd order thinkers know what they do to others, 3rd order thinkers know what they want to do. I'm leaving an interpolation which is both obvious, crucial, trivial and infinitely deep to the reader.


Not terribly insightful... I expected something about meta-reasoning - thinking about your own thinking - but it was more along the lines of just, "smart people don't have tunnel vision when making decisions".


Maybe world policy makers should think to the nth order before making bad policies.


I think they do, but I suspect that their goals are bad, which is why we have the world we have now.


It’s worth noting here that an astonishing number of prominent politicians and politically-involved elites are still avowed (and publicly acknowledged) Malthusians. So when trying to understand their actions, keep in mind, their goals may in fact be to “reduce the global population to a more sustainable level.”


Here is my attempt at second order thinking -- where did he pull that graph of the odds of success depending exponentially on "second order thinking" from?


Has anyone read the book "The Most Important Thing", which is mentioned in the blog? If so, can you give a review of it?


Interesting article.

While there may be nature/nurture arguments here, IMO, first order thinkers are trained to be that way by society and markets. Someone who doesn’t think ahead is more “useful” as a consumer. Someone who doesn’t think ahead and question, gets along with others with less friction.


So, to put it in a meme: https://youtu.be/CkdyU_eUm1U (not exactly a smart movie - Dude, Where's My Car? - but it plays on exactly that the entire time)


My second order thinking holds me back in some ways, because when I think of something new, immediately I start thinking of ways why it would not work etc.


Smart people get everyone else to bog themselves down with "second-order thinking" perpetually tying themselves in mental knots wondering endlessly "what happens next" and "should I or shouldn't I." That is because smart people that get shit done will not think twice, do the thing, and then quickly re-evaluate the results and proceed apace. Smart people that get shit done, outperform by actually executing. You can't predict the future. Asking "but what about in ten years" is the height of hubris. The only answer can be "who knows."


Please don't give this man a company.

I agree that smart, effective people do generally have a bias toward action. But not in the bull-in-a-china-shop "thinking twice is a waste of my time" sort of way you're describing.


Yeah, I was thinking this is how the Erie Canal, Love Canal, and more boring things like Clinton Lake happened.


Depending on your aim, smarter people probably don't spend time bogging others down, and instead probably fall closer to the teach a man to fish side of things.


There is some truth to this. The future is impossible to predict. There is foolishness in failing to understand where predictions make sense and where they don't. We like to talk about "intelligence", but wisdom and the virtue of prudence are essential. Without the disciplining effect of the virtues, intelligence is thwarted. The degree and the kind of effort one puts into something is extremely important and requires a comprehension of the confluence of factors that are important to making the correct decision. This requires experience.

And as the saying goes, if you want to make God laugh, tell him your plans. But that isn't to say we shouldn't make plans. This should not be taken as license for impulsive behavior and an irresponsible absence of forethought. But it should be understood that plans are often dependent on certain conditions holding, on the completeness of our knowledge, etc., thus requiring humility in the face of uncertainty and recognizing that plans must sometimes be adjusted or abandoned. There is no recipe for any of this, and first principles are not enough.


are you a policeman?


Slightly off-topic: what app is used to create low-fi sketches like the illustration in this article? I'd love to use these in my product work.


You could make images like these easily with Krita on most platforms or Procreate on an iPad/iPhone.


Another way to say this is to evaluate the consequences of your actions. Good advice but don't confuse people by renaming it.


There is a famous saying in chess which captures this: "If you find a good move, look for a better one".


I think it is just one skill of critical thinking, why stop at second order when third order thinking is even better?


You have an opportunity to practice second order thinking in regards to using second/third order thinking here.


I'm not sure how much I agree with this article. In my opinion, most people's practical problems don't come from an inability to consider the later consequences of an action. They're either things completely beyond our control or they come from an inability to discipline ourselves to do the responsible thing given the consequences. People don't take the easy road because they're dumb. They take it because the hard road is hard. And as far as I can tell, building discipline in oneself is nearly impossible. (To be clear, I'm not some hard core libertarian that thinks everyone has made their bed and therefore should sleep in it. I think the struggle to conform one's life to productive positive decisions is ubiquitously difficult and just telling people their problems are their own fault has probably never helped a situation.)

Regarding less daily matters like foreign policy, I actually think this kind of second order thinking is a trap. It's the pride that goeth before the fall. It leads us to believe problems that are too complex, too variable and too opaque are within the grasp of our rationalism if only we think hard enough. Robert McNamara didn't advise Kennedy and LBJ deeper and deeper into the Vietnam War because he lacked second order thinking skills. He did so precisely because he had like 10th order thinking skills.


> To be clear, I'm not some hard core libertarian that thinks everyone has made their bed and therefore should sleep in it. I think the struggle to conform one's life to productive positive decisions is ubiquitously difficult and just telling people their problems are their own fault has probably never helped a situation.

Not super relevant to the topic at hand, but I am a very hardcore libertarian, and I have to say that this is not an accurate representation of libertarian views.

It's not your fault, this seems to be a common misconception among many, even some of those who call themselves libertarian.

Libertarian philosophy does indeed stress personal responsibility as a part of voluntary interactions with others. However, this doesn't mean that a person's status is deterministic. Many of people's problems are not their own fault: hell, many of the problems are caused by the state, which is why we hate it. The problems that aren't caused by the state (scarcity, some inequality, externalities, etc.) are very difficult to solve, and state attempts to fix them usually result in even worse outcomes.

To bring it back around to second-order thinking, it's very important in politics especially to be able to see these unforeseen consequences of political action. First-order thinking is how we end up with things like the PATRIOT Act and other emotionally-driven policies.


Fair enough. Let me rephrase that. I'm not some hardcore puritan...


infinity-order thinking: hanging all the self-improvement gurus from the highest tree.


Why didn't I think of that!?


BS alert


OK, so to follow up with the author of the piece and to maybe throw in a little 2nd order thinking into this equation:

“Second-Order Thinking: What Smart People Use to Outperform” - the author makes a broad claim that all smart people use second order thinking to outperform those who aren’t smart.

Questions: What makes the author credible, and what data / research does he/she have to support the claim? He/she cites Howard Marks as an example, but is Howard Marks really enough to support such a broad and generalized statement? What about the fact that even people who try to think deeply tend to get it wrong oftentimes, which has been shown to be true over and over again? Example:

https://www.nytimes.com/2011/10/23/magazine/dont-blink-the-h...

That’s not to say that I disagree with the premise – I think that in general, this piece puts forward a hypothesis that smart people are deep thinkers who think critically and deeply about the effects of their decisions, while those that aren’t so smart might not put such an effort in. There is some truth to the statement the author puts forward, but by no means is this the differentiating factor between smart people and the not-so-smart. I’m not an expert in this field either, but I’ve read a lot of stuff from Feynman, Einstein, and the geniuses of the field, and I find that some of the things which differentiate them from the rest are:

1. Deep curiosity. They were incredibly interested in finding out how things work / solving a problem.

2. Persistence. They worked hard to achieve what they wanted to achieve and didn’t give up.

3. They used their visual cortex / visual systems a lot. Tesla, for example, would have live simulated conversations with other people and build things inside of his head. Einstein is in the same boat - he would spend hours trying to visualize concepts. This makes total sense from my perspective. We have a heck of a lot more capacity to process information using our visual cortex than other areas in our brains. We can process something like 10 million bits per second vs. a much lower capacity in other brain areas. When you read silently, for example, you're only processing around 45 bits per second.

Apologies for the long write up here – I guess I might be over-analyzing things a bit, but I just don’t really see the point of this article, nor do I see it pointing out the real differentiating factors between those that are ‘smart’ and those that aren’t, and I think the author is making a very broad statement and doesn’t have any data to support his/her claim. Howard Marks also by the way is not a class above other investors. He still uses a value-based approach in a time where the value based strategy doesn’t really guarantee maximum returns. It doesn’t mean that he isn’t bright – he definitely is. I just don’t think that this presents enough evidence or data to support the generally broad claim made in the article.


So, most robots are smart?


In politics I've heard higher order thinking called 'running the traps.' Would love to learn of a way to apply derivatives or higher order thinking to polling and the decisions I make in my work.

Take Trump in 2015: voters were less likely to say they supported Trump in polling, which had the 2nd order effect of the Clinton campaign's malpractice of not campaigning in MI etc., which gave Trump the election.



