How to Understand Things (nabeelqu.co)
816 points by ingve 10 months ago | 116 comments

Great post. Personal anecdote:

I don't think I really understood anything in school, but I was decent at going through the motions of carrying out certain methods and recalling certain facts when I needed to.

I went on to study Maths at university, and for most of my first year, I had the same surface level "methods + facts" knowledge that got me through school. After some studying, I could recite definitions and theorems, I'd memorised some proofs, and I could occasionally manipulate a problem to get an answer. I think about half of the cohort was in a similar position. But it was clear that there were others in a completely different league.

When we were studying for our first year exams, I was struggling to remember the proof of a specific theorem (it felt quite long). A friend was trying to help me learn it, and he asked me what "picture" I had in my head for the theorem. I didn't have any pictures in my head for anything.

It turned out that a simple drawing could capture the entire statement of the theorem, and from that drawing, the proof was trivial to derive. It was long-ish to write out in words, sure, but the underlying concept was really simple. This blew my mind — I realised I didn't have a clue what we'd been studying the whole year.

The worrying thing is that I actually thought I understood that stuff. Before that incident, I didn't know what it feels like to actually understand something, and I didn't have an appreciation for the layers of depth even within that. I suspect lots of people go through the entire education system like this.

> It turned out that a simple drawing could capture the entire statement of the theorem, and from that drawing, the proof was trivial to derive.

Excellent point.

1. A similar example: Feynman diagrams.

2. Another: Venn diagrams.

3. Longer example: On Navy nuclear-powered aircraft carriers, the officer of the deck underway (OOD) must have at least a basic understanding of how the engineering plant works. It's second nature for nuclear-trained OODs, of course, but non-nukes could sometimes have trouble. Back in the day, it turned out that an effective way to help non-nukes learn what they needed to know was to have them: (A) memorize a really-simple block diagram of the reactor and steam system, and also (B) memorize a chant, of modest length, that summarized how things worked. During slow periods while standing OOD watch, I'd make a non-nuke OOD trainee draw the diagram from memory; then I'd quiz him with "what if ..." questions (back then it was always "him"). If he got hung up on a question, I'd tell him, "chant the chant." That usually helped him figure out the answer in short order.

(U.S. submarines don't have that problem, AFAIK, because pretty much every officer who will stand OOD watches is nuclear-trained.)

>I suspect lots of people go through the entire education system like this.

+1. it took me a couple years after getting kicked out of college to get my head sorted out to the point where I felt like I could "think" again

I think one of the most harmful things about schooling is the way it imposes a tracked structure on learning. it demarcates knowledge into discrete subjects and sets up a linear progression through them and says you need to master each step on the track before moving onto the next one. this is poisonous and borderline evil, and I've encountered many people who are crippled for life by it. a lot of people never pursue things they're really interested in and could become extremely passionate about because school has convinced them they need to stack up prerequisite knowledge before they're even allowed to touch it

Our school is poisonous (I can speak for France), if not evil. It became crystal clear after reading Céline Alvarez. I'm not sure if she has been translated yet. In English there is also Alfie Kohn, who is older, but I haven't read him.

Reading Céline, one understands that children are natural-born learners, and that no effort is needed to make them learn. Our school model is the industrial production of objects: thinking human machines. We are far more than that. Sadly, Pink Floyd's description of school still echoes in our modern schools. Some people don't feel that way about school, and I don't really know why. Maybe they never imagined how much better it could have been, so they found it great.

Thank you for the recommendation. Celine Alvarez does have a book translated to English:

The Natural Laws of Children: Why Children Thrive When We Understand How Their Brains Are Wired

> A powerful, neuroscience-based approach to revolutionize early childhood learning through natural creativity, strong human connections, spontaneous free play, and more.

She has a home page, with an English version of some of her articles.


Check out Célestin Freinet too (also untranslated in English AFAIK)

Reference appreciated - there is one translated book:

Freinet, C.: Education through work: a model for child centered learning; translated by John Sivell. Lewiston: Edwin Mellen Press, 1993. ISBN 0-7734-9303-4

> says you need to master each step on the track before moving onto the next one. this is poisonous and borderline evil

What’s wrong with it? You do need to understand calculus before classical mechanics, classical mechanics before quantum mechanics, quantum mechanics before quantum field theory, and quantum field theory before the Standard Model. I’ve seen tons of people disregard this and the result is always confused word salad. People waste years of their lives this way, going in circles without ever carefully building their understanding from the ground up. The order in school was chosen for a reason.

I suspect you and GP are talking about slightly different things. GP is probably more opposed to artificial compartmentalizing of things. As an example:

> You do need to understand calculus before classical mechanics

Yes, but how much calculus? Do you need all of Calc I, II and III before even attempting classical mechanics? And should calculus even be treated independently of classical mechanics?

There are various traditions when it comes to teaching these subjects, and the tradition in the US involves keeping a strict distinction between these things, in addition to a "theory first" approach. Other people have studied things in a different manner. Some of my physics professors from the UK had studied most of the math they knew only as needed when they would get to relevant topics in physics - including differential equations, all of analysis (complex or real), some of Calc III, etc.

Even amongst mathematicians, it was common in parts of Eastern Europe to focus on a problem and learn whatever theory was needed to solve it. They didn't learn theory and then apply it to problems; they took a problem and learned whatever theory was needed to solve it. I recall picking up a Kolmogorov textbook on analysis and being surprised to see this approach, along with the informality with which everything is discussed.

And just a minor quibble:

> classical mechanics before quantum mechanics,

You don't really need to know much beyond the basics. I think the classical mechanics we covered in our typical "Engineering Physics" courses was sufficient to dive into proper quantum mechanics. It's nice to have been exposed to Hamiltonians in classical physics before taking QM, but it's really not needed. There's a reason neither of the schools I attended made the classical mechanics courses prereqs for QM. In fact, I would argue we should split things up a bit: have a course that teaches the very basics of energy, momentum, etc., then make it a prerequisite to both classical mechanics and quantum mechanics.

>And should calculus even be treated independently of classical mechanics?

Thank you. The value of calculus didn't really click for me until the next year in physics, at which point I wished I had paid more attention.

It's really unfortunate that you first have to drill high-speed high-precision symbol manipulation, devoid of any meaning, for so long before getting any indication of why you'd want to be able to do it.

I like Lockhart's analogy: mastering musical notation, transposition, etc. before being allowed to play your first note.


Some people like to work their way in reverse. I'd often pick a really complicated subject I'm after like "stellar fusion" and then work my way downwards and learn whatever I need to learn in order to understand it. If I had to start from differential mathematics, without knowing why I need it, I'd probably give up.

Are there any notable teachers who take or have taken this approach?

I can't think of any from my education, everything was bottom up building. But when I self-teach (as it naturally emerges from pursuing projects related to work or hobbies) it's often top down digging.

This is the big problem I had with engineering in school, so much was based on faith that you needed it. And just now as I'm writing this, I realized that's completely counter to my personality.

side note: I remember Einstein needed to learn particular math skills to help prove his ideas and set out to learn them.

It didn't work for the GP. That makes it poisonous and evil.

I've seen this sentiment way too much on HN. X didn't work for me, therefore X is a scam, its perpetrators are evil sociopaths, and if it worked for you you're a cog in the machine, man.

I'm a believer in individual learning styles, and hard as it is for me to understand personally, I do recognize that lots of people may learn best by lecture and drill. I certainly don't think badly of them for it. but many more people are not suited for this, and forcing them through the education system as it exists does them great harm, and I empathize strongly with them

Lecture and drill has been out of vogue for decades. The hot thing is active learning and high-impact practices. And frankly it has the opposite problem of dragging the class down to the slowest common denominator, wasting the opportunity to teach more to the high-achievers.

calculus is required to understand classical mechanics, but intuitions about classical mechanics can inform and accelerate how you learn calculus. I actually tutored calc 1 a little bit in college and most of the time the people I was helping could do the calculations asked of them fine, but felt lost and confused because they didn't know what the output actually meant. everyone learns linear algebra before abstract algebra, but even a little background in the latter creates many opportunities to think "ohhh, this is just like X!" that make the former easier to pick up--and when you revisit the latter, you'll probably understand it better too because of the connections you made
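That gap between computing a derivative and knowing what the number means can be shown in a few lines. A minimal Python sketch (the position function is hypothetical, purely for illustration):

```python
# A derivative is just a number with a meaning: the slope of f at a point.
def position(t):
    return 5 * t**2  # hypothetical: metres travelled after t seconds

def derivative(f, t, h=1e-6):
    # Central-difference numerical derivative: slope of f at t.
    return (f(t + h) - f(t - h)) / (2 * h)

# The "output" of the calculation IS the velocity at that instant:
v = derivative(position, 3.0)
print(round(v, 3))  # 30.0, since d/dt of 5t^2 is 10t
```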

is it better to learn python or java to build a cursory understanding of programming, and then c and x86 to see what "really" happens "under the hood?" or the other way around, starting from base operations and then layering abstractions on top? I don't think one is strictly better than the other. when I first learned sql, joins were intuitively obvious to me but bewildered the person I was learning with, despite the fact that he'd been in IT for years and I barely knew how to program, because I happened to know set theory. I wonder what it would be like to do the traditional algorithms and data structures stuff before ever learning to program. it might make picking up a lot of details easier! I wonder why we don't teach children logic gates and binary arithmetic before they learn decimal. it's actually simpler!
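The joins-as-set-theory connection is easy to make concrete. A minimal Python sketch, with made-up tables, showing that an inner join is the cross product restricted to the intersection of the key sets:

```python
# Two hypothetical "tables" as lists of dicts.
users = [{"user_id": 1, "name": "Ada"}, {"user_id": 2, "name": "Bob"}]
orders = [{"user_id": 1, "item": "book"}, {"user_id": 3, "item": "pen"}]

# The set-theory view of an inner join: intersect the key sets.
shared = {u["user_id"] for u in users} & {o["user_id"] for o in orders}

# The join itself: pair up rows whose keys match.
inner_join = [
    {**u, **o}
    for u in users
    for o in orders
    if u["user_id"] == o["user_id"]
]

print(shared)      # {1}
print(inner_join)  # [{'user_id': 1, 'name': 'Ada', 'item': 'book'}]
```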

in general I don't think there's a good reason why you must teach raw technique before practical application, or a concrete special case before a broader abstraction. even an imperfect understanding of something "higher" can give you hooks to attach information about something "lower." familiarity and comfort and an understanding of interrelationships between knowledge is so much more valuable than perfectly executing each step on a given track

when I want to get acquainted with a new field, I usually start by reading current research, barely understand any of it, and then work backwards through the author's previous papers and the papers he cites habitually. my cursory grasp on the latest version makes it easier to understand earlier sketchier versions, and at the same time the development history of the older shines light on the newer. sure you can always "start with the greeks" as it were and work your way up, but I don't think this is objectively better than going the opposite direction

really I think of knowledge as more a highly interconnected graph than a series of linear tracks. as long as you can anchor something somewhere it's valuable to pick it up. it can reinforce other things near it and you can strengthen your understanding of it over time. and getting into the practice of slotting arbitrary things together like this is good training for drawing novel inferences between seemingly disparate topics, which I think is the most valuable form of insight there is

While I completely agree with your general comments, I still want to emphasize how hard this ideal is to implement in practice.

Yes, every subject is linked to every other subject. If I'm tutoring introductory physics, I will use examples and analogies from math, computer science, engineering, biology, or more advanced physics, depending on the background and taste of the student, and it works fantastically. But if I'm lecturing to a crowd, this is impossible, because the students will differ. If I draw a link, some people will think it's enlightening, some will think it's totally irrelevant, some will think it's boring, some will think it's so obvious it's not worth saying, and most will just get more confused.

The same thing goes for the top-down "working backwards from cool results" approach; it's supposed to bottom out in something you know, but whenever you teach multiple people at once, everybody knows different things. The bottom-up linear approach is useful because it gives you a guarantee that you can draw a link. If I'm teaching quantum mechanics I expect to be able to lean on intuition from classical mechanics and linear algebra. If I didn't know the students had that, I would draw a lot fewer links, not more.

Similarly, "if people learned X in school, then Y would be easier to understand later" is true for almost any values of X and Y, because of the interconnectedness of knowledge. But if you ask any math teacher, they'll tell you the school curriculum is already bursting at the seams. You can't just add logic and set theory to existing school math without taking something out. In the 70s we tried taking out ordinary arithmetic to make room for that. It was called New Math, and everybody hated it.

>stack up prerequisite knowledge before they're even allowed to touch it

The way you phrased this reminded me of A Mathematician's Lament by Paul Lockhart[0]

[0] https://www.maa.org/external_archive/devlin/LockhartsLament....

seems like a lot of people are imagining a caricature of what school actually is and blaming it for problems they would have had anyway.

OP here -- that is a fantastic anecdote!

Do you have an example of a "drawing" of a theorem, in this context? (I've seen these for fairly trivial theorems but not for more complex ones, so I'm curious.)

As someone who very much relates to the GP’s anecdote, I might suggest determinants as a good example.

As an undergraduate studying maths, I encountered a standard theorem in one of my first courses, which says that about 947 different conditions are equivalent to a matrix having a determinant of zero. I dutifully memorised these. I also dutifully memorised the algorithm for how to calculate a determinant. I might even have remembered some verbatim proofs of some of the equivalences.

However, I developed absolutely no intuition about what a determinant is. I had book knowledge, but no insight. It was a long time ago now, but I’m fairly sure that when I graduated I still did not truly understand even this very basic (by undergraduate standards) subject. I think it was probably a few years later, when I came across some of the same theory but in a much more practical context at work, that most of the connections in that equivalence theorem first “clicked”.

Meanwhile, here is what a gifted presenter with the right illustrations can do in about ten minutes:


The 2,000 or so substantially identical comments below that video are very telling.

Given the understanding you’d get with that quality of presentation, the equivalences I mentioned above would have been obvious and constructing the proofs from first principles would have been straightforward.

In the process of learning about a family of algorithms in machine learning I also gained some physical intuition of determinants (same diagram as 3Blue1Brown, but applied in a different context of "squashing and stretching" probability mass): https://blog.evjang.com/2018/01/nf1.html
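That "squashing and stretching" picture can be checked numerically. A small sketch in plain Python, using a hypothetical 2x2 matrix: the determinant is exactly the factor by which the unit square's area is scaled.

```python
# A 2x2 matrix maps the unit square to a parallelogram; the determinant
# is the (signed) factor by which area is scaled.
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def apply(m, v):
    (a, b), (c, d) = m
    x, y = v
    return (a * x + b * y, c * x + d * y)

def polygon_area(pts):
    # Shoelace formula for the signed area of a polygon.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return s / 2

m = ((2, 1), (0, 3))                       # hypothetical shear + stretch
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, area 1
image = [apply(m, v) for v in square]

print(det2(m))              # 6
print(polygon_area(image))  # 6.0, the area scaled by exactly det(m)
```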

Awesome example, thank you!

This is awesome, determinants were one of the things that I never really understood during my degree

Ahh I did a bit of googling but couldn't find anything nice — sorry! Most of the time, the complex stuff is broken down into smaller "lemmas" with their own manageable proofs, and then the proof of the whole theorem will be something like "Follows from Lemma 2.1, Lemma 2.2, and a basic application of Theorem 1.4"

This is the theorem I was talking about: https://i.imgur.com/1xEH51Z.png (taken from https://taimur.me/posts/thinking-at-the-right-level-of-abstr... which touches on a similar topic to your post)

I think a good example would be Lagrange's theorem[0]. Once you have this[1] picture in mind, it becomes trivial.

[0] https://en.wikipedia.org/wiki/Lagrange%27s_theorem

[1] https://i.stack.imgur.com/w2GfA.png

Was it this theorem of Lagrange's that you had in mind https://en.wikipedia.org/wiki/Lagrange%27s_theorem_(group_th... ?

Yes, this. In grade school I had trouble memorizing the formula for the area of a right-angled triangle. But one day I bisected a rectangle diagonally and was blown away by how it suddenly made sense. The same thing happened with circles and even calculus stuff. Putting it into diagrams suddenly made it all click.

So now I try to “do” instead of memorize and it makes all the difference in the world.

I can't even learn new programming languages unless I build a real project with them. That's why I don't believe in things like code katas or reading programming books cover to cover: they're a poor substitute for using a tool in real life to learn its principles, techniques and methods.

And if you got good grades, then the system wasn't testing whether you understood it - Bad news.

But it's good news that you were able to understand once he drew the picture for you. So there is an effective way to teach that theorem - If only the professor knew it.

I felt the exact same way trying to memorize molecular formulas in organic chemistry. Being able to visualize the molecules, along with some outlandish mnemonics, let me force my brain to form memories. I forgot those over time, but I gained the ability to "see" the molecules in my mind, and deriving a formula became trivial.

Creating really unique mnemonics works similarly to how you can recall when something out of the ordinary happened, or how you have a "sixth sense" that subconsciously tells you something is off in an otherwise ordinary situation or encounter. If I had to guess, this might be an evolved human survival mechanism that helped our ancestors detect and avoid dangerous or deadly situations.

I wonder what other evolutionary mechanisms we can hack to encode concepts more deeply. I know spaced repetition is the mainstream approach, via flashcard apps like Anki. I think mnemonics blended with spaced repetition is the most practical methodology. Thoughts?

> It turned out that a simple drawing could capture the entire statement of the theorem, and from that drawing, the proof was trivial to derive.

Couldn't agree more. Some of the best CS books I've read have been ones with a tonne of illustrations, like the Head First series.

And _why’s poignant guide! I’m still a rank amateur, but that little PDF helped me truly understand, not just memorize.

Humans are wired to enjoy and remember stories. So much education is done narrativelessly and as such neglects to harness one of our most potent capabilities.

What happened after this revelation? Did anything change? Did you try to intuitively understand the problems you were working on? I think this boils down to how you were taught (or self taught) to approach the subject and the set of tricks you learned along the way that became your toolset.

Did you do a lot of exercises in school? They usually help build the intuitive part, the aha moment, that comes through repetition.

Yup, definitely changed the game for me, and turned on my 'intuition' spidey sense, of whether I actually understood a concept.

We did do a lot of exercises in school, but they mostly just tested whether you can reliably apply a method that you were taught, rather than testing understanding.

Did that realization on the eve of exams bring on a bout of anxiety? (edit: I would have freaked out)

Haha, yeah. Thankfully, the Maths exams were set up such that you could always get 60% (the boundary for a "2:1" grade in the UK — an acceptable score for most people) by just knowing the 'bookwork', which you could do by memorising stuff without understanding it.

I always remember the exam frustration of knowing the exact page some graph was on but being unable to recall the graph itself. I hated it!

The university I went to offered open book/note exams for almost all courses. It literally didn't even matter how much you memorized... open book tests didn't make anything easier. you need to understand or fail.

I'm not into showing off rankings or pedigree, but I do genuinely believe that the higher your school's pedigree, the more likely the exams are to be harder and to require total understanding, even creativity, over rote memorization.

The reason is that memorization is trivial. Students able to get into any top school will likely all easily achieve full scores on ordinary tests. The professors at top schools need to make these tests brutally hard in order to produce a bell curve.

I literally had one new professor at my school give a midterm that would ordinarily be called fair at any other school or college... but the entire class ended up getting nearly full scores.

He realized his mistake and the final was way, way harder.

Rote memorization tests are the easiest to make and grade, both in terms of simple instructor effort as well as in terms of fairness and lack of ambiguity.

In Asia it is common for schools to grade solely based on multiple-choice exams, which promote cramming and memorization. But in the US I would be shocked if a program that only operated that way was accredited. I would argue students who simply memorize things are trying to hack their way to an unearned grade. The goal of higher education is supposed to be to reach higher levels of Bloom's taxonomy, beyond the rote stuff done in primary and secondary school.

> Students able to get into any top school will likely all easily achieve full scores on ordinary tests. The professors at top schools need to make these tests brutally hard in order to produce a bell curve.

This is no longer how “high pedigree” schools function in the US. There is too much blowback from helicopter parents and the kids who haven’t ever had to deal with critical feedback before.

Lookup “grade inflation” if you want to be horrified by the quality decline of “pedigree education”.

You've got a good point. My school is listed as one with grade inflation, but anecdotally my engineering department was pretty brutal. The average GPA was below 3 when I was there.

It could be my personal situation was not representative of the norm.

> This quality of “not stopping at an unsatisfactory answer” deserves some examination.

> This requires a lot of intrinsic motivation, because it’s so hard; so most people simply don’t do it.

It also requires self-confidence, persuasiveness, and social power.

Without these traits, your attempts to really understand something will be dismissed as "overthinking things" or "trying to understand the universe". Those around you will urge you to "stop thinking just do the task" or "do the obvious thing" as they lose patience with you. If you don't resist them you'll end up moving forward despite feeling confused, sometimes completely. You'll then end up pissing people off when you execute too slowly or fail (in their eyes, intentionally).

> This is a habit. It’s easy to pick up.

Not if the people around you are exhausted by you.

I want to add - habits like that are great to pick up, and fine tune, when you're in college.

More so if you're a college student that can focus 100% on your academic life/studies, unburdened by things like work.

EVEN more so if you have great mentors, professors, etc. that can guide you to the right place.

I'm not saying that one CAN NOT do the things above, if you're a busy student with work on the side, and very limited resources as far as mentors or professors go...but I do think that those lucky enough to find themselves in the right positions, are more likely to mature - and quicker.

(And it was no surprise to see that the author is a PPE grad from Oxford)

Your points about social context are great ones and 100% valid, but I want to make a case for "stop thinking just do the task":


Often, trying (and potentially failing) to do a task is the best way to learn about it. The key is to be very explicit about what parts of that task you actually understand and which ones you're pulling out of your ass.

This is especially true when creating software. It's super rare to have requirements that are concrete and detailed enough to form a comprehensive understanding of the final design before you start developing it. Instead there are usually parts that have clarity and others that are fuzzy. If you can enumerate those and keep them separate you can often leverage the parts you understand to make progress on those you do not. Writing placeholder/obviously-terrible code to stand in for the unknown parts just so that you can spin up a running system is a great way to do that. Along the way you'll see what patterns emerge, where you hit walls, etc, which is not always easy to imagine with raw abstract thought. And having "working" software that you can play with is a great way to find edge cases and otherwise make progress on those unknowns. Once you've gained a more complete understanding you can replace the placeholder junk with well-designed/actually-thought-out modules.
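A toy sketch of that pattern, with all names hypothetical: the part whose design is clear is real, the fuzzy part is an obviously-terrible stub, and the whole pipeline still runs end to end:

```python
def score(record):
    # Fuzzy part: the scoring rules are still unknown.
    # Placeholder so the pipeline runs; every record scores 0.
    return 0  # FIXME: obviously-terrible stand-in, replace later

def rank(records):
    # Clear part: the design is settled, and it already works
    # against the stub (sorted() is stable, so ties keep input order).
    return sorted(records, key=score, reverse=True)

print(rank(["b", "a", "c"]))  # ['b', 'a', 'c'] until score() is real
```

Playing with the "working" system like this is exactly where the edge cases and emergent patterns show up, well before the real `score` is written.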

(Another obvious reason to do this is if your company will literally go out of business if you wait until you have a perfect understanding to launch a product/service, but I think most people here get that.)

I'm not sure how much this generalizes, but it also works well for me when writing. I usually start with a vague understanding of an idea I hope to communicate, then jot down disjointed sentences to capture parts of it. As I do so it gradually becomes clear how things are connected, where my reasoning is muddy, what I thought I knew but can't express so probably don't, etc, and I can use this gained understanding to iteratively rewrite and reshape my message until it becomes something coherent. Sometimes, anyway; other times I don't end up sending/publishing it at all because along the way I learned that the thing I was hoping to communicate was based on a faulty assumption or is not as straightforward as I thought it was. Which is great, because either way I've learned something.


I guess I'd say it differently: "keep thinking and do the task".

Agree with this. I think of education as a "series of increasingly smaller lies". If you try to explain everything right away with complete fine-grained truthfulness, you quickly get to molecular dynamics, and don't have to stop there.

A common unspoken theme in the post is, the art of learning is knowing "what" to care about and "when".

It’s worth mentioning that this desire to think through things deeply often runs contrary to a bias for action, and it’s very difficult to measure progress in thought. There is a long phase of thinking very hard, and a short phase where things suddenly become clear. So, thinking through and understanding things deeply is often discouraged as a consequence.

The key is to decouple the two.

When you are required to act, act and act decisively. If you are clear that the understanding could be deeper (and it usually can), you trigger a work effort to understand more. So the next time you need to make a decision you’re more informed.

There are both extremes, and best not to be at either one of them.

Example: I forget the name of the principle, but in mathematics the statement "P implies Q" is considered true if P can never be true. As an example, let P be "George Washington was a woman" and Q be "Queen Elizabeth is a man". Then the statement "If GW was a woman, then QE is a man" is considered to be a true statement.
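For what it's worth, the convention is called material implication, and a true statement with a false premise is said to be vacuously true. It fits in one line of Python:

```python
# Material implication: "P implies Q" is defined as (not P) or Q,
# so it is automatically true whenever P is false.
def implies(p, q):
    return (not p) or q

# The only false case is a true premise with a false conclusion:
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))

# The George Washington example: P is false, so the implication
# holds no matter what Q is.
print(implies(False, True))   # True
print(implies(False, False))  # True
```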

I have a friend who refuses to accept that such a statement should be considered "true". And he has put off studying real analysis until he can learn enough logic theory to convince himself of the validity of accepting such statements as true. I do not think he'll ever get to study real analysis, because he is full of "No! I need to understand this really, really well before proceeding!" statements.

It's a fine approach if you have an infinite amount of time.

The other issue, as another commenter pointed out: It's very difficult to measure progress in thought. The mind is great at fooling itself, and not until you try to solve real problems (or discuss them with others) will you expose most of the gaps in your mind. The same person in the above anecdote does suffer from this. He definitely puts in effort to learn a lot (and has succeeded), but there are always more things to learn, and he moves on to the next topic before really applying what he has learned. As someone who talks to him often, it's really hard to tell if he understands. He is the classic case of "I'm sure I can solve problems when I need to, with a bit of review".

At the other extreme, of course, are people who are not really that motivated to understand. They are satisfied if they get the answer at the back of the book. You won't get far with just that.

> If you don't resist them you'll end up moving forward despite feeling confused, sometimes completely. You'll then end up pissing people off when you execute too slowly or fail (in their eyes, intentionally).

This sounds more like an issue at work, and your experience is fairly universal - most jobs I've worked at have it. In my experience, understanding things well is sadly not valued on the job. They want you to "execute", and want you to minimize the time you spend learning. And of course, they would rather hire someone else instead of ensuring your proper learning/training.

A real logician can correct me if I'm wrong, but I think those statements are considered "vacuously true" for the same reason we say the empty set is a subset of every set and x^0 is 1: it makes the rules of calculation simpler if we define it that way than other ways. But you can't use these convenient definitions to learn anything new.

You can keep multiplying your equation by x^0 and union-ing your set with the empty set and or-ing your proposition with "If eggs are diamonds then fish can talk" all the livelong day, but you never gain any more information.
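A quick Python sketch of the same point: each operation either leaves you exactly where you started or hands you a tautology you already had, so no information is gained:

```python
x = 7
s = {1, 2, 3}
p = False

# "If eggs are diamonds then fish can talk": false premise, so True.
vacuous = (not False) or True

assert x * x**0 == x           # multiplying by x^0 = 1 changes nothing
assert (s | set()) == s        # union with the empty set changes nothing
assert (p or vacuous) is True  # or-ing in a vacuous truth yields only "True"
print("no new information gained")
```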

I pretty much told him the same: That we declared such statements to be true merely for convenience - it makes the proofs shorter. And that none of the theorems he would normally deal with would have a different "outcome" if we don't accept such statements - they'd still be valid theorems.

But he won't be convinced until he formally studies logic. No idea when that will happen.

Well I have "formally" studied formal logic and he's in for a disappointment. Formal logic is a game, especially boolean logic, which has no grounding in truth, the world, or anything. Sometimes it's a useful game, but there's no inherent meaning behind it.

Tell your friend that implication works that way because that's what we've assumed, nothing more, nothing less.

> Formal logic is a game, especially boolean logic, which has no grounding in truth, the world, or anything.

Thing is: He knows this. He's not a "beginner" who wants to learn a bit more. He just hasn't spent enough time pondering vacuously true statements, and is assuming there is more to it than there is, and hopes studying logic will shed some light. So he refuses to study analysis until he has time to study logic.

No doubt, if he ever gets to logic, he'll end up with more questions and branching off further and further. He'll never get around to analysis. But like the top level comment - he doesn't like it when people tell him not to bother.

Though it doesn't cover implication, there is some reasoning behind certain other vacuously true statements. Take for example:

All purple cows are smart.

This statement is true, because the set of purple cows is empty. It might seem odd, but it's because we can rephrase it as:

There does not exist a purple cow that is not smart.

When you phrase it like that, I think most people would intuitively agree that it should evaluate to true. In other words, to preserve symmetry between the universal and existential quantifiers, you need the first statement to also be true. Without the ability to transform "exists" into "forall"s, first-order predicate logic would be pretty much useless. In addition, if the emptiness of a set actually changed how the quantifiers worked, it would be a big problem: breaking referential transparency, if you will.
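That duality is easy to check mechanically. A quick Python sketch, where `all` and `any` play the roles of the quantifiers (the names `purple_cows` and `is_smart` are just illustrative):

```python
# The set of purple cows is empty, so "all purple cows are smart"
# has no possible counterexample.
purple_cows: list[str] = []

def is_smart(cow: str) -> bool:
    return False  # irrelevant: never called over an empty domain

# "For all x, P(x)"  ==  "there does not exist an x with not P(x)"
forall_smart = all(is_smart(c) for c in purple_cows)
exists_counterexample = any(not is_smart(c) for c in purple_cows)

assert forall_smart               # vacuously true
assert not exists_counterexample  # no purple cow fails to be smart
assert forall_smart == (not exists_counterexample)  # the duality holds
```

Note that `all` over an empty iterable is True and `any` is False by definition - Python bakes in exactly the convention being discussed.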

I don't recall there being a similar justification for implication in boolean logic, but I think the reasoning is similar. Hope that helps!

> I don't recall there being a similar justification for implication in boolean logic, but I think the reasoning is similar.

You can apply a similar equivalence, the contrapositive. Using the same example I gave sidethread:

0. (Premise - I wish to snottily imply that "that guy" did not graduate from high school.)

1. "If that guy graduated from high school, I'm the King of England."

(1) is exactly equivalent to (2):

2. "If I'm not the King of England, that guy didn't graduate from high school."

A positive proposition in (1) is negative in (2), and vice versa. But they are the same statement; if one is true, the other is also true.
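The equivalence can be verified exhaustively over the four truth assignments - a small sketch (again, `implies` is a hand-rolled helper, not a built-in):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# A conditional and its contrapositive agree on all four truth assignments.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)
```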

> the statement "P implies Q" is considered true if P can never be true. As an example, let P be "George Washington was a woman" and Q be "Queen Elizabeth is a man". Then the statement "If GW was a woman, then QE is a man" is considered to be a true statement.

> I have a friend who refuses to accept that such a statement should be considered "true".

This is a weird thing to refuse to accept, since it is an ordinary part of vernacular English, a common way of dismissing an assertion as false.

"If that guy graduated high school, I'm the King of England."

Actually, there were women who dressed like men and pretended to be men -- and if it surprisingly turned out this was the case with GW, that wouldn't make QE a man.

I have to agree with your friend :-)

Yeah, there's a bunch of caveats here. You also need to be reasonably quick to understand things, otherwise you'll be spending a lifetime trying to understand some pretty basic concepts and never get to anything interesting.

In general, there are a bunch of starting conditions necessary for this kind of approach to glean value.

I don’t agree because I still do this all the time. Sure some people get annoyed, assholes will try to make you feel bad for asking dumb questions, or even mock you publicly. The fact is, your understanding grows so quickly when you ask questions that this doesn’t matter.

As an engineering manager, I love having a mix of people who just always need to go deep on whatever they’re working on and others who are just obsessed with shipping and getting stuff out. Particularly great when you pair them off and they push/pull each other a bit.

Totally agree with this. I used to be solely in the latter camp and my teammates pulled me in the deep direction.

I have always thought that, as mentioned in the article, time is the greatest enemy. Human minds are incredible at understanding and theorizing, if given enough time. But when we are young and lacking information, pressure and trust in authority create a time restriction that keeps us from forming truly "soft", flexible models. Thus we are taught to "forget" about a problem once it is solved. The problem itself becomes a task to be finished (for brownie points) rather than an actual problem with its own unique set of rewards. Not every problem rewards equally, and some problems may even be detrimental to solve, but succeeding in our education system requires us to regard problems as things we solve as proof to authority, so that we may earn some free time.

Great post.

Heh I get this a lot in my daily sysadmin/developer duties. When you need to turn on a knob somewhere you don't just wanna know that you have to turn it on, you want to know why you need to turn it on and follow the chain up until you get to facts you already know. But it's not always possible to get that far, there's too many layers of abstraction, the source code is not available or you just don't have the time.

I can draw a parallel to this in software development. Some product features that require development "from scratch", where you can get down to the original code and logic - this is where "taking time to think" really pays off.

But when you are basically composing a final product from components, libraries and features - this is where figuring something out may take a really long time and a lot of effort. In today's world many libraries are open source, so you actually can get to the bottom of many issues. But the time and effort cost of that is almost never acceptable.

My conclusion is: if you are a "slow" thinker who prefers getting to the bottom of things and figuring stuff out - try to choose the "fundamental" type of work. If you are a "done is better than perfect" kind of person - you'll thrive in the upper layers of the development stack, where shipping stuff out is of utmost importance. Focus on your strengths.

Where do you find this "fundamental" kind of work? How do you select for it?

One example would be analytics libraries that require precise calculations and background in math as opposed to the UI that displays the bar chart with results. The former is what I consider to be "fundamental" where the latter is much higher level and close to the end user (UI).

Other examples: audio / video codecs vs. media player app; game engine vs. intro screen and menu stuff, etc...

Swimming in uncertainty and (conditionally) trusting abstractions is one of the skills I have to teach my interns. I love their desire to understand and I try to be careful not to kill it. But at the same time, software engineering in the real world means contributing to a picture that is larger than will fit in your head.

Agree on the aspect of time. As we have progressed, we now have so many levels of abstraction that it is hard to think deeply about the problem - almost like a codebase that has grown too deep to understand every "bit" of it. Moreover, people now work in teams rather than as one individual thinking about the system holistically.

It’s possible. It just takes time; very few people are in a position to devote that time.

This gives an advantage I haven’t seen discussed: when you put in the time, you make connections no one else thought of. It happens time and again, and it’s a clear pattern at this point.

It takes months of daily study, often tedious, with no clear benefit. But the benefits sometimes come. (I wrote “usually” rather than “sometimes,” but that’s not really true. The usual result is that you go to sleep more confused than you started. It’s not till much, much later that the connections even seem relevant.)

I like to imagine that at some point we might collectively have a large enough software development population to solve most significant problems comprehensively enough -- and fairly and equitably enough -- for most people that we begin to see developers with increasing amounts of free time.

At that point I think we could collectively really begin digging into some of the huge backlog of software bugs and errors that we've built up over time and make everything more reliable, seamless and consistent.

It'd be a massive undertaking, especially to solve each issue thoroughly and without causing negative externalities elsewhere. But it'd also be a great puzzle-solving and social challenge, not to mention an educational and useful one.

I worry that the way modern society is structured disincentivizes deep understanding.

1. Industry cares more about concrete results, quick execution, and bias for action.

2. Academia cares more about positive results, quantity of published papers, and small achievable experiments over big experiments that might fail.

Where are the institutions that care about deep understanding?

I agree. I think this is why I find companies like Tesla and SpaceX exciting. They seem to have set up incentive structures that encourage _both_ quick execution and innovation (which requires deep understanding). One thing Musk has said that really struck me is that it's _really_ difficult to produce innovation if you tie punishment to failure. People tend to be conservative if they are punished / think they will be punished harshly for trying and failing. But if you want to innovate, failure has to be an acceptable outcome.

hopefully we see more companies go in this direction

well that's where the whole mantra of "move fast and break things" comes from.

Putting it out there and failing also accelerates you faster to the right answers. If you release it today, it'll take 6 more months of iteration to really get it right. Or maybe you spend an extra 2 years of development to get it "right", but then once you release, you'll still have to spend 3 more months of iteration anyways to get it right.

yeah, this hits home for me because my team just spent a couple of years trying to get a product right and now it's on the verge of being replaced.

I think I've typically associated that mantra with pure software companies, so it's surprising when a company does this with rockets

I would say the IAS is one, as are other pure research institutes like Perimeter.

I would also say that a deep understanding isn’t usually necessary to make progress in many fields of human endeavor. Fields like engineering work on the basis of empiricism — theoretical understanding usually comes later. I could be all wrong here but my gut feeling is that the majority of great breakthroughs in engineering have come through tinkering and luck rather than a principled application of science — the latter is used more for refining the execution.

Even in recent times, neural network models have been shown to work without any deep understanding apart from the basics. It's only recently that a lot of new theory has come out.

Deep understanding is a worthy goal in order to deconstruct things to learn how to make them fundamentally better or to build a foundation for further progress.

But as humans, we do very well surviving in a world that we largely do not deeply understand; I'd argue we are able to do so through heuristics (see Gigerenzer). We definitely should not let the lack of deep understanding prevent us from taking the initiative to do things.

Unfortunately deep understanding doesn't put food on the table, concrete actions do.

The points about education really resonate with me. In my engineering undergrad, it was frustrating to see in our applied math courses that the mechanical plugging and chugging of equations was the approach that most of my peers took in their studies. They got better grades than me. I wanted to understand concepts more deeply, but there was no time, and the tests rewarded those who could simply go through the motions of applying formulae to problems.

I'm a first-year engineering undergrad; last Friday I finished the last Calculus 3 exam of the semester.

I spent 2 days studying with friends, only to see that they would blindly memorize formulas with no regard whatsoever for what they were actually doing. "I'm not the understanding type", they said, unironically.

Do you think this happens more often in engineering than other disciplines? Some people believe that applied science or mathematics means that just learning formulas is enough

I was one of those people, and I regret it. However, the course load was just so great that I'm not sure how I'd have done it the "slow" way. I wish I had the time to go back to school and do it right.

> But it’s not just energy. You have to be able to motivate yourself to spend large quantities of energy on a problem, which means on some level that not understanding something — or having a bug in your thinking — bothers you a lot. You have the drive, the will to know.

This resonates with me. Someone once asked me how I decide when I'm finished with a particular thing I'm working on. The answer is as simple as "when I stop thinking about it". When it stops bubbling up in my thoughts. Until then, I'll keep returning, and I'll keep chipping away.

That's a really great and profound answer. Thanks!

One of the things I remember quite vividly from my Psychology classes in college was the idea of a "satisficing" problem solver. Given a (reasonably solvable) problem to solve most people can come up with a satisfactory solution. The difficulty comes when you ask them to come up with a new solution. Many people struggle because their brains say "I already came up with a solution, and it was a pretty darn good solution too."

The really creative people are the ones who insist that their brains come up with another solution, and another one until they can be confident that they've found the best solution within a reasonable time investment.

To me this seemed to get easier as I encountered more "languages of the mind", i.e. more ways to think about things. This is very much tied to the "nurture" part of our lives, as foundational experiences are, by the very definition of "experience", subjective and unique.

Solving a problem within mathematics in a new way can be made much easier if you have grasped multiple fields (for example, algebra vs. geometry). I've seen people understand chemistry well because they enjoyed cooking, and they could use either domain to work toward an explanation or solve a problem. I definitely started to grasp chemistry only when I reached a decent level in theoretical physics.

Here's my assumption: everything has a likelihood of depending to some degree on other things (examples: can you do mathematics without a language or writing? can you do physics without mathematics?). Therefore, "thinking laterally" could very well be thought of more as "thinking with an interesting combination of previous vectors of thought". Perhaps the "genius" is to create nonlinear combinations of previous vectors.

So in short, this ability to come up with another solution, and another, and another... I wonder how much it is tied to the richness of experiences you've had since birth, particularly the foundational ones (which, at the very least, I would assume are more testable than later ones, if my hypothesis about the dependency of "thought vectors" is true).

Not stopping after the first right answer is one metacognitive strategy[1] among many. Metacognition is an area of active research.

I first heard of metacognition as a distinct discipline in connection with the treatment of insecure attachment.[2]

Metarationality[3] is an aspect or extension of metacognition.

[1] https://helpfulprofessor.com/metacognitive-strategies/

[2] https://www.psychologytoday.com/us/blog/the-resilient-brain/...

[3] https://meaningness.com/eggplant/introduction

From the article:

> Visualizing something, in three dimensions, can help you with a concrete “hook” that your brain can grasp onto and use as a model; understanding then has a physical context that it can “take place in”.

I'm still sore at the educational system in my country (Serbia) for ignoring and occasionally discouraging this kind of visual thinking. Remembering words and reproducing them at the time of examination was rewarded far more than constructing a mental model of how things work at the abstract level, and then deriving conclusions from that. Some teachers were exceptions, of course, and I remember them vividly and fondly to this day, but the rest are like some amorphous gray mass that barely existed somewhere in my past.

I still remember being told to memorize the Schrödinger equation in high school, before we had even covered, let alone understood, the underlying maths. And I still don't. The funny thing is that this teacher was one of the "good ones", and she actually apologized to us for forcing us to do this. But her hands were tied by the teaching "plan and program" that was obviously created by a committee whose members didn't talk to each other.

At the university (maths + CS), we would write down (in our notebooks) the computer programs scribbled onto the blackboard by the professor, then memorize them for the exam, which was taken on paper. Nobody ever asked any questions. I still remember a colleague who didn't understand the concept of a pointer. That was in the third year, and she had good grades.

I honestly hope things have improved in the meantime (the above was in the '90s).

Unironically writing about "honesty, integrity, and bravery" while working at Palantir

Jesus christ that background is distracting.

Wasn't sure whether your comment was relevant, then I remembered I closed the article a third of the way through because of a deep annoyance with the background.

I'm all for individual expression, but here I think it subtracts from the reading experience.

Sad to say, I had the same reaction :\

Huge fan of this kind of thinking.

I try to recommend this technique with software all the time. Basically, instead of the workflow being this:

1) follow some tutorial online 2) integrate into real work

I do

1) follow some tutorial online 2) spend an hour to see how I can break it 3) integrate into real work

That step two does all sorts of wonderful things, like helping you understand failure modes, get a better sense of quality, and debug more easily. You get the same knowledge over time by doing "real work", but it is much, much more efficient this way :)

How would you create a training program to teach all of these lessons?

I thought of creating a workshop at my uni titled "How to ask stupid questions". Essentially: group activities where someone presents on some topic, and the goal of the audience is to ask genuine "stupid questions" - questions about the fundamentals, which most people are embarrassed to ask, but which play a big part in understanding.

There is one point there that I thought was interesting, regarding asking questions and not being afraid to look stupid, and how many senior/distinguished people do that. Being in science, I've noticed that it gets easier as you gain stature. If you are a professor, it is very easy to ask a postdoc/student after a talk -- "sorry, I don't understand this", or "sorry if this is a stupid question". The reasons: 1) you know the limits of your knowledge much better, so it is unlikely to be a stupid question, and 2) you may be less dependent on the opinions of the people around you. If you are a grad student, by contrast, there is a higher chance of looking stupid when you ask a question, because you may know significantly less, and you may be more wary of people with higher career status.

Because of all this, I find it a bit disingenuous when senior people tell younger people to always ask questions and not be afraid of looking stupid. But being curious and asking questions is good overall. It is just not always easy.

> Because of all this, I find it a bit disingenuous when senior people tell younger people to always ask questions and not be afraid of looking stupid. But being curious and asking questions is good overall. It is just not always easy.

I agree with your general point, but a minor comment here: in many cases the professor/leader asking those (stupid) questions may be to create psychological safety for a more open conversation. It is less threatening to the person being questioned; it also encourages other people to participate.

Sharing one of my favorite quotes on this topic, which James Clerk Maxwell was said [0] to have told his students:

"In this class I hope you will learn not merely results, or formulae applicable to cases that may possibly occur in our practice afterwards, but the principles on which those formulae depend, and without which the formulae are mere mental rubbish. I know the tendency of the human mind is to do anything rather than think. But mental labor is not thought, and those who have with labor acquired the habit of application often find it much easier to get up a formula than to master a principle."

[0] https://www.youtube.com/watch?v=v40OcJ7rfSE&t=788s

Excellent read. So much of this resonates deeply with me based on uni experiences. I studied math and was guilty on more than one occasion of basically memorizing some theorems and methods of manipulation, missing the forest for the trees so to speak.

I catch myself doing this quite often: I read documentation, try a couple of things, and if it works, I move on. Now this is all good when dealing with simple things, but the more complex things are, the less well it works. It's like reading "Learn Python in 10 Days" and then going on GitHub with all that newly gained knowledge and confidence and trying to understand how a large codebase works. Within about 30 seconds you close the browser and binary tears start dripping on your keyboard...

> Moreover, I have noticed that these ‘hardware’ traits vary greatly in the smartest people I know -- some are remarkably quick thinkers, calculators, readers, whereas others are ‘slow’.

This strikes a chord. I am much more of a nerd than my girlfriend, but I notice at almost every opportunity that she's a much quicker thinker and a much better learner than I am. I have studied math and computer science, but she regularly out-thinks me in puzzles, even when they are math-related.

This takes a habit that's sometimes good for some people and some subjects, and turns it into a universal recommendation, and then claims that's what intelligence is, which is really quite dubious.

To work on math, you need time, a peaceful place to think, and motivation. Even then, you can't do this for everything, because there is too much. Obsessing on something that's not urgent when there's more important stuff to do may not be good time management, depending on your priorities and other claims on your time. But you might do it anyway, depending on your interests.

Also, learning some other subject well may be less about thinking by yourself and more about going out and talking to people, or playing a lot of games, or challenging yourself in some other way. All that takes time too.

But there is a lesson here: knowing something in more than one way means you know it better. I see this especially in music, where there are multiple ways to memorize a piece and they reinforce each other. Auditory memory (being able to hear it in your head), muscle memory, knowing the chords, knowing the lyrics, even remembering where it is on the page can all help.

Gerry Sussman talked at length about IV during his DanFest https://legacy.cs.indiana.edu/dfried_celebration.html talk: https://www.youtube.com/watch?v=arMH5GjBwUQ&t=295 .

> not understanding something — or having a bug in your thinking — bothers you a lot.

I find this to be true for nearly all of the best co-workers I've had. I think this is the same reason people enjoy debugging software. It's the computer telling you that there's a bug in your thinking

My own take trying to express this dichotomy of second-hand versus firsthand knowledge https://blog.alexrohde.com/archives/682

Great article. A nice followup can be an article that talks about how to prioritize, or select which problems to take a deep dive in.

Interesting; for the chain rule proof he cites, it would seem at first glance that if you rewrite the leibniz notation as its equivalent limit notation, then the y terms cancel. It has been a while since I learned differential calculus, so perhaps that is why I don't immediately recall why this is wrong?

> Another quality I have noticed in very intelligent people is being unafraid to look stupid.

At school, there was a guy who used to ask questions which made him look stupid, at first. After discussing said questions for a few minutes, it often became apparent that those questions were in fact at the heart of the topic.

Reminds me of "How to Solve It" by George Pólya. A short read about finding approaches to problems, planning proofs, and understanding in retrospect.


> Understanding something really deeply is connected to our physical intuition

This rings true to me.

I think that I "understand" how to start a fire, which I do a couple dozen times a year, in a deeper more complete way than I understand any of the abstract software development that I spend 30 hours a week doing.

I can't scale the text on that page. I tried in both Firefox and Edge.

This reminds me of a discussion in the most recent episode of Django Chat with Aymeric Augustin on the difference between tutorials and reference documentation.

Interesting that his understanding of calculus doesn't include non-standard analysis. But then mathematics has a lot of abstractions.

Is this consciousness?

IQ is considered the biggest life changer.

Conscientiousness is considered second and, more importantly, is considered trainable.

Since spellcheck killed that -


It's part of the ‘Big Five’ factor of personality. https://en.wikipedia.org/wiki/Big_Five_personality_traits

It's interesting when Jordan Peterson was asked what he has been wrong about, he said he did not initially believe in the Big Five personality traits.

It fits with the hardware (IQ) and software (conscientiousness) analogy.

Oh my lord, why is Jesus in that list?

because it's actually quite relevant. The quality that the author is talking about is simply faith. Genuine intellectual activity is the recognition that there is a gap between how you understand the world and a more genuine degree of understanding that is still to be found.

Because there's no guarantee of success the attitude one needs to adopt is the same attitude a believer needs to have, which is to take a leap of faith.

And like genuine faith, genuine intellectual activity is not goal oriented. If you only think to optimise your shopping list or make more money, you're impoverishing your own thinking by making it subject to what basically amounts to some meaningless goal; that is to say, you're instrumentalising thinking. Just like being faithful so you end up on God's or the church's good side is an impoverished version of belief.

The OP goes into some detail on the essentialness of honesty and integrity when it comes to inquiry. Without both, your pursuit of deep understanding will short-circuit. The best way I know to overlook truth is to willfully presume you know something, especially something unknowable. And that's faith. To exercise faith, you must stop questioning, shut your eyes, and jump into the abyss. You stop thinking. If some sort of understanding arises from faith, it most certainly is not the fruit of intellectual pursuit.

Interesting. tl;dr -- we learn by experiencing things, not by being told things.

I think this is the transmissionist versus constructivist view of teaching.[0] This is something that is well-known in education, and I wish more people knew about it! Lay people commonly think of education as the transmission of ideas from teachers to learners, but educators believe that learners construct their own understanding of ideas. So these educators try to create situations where the learners can do that construction.

[0]: http://nas-sites.org/responsiblescience/files/2016/05/Dirks-...
