I don't think I really understood anything in school, but I was decent at going through the motions of carrying out certain methods and recalling certain facts when I needed to.
I went on to study Maths at university, and for most of my first year, I had the same surface level "methods + facts" knowledge that got me through school. After some studying, I could recite definitions and theorems, I'd memorised some proofs, and I could occasionally manipulate a problem to get an answer. I think about half of the cohort was in a similar position. But it was clear that there were others in a completely different league.
When we were studying for our first year exams, I was struggling to remember the proof of a specific theorem (it felt quite long). A friend was trying to help me learn it, and he asked me what "picture" I had in my head for the theorem. I didn't have any pictures in my head for anything.
It turned out that a simple drawing could capture the entire statement of the theorem, and from that drawing, the proof was trivial to derive. It was long-ish to write out in words, sure, but the underlying concept was really simple. This blew my mind — I realised I didn't have a clue what we'd been studying the whole year.
The worrying thing is that I actually thought I understood that stuff. Before that incident, I didn't know what it felt like to actually understand something, and I didn't have an appreciation for the layers of depth even within that. I suspect lots of people go through the entire education system like this.
1. A similar example: Feynman diagrams.
2. Another: Venn diagrams.
3. Longer example: On Navy nuclear-powered aircraft carriers, the officer of the deck underway (OOD) must have at least a basic understanding of how the engineering plant works. It's second nature for nuclear-trained OODs, of course, but non-nukes could sometimes have trouble. Back in the day, it turned out that an effective way to help non-nukes learn what they needed to know was to have them: (A) memorize a really-simple block diagram of the reactor and steam system, and also (B) memorize a chant, of modest length, that summarized how things worked. During slow periods while standing OOD watch, I'd make a non-nuke OOD trainee draw the diagram from memory; then I'd quiz him with "what if ..." questions (back then it was always "him"). If he got hung up on a question, I'd tell him, "chant the chant." That usually helped him figure out the answer in short order.
(U.S. submarines don't have that problem, AFAIK, because pretty much every officer who will stand OOD watches is nuclear-trained.)
+1. it took me a couple years after getting kicked out of college to get my head sorted out to the point where I felt like I could "think" again
I think one of the most harmful things about schooling is the way it imposes a tracked structure on learning. it demarcates knowledge into discrete subjects and sets up a linear progression through them and says you need to master each step on the track before moving onto the next one. this is poisonous and borderline evil, and I've encountered many people who are crippled for life by it. a lot of people never pursue things they're really interested in and could become extremely passionate about because school has convinced them they need to stack up prerequisite knowledge before they're even allowed to touch it
When reading Celine, one understands that children are natural-born learners, and that no effort is needed to make them learn. Our school model is the industrial production of objects: thinking human machines. We are far more than that. Sadly, Pink Floyd's description of school still echoes in our modern schools. Some people don't feel that way about school. I don't really know why. Maybe they never imagined how much better it could have been, so they found it great.
The Natural Laws of Children: Why Children Thrive When We Understand How Their Brains Are Wired
> A powerful, neuroscience-based approach to revolutionize early childhood learning through natural creativity, strong human connections, spontaneous free play, and more.
She has a home page, with an English version of some of her articles.
Freinet, C.: Education through work: a model for child centered learning; translated by John Sivell. Lewiston: Edwin Mellen Press, 1993. ISBN 0-7734-9303-4
What’s wrong with it? You do need to understand calculus before classical mechanics, classical mechanics before quantum mechanics, quantum mechanics before quantum field theory, and quantum field theory before the Standard Model. I’ve seen tons of people disregard this, and the result is always confused word salad. People waste years of their lives this way, going in circles without ever carefully building their understanding from the ground up. The order in school was chosen for a reason.
> You do need to understand calculus before classical mechanics
Yes, but how much calculus? Do you need all of Calc I, II and III before even attempting classical mechanics? And should calculus even be treated independently of classical mechanics?
There are various traditions when it comes to teaching these subjects, and the tradition in the US involves keeping a strict distinction between these things, in addition to a "theory first" approach. Other people have studied things in a different manner. Some of my physics professors from the UK had studied most of the math they knew only as needed when they would get to relevant topics in physics - including differential equations, all of analysis (complex or real), some of Calc III, etc.
Even amongst mathematicians, it was common in parts of Eastern Europe to focus on a problem and learn whatever theory was needed to solve it. They didn't learn theory and then apply it to problems - they took a problem and learned whatever theory was needed to solve it. I recall picking up a Kolmogorov textbook on analysis and being surprised to see this approach, along with the informality with which everything is discussed.
And just a minor quibble:
> classical mechanics before quantum mechanics,
You don't really need to know much except the basics. I think the classical mechanics we covered in our typical "Engineering Physics" courses was sufficient to dive into proper quantum mechanics. It's nice to have been exposed to Hamiltonians in classical physics prior to taking QM, but really not needed. There's a reason neither schools I attended made the classical mechanics courses as prereqs to QM. In fact, I would argue we should split things up a bit: Have a course to teach the very basics of energy, momentum, etc. Then make it a prerequisite to both classical mechanics and quantum mechanics.
Thank you. The value of calculus didn't really click for me until the next year in physics, at which point I wished I had paid more attention.
It's really unfortunate that you first have to drill high-speed high-precision symbol manipulation, devoid of any meaning, for so long before getting any indication of why you'd want to be able to do it.
I like Lockhart's analogy: mastering musical notation, transposition, etc. before being allowed to play your first note.
I can't think of any from my education, everything was bottom up building. But when I self-teach (as it naturally emerges from pursuing projects related to work or hobbies) it's often top down digging.
side note: I remember Einstein needed to learn particular math skills to help prove his ideas and set out to learn them.
I've seen this sentiment way too much on HN. X didn't work for me, therefore X is a scam, its perpetrators are evil sociopaths, and if it worked for you you're a cog in the machine, man.
is it better to learn python or java to build a cursory understanding of programming, and then c and x86 to see what "really" happens "under the hood?" or the other way around, starting from base operations and then layering abstractions on top? I don't think one is strictly better than the other. when I first learned sql, joins were intuitively obvious to me but bewildered the person I was learning with, despite the fact that he'd been in IT for years and I barely knew how to program, because I happened to know set theory. I wonder what it would be like to do the traditional algorithms and data structures stuff before ever learning to program. it might make picking up a lot of details easier! I wonder why we don't teach children logic gates and binary arithmetic before they learn decimal. it's actually simpler!
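for what it's worth, the set-theory view of joins can be sketched in a few lines of Python. the table names and rows below are made up for illustration, but the point stands: an SQL inner join on a key is essentially a set intersection on that key.

```python
# Two "tables" keyed by id; an inner join keeps only ids present in both.
employees = {1: "alice", 2: "bob", 3: "carol"}   # id -> name
salaries = {2: 50_000, 3: 60_000, 4: 70_000}     # id -> salary

# Roughly: SELECT name, salary FROM employees JOIN salaries USING (id)
joined = {
    emp_id: (employees[emp_id], salaries[emp_id])
    for emp_id in employees.keys() & salaries.keys()  # set intersection
}
```

if you already think in sets, the behavior of inner vs. outer joins (intersection vs. union, with missing values filled in) falls out almost for free.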
in general I don't think there's a good reason why you must teach raw technique before practical application, or a concrete special case before a broader abstraction. even an imperfect understanding of something "higher" can give you hooks to attach information about something "lower." familiarity and comfort and an understanding of interrelationships between knowledge is so much more valuable than perfectly executing each step on a given track
when I want to get acquainted with a new field, I usually start by reading current research, barely understanding any of it, and then work backwards through the author's previous papers and the papers he cites habitually. my cursory grasp of the latest version makes it easier to understand the earlier, sketchier versions, and at the same time the development history of the older sheds light on the newer. sure, you can always "start with the greeks" as it were and work your way up, but I don't think this is objectively better than going the opposite direction
really I think of knowledge as more a highly interconnected graph than a series of linear tracks. as long as you can anchor something somewhere it's valuable to pick it up. it can reinforce other things near it and you can strengthen your understanding of it over time. and getting into the practice of slotting arbitrary things together like this is good training for drawing novel inferences between seemingly disparate topics, which I think is the most valuable form of insight there is
Yes, every subject is linked to every other subject. If I'm tutoring introductory physics, I will use examples and analogies from math, computer science, engineering, biology, or more advanced physics, depending on the background and taste of the student, and it works fantastically. But if I'm lecturing to a crowd, this is impossible, because the students will differ. If I draw a link, some people will think it's enlightening, some will think it's totally irrelevant, some will think it's boring, some will think it's so obvious it's not worth saying, and most will just get more confused.
The same thing goes for the top-down "working backwards from cool results" approach; it's supposed to bottom out in something you know, but whenever you teach multiple people at once, everybody knows different things. The bottom-up linear approach is useful because it gives you a guarantee that you can draw a link. If I'm teaching quantum mechanics I expect to be able to lean on intuition from classical mechanics and linear algebra. If I didn't know the students had that, I would draw a lot fewer links, not more.
Similarly, "if people learned X in school, then Y would be easier to understand later" is true for almost any values of X and Y, because of the interconnectedness of knowledge. But if you ask any math teacher, they'll tell you the school curriculum is already bursting at the seams. You can't just add logic and set theory to existing school math without taking something out. In the 70s we tried taking out ordinary arithmetic to make room for that. It was called New Math, and everybody hated it.
The way you phrased this reminded me of A Mathematician's Lament by Paul Lockhart
Do you have an example of a "drawing" of a theorem, in this context? (I've seen these for fairly trivial theorems but not for more complex ones, so I'm curious.)
As an undergraduate studying maths, I encountered a standard theorem in one of my first courses, which says that about 947 different conditions are equivalent to a matrix having a determinant of zero. I dutifully memorised these. I also dutifully memorised the algorithm for how to calculate a determinant. I might even have remembered some verbatim proofs of some of the equivalences.
However, I developed absolutely no intuition about what a determinant is. I had book knowledge, but no insight. It was a long time ago now, but I’m fairly sure that when I graduated I still did not truly understand even this very basic (by undergraduate standards) subject. I think it was probably a few years later, when I came across some of the same theory but in a much more practical context at work, that most of the connections in that equivalence theorem first “clicked”.
Meanwhile, here is what a gifted presenter with the right illustrations can do in about ten minutes:
The 2,000 or so substantially identical comments below that video are very telling.
Given the understanding you’d get with that quality of presentation, the equivalences I mentioned above would have been obvious and constructing the proofs from first principles would have been straightforward.
This is the theorem I was talking about: https://i.imgur.com/1xEH51Z.png (taken from https://taimur.me/posts/thinking-at-the-right-level-of-abstr... which touches on a similar topic to your post)
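To make the idea concrete (this is my own toy illustration, not taken from the linked image), here is how a few of those equivalent conditions line up for a 2x2 matrix:

```python
def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] is a*d - b*c.
    (a, b), (c, d) = m
    return a * d - b * c

singular = [[1, 2], [2, 4]]  # second row = 2 * first row

# Condition 1: the determinant is zero.
assert det2(singular) == 0

# Condition 2: the rows are linearly dependent (one is a scalar multiple
# of the other), so the matrix squashes the plane onto a line.
assert singular[1] == [2 * x for x in singular[0]]

# Condition 3: the matrix is not invertible -- the inverse formula
# divides by the determinant.
def inverse2(m):
    d = det2(m)
    if d == 0:
        raise ZeroDivisionError("singular matrix has no inverse")
    (a, b), (c, dd) = m
    return [[dd / d, -b / d], [-c / d, a / d]]

try:
    inverse2(singular)
    raised = False
except ZeroDivisionError:
    raised = True
assert raised
```

With the "determinant = area scaling factor" picture in mind, all three read as the same geometric statement: the matrix collapses area to zero.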
So now I try to “do” instead of memorize and it makes all the difference in the world.
Even I can’t learn new programming languages unless I make a real project with them. That’s why I don’t believe in things like code katas or reading programming books cover to cover. They’re a poor substitute for using a tool in real life to learn its principles, techniques, and methods.
But it's good news that you were able to understand once he drew the picture for you. So there is an effective way to teach that theorem - If only the professor knew it.
Creating really unique mnemonics works similarly to how you can recall when something out of the ordinary happened, or have a "sixth sense" where you subconsciously feel something is off in an otherwise ordinary situation or encounter. If I had to guess, this might be an evolved human survival mechanism that helped our ancestors detect and avoid dangerous or deadly situations.
I wonder what other evolutionary mechanisms we can hack to encode concepts more deeply. I know spaced repetition is the mainstream approach via flashcard apps like Anki. I think mnemonics blended with spaced repetition is the most practical methodology. Thoughts?
Couldn't agree more. Some of the best CS books I've read have been ones with a tonne of illustrations, like the Head First series.
Humans are wired to enjoy and remember stories. So much education is delivered without any narrative, and as such it neglects to harness one of our most potent capabilities.
Did you do a lot of exercises in school? They usually help build the intuitive part, the aha moment, that comes through repetition.
We did do a lot of exercises in school, but they mostly just tested whether you can reliably apply a method that you were taught, rather than testing understanding.
I'm not into showing off ranking or pedigree, but I do genuinely believe that the higher your school's pedigree, the more likely its exams are to be harder and to require total understanding, even creativity, over rote memorization.
The reason is that memorization is trivial. Students able to get into any top school will likely all achieve full marks on ordinary tests with ease. The professors at top schools need to make their tests brutally hard in order to produce a bell curve.
I literally had one new professor at my school give a midterm that would ordinarily be called fair at any other school or college... but the entire class ended up getting nearly full marks.
He realized his mistake and the final was way, way harder.
In Asia it is common for schools to grade solely based on multiple-choice exams, which promote cramming and memorization. But in the US I would be shocked if a program that operated only that way were accredited. I would argue students who simply memorize things are trying to hack their way to an unearned grade. The goal of higher education is supposed to be reaching the higher levels of Bloom's taxonomy, beyond the rote work done in primary and secondary school.
This is no longer how “high pedigree” schools function in the US. There is too much blowback from helicopter parents and the kids who haven’t ever had to deal with critical feedback before.
Lookup “grade inflation” if you want to be horrified by the quality decline of “pedigree education”.
It could be my personal situation was not representative of the norm.
> This requires a lot of intrinsic motivation, because it’s so hard; so most people simply don’t do it.
It also requires self-confidence, persuasiveness, and social power.
Without these traits, your attempts to really understand something will be dismissed as "overthinking things" or "trying to understand the universe". Those around you will urge you to "stop thinking and just do the task" or "do the obvious thing" as they lose patience with you. If you don't resist them, you'll end up moving forward despite feeling confused, sometimes completely so. You'll then end up pissing people off when you execute too slowly or fail (in their eyes, intentionally).
> This is a habit. It’s easy to pick up.
Not if the people around you are exhausted by you.
More so if you're a college student that can focus 100% on your academic life/studies, unburdened by things like work.
EVEN more so if you have great mentors, professors, etc. that can guide you to the right place.
I'm not saying that one CAN NOT do the things above, if you're a busy student with work on the side, and very limited resources as far as mentors or professors go...but I do think that those lucky enough to find themselves in the right positions, are more likely to mature - and quicker.
(And it was no surprise to see that the author is a PPE grad from Oxford)
Often, trying (and potentially failing) to do a task is the best way to learn about it. The key is to be very explicit about what parts of that task you actually understand and which ones you're pulling out of your ass.
This is especially true when creating software. It's super rare to have requirements that are concrete and detailed enough to form a comprehensive understanding of the final design before you start developing it. Instead there are usually parts that have clarity and others that are fuzzy. If you can enumerate those and keep them separate you can often leverage the parts you understand to make progress on those you do not. Writing placeholder/obviously-terrible code to stand in for the unknown parts just so that you can spin up a running system is a great way to do that. Along the way you'll see what patterns emerge, where you hit walls, etc, which is not always easy to imagine with raw abstract thought. And having "working" software that you can play with is a great way to find edge cases and otherwise make progress on those unknowns. Once you've gained a more complete understanding you can replace the placeholder junk with well-designed/actually-thought-out modules.
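A minimal sketch of that placeholder approach, with names invented for illustration: the fuzzy part (here, a hypothetical ranking step) gets an obviously-terrible stand-in so the rest of the system can run end to end.

```python
def rank_items(items):
    # PLACEHOLDER: the real ranking logic is not understood yet.
    # Returning input order is obviously wrong, but it lets us
    # exercise the whole pipeline and see what patterns emerge.
    return list(items)

def build_report(items):
    # The part we *do* understand: formatting ranked items as a report.
    ranked = rank_items(items)
    return "\n".join(f"{i + 1}. {name}" for i, name in enumerate(ranked))

print(build_report(["widgets", "gadgets", "gizmos"]))
```

Because the placeholder is loudly marked as junk, there's no risk of mistaking it for a finished design; it exists only so the unknowns can be probed with a running system.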
(Another obvious reason to do this is if your company will literally go out of business if you wait until you have a perfect understanding to launch a product/service, but I think most people here get that.)
I'm not sure how much this generalizes, but it also works well for me when writing. I usually start with a vague understanding of an idea I hope to communicate, then jot down disjointed sentences to capture parts of it. As I do so it gradually becomes clear how things are connected, where my reasoning is muddy, what I thought I knew but can't express so probably don't, etc, and I can use this gained understanding to iteratively rewrite and reshape my message until it becomes something coherent. Sometimes, anyway; other times I don't end up sending/publishing it at all because along the way I learned that the thing I was hoping to communicate was based on a faulty assumption or is not as straightforward as I thought it was. Which is great, because either way I've learned something.
I guess I'd say it differently: "keep thinking and do the task".
A common unspoken theme in the post is, the art of learning is knowing "what" to care about and "when".
When you are required to act, act and act decisively. If you are clear that the understanding could be deeper (and it usually can), you trigger a work effort to understand more. So the next time you need to make a decision you’re more informed.
Example: I forget the name of the principle, but in mathematics the statement "P implies Q" is considered true if P can never be true. As an example, let P be "George Washington was a woman" and Q be "Queen Elizabeth is a man". Then the statement "If GW was a woman, then QE is a man" is considered to be a true statement.
I have a friend who refuses to accept that such a statement should be considered "true". And he has put off studying real analysis until he can learn enough logic theory to convince himself on the validity of accepting such statements as true. I do not think he'll ever get to study real analysis, because he is full of "No! I need to understand this really really well before proceeding!" statements.
It's a fine approach if you have an infinite amount of time.
The other issue, as another commenter pointed out: It's very difficult to measure progress in thought. The mind is great at fooling itself, and not until you try to solve real problems (or discuss them with others) will you expose most of the gaps in your mind. The same person in the above anecdote does suffer from this. He definitely puts in effort to learn a lot (and has succeeded), but there are always more things to learn, and he moves on to the next topic before really applying what he has learned. As someone who talks to him often, it's really hard to tell if he understands. He is the classic case of "I'm sure I can solve problems when I need to, with a bit of review".
At the other extreme, of course, are people who are not really that motivated to understand. They are satisfied if they get the answer at the back of the book. You won't get far with just that.
> If you don't resist them you'll end up moving forward despite feeling confused, sometimes completely. You'll then end up pissing people off when you execute too slowly or fail (in their eyes, intentionally).
This sounds more like an issue at work, and your experience is fairly universal - most jobs I've worked at have it. In my experience, understanding things well is sadly not valued on the job. They want you to "execute", and want you to minimize the time you spend learning. And of course, they would rather hire someone else instead of ensuring your proper learning/training.
You can keep multiplying your equation by x^0 and union-ing your set with the empty set and or-ing your proposition with "If eggs are diamonds then fish can talk" all the livelong day, but you never gain any more information.
But he won't be convinced until he formally studies logic. No idea when that will happen.
Tell your friend that implication works that way because that's what we've assumed, nothing more, nothing less.
Thing is: He knows this. He's not a "beginner" who wants to learn a bit more. He just hasn't spent enough time pondering vacuously true statements, and is assuming there is more to it than there is, and hopes studying logic will shed some light. So he refuses to study analysis until he has time to study logic.
No doubt, if he ever gets to logic, he'll end up with more questions and branching off further and further. He'll never get around to analysis. But like the top level comment - he doesn't like it when people tell him not to bother.
All purple cows are smart.
This statement is true, because the set of purple cows is empty. It might seem odd, but it's because we can rephrase it as:
There does not exist a purple cow that is not smart.
When you phrase it like that, I think most people would intuitively agree that it should evaluate to true. In other words, to preserve the symmetry between the universal and existential quantifiers, you need the first statement to also be true. Without the ability to transform "exists" into "forall" (and vice versa), first-order predicate logic would be pretty much useless. In addition, if the emptiness of a set actually changed how the quantifiers worked, it would be a big problem: breaking referential transparency, if you will.
I don't recall there being a similar justification for implication in boolean logic, but I think the reasoning is similar. Hope that helps!
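The reasoning can be made concrete in a few lines. A sketch, using the standard definition of material implication ("P implies Q" is `(not P) or Q`) and Python's convention that `all()` of an empty collection is true:

```python
def implies(p: bool, q: bool) -> bool:
    # Material implication: false only when P is true and Q is false.
    return (not p) or q

# Vacuously true: the antecedent is false, so the implication holds.
gw_was_a_woman = False  # "George Washington was a woman"
qe_is_a_man = False     # "Queen Elizabeth is a man"
assert implies(gw_was_a_woman, qe_is_a_man)

# "All purple cows are smart" over an empty set of purple cows:
purple_cows = []
assert all(cow == "smart" for cow in purple_cows)   # all() of nothing is True

# Equivalently: there does not exist a purple cow that is not smart.
assert not any(cow != "smart" for cow in purple_cows)
```

The last two assertions are the universal/existential symmetry from the purple-cow example, stated directly.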
You can apply a similar equivalence, the contrapositive. Using the same example I gave sidethread:
0. (Premise - I wish to snottily imply that "that guy" did not graduate from high school.)
1. "If that guy graduated from high school, I'm the King of England."
(1) is exactly equivalent to (2):
2. "If I'm not the King of England, that guy didn't graduate from high school."
A positive proposition in (1) is negative in (2), and vice versa. But they are the same thing; if one is defined, the other is also defined.
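That equivalence can be checked exhaustively, since there are only four truth assignments. A tiny sketch, taking "P implies Q" to mean `(not P) or Q` as usual:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: "P implies Q" is (not P) or Q.
    return (not p) or q

# P -> Q and its contrapositive (not Q) -> (not P) agree everywhere.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
```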
> I have a friend who refuses to accept that such a statement should be considered "true".
This is a weird thing to refuse to accept, since it is an ordinary part of vernacular English, a common way of dismissing an assertion as false.
"If that guy graduated high school, I'm the King of England."
I have to agree with your friend :-)
In general, there are a bunch of starting conditions necessary for this kind of approach to glean value.
But when you are basically composing a final product from components, libraries, and features, this is where figuring something out can take a really long time and a lot of effort. In today's world many libraries are open source, so you actually can get to the bottom of many issues. But the time and effort cost of that is almost never acceptable.
My conclusion: if you are a "slow" thinker who prefers getting to the bottom of things and figuring stuff out, try to choose the "fundamental" type of work. Whereas if you are a "done is better than perfect" kind of person, you'll thrive in the upper layers of the development stack, where shipping stuff is of utmost importance. Focus on your strengths.
Other examples: audio / video codecs vs. media player app; game engine vs. intro screen and menu stuff, etc...
This gives an advantage I haven’t seen discussed: when you put in the time, you make connections no one else thought of. It happens time and again, and it’s a clear pattern at this point.
It takes months of daily study, often tedious, with no clear benefit. But the benefits sometimes come. (I wrote “usually” rather than “sometimes,” but that’s not really true. The usual result is that you go to sleep more confused than you started. It’s not till much, much later that the connections even seem relevant.)
At that point I think we could collectively really begin digging into some of the huge backlog of software bugs and errors that we've built up over time and make everything more reliable, seamless and consistent.
It'd be a massive undertaking, especially to solve each issue thoroughly and without causing negative externalities elsewhere. But it'd also be a great puzzle-solving and social challenge, not to mention an educational and useful one.
1. Industry cares more about concrete results, quick execution, and bias for action.
2. Academia cares more about positive results, quantity of published papers, and small achievable experiments over big experiments that might fail.
Where are the institutions that care about deep understanding?
hopefully we see more companies go in this direction
Putting it out there and failing also accelerates you faster to the right answers. If you release it today, it'll take 6 more months of iteration to really get it right. Or maybe you spend an extra 2 years of development to get it "right", but then once you release, you'll still have to spend 3 more months of iteration anyways to get it right.
I think I've typically associated that mantra with pure software companies, so it's surprising when a company does this with rockets
I would also say that a deep understanding isn’t usually necessary to make progress in many fields of human endeavor. Fields like engineering work on the basis of empiricism — theoretical understanding usually comes later. I could be all wrong here but my gut feeling is that the majority of great breakthroughs in engineering have come through tinkering and luck rather than a principled application of science — the latter is used more for refining the execution.
Even in recent times, neural network models have been shown to work without any deep understanding beyond the basics. It’s only recently that a lot of new theory has come out.
Deep understanding is a worthy goal in order to deconstruct things to learn how to make them fundamentally better or to build a foundation for further progress.
But as humans, we do very well surviving in a world that we largely do not deeply understand for the most part — I’d argue we are able to do so through heuristics (see Gigerenzer). We definitely should not let the lack of deep understanding prevent us from taking the initiative to do things.
I spent two days studying with friends only to see that they would blindly memorize formulas with no regard whatsoever for what they were actually doing. "I'm not the understanding type", they said, unironically.
Do you think this happens more often in engineering than other disciplines? Some people believe that applied science or mathematics means that just learning formulas is enough
This resonates with me. Someone once asked me how I decide when I'm finished with a particular thing I'm working on. The answer is as simple as "when I stop thinking about it". When it stops bubbling up in my thoughts. Until then, I'll keep returning, and I'll keep chipping away.
The really creative people are the ones who insist that their brains come up with another solution, and another one until they can be confident that they've found the best solution within a reasonable time investment.
Solving a problem within mathematics in a new way can be made much easier if you have grasped multiple fields (for example, algebra vs. geometry). I've seen people understand chemistry well because they enjoyed cooking, and they could use either one to reach a given explanation or solve a problem. I only really started to grasp chemistry once I reached a decent level in theoretical physics.
Here's my assumption: everything has a likelihood of depending to some degree on other things (examples: can you do mathematics without a language or writing? can you do physics without mathematics?).
Therefore, "thinking laterally" could very well be thought of more as "thinking with an interesting combination of previous vectors of thought". Perhaps the "genius" is to create nonlinear combinations of previous vectors.
So in short, this ability to come up with another solution, and another, and another.. I wonder how much it is tied to the richness of experiences you've had since your birth, and particularly the foundational ones (at the very least I would assume to be more testable than later on, if my hypothesis about dependency of "thought vectors" is true).
I first heard of metacognition as a distinct discipline in connection with the treatment of insecure attachment.
Metarationality is an aspect or extension of metacognition.
> Visualizing something, in three dimensions, can help you with a concrete “hook” that your brain can grasp onto and use as a model; understanding then has a physical context that it can “take place in”.
I'm still sore at the educational system in my country (Serbia) for ignoring and occasionally discouraging this kind of visual thinking. Remembering words and reproducing them at exam time was rewarded far more than constructing a mental model of how things work at the abstract level and then deriving conclusions from it. Some teachers were exceptions, of course, and I remember them vividly and fondly to this day, but the rest are like some amorphous gray mass that barely existed somewhere in my past.
I still remember being told to memorize Schrödinger's equation in high school, before we had even covered, let alone understood, the underlying maths. I still don't. The funny thing is that this teacher was one of the "good ones", and she actually apologized to us for forcing us to do this. But her hands were tied by the teaching "plan and program", which was obviously created by a committee whose members didn't talk to each other.
At the university (maths + CS), we would copy into our notebooks the computer programs the professor scribbled onto the blackboard, then memorize them for the exam, which was taken on paper. Nobody ever asked any questions. I still remember a colleague who didn't understand the concept of a pointer. That was in the third year, and she had good grades.
I honestly hope things have improved in the meantime (the above was in the '90s).
I'm all for individual expression, but here I think it subtracts from the reading experience.
I try to recommend this technique with software all the time. Basically, instead of the workflow being this:

1) follow some tutorial online
2) integrate into real work

I suggest this:

1) follow some tutorial online
2) spend an hour seeing how I can break it
3) integrate into real work
That step two does all sorts of wonderful things: it helps you understand failure modes, gives you a better sense of quality, and makes debugging easier. You'd get the same knowledge over time by doing "real work", but it is much, much more efficient this way :)
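As a concrete (hypothetical) example of step two, suppose the tutorial covered Python's standard `json` module. An hour of "how can I break it?" might look like deliberately feeding it inputs the tutorial never mentioned:

```python
import json

# Step 2: probe the tool's failure modes before trusting it in real work.

# What happens with almost-JSON input?
try:
    json.loads("{'single': 'quotes'}")  # single quotes are not valid JSON
except json.JSONDecodeError as e:
    print("rejects single quotes:", e.msg)

# Does it round-trip unusual values? json.dumps(float("inf")) emits
# "Infinity", which is NOT standard JSON, yet json.loads accepts it back
# by default -- a quirk worth discovering before production use.
print(json.loads(json.dumps(float("inf"))))

# Are dict keys preserved? An integer key silently becomes the string "1".
print(json.loads(json.dumps({1: "a"})))
```

None of this is in a typical tutorial, but each surprise is exactly the kind of failure mode you'd otherwise meet for the first time while debugging real work.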
I thought of creating a workshop at my uni, titled "how to ask stupid questions?" Essentially, do group activities where someone presents on some topic, and the goal of the audience is to ask genuine "stupid questions" - questions about the fundamentals, which most people are embarrassed to ask, but which play a big part in understanding.
Because of all this, I find it a bit disingenuous when senior people tell younger people that they should always ask questions and not be afraid of looking stupid.
But being curious and asking questions is good overall. It is just not always easy.
I agree with your general point, but a minor comment here: in many cases the professor/leader may be asking those (stupid) questions precisely to create psychological safety for a more open conversation. It is less threatening to the person being questioned, and it also encourages other people to participate.
"In this class I hope you will learn not merely results, or formulae applicable to cases that may possibly occur in our practice afterwards, but the principles on which those formulae depend, and without which the formulae are mere mental rubbish. I know the tendency of the human mind is to do anything rather than think. But mental labor is not thought, and those who have with labor acquired the habit of application often find it much easier to get up a formula than to master a principle."
This strikes a chord. I am much more of a nerd than my girlfriend, but I notice at almost every opportunity that she's a much quicker thinker and a much better learner than I am. I have studied math and computer science, but she regularly out-thinks me in puzzles, even when they are math-related.
To work on math, you need time, a peaceful place to think, and motivation. Even then, you can't do this for everything, because there is too much. Obsessing on something that's not urgent when there's more important stuff to do may not be good time management, depending on your priorities and other claims on your time. But you might do it anyway, depending on your interests.
Also, learning some other subject well may be less about thinking by yourself and more about going out and talking to people, or playing a lot of games, or challenging yourself in some other way. All that takes time too.
But there is a lesson here: knowing something in more than one way means you know it better. I see this especially in music, where there are multiple ways to memorize a piece and they reinforce each other. Auditory memory (being able to hear it in your head), muscle memory, knowing the chords, knowing the lyrics, even remembering where it is on the page can all help.
I find this to be true for nearly all of the best co-workers I've had. I think this is the same reason people enjoy debugging software: it's the computer telling you that there's a bug in your thinking.
At school, there was a guy who used to ask questions which made him look stupid, at first. After discussing said questions for a few minutes, it often became apparent that those questions were in fact at the heart of the topic.
This rings true to me.
I think that I "understand" how to start a fire, which I do a couple dozen times a year, in a deeper more complete way than I understand any of the abstract software development that I spend 30 hours a week doing.
IQ is considered the biggest life-changer.
Conscientiousness is considered second, and, more importantly, is considered trainable.
It's one of the "Big Five" personality factors. https://en.wikipedia.org/wiki/Big_Five_personality_traits
It's interesting that when Jordan Peterson was asked what he has been wrong about, he said he did not initially believe in the Big Five personality traits.
It fits with the hardware (IQ) and software (conscientiousness) analogy.
Because there's no guarantee of success, the attitude one needs to adopt is the same attitude a believer needs to have, which is to take a leap of faith.
And like genuine faith, genuine intellectual activity is not goal-oriented. If you only think to optimise your shopping list or make more money, you're impoverishing your own thinking by subjecting it to what basically amounts to some meaningless goal; that is to say, you're instrumentalising thinking. Just as being faithful so you end up on God's or the church's good side is an impoverished version of belief.