Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics. Newton's statement of the law of gravitation is relatively simple mathematics. It gets more and more abstruse and more and more difficult as we go on. Why? I have not the slightest idea. It is only my purpose here to tell you about this fact. The burden of the lecture is just to emphasize the fact that it is impossible to explain honestly the beauties of the laws of nature in a way that people can feel, without their having some deep understanding of mathematics.
50 pages in, I decided to take a step back and read a calculus book first. But wait, my algebra and trig are crap, so back to the basics. Yesterday I hit LCM and GCD applications and factoring, which are very basic. So I'll probably resume the initial book in a couple of years or so...
With these you should be able to follow the Feynman lectures or watch the very fine "Theoretical Minimum" series by Susskind (http://theoreticalminimum.com).
I'm doing a full review of mathematics at the moment. Not in depth, more of a "here's an application of the GCD function" so I know what tools to use to solve specific problems. All this is beneficial for the day job as well, where they expect to see some value from my time spent, even though I'm not being totally honest with them about the objective. Realistically I want to think abstractly in the terms of mathematics and develop some intuition.
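To give a concrete instance of the kind of "application of the GCD" I mean, here's a minimal sketch in Python (the fraction values are just made up for illustration): reducing a fraction to lowest terms is exactly a GCD problem.

```python
from math import gcd

def reduce_fraction(num, den):
    """Divide numerator and denominator by their GCD to get lowest terms."""
    g = gcd(num, den)
    return num // g, den // g

# 84/36 reduces to 7/3 because gcd(84, 36) = 12
print(reduce_fraction(84, 36))  # (7, 3)
```

Once you see the GCD as "the biggest common factor you can cancel", a lot of factoring exercises start to feel like tools rather than busywork.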
Was completely unaware of the Theoretical Minimum series. Thanks for that.
Edit: I'm reading Mathematics: From the Birth of Numbers by Jan Gullberg as a text. Wonderful book. Covers just about everything and is beautifully written by a non-mathematician with no assumptions spared and no education target. In fact the foreword is mainly bitching about the education system. Slightly worried I will get distracted by this book, but that's never a loss!
Over 1000 pages is quite a long read, though. I never managed to read a (science) book that big from cover to cover myself. One thing I learned through the years is to never use only one book for learning. Books have different styles, and not every style fits every student. Additionally, one book might be good at one specific topic and weak on another. So nowadays I always use a couple of books (or online resources) to learn a new topic.
Quick page shot to show the scope and density: http://i.imgur.com/sV1WYFd.jpg
I have a number of other books that I use as references as well, so no problems there (Calculus for the Practical Man has some different insights). Oh, and betterexplained.com.
One suggestion: make sure you don't just read about math, but also try solving exercises and problems. These are very important for actually learning the material.
Agree with solving problems; this was what was missing from my school education. Literally rote and box ticking with zero applications.
So, I think you should think about intuition, math, whatever else you have in your pocket as tools. They give the right answer when used correctly and sometimes give the wrong answer even when you're sure you're using them correctly. I think though that intuition CAN be more insidious because it's what you've experienced! It HAS to be true, you think.
In the example, I think what Feynman is describing is what we commonly call "visualization": being able to "see" the problem in the imagination. That is no less a form of abstraction, and one that is often vital to problem-solving. Of course, not every problem yields to this approach, but it is a powerful feature of our basic cognitive tool set.
Einstein wrote that his early success in formulating his idea of special relativity was the outcome of his intuition about the physical phenomena he studied, such as the properties of light. Later on, the mathematical abstractions became more powerful, and at that point intuition about the "physical" nature of phenomena was insufficient for understanding.
But I think there are forms of intuition that apply to very abstract ideas, or what seem to be so to us. I once heard a physicist say "we never really understand higher mathematics, we just get used to it". Feynman would probably have agreed with that sentiment.
"Getting used to it" is really the equivalent of developing an intuition about the subject. I remember first learning about programming recursive functions; it was mind-boggling in the beginning. After a while, it began to "sink in", that is, it became intuitive: I no longer had trouble "seeing" how it worked. The key is familiarity; something once strange is now digestible.
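For anyone who hasn't had recursion "sink in" yet, the classic first example is factorial (a minimal sketch in Python; the function name and the choice of example are mine, not from the thread):

```python
def factorial(n):
    """n! defined recursively: the function calls itself on a smaller
    input until it reaches the base case."""
    if n <= 1:                       # base case: recursion stops here
        return 1
    return n * factorial(n - 1)      # recursive case: trust the smaller call

print(factorial(5))  # 120
```

The "intuition" that eventually develops is exactly the habit of trusting that the smaller call is correct, rather than trying to trace every level in your head.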
So there's nothing binary about intuition, it covers many forms of thought, and incorporates reasoning about emotion, having a "feel" for the problem in question. There are limits to our abilities, at the highest level it's genius, but there are no clear boundaries.
In every field there is probably a different type of intuition that is useful. In abstract math maybe 'getting used to it' very fast is very useful. In geometric math maybe being able to visualize objects in space is useful. In programming maybe seeing the run of the code in your head is useful.
Correct me if I am wrong but I believe this statement is attributed to Von Neumann. It has really stuck with me over the years, just because of how true it is.
I agree that what is important is self-consistency, precision and rigor. Intuition is for the birds. Like you have stated it is ambiguous and often deceptive. Mathematical abstractions enable us to circumvent these pitfalls.
There's a strong argument to be made here that all human reasoning is embodied. This isn't the same as the weaker argument, which would then be left struggling to explain why humans can talk about experiences we can never know, like the behavior at the surface of the sun. Instead, I mean more fundamentally that our brain is one designed to operate in our universe, and that our universe plays by many nice rules. Things decompose and move, time flows, causality dominates. It appears increasingly that all of the tools to understand our universe are within these simple forces your brain can't not build an intuition for. To fail to do so would lead to a catastrophic inability to function.
There is a lot of beauty in deriving laws of nature without relying on physical intuition. A lot of beautiful results are based purely on requiring laws to be self-consistent and seeing that only one possible law is self-consistent. For instance, check out what Scott Aaronson says about probability in quantum mechanics. While Feynman in his famous lectures just says that quantum mechanics is counterintuitive and you are not supposed to truly understand it, Scott Aaronson uses math to explain how to correct your intuition (and I am stressing, this is not just about learning the math, it is about basing your intuition on the math, not on everyday experience).
There's a good post by Terence Tao about this topic, I think it was posted here some time ago:
The above is a good accessible exposition of this perspective. Though it would help to have some acquaintance with ordinary probability as well as basic linear algebra.
Our physical intuitions are Aristotelian, not classical, mechanics (that is, they are non-Newtonian). For example, our intuitions tell us that an object set in motion eventually slows down and stops. That's Aristotelian (also termed "folk physics" or "naive physics", usually by cognitive scientists).
Most of us had to study formal physics to advance to Newtonian classical mechanics.
Quantum mechanics (QM) is completely non-intuitive, at least as far as intuition about either folk physics or Newtonian classical physics is concerned. IIRC Feynman says as much in his book "QED: The Strange Theory of Light and Matter", Chapter 28 beginning:
> "I think I can safely say that nobody understands quantum mechanics." —Richard Feynman
> "The quantum theory is not explicable in commonsense terms..."
Of course Feynman used his real-world physical intuition all the way through to his most abstract work. In one case he characterised the internal structure of the proton as being like "marbles inside a tin can." Try and write those equations!
> Galilean invariance or Galilean relativity states that the laws of motion are the same in all inertial frames. Galileo Galilei first described this principle in 1632 in his Dialogue Concerning the Two Chief World Systems using the example of a ship travelling at constant velocity, without rocking, on a smooth sea; any observer doing experiments below the deck would not be able to tell whether the ship was moving or stationary.
> the term Galilean invariance today usually refers to this principle as applied to Newtonian mechanics, that is, Newton's laws hold in all inertial frames.
This is very similar to how scientists for a long time believed in classical Newtonian mechanics, because it's a reasonable approximation of the truth at large scales and low velocities.
Actually, what you mentioned sounds like intuition to me. You didn't hand-plot the polynomial, so you were relying on intuition over rigour, which I think is what the OP's quote from Feynman referred to.
An example from Hans Freudenthal comes from a standard physics question. If there are books on top of a table, what are the forces acting on the books? Almost all students draw the downward force of gravity, but some forget about the force the table exerts upward back on the books. You can have students get on their hands and knees and put books on their back, or have them lie on their back and hold up books with their hands and arms. They 'embody' the table, in a sense. When you add a second book, you feel that you have to exert more effort (which correlates with force) to hold the books up.
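The force balance in that exercise is tiny, which is what makes it a nice bridge between embodiment and calculation. A rough sketch (the book masses are made up for illustration; g is the usual 9.81 m/s^2):

```python
G = 9.81  # gravitational acceleration, m/s^2

def normal_force(masses_kg):
    """Upward force the table exerts on a stack of books.

    The stack isn't accelerating, so the normal force must exactly
    balance the total weight of the books (Newton's third law is why
    the table pushes back at all)."""
    return sum(masses_kg) * G

one_book = normal_force([1.2])        # ~11.8 N
two_books = normal_force([1.2, 0.8])  # ~19.6 N: the extra effort you feel
```

The jump from `one_book` to `two_books` is the same thing the students feel in their arms when the second book lands on their back.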
Maybe a better example is: what is the best way to accelerate a ball horizontally from a fixed height? I.e., how can we most efficiently translate potential energy into kinetic energy in the horizontal direction?
I'm interested in process. What is the mapping algorithm that transforms problem into solution? While intuition appears to be magic, I believe that there is a very concrete process happening in our subconscious.
My personal guess is that we're transforming the problem into a format more suited for different modules of our brain to process.
For the table problem we apply 2 transformations. One is referred to as a calculation, the other as intuition.
"Calculation" involves transforming the table into symbols (mathematics) for the language/logical part of our brain to process; the other transformation involves turning the table into a sort of fuzzy 3D visualization for the imaginative/spatial part of our brain to analyze.
Both transformations yield transformed results. Symbols yield a symbolic/numeric solution, a fuzzy 3D visualization yields an equally fuzzy 3D solution.
Funny how two different processes that are both seemingly systematic are called different things. Why is one called intuition and the other not?
The model itself cannot explain how it arrived at its conclusions; that could only be done with the training data, which is long gone.
Similarly to machine learning, humans can develop intuition about things by learning and training in the subject (that's why I believe rote learning is actually quite useful).
Just like with intuition, it crucially depends on (and varies with) the input data (experience) and there can be different models, but successful models (those that give good results on training data) are quite similar in appearance.
Of course, the big disadvantage of intuition is that you cannot explain it to others, even if it works. They have to believe that you are an expert and made correct judgements (that you have the correct model). That's why science (and especially mathematics) has tried to formalize the process, so that people could double-check the reasoning and wouldn't have to rely on expert authority. That's why the two processes have different names, I think.
Kahneman and Tversky spent a good deal of time trying to identify the major heuristics employed (and biases caused) by the more "intuitive" of the two systems used for thinking.
(You may note that at the time this paper was published there was no concept of Dual Processes and the phrases "System 1/2" are never used.)
I've come to think of it more as "familiarity", though: if you're familiar with writing stable and maintainable systems, you'll get the right smells. But some people have opposite feelings about systems, and they seem to be people who have had the opposite type of experience as well.
Other types of intuition I think are based mostly on pattern recognition. If you have a system that makes a set of choices that follows a well known pattern to you, your brain can start predicting what has been done where, and with enough confirmation, can start feeling confident about the behavior of other parts not yet scrutinized. Once again, if things are done in an unfamiliar way, all that evaporates and one needs to fall back to looking at the low level strategies and techniques.
The other issue is 'allowing' oneself to imagine and visualize these things (math/physics equations, software). I don't think the point is to restrict yourself to things for which you have physical intuition; it's more that it's easier to go from physical experience, to intuition about physics phenomena closely related to physical experience, to intuition about math/physics/software that has no counterpart in our physical experience as humans, e.g. quantum mechanics.
Now, software just isn't like that. Managing complexity requires a completely different intuition from exploring a simple space.
You need to learn arithmetic before you learn algebra.
Data structures and algorithms are made to solve real problems. You don't have to be able to code up those problems to understand them and see how the algorithm works. Some of the cleverest theoretical CS guys that I know freely admit that they're bad programmers and couldn't implement the algorithms that they describe in their papers.
Sometimes we learn things in one domain that could be easily applicable in others, yet since we've never practiced said things in another context, it's hard for us to make this connection. Often, all we need is a little nudge in the right direction, which could open to us a new world of insights that we didn't have before.
Never told anyone--out of fear it could happen again.
Similarly, it is often the case that new fields of mathematics arise from someone defining a new concept that was previously imprecise. Once you have the right language to discuss something, discovering its properties becomes much more straightforward.
For physicists the math becomes a language and a safety net and a set of heuristics that let us simplify the problem to the point of being able to reason about it effectively. A great deal of what we use math for amounts to book-keeping. The human imagination is as capable of dreaming up impossibilities as it is incapable of dreaming up the way the universe actually is, and math helps us avoid doing the former while we use systematic observation, controlled experiment and Bayesian inference to figure out the latter.
Because "thinking about the mathematical representation of physical reality" is such a profoundly unnatural, unintuitive act, and because the math is so strict and simple, it is very hard for us to use it to imagine impossibilities, whereas if you have a conversation with a layperson you will find they almost instantly run off the rails into nonsense because they don't have the math to keep them on track. So laypeople believe in perpetual motion machines and the like with surprising ease, because they "just make sense" to their intuition (which maps pretty well to Aristotle's physics).
The student in the example is probably crazy or fake.