Hacker News

Because they are written by mathematicians. In my case, once I have learned a mathematical topic, the intuition becomes obvious and the derivations/proofs seem much more important for gaining a complete understanding. I have gone up against texts in complete bewilderment, only to come back after gaining the intuition and find the extensions of the core premises and proofs provided by the text to be highly enlightening.

Great math teachers understand the need to teach intuition. He wasn't a math teacher, but I think Richard Feynman is the pinnacle of this. See [1] for how he expresses intuition about physics, and his Red Books [2] for how he teaches mathematical physics with all the qualities I believe make a great maths text for students.

There's also a linear algebra MOOC which teaches great intuition before delving into proofs and heavy detail [3]. I mention these examples because they are exemplars of this idea of teaching intuition.

[1] https://www.youtube.com/watch?v=4zZbX_9ru9U

[2] https://www.amazon.com/Feynman-Lectures-Physics-Vol-Mechanic...

[3] https://www.edx.org/course/linear-algebra-foundations-fronti...




> Richard Feynman is the pinnacle of this

There really needs to be a version of the Feynman lectures for mathematics.

Although, this is what Arnol'd has to say [1]:

"Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap... In the middle of the twentieth century it was attempted to divide physics and mathematics. The consequences turned out to be catastrophic. Whole generations of mathematicians grew up without knowing half of their science and, of course, in total ignorance of any other sciences."

[1] https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html


> There really needs to be a version of the Feynman lectures for mathematics.

I asked the question on math.SE:

https://math.stackexchange.com/questions/62190/mathematical-...


Feynman learned mathematics from a series of self-teaching books published in the 1940s, suffixed "...for the Practical Man" and prefixed with Arithmetic, Algebra, and Calculus. I have the full set, and this is a rather good solution to the problem. They teach you insight and how to think about things as well as the mechanical aspects. This is IMHO a well-solved problem if you don't mind skipping more modern abstractions such as limits.

From there he was given a calculus book, the title of which I cannot remember. I never got that far.

I suspect you have to at least follow the same path to have the same intuition.


I sometimes get the feeling that we have taken a huge step backwards in math books over the past 50 years. Back when I was in college studying multivariate calculus, I happened to find a small, ~100-page book from the '50s in a used book store, called something like "Introduction to Multivariate Calculus". This tiny book not only covered basically the whole curriculum of my course, but did it with much greater clarity than the 500+ page book that was our textbook. I can basically thank that book for my passing the course. I find that, on the whole, introductory math books especially have gotten harder to follow, less clear, and a lot longer over the past few decades.


Completely agree.

I've taken the liberty of taking a quick snap of a random page in "Arithmetic for the Practical Man" to include below for those poor people poisoned by modern textbooks:

http://i.imgur.com/Bg9OiiK.jpg (926KiB)

I see horrible modern behemoths of over 1,000 pages that leave you dazed, confused, and full of facts with nowhere to go with them. EE textbooks are even worse on this front than your average mathematics textbook. I've seen one proudly promoting over 1500 pages and 1000 illustrations that doesn't even get as far as an op-amp or discuss anything at the system level.


Bought the series as well; love it. It's my daughter's favorite math series. One interesting thing I noticed in this regard is that textbooks from the 1930s to the 1960s have far more textual description. They seem to spend more time looking at the problem or concept in a literary way, which might have helped build a better understanding in the student.


Nice, simple, intuitive proofs. Very cool.


I think it's a bit misleading to say he learned math from those books. He got his start there, but surely the bulk of his mathematical knowledge was more advanced. However, it's quite possible that he retained the attitude from those early books. It seems to me though that he already had that attitude prior to reading the "practical man" books, and it is more that they particularly resonated with him because of it.


In searching for books like these in the past, I found "Understanding Analysis" (Stephen Abbott). I went through the first two chapters and liked it. It is written as a narrative which is both entertaining and instructive: he explains the problem, why it is relevant, ways of approaching it, etc. "It is designed to capture the intellectual imagination."

From the preface: "This book is an introductory text. The only prerequisite is a robust understanding of the results from single-variable calculus. The theorems of linear algebra are not needed, but the exposure to abstract arguments and proof writing that usually comes with this course would be a valuable asset. Complex numbers are never used.

The proofs in Understanding Analysis are written with the beginning student firmly in mind. Brevity and other stylistic concerns are postponed in favor of including a significant level of detail. Most proofs come with a generous amount of discussion about the context of the argument. What should the proof entail? Which definitions are relevant? What is the overall strategy? Whenever there is a choice, efficiency is traded for an opportunity to reinforce some previously learned technique. Especially familiar or predictable arguments are often deferred to the exercises.

The search for recurring ideas exists at the proof-writing level and also on the larger expository level. I have tried to give the course a narrative tone by picking up on the unifying themes of approximation and the transition from the finite to the infinite. Often when we ask a question in analysis the answer is “sometimes.” Can the order of a double summation be exchanged? Is term-by-term differentiation of an infinite series allowed? By focusing on this recurring pattern, each successive topic builds on the intuition of the previous one. The questions seem more natural, and a coherent story emerges from what might otherwise appear as a long list of theorems and proofs."
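The "sometimes" answer to the double-summation question has a classic counterexample that's easy to check numerically. The sketch below is my own illustration (names and cutoffs are mine, not Abbott's): an infinite array with +1 on the diagonal and -1 just below it, where summing rows first gives a different total than summing columns first.

```python
# a(m, n): +1 on the diagonal, -1 just below it, 0 elsewhere (m, n >= 0).
def a(m, n):
    if n == m:
        return 1
    if n == m - 1:
        return -1
    return 0

N = 100
# Each row and each column has at most two nonzero entries, so letting the
# inner index run well past the outer cutoff captures the full inner sum exactly.
rows_first = sum(sum(a(m, n) for n in range(2 * N)) for m in range(N))
cols_first = sum(sum(a(m, n) for m in range(2 * N)) for n in range(N))
print(rows_first, cols_first)  # 1 0
```

Every row past the first sums to 0 (its +1 and -1 cancel), but row 0 contributes 1; every column sums to 0. So the two iterated sums disagree, which is exactly why exchanging the order of summation needs a hypothesis like absolute convergence.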


That rant is a bit ridiculous to post here: the foundations of computer science are largely the result of mathematicians who weren't particularly interested in physics.


Well, for one, Turing was certainly somewhat interested in physics. From his Wikipedia page:

In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but it is possible that he managed to deduce Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.

Then there was von Neumann, and several others. If not interested, then at least well educated in physics.


The foundations of computer science are trivial from the mathematics standpoint. I would not call them "results of mathematicians".


But there can't be, because mathematics is abstract and most of its fields have no intuitive physical analogies to back them up.


I don't think it's the intuition. I think it's the part where people are explicitly and implicitly taught to avoid metaphors, since they are considered bad analogues and "window dressing on top of objective literal truths". The sad part is, Lakoff and Johnson already provided a good counter-argument to that thesis in the eighties with their landmark 'Metaphors We Live By', suggesting that metaphors are the main way humans make sense of the world, almost as if they are the fundamental intuition you refer to. Since then, the evidence for this case has only been piling up.

Especially in the field of machine learning, we're finding more hard evidence that metaphors are not decoration, but a fundamental part of how information is transferred. Using rich metaphors to pass on implicit information between teacher and student is known as "privileged information":

> When Vladimir Vapnik teaches his computers to recognize handwriting, he [harnesses] the power of “privileged information.” Passed from student to teacher, parent to child, or colleague to colleague, privileged information encodes knowledge derived from experience. That is what Vapnik was after when he asked Natalia Pavlovich, a professor of Russian poetry, to write poems describing the numbers 5 and 8, for consumption by his learning algorithms. (...) [After coming up with a simple way to "quantify" the poetry], Vapnik’s computer was able to recognize handwritten numbers with far less training than is conventionally required. A learning process that might have required 100,000 samples might now require only 300. The speedup was also independent of the style of the poetry used.

http://nautil.us/issue/6/secret-codes/teaching-me-softly

Now, of course, knowing how to come up with a good metaphor is a skill in itself, and bad metaphors do lead people astray. But they do so precisely because they are so good at transferring information - wrong information, in the case of bad metaphors.



