The top-rated answer is either defeatist, or just rationalization for the sentiment "I had to go through this and figure out everything myself, so you should too".
There is a huge amount of information encoded in the choice of exactly how to define things, and in which theorems people care about. This reflects a long process of trial and error as the field was constructed. For a famous philosophical treatise on this, using the Euler characteristic as an example, see "Proofs and Refutations" by Imre Lakatos.
Most of that foundational information is lost when it's not written down somewhere accessible; contrary to the answerer, only a small fraction is reconstructed by students as they learn the subject.
It's a huge problem, it applies to physics textbooks too, and it doesn't have to be this way. Unfortunately, the problem has been known for decades and there's not much reason to expect things to change. (Lakatos wrote the above in 1976.)
> This reflects a long process of trial-and-error as the field was constructed.
A way to think about this process: "math" isn't quite a tower of abstract concepts; those only exist in any given mathematician's head, and die with them. Instead, "math" is a name for the novel pieces of language we invent, with the purpose of using them to talk about and (hopefully) share those internal abstract concepts, exposing to the rest of the world concepts that were entirely inaccessible (to everyone but the originator) before that language was introduced. Math is the attempt to communicate never-before-communicated epiphanies; to describe the shapes of never-before-pondered abstractions, with properties only describable using never-before-spoken words.
One thing this implies is that the first piece of language that lets us even vaguely point at some particular idea, so that we can get to work on analyzing it further, might stick around long after we come up with clearer or more coherent language, because the former now has the momentum of historical use behind it.
Math, when done this way, then becomes a precarious pile of "just good enough to survive" conceptualizations, rather than a precise tower of "best-tool-for-the-job" conceptualizations. And that's no good for teaching.
> "math" is a name for the novel pieces of language ...
I'm keen on etymologies, and hence believe that math means the art of problem-solving, or of learning successfully. That includes language as a problem domain, and as a teaching tool for learning, but as modern developments would have it, it's about structure and organization, not just in language.
Sure, literally, yes, the word "mathematics" is not an -ology; it isn't a discipline concerned chiefly with carving up reality and giving words to the results.
I was just trying to highlight the fact that mathematics as an institution is a process of building up share-able symbolic abstractions; of inventing a "language" one new word at a time.
I'm using "language" here, and above, to refer not to the words used to discuss mathematics (the... "mathematology" of math), but rather to the thing that includes objects like mathematical operators (e.g. "+", "⨯", "∫", "⇒") as its "words." Not the language about mathematics, but rather the language that is mathematics: the ever-growing set of abstract tools with symbolic handles which we've constructed to allow us to manipulate other concepts inside our heads, in rigorous ways where you can trust that if you and another mathematician do the same named mental 'move' to the same source concept, you will both arrive at the same destination concept as a result.
For a cute analogy: you can think of a martial art as a vocabulary of known, precise body movements, that can be taught. You can think of mathematics as a vocabulary of known, precise mental movements, that can be taught. Yes, this makes mathematics an art; but, equivalently, this makes a martial art a language.
To sum up:
• Mathematics is itself a (formal) language. It doesn't really fit in the category of words like "biology"; it fits more in with words like "logic" or "C++".
• To say that someone is "doing" Mathematics just means that they are using that language to achieve a goal; it's about the same as saying that someone is "doing" Python.
• To say someone is a mathematician, is to say that that person works to explore and extend the language of Mathematics, to test its properties and its limits, and to invent new 'words' within it that may then be used by those "doing" Mathematics.
Yes, I might have missed that you used "language" as an allegory rather than a metaphor. I find it ironic how this reaffirms the stereotype of the divide between language and mathematics, i.e. mathematicians being bad at language and the other way around, when really the combination of the two creates a synergistic effect that helps each surpass its effectiveness on its own.
I'm not so sure. In most fields, we certainly discover new things (concepts, relationships, laws). But we can usually describe those new things by analogy to existing things, because—given enough abstraction and analogy—they have recognizable "shapes." There are existing mental tools in our civilization that fit the concepts, and let us toss them around and look at them from all the angles.
Math is where we invent language to refer to new forms of abstraction themselves: novel possible shapes for our thoughts to take when we think about other things. You can't talk about a new shape by analogy to existing shapes. Nor can you abstract an abstraction in a way that gives you anything more familiar. (Instead, you'll usually get more novel, ontologically-primitive abstractions, like going from monads to arrows, or going from numbers to fields to rings.)
Sometimes disciplines like physics will find a concept that we don't have any mental tools in our toolkit for yet. Then we build some. But we still refer to the process of doing that as doing mathematics—and then we apply that new mathematics back in the problem domain to talk about the new concept.
(For a good example: the formalization of quantum theory in physics required the creation of infinite-dimensional analysis in mathematics. Physicists did most of that work, but the work itself was still mathematics, not physics.)
---
Now, other fields do still have a similar problem to mathematics, of historical momentum carrying forward old "things" (again: concepts, relationships, laws) when there are better, clearer "things" that could be used in their place. But when we're not working with pure abstractions—ways of thinking—we can make the effort to compare and contrast old and new "things", and decide that some might be more edifying than others.
It is possible for an especially-gifted physics teacher to write a very accessible physics textbook, because they need only pick all the clearest "things" to demonstrate. That teacher will still be stuck in a given paradigm—a way of thinking, a belief in the worth of some "things" over others, popular in the culture of their discipline at the time and place they worked. But they might be able to (barely) rise above it, if they think hard about the history of their discipline and the paradigms it has gone through in the past, compare-and-contrast those, and synthesize something that isn't quite just the paradigm they're immersed in.
Mathematics is uniquely problematic because it is entirely paradigm. It is a tree of paradigms—each new abstraction only making sense assuming the paradigm it was created in, and then becoming the paradigm for further abstractions still. Every mathematician, all the time, is trying to discover what a particular paradigm—their specialization—can be twisted to accomplish. Not one of us has the brain power to know the total space of things that one of these ways of thinking can be used to express—the problem domains the tool is applicable to—in order to know which tools show more or less promise at being "powerful." We know what we've discovered so far, but we have such an infinitesimal idea of the "space of all possible abstractions" that we could be totally missing some of the best, and using ones that are barely satisfactory.
Point a hypercomputer AI at "solving physics", and it'll spit out a description of the universe that will certainly have more "things" in it than we know about today—but which will still also contain a subset of the "things" we do have. (The ones that most "carve nature at its joints," presumably.) Those "things" that get carried over will, of course, be defined much more precisely; their concept-boundaries will be adjusted to include or exclude edge-cases we weren't aware of. But they'll be the same "things."
Point a hypercomputer AI at "solving mathematics"—of giving us the most powerful abstractions possible to solve problems with—and the result might be entirely unlike anything we've invented so far. It might not share any resemblance to current mathematics, beyond a shared fondness for sets. (For a silly hypothetical: it might turn out that there's a better abstraction for thinking about relationships between things than "functions", and that all the rest of mathematics and physics and whatever other problem domains simplify greatly if we think about the relationships in them in terms of that abstraction, instead of trying to specify relationships using the "function" abstraction.)
> Most of that foundational information is lost when it's not written down somewhere accessible; contrary to the answerer, only a small fraction is reconstructed by students as they learn the subject.
From "A Mathematician's Apology", G. H. Hardy:
> Statesmen despise publicists, painters despise art-critics, and physiologists, physicists, or mathematicians have usually similar feelings: there is no scorn more profound, or on the whole more justifiable, than that of the men who make for the men who explain.
From another angle: when developing new theories or models, your thoughts are all over the place and frankly it's boring to go over your own crappy notes afterwards and try to reconstruct them in a way that others can understand. And much of the time you forget exactly what happened along the way as well, so any story you reconstruct is going to have some hindsight bias, which defeats the purpose of trying to "teach the story".
Really, from the first answer:
> Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle".
This is the only way to "properly" learn mathematics or science. Anything else is only making you think you've learnt something.
>> Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle".
> This is the only way to "properly" learn mathematics or science. Anything else is only making you think you've learnt something.
My point is not just that most textbooks make it too hard/inefficient to learn this stuff. My point is that most of it is never learned. The Legendre transform connects Lagrangian and Hamiltonian mechanics, the two fundamental formulations of both quantum and classical physics, and yet most physicists cannot tell you why the transform is defined as it is. The reason is that they don't take seriously the possibility that we'll find non-Lagrangian phenomena, and so they have never been forced to consider what observational and theoretical evidence led to its identification in the first place.
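For the curious, here's a sketch of one standard motivation for the definition (not claiming this is the historical route):

```latex
% Define the conjugate momentum and the Hamiltonian by
\[
  p := \frac{\partial L}{\partial \dot q}, \qquad
  H(q,p) := p\,\dot q - L(q,\dot q).
\]
% The p\dot q term is exactly what's needed to cancel the \dot q-dependence:
\[
  dH = \dot q\,dp + p\,d\dot q
       - \frac{\partial L}{\partial q}\,dq
       - \frac{\partial L}{\partial \dot q}\,d\dot q
     = \dot q\,dp - \frac{\partial L}{\partial q}\,dq,
\]
% so H is a function of (q,p) alone, and Hamilton's equations
% \dot q = \partial H/\partial p, \quad \dot p = -\partial H/\partial q
% follow directly from the Euler--Lagrange equation \dot p = \partial L/\partial q.
```

Of course, this tells you the definition is the unique one that works, not what evidence first forced it—which is the point.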
I learned math through guided struggle. My high school had a kind of macho attitude about it; we were all about math competitions, pushing through university-level stuff, etc. Today I feel that was a good way to study, but much of the struggle could've been avoided. You must solve problems to progress, but they don't have to be hard problems. They just need to be formulated at level n but require solutions at level n+1. Devising such problems is hard and many teachers don't bother; instead they give you definitions at level n+1 right away and make you solve problems about those. That's the root of the problem IMO.
> mathematicians have usually similar feelings: there is no scorn more profound, or on the whole more justifiable, than that of the men who make for the men who explain.
Yes, Hardy was a great mathematician and he did say this --- but most mathematicians have tremendous respect for peers who strive for clear exposition in their lectures, their papers, and (if they write them) their books.
I am a professional mathematician, and Hardy's attitude is one I have never heard expressed by any of my peers.
"A Mathematician's Apology" is a fascinating read, but his description of mathematicians' attitudes is certainly not accurate today.
It indeed can be boring to reconstruct your thoughts in a way so that others can understand -- but many of us make the effort anyway, and doing so often leads to new insights.
The full paragraph (in fact, the very first paragraph of the essay) reads:
> It is a melancholy experience for a professional mathematician to find himself writing about mathematics. The function of a mathematician is to do something, to prove new theorems, to add to mathematics, and not to talk about what he or other mathematicians have done. Statesmen despise publicists, painters despise art-critics, and physiologists, physicists, or mathematicians have usually similar feelings: there is no scorn more profound, or on the whole more justifiable, than that of the men who make for the men who explain. Exposition, criticism, appreciation, is work for second-rate minds.
I interpreted "men who explain" not as mathematicians who can explain their work well, but as people who try to explain mathematics in a "lay" way to cater to a large audience. Such explanations can often become inaccurate, non-mathematical, or downright false, yet their authors still get public credit for seeming to know the field well, despite the inaccuracies, and even though they are not directly pushing the advancement of the field itself.
It's of course a good thing to try to reconstruct your own thoughts, but I wouldn't say it's unreasonable for a mathematician to omit doing that. Could you go into more detail on the examples you mention, where doing so led to new insights?
> Could you go into more detail on the examples you mention, where doing so led to new insights?
Good question. It's a bit hard to do so (especially without going into mind-numbing technical detail) -- in math you never quite know where insights really come from. "Chance favors the prepared mind."
But generally speaking, I would say that good exposition gets you thinking about: Why does the technique work? What is the key insight? What are its limitations? And if you think about such questions, you naturally get a better sense of which other questions your techniques are also likely to work on.
How about a halfway step here: I agree that guided struggle is probably the most likely path to learning, but giving the student a concrete context they can apply does wonders. Back when I was first learning calc, I had a professor notice I was also writing game software; at the time I was making a "Missile Command" clone. He pointed out that I could use calc to create "guided missiles", and more calc for various other things I was writing solvers for. The light dawned in my head, and suddenly just the realization of the concrete application caused several calc concepts I'd been struggling with to evaporate. Additionally, that created a reason, a justification for interest and context, and math almost instantly became a peer to my interest in software, practically erasing any conceptual difference between them in my head. I cite that memory as the moment I became a mathematician.
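For concreteness, here's a toy sketch of the kind of thing he meant (hypothetical code, not what I actually wrote back then): a pursuit curve, where each frame the missile's velocity vector points straight at the target, and integrating that rule step by step traces out the missile's path.

```python
import math

def pursue(missile, target, speed, dt):
    """One Euler step of a simple pursuit curve: the missile's
    velocity always points straight at the target."""
    mx, my = missile
    tx, ty = target
    dx, dy = tx - mx, ty - my
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return missile
    # derivative of position = speed * (unit vector toward the target)
    return (mx + speed * (dx / dist) * dt,
            my + speed * (dy / dist) * dt)

# A stationary target 5 units away; at speed 1.0 with dt = 0.1,
# the missile closes the gap by 0.1 per step.
pos = (0.0, 0.0)
for _ in range(60):
    pos = pursue(pos, (3.0, 4.0), speed=1.0, dt=0.1)
```

A moving target makes the path curve, which is where the calculus stops being hidden and starts being the point.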
>> Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle".
>This is the only way to "properly" learn mathematics or science. Anything else is only making you think you've learnt something.
I agree with this statement, but I think it misses the point entirely. Guided struggle is indeed necessary, but learning a theorem without seeing the impetus for its discovery is like learning to play an instrument without understanding that the intent is to make music. Yes, with enough time and struggle, you might be able to go through the motions and play some scales, but most people don't learn to play instruments that way; they learn a simple song or two first, then go back and start on the scales and music theory.
Math textbooks sometimes try to do the same thing, but it seems like they always come up with the most inane and pointless exercises.
All of this is just to say: learning the why of math can help someone learn the how.
No, definitely not, and I agree with the other sibling replies. I was more specifically responding to the part that was dismissive of the "so you should too" point.
Certainly, we can and do develop newer and simpler ways of understanding previous theories. And teaching the historical sequence of events can help with understanding; I myself experienced that with [1] for modern analysis. However, these understanding-aids don't teach you how to do mathematics, and only marginally improve your ability to apply those models and theories to existing real-world problems. To improve your ability to do mathematics, active exercises are necessary. Really, it's the same with many other fields: you don't get to be a good musician merely by reading about music and music theory.
I assumed they used the term "struggle" poetically, it certainly doesn't have to be unpleasant. But you have to put in some active mental exploratory effort. I found this post [2] a good summary of the skill set. But it's very abstract and likely won't make much sense unless you've been through the experience yourself.
These understanding-aids are also sometimes unnecessary: if you've done enough of the right kinds of exercises, the exercises themselves become an aid to understanding. For example, I came to understand category theory better not by learning how the theory was developed historically, but by writing lots and lots of similar programs, and having a natural tendency to syntactically (and without much thought) refactor my code to be less repetitive, eventually leading me to various "category theory aimed at programmers" blog posts and papers. This one [3] of course deserves a mention, but there are many more.
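(As a toy illustration of that refactoring path—hypothetical code, not from the posts mentioned above: several near-identical loops collapse into one generic traversal, and the "category theory" content is just the two laws the refactor silently relies on.)

```python
# Before: near-identical functions, each re-implementing
# "walk the list, apply something to each element".
def double_all(xs):
    return [x * 2 for x in xs]

def shout_all(words):
    return [w.upper() for w in words]

# After the mechanical refactor: one traversal, parameterized by the
# per-element function. This is the functor pattern; the refactor is
# only safe because fmap obeys the identity and composition laws.
def fmap(f, xs):
    return [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

xs = [1, 2, 3]
assert fmap(lambda x: x, xs) == xs                      # identity law
f = lambda x: x + 1
g = lambda x: x * 10
assert fmap(compose(g, f), xs) == fmap(g, fmap(f, xs))  # composition law
```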
To further emphasise this point, very brilliant mathematicians can just "pick up" models and concepts and work creatively and productively on them, without needing these aids.
My other point was that the understanding-aids are very rarely what actually happened in the heads of the people who developed a theory. Even historical narratives have distortions, and they are rarely detailed or precise enough to describe the rejected options, nor why they were options in the first place. (This fact is also why they are not useful for teaching how to do mathematics.) There are exceptions, but reconstructing them is a boring process with little reward, especially since new developments 10 years later might explain it in even simpler terms.
That said, I would disagree with this part (from the top answer to the OP):
> a) The goal is to learn how to do mathematics, not to "know" it.
Modern mathematics has so much damn material these days that it's impossible to learn everything you need in order to solve modern-level problems, merely by teaching yourself all models and all theories "the hard way". Understanding-aids are certainly needed, and I use them very often myself, and I certainly prefer resources that teach using good analogies, proper context, descriptions of the motivations behind a theory, step-by-step "n/n+1" exercises, and everything else that other people mentioned here.
> It's a huge problem, it applies to physics textbooks too, and it doesn't have to be this way.
I've noticed this too. I've tried to learn some quantum physics for fun, and it seems to me that most textbooks have surprisingly little description of the actual physical experiments behind the physics. Maybe this is something they usually cover more in lectures?
Like, I haven't found many details about the actual physical experiments around the spin of electrons (and certainly not of other particles). Sometimes there's an abstract description of a basic experimental setup.
I read just a few days ago that Feynman would redo old experiments that other physicists weren't doing anymore because they were established science. This is a great learning experience if you have access to the equipment, but most people don't. There should be books that describe how physics was developed, experiment by experiment, with lots of detail.
People severely underestimate how much detail goes into science.
If you read a 1000-page textbook on experimental physics, that's just the CliffsNotes. The very highest-level "here are the results of 400 years of serious research". There can't be any detail.
If you want the details on the Stern-Gerlach experiment, for example, read the research papers. They will contain all the detail necessary to reproduce the experiment.
Lectures are just like those books. They just teach a very broad overview of what has been discovered.
The books you think should exist do exist. They're just not written for casuals.
As someone who works on the foundations of quantum mechanics (how to formulate it, why we think it's right, etc.), I'll disagree with this. The fact that much of the detail from many experiments is available somewhere is not much more useful than being told that code is available from the authors if only you email them and wait 3 months. The barriers to figuring this stuff out are immense.
Furthermore, it's not necessary to communicate the actual, usually circuitous, route taken historically by scientists. Instead, you could just describe the series of hypothetical simplified experiments whose results would lead one to quantum mechanics and to rejecting the alternatives. This is never attempted in a serious way (trivial uses of the Stern-Gerlach experiment as a model of quantum mechanics notwithstanding).
Dirac derives quantum physics from first principles, including the relevant thought experiments, in one of his books. That was good enough for me; it made a lot of sense.
Dirac's book is a perfect example of what I'm talking about. Just think about the fact that this book from 1958, which makes a valiant attempt at justifying a few parts of the formalism (it's on my bookshelf), is basically a high-water mark even though it's very flawed, especially given what we know today. What does it tell us about the ability of physicists to transmit ideas to the next generation with high fidelity if, to understand why the quantum formalism is the way it is, you recommend reading a 60-year-old book by a guy who was alive when it was being formulated!
Some of those flaws:
(1) Dirac postulates, as most people do, that measurable observables are to be identified with Hermitian operators. This is a mistake that can be traced back to von Neumann. In reality, the larger class of normal operators is perfectly fine for measurable observables. Indeed, a measurement is properly associated with only an orthonormal basis, and it is completely unnecessary to label the outcomes with eigenvalues, real or otherwise. (To see this, just observe that there is no physical difference between experiments that measure x and x^3 for the position of a particle.) Dirac's discussion on page 35 is just wrong.
(2) Unless my memory fails me, Dirac gives little to no justification for why we use tensor products to build up the state space of a many-body system from the state space of a single-body system, a tectonic shift from classical mechanics at the heart of the weirdness and power of quantum mechanics.
(3) Dirac's book was written before Bell's inequality. I mean, just look at the pithy discussion by Dirac (p. 4-7) to justify fundamental indeterminacy, one of the most profound things we know about the universe. Do you think this would have convinced Newton? Or Dirac in 1925? (We know it didn't convince Einstein.) This sort of thought experiment is lovely for an article in Scientific American giving laymen a sense of where things come from, but it's nowhere near the rigor with which we should teach physicists.
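(To spell out the x vs. x^3 point in (1), here's a sketch of the standard two-line calculation:)

```latex
% If X has spectral decomposition
\[
  X = \sum_i x_i \,\lvert i\rangle\langle i\rvert,
  \qquad\text{then}\qquad
  X^3 = \sum_i x_i^3 \,\lvert i\rangle\langle i\rvert,
\]
% with the same eigenprojectors, so both "measurements" yield identical
% outcome probabilities for any state \psi:
\[
  p_i = \lvert\langle i \mid \psi \rangle\rvert^2 .
\]
% Only the numerical labels attached to the outcomes (x_i vs. x_i^3)
% differ. The same reasoning accepts any normal operator
% N = \sum_i \lambda_i \lvert i\rangle\langle i\rvert with complex
% eigenvalues: the measurement is fixed by the orthonormal basis alone.
```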
I really wish I could find a book (or series) that started with explaining the (non-)results of the Michelson-Morley experiment, and traced through the major experiments that underpin modern theory. Just picking out the major quintessential experiment (types) -- things like the Bell test.
Much of science is really just subtle modifications or tests of major theories, but I feel like you could write up, say, 12 keystone experiments in 100-page summaries and publish them in 2-3 volumes (e.g., QM and relativity volumes).
The problem I have with only presenting "polished" results is that we lose the context of our modeling. E.g., QM seems to have included non-determinism in an effort to preserve locality, but locality couldn't be preserved in the face of other results, so is non-determinism merely an extra assumption included for legacy reasons (i.e., technical debt, because no one wants to clean up the model and people forgot why we included it)?
You'd never even think to ask that question if you only saw the cleaned up model of QM divorced from its (philosophical) roots.
Research papers can all be read by "people in the field".
It's just that someone who is doing environmental physics doesn't necessarily understand papers on string theory. Science is specialized, yeah. If you want to understand research papers, get an education.
Or just do something else. Cutting-edge research is not for everyone; not everyone is supposed to understand it. What's next? People demanding research papers be dumbed down to adhere to the Freedom of Information Act?
Science is hard. Big deal. You can't have all the things at the same time. Read the Hawking books written for casuals if you want a cute little story about science.
> Research papers can all be read by "people in the field".
Not necessarily. The classic example is Shinichi Mochizuki's work.
He's done some incredible work but he's basically invented his entire field out of thin air, he doesn't publish frequently, and the papers he does publish are essentially impenetrable.
He likes it this way and doesn't want to "dumb down" his work for the mere mortals who need to review it. That's essentially the end state of your argument - if you can't comprehend it you're not "in his field" and you're obviously not qualified to discuss it. And he's certainly not going to waste his time teaching some dunces the basics of his field.
For the casual reader, the "bad writing" aspect of the difficulty is severely outweighed by the "lack of domain knowledge" aspect, by several orders of magnitude. If you take a random modern research paper, and put it through a handful of editors to make it superbly written in a clear, convincing language that gets its points across very easily, the casual reader uneducated in the background knowledge will barely be able to tell the difference.
A large part of any scientist's job is communicating results. The people you criticize may not be novelists, but they are certainly professionals when it comes to communicating technical knowledge.
However, this knowledge is only communicated to other experts in the same - often very narrow - subfield. Often the definitions that give you a hard time when reading a paper have been refined over several years and are basically known to all other people working in the same field.
This is not ideal, but there is simply not enough manpower to produce good and generally accessible summaries of current research topics every few months...
> The people you criticize may not be novelists, but they are certainly professionals when it comes to communicating technical knowledge.
No, that's the point I'm trying to make. They do not have enough training in making themselves understood. Hell, a good portion of them don't even have the language in which they are writing as their primary language.
Plus, it seems that authors sometimes feel compelled to appear smarter by leaving out the intuition and motivation behind a result, and in particular the easier and more intuitive simple cases (which might have been the inspiration for the result in the first place), and instead present only the most abstract & most general version of the result that they managed to prove.
Including some of the enlightening historical path towards that result is not "dumbing things down".
Show me one paper that you can't understand because of the grammar, but would be perfectly fine reading if only the lyrical style was a bit more up your alley.
Effective communication is about far more than whether the reader/viewer can eventually understand the material. Even basic presentation skills can make a huge difference both to the speed at which someone can receive and understand new information and to how well they will retain that information later.
Unfortunately, many academics receive little if any training in good presentation before being expected to lecture or write at undergraduate or graduate level, and while some are naturally gifted presenters anyway, most inevitably are not. Consequently, many career academics have no idea how poor their presentation skills are, how ineffective their presentation is as a direct result, or how much better they could be. They just get stuck at a very low level, but without the kind of introductory/remedial training that would be given to someone whose career involved presentation skills in the professional world. And of course if anyone with broader experience dares to suggest that there might be room for improvement, the instinctive reaction is denial.
A little irony is that some of the most engaging and informative presenters I have ever seen or read come from that same community, but despite the emphasis on peer review in their research work, when it comes to soft skills the weaker presenters typically have no idea how bad they are and therefore make no attempt to learn from their stronger colleagues and improve.
Unfortunately, as your own choice of word "exceptional" implies, most are not so lucky.
I once sat in a review meeting at the end of a year with members of the faculty responsible for teaching collecting feedback from many of the undergraduate students. When challenged about the poor quality of many lectures, the response was essentially that they can't make the lecturers go and learn how to lecture competently because the lecturers wouldn't stand for it.
Try that attitude in the professional world and you'd be in a remedial programme on your way to getting fired.
Fortunately, I expect that with the advances in modern technology and changes in modern careers, the old-school universities that think a famous name and charging high fees mean they can get away with anything will soon be obsolete, and so will the incompetent parts of the academic community sheltering within them. They will need to find new ways to offer dramatically more value than interested people can find on their own with all the modern resources we have available, or they won't be able to justify people taking several years out of their lives and paying a fortune in fees to attend any more.
> I've tried to learn some quantum physics for fun, and it seems to me that most textbooks have surprisingly little description of the actual physical experiments behind the physics.
Try the new free course "Quantum Mechanics for Everyone" on edX, which despite the title is certainly not dumbed down at all.
Look for books on experimental physics! Physics bachelors measure many natural constants in practice labs. Many compendium-style textbooks will have rich sources. I'd look for ones from the time of the discovery to get the gory details, which many textbooks avoid in order to achieve didactic reduction.
> Like, I haven't found much details about the actual physical experiments around the spin of electrons (and certainly not of other particles).
The Stern-Gerlach experiment (1922) is what you're looking for. "Modern Quantum Mechanics" by Sakurai opens with an explanation of this experiment and its consequences.
I can remember feeling really intimidated by the proofs of theorems. What I now think is that at least some proofs are analogous to code you struggled to get working. Later, once you got the thing done, you realised that there were steps that could be eliminated and still others that could be cleverly combined. Then later again, when you wanted to show it off, you tidied it up some more, and what you then presented to others was much more elegant but perhaps harder to understand.
It's becoming more obvious to me with time that math and its corresponding proofs are very similar in concept to our code and unit tests. Theorems like our software are built from rudimentary building blocks, each needing to be understood in order to build the software and then the tests are the proofs that each of the building blocks are correct...
What's missing for a lot of people is "What does this theorem apply to? Why does it apply? How do I break this theorem down into its component pieces to understand it?"
Indeed, if your math text started off with a real-world thing you're trying to figure out, then broke it down into a series of paradigms the student understands, applied the mathematical components relevant to each of those paradigms, and finally combined them all into the final theorem, people would get it far more easily and be much less intimidated by it. Then, over time, students would spot paradigms that combine more elegantly into the final theorem and understand how to substitute them.
I think Math is being taught backwards in schools, and this is why so many people are intimidated by and shy away from math... and I was a student who both struggled with and enjoyed Math... and I still struggle with it because we were taught to think about it backwards.
We started out well with the basic building blocks, but somewhere between basic algebra and calculus, this all got turned around ass-backwards. Where does the motivation to understand derivatives and integration come from if one doesn't understand the implications of their application?
That's a somewhat naive way to put it. Math is not code; it's not programming. You can code a Python script without knowing how an operating system works. You cannot understand or use a theorem without knowing the definitions involved. I can tell people, for example, that the concepts of derivatives and integration come in very handy when solving differential equations that model real-world phenomena (for example, fluid motion). But how can I explain what a differential equation is in a manner that's useful to someone who doesn't know what a derivative is? Math, for better or worse, tends not to have direct applications in its basic stages (much like other subjects at that level), and you need to advance before getting to a point where you learn something you really need and then see the usefulness of it all. It's like everything: you don't know what you need and how strongly you need it (despite how many times people tell you that you need it) until you really have that necessity.
Starting off with the general concept and breaking it down into paradigms is not always possible. For example, say you want to teach people Stokes' theorem for integration on manifolds [1]. It is the generalization of the divergence, Kelvin-Stokes and Green theorems, which are very useful in electromagnetism, fluid mechanics and probably more. The theorem says, broadly, that if you are measuring some quantity in a certain space, you only need to know how it "accumulates" on the boundaries of that space. But that will not be useful to an engineer or physicist. To really understand that theorem and break it down you need an entire course on differential geometry, with prerequisites in topology and calculus. Therefore, in the analysis courses for engineers/physicists the most likely situation is that either they present you the theorem and tell you how to perform the required operations on differential forms (which, if you don't know what they are, is a complete mystery/dark magic), or they go to the specific theorems (Gauss, Kelvin-Stokes, Green), which seem clunky and completely magic again.
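For reference, the theorem described above can be stated in one line; here $M$ is a compact oriented $n$-manifold with boundary and $\omega$ an $(n-1)$-form (standard notation, not anything from the thread):

```latex
% Generalized Stokes' theorem: what a form "does" inside M is determined
% by how it accumulates on the boundary \partial M.
\int_M \mathrm{d}\omega = \int_{\partial M} \omega
% Choosing M and \omega appropriately recovers the classical results,
% e.g. the divergence (Gauss) theorem in R^3:
\iiint_V \nabla \cdot \mathbf{F} \,\mathrm{d}V
  = \iint_{\partial V} \mathbf{F} \cdot \mathrm{d}\mathbf{S}
```

The one-line form is exactly the "accumulates on the boundary" statement; the unpacking of $\mathrm{d}\omega$ into divergences and curls is where the differential-geometry course comes in.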
The summary is that to be able to know what a theorem applies to, why it applies, and how to break it down, you literally need to study mathematics: that's what mathematics is. And the way to study it is always backwards; it's not like physics or programming, where they can show you the high-level stuff, you can see what's happening, and then you start to understand it. In math, most of the time, if you learn some subject starting from the high-level stuff, it is difficult to even know what on earth they are talking about.
> You can code a Python script without knowing how an operating system works. You cannot understand nor use a theorem without knowing the definitions involved.
Better analogy:
You cannot code a Python script without knowing how to call the APIs you need to implement the functionality you want.
Knowing how an operating system works is more like knowing how foundational mathematics is axiomatized. Interesting and occasionally useful, but not immediately necessary for the theory of differential equations.
I think you might even be able to deal with differential equations without involving limits, by simply asserting all the necessary theorems about derivatives as axioms. This corresponds to relying on a robust library (= the theorems other mathematicians have proved) without worrying about implementation details (= the proofs of those theorems, which might be much harder to understand than the results are to use).
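The "robust library" framing above can even be made literal. Below is a toy sketch (the expression encoding is entirely hypothetical, not any real library) where the sum, product and power rules are taken as axioms, i.e. pattern-matching rules, and no limit ever appears:

```python
# Expressions are nested tuples: ("x",), ("const", c), ("+", a, b),
# ("*", a, b), ("pow", a, n). Differentiation just applies the usual
# rules as axioms; the epsilon-delta "implementation" is never consulted.

def d(expr):
    """Differentiate expr with respect to x by pattern-matching on rules."""
    op = expr[0]
    if op == "x":            # d/dx x = 1
        return ("const", 1)
    if op == "const":        # d/dx c = 0
        return ("const", 0)
    if op == "+":            # sum rule
        return ("+", d(expr[1]), d(expr[2]))
    if op == "*":            # product rule
        f, g = expr[1], expr[2]
        return ("+", ("*", d(f), g), ("*", f, d(g)))
    if op == "pow":          # power rule (chain rule included), integer n
        f, n = expr[1], expr[2]
        return ("*", ("const", n), ("*", ("pow", f, n - 1), d(f)))
    raise ValueError(f"unknown op: {op}")

def ev(expr, x):
    """Evaluate expr at a numeric value of x."""
    op = expr[0]
    if op == "x":
        return x
    if op == "const":
        return expr[1]
    if op == "+":
        return ev(expr[1], x) + ev(expr[2], x)
    if op == "*":
        return ev(expr[1], x) * ev(expr[2], x)
    if op == "pow":
        return ev(expr[1], x) ** expr[2]
    raise ValueError(f"unknown op: {op}")

# d/dx (x^2 + 3x) = 2x + 3, so at x = 5 the derivative is 13.
expr = ("+", ("pow", ("x",), 2), ("*", ("const", 3), ("x",)))
print(ev(d(expr), 5))  # prints 13
```

Whether teaching from this "API level" is wise is exactly what the reply below disputes; the sketch only shows the layering is possible.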
You might be able to teach someone how to solve differential equations mechanically, the same way you can teach someone which keys to press to code a Python script. Yes, it works, but it's specific, and the moment something is slightly different it will not work. For someone to be able to work with DEs they need to know how to work with derivatives and integration, and I think it would be worse not to introduce the formal definition of a derivative. Giving just the rules of calculation will appear magic or arbitrary, and even worse, it may lead people to wrong conclusions (such as operating on differentials as if they were fractions, because that worked with ODEs). And that's not even getting into the theory of differential equations, which usually does not get much attention in engineering (existence and uniqueness are not that worrying for an engineer; you tend to already know that a solution exists because you did an experiment/saw something/etc.).
The existence of fields that merge math and programming does not mean that "doing math" and "programming" are similar actions/similar fields. I've studied both and there is a world of difference.
> How else would the math be invented in the first place?
Usually the first step for problems to be solved is their expression in a mathematical way. See the heat equation: Newton (and probably others; history is not my strong point) already captured the core idea of heat transfer (transfer rate is proportional to the temperature difference), but it could only be really solved by Fourier once all of those ideas were put into mathematical expressions.
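For reference, the two stages described above can be written out (the symbols are the standard modern ones, not anything from the thread). Newton's proportionality idea is an ODE for a single temperature; the heat equation Fourier solved is a PDE over a temperature field:

```latex
% Newton's law of cooling: rate of heat loss proportional to the
% temperature difference with the surroundings.
\frac{\mathrm{d}T}{\mathrm{d}t} = -k\,(T - T_{\mathrm{env}})
% Fourier's heat equation, the field version of the same idea, which
% his series methods could actually solve:
\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2}
```

The jump from the first line to the second is precisely the "expression in a mathematical way" the parent is pointing at.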
The very act of putting all those ideas into a mathematical expression is in essence programming. You take a whole bunch of ideas, codify them and put them together to either explain something else or produce a product - a theory or formula. Which in essence is what programming is.
> The top-rated answer is either defeatist, or just rationalization for the sentiment "I had to go through this and figure out everything myself, so you should too".
Yes, I agree this MO answer (and several others) seem to be rationalizing the situation rather than acknowledging how suboptimal it is.
I think I'm a good example of the system's failure. Coming in to college, I was something of an ideal candidate for becoming a mathematician. I had some success in Olympiads and already had decided I wanted to study math. My goal in life was to be a math professor.
I enrolled in a top college and took many graduate-level classes. However, by my senior year, when it was time for me to decide the next step in life, grad school or industry, I had become somewhat disenchanted with theoretical math. Math was so abstract I started losing interest: all this commutative algebra (for example) I learned wasn't making me feel like I had any new insights into solving math problems, outside commutative algebra problem sets.
And so I went into industry.
However, I can't help but think that if I had more knowledge of the motivation behind all the abstract math, I wouldn't have lost interest. All that machinery of commutative algebra was invented for specific reasons, such as solving polynomial equations in the rationals through algebraic geometry. Years later, through casually reading math on the internet, I've been getting hints as to what power these highly abstract frameworks give you for solving concrete problems. But without seeing the end goal, and having some idea why I should be learning this in the first place, I felt like I was just getting lost in abstract nonsense.
>Most of that foundational information is lost when it's not written down somewhere accessible; contrary to the answerer, only a small fraction is reconstructed by students as they learn the subject.
This is exactly right. I personally failed to reconstruct enough to keep myself interested in the subject.
Highly recommended book by Lakatos. The history and meta-mathematical aspects of proof-theoretic constructs are so often neglected that one can only imagine a latent desire to attain a priestly purity in the proof-theoretic process, one that serves only to hamper true mathematical maturity in the field and in individual learning.
Or maybe, just maybe, some things really are complicated and there is no quick and easy way to learn them. You really have to struggle and hit your head several times until enlightenment comes to you. Books and teachers can only do so much; they can help you and guide you, but often there simply does not exist any magical way to convey stuff to make you instantly understand it.
While that is true, there is really no need to make it (deliberately?) harder by withholding relevant examples, applications, and historical background.
No one is deliberately withholding them out of malice. Practical applications are often convoluted and complicated, or too forced. It is like wanting to do triple flips on a trampoline before you can jump to sit on it. The sport analogy holds for historical background too - it sometimes helps or makes things more interesting, but most often it helps about as much as knowing the history of flips helps you learn flips.
Contrary to what you say, practical applications are easy to find where they are simple and helpful. They make for fun things math hobbyists like to play with, and that works nicely in terms of making kids interested in math. Unfortunately they do not lead to cutting-edge science, nor to the math needed for physics and engineering.
Please use absolute references (a link to the specific answer) instead of fuzzy references that change over time.
Currently, your answer is the top-rated, so which one did you mean? (And even if your answer was not at the top, the top will almost certainly change over time, leading to all kinds of misunderstandings.)
EDIT: It seems that this referred to the top-rated answer on MO, not HN:
This demonstrates my point even more: Please use absolute references, as fuzzy references make for misunderstandings! (And as my comment received quite a lot of upvotes in the beginning, I'm pretty sure I'm not the only one who thought this was about the top-rated HN answer.)
I took their post to be referring to the top reply on the math exchange site, which may actually be a stable thing at this point as the question is not fresh. I may be wrong though.
Humans are strange. Perhaps, though it's not certain, if your original comment were couched as a softer request, then that softer approach might not pique certain small subsets so much?
The top-rated answer states that "the goal is to learn how to do mathematics, not to 'know' it". I don't think that's a defeatist sentiment or a rationalization for the state of things. It's certainly true that one would gain more insight from a book that shows how modern-maths-as-we-know-it evolved from a centuries-long struggle to understand basic patterns (exempli gratia, Cox's "Primes of the Form x^2 + ny^2"). But it is also true that, when you are doing mathematics, such a book is not nearly as helpful as an impenetrable tome like Neukirch's Algebraic Number Theory.
From my own experience (which, I'll admit, is not much to extrapolate from), the former was great for its "insight" and the latter just had too much unnecessary "information". Until I got stuck in an actual maths problem. Then suddenly my opinion switched, and the latter became a trusted map through strange lands.
You're right about the amount of information that's encoded in a definition or theorem. And it is very difficult to portray why that information is important (never mind how to access it; see others' comments about scaffolding and cathedrals and the like). I fear that trying to describe why groups are defined the way they are would just lead to more impenetrable tomes - but instead of being maps for the lost traveler, they'd be more like tourist adverts. But the questions the source was asking are "why are books like this?" and "how do others [learn maths] in this situation?"
Well, maths books are like this because they are guides to help people do maths. And you learn maths by doing it. I don't think it's defeatist to say that no book or lecture about riding a bike would compare to the experience of actually riding a bike. To paraphrase the top-rated answer, why would you expect anything different from maths?
But to answer the underlying question of both yourself and the source: how do we improve this? I think pedagogy should focus on getting to "do maths" faster. You really learn by getting stuck in a problem.
> I don't think it's defeatist to say that no book or lecture about riding a bike would compare to the experience of actually riding a bike. To paraphrase the top-rated answer, why would you expect anything different from maths?
This is the wrong analogy and does not capture the MO answer. The correct analogy would be training vehicle designers by teaching them all the parts necessary to build a modern bicycle without teaching them any of the ways early bicycles were designed and why those designs were discarded.
"I had to go through this and figure out everything myself, so you should too".
Some of us like it and don't see value in the long-winded motivational style. Things like group theory are targeted at people who are interested in math itself and like mathematics for what it really is.
It is interesting to learn history once in a while and there are good books about that. However, most of the time you want to move on faster.
“Sadly, our present system of mathematics education is precisely this kind of nightmare. In fact, if I had to design a mechanism for the express purpose of destroying a child’s natural curiosity and love of pattern-making, I couldn’t possibly do as good a job as is currently being done— I simply wouldn’t have the imagination to come up with the kind of senseless, soul-crushing ideas that constitute contemporary mathematics education.”
I took a minor in Math because I enjoyed some aspect of it but completely agree with the quote. All my math courses were taught in a vacuum or worse, taking problems that bore no connection to situations I cared about. Over the years, as I explored various hobbies, I found mathematical ideas having application - that's when I truly made an effort to understand concepts fully. For my kid, I plan to introduce math concepts in the context of these hobbies .. if any one cares .. these are astronomy and radio.
"closed as no longer relevant ", I find that is a growing problem with StackOverflow. My colleagues and I personally asked several relevant questions (like this one, that has many upvotes on HN), that were shut down or deleted by StackOverflow heavy-handed mods as "off topic" , "not relevant" etc. This made me give up on contributing and treat it as a read-only resource.
Most of the SEs, in my experience, dislike questions, like this one, that boil down to subjective opinion. This makes them hard to answer, as answers no longer depend on fact or statistics, but someone's opinion of something. Frankly — while I feel for the questioner, and agree w/ him — this one seems very close to that edge.
It isn't immediately clear to me that this question is relevant like you claim; the FAQ isn't really clear on it either:
> The site works best for well-defined questions: math questions that actually have a specific answer. You'll notice that there is the occasional question making a list of something, asking about the workings of the mathematical community, or something else which isn't really a math question. Such questions can be helpful to the community, but it is extremely tricky to ask them in a way that produces a useful response.
Having participated frequently in SO and others, I understand why it is necessary. But, I feel that so much is wasted with such a qualified audience. There is so much potential to be explored there.
I don't have a solution either. I just wish I had.
I asked a question on an electronics stack, but it was shut down by mods because they apparently couldn't answer the question, and decided it wasn't clear/valid. Clear case of mod ego.
I'm very curious about this: is there a reason the reductionists seem to win in communities like this? Wikipedia has a similar culture. Are there systemic forces that point this direction, or is it just an accident of the history of the specific people involved?
It's worth noting that SE and Wikipedia are two of the most valuable reference resources human civilization has produced, ever in history. Maybe the reductionism contributes to their high level of signal.
My problem is that it feels like they could have been much more useful.
Annoyingly often it is also the most useful questions that are closed while things I consider trivia style/karma-farming operations like: "what is the reason for x" seems to be totally OK.
I too find those trivia questions and their answers interesting but IMO they are a distraction.
SE aims to answer every relevant question, even basic questions. Karma is a reward for the users which do activities to generate the relevant Q&A entries.
As a member of both, no, they're really not. The information on SE is very useful but highly transient (and redundant, but at least we constantly try to rectify that). It'd be far more valuable if it could be condensed into a guide or textbook or what have you, to teach idioms to beginners.
As for Wikipedia, I'm almost convinced most information on it is erroneous, unverifiable, or useless in the forms it is presented. I've stopped making contributions and try not to ever reference it at all.
This gives no indication one way or the other of which direction causality points.
ie. I could just as accurately say: maybe their value as a reference has led to reductionism. We can't go back and do an experiment, so we'll never really know.
"Are there systemic forces that point this direction,"
When you can "contribute" to the community simply by clicking a delete button, rather than actually adding new and useful information, you're bound to get this sort of situation.
StackOverflow mods are appointed to police content. The more content they remove, the more it looks like they're doing their jobs.
It's the same with Wikipedia. Deleting someone else's hard work counts as a "contribution". It shows up in your profile just like writing a new multi-page article would.
When the same credit is awarded for destroying as for creating, the destroyers are always going to win.
> StackOverflow mods are appointed to police content. The more content they remove, the more it looks like they're doing their jobs.
I can safely say you're wrong in this regard. Content removal is a big fraction of what the mods do because the contributors handle the filtering, editing, closing, pointing out duplicates, migrating, etc., so that mods have to deal with only the important stuff.
I recognized a few of the names who deleted the post to be fine mathematicians who have contributed quite a bit to mathoverflow, so I don't think that applies. Mathoverflow is a Q&A site for professional mathematicians, you may have it confused with math.se.
Probably because both have the goal of being encyclopedic sources of knowledge and successfully achieve this goal by following reductionist policies, and it's a hell of a lot easier to follow a strict philosophy than a lenient one (because the latter invites heavy legal discussion by armchair lawyers, while the former minimizes it).
In short, likely because it's efficient and apparently sufficient.
I don't really agree with this, but I upvoted it because it's the best charitable explanation I've seen, even if I'm frustrated by what I perceive to be a missed opportunity.
Which part do you disagree with? That it's easier to enforce, or that it's sufficiently effective?
The former is intuitively true; finding good rules to enforce strictly is difficult, but enforcing them is simple (is the act explicitly allowed/banned or not?). Finding broad rules is trivial, but then you spend all the enforcement time arguing edge cases (see: US gov).
The latter is defended by the assumption that Wikipedia/Overflow do indeed follow such a policy, and it's difficult to argue that they are not successful. There might be something better out there, but at least currently the market seems to have converged on this policy; à la "democracy is the worst form of government, except for all the other forms we've tried."
I disagree with the part about it achieving the goal of being an encyclopedic source of knowledge. Except it largely comes down to how you define "encyclopedic". I would like it to mean "broad and inclusive", but it often means "highly curated", which tends to translate to "conservative".
I don't think the convergence of the market on this sort of policy provides any indication one way or the other, because of network effects. Both of those sites won their space before the culture we're discussing here became entrenched, and because of network effects, it is not realistic to unseat them.
I'm not sure you can easily argue either is "highly curated"; they both span an extremely large domain and cover at least a substantial subset of it. It's hardly the case that any one person or group handpicked particular subjects of interest; rather, they denied anything not of interest (and accepted the rest).
I imagine rather than "broad and inclusive", it should be "broad and authoritative", which then tends towards conservative when controversial (which makes sense: how can you be authoritative when general agreement can't be reached?).
And ofc, wikipedia was preceded by c2wiki, and stackoverflow by innumerable Q&A sites, both predecessors being substantially more liberal in what they accepted. And both beat out their predecessors, presumably largely because quality control was made much more difficult, and often absent, in the face of liberal acceptance.
> And both beat out their predecessors, presumably largely because quality control was made much more difficult, and often absent, in the face of liberal acceptance.
That doesn't mesh with my experience of what made the sites popular at the time of their growth. Wikipedia was the only wiki that anyone had ever heard of, and SO was the only software focused Q&A site besides expertsexchange which had a freemium model that made it unusable (not to mention a funny domain name!).
There may be so much spam that they practice a shotgun banning approach without comprehension or real content evaluation. Somehow the best "spam banners" get elevated perhaps?
Agree this is a problem; there are a huge number of worthwhile questions that get closed on stack exchange. This is because they prioritize the reputation of the community over answering all worthwhile questions (and to be fair maybe they wouldn't still be with us if they hadn't). Still most of the mods would do well to remind themselves of the official advice
In fairness, it was closed after being open for more than two and a half years. Although the point remains that the "no longer relevant" message is needlessly grating.
There are absolutely non-boring mathematical books by Russian authors. In addition to the already mentioned "Mathematics: Its Content, Methods and Meaning" by A. D. Aleksandrov, A. N. Kolmogorov, and M. A. Lavrentiev, I would add "Elements Of Applied Mathematics" by YA. B. Zeldovich and A. D. Myskis [1]. The latter is a really good introduction to math for when one needs to apply it in various settings.
There is also a more introductory book by YA. B. Zeldovich and I. M. Yaglom, "Higher Mathematics for Beginners" [1], which is a fun-to-read book about calculus.
It's totally cultural. The culture comes with some costs, and some benefits.
In physics, there are so many times where we present the same mathematical ideas in a manner that's way more ergonomic for the minds we're talking to-- other physicists.
Inexperienced mathematicians tend to see these presentations and are kind of gobsmacked, "why did no-one tell me it was that simple and concrete?" Experienced mathematicians tend to see these presentations and cringe, "don't they understand it's not that simple and concrete? For instance, look at this pathological example here that contradicts everything they just said."
The slick presentations are great if you're a physicist or anyone just wanting to use the ideas. And they're great if you're an inexperienced mathematician trying to get your bearings as fast as possible. But once you're a research mathematician, trying to build new ideas, too many slick presentations of ideas just creates groupthink. You want as many people to generate their own mental pictures as possible.
That said, few people studying mathematics go on to be research mathematicians. The mathematics culture should be much, much friendlier to people whose endgame is not being a math prof. Apathy toward those whose ambitions lie outside the ivory tower is a problem endemic to most academic subcultures.
Finally, when reading anything generated by academia, remember most of it is crap. Academics are fired if they don't make enough noise. This is truer than you think, even taking into account metrics like "impact". You can make a lot of "impact" just by making enough noise. And you generally can't make "impact" on a short timescale without making enough noise. Once your name shows up enough in a field, you'll get known as "a name" and start being cited in that field. Even when your work is drivel. If your name only shows up a couple of times in a field -- unless your work is incredibly well written and you get a good dose of luck -- it'll take years before it gets uncovered and appreciated, if it ever does.
Think of how much more attention you pay to what someone like maxxxxxxxx says here. Sure. He's good. But he's also familiar. It's primarily his familiarity, more than his quality, that makes you feel like he's one of the "names" in this little subfield of the internet called Hacker News.
Yes, I think you are right. I offer a free Linear Algebra book that appears on the first page of a Google search. The book's approach is to give a lot of explanation and motivation, so I often get emails from folks about this topic. From students often the mails say something like, "I couldn't understand my prof but by using your book as a supplement .." while from the prof I get (much less often) "way too chatty."
So I agree that a lot of it is a question of culture and taste.
On my first day in grad school a prof who I admire a great deal told me what he liked best about Differential Equations is that you don't have to say how you found the answer. You just state the answer and verify it satisfies the equation. I think that's the taste of people who enter the field.
(I also think that is the taste of the moderators of MO, which is why questions like this have been closed over time.)
> what he liked best about Differential Equations is that you don't have to say how you found the answer. You just state the answer and verify it satisfies the equation.
Man, that is kind of magical from a productivity perspective.* The vast majority of my time is spent explaining why the answer I give is sensible and reasonable.
*The small cost of this approach is that all insight that led to him finding the solution will die with him.
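That workflow, state the answer and then verify it satisfies the equation, is easy to sketch. Here is a minimal, purely illustrative check that y(t) = e^(-t) solves y' + y = 0, using a numerical derivative rather than any symbolic machinery (all names here are made up for the example):

```python
import math

# Claim the answer: y(t) = e^(-t). Then verify it satisfies y' + y = 0,
# without showing how the answer was found.

def y(t):
    return math.exp(-t)

def residual(t, h=1e-6):
    """y'(t) + y(t), with y' approximated by a central difference."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + y(t)

# The residual should be ~0 (up to floating-point error) at every test point.
for t in [0.0, 0.5, 1.0, 2.0]:
    assert abs(residual(t)) < 1e-6, t
print("verified")
```

The check says nothing about where e^(-t) came from, which is exactly the footnote's point: the verification survives, the insight behind the guess does not.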
Don't be. The first thing I did when reading your above comment was to do the Google search. I should have scrolled down!
I'm browsing through this, and it's clear to me this is a textbook, in that it easily looks like it could be the textbook of record for college courses. This is an enormous amount of work. Am I correct in assuming it grew out of your lecture notes? What led you to make this a freely available textbook, instead of going the "normal" route of going through a publisher and getting royalties?
Yes, it is a text. As the linked-to page says, it has been used in hundreds of classes at many schools as well as by thousands of individuals for independent study.
> Am I correct in assuming it grew out of your lecture notes?
No, really, I wrote it intentionally, not organically. I used Strang's book in a course a couple of times, and while that is a very fine book, the students I had in front of me had trouble with it (and anyway, I wanted to cover a somewhat different set of topics). I looked around some more, but basically I couldn't find a text that fit.
> What led you to make this a freely available textbook, instead of going the "normal" route of going through a publisher and getting royalties?
I wrote it using LaTeX, on Linux, using emacs. It seemed natural.
I do get some money, from Amazon sales, because it would be stupid to not round the price up. (But in general, everyone tells you that unless you write a very popular text for a very big audience, you are not going to see much money. You need to get your pleasure from the creative accomplishment.)
He takes the approach of assuming that his readers can range from absolute beginners to mathematicians, and that if you prefer, you can skip the equations and the exercises.
But if you attempt to understand them, then your picture of the Universe will become richer in a way that no other pop-science book can promise.
It's an enticing thought, and that and his gentle prose are just about his only tricks to push his readers through his enormous tome.
I haven't finished it yet, but it does feel rewarding that much of the maths I've come across (A-levels and engineering degree) is there, laid out in a new framework of meaning, the Universe.
P.S. There's also an eBook version, and an online resource that gives solutions to the exercises.
My advice is to not try Penrose's book. The vast majority of its contents will be utterly incomprehensible to someone without a PhD level background in physics.
Technically it is self-contained and nothing beyond high school knowledge is needed to understand it; but while technically true, in reality it's a cruel deceit. If you try to learn it without advanced training in physics and math, you'll run into limitations related to mathematical maturity (https://en.wikipedia.org/wiki/Mathematical_maturity) very quickly, within the space of the first few chapters.
The idea of this book seems noble and uplifting, but it cannot be done in this fashion. It doesn't matter that every theorem and assumption are clearly stated and follow each other in an orderly sequence. There's a huge number of new ideas, mathematical and physical, which require proper acquaintance to settle in your mind, not the 1-2 pages Penrose is able to give each of them. If someone is serious about learning physics to that level "properly" and is willing to invest the time and effort, they're much, much better off working through conventional college-level (and later grad-level) textbooks in particular subjects, taking care of math prerequisites before and during such study. That's very very hard to do, but is in fact possible, whereas studying solely from Penrose's book, I think, is not. It functions, sadly, as a sort of trap for impressionable bright people very eager for advanced knowledge - they will try to struggle through it, inevitably give up rather early on, and blame themselves.
> It functions, sadly, as a sort of trap for impressionable bright people very eager for advanced knowledge
Or perhaps people who assume that it's a good all-in-one solution to get a glimpse of understanding the Universe. For me personally, I have no wish to be mathematically fluent now. I admit that I can't imagine committing wholly to the studies of maths and physics for more than two years - this book seemed like a Magic Bullet.
My plan is to read it very slowly, perhaps over a decade. My mind is on other projects for now, but if I have this quietly stewing at the back of my mind, maybe one day I can be braver to make the jump and study maths and physics seriously.
At least, that's the plan. A retirement amongst books - I thought that's appropriate.
Sigh, I do believe you though. Have you read this book to dispute Penrose's claim that the less mathematically fluent can gain some useful insights?
It's a very good book for people with a technical background to get a taste of theoretical physics (probably the only one), but let's not exaggerate: it's not a math book.
Penrose says the same in "The Emperor's New Mind" - that you can skip the equations. Perhaps you can, but IMO you'd miss an awful lot. I found it too difficult to just jump over the equations until I'd studied enough to have met the Schrödinger equation; from then on one can pretty much grasp all the physics/maths.
Perhaps he does better at keeping the flow going in "Road to Reality" for those trying to read without deeper knowledge?
Mathematics traditionally has a macho ethos. This goes all the way back to Euclid, and his "Let no one ignorant of geometry enter" sign. The training process primarily involves solving puzzle-like problems. Cambridge University still has Wranglers and Senior Wranglers [1], chosen for success at solving puzzle-like problems, not originality. (Hardy once wrote that this set English mathematics back a hundred years.)
This history infects mathematics books.
The other traditional problem with mathematics is terse and obscure notation. There are many implicit assumptions embedded in published mathematical papers and books. This is not helpful. It's like reading code snippets without the declarations.
Have you actually hung around with some mathematicians? I find them to be on the whole quite a humble bunch of people. My feeling is that any hubris is burnt up in all of the great suffering that it takes to do good mathematics.
I think the machos all migrated to physics in the first half of the 20th century. That field is full of "big talkers". Have you ever heard of a physicist refusing the Nobel prize? No, but it has happened (more than once, I think) with the Fields medal.
Machos go to whatever gets them social status. They will move from math to physics when physics becomes important, and then to programming when that becomes cool. If art were cool, machos would go back to writing poems.
It was Plato not Euclid who's said to have had that sign ("...no one ignorant of geometry..."). I agree that Euclid was a bad influence in this way, though we know so little about his time, maybe he'd have hated that part of his influence. We have a bit more variety from Archimedes: The Method, aka "here's how I really figured that stuff out". It was long lost until pretty recently, and so didn't get to influence anything post-Hellenistic.
Also agreed about the notation. I hope something better evolves out of code.
> The other traditional problem with mathematics is terse and obscure notation.
It's only a problem if you are unfamiliar with the subject area, and if you are, you have very little chance to understand the paper or book anyway. Mathematics is just difficult, way more difficult than most other stuff people might be doing.
Here's an example: consider this classical paper[1], "Vector Bundles Over An Elliptic Curve" by M. Atiyah. The author is a well known and regarded mathematician, and the paper itself has around 1000 citations. Sections 1. and 2., "Generalities" and "Theorems A and B", just recall the basic notions and theorems that are absolutely necessary to have any grasp whatsoever as to what is going on in the paper.
If you know nothing about algebraic geometry, the classical way to learn it is Robin Hartshorne's famous textbook "Algebraic Geometry". It is famous for being good, self-contained to a large degree, and having very well chosen exercises, but also for being quite terse and often difficult to follow. Here's an Amazon link[2]. You can look at the table of contents. To really fully understand these two sections of Atiyah's paper, you need a very good understanding of Chapter 2, "Schemes", and at least the first 5-6 sections of Chapter 3, "Cohomology". That's 200 pages of pretty terse mathematics. At a fast understanding pace of 4 pages a day, it will take you two months just to have some basic toolset for understanding Atiyah's paper.
But if you try to understand Hartshorne's textbook, you'll quickly find out that it has some prerequisites of its own. Also, the "4 pages a day" pace is only possible if you've already spent 2-3 years learning how to learn mathematics.
I encourage anyone to try to understand even the first 2-3 sentences of the "Generalities" section. Google and Wikipedia the unfamiliar terms; you can also try to look them up in Hartshorne's textbook, or Vakil's lecture notes, or any other source. The notions used in these first 2-3 sentences are basic to anybody working in the field, and yet one needs to spend hours to fully understand them when starting from scratch.
Compare this to another famous paper from another field, "The Market for Lemons" by G. Akerlof[3]. This is also a very famous paper by a well regarded economist, who received a Nobel Memorial Prize in Economic Sciences for it. It is a much easier read, precisely because the economic sciences do not operate on nearly the same level of complexity as mathematics. Once you know some common-sense notions like supply, demand, and utility, and some very basic calculus, you can easily follow the argument without too much training.
My point here is that mathematics isn't difficult to read just because it uses terse and obscure notation. It is all genuinely difficult and complex, and it is impossible to invent better notation that will transfer days and weeks worth of understanding straight to the reader's brain. I would love that to be the case, but then it would cease to be as fun and rewarding to really understand.
No, it is fun and rewarding because it is genuinely hard to understand. Once you really understand it, it tends to become more obvious in hindsight, but good luck getting your understanding across to someone else who hasn't spent as much time thinking about this as you.
Economics would also be really hard to understand if it didn't operate on the real world: something very familiar to us.
I can imagine a mathematical version of economics operating on some abstract constructs designed to emulate an economic system. It would reuse no terminology from the real world for the sake of producing an abstract notion, completely (or at least artificially) decoupled from the system it was designed to emulate.
I would imagine that Akerlof's paper, encoded into this form, would be at least as hard and involved to understand as the one from Atiyah.
As long as the "abstract constructs" themselves weren't much more complicated than the current real-world economic concepts, it would only be slightly more difficult. You would need some background reading, but not any more than skimming relevant Wikipedia articles for definitions. And in that case, you'd have every reason to be pissed at economists who don't present their findings in a much simpler and more obvious way.
If, however, the abstract constructs were much more complicated than the current economic concepts, and if you were trying to solve problems at a much higher level of abstraction than economists currently do, then it would just be mathematics, and indeed it would be more difficult.
I, personally, cannot imagine how you could rewrite Akerlof's paper to be as hard to understand as Atiyah's. I can, with great difficulty, follow Atiyah's paper only because I spent literally _years_ learning the necessary background material. I am completely unable to relay my understanding to someone who hasn't spent years doing the same. I wish I were -- I'd revolutionize algebraic geometry then, just like Alexander Grothendieck revolutionized it around the time Atiyah's paper was written. On the other hand, if someone rewrote Akerlof's paper in an intentionally obscure way, you could easily rewrite it back in a clear way, once you had spent the effort to understand the obscure version yourself.
> Here's an example: consider this classical paper[1], "Vector Bundles Over An Elliptic Curve" by M. Atiyah. The author is a well known and regarded mathematician, and the paper itself has around 1000 citations.
Sir Michael Atiyah is an Abel prize and Fields medal winner. Apart from that, he has an excellent sense of humor. I had a chance to grab lunch with him last year and it was quite entertaining to say the least.
Back when I was in Uni I flunked Calculus II. The following year I switched Uni and - somewhat fortunately - I re-followed the same class.
I say fortunately because during my first round of Calculus II I couldn't fathom the slightest underlying motivation for all the statements and proofs that were regurgitated in class. I had no bearings - I nearly panicked at the Cauchy problem - and was doubting whether or not I was cut out for Engineering at all.
The following year the other Prof took a digression and for two weeks laid out infinite-dimensional spaces, Banach spaces, functionals, contractions, fixed points and so on. Then one morning, over the course of 2 hours, he plugged in differential equations and the Cauchy problem, and dealt with them with such elegance, clarity and insight that I can still recollect the broad strokes.
I had first hand proof of the blessing and the curse of mathematics...
Right, the second prof's name was Francesco Saverio De Blasi, he passed away in 2012. He was a terror to students but I appreciate the hours he spent teaching us the basics with rare depth
I've historically had the opposite problem. I find attempts in both science and mathematics textbooks to provide some sort of real world context to be distracting and wasteful. I remember that impression going all the way back to high school, but the most recent memory comes from early in college. My discrete math book had some chapter that started with a page and a half about volcanoes.
Volcanoes.
Why are you wasting my precious time with this nonsense before getting to the meat? There are other things I need to read and study. There are problems I need to use these ideas to solve... and not only the ones in the textbook.
But then, I can also recognize that such winding introductions to a subject or an idea might be helpful to people who "get bored" as the author describes. Though I suspect the remedy might be to acquire and deliberately practice study skills. It would be nice if publishers sold versions of their textbooks both with and without what the individual posing the original question would identify as "motivation." I tend to think of it as fatty prose.
Half-assed physical analogs can be a drag, especially when the analogy breaks down after any bit of scrutiny. Geometric intuition, on the other hand, I often find worth the weight of a thousand words. There have been times I haven't been able to make my way through a dense piece of mathematical text, only to pop open another book on the same topic to find a single picture that makes all of the pieces click.
The conversation is about very advanced math topics (advanced undergraduate or graduate level). At some point in the levels of abstraction it becomes hard to show concrete applications of the ideas, or rather the applications of math is to do ever more advanced math.
I find more deplorable the fact that even basic math topics are often covered in the same dry way, without discussing practical applications or introducing topics through real-world scenarios. Many math books take the attitude "You have to know this, because I say so." Whenever I write about math, I try to start with a concrete example or a useful application of the theoretical result—it's always possible to come up with something for most of first year stuff. Seriously, you'd be surprised how much better reading UX is if you start each chapter/section with a motivating example.
I find that almost all books I read do discuss "applications". No mathematics is an island. But as you say, the problem is that a lot of those applications don't really count, because the domain of the application is as foreign as the original concept.
For example, knowing that the snake lemma, a purely categorical statement, is most useful in homological algebra (such as, for example, simplicial or singular homology), is utterly useless if you don't already have an interest in algebraic topology. There really is almost no other motivation for the snake lemma, so now we're faced with the problem of trying to convince you that it's interesting by trying to convince you that algebraic topology is interesting. It can be done, and maybe we'll eventually bottom out in something like financial statements or bridge-building, or another topic that is widely recognised as "useful" and very far-removed from the snake lemma. Either way, it will be a long and arduous path, and I hardly think mathematicians can be blamed for this or be dismissed as elitists for the inherent difficulties of the subject.
But even for "first year stuff", the applications are kind of pointless. Do you really want to learn calculus because of physics? That's the most obvious and most historical application, but calculus is so foundational that you might as well motivate addition of real numbers by saying that numbers are added in physics too. More likely for the HN crowd, you want to learn calculus because you want to know how a neural network's backprop algorithm works, but how is the first year teacher going to anticipate that this is your particular interest in calculus?
At some point, I think there has to be a little "trust me, this is useful" and you just struggle through the subject until you can see on your own, after the fact, what the struggle was about. First wax on, wax off, Daniel-san. Then you will learn how you really were learning how to block karate blows.
People always say that math students don't learn about applications of math.
This was never the case for me. When I learned trig ratios, I always understood some basic things that trig ratios could be used for. The teacher always introduced some applications, we always had a lot of word problems, and I could fill in the gaps myself.
Same for calculus. When I learned calculus, I always understood some things that calculus could be used for.
So I understood how those things could be applied to general, everyday sorts of problems. What was missing, though, was that I had nothing to which I could apply those techniques, besides homework.
Learning math (and reading STEM papers) has become easier for me since I now have actual problems to solve. Don't get me wrong: I'm not solving particularly challenging problems or using particularly advanced math. Nothing that tens of thousands of people haven't done before me. But I do need to understand the problems, solutions, and some of the context in order to successfully implement them. This provides a motivation that was always missing before.
I suspect this general narrative is true for a lot of people: that having an actual problem to solve is almost necessary to get a student to really learn the material, instead of just coasting along for a grade.
High school trigonometry and introductory differential & integral calculus are not the kind of books being described in this discussion.
The example in the original post is books about group theory (or the group theory sections of abstract algebra books more generally). I can attest that this subject is very rarely described in textbooks with clear examples shown before definitions and theorems; usually the presentation is entirely abstract, following a pure definition–theorem–proof kind of structure. But many other areas of pure mathematics at the undergraduate level and above are presented in a similar fashion.
(I recommend Nathan Carter’s book Visual Group Theory for a lovely counter-example to the prevailing trend, which starts with the concrete, and is very accessible. http://web.bentley.edu/empl/c/ncarter/vgt/)
We used to "run through" books like that. Their reasoning was that to prove a theorem you only need the definitions/axioms. They really wanted us to be able to grasp the truths of a logical system from just its theorems and definitions. It was horrifically difficult. (Not all professors taught that way there.)
I feel that a lot of blame lies with modern academia's curricula. They feel every student needs to graduate in X number of years with a pretty long list of courses. It leaves little time for students who need or want more time with topics.
A lot of trig and geometry never really clicked for me until I had to use them in shop class. For instance, planning out the dimensions and cuts that you need to make in a rafter to get the desired roof pitch for a shed of certain dimensions. Or laying out the stringers for a set of stairs.
On this topic I recommend the "learn X in 20 minutes a day" series of math books for folks trying to get started in subjects they've had difficulty with in the past. I always struggled with Calculus until that book. The math is taught almost entirely by visuals and intuition hints. I came away understanding Calc, not just how to perform the steps. It helped me divine the purpose of calculus, not just the process. I've used them for Geometry and stats to similar effect. Highly recommended for anyone who shares the sentiments of the OP.
I think this, as much of higher mathematics, will change soon.
Today's technology in natural language processing is reaching the point where it will be possible to marry a natural language processing system to an automated theorem prover and have it generate formally verified proofs from math prose proofs.
Once this technology can readily process the textbooks making up the PhD curriculum, I think there will be a culture shift. Quickly there will be a new standard that math results should be formally verified. The hallmark of math, after all, is that it can be proven correct!
But with an increased role for computers will come an increased appreciation for the things that only humans can provide. Motivation and explication will be more valued when the technical aspects of theorem-proving are automated.
If what you are saying is indeed going to happen, then the "problem" will become even worse. The formally verified proofs tend to be much more unintelligible than the human generated ones, and when they stop being so, the humans will become deprecated in general.
Practically speaking though, working mathematicians do not care that much about formally verified proofs. Working mathematicians are more interested in insight and understanding, and not necessarily in being completely sure of every detail. Formally and automatically verified proofs are much better suited for programming, as the automatic verification of the correctness of the program is after all _the_ best regression test.
So, while interesting in principle, I doubt formal methods will change much in how we do mathematics. Hopefully they'll change how we do software engineering though.
Perhaps it would mean that theorems are sometimes known to be true or false before anyone understands why. Digging through automatically generated proofs looking for interesting insights seems like a rather different experience than groping in the dark, not knowing whether a proof is possible?
If you have a statement that humans don't know how to prove, finding a proof via automated search is kinda like trying to decrypt RSA by factoring the key. In both cases, you're looking for a specific key, and the search space is extremely large. You can put some insights into the tool that searches for the solution, from simpler ones (e.g. using a fast multiplication algorithm to verify candidates quicker) to more sophisticated ones (e.g. using the general number field sieve instead of trial division), but in the end they don't help you much in practice -- the search space is still too large to expect to find a key in a lifetime.
It's an interesting analogy but I think it proves too much. You could argue that computers will never win at Chess or Go because the search space is too large, and look what happened.
Although it's not proven, we have fairly good reason to believe there are no sufficiently efficient shortcuts for factoring large numbers, while there are shortcuts for proving many difficult mathematical theorems. After all, humans can do one but not the other.
You make a good point. I agree that the analogy is not perfect, and that if you assume that breaking RSA is computationally hard for some intrinsic reasons[1], then the theorem proving is more like chess and go, rather than RSA. However, theorem proving is still much more difficult than chess and go, if you consider time spent on a theorem vs. on a single game, and on the number of good mathematicians vs. the number of good chess/go players. I think we'll have human-level theorem proving solved by machines at some point in future, though not very soon. Either way, humans will be well deprecated by then.
[1] - Practically speaking though, the biggest reason we believe that factoring is hard is that we haven't really figured out how to do it, so our belief that it's hard is really built upon our feeling about the hardness of theorem proving. :) I think we have more intrinsic reasons to believe that P != NP than that factoring is hard.
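The key-search analogy can be made concrete with a toy sketch (my own illustration, not from the thread): verifying a candidate factor is cheap, but the number of candidates explodes with the size of the key.

```python
def trial_division(n):
    """Return the smallest nontrivial factor of n, or n itself if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# A toy 26-bit "key" falls instantly, but the same loop on a 2048-bit
# RSA modulus would need on the order of 2**1024 trial divisions.
print(trial_division(62615533))  # 7907, since 62615533 = 7907 * 7919
```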
I don't think so. I'm not really competent to say if the NLP part will be solved soon, but (fully) automated theorem proving is very hard, and I don't see deep learning applying to it very well.
What could happen, though (hopefully), is that mathematicians start using the various proof assistants (Coq, Isabelle/HOL, Lean, …) more and more. These allow one to write definitions, statements, and theorems in a more programming-like way, with structured proofs; to develop modular libraries; and even, in some cases, to use automated provers to wave away details of the proofs. The problem of adoption of such provers, however, is a culture problem, not really a technology problem: as said elsewhere, mathematicians do not seem to care too much about formal correctness. It's only if automated tools help in gaining insights (automatic search for (counter)examples, for example; function plotting; a built-in algebra solving system… ?) that they will convince more people to use them. Modularity might also be a selling point, but not sufficient to overcome the steep learning curve of these tools.
I have some hopes for [lean](https://github.com/leanprover/lean), a new proof assistant inspired by Coq, which puts the focus on good UI and automation.
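For readers who haven't seen such a tool, here is a minimal sketch of the style (Lean 3 syntax; details vary between versions and between assistants): statements look like type signatures, and proofs are either terms or small tactic programs that the system checks mechanically.

```lean
-- A statement and its proof, written directly as a term:
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
⟨h.2, h.1⟩

-- The same statement proved in tactic mode, step by step:
example (p q : Prop) (h : p ∧ q) : q ∧ p :=
begin
  cases h with hp hq,
  exact ⟨hq, hp⟩,
end
```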
I've wondered the same thing. Moreover, math books tend to prove the most general case of every theorem first, rather than work up with the special cases which were the reason anyone thought about the theorem to begin with. It is strangely possible to learn how to prove quite abstract and difficult theorems without actually learning much about what the theorems mean. In the process of doing this one's ability to really think about mathematics or formulate new questions worth asking, is at best not developed and at worst stunted.
> math books tend to prove the most general case of every theorem first
If only! Many times during my studies I was hoping that someone had proved a result I needed, only to find textbooks that proved a slightly more special-cased version that wasn't enough for my uses.
The first time I really grasped the very basic concepts of linear algebra instead of just being able to calculate things like eigenvalues was after watching this series by 3blue1brown:
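The geometric picture that series builds - an eigenvector is a direction the map only stretches, never rotates off its own span - is easy to check numerically. A small sketch (my own example, not from the videos):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A generic vector changes direction under A...
print(A @ np.array([1.0, 0.0]))   # [2. 1.]

# ...but the eigenvector [1, 1] is only scaled, by its eigenvalue 3.
v = np.array([1.0, 1.0])
print(A @ v)                      # [3. 3.]

vals, vecs = np.linalg.eig(A)
print(sorted(vals))               # eigenvalues ~1 and ~3
```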
I feel when it comes to motivation in math, I just want to know why people got so excited about a particular theorem in the first place. I'm ok with the answer "before this theorem we assumed all these different things we proved in similar ways were different. Now we know about their commonality and it allows us to borrow mathematical machinery and use it here".
For example, take something like measure theory. A reasonable motivation for measure theory to me is remembering your introductory probability class how you had to learn a probability mass function for discrete spaces and a probability density function for continuous spaces. Now these two ideas are obviously nearly the same idea. I mean the notation used is a pretty big hint. But you need measure theory to have the right concepts to describe how they are the same.
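The PMF/PDF parallel is visible even in a toy computation (my own sketch): both expectations follow the same "value times weight" recipe, with the integral approximated here by a crude Riemann sum. Measure theory is what makes this one recipe literally one definition, the integral against a measure.

```python
import math

# Discrete: expectation of a fair die, summing value * probability mass.
E_die = sum(k * (1 / 6) for k in range(1, 7))   # 3.5 (up to float rounding)

# Continuous: expectation of an Exp(1) variable, integrating
# value * density. Same shape of computation; the sum just gets finer.
dx = 1e-3
E_exp = sum(x * math.exp(-x) * dx
            for x in (i * dx for i in range(1, 200_000)))  # ~1.0
```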
Where are all those kinds of motivation for things like topology and cohomology?
I asked the ##math chat room on Freenode about cohomology recently, more precisely I expressed that it was for me the scariest math-word-that-I-don't-know-yet, and I received the explanation "it isn't that scary once you get used to it; it's just a way to repair exactness of sequences." Maybe that helps?
For topology, I feel like there's a sort of meeting-of-two-different-things. One strand starts with being very frustrated with the delta-epsilon definition of "limit" and its one-dimensional nature, and with the limit-definition of "continuous" and its clumsiness. The other starts from wanting to play with spheres and Möbius strips and knots and the like. When you're playing with these shapes, a bijective mapping between two surfaces is not a fine-grained-enough idea, because it need not be continuous; adding continuity gives "homeomorphisms", which also aren't a fine-grained-enough idea, because they make no reference to the space an object is embedded in: wrap a torus about itself in a pretzel knot and you have something which is homeomorphic to a torus, but in 3D you can't get there without tearing part of the surface through the other, though in 4D you can. So finally we come to the idea of an isotopy, which bumps "continuous" to the next level by saying "just as you can have a continuous path of points in space, you can have a continuous path of homeomorphisms from one to another". That's where the pretzel knot finally becomes distinct from the torus in 3D: there is no continuous path from the homeomorphism of the pretzel knot to the torus, to the identity homeomorphism of the torus to itself. Or something like that. So this path is then an "isotopy", and then certain things are nicely isotopy-invariant and so forth.
The topology explanation is about what I had in mind for motivation. I think that to appreciate cohomology, I need to appreciate what problems in homology it makes easy to solve. To appreciate that, I need to have the vocabulary of algebraic topology.
One way to describe homology, then, is that it gives you a way to talk about shapes and surfaces in the language of algebra.
I like Quanta for general-purpose math articles. They typically take a fairly recent paper and explain it qualitatively but still pretty accurately. Almost like war stories.
In the top answer, one of the last paragraphs sounds like the real answer:
> When I was a graduate student, we had a wonderful working seminar on Sunday mornings with bagels and cream cheese, where I learned a lot about differential geometry and Lie groups with my classmates.
In the end, the students needed context and enlightenment. We shouldn't underestimate the other dimensions of the learning experience.
Honestly, every time a discussion on mathematics comes up on HN I feel the same way: people seem to think that mathematicians are some kind of elitist people who don't know how to explain concepts, and that mathematics would be like coding (note: projecting everything onto the tech world is not the best way to understand the world) if it weren't for the fact that mathematicians don't explain things well. But the reality is very different:
* We know that teaching mathematics is difficult. But, contrary to what most people think, there is no magic pill to solve it. Motivation and intuition are important, yes, but so are definitions and formality. Also, even when giving them to the students, they might not be useful: sometimes each person needs a different motivation of a concept, sometimes the intuition relies on more advanced concepts... Furthermore, things don't usually click the first time you are exposed to them, but only after you have worked with them for some time. Don't think that just because you are given a motivation and intuition behind the concepts everything is just going to become easy to understand. Even motivations and intuitions need work to be understood.
* The mathematics people study in high school and first-year engineering courses is very different from the mathematics that mathematicians study. Funnily enough, most mathematicians I know don't like the approach of mathematics as a tool for other disciplines, mainly because it focuses on things that are hardly interesting (here's the 123rd trick for solving this weird integral that we are teaching you only because you will need it to solve some engineering equation) and avoids proofs that, despite being hard, most of the time help with understanding the concepts.
* Related to the last points, an important number of math books are written not for laypersons but for mathematicians. They go straight to the point, avoid long motivations/intuitions that are not especially useful, and assume familiarity with a lot of concepts. Sometimes they are written as reference books for people who already know the ideas. That, of course, makes them hardly approachable by non-mathematicians, but there are already books that do that. Just because a book or a way of teaching math is not to your liking does not mean it needs to be changed.
* Not all mathematical concepts need to have a direct application. Sometimes, things are studied just because they seem interesting or beautiful. Searching for motivation needlessly in these cases tends to be a waste of time. It's like reading a novel or solving a puzzle: you don't need to know "why", just enjoy it. If you're lucky applications can appear later, but there's nothing wrong if they don't.
> Motivation and intuition are important, yes, but so are definitions and formality.
Yet, many (maybe even most) mathematics professors and books insist on only teaching the latter two...
Every time a similar discussion comes up on HN, there's a post like yours, pointing out that the thing is too complex to break into small problems like the one raised. Yes, it is all very complex. Yet the problem raised is still a problem, and people are still not solving it.
Honestly, most mathematics professors I've had have attempted to give motivation or intuition about things. Not always useful, not always correct, and sometimes dangerously misleading, but the attempts are there.
Regarding posts like mine, they come up because the majority holds the other view on the matter: that there must be some way to hack this that somehow is going to be found in the HN comments, and that professors/book writers don't care. The reality is different: most mathematicians care about intuition and motivation. It's their basic tool of work. But transmitting those ideas, and transmitting them effectively, is not easy. In fact, there's this funny phenomenon that I've seen in myself, and it seems to apply to a lot of people: we forget the intuition we are given for a problem if it doesn't click right away.
I also completely reject the premise that "people are still not solving it". In this post there are people linking to books and pages with new ways to teach mathematics. People are solving it, and getting better each time. However, it's never going to be straightforward to teach mathematics (just as it's never going to be straightforward to teach anything; mathematics is only seen as special because it has a lot more presence in education than other subjects).
I've come across articles on certain subjects that are far more illuminating than usual - these wouldn't exist if mathematics were just irredeemably difficult to learn. The only conclusion I can draw is that mathematicians care more about advancing mathematics than about the craft of teaching. How many lecturers are actively involved in the psychology of learning, or in real testing of their students' comprehension?
A lot of books are perfectly happy to bullet-point the properties of some concept, like feeding data to a machine, rather than try a combination of explanations and diagrams to describe how those properties fit together, or to give an intuition for their emergent properties.
It's funny: I've often gotten a lot more help from offhand comments characterising a concept than from its formalities. It seems some lecturers are scared of making such observations, in case they mislead students, so they stick to the "safety" of irrefutable formality. Set theory might be sufficiently abstract that it won't strictly mislead, but it won't enlighten either.
{ As an aside, it was years after my first undergraduate class that I finally realised that the integral notation represented a summation - that it was literally the continuous form of the discrete "sigma" summation notation. I was surprised I hadn't realised it before! Being able to relate the two gave me a stronger basis for reasoning about them in some ways. For example, I now think of the variable of differentiation (dx, etc.) like a 'counter' variable (the n=... atop the sigma in summations), which helped when reasoning about integrals with more than one variable (obviously, if you try to take a limit as some variable goes to infinity, there is ambiguity if there are two unrelated variables). Plus, most continuous forms are based on discrete concepts taken to some limit. In fact, it even gives a little intuition about why an integral might be followed by "* dx", in such a way that the 'd' variable can be treated as an independent unit (taken to the other side of an equation, for example) - it is bound to the integral in some sense, but also represents a quantity in itself that can be shared amongst integrals. }
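That integral-as-summation connection is easy to make concrete; here's a small sketch (my example, not the commenter's) that approximates the integral of x^2 over [0, 1] with a sigma-style sum of rectangles:

```python
# Approximate the integral of f(x) = x^2 over [0, 1] as a discrete
# sigma-summation: sum over n rectangles of width dx = (b - a) / n.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n          # "dx" plays the role of the counter's step size
    return sum(f(a + i * dx) * dx for i in range(n))

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
exact = 1.0 / 3.0             # the true value of the integral
print(abs(approx - exact))    # the gap shrinks as n grows
```

As n goes to infinity the sigma becomes the integral sign, and the step dx becomes the "dx" written after the integrand.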
I also get the feeling that mathematics is too "pen and paper", and ripe for an overhaul with respect to anything that can easily be done using a math program. I feel the human part of maths should be the high-level concepts, not the actual calculation (although some understanding of what a calculation does is important) - hence, it is more important that I understand what an integral represents than that I know how to calculate it.
> an important number of math books are written not for laypersons but for mathematicians
But some math is useful outside of mathematical study, so it's a shame that 'maths for engineers' books are so uncommon. Consider a software engineer trying to understand cryptography better, reading about elliptic curves..
> Not all mathematical concepts need to have a direct application
All but a few can at least be characterised somehow, even by mapping to similarly abstract social constructs.
He's talking about a fairly advanced math topic, abstract algebra.
At some point, you graduate from "being taught" to "teaching yourself." By the time you get a PhD, you need to teach yourself, because you're studying things no one has ever studied before.
I'm not a mathematician, but it's my understanding from what I've read that part of it is becoming used to groping around in the dark, sometimes getting stuck, and accepting that as normal. Solving the really tough puzzles means getting stuck a lot.
This might explain the relative lack of attention towards the user experience of non-mathematicians?
I find it amusing that you call it "user experience", a very computery/hackery term for mathematics. There are so many computer-like expectations for mathematics that I see on HN. :-)
Anyway, yes. Every mathematician I know acknowledges that frustration is the natural state of affairs. If you're not frustrated that's because you haven't been doing enough mathematics yet. There's always a bigger problem, a new concept to master, a new way to look at an old idea.
Yes, to extend the metaphor, writers often pride themselves in offering a good "user experience." Bad writing does not flow smoothly and is unnecessarily difficult to read.
Good games try to provide an optimal learning experience, providing just enough challenge to be interesting without players getting stuck. Play-testing is vital; if your players commonly get stuck in ways you didn't intend, it's a bug.
There's a lot to be said for designing a learning experience to flow smoothly. We can admire the work that goes into making that happen. (It then seems strange that, by contrast, writers of math books often don't seem to be playing the same game.)
One thing a well-designed experience doesn't do, though, is prepare you for being stuck and overcoming difficulty when you're not on an artificially smoothed path.
> It then seems strange that, by contrast, writers of math books don't seem to be playing the same game.
Oh, but they are playing that game. It's just a very difficult game. Everyone wants easy math books and lots of people are trying to write easy math books (for example, my buddy Ivan and his No Bullshit guides: https://minireference.com/ ). It's just a very difficult game, and very few have come close to success. When they do succeed, it's usually only for one kind of audience and not another. For example, Spivak's Calculus is widely admired in the mathematical community for its presentation, but I wouldn't be surprised if HN derided it for being stuffy, too mathematical, elitist, and full of itself.
> He's talking about a fairly advanced math topic, abstract algebra.
That's actually a remedial-level topic for MO, which focuses on research mathematics. It's meant to be an easy example that everyone can relate to in this discussion, like talking about how children learn how to add. You'll see they say, "simple example". I'm not trying to be elitist, just trying to explain MO.
As an aside, I found the conclusion of this simple example - that normal subgroups are about being kernels of homomorphisms - hard to relate to: how would you know that without even knowing what a homomorphism is, which requires knowing what a group is? These a-ha moments that come after learning the material, and make the learner assume the teacher was an absolute idiot for not starting from the a-ha moment, are very frustrating. It's kind of impossible to start from the a-ha without the learner first bumping into all the dark corners and hard work that light the way to it. I myself have had topics ranging from (simplicial, singular) homology to the completeness of the reals and elliptic curves explained to me in so many ways, and nothing ever made sense until I sat down and struggled through all the explanations offered to me. I don't think there's a way to convey insights to a mind that hasn't struggled towards those insights.
Here I am going to quote Michael D Alder, from the introduction to an old edition of some lecture notes of his that I can no longer find online:
I try to start every course of lectures with an overview or
outline of the material in the course. Before I do that, a (true)
cautionary tale. While I was an undergraduate doing Mathematics,
I told my Calculus lecturer, a celebrated mathematician called
Ian Porteous:
“I have been getting the strong sensation in your course of being
dragged at high speed down a narrow track in a jungle. There have
been dimly sensed paths off to the sides but we have been
galloping after you and have no time to explore these
possibilities.”
I went on: “What I would like, is to be taken up to a high place
and given a view over the country through which we have been and
that where we are headed, an overview of the subject.”
He replied with an amused grin, “Wouldn’t we all.”
He elaborated the point, which was that in order to have an
overview of a subject in Mathematics, you have to crawl all over
it on your hands and knees, and then you move next door and do
the same with another mathematical subject, and then, if the
subjects are related, you may get a higher level view of the two
bits you have sorted out in detail. And then you can do some more
detailed understanding of some other bits and link those, and
maybe get a higher order insight linking the links. But a higher
order view from a height without doing the detailed work is not
possible.
The reason it is not possible is that you are looking at ideas.
You can only develop a higher level idea by understanding the
base level. We could tell you the words for the higher level
ideas, but they wouldn’t mean anything. For this reason, course
outlines in Mathematics are intelligible only after you have done
the course, not before. This is extremely maddening, particularly
to philosophers, journalists, post-modernists and others with
similar intellectual handicaps, but that’s the way things are.
It follows that explaining what the course is about in a general
way is a waste of time and can’t really be done, but I shall do
it anyway.
Edit: I just serendipitously found a passage of his in a very similar vein. Read section 5.1, "Cultural Anthropology": ftp://www.biophysics.uwa.edu.au/pub/Mathematics/Alder/DiffGeom.pdf
> And of course it is historically backwards; groups arose as people tried to solve problems they were independently interested in.
Reminds me of the joke that philosophy students at university learn the work of people who didn't go to university.
Would a text written by the discoverer of a field be more interesting?
I've found important papers easier to read than the textbooks covering them... OTOH, the original paper can be too close to its authors' own motivation, and may not treat - or even be aware of - its general significance.
A motivating problem doesn't have to be practical or relatable... it just needs to lack an obvious solution. A puzzle.
A good example (though perhaps a bit trivial) of a puzzle that seems to make even non-mathematicians curious is the 'justification' for Graeco-Latin squares.
The idea is that you have 6 files of 6 men. Each is of a different regiment, and each is of a different rank. Is it possible to arrange them so that no row and no file has two men of the same rank or from the same regiment?
You can mention this in your first lecture on algebraic geometry (or, for that matter, combinatorial geometry i.e. matroid theory) and then come back to it when you talk about projective geometries.
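The property itself is easy to state in code. Here's a small sketch (mine, not from the thread) that checks it on a 3x3 example; the 6x6 case posed above is Euler's thirty-six officers problem, which famously has no solution:

```python
# A Graeco-Latin square checker: each cell is a (regiment, rank) pair.
# The square is valid when regiments and ranks each form a Latin square
# and every (regiment, rank) pair appears exactly once.
def is_graeco_latin(square):
    n = len(square)
    rows_ok = all(len({cell[0] for cell in row}) == n and
                  len({cell[1] for cell in row}) == n for row in square)
    cols = [[square[r][c] for r in range(n)] for c in range(n)]
    cols_ok = all(len({cell[0] for cell in col}) == n and
                  len({cell[1] for cell in col}) == n for col in cols)
    pairs_ok = len({cell for row in square for cell in row}) == n * n
    return rows_ok and cols_ok and pairs_ok

square3 = [[(0, 0), (1, 1), (2, 2)],
           [(1, 2), (2, 0), (0, 1)],
           [(2, 1), (0, 2), (1, 0)]]
print(is_graeco_latin(square3))  # True
```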
Partly this is a meme diffusion problem - eg. we have really good diagrams that explain visually why Pythagoras' theorem holds ... yet students often don't see these more intuitive forms.
Another example: the Feynman video lectures are effectively inaccessible due to being in an ugly format - they should be on YouTube as a planetary resource.
I was recently looking at reviewing university chemistry so I can have informed discussions with my teenage son. Atkins has a great book, but at an exorbitant price. College texts seem to be a kind of scam. Some would argue it's morally justified to pirate such content.
We seem to be stuck in all these local minima for minor psychological or economic reasons. There should be open texts that cover most of the basics of highschool and freshman science, with upkeep costs shared by universities/schools or covered by a library tax.
We can think of education methods as a technology in itself, and the best technology - namely, the best explanations - is much, much better than the average. How do we propagate the best ideas?
Having ratings on quality, like those on Goodreads and Amazon, seems a practical benefit, but maybe we need finer granularity - e.g. rating a particular diagram in an open biochem textbook wiki.
My pet theory is that there are financial incentives to make bad text books.
Text books are usually written by university professors, often the professors from whom you are taking the class. There are absurdly many entry-level textbooks for a subject whose beginning parts probably haven't changed in over 100 years. They do this at the behest of the university, which will be capturing their royalties under an IP clause in their employment contract. The university intends to sell this book for upwards of $200 to a captured audience of teenagers who don't yet have the life experience to know they can opt out of putting up with bullshit. If the university doesn't have a constant churn of new editions, then the books will very quickly filter into the secondary market where they will be drastically discounted. So universities pressure professors to write textbooks quickly. And university professors aren't exactly the best-paid vocation. Invoke "good, fast, cheap: pick two" and you see it's "good" that is getting dropped on the floor.
Frankly, I think the old World habits of people distancing themselves from distractions and to involve their self in deep thoughts relevant to their fields, experimenting tirelessly and then collaborating with like minded personalities are either dead or severely limited nowadays.
With such pervasive usage of TV, internet, cellphones, telephones, material items, shopping, entertainment over the last few generations, we have gotten to the point where we are no longer producing Einsteins, or Thomas Edisons or Babbages. These days, your typical 'scientist' is a guy more concerned with dressing and appearing hip and is more concerned with being a CEO/marketing than with actual trench work in the scientific plains. These are the same types who encourage in turn for younger generations to follow their 'hearts', to be more 'social', to refrain from 'dry and boring studies' that consume one's entire lifespan.
It goes back to Gauss and "no self-respecting architect leaves the scaffolding in place after completing the building" - about intentionally leaving out all motivation, heuristics, etc.
Does no self-respecting software dev leave comments/documentation behind? A software program is the equivalent product to a blueprint, not a building. And an architect might leave guidelines and annotations on those blueprints (they would not dismantle scaffolds, that's a builder).
One of the biggest problems I think is that most experienced mathematicians seem to forget that the written forms of mathematical systems contain significantly shortened symbology which requires significant understanding of what is NOT written explicitly.
The most basic example I can think of is "ab" meaning "a x b": the "times" operator has been elided but still needs to be understood.
I find when reading mathematical books that I cannot follow the examples and progressions from one level to the next, as at every progression there are significant pieces of information missing - yet textbooks rarely, if ever, mention what is not written. So how can the novice possibly figure it out?
The best math class I ever took was probability, where the professor worked history of the field and famous historical examples into his lectures. It helped that probability has a colorful past, but the right motivation really brings things to life.
Also, for self-teaching, I've found some math books that are set up in the form "Topic A with Applications to Topic B." If I care about B, such a book typically does a good job of motivating A, even if it isn't the purest introduction to the area. I can always read a more canonical intro to A later.
Peter Lax wrote a graduate text on functional analysis where every other chapter consists of "applications". Even though the quotation marks are certainly warranted, I liked that concept.
I've often thought that one of the main problems with pedagogy, at least the kind that tends towards the abstract, is that it tries to give you answers without prompting the questions.
I suppose you could argue that it's the student's responsibility but maybe that would require insight and inspiration on a par with that of the person who came up with the answers in the first place and I think it's fair to say that that's an unrealistic and unreasonable expectation.
> I suppose you could argue that it's the student's responsibility
You'd then argue that it's the job of the less experienced people around to deeply understand the world's problems. I don't think it's a good argument at all.
For lower-level courses (say calc and below), the books written in the early 1900s for students seem better. Of course there are plenty of stinkers there too, but it seems that being able to explain a topic in a clear, concise manner, with applications, was prized more. The textbooks my kids have, in comparison, are utter garbage in their effort to present the material in dozens of nonsensical ways to help those who didn't understand the primary methods. Plus, "challenge" problems that rely on missing knowledge abound. The perfect example I had of this was when my child, in 3rd grade, got a set of algebraic word problems with multiple variables as part of the homework. When I confronted the teacher, she didn't see anything wrong with teaching a "trick" to solve problems where the underlying math was two or three grade levels ahead (said trick was so narrow and unwieldy as to be basically useless for any problem more complex than the ones made up for 3rd graders)...
The bottom line seems to be that the "noise" level keeps increasing to the point where the core fundamental concepts are lost. Which is too bad, because teaching "tricks" that don't clarify the problems, or make them simpler to solve, is IMHO just a waste of time.
BTW, for abstract algebra there are a number of "applications" books which teach the core concepts necessary to understand error correction or whatever. Use those books first, and then read the more traditional textbooks. The textbooks should be considered more as reference books than as something to be read cover to cover (although that itself is a problem, because jumping into the middle isn't easy).
If a dry presentation of groups doesn't excite you, then there are plenty of books that are full of examples and motivations.
Personally, I prefer the dry stuff (e.g., Herstein) because that's the abstract-est abstraction. Rotations and matrices are groups? Don't care. I just wanna see the strange hidden properties of abstract structures reveal themselves.
I thought "God Created the Integers" by Stephen Hawking was a pretty good high-level view of the history and progression of mathematics. It did give me some 'aha' moments about things I knew but never realized how they were developed.
There are two kinds of mathematics books: pure mathematics and problem-oriented. Pure mathematics textbooks need not give real-life examples or context; that is the way they are supposed to be.
"Who is Fourier? A Mathematical Adventure" is a great (though rare) counterexample - it is wonderfully intuitive. It's the best math book I've ever seen.
Might be just laziness. It's far easier to put down everything you know in its final form than to figure out how to present it in the way that will be most intuitive.
I think it depends what book you're looking at. Pedagogy in education is a pretty recent thing IMO.
As for the teaching side, I went through my entire math bachelor's without understanding the applications of what I was learning. Now that I do cryptography, I finally understand some of them. There was also a course by Gilbert Strang on matrices that taught me they are used in compression. None of my teachers could have told me that...
I had a math teacher who explained that beyond simple and clear explanations, the other way to learn math was to do example questions. Lots of examples, to see all the patterns and subtleties. The subject he'd most often cite was permutations and combinations - either you got it pretty much instantly, or it would be a struggle that you could only improve at so much by doing a lot of practice questions.
I think it's also strategic. Math is an area of science where, most of the time, the returns are not worth the cost of continuing. But sometimes one of these little things turns out so huge that it changes all our lives. So we really need math, but most of the time there is no logical reason to invest in it (knowledge, time, money). What to do about it?
One approach that seems to have become common in math is to make it magical. The entry barrier is hard as hell, and most reasonable things in math are expressed in a way that at best sounds mystical to ordinary people.
This gives math some kind of credibility. Nobody wants to be the one to say how little they actually understand about it, so they continue to support the crazy magicians instead of losing face. And in the end humanity is better off for it.
PS: You can see that it is not necessary to be so cryptic - yet math continues to be - when you compare it to programming. For instance, math uses the Greek letter ∑ (sigma) to express summation. In programming it's mostly a "for loop" or a "sum()" function. Most people can understand "sum([1,2,3])" or "for <something> do <something> done", but they don't know the Greek alphabet. So it's possible to encode math more readably, but math continues not to do so and, as argued before, probably has good reason for it.
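For what it's worth, the sigma/for-loop correspondence is exact; a small illustration (my example):

```python
# The sigma notation  sum from n=1 to 5 of n^2  reads directly as a for loop:
total = 0
for n in range(1, 6):   # n = 1 .. 5, the bounds written below/above the sigma
    total += n * n      # the body is the expression to the right of the sigma
print(total)            # 55

# ...or, equivalently, as a sum() over a generator:
print(sum(n * n for n in range(1, 6)))  # 55
```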
But the plus sign is also one character. Why not use a big plus sign instead? That would not only be clearer, it generalizes systematically: use a big multiply sign instead of a capital pi, etc.
When I was a kid I was a little intimidated by those higher math books with the "Greek E's" in them. Years passed before I learned in my early teens what they actually meant (and that it was simple, and they weren't even E's).
No, it's the direct sum notation. You have no right to tell mathematicians what notation they should or should not use. What you are entitled to is to write a mathematical paper, use some notation, and try to market it. But that requires getting a PhD, learning mathematics, publishing on the arXiv, going to conferences, going to talks, talking to mathematicians......... It's much easier to say "BUT MUH PLUS OH SCARY GREEK!"
I am sorry if I seem frustrated. It's just that I can tell you are a thoughtful, curious person who seems to want to enjoy mathematics, but instead I feel like you are lapsing into math phobia and a concern with trivialities. The fact that you seem genuinely interested makes it more frustrating. Mathematics is not notation; notation is just arbitrary historical nonsense. An OLD professor I know complains about using Fraktur (the annoying German script used in abstract algebra for ideals - German mathematicians in the 1880s developed the heart of modern algebra) because "we won the war". Don't let it blind you to the beauty and depth and complexity of mathematics. Yes, sometimes it's annoying AF (god, don't even get me started on what physicists call things), but the struggle is worth it. There are things that can be improved, but there is no magic bullet.
Thanks for going to some trouble explaining your point of view.
I'll try to clarify mine: yes, a plus sign vs. a capital sigma can be quickly explained. Let's imagine you're reading a short paper about, say, signal processing, and it says in a footnote or prefatory matter "we use tau for 2 pi". Also it uses the sum notation I brought up. Oh, and the exponential is different too: maybe one of these https://math.stackexchange.com/questions/30046/alternative-n... or maybe just the electrical-engineering angle symbol for exp(i theta).
Now you start reading the actual new material but even the fourier series looks all different and you're like "why is the author imposing this cognitive tax on me?" You can work through it, but why? The author must be a weirdo.
That'd be a reasonable reaction. But if notational trivia matter to those of us "with a freakish knack for manipulating abstract symbols" (http://worrydream.com/KillMath/) then they must also matter as a barrier to people who are more average in that regard. I agree if you're saying that something like sigma vs. plus sign is far down on the list of ways to improve mathematical communication -- even any notational reforms would not come first. (Though see http://cognitivemedium.com/ for some more thoughts on how computers doing what paper can't opens up new possibilities.) I also agree that learning must be active -- learning isn't just a matter of more efficiently pouring knowledge into a student's head. The difficulty can be roughly divided into accidental and essential; you must do well at engaging with the essential difficulty of the subject to engage well at all. But smoothing the accidental, trivial difficulties is not, in aggregate, trivial.
Most people are scared of things they don't know - not for logical reasons, but simply because they don't know them. Human nature. If you walk down the street and show random people a simple sum, they won't get it with a sigma in the middle, because their brain just wants to avoid it. But if you use English words to describe it, people will get it instantly.
Because they assume a false premise (hah! Does this count as a pun?) and take a high road. The best series of books I read about math that really helped me were the "for the practical man" books!
Another book I recommend that helped me was "The Architecture of Math" by Pierre Basieux.
These books really assume nothing beyond the ability to read and do a great job!
- e: The Story of a Number (Eli Maor)
- An Imaginary Tale: The Story of √-1 (Paul Nahin)
- The Poincaré Conjecture (Donal O'Shea)
- The man who loved only numbers (Paul Hoffman)
- Prime Obsession (John Derbyshire)
They're part biography, part history, and give a little colour to the subject that isn't available from your typical college textbook.
A wonderful book that doesn't suffer the discussed flaw is Visual Group Theory by Nathan Carter, discussed on HN earlier. It provides ample motivation and examples before diving into the theorems.
When I taught maths at university, I would always try to include in every lecture a couple of slides showing how its content could be used to get a job, or to solve a real-world problem in computer programming. It's weird that other disciplines seem to ground knowledge much better than maths does, when it isn't hard to ground maths.
I really, really wish I could do math. I have severe dyslexia and dyscalculia and I really struggle to understand numbers - it's just so difficult for me to think about them, to think about putting them together; even basic addition can be a complete struggle. I'm not sure why, but I can somewhat understand geometry.
By the Curry–Howard correspondence, programming is in fact (formal) theorem proving.
Mathematics got a lot less interesting after I realized it amounted to a giant, informally-specified, mostly undocumented body of code designed to run on the human brain... from that perspective it's hard to see why one should prefer mathematics to a well-written software program that does the same thing.
Is Curry-Howard interesting if you're not programming with advanced type systems? It's my understanding that writing a function that returns an integer (for example) corresponds to proving that an integer exists. Whoop-de-do? This would imply that unless you're constructing a sophisticated return type, you're not proving anything interesting.
So it's legacy code that runs on the oldest hardware that we have.. Plenty of people are interested in updating this code so that it runs on more modern platforms. Actually, this is my favorite way to drill into a piece of mathematics: port it to python.
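As a sketch of what that porting can look like (my example, not the commenter's): Euclid's classic argument that the primes never run out translates almost line for line.

```python
# Euclid's proof of the infinitude of primes, ported to Python:
# given any finite list of primes, N = (their product) + 1 has a prime
# factor that cannot be in the list (dividing by any listed prime
# leaves remainder 1).
from math import prod  # math.prod requires Python 3.8+

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes):
    n = prod(primes) + 1
    return smallest_prime_factor(n)

primes = [2, 3, 5]
p = new_prime(primes)   # 2*3*5 + 1 = 31, which happens to be prime itself
print(p)                # 31
assert p not in primes  # the ported "proof": the new prime is never listed
```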
My first programming book was the K&R book.
I hated reading it and was pretty sure that was nothing I would ever understand (or use).
25 years later, I now understand all the details of the book.
And now I hate ... the C language itself (with passion!).
Fortunately, nowadays, there is so much learning material that you have the opportunity to choose the one that suits you.
For example, I know that I cannot apply a theory upon reality. I have to start with examples and build my own vision by abstracting from a lot of examples.
So I choose my learning material accordingly.
My current interest at the moment is monads. Believe me, there are a lot of abstract articles about that.
But my entry point was a tutorial about implementing some stuff in JavaScript that happened to be functors, monads, etc.
Believe it or not, I used that knowledge the very next week in one of my Java projects. And my colleagues considered it "a nice trick" (which is true!).
In one word, the learning process must be fluid. Don't try to force anything. If the book medium does not suit you, don't blame the book. Or yourself. Try to find another resource and keep learning.
There exist purely theoretical computer science topics. Ullman's automata theory lectures are freely (legally?) available online. There are also authors like Knuth who are surprisingly practical.
This problem goes way beyond math and programming and into physics and general engineering. There are a surprising number of engineers who can manipulate the linear algebra tools but can't actually engineer structures or figure out linear circuits. EE filter design is another good example.
If you look at higher ed vocational training as kind of a prep-school or maybe a qualification filter for an apprenticeship it makes more sense. Once you figure out linear algebra you're qualified to apprentice to someone to teach you how to actually use it.
It's because math is used as test material. That's all it is in American education, and many others I might add. It's a fucking crime. All of science, and almost everything else, is treated similarly. They build tests into the material you are supposed to learn, and at the end of the day you get a monstrosity. I say that if we are going to hide IQ tests inside the material, then we should have no qualms about simply giving everyone IQ tests separately from the learning material. Yes, there are flaws with that proposal, but it would still be better than what we have now, where all of the material is molested. Imagine if people actually got a thorough education: no more getting to the job only to learn that they didn't teach you what you need to know. In computer science this problem is fucking atrocious.
When I was teaching calculus, I made a fundamental decision of how to define limits:
I used limits of sequences.
In our university, we encounter limits in either a handwavy way in Calc I or an epsilon-delta way in Advanced Calculus. Sequences and series are introduced in Calculus II.
Continuity is then described in the same terms. Or perhaps in terms of open sets if there is some topological topic.
I think this is pretty terrible for intuition. Not just that, but it's not even general enough - it requires normed spaces, so then you have to generalize again.
Much simpler to talk about the ancient Greeks and Zeno's paradox. Then rigorously define limits of sequences, and define limits of real-valued functions in terms of "for any sequence x_n that approaches x, y_n = f(x_n) approaches y". Simple, and it right away lets me show counterexamples beyond "left and right limits", like cos(1/x), and show students how to produce two sequences that converge to different numbers.
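A quick numerical sketch of that counterexample (plain Python; the particular sequences are my choice, not from the comment above): both sequences approach 0, yet cos(1/x) converges to 1 along one and to -1 along the other, so the limit of cos(1/x) at 0 cannot exist under the sequential definition.

```python
import math

f = lambda x: math.cos(1.0 / x)

# Along a_n = 1/(2*pi*n):           cos(1/a_n) = cos(2*pi*n)       = 1
# Along b_n = 1/((2*n + 1)*pi):     cos(1/b_n) = cos((2*n + 1)*pi) = -1
a = [1.0 / (2 * math.pi * n) for n in range(1, 6)]
b = [1.0 / ((2 * n + 1) * math.pi) for n in range(1, 6)]

print([round(f(x), 6) for x in a])  # -> [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(f(x), 6) for x in b])  # -> [-1.0, -1.0, -1.0, -1.0, -1.0]
```

Two sequences into the same point, two different limits of the function values: exactly the kind of concrete demonstration the sequential definition makes easy.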
There are similar ah-ha moments when discussing fundamental concepts of linear algebra, number theory, complex numbers etc.
The two best books I have ever found on teaching Complex Numbers are:
1) Bak and Newman
2) Schaum's Outlines
They actually give you the understanding and feeling behind WHY analytic functions are the way they are, and derive holomorphic functions from that. Imho a far worse approach is that of Serge Lang, proving everything the other way around, starting from Taylor power series.
Bottom line - make a directed graph of how you will teach your subject, and then figure out the best entry point and direction for the greatest cohesion, as you would when telling a story.
If you are curious, I now teach a course on "Thinking Mathematically", and here are this semester's results of that approach:
This is something I think about often. Maths is one of those subjects which is taught repetitively rather than philosophically, and yet there couldn't be a worse subject to teach in such a manner. It forms a barrier-to-entry which mathematicians probably care for, glancing at some of the quotes already posted here, but doesn't help people who would make excellent mathematicians if only they had the encouragement.
I always found mathematics self-explanatory. From all the repetitions the understanding came naturally, and I think there is definitely a subset of the population that has what you might call 'aptitude' for maths. It makes sense, because philosophically it is self-explanatory by definition. LHS = RHS - the trick is to prove it or fill in the blanks.
But I dare say the majority of people aren't 'apt' for maths in the way in which it is taught, but at the same time it's totally unfair to rule them out as maths-stupid. Many people who have no mathematical knowledge whatsoever still prove rather deft at deducing when their partners are lying or when an argument in a debate is self-contradictory. People who failed their mathematical education go on thinking they suck at math, but when they play video games and figure out optimal strategies and formulae for success in the game, it's not all that different. Sometimes the mathematical understanding comes later to people as a direct result of being forced to reason about mathematical problems in an applied setting and discover that they could have been good at maths all along if they had only understood the 'why' of the exercises and formulae they were doing.
I think there are two big stumbling blocks that stop people from taking a bigger interest and investing more concentration and care into maths: firstly people mix up arithmetic with maths, and assume from the fact that they take a long time to divide a couple of numbers (or get the wrong answer when they do so) that they are doomed to be crap at maths.
But that's utter BS. I'm hopeless at mental arithmetic, but maths isn't about adding numeric values, it's about deduction and reasoning, and I'm certainly not alone in being a terrible arithmetician and a good mathematician.
The other issue that people struggle with is that the notation itself isn't clearly explained or appears daunting, especially if they didn't go beyond compulsory education. It can prove quite distracting because a person might have questions as to why limit notation looks the way it does or what the hell the integral sign means, when really it's no different to choosing commas and brackets to denote semantic delineation in written language.
Take limit notation or summation. The placement of the various numbers is arbitrary on a fundamental level, it's just the standard everybody agreed to use. If someone good enough at math sat down and tried to build up math from first principles, with no knowledge of modern mathematical standards and notation, he/she could come up with the same formulae and concepts but an entirely different way of representing them.
That's exactly what happened, really. You had Leibniz and Newton just organizing information in a way that made sense to them, and it stuck. An explanation of how arbitrary the notation is on a fundamental level would probably make a board full of symbols and algebraic letters considerably less daunting for those who could-but-daren't understand it.
I think there's a big problem in education as well. I'm from Great Britain and growing up I think the biggest mistake in our education system is the dire failure to explain mathematics in a way that is friendly for people who aren't tip-top at abstract thinking or imagining.
Take the humble function, for example. The teacher says "a function takes an input and produces an output. so f of x equals y minus 5x..." - and half the class stares blankly, daydreams for 15 minutes, chatters through the "work your way through the textbook" phase and does one of two things before the teacher goes through the solutions:
A. copies answers from the back of the textbook;
B. copies an adjacent boffin.
I think the much maligned set theory actually provides a very good way to teach about functions, and yet it doesn't appear until you hit college, and then suddenly a function is referred to as a mapping, without any preparation or explanation of whether there's a difference between 'function' and 'mapping'.
I was endlessly helping classmates in school, and I often think about how the education system could have been better and made math more accessible. The conspiracy theorist might say that society cannot afford for everybody to grasp mathematics and get a /good/ job, but hopefully that's not the case.
Geometry in education is a big fail as well. When I was growing up, trigonometry wasn't explained at all; it was just repetition of applying trig tables. Nothing visually relating right-angled triangles and circles, like Thales did, was even touched upon until college.
Secondary education in the UK tries to cover too many things. Not that there's any way of chopping math down to a single textbook or subject area, but there's often little in the way of structure, and it's about cramming as many formulae down students' throats as possible and then examining them on their ability to apply what they have repeated. But you can't apply what you repeat; you apply what you learn and grok.
I'm not sure why there is this enormous failure in education. It could be because the people who write the books and lesson plans already grok mathematics and forget to give people aids to intuition or intrinsically understanding what they're doing. Or maybe it's because a mathematical education excludes the philosophy of math and set theory which is a mistake. Set theory is perfect to start off with, it gives insight into functions, probability, everything really. Can always discredit the /project/ of founding mathematics on set theory later.
Of course, maybe I went to a bad school, and it was a few years ago now so perhaps things have changed - but I don't think so. Most of my younger sister's friends have a worse understanding than the disinterested kids when I was growing up, but that could be down to the whole fast-food, instasnaptwit culture and a hundred other things that seem to be distracting people from anything academic.
This is a general problem with textbooks. Textbooks are not written for students. They are written by experts in the field, and the audience is other experts in the field.
This isn't necessarily a problem. It's a good thing to have experts working together to codify the current state of affairs, but it isn't necessarily the best teaching device.
Even when you have people writing books that are intentionally targeted at people new to a topic, it's easy to forget how much you know and how much beginners don't know. I'm working on a Music Theory book with an audience of total beginners, and it's extremely difficult for me to put myself in that mindset. In doing research for that book, I've been going over many of my college music theory textbooks, and re-reading some of the treatises about the subject going back to Pythagoras and Plato.
Things that were infuriatingly opaque to me as an undergrad are now--after decades of study--just plainly obvious and practically self-evident. You forget how little you knew after a while.
If there were one thing I could go back in time and tell myself when I was younger, it would be to make it clear that textbooks are not there to explain it to me like I'm five. Textbooks are scholarly works intended to be as precise as possible about the current consensus of experts in the topic area. Approach a textbook the same way that you would approach a philosophical treatise. You aren't going to get it all the first time around. And I do tell that to my violin students who are now starting to go to college.
Textbooks are designed to be paired with an instructor who can interpret the book and guide the students. They are not actually designed to be self-sufficient courses of study that you can simply read and learn from.
The reason for this is that this is the model of education that's been pretty much generally accepted. The university model really hasn't changed a whole lot since medieval times and arguably going back to greco-roman times. At least in the West. I don't know enough about Asian culture or history to comment about that. You have a master and students and some texts laying out the way of things. And in order to become the master you have to jump through a certain set of hoops.
It's an almost religious sort of process, in some ways. There's a tradition of a certain type of study with anointed masters, there are holy texts that you don't understand but must adhere to. And if you put in enough time and survive certain types of hazing rituals, you get the privilege of becoming anointed yourself, making very little money, and possibly contributing to a holy text yourself.
Out of the many things that get cited as disruptive with respect to the technology revolution, one thing that doesn't get talked about a lot is the way that the internet has disrupted the education market.
An "explain it like I'm 5" post on reddit is far more practically useful than any textbook on any topic, if your goal is to help a person learn about a topic. I spent the vast majority of my life studying violin, and music theory. And I was a professional performer for 20 years. When I got burned out on performing and traveling and never making enough money, I got online and started learning, practically, how to code. At first just enough to get a junior job. And I kept reading, and I got better. It's been 11 years now, and I haven't looked back. I can write code and play the violin when I feel like it, and work on a book about music theory. All that as a college dropout. It's a life I'm deeply grateful for.
A generation ago, changing careers would've meant going back to school, probably taking on some debt, finishing up a degree that I never really wanted anyway, and then going to start something else. Which is a pretty big pause in your life when you're in your late 20s.
I didn't have to do any of that.
At the same time, rigorous textbooks certainly have a place in the world. The same way that Plato's Timaeus or Descartes' Compendium Musicae have a place. The biggest problem with textbooks is that people misunderstand their intended purpose and audience.
They are tools created by academics to help create more academics. Nothing more; nothing less. If you want to participate in the academic system, you have to play by its rules. That means that you deal with these textbooks and the professors who guide you through them until you yourself are a professor, and you can in turn, guide others.
For those of us who don't want to do that particular type of work and would rather be educated practitioners, there's the internet.
This is dead wrong. "Textbooks are not written for students"? Yes, yes they are. Unless you're reading a review written last year summarizing the last ten years of a very, very, very specific sub-field... YES, THEY ARE WRITTEN FOR STUDENTS.
For an answer, there have been various influences:
(1) Whatever math was before 1900 or so, by the time of the Russell paradox and its fix with axiomatic set theory, the style of the fix was close to the Russell-Whitehead (if I have that right) idea that proofs could be checked essentially mechanically by just symbol substitution and manipulation. E.g., in those days there was a book on the natural numbers (that is, 1, 2, 3, ...) that apologized for numbering the pages before the natural numbers had been carefully defined!
There was even a name given to this style of writing, telegraph style.
(2) Of course, the books written on axiomatic set theory easily fell into the telegraph style. Even there the writers were getting into trouble: they gave names to the various axioms but didn't explain why those names were appropriate, and I never could discover why. But I was eager to get out of the sub-sub-basement of axiomatic set theory ASAP, so I just did a f'get about it.
(3) When books were written on abstract algebra, e.g., basic set theory, construction of the main number systems -- the naturals, integers, rationals, reals, complex -- and then went on to the main algebraic systems defined with axioms -- groups, rings, fields, vector spaces -- it was easy to stay with the telegraph style. E.g., it was tough to find a book on abstract algebra that also discussed group representations and their applications to quantum mechanics and molecular spectroscopy.
(4) For a long time, calculus was often done with a lot of intuition and nothing like the carefully done definitions, theorems, and proofs of, say, W. Rudin's Principles of Mathematical Analysis. And physics and engineering kept drawing diagrams with a "little interval dx" etc. So, once abstract algebra was proving theorems, the calculus authors also wanted to be careful, at least about delta-epsilon arguments. But including a lot of physics and mechanical engineering, touching on the heat equation or fluid flow, was considered off-topic. Bummer.
(5) The series of astoundingly carefully written books, close to telegraph style, by the team Bourbaki was influential.
(6) During the Cold War and the Space Race, US math was awash in grant money and essentially turned its back on the physical science motivations and applications. Some of the funding people started to get angry about that, and we got the Tom Lehrer song and joke about abstract math being about "the analytic algebraic topology of locally Euclidean metrization of infinitely differentiable Riemannian manifolds" or some such.
But, sure, especially in analysis, for a good proof there's often a good picture, and if you can see the picture then you can construct the proof easily. E.g., for positive integer n, the set of real numbers R, and convex f: R^n --> R, the fact that f is continuous has a really cute picture. Same for Jensen's inequality. In linear algebra, the polar decomposition says that each square matrix is just (A) a rigid motion, rotation and/or reflections, followed by (B) moving a sphere into an ellipsoid by stretching and/or shrinking along mutually orthogonal axes. One or more of the axes goes to zero if and only if the matrix is singular. Etc. Nice picture.
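That rotation-then-stretch picture can be checked numerically. A minimal sketch using NumPy (the example matrix is my own choice): build the polar decomposition A = Q P from the SVD, where Q is orthogonal (the rigid motion) and P is symmetric positive semidefinite (the stretch along orthogonal axes), and verify that the stretch factors multiply to |det A|, so the matrix is singular exactly when an axis collapses to zero.

```python
import numpy as np

def polar(A):
    """Polar decomposition A = Q @ P via the SVD A = W @ diag(s) @ Vt."""
    W, s, Vt = np.linalg.svd(A)
    Q = W @ Vt                      # orthogonal: rotation and/or reflection
    P = Vt.T @ np.diag(s) @ Vt      # symmetric PSD: stretch factors s on orthogonal axes
    return Q, P, s

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Q, P, s = polar(A)

assert np.allclose(Q @ P, A)                  # the factorization reproduces A
assert np.allclose(Q.T @ Q, np.eye(2))        # Q is a rigid motion
assert np.allclose(P, P.T)                    # P is symmetric (orthogonal stretch axes)
assert np.isclose(abs(np.linalg.det(A)), np.prod(s))  # singular iff some s_i == 0
```

The unit circle maps under A to an ellipse whose semi-axes have lengths s[0] and s[1]: the "sphere into an ellipsoid" picture in one short computation.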
Currently, then, there is an opportunity for math authors to include motivations for their subject, definitions, and theorems, intuitive descriptions and helpful pictures, applications, issues, open questions, etc. Uh, when reading a proof, for each of the assumptions, check off where it was used in the proof! The definitions, theorems, and proofs can still be fully precise and solid.
Some students of, say, analysis have long tried to find and draw pictures that would clarify what was going on in some definitions, theorems, and proofs. I would advise new students to do that also.
Because they are written by mathematicians. In my case, when I have learned a mathematical topic, the intuition becomes obvious and the derivations/proofs seem to be much more important for gaining a complete understanding. I have gone up against texts with complete bewilderment, only to come back after gaining the intuition and found the extensions of the core premises and proofs provided by the text to be highly enlightening.
Great math teachers understand the need to teach intuition. He wasn't a math teacher, but I think Richard Feynman is the pinnacle of this. See [1] to see how he expresses intuition about physics, and his Red Books[2] for how he teaches mathematical physics with all the qualities I believe makes a great maths text for students.
Also, there's a linear algebra MOOC which also teaches great intuition before delving into proofs and heavy detail [3]. I mention these examples because they are exemplars of this idea of teaching intuition.
There really needs to be a version of the Feynman lectures for mathematics.
Although, this is what Arnol'd has to say [1]:
"Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap... In the middle of the twentieth century it was attempted to divide physics and mathematics. The consequences turned out to be catastrophic. Whole generations of mathematicians grew up without knowing half of their science and, of course, in total ignorance of any other sciences."
Feynman learned mathematics from a series of self teaching books published in the 1940's suffixed "...for the Practical Man" and prefixed with Arithmetic, Algebra and Calculus. I have the full set and this is a rather good solution to the problem. They teach you insight and how to think about things as well as the mechanical aspects. This is IMHO a well solved problem if you don't mind skipping more modern abstractions such as limits.
From there he was given a calculus book, the title of which I cannot remember. I never got that far.
I suspect you have to at least follow the same path to have the same intuition.
I sometimes get the feeling that we have taken a huge step backwards in math books over the past 50 years. Back when I was in college and studying multivariate calculus, I happened to find a small, ~100 page book called something like "Introduction to Multivariate Calculus" from the '50s in a used book store. This tiny book not only covered basically the whole curriculum of my course, but did it with much greater clarity than the 500+ page book that was our textbook. I can basically thank that book for my passing that course. I find on the whole that introductory math books especially have gotten harder to follow, less clear, and a lot longer over the past few decades.
I've taken the liberty of taking a quick snap of a random page in "Arithmetic for the Practical Man" to include below for those poor people poisoned by modern textbooks:
I see horrible modern behemoths of over 1,000 pages that leave you dazed, confused, and full of facts but with nowhere to go with them. EE textbooks are even worse on this front than your average mathematics textbook. I've seen one proudly promoting over 1,500 pages and 1,000 illustrations, yet it doesn't even get as far as an op-amp or discuss anything at the system level.
I bought the series as well; love it. It's my daughter's favorite math series. One interesting thing I noticed in this regard is that textbooks from the '30s to the '60s have way more textual description. They seem to spend more time looking at the problem or concept in a literary way, and that might have helped build a better understanding for the student.
I think it's a bit misleading to say he learned math from those books. He got his start there, but surely the bulk of his mathematical knowledge was more advanced. However, it's quite possible that he retained the attitude from those early books. It seems to me though that he already had that attitude prior to reading the "practical man" books, and it is more that they particularly resonated with him because of it.
In search for books like these in the past, I found "Understanding Analysis" (Stephen Abbott). I went through the first two chapters and I liked it. It is written in a narrative which is both entertaining and instructive. He explains the problem, why is it relevant, ways of approaching it, etc. "It is designed to capture the intellectual imagination."
From the preface:
"This book is an introductory text. The only prerequisite is a robust understand-
ing of the results from single-variable calculus. The theorems of linear algebra
are not needed, but the exposure to abstract arguments and proof writing that
usually comes with this course would be a valuable asset. Complex numbers are
never used.
The proofs in Understanding Analysis are written with the beginning student
firmly in mind. Brevity and other stylistic concerns are postponed in favor
of including a significant level of detail. Most proofs come with a generous
amount of discussion about the context of the argument. What should the
proof entail? Which definitions are relevant? What is the overall strategy?
Whenever there is a choice, efficiency is traded for an opportunity to reinforce
some previously learned technique. Especially familiar or predictable arguments
are often deferred to the exercises.
The search for recurring ideas exists at the proof-writing level and also on
the larger expository level. I have tried to give the course a narrative tone by
picking up on the unifying themes of approximation and the transition from the
finite to the infinite. Often when we ask a question in analysis the answer is
“sometimes.” Can the order of a double summation be exchanged? Is term-by-
term differentiation of an infinite series allowed? By focusing on this recurring
pattern, each successive topic builds on the intuition of the previous one. The
questions seem more natural, and a coherent story emerges from what might
otherwise appear as a long list of theorems and proofs."
That rant is a bit ridiculous to post here - the foundations of computer science are largely the result of mathematicians who weren't particularly interested in physics.
Well for one Turing was certainly somewhat interested in physics. From his WikiP page:
In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but it is possible that he managed to deduce Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
Then there was von Neumann and several others. If not interested then at least well educated in physics.
I don't think it's the intuition. I think it's the part where people are explicitly and implicitly taught to avoid metaphors, since they are considered bad analogues and "window dressing on top of objective literal truths".
The sad part is, Lakoff and Johnson already provided a good counter-argument to that thesis in the eighties with their landmark 'Metaphors We Live By', suggesting that metaphors are the main way humans make sense of the world, almost as if they are the fundamental intuition you refer to. Since then, the evidence for this case has only been piling up.
Especially in the field of machine learning we're finding more hard evidence that metaphors are not decoration, but fundamental parts of how we transfer information. Using rich metaphors to pass on implicit information between teacher and student is known as "privileged information":
> When Vladimir Vapnik teaches his computers to recognize handwriting, he [harnesses] the power of “privileged information.” Passed from student to teacher, parent to child, or colleague to colleague, privileged information encodes knowledge derived from experience. That is what Vapnik was after when he asked Natalia Pavlovich, a professor of Russian poetry, to write poems describing the numbers 5 and 8, for consumption by his learning algorithms. (...)
[After coming up with a simple way to "quantify" the poetry], Vapnik’s computer was able to recognize handwritten numbers with far less training than is conventionally required. A learning process that might have required 100,000 samples might now require only 300. The speedup was also independent of the style of the poetry used.
Now, of course, knowing how to come up with a good metaphor is a skill in itself, and bad metaphors do lead people astray. But they do so precisely because they are so good at transferring information - wrong information, in the case of bad metaphors.
Perhaps it is also the case that many mathematicians are far to the autism side of the spectrum, and find it difficult to verbalize "soft" aspects such as motivation and intuition, even though those are extremely important in a textbook.
It's almost as if they partly lack theory of mind, and cannot imagine that somebody else does not know what they know, and consequently give explanations that are only suitable for someone who already knows the material.
Sorry, but this is extremely misguided. You are making assumptions that make it look like you have never talked to a mathematician as a person, only as a machine or a mysterious magician.
Mathematicians do not find it difficult to verbalize motivation and intuition because they (we) are autistic. We struggle to do it because it is really difficult sometimes. You can find that you have an intuition that you cannot put into useful words (sometimes the intuition is just "it should be like this"), or that it relies on more advanced concepts (something that clicks only once you know the concepts that build on it), or you know that the idea in your head is not useful to others and putting it down in a book would make things worse. I have not written any book, only some notes for other students, and even with that I already know it is difficult to write good motivation and intuition for abstract concepts even when you completely master them. Moreover, most mathematicians learn to develop their own intuitions and motivations, so it is not as important to write yours (which are subjective most of the time) down.
I'm an artist. I've been doing art since I was a child: I started oil painting when I was 8 or something, and remember drawing and coloring being one of my favorite activities as a young child (before kindergarten). I can't properly explain how I see things the way I do. I can't really explain why I choose some colors in non-realistic pieces other than "they look good", although I know that learning color theory from the color wheel helps explain that and helps folks learn. It took me some time to learn to explain this stuff to others. I'm not autistic, though I likely have a mild case of dyslexia.
I always imagine that skilled mathematicians are somewhat the same. It isn't that it only works for someone that already knows, it is more that they are writing the books for folks whose mind simply has that sort of slant to it.
If we change all the books to help someone like me - I generally did poorly in algebra, trig, and calculus but found geometry fairly straightforward - that's going to let the other folks down, because their mind's logic simply doesn't work on the same wavelength. This isn't autism, but simply human variation.
I think there is a difference - there is nothing natural about things like notation, axiomatic systems, or concrete theories. They have been developed over hundreds of years and have to be learned. A kid will not invent them intuitively (unlike matching colors). As you learn math, oftentimes you don't understand something for a long while, and then something clicks and suddenly you get it. A mathematician coming to a new theory oftentimes needs a lot of time to get through just one page of an article.
I am not denying that talent exists, but it just makes learning harder or easier and defines the plateau. Though if you were good at geometry, then your brain works well enough for math (meaning some click in algebra didn't happen, but you are unlikely to be incapable of thinking that way; the thinking is not that dissimilar).
Nonsense. Most people on the spectrum are not good at math (lacking abstract thinking, for example), and most mathematicians are perfectly within the normal spectrum. While it is cool and fun to read about mathematicians or programmers on the spectrum, the majority are nowhere near it.
Moreover, mathematicians are trained to remove things like motivation and intuition from scientific papers and when they communicate among themselves. You get punished when you don't. The training (which is perfectly sensible for the field) is a much simpler explanation than attempting to diagnose people.
> Moreover, mathematicians are trained to remove things like motivation and intuition from scientific papers and when they communicate among themselves. You get punished when you don't.
It is not about being autistic. It is just that teaching is hard, especially maths. Math is all about abstraction, and we want concrete examples, see the problem...
Teachers know that their students don't know everything. But the hard part is guessing what they actually do know and adapting to it.
I spent a couple years figuring out a certain problem. When I was done, it took me six months to be able to write up a blog series about it. I don't think autism has anything to do with it. Some problems are really hard to communicate.
There is a huge amount of information encoded in the choice of exactly how to define things, and in which theorems people care about. This reflects a long process of trial and error as the field was constructed. For a famous philosophical treatise on this, using the Euler characteristic as an example, see "Proofs and Refutations" by Imre Lakatos:
https://en.wikipedia.org/wiki/Proofs_and_Refutations
Part 1: https://math.berkeley.edu/~kpmann/Lakatos.pdf