A friend of mine who is a physicist once complained to me that every time he had to install some scientific software package on his computer, he had to deal with a litany of arbitrary things which seemed to have nothing to do with his task. I countered with my experience learning math and joked that at least programmers are willing to occasionally refresh our idioms and notations to better reflect our mutual understanding.
Thanks for the comment. There's this weird notion in Calculus education that we need to start from first principles. Limits were invented a century after Newton died, yet they're taught first. "Oh, students won't understand calculus unless they can build it from first principles. I don't care if Newton worked out gravitation with his understanding, it's not good enough."
My little candle in the darkness (http://betterexplained.com/calculus/lesson-1) is to start with intuitive notions (X-Raying a pattern into parts, Time-Lapsing parts into a whole) and then gradually introduce the terms. Eventually, if people are interested, we can get into the theory (which is like getting into the Peano axioms of arithmetic, if you care to go that deep).
I think in math there's a tendency to express things in the lowest-level "machine code" we can. We need more comments and pseudocode outlines :).
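For instance, here's a toy Python sketch of my own (not from any textbook): the continuous-growth formula as terse "machine code", then the same idea with the names and intent spelled out:

    import math

    # "Machine code" version: A = P * e**(r*t)
    # The same idea with comments and names spelled out:
    def continuous_growth(principal, rate, time):
        """Amount after `time` periods of continuous growth at `rate`."""
        return principal * math.exp(rate * time)

    print(continuous_growth(1.0, 1.0, 3))  # ~20.09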
It depends. In high school, it certainly makes sense to start from an intuitive, rough idea, but in college/uni, I much preferred classes that started from first principles (axiomatic QM, sets → topology → metric spaces → calculus/algebra, etc.) rather than the weird classes which define nothing properly and put in a “this is true, but I won’t tell you why :P” every other lecture. Which approach is preferable obviously also depends on your aims, but I found it hard to get a thorough understanding of a topic without a first-principles-based derivation at least at some point.
For a potential math major [i.e. people for whom metric space refers to a measure and not a flat in Europe :)], you definitely want the ground-up understanding. In CS it's similar, where you learn about transistors, logic gates, ALUs, CPUs, machine code, compilers, along with high-level languages.
But, some people just need the HTML "Hello World" to make their webpage. (In the math field, we have students who need calculus primarily to find the min/max of a function, and are wasting time worrying about epsilon-delta definitions of continuity. Limits are interesting, but I'd prefer to ignite curiosity with higher-level topics and then dive into the details, instead of forcing someone to learn organic chemistry before being able to drive a car.)
Where I studied, we did have classes on all the low level stuff, but we didn't start there: I took a class taught using a high level language every semester. CS-100 was intro to how computers worked, but right along with it, you had 101 teaching C.
If you are going to end up teaching software engineering material, compiler design, and such, you just can't have students who have barely done any programming for a year or two, or all they are going to learn is boxes next to each other, instead of being able to actually write a simple compiler in the compiler class.
Drive the car, build your interest, then start taking classes in chemistry, physics, etc. to see how it works.
Is that what we learn in high school? Nope. So everyone is left wondering what the hell they are good for (or were, before we had calculators) the first time around.
(btw, as far as I can tell you don't have that bit of history on your site yet either - might be an interesting addition?)
Seeing some modern applications of logarithms would be great. Even just making a log-log graph at some point would answer every question. But those are not in the official curriculum.
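For instance, a minimal matplotlib sketch (my own example; nothing like it is in the curriculum) showing why log-log graphs are so handy: a power law becomes a straight line whose slope is the exponent.

    import matplotlib.pyplot as plt
    import numpy as np

    # A power law y = x^2.5 is an inscrutable curve on linear axes,
    # but on log-log axes it is a straight line with slope 2.5:
    # log(y) = 2.5 * log(x).
    x = np.logspace(0, 4, 50)  # 1 to 10,000
    y = x ** 2.5

    plt.loglog(x, y)
    plt.xlabel("x")
    plt.ylabel("y = x^2.5")
    plt.title("A power law is a straight line on log-log axes")
    plt.show()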
Exponents let you plug in time, and get the amount of growth. e^3 ~ 20, which means "3 periods of 100% continuous growth [100% is implied by e; 3 = 3 × 1] will grow us from 1 to 20".
Logarithms let us plug in the growth, and get the time it took to get there.
ln(20) ~ 3 means "It takes 3 units of time [growing at 100%, continuously] to grow from 1 to 20".
Exponents take inputs and find the future state, logs take the future state and work backwards to find the inputs that got us there.
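In Python terms, a quick sketch to check the numbers:

    import math

    # Exponent: plug in the time, get the growth.
    print(math.exp(3))   # 20.0855... -> 3 periods of 100% continuous growth: 1 -> ~20

    # Logarithm: plug in the growth, get the time it took.
    print(math.log(20))  # 2.9957... -> ~3 periods to grow from 1 to 20

    # Each undoes the other.
    print(math.log(math.exp(3)))  # 3.0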
e was only discovered because of logarithms, not the other way around! :)
Personal anecdote related to Pythagorean theorem/euclidean distance.
When I was in ninth grade, I asked my math teacher to explain why the standard deviation is defined the way it is, and why the square root and not something else. He could not explain that to me satisfactorily.
I spent a few hours looking at the formula myself, plotting and drawing different datasets, until I made the following association: if a dataset is like an N-dimensional vector whose values x1, x2, ..., xN are coordinates, then I'm looking at the Euclidean distance from the mean, adjusted for the number of dimensions. I think this single moment of a cryptic formula making sense changed my attitude towards statistics and math. The dataset ~ vector, data-point ~ vector-coordinate association helps you understand statistics using geometric intuition.
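A small Python check of that association (my own numbers, picked so it comes out even):

    import math
    import statistics

    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    n = len(data)
    mean = sum(data) / n  # 5.0

    # Treat the dataset as an N-dimensional vector: the (population)
    # standard deviation is its Euclidean distance from the "all-mean"
    # point, adjusted for the number of dimensions.
    distance = math.sqrt(sum((x - mean) ** 2 for x in data))
    print(distance / math.sqrt(n))  # 2.0
    print(statistics.pstdev(data))  # 2.0 -- the library agrees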
Bithive123 - I found that practicing Project Euler has helped my understanding a lot. Part of it was that it made math a joy again, and I tied it to learning new programming languages.
Any learning course should include one textbook which develops the content from an historical perspective.
For example, the Fourier Transform was originally _rejected_ as implausible when presented to the mathematicians of the time. They needed a decade of debate to verify its truth, and yet we teach it to students in a week and expect them to internalize it without issue. We need to acknowledge the difficulty/counter-intuitive nature of the idea up front.
Nice summary: http://carbonatoms.wordpress.com/2009/03/13/prime-numbers-ar...
The distribution of primes is tied to the zeros of the Riemann Zeta Function. Nature loves minimizing energy, so there might be some reason the Zeta Function models how atoms behave. Then primes would correspond to the most "stable orbits", or have a similarly useful property.
Super high level, but I think it's interesting.
Why would you need to prove the central limit theorem in order to use calculus? What kind of course teaches it that way? It sounds like something you'd prove in an honors calculus class that prepares students to become mathematicians.
Anyway, about math education: math below the college level is taught by math educators. They are not mathematicians, and they do things differently.
Things in college are rigorous. Should college students learn calculus in a way that makes engineers happy and mathematicians cringe?
"Learning things not from first principles but 'intuition' of the real world? what is this? physics 101? ".
Different professors/departments view it differently; depending on the view, you will get a different education.
There are places offering business calculus. I personally believe it to be a much better way to teach calculus to people who don't really need to know how things work, but still need to know how to use them (at least for the simple functions appearing in the real world).
> programming as a discipline has learned to avoid; encouraging varied and/or terse notations, opaque variable naming schemes, and arbitrary use of jargon where simpler terms would suffice.
Why should all mathematicians suddenly write things with the exact same conventions just so people can learn them more easily? Likely, most people who would benefit from this won't advance mathematics in any way.
Here is what I see when you write these things, translated into a programming perspective:
"The barrier of not able to program comes from the number of programming languages, it be nice if we can all agree on the same programming language so I never have to learn another programming language again."
Maybe learning calculus itself is more akin to learning one programming language, but calculus is rarely taught in a way meant to end all learning (again, it depends on the professor/department). When mathematicians teach calculus, they want to teach the principles that let one advance even further.
And now to each point (as someone who had a pure math education):
1. "encouraging varied notations" This is not true. People might have different preference because which school/field they come from, I have yet to see anyone enjoying people come up with new notations when there are some established notation(s). In general, language use in the same field are mostly consistent (for a few decades, at least). No competent mathematician would have inconsistent notation in one single article.
2. "Terse notations" A few symbols captures a whole lot of ideas. It's not software engineering, the number of different variables/symbols in a paper is not the same order of magnitude as our programs. There are books with table of symbols of around 2 pages, but usually such a book would take years to master.
3. "opaque variable naming schemes", it's a convention can be learned by doing, and you don't even need to follow someone else's scheme. In fact, not knowing the convention doesn't mean anything. Most variables are used a few times and never used again. Just a few days ago I pick up a paper by the Hungarian combinatorial optimization authors, their notation is completely different from what I usually read. Once I read the introduction, I already internalized the "table of variables/notations" so I don't even need to refer back often. To programmers, this is like complaining different open source programs use different naming scheme for their API.
4. "Arbitrary use of jargon where simpler terms would suffice". I do not believe this happens often in mathematics. Mathematicians likes things to be elegance, simple and precise. There must be reasons for us to use a word, otherwise we won't use it.
5. "at least programmers are willing to occasionally refresh our idioms and notations to better reflect our mutual understanding."
Mathematicians do this all the time, not just "occasionally". It just doesn't happen in mature, well-established topics where all the problems are dry. There is nonstandard analysis, complementary to standard analysis. There is homotopy type theory, offering another foundation for mathematics. Many things only seem set in stone because there really isn't anything left to be done.
Yes, but not knowing calculus, how would he know that? Or any of those things that you have pointed out? They are certainly not taught at school (where we are unhelpfully taught an entirely different "mathematics" from the one you get at university and will often require for advanced software work), and none of this stuff is ever pointed out in maths books.
Instead, the reader of a maths textbook is unsettled by the constant unspoken assumptions and unexplained concepts that he is assumed to already know. University-level maths books show what looks like an almost wilful disregard for how people are actually taught mathematics at high school. How do students going from school to university cope? Is there some secret occult ritual where all this knowledge is transmitted?
That's how it looks for those of us who are trying to teach ourselves, and it is in marked contrast to the experience of learning almost anything else, particularly programming.
As for physics, although I have never studied it to any depth, I did enjoy reading the Light and Matter series of textbooks a few years ago (http://www.lightandmatter.com/).
The way the lower level courses are taught IS similar to high school math. Low level calculus in my undergrad institution is almost the same as AP calculus in my high school. If not, then the course picked the wrong textbook.
Anything above calculus, it is fair for textbook writers to assume mathematical maturity.
"How do students going from school to university cope? is there some secret occult ritual where all this knowledge is transmitted?"
To be able to self-teach mathematics, one has to learn whatever mathematicians do by oneself. This is not impossible, but it is difficult.
Here are some disadvantages:
1. It's hard to assess one's own mathematical ability.
2. No one can give you feedback (unless you have someone with enough mathematical maturity who also has enough time to read and correct your proofs). Programming is so much easier because you get partial feedback from the compiler/interpreter and from the output. In fact, anything where you can see something happen is much easier. Mathematicians need to prove that what we see happen is really true and not a wrong intuition.
3. Math books do not try to hold your hand. They leave out many details to be filled in by the reader (the notorious "the proof is left as an exercise to the reader"). Sometimes readers without enough background gain a wrong intuition, which screws up everything further down. It is not easy, and it would be really nice to have a professor to talk to.
Now, about this "secret occult ritual": it is basically the undergraduate mathematics curriculum, beginning with the first introductory proof class. (Depending on the department, this might be as late as the beginning of the 3rd year of study.)
At UIUC, there is MATH 347.
At Stony Brook, there is MAT 200. Around 2/3 of the students have to retake it. Imagine this: a set of math majors learning these things full time, with study groups, WITH FEEDBACK, and 2/3 of them didn't get a C. It's not an inherently easy thing to learn. The entire class exists to teach people to fight their own intuition and the mental shortcuts we humans make every day.
Once this is done, students can take a higher-level topic (an intro to analysis or abstract algebra) to get a feel for applying the techniques from the intro proof class. It's a long process, and there is no easy way. See this book, Counterexamples in Analysis: http://www.amazon.com/Counterexamples-Analysis-Dover-Books-M... Half of the things I would have believed to be true from intuitive arguments turn out to be completely wrong.
Finally, one might question why one has to become half a mathematician in order to use some of the tools of mathematics. Because most math books are written for people with enough mathematical maturity, which can only be gained by grinding through mathematics, and without enough maturity it doesn't make sense to learn certain things anyway.
I'm writing as a self-taught programmer who has spent years trying to get a handle on the maths of game development and computer graphics; for the most part that means calculus, linear algebra and geometry.
I went to high school in the UK where I was not taught calculus, or for that matter even told that I could go to university. My experiences did lead, I must admit, to some hostility towards the educational establishment and a strong desire to succeed without their help!
I first learned of calculus when I was about 18 and bought a book on computer graphics programming. I was all geared up to start doing some cool 3D stuff when all of a sudden I saw this strange elongated "S" symbol, which of course was not explained in the text. This confused me, especially since the book had a mathematical appendix describing the simplest vector operations (which I did learn in high school) in some detail. And that was the beginning of my frustration!
There just seems to be such an enormous gulf between high school, "everyday" mathematics and anything coming after. I would like to think that in a world where programming is now so popular, the border between everyday mathematics and the higher reaches of academia would shift a little and a kind of intermediate area would open up.
But still, programming seems far more willing to fix mistakes than hard sciences/math. We've been clanking around with Pi instead of Tau for how many centuries?
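(Python did eventually fix this one, for what it's worth: math.tau arrived in 3.6.)

    import math

    # tau = 2*pi: one full turn, no factor of 2 to remember.
    print(math.tau)     # 6.283185307179586
    print(2 * math.pi)  # same value

    radius = 3.0
    circumference = math.tau * radius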
Is case sensitivity a bad artifact? It doesn't seem like it is to me anymore, but my former self sure thought so.
That way names would have to be referred to as written, but you would avoid the potential confusion of having multiple names that differed only in the case of their characters.
Lots of programmers would probably regard this as some terrible infringement on their freedoms but, like python's whitespace indentation, I think it might work out OK.
For example, let's say you want to write code that communicates with an Arduino. You can define a class named "Arduino" that will be instantiated in other classes, and the instances can be called "arduino", with no problems at all. In case-insensitive languages one has to resort to worse names, like "arduino_t" or "arduino_instance".
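A minimal Python sketch of that pattern (names illustrative):

    class Arduino:
        """Talks to a connected Arduino board."""
        def __init__(self, port):
            self.port = port

    # Case sensitivity lets the type and the instance share a name,
    # differing only in capitalization -- no "arduino_t" workaround.
    arduino = Arduino("/dev/ttyUSB0")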
So variables only with variables, classes only with classes, functions only with functions?
(I.e. you can define a class called Arduino (bad name, by the way) and a variable named arduino, and it won't complain, but if you then try to define a class called "ARDUINO" or a variable named "ARDUINO" it will error out.)
I haven't decided if I like it or not.
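A toy Python sketch of the rule as I understand it (entirely hypothetical): each kind of name gets its own namespace, and within a namespace two names may not differ only in case.

    namespaces = {"class": {}, "variable": {}}

    def declare(kind, name):
        # Reject a name that collides, ignoring case, with a
        # differently-cased name in the same namespace.
        table = namespaces[kind]
        key = name.lower()
        if key in table and table[key] != name:
            raise NameError(f"{kind} {name!r} differs only in case "
                            f"from existing {table[key]!r}")
        table[key] = name

    declare("class", "Arduino")      # ok
    declare("variable", "arduino")   # ok: different namespace
    try:
        declare("class", "ARDUINO")  # rejected
    except NameError as err:
        print(err)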
Eventually I just ragequit math because I didn't have the patience and time to search for the intuition myself, which is a pity. I'm really happy I followed the classes on mathematical proofs, groups, etc., because I really enjoyed those, but the others were just painful.
Anyhow, recently I discovered this blog to up my math concerning computer science: http://jeremykun.com/. It's a tad more advanced than this one, but I'd really recommend it. His blog is freaking amazing and always a joy to read. Math as a side hobby it is! :)
The concise explanations here seem more helpful than semester-long university courses I've taken. I definitely look forward to exploring his other articles on math and programming.
There's a strange tendency in technical explanations to expand, expand, expand. A heuristic I have: Imagine you're writing a letter to yourself, before you started the course. There's no reason to waste your own time, just cover the key difficulties and how to overcome them, in simple language you would have understood.
(Oh, this letter also costs $5 a word, so let's make them count!)
This looks like it will be my commute reading for the week. But more than that it just makes me happy that it exists.
"Cheatsheet" might not be the best term for the summary page, perhaps "quick reference". I picked cheatsheet because it's a little more approachable than "reference", which implies something formal (and perhaps offputting).
My meta goal is to raise people's standards for what it means to understand something. ("If it didn't click, I should keep looking for a better explanation." vs. "It didn't click, I'm no good at this.")
This article is simply amazing; moreover, Kalid is a humble person who credits his inspirations from several people around the web.
Thank you very much, Kalid, for the time you dedicated to sharing all these wonderful resources.
I really appreciate the encouragement and hope to keep cranking as long as I can.
The sense I mean is closer to "grokking" or "clicking", when symbols and definitions actually mean something. You start thinking with the symbols, not despite them. You actually look forward to checking your understanding with real problems, because when it clicks the problems become easy :).