Here we have a plainspoken yet rigorous explanation of what algebraic operations actually mean (addition is iteratively adding 1, b times; multiplication is iteratively adding a, b times; exponentiation is iteratively multiplying by a, b times; etc.).
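That hierarchy can be sketched in a few lines of Python, with each operation defined by iterating the one below it (function names are mine, and this only covers non-negative integer b):

```python
def add(a, b):
    result = a
    for _ in range(b):
        result += 1            # addition: add 1, b times
    return result

def mul(a, b):
    result = 0
    for _ in range(b):
        result = add(result, a)    # multiplication: add a, b times
    return result

def power(a, b):
    result = 1
    for _ in range(b):
        result = mul(result, a)    # exponentiation: multiply by a, b times
    return result
```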
I wish more students had the chance to build up such intuition, for such intuition is the key to staying interested in math and not deciding that one "hates" it.
Edit: I've heard a similar statement made re: exponentiation, that it's not "just repeated multiplication". I'd love to have a similar flash of understanding re: exponentiation to the one I had when I learned to think about multiplication as scaling. Is the "exponentiation is not repeated multiplication" statement true, and if so, does anybody have a recommendation on something I can read about it?
I would dearly love to be more in touch with math than I am. I stopped at intro college algebra and never felt overly comfortable with trig. From talking with friends who have deeper math backgrounds I can tell that calculus would add a richness to my experience that I lack, if only because I can see a richness that my basic understanding of algebra brings when I talk to people for whom algebra isn't intuitive. I hope I can get it together enough to spend some time learning more about math before I end up old(er) and grey(er).
Looking for recommendations on lectures or courses that go over linear algebra in this way.
Interesting, because until I had the proofs for, e.g., differentiation, or heck, even the quadratic formula, they were meaningless rote learning. The proof is the intuition, IMO.
From the preface of Computation: Finite and Infinite Machines by Marvin L. Minsky:
> The reader is therefore enjoined not to turn too easily to the solutions; not unless a needed idea has not come for a day or so. Every such concession has a price—loss of the experience obtained by solving a new kind of problem. Besides, even if reading the solutions were enough to acquire the ability to solve such problems (which it is not), one rarely finds a set of ideas which are at once so elegant and so accessible to workers who have not had to climb over a large set of mathematical prerequisites. Hence it is an unusually good field for practice in training oneself to formalize ideas and evaluate and compare different formalization techniques.
I figured out I actually like maths waaaay after I'd left uni. From that time at uni I have a vague memory of proofs being something like a whiteboard full of equations that I got lost somewhere in.
I have a vague feeling that what I'm thinking of is 'formal proofs', but I'm not sure.
People recommending the classics can come off as pretentious so I will add that I am serious: a modern book of Euclid's methods should be quite accessible.
As a follow-on bonus: Minsky's and Papert's 1969 book "Perceptrons" (the one that showed you can't do XOR with a single-layer network, though you can with a multilayer one), which led to 25 years of lack of interest in neural networks, is entirely about using neural networks on Euclid. So you can go from one to the other!
First, I found it hard to get much out of a Real Analysis course when I was a grad student, only partly because of a lack of explanations, admittedly. Probably an even bigger problem was another kind of cultural mismatch, between physical-scientist me and the math prof who taught the course. My interest was mostly that I was actually doing path integral Monte Carlo calculations (for my Chemistry thesis) and wanted to make sure I understood the fundamentals. The prof, like many (most?) mathematicians, seemed to be more interested in investigating ingeniously weird boundary cases. So the course didn't seem to teach me much about the gotchas that might come up in actual statistics or numerical analysis, and instead more about the ingenuity of mathematicians in constructing absurdly farfetched abuses of, e.g., the axiom of choice.

But besides that cultural mismatch, the lack of an explanatory framework sure didn't help. So I was very happy decades later when I ran across Terence Tao's book on measure theory (available as a free manuscript online), which covered a lot of the same material with quite a good framework of motivation and explanation wrapped around its proofs.
I also like Vapnik's _The Nature of Statistical Learning Theory_, which as I understand it runs highly parallel to a much longer, proof-heavy version of most of the same material. I much preferred this book to the approach in my undergraduate course in statistics. Again, the difference wasn't only the lack of explanations (there was also, e.g., not enough grounding in proof or plausibly provable propositions, and the course was too narrowly based in a frequentist worldview assumed without any explicit justification), but the lack of an explanatory framework sure didn't help, so later I welcomed Vapnik's explanations of his approach. I have never been motivated to read the proof-heavy parallel book by Vapnik, but I do find it reassuring that it exists in case I ever work with problems weird enough to start raising red flags about breaking the comfortable assumptions in my informal understanding of the statistics.
I like the posts on this blog as well: https://dhruvp.netlify.com/2018/12/31/matrices/
It's 3Blue1Brown, not the other way around. He named it after his eye color which apparently has 3 parts blue and 1 part brown.
The main thesis is that proving everything using determinants hides the intuition for what's really going on.
Take a look at the footnote. An offhand comment about how to calculate the square root of any N with a short iterative loop that you can compute by hand.
I'm no mathematician (clearly) but I had no idea that this method even existed. I've just had to try it out.
And it's a footnote!
I have the book of Feynman's computation lectures and they're dated, but most of the theory is still relevant and his writing remains accessible to anyone.
1. A number of times, young professors would visit with him when they were in town to give a lecture, and would discuss a recent result they had come up with, and Feynman would say that it reminded him of something, pull out a sheet of paper from his drawer, and say, effectively, "yes, looks like your result is correct".
2. A more specific version of that was captured in the link below, when Feynman worked out black hole radiation on a blackboard with some grad students over lunch. The blackboard was erased and the result lost, until Hawking published the same result a year later, propelling him to fame (deserved, of course). http://nautil.us/blog/the-day-feynman-worked-out-black_hole-...
He was a gifted theoretician with an ability to see how/why things worked and had the mathematical ability to express that understanding rigorously.
It's an iterative method for finding x such that f(x)=0 for a given function f, in this case f(x) = x^2 - N, where N is the number you want the square root of.
The special case has such a simple expression, though, that I'm tempted to commit it to memory.
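For anyone who wants to try it: applying Newton's method to f(x) = x^2 - N works out to repeatedly averaging your guess with N divided by your guess. A minimal sketch (function name and defaults are mine):

```python
def sqrt_newton(N, x=1.0, steps=20):
    """Approximate sqrt(N) via Newton's method on f(x) = x^2 - N."""
    for _ in range(steps):
        x = (x + N / x) / 2    # average the guess with N/guess
    return x
```

Starting from x = 1, the number of correct digits roughly doubles each step once you're close, which is why a handful of iterations by hand already gives a usable answer.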
Edit: Another nice example, for when you have access to a calculator that can compute powers but not logarithms:
a' = a - 1 + x/c^a will converge to the base-c logarithm of x (though it will converge slowly if you start very far from the solution).
Yes, yes, I know Carmack didn't invent it.
Now I know where he must have gotten it from.
I think it must have been around grade 10 or 11.
>This, then, is the unification of algebra and geometry.
I can't help wondering if you could go further and come up with some physics.
For most students, the sublimity of getting to the Reals and beyond tends to get lost in the grind of lectures, problem sets and exams.
Feynman had a knack for getting straight to the essence of a subject; that's why people love him so much.
As for algebra + geometry with physics, there are people who have cut a path to that (see texts by Hestenes). Unfortunately, it requires some mathematical sophistication, and most curricula simply don't have the time to reach that level for undergrads, given the way they're structured.