The Feynman Lectures on Physics: Algebra (1963) (caltech.edu)
288 points by signa11 on Aug 14, 2019 | 40 comments

One thing I will always appreciate about Feynman as a teacher is that he never shies away from telling you why things work the way they do. I've spoken with a lot of people who "hate math" and it's almost always because they never had a teacher that went into the whys and wherefores behind equations, tables, graphs, etc.

Here we have a plainspoken yet rigorous explanation of what algebraic operations actually mean (a + b is adding 1 to a, b times; a × b is adding up b copies of a; a^b is multiplying by a, b times; and so on).
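That ladder of definitions can be written out directly as loops. A minimal sketch, valid for natural-number arguments only (function names are mine):

```python
def add(a, b):
    # a + b: start at a and count up by 1, b times
    for _ in range(b):
        a += 1
    return a

def mul(a, b):
    # a * b: add up b copies of a, using only add()
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a, b):
    # a ** b: multiply 1 by a, b times, using only mul()
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result
```

Each level is just a loop over the one below it, which is exactly why the pattern stops being a definition once b is no longer a counting number.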

I wish more students would have the chance to build up such intuition, for such intuition is the key to keeping oneself interested in math and not finding that one "hates" it.

Some argue that multiplication is better explained as scaling a by a factor of b, which is best illustrated with a diagram; I believe 3Blue1Brown covers this graphically in several videos. For one thing, it unifies division with multiplication as an inverse scaling.

It took me a while to grok why thinking about multiplication as scaling made sense, but given how much it comes up in discussion I thought it was pretty important to understand. I think having been taught that multiplication was repeated addition as a kid made it difficult for me to think about multiplication as scaling. Once the "light came on", however, it was very intuitive. It's definitely better to think of repeated addition as a "trick" that just happens to agree with multiplication on the natural numbers, rather than thinking that multiplication IS repeated addition.

Edit: I've heard a similar statement made re: exponentiation -- that it's not "just repeated multiplication". I'd love to have a similar flash of understanding about exponentiation to the one I had when I learned to think about multiplication as scaling. Is the "exponentiation is not repeated multiplication" statement true, and if so, does anybody have a recommendation for something I can read about it?

I would dearly love to be more in touch with math than I am. I stopped at intro college algebra and never felt overly comfortable with trig. From talking with friends who have deeper math backgrounds I can tell that calculus would add a richness to my experience that I lack, if only because I can see a richness that my basic understanding of algebra brings when I talk to people for whom algebra isn't intuitive. I hope I can get it together enough to spend some time learning more about math before I end up old(er) and grey(er).

I stopped enjoying math for about this reason. My university tried to teach with proofs, which to me is about as far from an intuitive understanding as possible.

Looking for recommendations on lectures or courses that go over linear algebra in this way.

> I stopped enjoying math for about this reason. My university tried to teach with proofs, which to me is about as far from an intuitive understanding as possible

Interesting, because until I had the proofs for e.g. differentiation, or heck even the quadratic formula, they were meaningless rote learning. The proof is the intuition, imo.

Exactly! Proofs are why I love mathematics so much. There's nothing quite like the great "ah-ha!" moment when you find that one leap of logic that topples the rest of the dominoes in a proof. Of course, I understand that some people are better at teaching proofs by involving students in the discovery. Maybe the parent commenter was taught proofs in that same rote manner. I've had professors that have done that, and it's like someone sucked all the fun and learning out of the subject.

From the preface of Computation: Finite and Infinite Machines by Marvin L Minsky:

> The reader is therefore enjoined not to turn too easily to the solutions; not unless a needed idea has not come for a day or so. Every such concession has a price—loss of the experience obtained by solving a new kind of problem. Besides, even if reading the solutions were enough to acquire the ability to solve such problems (which it is not), one rarely finds a set of ideas which are at once so elegant and so accessible to workers who have not had to climb over a large set of mathematical prerequisites. Hence it is an unusually good field for practice in training oneself to formalize ideas and evaluate and compare different formalization techniques.

For proofs, could you recommend any good resources for a beginner? Is there a 'beginner proof' that's great to start with?

I figured out I actually like maths waaaay after I'd left uni. From that time at uni I have a vague memory of proofs being something like a whiteboard full of equations that I got lost somewhere in.

I have a vague feeling that what I'm thinking of is 'formal proofs', but I'm not sure.

Euclid. He set out to prove the theorems of basic plane geometry (hence "Euclidean geometry"). Since we all have an intuitive understanding of (at least the basics of) plane geometry, you can study the proofs without also having to learn the domain.

People recommending the classics can come off as pretentious so I will add that I am serious: a modern book of Euclid's methods should be quite accessible.

As a follow-on bonus: Minsky and Papert's 1969 book "Perceptrons" (the one that showed a single-layer network can't compute XOR, though a multilayer one can), which led to many years of diminished interest in neural networks, is largely about applying such networks to geometric predicates. So you can go from one to the other!

The first upper-division course I took in college concerned itself solely with learning the art of mathematical proofs. I had an excellent professor, so I can't really say how much this book helps with the learning process when used by itself, but we were assigned An Introduction to Mathematical Reasoning: Numbers, Sets and Functions, by Peter J. Eccles. Might be a good place to start!

I really like "Mathematical Proofs: A Transition to Advanced Mathematics" by Chartrand, Polimeni and Zhang. If you're looking for a free option, "Book of Proof" by Hammack also looks good, but I have less experience with it ( https://www.people.vcu.edu/~rhammack/BookOfProof/ )

Both proof and explanation seem to be important. Two experiences with explanation made a particular impression on me.

First, I found it hard to get much out of a Real Analysis course when I was a grad student, only partly because of a lack of explanations. Probably a bigger problem was a cultural mismatch between physical-scientist me and the math prof who taught the course. My interest was mostly that I was actually doing path integral Monte Carlo calculations (for my Chemistry thesis) and wanted to make sure I understood the fundamentals. The prof, like many (most?) mathematicians, seemed more interested in investigating ingeniously weird boundary cases. So the course didn't seem to teach me much about the gotchas that might come up in actual statistics or numerical analysis, and instead more about the ingenuity of mathematicians in constructing absurdly farfetched abuses of e.g. the axiom of choice.

But besides that cultural mismatch, the lack of an explanatory framework sure didn't help. Thus, I was very happy decades later when I ran across Terence Tao's book on measure theory (available as a free manuscript online), which covers a lot of the same material with quite a good framework of motivation and explanation wrapped around its proofs.

I also like Vapnik's _The Nature of Statistical Learning Theory_, which as I understand it is highly parallel to a much longer proof-heavy version of most of the same material. I much preferred this book to the approach in my undergraduate course in statistics. Again, the difference wasn't only lack of explanations (also, e.g., not enough grounding in proof or plausibly provable propositions, and too narrow a basis in a frequentist worldview assumed without any explicit justification), but the lack of an explanatory framework sure didn't help, so later I welcomed Vapnik's explanations of his approach. I have never been motivated to read the proof-heavy parallel book by Vapnik, but I do find it reassuring that it exists, in case I ever work with problems weird enough to start raising red flags about breaking the comfortable assumptions in my informal understanding of the statistics.

It's honestly harder than it should be to find well-written primers on linear algebra with a geometric focus, which I find to be the most intuitive way to "grasp" the subject and the motivations for it. I like the video lectures on MIT's website a lot for a variety of topics, and the one on linear algebra is pretty solid IMHO: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra...

I also like the posts on this blog as well: https://dhruvp.netlify.com/2018/12/31/matrices/

This guy[1] doesn't necessarily hit all of the meat & potatoes, but covers some of the more interesting topics and does so in an amazing visual way. I'd highly recommend his videos (especially the playlist covering linear algebra: "Essence of Linear Algebra") as a supplement to more traditional mediums.

3Blue1Brown[1]: https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw

Sorry for being pedantic, your comment is amazing for mentioning him. However, I'm a fanboy of 3B1B and I just have to say this.

It's 3Blue1Brown, not the other way around. He named it after his eye color which apparently has 3 parts blue and 1 part brown.

Haha, thanks for pointing that out. I've watched literally every video of Grant, and am familiar with the source of the name - but for some reason I always mix it up. In my head I just say "3B1B", which I think is where I confuse myself.

You might enjoy "Linear Algebra Done Right" by Sheldon Jay Axler.

The main thesis is that proving everything using determinants hides the intuition for what's really going on.

I learned a ton from the linear algebra section of the Stanford Machine Learning course on Coursera:


I'm sadly now convinced that that approach only works for "simple" concepts. For instance, I have yet to find a single book, course or teacher that can explain measure theory in a way that cultivates intuition.

They are simple because they are close to intuition one already has. Mathematical intuition is not something one is born with; it takes time and effort to develop.

And this is what gets me about Feynman, why I am a huge admirer of his work.

Take a look at the footnote: an offhand comment on how to calculate the square root of any N using a short iterative loop that you can compute by hand.

I'm no mathematician (clearly) but I had no idea that this method even existed. I've just had to try it out.

And it's a footnote!

I have the book of Feynman's computation lectures and they're dated, but most of the theory is still relevant and his writing remains accessible to anyone.

There are many stories of how Feynman did fairly profound work in an offhand way. Two that I can remember:

1. A number of times, young professors would visit with him when they were in town to give a lecture, and would discuss a recent result they had come up with, and Feynman would say that reminded him of something, pull out a sheet of paper from his drawer, and say effectively, "yes, looks like your result is correct".

2. A more specific version of that was captured in the link below, when Feynman worked out black hole radiation on a blackboard with some grad students over lunch. The blackboard was erased and the result lost, until Hawking published the same result a year later, propelling him to fame (deserved, of course). http://nautil.us/blog/the-day-feynman-worked-out-black_hole-...

He was a gifted theoretician with an ability to see how/why things worked and had the mathematical ability to express that understanding rigorously.

Here's the method that's being applied: https://en.wikipedia.org/wiki/Newton%27s_method

It's an iterative method for finding x such that f(x)=0 for a given function f, in this case f(x) = x^2 - N, where N is the number you want the square root of.

The special case has such a simple expression, though, that I'm tempted to commit it to memory.
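For reference, plugging f(x) = x^2 - N into Newton's update x' = x - f(x)/f'(x) gives the special case x' = (x + N/x)/2. A quick sketch (function name and tolerance are mine):

```python
def newton_sqrt(n, x=1.0, tol=1e-12):
    # Newton's method for f(x) = x^2 - n:
    #   x' = x - (x^2 - n) / (2x) = (x + n/x) / 2
    while abs(x * x - n) > tol:
        x = (x + n / x) / 2
    return x
```

The update averages the current guess x with n/x; since the true root sits between them, each step roughly doubles the number of correct digits, which is why it works so well by hand.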

Edit: Another nice example, for when you have access to a calculator that can compute powers but not logarithms: a' = a - 1 + x/c^a will converge to the base-c logarithm of x (though it will converge slowly if you start very far from the solution).
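The update a' = a - 1 + x/e^a is exactly Newton's method for f(a) = e^a - x; for a general base c, the Newton step additionally divides the correction by ln c (a one-time constant, equal to 1 when c = e). A sketch under those assumptions (names are mine):

```python
import math

def newton_log(x, c=math.e, a=1.0, tol=1e-12, max_iter=100):
    # Newton's method for f(a) = c^a - x:
    #   a' = a - (c^a - x) / (c^a * ln c)
    # For c = e this reduces to a' = a - 1 + x / e^a.
    for _ in range(max_iter):
        step = (c ** a - x) / (c ** a * math.log(c))
        a -= step
        if abs(step) < tol:
            break
    return a
```

Note that only powers are computed inside the loop, matching the "calculator with powers but no logs" constraint; without the 1/ln c factor the plain update can diverge for bases much larger than e.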

At its core, it's also what drives Carmack's Fast Inverse Square Root (https://en.wikipedia.org/wiki/Fast_inverse_square_root).

Yes, yes, I know Carmack didn't invent it.

The method in question was known for millennia before Newton; it is believed to have been known to the Babylonians:


That loop is Newton's method applied to square roots; it is very well known and taught in intro calculus classes.

There’s something delightfully physicsy about deriving Euler’s identity, which famously deals with the irrational and the imaginary, by brute force calculation using log tables and fudging the fifth decimal place. But the approach of sneaking backwards into the algebraic continuation of e^x by actually doing the arithmetic is a great way to viscerally get that it’s all just the same set of mathematical tools.
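In that spirit, here's a sketch (mine, not Feynman's log-table route) that reaches e^(i*theta) by nothing but real arithmetic: sum the exponential series while tracking the real and imaginary parts of each term separately.

```python
import math

def exp_i(theta, terms=30):
    # e^(i*theta) = sum over n of (i*theta)^n / n!
    # Keep the real and imaginary parts of each term as two floats,
    # so no complex arithmetic is ever used.
    re, im = 0.0, 0.0
    term_re, term_im = 1.0, 0.0  # the n = 0 term
    for n in range(terms):
        re += term_re
        im += term_im
        # next term = current term * (i*theta) / (n + 1)
        term_re, term_im = (-term_im * theta / (n + 1),
                            term_re * theta / (n + 1))
    return re, im

# Euler's identity, sneaked up on arithmetically:
# exp_i(math.pi) should come out very close to (-1, 0).
```

Thirty terms is plenty for theta = pi, since pi^30/30! is far below double precision.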

If you can find it, an audio recording of this lecture is also available, and it is one of the best lectures in the series. Feynman really brings forward the "soul" of algebra and illustrates how powerful the art of abstraction is. Audible has all the audio lectures for sale, which is how I got them onto my Kindle, and I have also seen them floating around the net in other formats. I wish Caltech would release the audio material like they did the books!

The article has a link to a page which has the audio:


The audio in the link appears to only be a 5 minute excerpt, not the full 50 minute lecture.

The Feynman Lectures are among my favourite books ever. Yet this chapter specifically is the least enjoyable for a mathematician. Somehow I find he's trying to explain very simple concepts using way too many words, which is the opposite of the style of the rest of the book.

It's deliberate. In his book The Character of Physical Law, Feynman discusses some differences in how physicists and mathematicians approach math.

I was recently reading Hard Drive, the 1993 bio of Bill Gates (it's great to go back and read old books centered on tech; it really reminds you why people hated Bill/Microsoft, and he'd barely gotten started!), and it mentions several times that Gates would watch these lectures in what he considered his "down time".

My high school math teacher taught us a class where he followed this general outline, building up the number systems based on which equations we couldn't solve.

Now I know where he must have gotten it from.

I think it must have been around grade 10 or 11.

He does a numerical computation of the complex powers of 10 (to five decimal places) and they still are on the unit circle. And this was done in 1963.
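You can replay that check today in a couple of lines; every purely imaginary power of 10 lands on the unit circle, since 10^(i*t) = cos(t ln 10) + i sin(t ln 10). A quick sketch, printing to the same five decimal places:

```python
# 10^(i*t) = e^(i*t*ln 10) = cos(t*ln 10) + i*sin(t*ln 10),
# so every imaginary power of 10 has magnitude 1.
for t in [0.2, 0.5, 1.0, 2.0]:
    z = 10 ** (1j * t)
    print(f"10^(i*{t}): {z.real:+.5f} {z.imag:+.5f}i  |z| = {abs(z):.5f}")
```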

This specific Feynman Lecture popped up very recently either here or on lobste.rs. I have since bookmarked the "FLP" series as I really want to come back to it, but I'm curious if there is any significance to why the Algebra lecture in particular was submitted a couple of times. Is it just coincidence, or was it a particularly good one?

It's one of my favourites. Aside from Feynman's clear and enthusiastic style, I find it's remarkable you can start from counting 1,2,3 and by a bit of reasoning come up with logs, complex numbers and:

>This, then, is the unification of algebra and geometry.

I can't help wondering if you could go further and come up with some physics.

That material is covered in all STEM curriculums as part of the Calculus sequence.

For most students, the sublimeness of getting to the Reals and beyond tends to get lost in the grind of lectures, problem-sets and exams.

Feynman had a knack for getting straight to the essence of a subject, that's why people love him so much.

As for algebra + geometry with physics, there are people who have cut a path to that (see texts by Hestenes), unfortunately, it requires some mathematical sophistication and most curriculums simply don't have the time to reach that level for undergrads, given the way that they're structured.

I bet it has something to do with the build up to Euler's formula, which is famous for blowing people's minds.

The first paragraph seems to address this: because it's enjoyable.
