It builds up the theory of differential geometry using Scheme, just like SICP and SICM, and is a fantastic read for programmers wanting to grasp the topic in a more familiar language.
It is available for download for free on the publisher's website: https://mitpress.mit.edu/books/functional-differential-geome...
FYI for everyone looking for the download link, it's a bit hidden, so here it is: https://www.dropbox.com/s/t3si4b99ijqyhyk/9580.pdf?dl=1
Licensed under Creative Commons BY-SA, too.
This concern means that you are looking at it backwards. Often, it happens in the opposite sense. You will find some problems that are hard to model or understand. Then, you realize that with quite a lot of effort you might be able to tackle them. And then, you learn about differential forms and see how they allow you to express your problem very clearly, and its solution becomes sort of immediate.
There is nothing mysterious about differential forms from the point of view of physics, but pure math texts often take this intuition for granted. If you are used to working with scalar and vector fields in space, you may realize that there are different kinds of each:
Examples of scalar fields:
(1) a potential
(2) a density
Examples of vector fields:
(3) a velocity field
(4) a flow
(5) the gradient of a scalar field (say, the concentration of a substance)
(6) the field of normal vectors on a surface
(7) the field of tangent vectors to a curve
(8) a field of "surface elements" filling the whole space
This list includes all particular cases of vector fields and differential forms of all orders on R^3 and on its sub-manifolds.
If you know the least amount of physics, you will realize that you can do some kinds of integrals on these objects, but not all of them. For example, in the case of scalar fields, you can integrate a density over a domain, or you can evaluate a potential at one point (or more often, take the difference of potential between two points). Thus, potentials are 0-forms and densities are 3-forms. A similar reasoning applies to the vector fields, which turn out to be 1-forms and 2-forms.
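To make that distinction concrete, here is a small numpy sketch (the potential and density are made up for illustration): a 0-form is evaluated at points, while a 3-form is integrated over a region.

```python
import numpy as np

# Hypothetical fields, made up for illustration.
phi = lambda x, y, z: x**2 + y**2 + z**2             # a potential (0-form)
rho = lambda x, y, z: np.exp(-(x**2 + y**2 + z**2))  # a density (3-form)

# 0-form usage: evaluate at points; only the difference is physical.
dphi = phi(1.0, 0.0, 0.0) - phi(0.0, 0.0, 0.0)       # = 1.0

# 3-form usage: integrate over a region, here the cube [0,1]^3
# (midpoint rule on an n^3 grid).
n = 100
t = (np.arange(n) + 0.5) / n
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
mass = rho(X, Y, Z).sum() / n**3                     # ≈ 0.4165

print(dphi, mass)
```

There is no sensible way to do it the other way around (integrate the potential over a volume, or evaluate the density's "difference" between two points, and expect coordinate-independent answers), which is the point of the classification.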
I think that first 'all' shouldn't be there, right? That is, these are all possible orders of vector fields and differential forms (among which the various Hodge dualities permit lots of identifications) on submanifolds of ℝ^n, but they're not all the possible particular cases of such vector fields and differential forms, in the sense that there are plenty of other, different physical situations that lead to the same mathematics (which, as you argue, is why the concept is so useful).
There are other uses. They form the basis of de Rham cohomology, which is a useful, computable way of describing topological properties of a manifold (recall how Stokes's theorem and friends show how the topology of a space constrains the integrals of differential forms).
Another thing is that they're specific kinds of tensors on the manifold. Tensors represent basically all the information we might be interested in about a manifold (for example, its curvature). Differential 1-forms are "dual" to vectors and are therefore important to building more complex tensors (higher tensors take in some number of vectors and 1-forms and output a value).
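A minimal numpy sketch of that duality, with made-up components: a 1-form eats a vector by contraction, and a tensor product of two 1-forms is a higher tensor that eats two vectors.

```python
import numpy as np

# Components of a hypothetical 1-form (covector) at a point: a linear
# map that takes a tangent vector to a number, via contraction.
alpha = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, 1.0])
val = alpha @ v                 # alpha(v) = 1*4 + 2*0 + 3*1 = 7

# Building a (0,2)-tensor from two 1-forms by tensor product:
# (alpha ⊗ beta)(u, w) = alpha(u) * beta(w).
beta = np.array([0.0, 1.0, -1.0])
T = np.outer(alpha, beta)

u = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 2.0, 1.0])
print(u @ T @ w, (alpha @ u) * (beta @ w))   # both 3.0
```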
And just as ordinary integration and differentiation relate to solving differential equations, differential forms are needed for differential equations posed on a manifold.
This was how differential forms were introduced in the course. I understood all of this from an algebraic standpoint, but I was lacking any geometric intuition for differential forms whatsoever. Say you have a k-form on some manifold and you evaluate it at some point, which gives you an alternating covariant k-tensor. Then when you evaluate that at k tangent vectors at the point, you get a scalar. Does this scalar have any geometric meaning? Does it measure anything? Later when we did Riemannian manifolds and introduced the volume form, that was at least a little more intuitive, as far as I remember, but general differential forms were intuitively a complete mystery to me.
Also I kind of got their usefulness in an algebraic sense when we did some typical vector calculus calculations using the concepts of divergence and curl, but I didn't have much intuition for these concepts since I don't have a physics background and only worked with vector fields in this abstract setting.
Unfortunately we did not cover De Rham cohomology. Thanks for your answer, I will take a look at planimeters.
The first example there is: given a base point X and two vectors V,W based at X, the 2-form gives the "signed" area of the parallelogram spanned by V and W. Determinants (which measure n-dimensional parallelograms), when viewed as functions of their column vectors, have all the properties of differential forms.
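This is easy to check computationally. A small numpy sketch of the 2-form-as-determinant picture:

```python
import numpy as np

# The standard area 2-form on R^2: omega(V, W) = det[V W] is the signed
# area of the parallelogram spanned by V and W.
def omega(V, W):
    return np.linalg.det(np.column_stack([V, W]))

V = np.array([2.0, 0.0])
W = np.array([1.0, 3.0])
area = omega(V, W)                   # 2*3 - 0*1 = 6.0

# Properties shared with differential forms:
print(omega(W, V))                   # -6.0: antisymmetry (orientation flip)
print(omega(2 * V, W))               # 12.0: multilinearity in each slot
```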
Differential forms are a bit like generalized determinants and in a sense specify a way to measure something like an abstract volume in the neighborhood of a point of a manifold, in such a way that the Jacobian needed for changing coordinates is "built in".
The important ideas can be found in pages 9-10 in this link:
Note halfway down page 9 we get a really clean equation for how to change variables (change coordinates) in an abstract way.
Also note the simple form that the general n-dimensional Stokes theorem takes in terms of differential forms at the top of this page:
That they allow the expression of substantial theorems in concise form is a clue that they are the "right" way to do differential geometry.
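As a concrete sanity check of that "substantial theorems in concise form" point, here is a numeric verification of Green's theorem (the planar case of the general Stokes theorem), with a 1-form I picked arbitrarily for illustration:

```python
import numpy as np

# Green's theorem (Stokes in the plane):
#   oint_C P dx + Q dy  =  iint_D (dQ/dx - dP/dy) dA
# on the unit disk, with P = -y**3 / 3 and Q = 0, so dQ/dx - dP/dy = y**2.

# Left side: integrate the 1-form around the unit circle.
n = 200_000
theta = (np.arange(n) + 0.5) * 2 * np.pi / n
x, y = np.cos(theta), np.sin(theta)
dx = -np.sin(theta) * (2 * np.pi / n)     # x'(theta) dtheta
lhs = np.sum((-y**3 / 3) * dx)

# Right side: integrate y^2 over the unit disk (midpoint grid).
m = 2000
t = -1 + (np.arange(m) + 0.5) * 2 / m
X, Y = np.meshgrid(t, t, indexing="ij")
inside = X**2 + Y**2 <= 1
rhs = np.sum(Y[inside]**2) * (2 / m)**2

print(lhs, rhs)    # both ≈ pi/4 ≈ 0.7854
```

The boundary integral of a 1-form equals the area integral of its exterior derivative; the same one-line statement covers the fundamental theorem of calculus, Green, Kelvin-Stokes, and the divergence theorem.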
Indeed, in "Tensor Geometry" (Dodson & Poston), they note:
> Most modern "differential geometry" texts use a coordinate-free notation almost throughout. This is excellent for a coherent understanding, but leaves the physics student quite unequipped for the physical literature, or for the specific physical computations in which coordinates are unavoidable. Even when the relation to classical notation is explained, as in the magnificent [Spivak], pseudo-Riemannian geometry is barely touched on. This is crippling to the physicist, for whom spacetime is the most important example, and perverse even for the geometer. Indefinite metrics arise as easily within pure mathematics (for instance in Lie group theory) as in applications, and the mathematician should know the differences between such geometries and the positive definite type. In this book therefore we treat both cases equally, and describe both relativity theory and (in Ch. IX, §6) an important "abstract" pseudo-Riemannian space, SL(2;R).
I think some Grassmannian computations would be good in this context, but on the other hand, they would become very cumbersome very soon.
As someone says below: Spivak's Calculus on Manifolds is exceptionally good.
Let me be more specific:
Suppose your ambient space is $\R^3$ and you are looking at a vector field (let us say your space is full of water and the vector field models the velocity of the water at every point). The vector field $V$ is a $1$-form; it exists on its own.
Now suppose you insert a membrane (2d surface) into the water and want to compute how much water flows through it at any given moment in time. This is the "flow" of $V$ through your surface $S$.
If you go and look how to do this there are intuitive pictures and the computation reduces to
1) parameterize the 2d surface using 2 variables $(u,v)$
2) compute some partial derivatives of the parameterization
3) wedge them (in $\R^3$ this is the cross product of the two tangent vectors)
4) take the dot product with $V$
5) integrate in $u$ and $v$.
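The five steps above can be carried out numerically. A sketch, using the unit sphere and the field V = (x, y, z) as a made-up concrete example; by the divergence theorem the expected flux is 4π:

```python
import numpy as np

# Flux of V(x,y,z) = (x, y, z) through the unit sphere.
# Since div V = 3, the answer should be 3 * (4/3) * pi = 4 * pi.

# 1) parameterize the sphere by (u, v) = (polar angle, azimuth)
n = 400
u = (np.arange(n) + 0.5) * np.pi / n
v = (np.arange(n) + 0.5) * 2 * np.pi / n
U, Vv = np.meshgrid(u, v, indexing="ij")

def r(U, Vv):
    return np.stack([np.sin(U) * np.cos(Vv),
                     np.sin(U) * np.sin(Vv),
                     np.cos(U)])

# 2) partial derivatives of the parameterization (central differences)
h = 1e-6
r_u = (r(U + h, Vv) - r(U - h, Vv)) / (2 * h)
r_v = (r(U, Vv + h) - r(U, Vv - h)) / (2 * h)

# 3) "wedge" them: in R^3 this is the cross product r_u x r_v
normal = np.cross(r_u, r_v, axis=0)

# 4) dot with the vector field, which equals r itself on the sphere
integrand = np.sum(r(U, Vv) * normal, axis=0)

# 5) integrate in u and v (midpoint rule)
flux = integrand.sum() * (np.pi / n) * (2 * np.pi / n)
print(flux)        # ≈ 4 * pi ≈ 12.566
```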
At first this seems like magic but whoever is explaining the procedure draws a bunch of pictures to explain why this is reasonable and tries to convince you. Usually they eventually manage.
However this is only part of the story. You see, you have a map that inputs $V$ (the vector field) and $S$ (the surface) and spits out a number. Furthermore, this map is intuitively "continuous" in the sense that if you change $V$ a bit or $S$ a bit, you do not expect the result to change much. However, if you try to prove or even state this at any mathematical level, you run into trouble!
The reason is that the way you defined integrating the vector field DEPENDS on the parameterization, and worse, it depends on it at the very first step of your procedure. If you have two membranes that are "close", how can you even expect their parameterizations to be "close"? You can't! Even the SAME surface can have drastically different parameterizations.
So clearly you need to abstract away the coordinates so you can talk about continuity, stability, perturbation.
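One way to convince yourself that the answer (though not the procedure) is coordinate-free: compute the same flux with two unrelated parameterizations and watch them agree. A numpy sketch, using a spherical cap and V = (x, y, z) as an arbitrary example:

```python
import numpy as np

# Flux of V = (x, y, z) through the cap z >= 1/2 of the unit sphere,
# computed with two unrelated parameterizations; both should give pi.

# Parameterization A: spherical angles, theta in [0, pi/3].
n = 2000
theta = (np.arange(n) + 0.5) * (np.pi / 3) / n
# For this field and surface the integrand works out to sin(theta),
# independent of the azimuth phi, so the phi integral contributes 2*pi.
flux_a = np.sum(np.sin(theta)) * (np.pi / 3 / n) * (2 * np.pi)

# Parameterization B: graph z = sqrt(1 - u^2 - v^2) over the disk
# u^2 + v^2 <= 3/4; here the integrand works out to 1/z.
m = 2000
s = np.sqrt(3) / 2
t = -s + (np.arange(m) + 0.5) * 2 * s / m
U, W = np.meshgrid(t, t, indexing="ij")
inside = U**2 + W**2 <= 3 / 4
Z = np.sqrt(1 - U[inside]**2 - W[inside]**2)
flux_b = np.sum(1 / Z) * (2 * s / m)**2

print(flux_a, flux_b)   # both ≈ pi ≈ 3.1416
```

The two parameterizations are nowhere near each other as maps, yet the flux agrees; the coordinate-free machinery is what makes that agreement a theorem rather than a coincidence.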
Let us get back to abstract definitions. You know that you can integrate 2-forms on 2-manifolds (2d surfaces). You are used to having a 2-form DEFINED on a 2-manifold (so you don't really see the difference between integration and 2-forms). However we do know that we have this rather standard procedure of computing the flow of a vector field (1-form) through a 2-manifold (2d surface). How so? It seems that for whatever reason a vector field is ALSO a 2-form. And it is a 2-form just floating around R^3 in the same way a vector field (the velocity of water) exists independently of whether you are computing how much of it is flowing through a given surface.
So how is this the case? This is exactly an instance of Hodge duality. Since the ambient space $\R^3$ has a volume form (3-form) there is an intrinsic association from $k$ forms to $3-k$ forms (specifically, given a $k$ form the associated $3-k$ form is that unique $3-k$ form such that wedged with the original gives you the volume form).
So there you go! Given a vector field you have an associated 2-form in $\R^3$ that is there, by itself, without needing any 2-manifold to justify its existence. In practice, if $V=(V_x,V_y,V_z)$ then the 2-form is $V_x dy dz + V_y dz dx + V_z dx dy$.
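You can check the identification directly with numpy: evaluating that 2-form on a pair of tangent vectors is exactly the scalar triple product, i.e. a determinant, which is the Hodge pairing with the volume form written out in coordinates.

```python
import numpy as np

# The 2-form  Vx dy dz + Vy dz dx + Vz dx dy  associated to V,
# evaluated on a pair of tangent vectors (a, b):
def two_form(V, a, b):
    return (V[0] * (a[1] * b[2] - a[2] * b[1])
          + V[1] * (a[2] * b[0] - a[0] * b[2])
          + V[2] * (a[0] * b[1] - a[1] * b[0]))

rng = np.random.default_rng(0)
V, a, b = rng.standard_normal((3, 3))

# It equals the scalar triple product V . (a x b) = det[V a b]:
# the volume form evaluated on (V, a, b).
print(two_form(V, a, b))
print(np.dot(V, np.cross(a, b)))
print(np.linalg.det(np.column_stack([V, a, b])))
```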
And if by chance it encounters a 2d surface, it can naturally be integrated through it. The Hodge duality above actually expresses, in a very concise form, the multiple steps of HOW to compute the flow (the procedure we started with).
Another good book is Advanced Calculus: A Geometric View by James Callahan.
Thank you for this! For those of us who had real difficulty with Advanced Calculus, Callahan's methodical, visual, generous approach is deeply felt and appreciated. I did not know of this book until now and immediately found myself absorbed. It's embarrassing to admit, but as one who loves mathematics yet seems to struggle and stagnate more often than everyone around me, I often want to ask for, and indeed need, a bit of hand-holding. Callahan is a wonderful guide in that sense. Thanks again.
Definitely check out Edwards' book I mentioned above as well. It is a gem of a book. Although it doesn't use matrices and instead uses linear expansions, it is still brilliant. The first three chapters give an exposition of the theory, and then the next three go back and prove things. So if anything, take a look at the first three chapters and then the later ones on applications and extensions. It also has a geometrical viewpoint.
The figures on the slides are really great. Hope this helps:
BTW: If you buy from Springer, you get a free pdf of the book immediately while you wait for your physical copy, because of the delay for print on demand. They say you don't actually "own" the digital edition (can't remember the exact wording), but I can vouch that it's not time-limited. It's a very good deal.
Edwards' first three chapters give a wonderfully intuitive exposition of forms and their application to integration.
Tu's book is a rigorous study of smooth manifolds and differential forms. His exercises are approachable, and his book is the most expedient route to the full theory of differential forms.
As a quirky intuition pump, I recommend Geometrical Vectors by Gabriel Weinreich. The Fortney book mentioned in another comment is a nice, visual book, and there are other references in the replies to the comment I linked.
This distinction is probably related to the Lebesgue integral's inability to integrate functions unless they are absolutely integrable (it computes the positive and negative parts separately and takes their difference, which only makes sense if both are finite). The Henstock-Kurzweil integral is different: it works directly with a tagged partition of the underlying set (which implies an orientation) and is able to integrate (some) functions which are not absolutely integrable.
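The standard illustration (my example, not the parent's) is sin(x)/x on [0, ∞): its improper integral converges to π/2, but the positive and negative parts each diverge, so it is not Lebesgue integrable. A quick numeric sketch:

```python
import numpy as np

# sin(x)/x is improperly Riemann (and Henstock-Kurzweil) integrable on
# [0, inf) with value pi/2, but NOT Lebesgue integrable: the running
# integral of |sin(x)/x| grows without bound, roughly like (2/pi)*log(x).
N = 600 * np.pi                       # truncation point (multiple of 2*pi)
x = np.linspace(1e-9, N, 2_000_000)
f = np.sin(x) / x
dx = x[1] - x[0]

signed = np.sum(f) * dx               # ≈ pi/2 ≈ 1.5708
absolute = np.cumsum(np.abs(f)) * dx  # running integral of |f|: unbounded

print(signed)
print(absolute[len(x) // 2], absolute[-1])   # still growing
```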
Nevertheless, moving to differentiable manifolds introduces problems for the Henstock-Kurzweil integral, because the local orientation in charts does not always induce a sensible global orientation. However, integration on a manifold isn't so much integration of a function of several variables as it is integration of several functions of several variables. That one needs additional machinery to deal with the "several functions" part is unsurprising.
I can't recommend studying the Henstock-Kurzweil integral enough! Strangely enough, despite being more general, it is far more approachable than the Lebesgue integral.
It also works better than the Lebesgue integral in multidimensional settings, since it is trivial to create a product f(x)g(y) of two functions which is not Lebesgue integrable but is Henstock-Kurzweil integrable.
As for generalizations to generalized functions, my preference lies with Colombeau algebras over Schwartz distributions in any case, since there at least you have an arithmetic of the generalized functions.
It's a crippling misconception that talent is natural. Michael Jordan made himself the athlete he became; many people had his body but never got as far. Good mathematicians take conscious control of how they learn and think. Our tendency to go "meta" isn't restricted to math; it's applied to ourselves.
a great (even if expensive) math book