It occurs to me, however, that we really don't have long-form ways of expressing these equations (especially in more than one dimension). The math can't be simplified, but I was still lost about a quarter of the way through, even though I used to know partial derivatives quite well. The article requires something like four or five semesters of calculus to grok.
I think what makes math so hard to translate from written form to a mental map is that we're still expressing it in sort of a write-only language, like APL.
I’m curious if anyone knows of any other way to express equations. For example:
* can variables be longer than a single letter without making equations too large to “see” at first glance?
* can all of the notation like the one for partial derivative be rewritten as ordinary functions?
* is there a way to “break up” long functions into shorter expressions (like sub-functions) without losing their relationships?
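For what it's worth, the "ordinary functions" idea from the list above can be sketched in plain Python: a partial derivative is just a higher-order function that takes a function and returns a function. All names here (`partial`, `grad`) are mine, nothing standard:

```python
# Sketch: partial-derivative *notation* rewritten as an ordinary
# higher-order function, via central finite differences.

def partial(f, i, h=1e-6):
    """Return a new function approximating df/dx_i by central differences."""
    def df(*args):
        up, down = list(args), list(args)
        up[i] += h
        down[i] -= h
        return (f(*up) - f(*down)) / (2 * h)
    return df

def grad(f, n):
    """The gradient of an n-variable function, as a plain list of functions."""
    return [partial(f, i) for i in range(n)]

# Example: f(x, y) = x**2 * y, so df/dx = 2xy and df/dy = x**2.
f = lambda x, y: x**2 * y
df_dx = partial(f, 0)
df_dy = partial(f, 1)
```

The variable names can be as long as you like, and the relationships between sub-functions stay explicit, at the cost of the compactness that makes an equation "visible" at a glance.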
What I’m getting at is that people are able to visualize large, complex abstractions like SQL, OpenGL, matrices (barely), but not really statistics, probability, calculus, and so on. Have these other disciplines just not had time to be expressed in layman’s terms? I don’t buy that they are fundamentally more difficult than other disciplines. Just more... esoteric. Thoughts?
About partial derivatives being rewritten as functions: it's common for ∂/∂xᵢ to be thought of as a linear operator from the vector space of functions of some number of variables to itself (let's just assume the functions are smooth). The derivative of a function between smooth manifolds is a function (linear transformation), and it contains the idea of all the partial derivatives of a function, with the benefit of being able to reason without using explicit coordinates: bases are arbitrary, after all.
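Sketching that viewpoint in symbols (my notation, under the smoothness assumption above): the partial derivative is a linear operator on the space of smooth functions, and the total derivative at a point is a linear map built out of all the partials,

```latex
% \partial/\partial x_i as a linear operator on smooth functions:
\[
  \frac{\partial}{\partial x_i} : C^\infty(\mathbb{R}^n) \to C^\infty(\mathbb{R}^n),
  \qquad
  \frac{\partial}{\partial x_i}(a f + b g)
    = a\,\frac{\partial f}{\partial x_i} + b\,\frac{\partial g}{\partial x_i}.
\]
% The derivative of f at p as a single linear map containing every partial:
\[
  Df(p) : \mathbb{R}^n \to \mathbb{R}^m,
  \qquad
  Df(p)\,v = \sum_{i} v_i \,\frac{\partial f}{\partial x_i}(p).
\]
```

The second map is coordinate-free in spirit: the partials are just its matrix entries in one particular basis.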
> is there a way to “break up” long functions into shorter expressions (like sub-functions) without losing their relationships?
I think that's already somewhat happening by using things like divergence or the Laplacian. But mathematicians could do a lot better with naming subexpressions (or giving better structure to proofs...). Though, thinking in public, perhaps the equation itself isn't actually central, and instead the equation is just a mnemonic for all of the explanation of what the equation represents, and so the subexpressions are named by their explanation.
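To make the "naming subexpressions" point concrete, here's a rough numpy sketch (the grid, spacing `h`, and helper names are my own choices) where divergence and the Laplacian are built as named pieces instead of one long formula:

```python
import numpy as np

# Named subexpressions: gradient, divergence, and the Laplacian as
# divergence-of-gradient, on a uniform 2D grid with spacing h.
h = 0.1
xs, ys = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h), indexing="ij")

def gradient(f):
    """Per-axis partial derivatives of a scalar field on the grid."""
    return np.gradient(f, h)  # [df/dx, df/dy]

def divergence(vx, vy):
    """div v = d(vx)/dx + d(vy)/dy, each term computed separately."""
    return np.gradient(vx, h, axis=0) + np.gradient(vy, h, axis=1)

def laplacian(f):
    """The Laplacian, assembled from the named pieces above."""
    fx, fy = gradient(f)
    return divergence(fx, fy)

# Sanity check: laplacian(x^2 + y^2) should be ~4 away from the boundary.
lap = laplacian(xs**2 + ys**2)
```

Whether this is clearer than `∇²f` is debatable once the reader has internalized the symbol, which is maybe the point: the compact notation is the mnemonic, and the named decomposition is the explanation.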
About abstractions: The thing about math is that the material of the subject is new ways of understanding phenomena (physical or nonphysical). A big difficulty with the subject is that you have to somehow learn completely new metaphiers (metaphor illuminates a metaphrand with a metaphier) -- for instance linear algebra has linear combination, linear independence, vector spaces, linear transformations, bases, inner products, and so on, and while there are many examples of these concepts, it takes significant effort to get to the point where the concepts themselves become primary. Teaching the subject, it feels like the most I can do is point out the moon in as many ways as I can, so to speak, and hope the student can follow my finger and recognize the moon from my descriptions -- I can't look at the moon for them.
From the assumption that complexity is proportional to the number of new introduced metaphors, SQL and OpenGL are not nearly as complicated. Effective SQL needs some understanding of a relation and how to use the relation metaphor to model data, and OpenGL sure has a lot going on, but it seems to all be standard systems-type programming (but with too much global state, in my opinion). OpenGL gets a lot more difficult when you then have to bring in the linear algebra to actually make some shaders, but that's not intrinsic to OpenGL itself, I don't think.
It could be that there is some minimal set of metaphors, or even a new easier-to-internalize metaphor, which would make partial differential equations transparent to anybody, or at least make a single partial differential equation transparent. But this hasn't happened yet, and people have been thinking about them for a couple hundred years, much longer than either SQL or OpenGL.
I don't understand the part where he expands the Navier-Stokes equations to the matrix form. Can someone explain how he gets to that step?
You can interpret `u dot del u` as `(u dot del)u` where `(u dot del)` is an operator which seems to sometimes be called the "advection operator" or the "directional derivative" or the "convection derivative" and probably even more names I don't even know -- which makes it really annoyingly hard to look up information about.
See https://en.wikipedia.org/wiki/Advection and https://en.wikipedia.org/wiki/Del (section "directional derivative").
`(u dot del)` expands to `u_x * d(__)/dx + u_y * d(__)/dy` in 2D -- a function taking a scalar field (plugged in where I wrote the `__`s) and evaluating to a scalar field. As indicated in the second link above, to operate on a vector you just apply it to each component (similar to how you'd normally multiply a scalar and a vector). So that means that `(u dot del)u` becomes:
    [ u_x * d(u_x)/dx + u_y * d(u_x)/dy ]
    [ u_x * d(u_y)/dx + u_y * d(u_y)/dy ]

which in matrix form is:

    [ d(u_x)/dx   d(u_x)/dy ]   [ u_x ]
    [ d(u_y)/dx   d(u_y)/dy ] * [ u_y ]
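If it helps, here's a quick numerical sanity check (the velocity field and grid are arbitrary choices of mine) that the component-by-component expansion and the Jacobian-times-vector form agree:

```python
import numpy as np

# Compare (u dot del)u computed component-by-component against the
# matrix form: (Jacobian of u) applied to u at every grid point.
h = 0.01
x, y = np.meshgrid(np.arange(0, 2, h), np.arange(0, 2, h), indexing="ij")
u_x, u_y = np.sin(y), np.cos(x)  # an arbitrary smooth 2D velocity field

# Partial derivatives of each component (axis 0 is x, axis 1 is y).
dux_dx, dux_dy = np.gradient(u_x, h)
duy_dx, duy_dy = np.gradient(u_y, h)

# Component form: u_x * d(u_i)/dx + u_y * d(u_i)/dy for each component i.
adv_x = u_x * dux_dx + u_y * dux_dy
adv_y = u_x * duy_dx + u_y * duy_dy

# Matrix form: stack the Jacobian at each point and apply it to [u_x, u_y].
jac = np.stack([np.stack([dux_dx, dux_dy], axis=-1),
                np.stack([duy_dx, duy_dy], axis=-1)], axis=-2)
vel = np.stack([u_x, u_y], axis=-1)
adv = np.einsum("...ij,...j->...i", jac, vel)
```

The two forms are the same arithmetic written in two notations, so they should match to floating-point precision.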
As DanWaterworth said in the other comment, `del p` is just the gradient of the pressure.
Generally the convective derivative of a field, u•∇y (the one that contains the covariant derivative of the field), can be interpreted either as the streamline tensor derivative, u•(∇y), or as the streamline directional derivative, (u•∇)y; both lead to the same result.
Anyone who wants to read articles that contain math should learn notation.
This works: https://www.cs.ubc.ca/~rbridson/fluidsimulation/fluids_notes... (5.5 MB PDF)
With the demo at the very top: given that the initial configuration is always the same, shouldn't 'mixing' the colours vigorously always yield the same uniform colour as a steady state? I'm probably missing something here, but shouldn't that be the case?
Without that, yes, it turns into a muddy grey.