Honestly, reading the Wikipedia page would be more beneficial for someone who wants to learn ODEs. Like I said, the intention is good, but I would suggest throwing everything but the pictures away and starting from scratch, preferably with a good textbook by your side to help you not only with the facts but, more importantly, with the pedagogical aspect.
I also thought this was actually a nice way to think about derivatives: "To visualize derivatives, we can draw a right triangle whose hypotenuse is tangent to a function. If the triangle's width is 1, then its height is the derivative." I think if it had been explained this way to me when I was first starting to learn about derivatives a million years ago, I would have grasped it more quickly.
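A quick numerical sketch of that picture (my own illustration, not from the article): the derivative is the height of a width-1 triangle drawn along the tangent line, i.e. the slope of the tangent, which a difference quotient approximates.

```python
def slope_at(f, x, h=1e-6):
    """Approximate f'(x) by a symmetric difference quotient,
    i.e. the height of a (nearly) tangent triangle of width 1."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x**2 the tangent at x = 3 has slope 6, so the
# width-1 tangent triangle there has height (about) 6.
height = slope_at(lambda x: x * x, 3.0)
print(height)  # close to 6.0
```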
But, I definitely agree that this is way too short for the stated purpose.
Why does everyone keep showering encyclopedic knowledge on blog posts instead of updating things there?
There really isn't a place for people to put this other than their personal blogs.
That is a great idea! A place like that should exist.
This isn't only unintuitive, it's literally wrong.
Of course, it is more useful to get a derivative as a function of x, so you don't have to do this at every point. But, as a starting point, when you are first learning about derivatives and only have a background in algebra and geometry, this is a nice way to think about it.
My objection was that a tangent line, as opposed to a segment, has infinite length. I can draw infinitely many segments tangent to the curve -- and get infinitely many heights for it.
There certainly are differential equations without solutions, but I'm almost positive that's not what's meant here. Rather, it is very likely that one can't find an explicit solution to a non-linear equation (EDIT: although there are non-linear equations that one can solve, such as $\dot y = 1/y$, whose solutions have graphs lying on straight lines through the origin); but there is a huge difference between an equation that doesn't have a solution and one whose solution we can't find.
For example, away from singularities, one can approximate the solution of a differential equation, even a non-linear one, numerically to an arbitrarily high degree of accuracy, almost exactly as the author does for linear equations.
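A minimal sketch of that point, with my own choice of example: even a non-linear ODE can be approximated numerically away from singularities. Here y' = y**2 with y(0) = 1, whose exact solution y(t) = 1/(1 - t) blows up at t = 1, but which Euler's method tracks well on [0, 0.5].

```python
def euler(f, y0, t0, t1, n):
    """Advance y' = f(t, y) from t0 to t1 in n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Non-linear equation y' = y**2, exact solution 1/(1 - t).
approx = euler(lambda t, y: y * y, 1.0, 0.0, 0.5, 100_000)
exact = 1.0 / (1.0 - 0.5)  # = 2.0
print(approx, exact)
```

Taking more steps (or a higher-order method) drives the error down as far as you like, which is the "arbitrarily high degree of accuracy" claim in practice.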
EDIT: TheLoneWolfling (https://news.ycombinator.com/item?id=9654767 )'s point that this numerical simulation can get hard very quickly is well taken, and probably more important in applications than abstract existence results. Non-linear equations really are hard!
> There certainly are differential equations without solutions [---]
These statements are confusing, I think. Some DEs might not have analytical solutions in terms of _elementary_functions_. For example, 'sin(x)' is considered an elementary function, and it is a certain curve that solves some differential equations.
Now let's say I give you a non-linear ODE that no one can solve, but I say "'foo(x)' is the function which describes the solution to this ODE". It is a more or less pointless statement, but all it means is that 'foo(x)' gives the curve that solves the ODE. Just like 'sin(x)' for the DE that it solves; the difference is that we do not know any properties of 'foo(x)' -- but there is a curve that we could call 'foo(x)'.
All I'm trying to say is: a DE "not having a solution" and "not having a solution in terms of elementary functions" are two very different things. "Not having a solution" means (or at least _should_ mean) that you could not even solve it numerically in a small neighbourhood (i.e. the curve 'foo(x)' does not exist); "not in terms of elementary functions" means that no analytical expression in terms of trigonometric, hyperbolic, exponential, power, and similar functions can be written down.
edit: Reading other comments, they seem to be saying similar things. Sorry for unnecessarily reiterating this.
Oh, one more thing: you say, and I understand why, that this is "more or less pointless", but it's not! It is a perfectly good way of producing new functions. For some reason (alphabetical filing in my mind?), I always think first of the Airy function (https://en.wikipedia.org/wiki/Airy_function), but the Bessel functions (https://en.wikipedia.org/wiki/Bessel_function) and, more generally, most (all?) special functions (https://en.wikipedia.org/wiki/Special_functions) also arise in this way.
In fact, even the logarithmic function (boringly, via $\dot y = 1/x$) and the exponential and sine functions (more interestingly, via $\dot y = y$ and $\ddot y = -y$) can be defined this way (by imposing suitable initial conditions, once you know the relevant existence and uniqueness theorems). They can also be defined by their power series, without direct reference to differential equations, but I find such a definition hard to motivate without reference to the differential-equations definition.
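A sketch of that definition in action (my own example): take "exp is *defined* as the solution of y' = y with y(0) = 1" at face value, and integrate the ODE numerically (classical RK4 here) to recover e without ever calling math.exp.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Solve y' = y, y(0) = 1 from t = 0 to t = 1; the answer is exp(1) = e.
t, y, h = 0.0, 1.0, 0.001
for _ in range(1000):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # ~ 2.718281828..., i.e. e
```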
Thanks for elaborating on this.
I agree with all this; in fact, it's almost exactly what I meant by:
> there is a huge difference between an equation that doesn't have a solution and one whose solution we can't find.
I add the caveat 'almost' because not having a solution is an even stricter condition than not being locally soluble numerically (I think—I'm no expert on numerical methods); but there are equations that meet this stricter condition.
You can't say this until you've proven that a solution exists (and that it's unique, if you really mean 'the' solution). I think that was part of the point.
Sounds like how we got Bessel functions, no?
However, there are a number of differential equations for which simulation requires complexity exponential in the desired degree of accuracy or in the amount of "time" to simulate. For instance, pretty much any chaotic problem.
I think another very hard class to deal with is equations whose solutions diverge exponentially under small changes in boundary conditions. Orbital mechanics is like that.
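A hedged illustration (my example, not the parent's): the Lorenz system is the classic case of such divergence. Two trajectories started 1e-9 apart separate exponentially, which is why simulating such systems to a fixed accuracy for longer times gets exponentially more expensive. A crude forward-Euler sketch:

```python
def lorenz_step(state, h, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (standard parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + h * dx, y + h * dy, z + h * dz)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # same start, perturbed by 1e-9
for _ in range(20_000):       # 20 time units with h = 0.001
    a = lorenz_step(a, 0.001)
    b = lorenz_step(b, 0.001)
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # many orders of magnitude larger than the initial 1e-9
```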
Not sure, but I can see a situation where the set of equations that describe the solution forms a series that never converges to a regular pattern (an "irrational series"? like irrational numbers, but with equations). I think quantum mechanics is like that (don't quote me).
Yes, you can encode arbitrary Turing machines as differential equations.
That's neat! Do you have a reference?
Is there any more-or-less explicit recipe that says "given a description of a Turing machine (as a 7-tuple, say https://en.wikipedia.org/wiki/Turing_machine#Formal_definiti...), here is a (possibly unmanageably huge) differential equation such that …"—well, I don't even really know what. Your answer suggests that I might ask that, say, the solution $y$ to the differential equation where $y(1)$ somehow encodes a given initial state of the tape is such that $y(0)$ somehow encodes the final state of the tape (with the understanding that $y$ is not defined at $0$ if the machine doesn't halt on the corresponding input).
You want something like this http://www.sciencedirect.com/science/article/pii/S1571066108...
or like this
You're absolutely right that it's an elegant and compelling argument for the plausibility of the claim; I was just looking for the rigour behind it (even a statement, if not a proof). Your second reference is exactly the sort of thing that I had in mind; thanks!
Of particular note, with some classes of differential equations, we can't write an explicit solution, but we can characterize the solutions very well. For example, we might be able to describe the solution in terms of its long-term behavior ("it approaches such-and-such point" or "it has a stable orbit around such-and-such path".)
See also: http://en.wikipedia.org/wiki/Dynamical_system
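A toy version of that qualitative claim (my own example): for the logistic equation y' = y(1 - y) we may not care about the explicit formula; the useful characterization is "every solution with y(0) > 0 approaches the equilibrium y = 1".

```python
def integrate(y0, h=0.01, steps=5_000):
    """Forward-Euler integration of y' = y * (1 - y) to t = steps * h."""
    y = y0
    for _ in range(steps):
        y += h * y * (1.0 - y)
    return y

# Wildly different starting points, same long-term behavior:
for y0 in (0.01, 0.5, 2.0, 5.0):
    print(y0, "->", integrate(y0))  # all end near 1.0
```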
Another, I think, slightly gentler approach along similar lines is Blanchard, Devaney, & Hall (http://math.bu.edu/odes). The pedigree for this latter is very good; Devaney gave the (as far as I know) first rigorous mathematical definition of chaos. I have taught out of this one a number of times, and very much enjoy it.
Wiggins is a much more advanced, and much more thorough, book. It's appropriate for 400 level at a minimum, and probably more accessible to graduate students. I wouldn't even attempt it without having taken ODEs and Linear Algebra, and probably some real analysis or PDEs or another high level course just for exposure to mathematical rigor (and I might be leaving out other prereqs; I can't presently locate my copy of the book.) It's tremendously well presented for a book at that level, and could be enough material for a three-quarter or two-semester sequence. It's advanced enough that someone working on a related Masters or Doctoral thesis would likely refer to it regularly.
These beasts often pop up when trying to solve larger electrical circuits with time-dependent elements (e.g. RLC). Usually the only thing you can really do to solve them is linearize them and approximate them with some integration scheme (this is what SPICE, the electronics simulator, is based on).
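A toy version of such an integration scheme (SPICE's default is trapezoidal; implicit backward Euler is shown here for simplicity, and the component values are made up): discretize the RC-discharge equation v' = -v / (R*C) implicitly, which stays stable even for large time steps.

```python
import math

R, C = 1_000.0, 1e-6             # 1 kOhm, 1 uF  ->  tau = 1 ms
tau = R * C
h = 1e-5                          # 10 us time step
v = 5.0                           # initial capacitor voltage
for _ in range(100):              # simulate 1 ms = one time constant
    # Backward Euler: v_next = v + h * (-v_next / tau), solved for v_next.
    v = v / (1.0 + h / tau)
print(v, 5.0 * math.exp(-1.0))    # numerical vs exact v0 * exp(-t/tau)
```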
Little did I know, considering the reals are uncountable, so you can't really come up with exhaustive algorithms for lots of things dealing with them.
EDIT: Also, the fact that a structure is uncountable doesn't prevent you from operating algorithmically on it (not that you claimed it did!); it just means that there are some elements of it that cannot be singled out algorithmically.
This is basically what math is all about. If a problem can be solved effectively and easily by an algorithm, it is considered trivial, and little attention is given to it. The focus is on actually difficult problems, ones that need clever tricks to get a handle on.
Huh? If you mean this:
> A mathematician's mind works somehow.
then it seems like you may be using 'algorithmic' to mean something like "unfolding according to (possibly unknown, possibly probabilistic) laws", in which case it seems so broad a term as to be almost useless.
They're certainly currently unknown, and we have good reason to believe they're probabilistic.
Even such vague, loose descriptions are better than invoking "mathematical intuition" or "it just comes to me" or other explanations for how one does math.
Isn't there a theorem that says that no closed-form, analytic solutions to such equations can be found, in general?
'Analytic' has a mathematical meaning (https://en.wikipedia.org/wiki/Analytic_function) which is probably not what you mean here. Taking it in the more colloquial (Eulerian) sense of "given by a formula", not only are there such general results, but there are even specific functions for which it is known that no elementary (https://en.wikipedia.org/wiki/Elementary_function) anti-derivative exists (the strong form of "impossible to solve" that I mentioned above), and an algorithm for deciding of a given function whether it has an elementary anti-derivative (https://en.wikipedia.org/wiki/Risch_algorithm).
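A quick demonstration with SymPy, which implements parts of the Risch algorithm (my example; exp(-x**2) is the standard non-elementary case): asked for an antiderivative, the CAS has to answer in terms of the non-elementary special function erf.

```python
import sympy as sp

x = sp.symbols('x')
# exp(-x**2) has no elementary anti-derivative, so SymPy answers
# with the (non-elementary) error function erf.
antideriv = sp.integrate(sp.exp(-x**2), x)
print(antideriv)  # sqrt(pi)*erf(x)/2
```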
Oops! I mentioned this as an example of an easily soluble, non-linear equation—which it is; it may be solved by separation of variables—but the solutions are not at all what I said (I was thinking of $\dot y = y/t$).
EDIT: reading the whole thing, there's quite a bit of sloppiness and stuff that's flat-out wrong, like " the faster the cart goes, the faster it stops", or the statement at the end claiming that nonlinear DEs don't have solutions. In short: this needs a major cleanup.
OTOH, you are correct to the extent that we don't usually use both the "fact/proposition" language and the "solve" language at the same time when discussing equations.
Each diffeq is like a set of directions: e.g., walk straight for 500 m, turn left, walk straight for 200 m, turn right.
If you specify a starting point or ending point (boundary/initial conditions), then the directions become a specific set of instructions to get you from, let's say, the bus stop to the library (particular solution). Otherwise it's just a set of directions that can be used to describe getting from a set of places to a corresponding set of other places (general solution).
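The analogy in code (my sketch, using the simplest possible equation): y' = 2t has the general solution y(t) = t**2 + C, a whole family of "routes"; fixing an initial condition y(0) = y0 picks out one particular route, C = y0.

```python
def particular(y0):
    """Return the particular solution of y' = 2t with y(0) = y0.
    General solution: y(t) = t**2 + C; the initial condition fixes C."""
    C = y0
    return lambda t: t ** 2 + C

route = particular(3.0)        # the "bus stop -> library" route
print(route(0.0), route(2.0))  # starts at 3.0, reaches 7.0 at t = 2
```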
This is drag, not friction. Friction doesn't depend on velocity.
At low speeds drag is proportional to velocity.
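A sketch of that low-speed regime (my numbers are arbitrary): with drag proportional to velocity, v' = -k*v, the velocity decays exponentially, v(t) = v0 * exp(-k*t). A quick check of the formula against direct Euler integration:

```python
import math

k, v0 = 0.5, 10.0                 # drag coefficient, initial speed
v, h = v0, 1e-4
for _ in range(20_000):           # integrate v' = -k*v up to t = 2
    v -= h * k * v
print(v, v0 * math.exp(-k * 2.0))  # numerical vs exact, both ~ 3.68
```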
However, the "cartography" section made almost no sense. It needs to be explained more clearly how the two carts are different and why they diverge.
edit: On a more general note, the animations are really slick!
Just freshman calculus actually is a good start. E.g., for viral growth consider

    y'(t) = k y(t) ( b - y(t) )

where t is time, y(t) is the size at time t, and the rate of growth y'(t) = d/dt y(t) is proportional to both the current size y(t) and the size ( b - y(t) ) yet to be achieved. So, e.g., the growth rate is proportional to the number of present customers talking, y(t), and the number of potential customers listening, ( b - y(t) ).

We assume that the present is time t = 0 and that we know the eventual size b and the current size y(0).

Then, sure, we have an initial value problem (that is, we know y(0)) for a first order ordinary differential equation.
But all that is needed for a solution is just freshman calculus. Trivial? Once that equation kept FedEx from going out of business.
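That freshman-calculus solution (separation of variables plus partial fractions) has the closed form y(t) = b*y0 / (y0 + (b - y0)*exp(-k*b*t)). A sanity check of the formula against direct Euler integration, with made-up "viral growth" numbers:

```python
import math

def logistic(t, k, b, y0):
    """Closed-form solution of y' = k*y*(b - y) with y(0) = y0."""
    return b * y0 / (y0 + (b - y0) * math.exp(-k * b * t))

k, b, y0 = 0.001, 1000.0, 10.0    # arbitrary illustrative parameters
t_end, n = 5.0, 100_000
h = t_end / n
y = y0
for _ in range(n):                # forward Euler on y' = k*y*(b - y)
    y += h * k * y * (b - y)
print(y, logistic(t_end, k, b, y0))  # the two should agree closely
```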
Polished, elegant, insightful, balanced, expert, great first text:

    Earl A. Coddington, 'An Introduction to Ordinary Differential Equations', Prentice-Hall, Englewood Cliffs, NJ, 1961.

For when you want to take some really big next steps, up past nearly everyone else in the subject:

    Earl A. Coddington and Norman Levinson, 'Theory of Ordinary Differential Equations', McGraw-Hill, New York, 1955.

For one of the larger reasons to be interested in ordinary differential equations:

    Michael Athans and Peter L. Falb, 'Optimal Control: An Introduction to the Theory and Its Applications', McGraw-Hill Book Company, New York, 1966.

    E. B. Lee and L. Markus, 'Foundations of Optimal Control Theory', ISBN 0471-52263-5, John Wiley & Sons, New York.
Once in graduate school I got a reading course to give a lecture a week from Coddington and Levinson, Athans and Falb, and Lee and Markus. The prof didn't show up again after the first lecture.

At one time our current Fed Chair, Janet Yellen, indicated that she saw some potential in using control theory to help manage the economy. And a really sweetheart application of modern control theory is doing amazing things with automatic control.

For a broader view of some of what can be done with ordinary differential equations, and a long, gorgeous dessert buffet of applied mathematics:

    David G. Luenberger, 'Optimization by Vector Space Methods', John Wiley and Sons, Inc., New York, 1969.

Generally, though, apparently the big glory days of ordinary differential equations were for the US DoD and NASA during the Cold War and the Space Race. Likely there are plenty of people now, with gray hair, who supply the expertise needed for current aerospace, etc.

Numerical solutions? That field is also its own subject. Partial differential equations? That's related but quite different.
You mean do I currently support anything but Chrome? Actually, no. But I'm not worried about cross-browser this minute. I'm leveraging shadow-dom pretty heavily to allow for encapsulation of web components developed by third parties and Chrome is the only browser to fully support that (with the webcomponents lib).
By the time my project is ready to (hopefully) be used by people, browser support will either be there or I will need to implement graceful fallback with iframes (but I think webcomponents.js will get there).
Not sure how else to respond to your snarky comment, so I'll just let you know: it's not easy.
Cool page, though, keep it up!