Show HN: Differential Equations Explained (lewis500.github.io)
171 points by lewis500 on June 3, 2015 | 75 comments



The intention is good, but without meaning to be overly negative, this is not very well written at all. There is imprecise and needlessly confusing phrasing ("To visualize derivatives, we can draw a right triangle whose hypoteneuse [sic] is tangent to a function. If the triangle's width is 1, then its height is the derivative." is the most confusing and unintuitive way to visualize derivatives I can think of), a poor sequence of examples for someone who is not familiar with ODEs, and plenty of things left unexplained (why do I solve $\dot y = \cos t$ by integrating?), not to mention technical problems with hovering over the graphs (at least in FF). It's also way, way too short. You simply cannot go from "an ODE is like an equation with functions and derivatives" to "let's solve one" in five sentences.

Honestly, reading the Wikipedia page would be more beneficial for someone who wants to learn ODEs. Like I said, the intention is good, but I would suggest throwing everything but the pictures away and starting from scratch, preferably with a good textbook by your side to help you not only with facts, but more importantly with the pedagogic aspect.


Eh. I have found that the Wikipedia math pages are not approachable for someone who does not already have a very strong background in math. There is definitely a place for blogs etc. that can present the information in a more palatable manner to non-mathematicians.

I also thought this was actually a nice way to think about derivatives: "To visualize derivatives, we can draw a right triangle whose hypoteneuse is tangent to a function. If the triangle's width is 1, then its height is the derivative." I think if it had been explained this way to me when I was first starting to learn about derivatives a million years ago, I would have grasped it more quickly.

But, I definitely agree that this is way too short for the stated purpose.


Agree, this wasn't too enlightening. I highly recommend the differential calculus course on Khan Academy. I wanted to brush up recently after having not used this stuff for over 20 years, and it was the best resource I found.


Wikipedia.

Why does everyone keep showering encyclopedic knowledge on blog posts instead of updating things there?


In fairness, Wikipedia does not do well for interactive content (perhaps it could?). Khan Academy is doing much better at this, but it isn't really an encyclopaedic collection of knowledge.

There really isn't a place for people to put this other than their personal blogs.


> There really isn't a place for people to put this other than their personal blogs.

That is a great idea! A place like that should exist.


> "To visualize derivatives, we can draw a right triangle whose hypoteneuse [sic] is tangent to a function. If the triangle's width is 1, then its height is the derivative." is the most confusing and unintuitive way to visualize derivatives I can think of

This isn't only unintuitive, it's literally wrong.


No. When you first start learning about derivatives, you are told to draw a line tangent to the curve at a point and then find the slope of that line at that point. The slope is just Δy/Δx. So, if Δx = 1, then the slope is just Δy/1 = Δy = the height of the triangle.

Of course, it is more useful to get a derivative as a function of x, so you don't have to do this at every point. But, as a starting point, when you are first learning about derivatives and only have a background in algebra and geometry, this is a nice way to think about it.
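
A quick numerical sanity check of that picture (my own sketch, not from the article): for f(x) = x^2 at x = 1, the tangent line has slope 2, so a tangent triangle of width 1 has height 2.

    # a minimal sketch: a unit-width tangent triangle's height is the derivative
    def f(x):
        return x ** 2

    def slope(f, x, h=1e-6):
        # numerical slope of the tangent line at x (central difference)
        return (f(x + h) - f(x - h)) / (2 * h)

    x0 = 1.0
    m = slope(f, x0)    # ~2.0, the derivative at x0
    height = m * 1.0    # width-1 triangle: height = rise = the derivative
    print(m, height)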


Right. My browser didn't play nice with MathML, so I missed the part where Δx = 1 (i.e. I couldn't see the "1" :) )

My objection was that a tangent line, as opposed to a segment, has infinite length. I can draw infinitely many segments tangent to the curve -- and get infinitely many heights for it.


Ah, ok. That makes sense. Confusion resolved!


> Some of them don't even really have solutions (non-linear differential equations).

There certainly are differential equations without solutions, but I'm almost positive that's not what's meant here. Rather, it is very likely that one can't find an explicit solution to a non-linear equation (EDIT: although there are non-linear equations that one can solve, such as $\dot y = 1/y$, whose solutions have graphs lying on straight lines through the origin); but there is a huge difference between an equation that doesn't have a solution and one whose solution we can't find.

For example, away from singularities, one can approximate the solution of a differential equation, even a non-linear one, numerically to an arbitrarily high degree of accuracy, almost exactly as the author does for linear equations.
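
For instance, a minimal sketch (assuming plain forward Euler; the equation and exact solution are mine, chosen so the error is checkable):

    # approximate the non-linear ODE y' = 1/y, y(0) = 1 on [0, 1];
    # the exact solution is y = sqrt(2t + 1), so we can measure the error
    import math

    def euler(f, y0, t0, t1, n):
        h = (t1 - t0) / n
        t, y = t0, y0
        for _ in range(n):
            y += h * f(t, y)
            t += h
        return y

    exact = math.sqrt(3.0)  # y(1) = sqrt(2*1 + 1)
    for n in (10, 100, 1000):
        approx = euler(lambda t, y: 1.0 / y, 1.0, 0.0, 1.0, n)
        print(n, approx, abs(approx - exact))  # error shrinks roughly like 1/n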

EDIT: TheLoneWolfling (https://news.ycombinator.com/item?id=9654767 )'s point that this numerical simulation can get hard very quickly is well taken, and probably more important in applications than abstract existence results. Non-linear equations really are hard!


>> Some of them don't even really have solutions (non-linear differential equations).

> There certainly are differential equations without solutions [---]

These statements are confusing, I think. Some DEs might not have analytical solutions in terms of _elementary_functions_. For example, 'sin(x)' is considered an elementary function, and it is a certain curve that solves some differential equations.

Now let's say I give you a non-linear ODE that no-one can solve, but I say "'foo(x)' is the function which describes the solution to this ODE". It is a more or less pointless statement, but all it means is that 'foo(x)' gives the curve that solves the ODE. Just like 'sin(x)' for the DE that it solves; the difference is we do not know any properties of 'foo(x)' -- but there is a curve that we could call 'foo(x)'.

All I'm trying to say is: a DE "not having a solution" and "not having a solution in terms of elementary functions" are two very different things. "Not having a solution" means (or at least _should_ mean) you could not even numerically solve it in a small neighbourhood (e.g. the curve 'foo(x)' does not exist); "not in terms of elementary functions" means no analytical expression in terms of trigonometric, hyperbolic, exponentials, powers, and so on, can be written down.

edit: Reading others' comments, they seem to be saying similar things. Sorry for unnecessarily reiterating this.


> It is a more or less pointless statement, but all it means is that 'foo(x)' gives the curve that solves the ODE.

Oh, one more thing: you say, and I understand why, that this is "more or less pointless", but it's not! It is a perfectly good way of producing new functions. For some reason (alphabetical filing in my mind?), I always think first of the Airy function (https://en.wikipedia.org/wiki/Airy_function), but the Bessel functions (https://en.wikipedia.org/wiki/Bessel_function) and, more generally, most (all?) special functions (https://en.wikipedia.org/wiki/Special_functions) also arise in this way.

In fact, even the logarithmic function (boringly, via $\dot y = 1/x$) and the exponential and sine functions (more interestingly, via $\dot y = y$ and $\ddot y = -y$) can be defined this way (by imposing suitable initial conditions, once you know the relevant existence and uniqueness theorems). They can also be defined by their power series, without direct reference to differential equations, but I find such a definition hard to motivate without reference to the differential-equations definition.
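
(A tiny illustration of "defined by the differential equation", my sketch: crude Euler steps on $\dot y = y$, $y(0) = 1$ recover $e$ as the step size shrinks; this is exactly the classical limit $(1 + 1/n)^n$.)

    # a minimal sketch: take y' = y, y(0) = 1 as the *definition* of exp
    import math

    def exp_via_ode(t, n):
        h = t / n
        y = 1.0
        for _ in range(n):
            y += h * y      # Euler step on y' = y
        return y            # equals (1 + t/n)**n

    for n in (10, 1000, 100000):
        print(n, exp_via_ode(1.0, n), math.e)   # converges to e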


Yes, good point. I was actually thinking about special functions and that they are usually defined through differential equations (so, yeah, I more or less stole the idea from there and did not cite it properly :-/ but I did not want to bring all of that into my comment).

Thanks for elaborating on this.


> A DE "not having a solution" and "not having a solution in terms of elementary functions" are two very different things. "Not having a solution" means (or at least _should_ mean) you could not even numerically solve it in a small neighbourhood (e.g. the curve 'foo(x)' does not exists), "not in terms of elementary functions" means no analytical expression in terms of trigonometric, hyperbolic, exponentials, powers, and so on, can be written down.

I agree with all this; in fact, it's almost exactly what I meant by:

> there is a huge difference between an equation that doesn't have a solution and one whose solution we can't find.

I add the caveat 'almost' because not having a solution is an even stricter condition than not being locally soluble numerically (I think—I'm no expert on numerical methods); but there are equations that meet this stricter condition.


'I say "'foo(x)' is the function which describes the solution to this ODE"'

You can't say this until you've proven that a solution exists (and that it's unique, if you really mean 'the' solution). I think that was part of the point.


> Now let's say I give you a non-linear ODE that no-one can solve, but I say "'foo(x)' is the function which describes the solution to this ODE". It is a more or less pointless statement, but all it means is that 'foo(x)' gives the curve that solves the ODE.

Sounds like how we got Bessel functions, no?



You are correct, sort of.

However, there are a number of differential equations where simulation requires exponential complexity w.r.t. the desired degree of accuracy or the amount of "time" to simulate. For instance, pretty much any chaotic problem.


Agreed! There's a reason that non-linear equations are regarded as much harder than linear ones, and I don't mean to gloss over it. However, possibly just because I am a mathematician and not an engineer or hacker, I still find the distinction between "solution that is hard to find" and "no solution" meaningful and important. (It is the difference between "we can't predict the weather" and "there is no such thing as weather"!)


The classic example is the Navier–Stokes equations of viscous fluid flow. True story: one of my college professors handed out a homework assignment that no one could solve. As far as I could tell, the boundary conditions described a journal bearing that wasn't stable. Such things exist.

Other equations that are very hard to deal with are those where the solution diverges exponentially due to small changes in boundary conditions. Orbital mechanics is like that.

Not sure, but I can also see a situation where the set of equations that describe the solution forms a series that never converges to a regular pattern (an "irrational series"? like irrational numbers, but with equations). I think quantum mechanics is like that (don't quote me).


Oh, it's worse than that: decision problems like computing the sign of the solution at a given point are undecidable in general.

Yes, you can encode arbitrary Turing machines as differential equations.


> Yes, you can encode arbitrary Turing machines as differential equations.

That's neat! Do you have a reference?


You can build a physical Turing machine and describe its behavior using Newton's equations of motion, then do a change of variable t -> 1/t. The value at 0 gives you the result of the computation. Q.E.D.


That's … unsatisfying. I mean that it's unsatisfying in a theoretical sense (because it gives me no theoretical feeling for the Turing-machine-to-differential-equation transformation), but, if I really wanted to be picky, I could point out that (a) nothing in the definition of a Turing machine guarantees its physical realisability (what if its number of states is bigger than the power set of the number of elementary particles in the universe, or, similarly, if it writes on an unbounded amount of tape?), and (b) Newton's equations of motion are only approximations to, not exact descriptions of, the physical universe.

Is there any more-or-less explicit recipe that says "given a description of a Turing machine (as a 7-tuple, say https://en.wikipedia.org/wiki/Turing_machine#Formal_definiti...), here is a (possibly unmanageably huge) differential equation such that …"—well, I don't even really know what. Your answer suggests that I might ask that, say, the solution $y$ to the differential equation where $y(1)$ somehow encodes a given initial state of the tape is such that $y(0)$ somehow encodes the final state of the tape (with the understanding that $y$ is not defined at $0$ if the machine doesn't halt on the corresponding input).


Fine, fine, I just think it's an elegant way to say it :)

You want something like this http://www.sciencedirect.com/science/article/pii/S1571066108... or like this http://www.sciencedirect.com/science/article/pii/S0196885807...


> Fine, fine, I just think it's an elegant way to say it :)

You're absolutely right that it's an elegant and compelling argument for the plausibility of the claim; I was just looking for the rigour behind it (even a statement, if not a proof). Your second reference is exactly the sort of thing that I had in mind; thanks!


Isn't this exponential precision = polynomial time?


> "there is a huge difference between an equation that doesn't have a solution and one whose solution we can't find."

Of particular note, with some classes of differential equations, we can't write an explicit solution, but we can characterize the solutions very well. For example, we might be able to describe the solution in terms of its long-term behavior ("it approaches such-and-such point" or "it has a stable orbit around such-and-such path".)

See also: http://en.wikipedia.org/wiki/Dynamical_system


I have long wanted to, but have never actually got to, teach a differential-equations course from Hubbard & West (http://www.amazon.com/Differential-Equations-Dynamical-Appro...). I thought that there was an article in Math. Mag. about their approach via "fences and funnels", but MathSciNet only knows about a 1995 CMS conference proceedings: http://www.ams.org/mathscinet-getitem?mr=1483923 . EDIT: The article I wanted (which has the same title as the proceedings article above) is actually in College Math. J.; it's http://www.jstor.org/stable/2687507 .

Another, I think, slightly gentler approach along similar lines is Blanchard, Devaney, & Hall (http://math.bu.edu/odes). The pedigree for this latter is very good; Devaney gave the (as far as I know) first rigorous mathematical definition of chaos. I have taught out of this one a number of times, and very much enjoy it.


At a more advanced level: my favorite graduate course was taught out of http://www.amazon.com/Introduction-Applied-Nonlinear-Dynamic... .


Another one that I meant to mention in my previous post is Strogatz (http://www.stevenstrogatz.com/books/nonlinear-dynamics-and-c...). Do you know how Wiggins compares?


I have the first edition of Strogatz' book. It's fairly accessible to students who have taken calculus and been exposed to ODEs, and can be worked through in about a semester. Great for a 200 or 300 level course.

Wiggins is a much more advanced, and much more thorough, book. It's appropriate for 400 level at a minimum, and probably more accessible to graduate students. I wouldn't even attempt it without having taken ODEs and Linear Algebra, and probably some real analysis or PDEs or another high level course just for exposure to mathematical rigor (and I might be leaving out other prereqs; I can't presently locate my copy of the book.) It's tremendously well presented for a book at that level, and could be enough material for a three-quarter or two-semester sequence. It's advanced enough that someone working on a related Masters or Doctoral thesis would likely refer to it regularly.


The thing that always bugged me about differential equations when I encountered them in the MIT analog circuits course was that the professor said the way to figure them out was to use "guesswork". For example, the diffeq describing how RLC circuits work is well known, but cannot be solved using a systematic algorithm that a person could do by hand.


You might be confusing straightforward linear systems of differential equations (some exceptions when you have a nasty heterogeneous part, but they are 'always' solvable by a standard method) with so-called Differential Algebraic Equations (DAEs).

These beasts often pop up when trying to solve larger electrical circuits with time-dependent elements (i.e. RLC). Usually the only thing you can really do to solve them is linearize them and approximate them with some integration scheme (this is what Spice, the electronics simulator, is based on).
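
To make "integration scheme" concrete, a minimal sketch (a toy example of mine, not Spice's actual algorithm): backward Euler on a source-free series RLC, the implicit kind of step that stays stable on stiff circuits.

    # state x = (q, i): dq/dt = i, L*di/dt = -R*i - q/C
    R, L, C = 1.0, 1.0, 0.25        # made-up component values
    h, steps = 0.01, 1000

    # dx/dt = A x with A = [[0, 1], [-1/(L*C), -R/L]]
    a21, a22 = -1.0 / (L * C), -R / L

    q, i = 1.0, 0.0                 # start with a charged capacitor
    for _ in range(steps):
        # backward Euler: solve (I - h*A) x_new = x_old (2x2 Cramer's rule)
        m11, m12 = 1.0, -h
        m21, m22 = -h * a21, 1.0 - h * a22
        det = m11 * m22 - m12 * m21
        q, i = (q * m22 - i * m12) / det, (i * m11 - q * m21) / det

    print(q, i)                     # decaying oscillation, heading to (0, 0)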


Uni-level math on the reals always bugged me for the exact reason that I thought well-defined problems not involving funny self-reference tricks ought to have algorithms. Despite this, math professors kept presenting us with patterns to match, but no exhaustive algorithms.

Little did I know: the reals are uncountable, so you can't really come up with exhaustive algorithms for lots of things dealing with them.


Perhaps you would like to study constructive analysis, a la Bishop? You lose a lot of intuition in the hypotheses, which necessarily become more complicated—or perhaps I should say that you are forced to develop a different intuition about hypotheses—but you gain conclusions of exactly the sort that you want.

https://en.wikipedia.org/wiki/Constructive_analysis

EDIT: Also, the fact that a structure is uncountable doesn't prevent you from operating algorithmically on it (not that you claimed it did!); it just means that there are some elements of it that cannot be singled out algorithmically.


> Despite this, math professors kept presenting us with patterns to match, but no exhaustive algorithms.

This is basically what math is all about. If a problem can be effectively and easily solved algorithmically, it is considered trivial, and little attention is given to it. The focus is on actually difficult problems, ones that need clever tricks to get a handle on.


To be frank: bull. A problem that can always be solved algorithmically is "trivial", in the mathematical sense, but since the mind is algorithmic, you can't just say, "Well, clever tricks, not algorithms." A mathematician's mind works somehow.


> the mind is algorithmic

Huh? If you mean this:

> A mathematician's mind works somehow.

then it seems like you may be using 'algorithmic' to mean something like "unfolding according to (possibly unknown, possibly probabilistic) laws", in which case it seems so broad a term as to be almost useless.


>then it seems like you may be using 'algorithmic' to mean something like "unfolding according to (possibly unknown, possibly probabilistic) laws"

They're certainly currently unknown, and we have good reason to believe they're probabilistic.

Even such vague, loose descriptions are better than invoking "mathematical intuition" or "it just comes to me" or other explanations for how one does math.


There are ways to systematically get at DE solutions; the first that come to mind are power series or Fourier series, or decomposing into other orthogonal functions (of course, assuming the solution obeys the relevant boundary conditions), but that won't give you an elegant result unless you have the time to wade through the recursion relations and sum them up. Sometimes a smart guess just gives you the result, so why not do that?
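
For example, a minimal sketch of the power-series route (my example): substituting y = sum a_n x^n into y'' = -y gives the recursion a_{n+2} = -a_n / ((n+1)(n+2)), and summing it recovers sin(x) for y(0) = 0, y'(0) = 1.

    # power-series solution of y'' = -y, y(0) = 0, y'(0) = 1
    import math

    def series_solution(x, terms=20):
        a = [0.0, 1.0]                    # a_0 = y(0), a_1 = y'(0)
        for n in range(terms - 2):
            a.append(-a[n] / ((n + 1) * (n + 2)))
        return sum(c * x ** k for k, c in enumerate(a))

    print(series_solution(1.0), math.sin(1.0))   # agree to ~15 digits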


There are methods to solve them. RLC in particular can be approached with Laplace transforms. However, these kinds of equations need certain approximations and simplifications in order to become analytically tractable.
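
For concreteness, a sketch of the standard hand method for a source-free series RLC, $L\ddot q + R\dot q + q/C = 0$: the Laplace transform turns it into a quadratic in $s$, whose roots classify the response.

    L s^2 + R s + 1/C = 0
    \implies\quad s_{1,2} = \frac{-R \pm \sqrt{R^2 - 4L/C}}{2L}

$R^2 > 4L/C$ gives an overdamped response, $R^2 = 4L/C$ critical damping, and $R^2 < 4L/C$ a decaying oscillation.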


This bothered me too. Actually, it started with the computation of anti-derivatives (integrals) in high-school.

Isn't there a theorem that says that no closed-form, analytic solutions to such equations can be found, in general?


> Isn't there a theorem that says that no closed-form, analytic solutions to such equations can be found, in general?

'Analytic' has a mathematical meaning (https://en.wikipedia.org/wiki/Analytic_function) which is probably not what you mean here. Taking it in the more colloquial (Eulerian) sense of "given by a formula", not only are there such general results, but there are even specific functions for which it is known that no elementary (https://en.wikipedia.org/wiki/Elementary_function) anti-derivative exists (the strong form of "impossible to solve" that I mentioned above), and an algorithm for deciding of a given function whether it has an elementary anti-derivative (https://en.wikipedia.org/wiki/Risch_algorithm).
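
(If you want to play with this, SymPy ships a partial implementation of the Risch algorithm; a minimal sketch, proving that exp(x^2) has no elementary anti-derivative:)

    from sympy import exp, symbols
    from sympy.integrals.risch import risch_integrate

    x = symbols('x')
    result = risch_integrate(exp(x**2), x)
    print(result)   # comes back as a NonElementaryIntegral: provably non-elementary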


> $\dot y = 1/y$, whose solutions have graphs lying on straight lines through the origin

Oops! I mentioned this as an example of an easily soluble, non-linear equation—which it is; it may be solved by separation of variables—but the solutions are not at all what I said (I was thinking of $\dot y = y/t$).


point taken


I appreciate your response and edit to the post; that was an admirably quick turn-around time! However, I think that, in the context of the article, TheLoneWolfling (https://news.ycombinator.com/item?id=9654767 )'s comment is more important than mine: a mention of non-linear equations and the difficulty of their solution is highly relevant and appropriate, as long as you don't imply the theoretical impossibility of their solution.


Cool idea, and the site looks nice. That being said, I was lost by the end of the first section.


I think calling a DE "a fact about the derivative of a function" and then speaking about "solving it" in the same sentence is a big source for confusion, since we don't usually "solve facts".

EDIT: reading the whole thing, there's quite a bit of sloppiness and stuff that's flat-out wrong, like "the faster the cart goes, the faster it stops", or the statement at the end claiming that nonlinear DEs don't have solutions. In short: this needs a major cleanup.


All equations are facts (or, more accurately, propositions), and solving an equation is deriving another proposition that is implied by it (or perhaps by it and another proposition, such as when we solve an equation in one or more unknowns for a set value of one unknown).

OTOH, you are correct to the extent that we don't usually use both the "fact/proposition" language and the "solve" language at the same time when discussing equations.


Exactly. I quite like the idea of calling a DE a fact about the derivative of a function, it's just the juxtaposition that is confusing.


I think it would be helpful to define a derivative first.


If "it" in the latter sentence means the derivative, the sentence makes sense, because with $y' = f(x, y(x))$ we want to solve the equation for y.


Hovering over the graphs in Firefox, nothing happens, and the console logs NaN problems.


diffeqs are super easy:

each diffeq is like a set of directions: e.g. walk straight for 500m, turn left, walk straight 200m, turn right.

if you specify a starting point or ending point (boundary/initial conditions), then the directions become a specific set of instructions to get you from, let's say, the bus stop to the library (particular solution). Otherwise it's just a set of directions that can be used to describe getting from a set of places to a corresponding set of other places (general solution).
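
in symbols, with the simplest possible example (my sketch, not the article's):

    \dot y = \cos t \;\implies\; y(t) = \sin t + C    \text{(general solution: the whole family of routes)}
    y(0) = y_0 \;\implies\; C = y_0                   \text{(particular solution: one specific trip)}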


> Consider a cart rolling to a stop. Its motion obeys the DE v̇=−k⋅v, where v is velocity, v̇ is the rate-of-change of velocity and k is a 'coefficient of friction.'

This is drag, not friction. Friction doesn't depend on velocity.


Rolling resistance normally has a component proportional to speed as well as a constant component. (Aerodynamic) drag is normally modeled as proportional to v^2.


Sure, there can be a component, but "coefficient of friction" usually refers to the velocity-independent term (the author even links to the Wikipedia article for it).

At low speeds drag is proportional to velocity.


Win 7, IE 11 -- there's a mass of overlapping text and images.


I can say assuredly, this explains precisely dick to a layman.


Thanks for explaining in literally 30 seconds what I failed to grasp in a fucking full semester of the class in college. Sigh.


The first two sections seemed fine to me.

However, the "cartography" section made almost no sense. It needs to be explained more clearly how the two carts are different and why they diverge.

edit: On a more general note, the animations are really slick!


Ordinary differential equations?

Okay:

Just freshman calculus actually is a good start. E.g., for viral growth consider

y'(t) = k y(t) ( b - y(t) )

where t is time, y(t) the size at time t, and the rate of growth y'(t) = d/dt y(t) is proportional to both the current size y(t) and the size

( b - y(t) )

yet to be achieved. So, e.g., the growth rate is proportional to the number of present customers talking y(t) and the number of potential customers listening

( b - y(t) ).

We assume that the present is time t = 0 and we have the eventual size b and the current size y(0).

Then, sure, we have an initial value problem (that is, we know y(0)) for a first order, separable ordinary differential equation.

But all that is needed for a solution is just freshman calculus. It's just a routine exercise.
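
Here is the routine exercise worked out (a sketch: separate variables, then partial fractions):

    \frac{dy}{y(b - y)} = k\,dt
    \frac{1}{b}\left(\frac{1}{y} + \frac{1}{b - y}\right)dy = k\,dt
    \frac{1}{b}\ln\frac{y}{b - y} = kt + C
    \frac{y}{b - y} = A e^{bkt},\quad A = \frac{y(0)}{b - y(0)}
    y(t) = \frac{b A e^{bkt}}{1 + A e^{bkt}}

The familiar S-curve: exponential growth at first, then saturation at b.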

Trivial? Once that equation kept FedEx from going out of business.

Polished, elegant, insightful, balanced, expert, great first text:

Earl A. Coddington, 'An Introduction to Ordinary Differential Equations', Prentice-Hall, Englewood Cliffs, NJ, 1961.

For when you want to take some really big next steps up past nearly everyone else in mathematics:

Earl A. Coddington and Norman Levinson, 'Theory of Ordinary Differential Equations', McGraw-Hill, New York, 1955.

For one of the larger reasons to be interested in ordinary differential equations:

Michael Athans and Peter L. Falb, 'Optimal Control: An Introduction to the Theory and Its Applications', McGraw-Hill Book Company, New York, 1966.

Also

E. B. Lee and L. Markus, 'Foundations of Optimal Control Theory', ISBN 0471-52263-5, John Wiley & Sons, New York, 1967.

Once in graduate school I got a reading course to give a lecture a week from Coddington and Levinson, Athans and Falb, and Lee and Markus. The prof didn't show up again after the first lecture.

At one time our current Fed Chair Janet Yellen indicated that she saw some potential in using control theory to help manage the economy.

And a really sweetheart application of modern control theory, doing amazing things with automatic control of quadrocopters:

https://www.youtube.com/watch?v=w2itwFJCgFQ

A broader view of some of what can be done with ordinary differential equations, and a long, gorgeous dessert buffet of applied math:

David G. Luenberger, 'Optimization by Vector Space Methods', John Wiley and Sons, Inc., New York, 1969.

Generally, though, apparently the big glory days of ordinary differential equations were for the US DoD and NASA during the Cold War and the Space Race. Likely there are plenty of people now, with gray hair, who supply the expertise needed for current aerospace, etc.

Numerical solutions? That field is also nicely developed.

Partial differential equations? That's related but quite different.


Excellent stuff. I'm working on a framework right now for creating web documents like these!


I hope yours isn't another of these "use Chrome" pages. I thought we got rid of that meme around 2000, but hey, it's 15 years later and we're back to square one.


Huh?

You mean, do I currently support anything but Chrome? Actually, no. But I'm not worried about cross-browser support this minute. I'm leveraging the shadow DOM pretty heavily to allow for encapsulation of web components developed by third parties, and Chrome is the only browser to fully support that (with the webcomponents lib).

By the time my project is ready to (hopefully) be used by people, browser support will either be there or I will need to implement graceful fallback with iframes (but I think webcomponents.js will get there).

Not sure how else to respond to your snarky comment, so I'll just let you know: it's not easy.


hey, did y'all get that you're supposed to click and drag the dots around? it's cool to make a crazy v(t)


Nope. They move by themselves.

Cool page, though, keep it up!


Windows phone.... http://i.imgur.com/Vo68qcm.jpg


oh, man...


Very cool. While I'm sure others found this explanation clear, I believe I should now feel dumber...


I don't think it was very clear either. If you want to learn more about this, I like Arfken's 'Mathematical Methods for Physicists' as a bible of practical mathematics, or Stewart's 'Calculus' for calculus in general.


The Tupac reference was appreciated.


+1



