What Makes the Hardest Equations in Physics So Difficult? (quantamagazine.org)
115 points by digital55 11 months ago | 70 comments



So, Navier-Stokes assumes infinite divisibility in the fluid it models, right? Which we know is not the case: atoms exist. It seems obvious to me that Navier-Stokes can't be a perfectly accurate description of reality, so it should come as no surprise if its solutions turn out to blow up or otherwise behave non-physically. Am I missing something? Why isn't it assumed that the equations are just a very nice approximation, like Newton's laws pre-relativity? Actually, the whole thing reminds me of the ultraviolet catastrophe that preceded quantum physics.


Tone: Straight. I think you asked a reasonable question.

What you are missing is that mathematicians are concerned about Navier-Stokes itself, as an independent entity in pure mathematics, not related to its physics correspondence.

I think it would be safe to say that everybody believes that even if Navier-Stokes does have a singularity in it, that there won't be any way to manifest that singularity in the real universe with our discrete atoms. But that still leaves the math question, and the possibility that even approximating the singularity may produce interesting physics.


One reason to think the physics would be interesting: it isn't every day that we can arrange macroscopic systems in ways where their small-scale behavior gets amplified enough for us to measure. Arranging a blowup in the Navier-Stokes equations would be a way to study atomic-scale physics on human scales - and, likewise, arranging a blowup (if one exists) in the QCD quark-gluon plasma equations would allow us to get unexpectedly sensitive measurements of particle-scale physics.


A nice example of such a practical tool for microscopic analysis is X-ray crystallography, once you have the theory of Fourier transforms and lattices.


Your repeated use of the word ‘blowup’ makes me think of the way we get small-scale behavior amplified enough for us to see, hear, and feel: in atomic weapons.

There is a chance that understanding how quarks interact will give us even more powerful weapons (which might at one time be useful to blow up large incoming space rocks) or give us clean energy. How large that chance is is anybody’s guess.


I guess if you're looking for pure mathematical abstraction, Navier-Stokes doesn't seem like the most interesting without the physics connection. But that's just me.


I suppose what is interesting is the very difficulty we have deciding whether there are smooth solutions. It would be interesting if there were a whole class of PDEs that are difficult in this way. And if there isn't, then what is special about NS? That, too, is interesting.


Fair point, there.


> So, Navier-Stokes assumes infinite divisibility in the fluid it models, right?

You are right.

> It seems obvious to me that Navier-Stokes can't be a perfectly accurate description of reality, so it should come as no surprise if they turn out to blow up or otherwise behave non-physically.

The mapping between physical reality and the Navier-Stokes equations is extremely well understood. We know when they represent an excellent description of the physical world, and we know when they could fail due to the finite number of particles in the fluid.

We also know how they fail, and can estimate corrections due to finite particle number, see for instance the Cunningham correction factor.

We also know that a much more complicated theory, the Boltzmann transport equations, would be exact.

It would be surprising if the Navier-Stokes equations blew up, because we know they most often represent physical reality up to extremely small, controlled corrections.

> Am I missing something? Why isn't it assumed that they're just a very nice approximation, like Newton's laws pre-relativity? Actually the whole thing reminds me of the ultraviolet catastrophe that preceded quantum physics.

Not really: the ultraviolet catastrophe was a symptom of unknown physics waiting to be discovered.

We know the physics beyond Navier-Stokes very well, and still we use the Navier-Stokes equations for their incredible effectiveness.

Using the full theory (Boltzmann transport equations) would make even the simplest fluid-dynamics calculation virtually impossible, while only adding a correction at the n-th significant digit, with n far beyond the precision one could reasonably expect.


As an aside, since the question of the article is more about the mathematics than the physics: Navier-Stokes assuming a continuum is one of the fundamental assumptions made in basic fluid dynamics, and to an extent it also applies to the governing equations of solid mechanics, in a whole field known as Continuum Mechanics.

The continuum assumption, as it turns out, is actually incredibly accurate for most gases, even though they are comprised of discrete molecules. Specifically, N-S is valid in the region where the Knudsen Number[1] is less than about 0.01. The Knudsen Number is the ratio of the molecular mean free path to a characteristic length scale of the flow; it essentially characterizes how densely packed the particles are relative to the scale of interest.

This turns out to cover most of the flows we observe on Earth. N-S becomes less accurate at around Kn = 0.1 and completely useless at Kn >= 1, as in this region the differential arguments no longer hold and NS predicts something non-physical. Some examples of flows in this regime: mass spectrometers, reentry, and the interiors of shockwaves. In that view, you're not wrong that NS will eventually fail; it is just applicable to most of what we do. For those working with fluids, it is important to recognize when NS fails so they can switch to a different model.

For higher-Kn flows, such as Kn > 10, you can feasibly track individual particles and their trajectories. People do this for satellites and whatnot. However, in the range 0.1 < Kn < 10, we do not have a good enough set of models to compute answers that match reality within reasonable time frames; this is an active field of research. If you're interested in learning more about this topic, look up the Kinetic Theory of Gases, where particles are accounted for via the evolution of probability distributions over time, space, and velocity.

[1]: https://en.wikipedia.org/wiki/Knudsen_number
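To make those thresholds concrete, here is a small Python sketch. The cutoff values are the approximate ones quoted above, the mean-free-path formula is the standard ideal-gas kinetic-theory one, and the function names are my own illustrative choices:

```python
# Rough sketch of the flow-regime thresholds described above.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(temperature_k, pressure_pa, molecule_diameter_m):
    """Ideal-gas mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return (K_B * temperature_k /
            (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_pa))

def flow_regime(knudsen):
    """Classify Kn using the approximate cutoffs from the comment above."""
    if knudsen < 0.01:
        return "continuum (Navier-Stokes valid)"
    elif knudsen < 0.1:
        return "slip flow (N-S with corrections)"
    elif knudsen < 10:
        return "transition (hard: kinetic models needed)"
    return "free molecular (track individual particles)"

# Air at sea level around a 1 m object: deep in the continuum regime.
lam = mean_free_path(288.0, 101_325.0, 3.7e-10)  # roughly 6.5e-8 m
print(flow_regime(lam / 1.0))  # continuum (Navier-Stokes valid)
```

Evaluating the same mean free path against a 100-nanometer length scale instead would push Kn above 0.1, which is why micro- and nano-scale gas flows need the corrections mentioned upthread.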


jerf correctly points out that the mathematicians are investigating the behavior of the mathematical model, not the physical phenomenon which inspired it. I want to add a second point, which is that while it wouldn't be that shocking if Navier-Stokes had blow-up or other non-physical solutions, it's not at all obvious that it has such solutions. People have tried and failed for more than a century to construct them. We know, in fact, that Navier-Stokes is very close, in a technical sense, to not admitting such solutions, as there are very mild deformations of the Navier-Stokes model which do not admit such blowups. Likewise, there are mild deformations which do. The beauty of Navier-Stokes is that it's so perfectly balanced between the two cases.


This is such a fun observation. Physicists originally used differential equations to describe large scale, yet fundamentally atomistic, behaviour because, I assume, that was easier to work with using pen and paper than by using some more discrete, computational, atom-by-atom model. And then computers were invented, but we're still stuck with these continuous, legacy, models. Almost no-one questions the fact that the original motivation for using the differential equation technique is no longer appropriate given that the underlying technology has shifted underneath the feet of the modellers.


Well, we still don't have the computational power to do things atom-by-atom, even in small things like modeling cells, let alone something like simulating wind shear on a plane or predicting weather patterns.

It's not just computational power but memory. There are something like 1e14 atoms in a cell. Storing the x, y, z position with 4 bytes per dimension (though you'll probably want 8 bytes) comes out to 1.2e15 bytes = 1.2 petabytes of information. Add more fields for velocity and other pieces of state and you're talking about something that would only fit in working memory on some of our largest machines.
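The arithmetic above can be checked in a few lines, under the same assumptions as the comment (1e14 atoms, 4-byte single-precision coordinates, doubling to 8 bytes and adding velocity for the fuller estimate):

```python
# Back-of-envelope check of the memory estimate above.
atoms = 1e14

bytes_position = atoms * 3 * 4       # x, y, z at 4 bytes each
print(bytes_position)                # 1.2e15 bytes = 1.2 petabytes

# Double-precision position + velocity roughly quadruples that.
bytes_full_state = atoms * 6 * 8     # 3 position + 3 velocity, 8 bytes each
print(bytes_full_state / 1e15, "PB") # 4.8 PB
```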


Here is a billion-particle model (1e9), which is a long way from atomic-scale simulation (1e14+), but not outside the realm of the possible.

https://www.youtube.com/watch?v=B8mP9E75D08


We are definitely making progress. But another factor to consider, in addition to the number of particles, is the energy of the system. When going through one step of a simulation you have to compute the effects that every particle has on every other particle, but generally a particle only affects the other particles near it. You can take advantage of this fact to speed up computation with the right data structures.

However, as particles start moving around faster and faster, the distance they can travel in one time step increases, and the neighborhood of effect increases, limiting the speedup of this optimization. This matters less in larger-scale simulations like weather patterns, because you don't have to worry about air molecules zipping to the other side of the continent in one second. But it matters a lot in small-scale simulations, especially in cells. As an example, an average glucose molecule in one of your cells is bouncing around at around 250 miles per hour! That's not 250 miles when scaled up, that's really 250 miles per hour. A molecule in your body is colliding with others billions of times every second. (Source: http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...). At that scale and that level of activity it becomes much harder to simulate each time step.
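The "right data structures" mentioned above usually means a cell list (spatial binning): particles are bucketed into grid cells sized to the interaction cutoff, so each one only checks the 27 neighboring cells instead of all N-1 other particles. A minimal sketch, with all names my own rather than from any particular MD package:

```python
# Cell-list sketch of the neighborhood optimization described above.
from collections import defaultdict
from itertools import product

def build_cell_list(positions, cutoff):
    """Bucket particle indices into cubic cells of side `cutoff`."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    return cells

def neighbor_pairs(positions, cutoff):
    """Yield index pairs (i, j), i < j, within `cutoff` of each other."""
    cells = build_cell_list(positions, cutoff)
    cutoff2 = cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        # Only the 27 cells around the current one can contain neighbors.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        xi, yi, zi = positions[i]
                        xj, yj, zj = positions[j]
                        if (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= cutoff2:
                            yield i, j

# Three particles; only the first two are within the 1.0 cutoff.
pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 5.0)]
print(sorted(neighbor_pairs(pts, 1.0)))  # [(0, 1)]
```

The comment's caveat maps directly onto this structure: faster particles force a larger cutoff (or more frequent rebuilds), growing the 27-cell neighborhoods until the optimization stops paying off.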


The other side of this is cells are also really fast. Modeling even 0.01 seconds could be very useful.


Much of the action inside cells is done via proteins. It's taken a huge amount of distributed computing power just to figure out a reasonable hypothesis for how they fold (only one part of the movement) and €1.22 billion to build the XFEL and have a way to image them on the required timescales: 1-2 femtoseconds. 0.000000000000001 seconds. 0.01 seconds is an eternity, calculating that long would take an utterly enormous amount of computer time.


You can spend similar amounts of processing power (or vastly more) modeling a single atom over much shorter time scales with quantum mechanics. Protein folding, while important, results in fairly stable configurations, and you don't need the same granularity after that point.

PS: Also of note, protein-folding simulations don't even simulate the water surrounding the protein and are, again, very simplified.


Don't be fooled by the numbers: 1e9 is 5 orders of magnitude less than 1e14.


64 GPUs and 91 hours for a 12-second simulation. 64,000 GPUs for 910 hours (~1 month) is 4 orders of magnitude more.

Unfortunately, a useful cell simulation would be far more complex than simple particle collisions and would need a much shorter step size. But that's not to say different kinds of simulations can't be useful. And ASICs or general improvements in computing power can also boost things.


> we still don't have the computational power to do things atom-by-atom

That's what puzzles me about this discussion.

Of course we don't. The goal here is to model more atoms than we have atoms to model with. Until we get quantum computers that can represent more information per atom than the information per atom we want to represent, it's simply a matter of objectively and obviously inadequate resources. You can't simulate wind shear on a plane when there are more atoms in that wind and plane than in the computer simulating them: every atom being simulated carries complete information about its state, independent of any other atom, so simulating it requires at least one atom per simulated atom. If you don't have at least as many atoms as you are simulating, you can't achieve a complete simulation.


It is not necessarily the goal to model more atoms than we have atoms to model with. The goal can be, for example, to model a folding protein, which has fewer atoms than a CPU.


And one might be happy to spend a month computing the folding while in the cell it takes a few seconds.


That may be true of digital computers, but it's not necessarily true of analog ones.


> Well, we still don't have the computational power to do things atom-by-atom...

We still don't have the computational power to do certain classes of continuous Navier-Stokes calculations (i.e. not atom-based).


Sure, but you'd think we would shift our focus to models explicitly tailored with that aim in mind: something that explicitly approximates a desired, precise foundation, perhaps parameterized by model accuracy. I would think taking inspiration from approximation algorithms would be a much more appropriate lens through which to model physics.


The mental model of a bunch of hard little atoms flying around, colliding, rebounding, etc is actually NOT the desired, precise, foundation for understanding how gases behave.

The better foundation is quantum mechanics. An example of a macroscopically visible difference between these foundations are the van der Waals forces.


In the case of Navier-Stokes, this was done in 1934 by Jean Leray, who showed that the so-called 'weak' solutions always exist. (A weak solution is not a function, but a more general object called a distribution which can be fuzzed out on very short distance scales.)

It's a very reasonable idea, and useful enough for computations, but it lacks the essential tension that makes Navier-Stokes a mathematical Everest.


> Well, we still don't have the computational power to do things atom-by-atom

People do atom-by-atom molecular dynamics simulations of proteins and such.


Yes, but those are simulations of classical mechanics models of molecules which are known to be very limited when it comes to atom-scale objects. We have another model that was much more useful there (Schroedinger's equation) but it is only computationally tractable for systems of few atoms. We do not have the computational power to work out the consequences of the model for proteins.


Certainly, but not at the level of an entire cell


Your assumption is frankly incorrect.

Physics is generally expressed in terms of differential equations. This is not due to their analytical tractability - as anyone who has attempted to solve PDEs before will know, most (nearly all) differential equations do not yield to analytical solution. Perhaps you think that quantum mechanics demands a discretized view of reality. This would be a complete misunderstanding of quantum mechanics, and physics in general.


It's unclear whether atomic-level data would be useful in these sorts of calculations - the input data isn't known to sufficient precision. For more, check out the last two paragraphs from the Feynman Lectures Vol III Chapter 2:

http://www.feynmanlectures.caltech.edu/III_02.html

A similar sort of mechanism led to the discovery of chaotic behavior in weather models: checkpointing results at a precision slightly lower than the machine's internal precision caused simulations resumed from the checkpoint to deviate rapidly.
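A minimal illustration of that checkpoint effect, using the logistic map as a stand-in for the weather model (the map and the digit counts are my choice, not from the original experiment):

```python
# Restart a chaotic iteration from a value rounded to fewer digits
# and watch the trajectory diverge, as in the checkpointing anecdote above.
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.123456789
full = logistic(x0, 50)
resumed = logistic(round(x0, 6), 50)   # "checkpoint" kept only 6 digits
print(abs(full - resumed))             # typically order-one once decorrelated
```

The rounding error of about 2e-7 roughly doubles each step, so within a few dozen iterations the resumed run bears no resemblance to the original.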


Human limitations is one reason, but there's something much deeper going on here.

Navier and Stokes worked before we were sure that atoms existed, certainly before we had any idea of how many there were. Nevertheless they were able to write down useful theories for describing fluids. This is how all of science works. The things about which we are totally ignorant are much smaller today, of course... but useful theories of any set of phenomena always omit a great many things we do in fact know about.

From these theories, we can understand what's going on, and use this to extrapolate to things we have not seen yet. An atom-by-atom computational model would (in some sense) be no more useful than what we had before N-S, just blind experiment. To try out any given swirl of smoke etc. we can equally well walk next door to the lab and videotape it... but this doesn't help us imagine what else might be possible. The "blowup scenario" discussed is an example of this kind of imagining.


> The things about which we are totally ignorant are much smaller today, of course

One would have to disagree with this idea. The more we study the universe around us, the less we actually know. As a somewhat philosophical point, there are many times when our mental (mathematical) models get in the way of understanding. Very often, people believe that because we have a model that works and appears to give good predictive results about some phenomena, we therefore understand the how and the what (and even the why) of those phenomena.

When this happens, we get into a situation where alternative models are actually discouraged. If one looks at the history of the 19th, 20th and 21st centuries, one can see that more and vaster avenues of investigation have arisen as time passes. Our increasing knowledge continues to be shown as ever smaller in relation to what we are now seeing.

No theory is ever complete, nor is it ever accurate to the extent that it describes the reality of the universe around us. All theories make those simplifying assumptions that when taken too far lead into inaccurately describing and predicting what we should see. Too often people get enamoured by the beauty of the mathematics and forget it is only an attempt at reflecting reality.

Mathematics is a magnificent and useful tool, but it is a foolish master. Too often we forget that.

Theories and models help us gain some understanding of the nature of the universe around us. This understanding, however, is always subject to change, no matter how "perfect" the theory may appear to be. There are too many scientists, both theoretical and practical, who are so infatuated and enamoured with their current models that they have forgotten that the models are approximations only and are subject to change or even overturning.


"much smaller today" meaning literally smaller: Navier didn't know about atoms, 10^-10 m, but now we know about things at 10^-18 m or so.

Sure, our awareness of how much we don't know has grown over time.

The point I was trying to make is that theoretical models (like N-S) not only don't have to be perfect to be useful, but more, are useful precisely because they are not complete. By ignoring irrelevant details we get theory, not just simulation.


I don't disagree with the idea that a model doesn't have to be perfect to be useful. But I do disagree with the prevalent idea that such theories are the "truth" when they are not.

Understanding that a model or theory is useful even when we ignore certain aspects of reality is quite different to the often displayed belief that a specific theory is "gospel" even in the face of anomalies and discrepancies of the real world compared with prediction. Too much of the "theoretical physics" genre (word specifically chosen) is based on the idea that mathematics is the means of finding the "truth".

As I said above, mathematics is a wonderful and useful tool, but it is not a good master. It provides a possible insight into what is going on. However, those insights are not "truth" as such. I have been doing a review of my old mathematics texts for scientists and engineers, as well as other resources. It is interesting that all of them talk of and demonstrate that all the mathematical models are simplified and incomplete. Yet, if one raises the various problems with the various models in use today, one is shouted down. This does not bode well for our advancement in understanding of the universe around us.


It's certainly possible to find such views in theoretical physics. But typically not from very serious people.

In my understanding the big shift was the understanding of renormalisation, Kadanoff and Wilson, around 1970. This took airy ideas about useful approximation and turned them into serious tools, which are both useful for everyday things and illuminating about why any of it works.


You are missing the distinction between a numerical solution to a set of equations and an analytical one. Usually, the analytical solution gives you the dynamics of a system at any point in time and space given a set of parameters.

For numerical solutions you have to run each individual set of parameters to find the corresponding values in time and space (not even considering stochastic equations). This is very computationally expensive.

This is the value in solving these things analytically; hence the prize.


Sorry this should read: “Usually, the analytical solution gives you the dynamics of a system at any point in time and space given ANY set of parameters.”

The “Any” is the value here for the analytical solution.
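A toy example of that distinction, using exponential decay dx/dt = -kx (my choice of system, unrelated to Navier-Stokes): the analytic solution answers for any parameters in constant time, while a numerical solver must re-integrate for every new parameter set.

```python
# Analytic vs numerical solution of dx/dt = -k x, illustrating the
# "ANY set of parameters" point above.
import math

def analytic(x0, k, t):
    """Closed form x(t) = x0 * exp(-k t): answers instantly for ANY (x0, k, t)."""
    return x0 * math.exp(-k * t)

def euler(x0, k, t, steps=100_000):
    """Forward Euler: one full re-integration per parameter query."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x -= k * x * dt
    return x

print(analytic(1.0, 0.5, 2.0))   # e^-1, about 0.36788
print(euler(1.0, 0.5, 2.0))      # about 0.36788, after 100,000 steps
```

The two agree, but one is a formula and the other is a computation whose cost repeats for each (x0, k, t) you ask about; for stochastic or high-dimensional systems the gap grows enormously.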


> computational, atom-by-atom model

Remember that as long as computers continue to be made of atoms, they aren't going to do atom-by-atom simulations unless the simulated systems are much smaller than the computer itself or there are other simplifying assumptions that can be made.


Let me rebut just one part of the post: the example with Newton's laws. Of course everyone knows that Navier-Stokes is just a very nice approximation. That doesn't mean that mathematical results about Navier-Stokes can't improve our understanding of the physics side. For instance, if we find a solution which blows up, that may lead us to a correction to the model or (less likely) it may actually be physically realizable.


>> So, Navier-Stokes assumes infinite divisibility in the fluid it models, right? Which we know is not the case: atoms exist.

Atoms have structure; they consist of... whatever parts. And these parts have structure (like... quarks and whatnot). And... are we sure that quarks do not have structure themselves? Whatever the case is, it is not something that one should declare settled in an HN thread.

Even more interesting... 1+2+3+4... = -1/12

https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B...

And that result actually pops up in physics.

That kind of indicates continuity/infinite divisibility.


> Even more interesting... 1+2+3+4... = -1/12

What.. No. This sum diverges. It does not equal -1/12.

Now if you plug -1 into the Riemann zeta function, you get -1/12. One could interpret that to mean the sum 1 + 2 + 3 + .. can be mapped to -1/12.

But the sum has never, and will never equal -1/12. It diverges, simple as that.


What is the argument here? There are certain ways of assigning a number to the divergent (by partial sums) sum. If they are well-defined, they all arrive at the same number. In this case, -1/12. There is nothing to argue here, that shit has been around for 200 years or more.

And, there are some results in physics that actually measure close to the number -1/12, from something that looks like a sum of 1+2+3... and that's kind of telling us that the whole construction is not just some math sleight-of-hand, it actually has some meaning in the real world.

Example: Casimir effect

So my point is... that kind of suggests the smoothness, or continuity, or differentiability, or whatever we want to call it, of the underlying function. The opposite of discrete. Is what my point was.


Your equation is only correct if you assume that the left hand side is computed using the analytic continuation of the Riemann zeta function. This assumption completely changes how the equation is interpreted and therefore verified. But you don't state this assumption so how are you surprised that people are taking the equation at face value and telling you it's wrong?


Just to clarify, the above is not "my" equation. It is something that mathematicians (not me) figured out, repeatedly, many times over. Of course, the -1/12 number is preposterous, but it does turn out to have a physical meaning.

There are many ways of making the sum, including the limit of partial sums (the one we use most of the time), but there is also Abel summation, Borel summation, Ramanujan, Cesàro, and more. And frankly there is no reason to think that the limit of partial sums is the "right" way. It is surely not "right" in physics. Example: why do we start summation from 1? Is the first element somehow more important than the others? No, that is just our (i.e. human) arbitrary pick.
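For the curious, here is one standard route to the regularized value, sketched numerically (a sketch, not a proof): the alternating series 1 - 2 + 3 - 4 + ... is Abel summable to 1/4, and the identity eta(s) = (1 - 2^(1-s)) * zeta(s) then gives zeta(-1) = (1/4) / (1 - 2^2) = -1/12.

```python
# Numerical sketch of the regularized value discussed above.
# Abel summation: evaluate sum (-1)^(n+1) * n * x^n for x just below 1;
# the closed form x/(1+x)^2 tends to 1/4 as x -> 1.
def abel_eta_minus1(x, terms=50_000):
    """Abel-regularized 1 - 2 + 3 - 4 + ... evaluated at x < 1."""
    return sum((-1) ** (n + 1) * n * x**n for n in range(1, terms + 1))

eta = abel_eta_minus1(0.999)       # approaches 1/4 as x -> 1
zeta_minus1 = eta / (1 - 2 ** 2)   # eta(s) = (1 - 2^(1-s)) zeta(s), s = -1
print(zeta_minus1)                 # about -0.083333 = -1/12
```

Note that no step here claims the partial sums of 1 + 2 + 3 + ... converge; the alternating series is the only one being summed directly, which is exactly the "different definition of summation" point made elsewhere in the thread.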


You likely wanna watch this: https://www.youtube.com/watch?v=YuIIjLr6vUA


> What is the argument here?

Essentially, it consists of text pulled from exactly the same source that you originally cited.

(Cue sound of hands washing in sink.)


There is a standard definition of what '=' means and your equation does not satisfy it.

https://www.youtube.com/watch?v=YuIIjLr6vUA


You say "your" and yet, this has nothing to do with me. I pointed to the wikipedia article. BTW this result is not exactly new, and it has specific confirmation in physics.

You cannot argue that with high-school math. There is more to it.


The bit on Wikipedia that follows the equation is part of the equation. It is the context that explains how to compute it. You need that bit because the equation uses extremely common syntax as shorthand for an operation that is different from the common one. If you write it in a way that doesn't rely on redefining common syntax, like so: "ζ(-1) = -1/12", it seems a lot less interesting, doesn't it? Perhaps syntax confusion is the only interesting thing to it.


There is more to it - the number -1/12 actually pops up in physics when doing something awfully similar to 1+2+3...

This may be useful: https://motls.blogspot.ca/2014/01/sum-of-integers-and-overso...

" The real problem is that the definition of the sum involving the limit of partial sums – limits that way too often "diverge" or "refuse to exist" – isn't the only definition or the best definition or the most natural definition that may be connected to the sum. There exist better definitions of the infinite sum – numerous definitions that turn out to be more natural in physics applications – and they generally produce the result −1/12−1/12. It is no trick or sleight-of-hand. The value −1/12−1/12 is really the right one and the rightness may be experimentally verified (using the Casimir effect). "


The ordinary sum of any collection of positive integers is not a fraction, let alone a negative one.

If we take the set { 1, 2, 3, 4, ... }, no summation or partial summation over any subset of this set, finite or not, converges to a fraction or a negative number. No matter what order we choose for traversing that set to generate a series, we never see anything resembling -1/12 as a partial sum or limit.

The ordinary arithmetic sum of any collection of positive integers is an integer which is strictly greater than each of the integers.

The pages you're referencing are all crackpottery.


>> The pages you're referencing are all crackpottery.

So... I can trust Ramanujan and Abel, who published results on these things, and Terence Tao, who has a nice writeup, and a bunch of others, or I can trust HN user "Kazinator", who... published some middle-school algebra "proof" on HN.

Guess who is crackpot here.

>> https://en.wikipedia.org/wiki/Ramanujan_summation


I would say, it is he who cannot separate "summation" from "Ramanujan summation" due to the overlapping notation.


There are different definitions of "summation". Also different definitions of "integration".

I can see how overlapping notation can be confusing for laymen (me included).

But that does not mean that people that came up with those are crackpots, just because they reused the "=" symbol.


Indeed, and "summation" is not the same as "addition".

Unfortunately, the syntax chosen for representing the sequence is that of an additive series, where a binary + operator is interposed between terms.

The semantics being shown does not seem to follow from a redefinition of that operator per se as a stand-alone binary operator.

You really want to show this as, say

  fun([1 2 3 4 ... ])  
a function applied to a vector. Why the algebraic rules seem to work is because fun is a linear operator; i.e.

  n fun([x0 x1 ... ]) = fun([nx0 nx1 ...])
and

  fun(v0 + v1) = fun(v0) + fun(v1)
We can apply these rules back to the original 1 + 2 + 3 ... notation and then they look like algebra.

Basically, none of this means that the natural numbers add to -1/12; only that the sequence of natural numbers can be fed into some decimating calculation which ends up with -1/12.

Well, no kidding; the sequence of natural numbers can be fed into a decimating calculation which converges on any value you want, if you can freely choose the decimating calculation, and that calculation can be chosen to be a linear operator.

Note that we don't have:

   fun([x0 x1 x3 ... x42 ...]) = fun([x3 x42 x0 ... x1 ...])
which would be required to hold if this were addition. We can't change the positions of the terms. Why? Because they correspond to different powers in a power series.


> 1+2+3+4... = -1/12

Only through a derivation which involves some cute but unsound pseudo-algebra on infinite series.

See, in the same page, the remark "Generally speaking, it is incorrect to manipulate infinite series as if they were finite sums".


Just because you don't understand doesn't mean it's "unsound pseudo-algebra." There are ways to properly manipulate infinite series, and Wikipedia does actually explain some of them on the page, even if it relies on a generally-unsound way to illustrate the intuition to a lay reader.

It is very well-known and long-recognized that our natural intuition is very wrong when it comes to infinities and infinite objects, and mathematics had a crisis in trying to come to grips with it.


>> some cute but unsound pseudo-algebra on infinite series.

Sure it looks like that, but, amazingly, there is a physical meaning to that number.

I replied here: https://news.ycombinator.com/item?id=16164915


It does pop up in physics calculations, but how does that "indicate continuity/infinite divisibility"? Many things pop up in physics calculations; most of them are just tools to represent a model.


Good question, I guess I should have made that more clear.

Terence Tao has a nice lecture published... trying to find it, brb.

In any case, the gist of it was: as long as we treat the numbers in the sum as integers (i.e. discrete), we will get ill-defined sums, contradictions, infinities, and so on.

Once we switch to real numbers (and there is an underlying function that is differentiable), things "can" be made to work and converge. To -1/12, always.

So that's it. In particular, if we decided that our smallest unit of measure is... the size of an atom, or whatever finite value, many of these techniques would just fail to work, and they wouldn't match what we measured. Which means something is wrong. Maybe things really are continuous. Or maybe they are not, and it is just that our math is not sophisticated enough.

Obviously this is an open question.


> "The answer, I discovered, is turbulence. It’s something we’ve all experienced, whether flying through choppy air at 30,000 feet or watching a whirlpool gather in the bathtub drain. Yet familiarity hasn’t bred knowledge: Turbulence is one of the least understood parts of the physical world."

This immediately made me think of "roughness" and Benoit Mandelbrot.

> “When you zoom in on a point, from a mathematical point of view you lose information about the solution,” said Vicol. “But turbulence is meant to describe exactly this — the transfer of kinetic energy from large to smaller and smaller scales, so it’s exactly asking you to zoom in.”

You can explore the Mandelbrot set in similar fashions. There are many tools online to do so. The more you zoom in, the more you get completely lost from what the original image looks like.
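A tiny escape-time sketch of that zooming (all parameters here are arbitrary choices of mine): render the same ASCII view of the set at two magnifications around one boundary point, and the structure at the second scale looks nothing like the first.

```python
# Minimal ASCII Mandelbrot renderer illustrating zooming into the set.
def render(center, half_width, size=31, max_iter=60):
    """Escape-time render: '#' marks points that never escape |z| <= 2."""
    rows = []
    for i in range(size):
        row = ""
        for j in range(size):
            c = complex(center.real + (j / (size - 1) - 0.5) * 2 * half_width,
                        center.imag + (i / (size - 1) - 0.5) * 2 * half_width)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row += "#" if n == max_iter else " "
        rows.append(row)
    return "\n".join(rows)

# Same point near the boundary, first wide, then magnified 100x.
print(render(complex(-0.743, 0.131), 1.5))
print(render(complex(-0.743, 0.131), 0.015))
```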

If this kind of stuff interests you, his TED talk is among my favorites and can be found here:

https://www.ted.com/talks/benoit_mandelbrot_fractals_the_art...


Thank you for linking the TED talk. I did not know that Mandelbrot had also applied his fractal analysis to financial data.


The article is probably underselling general rel here. The Einstein equation is famous for packing a whole lot of 4-dimensional, nonlinear tensor nastiness into an elegant little package.

You need to add something like an equation of state to turn it into a complete dynamical system, but I suspect that there is enough nastiness in the gravity part alone that it will be difficult to solve in many regimes. What saves us is that we are usually concerned with low-temperature, low-density systems, or else highly symmetric ones like stars and black holes.


Deterministic Chaos is a real bastard.

There was a disturbing result a while back about the Navier-Stokes equations in which the existence of non-unique solutions was shown: https://arxiv.org/abs/1709.10033


What makes a problem interesting to mathematicians is often not the result itself but the expectation that the proof will require techniques and philosophical ideas that change the way the general theory is handled.


OT: some beautiful images of turbulence on Jupiter ...

https://www.theatlantic.com/photo/2018/01/gorgeous-images-of...


When I read about the hardest equations, I thought it would be the equations of either General Relativity or the Quantum Mechanics of complex systems. Both are much more complex than Navier-Stokes.

For Navier-Stokes we at least have numerical methods that allow us to solve the equations in practical time for useful cases. But we still do not know how to model, say, the dynamics of a galaxy using General Relativity. Typically such models just assume Newtonian mechanics with minimal if any relativistic corrections, with no proof that one can use such approximations of the equations of GR on large scales.

Or consider the problem of the possible states of metallic hydrogen. It is just a bunch of protons and electrons mixed together, the simplest possible material. Yet we cannot calculate its properties from first principles.



