You need to solve a 5-dimensional (+ time) nonlinear integro-differential equation. Not only that, but you need to keep it going for a long enough time for turbulent fluctuations to average out, and cover both ion and electron scales (that last bit is the big new contribution here).
And once you're done with that, you need to repeat it at even higher resolution as evidence that what you're seeing is physical.
Not all differential equations are nicely behaved. Many do not have analytic solutions at all, meaning that we can't even write down a function that satisfies the equation. However, such equations can still be solved approximately using computational techniques. This can be incredibly difficult and computationally expensive (note the huge resources required to achieve this breakthrough) but can produce very useful results.
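A minimal sketch of what "solved approximately using computational techniques" can mean, using the explicit Euler method on a toy ODE, dy/dx = sin(x*y), which has no known closed-form solution. The equation, interval, and step counts are illustrative choices, nothing from the article:

```python
import math

def euler(f, y0, x0, x1, n):
    """Advance y' = f(x, y) from x0 to x1 in n fixed Euler steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)  # follow the local slope for one small step
        x += h
    return y

# No formula for y(2) exists, but we can march toward it numerically,
# and check convergence by halving the step size.
approx = euler(lambda x, y: math.sin(x * y), y0=1.0, x0=0.0, x1=2.0, n=10_000)
```

Real solvers (adaptive Runge-Kutta and friends) are far more sophisticated, but the pattern is the same: trade closed-form insight for many cheap arithmetic steps.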
It turns out that one of the simplifying assumptions that physicists have been making over the years does change the result. Giving up the "spare change" does make you "go bankrupt." In this case, ignoring small-scale turbulence makes you lose a significant amount of heat. But figuring that out required some really hairy math and a butt-load of computing power.
Is that simple enough for you?
True in general, but it would be nice to meet the five-year-olds for whom it's the right forum.
Explain like I have a high school physics education.
And even with that background, our foreign lecturers were regularly disappointed at our lack of mathematics competency at university ("Now I have to waste the next 4 weeks teaching you things we learnt in high school in my home country before I can move on to what this subject is actually about").
These were Asian/Indian and a Ukrainian lecturer, that I recall.
I know Russia values math highly and does push their students, but there's a limit to believability.
I had the same curriculum but was utterly bewildered by derivatives being the first topic. For the simple reason that until that point I had habitually looked at "wholes" in the external world, only then to wonder "what kinds of things progressively make it up?"
I feel like to derive something, you start with the assumption that you know what you're talking about first. Yet the thing is actually the sum of its parts. Therefore integration seems more natural at first, and only then can things similar to it be derived with confidence.
So the teaching order seemed backwards. I wonder if anyone has felt the same way.
They start with calculating the slope on a curve f(x) as lim h->0 [f(x+h) - f(x)]/h, spend a few weeks deriving various derivatives, show how you can find the tops/bottoms of curves (derivative = 0), talk about limit theory a bit, then take derivatives of a bunch of more complicated functions.
Then, once that's figured out, take the inverse and look at areas under curves, volumes, etc... My brain lost it at integration by parts (so many parts) - too much memorization - and so came an end to my mathematics education in that space.
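Both classroom definitions above can be sketched numerically in a few lines; the test function x**2, the step size h, and the subdivision counts below are just illustrative choices:

```python
def forward_difference(f, x, h=1e-6):
    """Approximate the limit definition of the derivative with a small finite h."""
    return (f(x + h) - f(x)) / h

def riemann_sum(f, a, b, n=100_000):
    """Approximate the 'area under the curve' with midpoint rectangles,
    the way integration is usually introduced."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Slope of f(x) = x^2 at x = 3 should approach 6 as h shrinks;
# the area under x^2 from 0 to 3 should approach 9.
slope = forward_difference(lambda x: x * x, 3.0)
area = riemann_sum(lambda x: x * x, 0.0, 3.0)
```

Either one can be taught first; the code makes it clear they're independent ideas that the Fundamental Theorem of Calculus later ties together.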
I'm happy they were taught in that order, mostly on being exasperated with having to memorize all the integration by parts formulas.
I have an online series that starts with the historic order which you might like:
Scientists often assume that you can accurately approximate stuff by neglecting irrelevant parts of the physics (basketballs vs ping pong balls; protons vs electrons). Turns out that ping pong balls greatly affect the outcome and amount of heat/energy dissipation in the basketballs.
This doesn't surprise me too much, since critical behavior often happens in regimes where two separate aggregate effects become relevant. But the details are all still just a big mystery :)
P.S. someone posted a much better description than the OP. I'll reiterate: http://news.mit.edu/2016/heat-loss-fusion-reactors-0121
Now there are two main parts to the stuff inside the reactor, ions & electrons. Think of them as mac & cheese. When the reactor gets really hot, these bits start jostling around. (It's in the nature of hot things to want to move more, and really "hot" means that the little atomic components of everything are wiggling and moving faster and more excitedly.) Think of boiling the mac and cheese. For a long time scientists thought that the wiggles of the ions would diminish the wiggles of the electrons, because 1) the ions & electrons want to be near each other so they form a clump (like how the noodles absorb the cheese but waaaay stronger) and 2) because each ion/noodle is way bigger than each piece of cheese powder/electron, so its wiggles should be way stronger.
Now to figure these things out, scientists have to use computers to predict how it all might work. But the formulas they use don't have a single answer they can just solve by hand. Instead the computer can take a guess and then spend a lot of time seeing how good the guess was, and trying to make it better. A computer can do things fast, but it's hard because when it does math, it's only figuring out how a single piece of cheese dust moves at a time! There are so many little pieces and knowing how a few cheese dusts move won't give us a good idea what a big pot of mac n' cheese is like. (Could you learn how sand feels by holding one grain?) So they have to keep the computer running until it can tell them what a lot of cheese moves like. Even harder is that they need to watch the cheese move for a long time to make sure it's "stable". (For example, if you put mac and cheese back on a hot stove it might be quiet for a bit before suddenly a big cheese bubble bursts and splatters everywhere. We need to know if there are bursting bubbles or not, so just looking for a second won't be enough to tell us.) So the computer also has to do this guessing for many little tiny moments, until they add up to a longer amount of time. Then they can figure out how the sauce behaves when you leave it on the stove for a while.
Ultimately these scientists found that the ions & electrons don't act exactly like we thought. The heat actually causes the electrons/cheese to stretch out into long strands that pull away from the main blob! Also when the cheese starts swirling around, instead of bumping into the noodles and stopping, they can bump strong enough that the noodles and cheese all start swirling together crazily. These two things give us big clues on why our reactors won't stay as hot as we want them to, because they explain why the mac and cheese gets so wiggly and wild, and with the discovery we can now try to find new ways to build better reactors that work better.
All the guessing by the computers was so much work it would've taken a single computer 15 million hours (over 1700 years!), but they got 17000 computers to all work as a team and give them answers in about a month. Then the scientists could get a couple answers in a year, which made them feel more confident that they probably did things right.
NOTE: Totally not a physicist or anything, and I probably started with a bad analogy, but wanted to try this as a writing challenge. Also I know you probably didn't want it this simplified, but I tried to take "ELI5" at face value for once.
What MIT did was make it easier to open all these poke balls in a sustained way. This is important because once fusion is sorted out, our political leaders like Ted Cruz could use all that energy to do really interesting things.
One side effect is that a lot of poke monsters will be released as well, which could cause problems...and so we as a society need to think about the long term ramifications.
MIT did not overhype this. The MIT news office used the title "New finding may explain heat loss in fusion reactors". The site from which this was taken added the clickbait title.
Now that the model is fine enough to explain the turbulence observed in the experiments, one might hope that this will help suggest design changes to better deal with it. However, the MIT article does not suggest any practical design modification yet.
But it is another thing entirely to take such a huge simulation and try to use many of them to do tokamak design optimization.
>> Now, researchers at General Atomics are taking these new results and using them to develop a simplified, streamlined simulation that could be run on an ordinary laptop computer
Check small numerical cases on an already-validated code.
Check problems for which you have an analytic solution (in this case, maybe many laminar flow scenarios).
Check problems where the 'weird part' (ie the turbulence in this case) is limited to one small part of the simulation.
Compare only moderately complicated simulations to experiments that are already entirely understood.
It helps that the model creates the same outcomes as the actual observations (and the old model didn't). This isn't an ironclad guarantee that the model reflects reality, but it suggests that we're moving toward a more accurate reflection of nature.
> I've always wondered how modeling these types of complex systems works.
To explain I will use a simpler example -- weather forecasting. In weather forecasting, we need to model the atmosphere. It's not possible to model the entire atmosphere directly, instead we break the atmosphere into a bunch of cubes, each with an initial temperature and humidity and a few other things.
Then we process the cubes in a supercomputer consisting of many processors running in parallel -- one processor per cube. The actual computation consists of seeing what effect a cube's neighbor's pressures, temperatures, etc. have on the tested cube, for a brief interval of time. Then we repeat the process for another slice of time. Ad infinitum.
This is obviously a numerical process, with no overarching equation that explains it all, but with some basic first principles at work (a theme in modern physics). Such a process becomes more accurate if we can create more, smaller cubes, and for that we need more and faster processors. So most of modern supercomputer design involves thinking of ways to acquire more and faster parallel processors.
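The cube-by-cube, slice-of-time process described above can be sketched as a toy 1D diffusion update, where every cell is nudged toward its neighbours each step. The cell count, coupling constant, and initial hot spot are invented for illustration; real forecast models solve far richer physics, but the update pattern is the same:

```python
def step(temps, alpha=0.1):
    """One slice of time: each interior cell feels its neighbours briefly."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + alpha * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

cells = [0.0] * 50
cells[25] = 100.0          # a single hot cell in a cold "atmosphere"
for _ in range(200):       # repeat for many slices of time
    cells = step(cells)
# the heat has now spread into a smooth bump around cell 25
```

In a parallel code, each processor owns a block of cells and only exchanges the values on its block's edges with neighbouring processors each step, which is why the communication pattern, not raw arithmetic, tends to dominate.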
The kind of model that led to the discussed result follows the same basic pattern. The irony is that the most often cited reason for having supercomputers -- to be able to say whether it will rain 72 hours from now -- is also the least likely to succeed, because of the role played by quantum mechanics (i.e. by chance processes) in earth's atmosphere.
* there are most definitely some overarching equations that explain it all. We just can't solve them analytically. But we use them to write the code.
* when you say "one processor per cube": if you mean cube = grid cell you're way off. You want to have at least ~ 10 000 grid cells per processor to have any level of performance.
* it is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".
* getting more performance for this type of application is not about getting more cores, it's about getting more memory bandwidth. That's why e.g. GPUs are useless for speeding up weather forecasting.
* funfact: weather simulations today are not running any faster than in 1994; it's at about 15 minutes per simulation. The increases in computing power are used to a) increase grid resolution and b) run more replicas.
* The last point is what makes modern weather simulations tick. You don't run one simulation, you run 1000, each with slightly different initial data, and you analyse the ensemble results.
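The "exponential sensitivity to variations in initial data" point is easy to demonstrate with the logistic map, a standard toy stand-in for chaotic dynamics (the parameter r = 3.9, the perturbation size, and the step count are arbitrary illustrative choices):

```python
def trajectory(x0, r=3.9, steps=50):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)   # initial data differs by only 1e-6
# within a few dozen iterations the two "forecasts" disagree completely
```

This is also the rationale for the ensemble approach in the last bullet: since tiny initial-data errors are unavoidable, you run many perturbed copies and look at the spread of outcomes rather than trusting any single run.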
Not so, not for Earth's atmosphere, the topic I was discussing. There are first principles, but no overarching equations, analytically expressible or not. Prove me wrong -- show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.
> * when you say "one processor per cube": if you mean cube = grid cell you're way off. You want to have at least ~ 10 000 grid cells per processor to have any level of performance.
It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true. I say this on a day when the Raspberry Pi Zero is so much in demand that I can't find one for sale.
> * it is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".
Both play a part. Quantum mechanics is thought to be the final obstacle to long-term weather forecasting, and chaos theory (extreme sensitivity to initial conditions) is the mechanism by which small initial causes lead to large effects.
Your last three points weren't corrections, so no need for comment.
The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classical book "Weather Prediction by Numerical Process" .
> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.
No. You didn't read what I wrote. Unless some radical paradigm shift occurs in both hardware and algorithm design, strong scaling is never going to pay off once you get less than about 10 000 grid cells per CPU. Core cost is not the issue; power usage over the cluster lifetime is already a huge cost. Memory bandwidth is the issue, and Linpack (peak float) performance is so irrelevant even management has started to ignore it.
> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting
[By whom?][Citation needed]. Of course QM is behind how atoms and molecules behave, but that doesn't mean any QM result will ever improve weather prediction. And, why stop at quantum mechanics? Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?
> The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classical book "Weather Prediction by Numerical Process" .
I will say this one more time: "As an equation". There is no equation that describes the system I described, there are only algorithms that cannot be expressed in closed form. The fact that they cannot be expressed or solved in closed form is revealed by the language of your reference: "by Numerical Process".
Numerical algorithms are used -- must be used -- for system that don't have a closed-form solution, i.e. an equation that describes the system. Weather and atmospheric physics are in that category. My specific example -- arctic temperature, rainfall and snow cover -- represents a system that we can't even model with any reliability, much less hope to express as an equation.
Another example, one that surprises many people, is the fact that there is no equation able to describe an orbital system with more than two masses. Those systems must also be solved numerically. (https://en.wikipedia.org/wiki/Three-body_problem)
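The "must be solved numerically" point can be sketched with a minimal three-body stepper. The G = 1 units, the particular masses, positions, velocities, time step, and step count are all invented for illustration, and a production integrator would be far more careful about accuracy near close encounters:

```python
def accelerations(pos, masses):
    """Pairwise Newtonian gravity (G = 1) on a list of 2D point masses."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def step(pos, vel, masses, dt=1e-3):
    """Semi-implicit Euler: kick velocities, then drift positions."""
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
    return pos, vel

# Three equal masses in a rough triangle; no formula gives their future
# positions, so we march forward step by step.
pos = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vel = [[0.0, 0.5], [-0.4, -0.25], [0.4, -0.25]]
masses = [1.0, 1.0, 1.0]
for _ in range(1000):
    pos, vel = step(pos, vel, masses)
```

Conserved quantities like total momentum are the usual sanity check that such a stepper is implemented correctly, since there's no closed-form answer to compare against.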
> ... but that doesn't mean any QM result will ever improve weather prediction
So? I made the exact opposite claim -- that the effect of QM on weather systems may prevent accurate forecasts past a certain time duration.
>> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.
> No. You didn't read what I wrote.
Who cares what you wrote? You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.
>> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting
> [By whom?][Citation needed].
I didn't make an evidentiary claim, I expressed a commonly heard opinion, so I don't have to provide a citation.
> Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?
The evidentiary burden is yours to try to support the idea that weather isn't affected by fundamental particles.
> You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.
Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?
> the effect of QM on weather systems may prevent accurate forecasts past a certain time duration.
No, and there are two reasons why, one practical and one fundamental. Practical first: The key here is that "exponential sensitivity to initial conditions" means you have to have precise measurements of the weather at close to the scale where QM is significant before QM can have an effect. To even think about approaching a description of that precision would require measurements of wind, temperature etc. at a grid resolution of much less than 1 mm. Leaving the huge practical problems with a measurement like that aside, if you were to measure with an (insufficient) 1 mm grid just air velocity and temperature for the Continental US, the data storage required would be 1 000 000 000 000 000 Terabytes of data for a single point in time. This is 1 000 000 000 times the total storage capacity of the largest supercomputer in the world. For a single second, and what you want is a time series spanning many days. And you're not even beginning to approach the regime where QM becomes important, that would require a storage capacity 10^18 times larger than this already absurd storage capacity. We're talking about 10^30 Terabytes of data at each instant in time!
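For what it's worth, the storage figure above roughly checks out as a back-of-envelope calculation. All the inputs here are assumptions: ~8 million km^2 for the Continental US, a 10 km deep atmosphere, a 1 mm grid, and four 8-byte values per cell (three velocity components plus temperature):

```python
# Back-of-envelope check of the ~1e15 TB claim (all numbers assumed).
area_mm2 = 8e6 * 1e6 * 1e6      # ~8e6 km^2 converted to mm^2
depth_mm = 10 * 1e6             # 10 km of atmosphere in mm
cells = area_mm2 * depth_mm     # number of 1 mm^3 grid cells
bytes_per_cell = 4 * 8          # 3 velocity components + temperature, 8 B each
terabytes = cells * bytes_per_cell / 1e12
# lands in the 1e15-1e16 TB range, consistent with the figure quoted above
```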
The other and more fundamental objection to your assertion is the vast discrepancy between the Kolmogorov length scale, i.e. the smallest scale of variations in the flow, and the scale of QM. The Kolmogorov length scale for atmospheric motion lies in the range 0.1 to 10 mm. At scales much smaller than this, such as in QM, the flow is locally uniform everywhere.
There is no equation that describes that system. Which part of this is confusing you? A numerical algorithm is not an equation. My other example, easier to grasp, was the three-body problem -- the existence of a numerical solution doesn't mean there's an equation that describes a three- (or more) body orbit, quite the contrary (it has been proven that no such equation can exist) -- such orbits must be solved numerically, and there is no overarching equation, only an algorithm.
The presence of an algorithm doesn't suggest that there's an equation behind it. Here's another example -- the integral of the error function used in statistics. It's central to statistical calculations, there is no closed form (i.e. no equation), consequently it must be, and is, solved numerically everywhere. This is just one of hundreds of practical problems in many disciplines for which there is no equation, only an algorithm. Reference:
We can locate/identify prime numbers with reasonable efficiency. Does this mean there's an equation to locate prime numbers? Well, no, there isn't -- there's an algorithm (several, actually).
We can compute square roots with reasonable efficiency. Does this mean there's an equation that produces a square root for a given argument? As Isaac Newton (and many others) discovered, no, there isn't -- there's an algorithm, a sequential process that ends when a suitable level of accuracy has been attained.
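The square-root case can be made concrete as Newton's iteration: a sequential process that stops when a chosen accuracy is reached, rather than an equation evaluated once (the tolerance and starting guess below are arbitrary choices):

```python
def newton_sqrt(a, tol=1e-12):
    """Newton's method for sqrt(a): repeatedly average x and a/x
    until x*x is within tol of a."""
    x = a if a > 1 else 1.0      # crude starting guess
    while abs(x * x - a) > tol:
        x = (x + a / x) / 2
    return x
```

The point being argued: the *procedure* is completely well-defined, but no step of it is "the equation for sqrt(a)"; you choose how accurate you want the answer and iterate until you get there.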
I could give hundreds of examples, but perhaps you will think a bit harder and arrive at this fact for yourself.
> To even think about approaching a description of that precision would require measurements of wind, temperature etc. at a grid resolution of much less than 1 mm.
Your argument is that, because we can't measure the atmosphere to the degree necessary to associate changes with the quantum realm, we therefore can rule it out as a cause. Science doesn't work that way. Remember that I didn't say it was so, I said it was a matter of active discussion among professionals, as it certainly is.
> At scales much smaller than this, such as in QM, the flow is locally uniform everywhere.
What an argument. It says that even though at larger length scales there is turbulence that prevents closed-form solutions (and in this connection everyone is waiting for a solution to the Navier-Stokes equations, which may ultimately be a pipe dream), as the length scale decreases things smooth out and become uniform (I would have added "predictable" but you had the good sense not to make that claim). This contradicts everything we know about nature in modern times, and contradicts the single most important property of QM.
> Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?
Would you like to make the argument that, as time passes and as processors become less expensive, faster and more numerous, any such argument isn't undermined by changing circumstances?
This paper (PDF warning):
Makes the unsurprising argument that, as time passes and processor costs fall, matrices are broken into more and smaller parallel subsets in the name of rapid throughput (with appropriate graphics to demonstrate the point). The end result of that process should be obvious, and at the present time, 10,000 serial computations per processor is absurd -- this is not a realistic exploitation of a modern supercomputer. In reality, more processors would each be assigned fewer cells, because that produces a faster result. This is not a difficult concept to grasp.
- Approximation to the model
(- Compiler & hardware)
So, your answer might not match experiment. Maybe the experiment is wrong. Maybe it's an expected effect due to an approximation you used. Maybe your integration grid isn't sufficiently converged. You just have to systematically eliminate these, often by comparing to more expensive but verified simulations, or analytic solutions on simplified models.
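One common way to "systematically eliminate" the grid-convergence suspect is a refinement check against a case with a known analytic answer. The toy trapezoid integrator and the sin test case below are invented for illustration (exact value: the integral of sin on [0, pi] is 2):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n uniform subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

coarse = trapezoid(math.sin, 0.0, math.pi, 100)
fine = trapezoid(math.sin, 0.0, math.pi, 200)
exact = 2.0
# Trapezoid error is O(h^2), so halving h should cut the error by ~4x.
# If the observed ratio is far from 4, the grid (or the code) is suspect.
ratio = abs(coarse - exact) / abs(fine - exact)
```

The same idea scales up: run the expensive simulation at two resolutions, confirm the answers change at the rate the method's order predicts, and only then start blaming the physics model or the experiment.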
This is beyond any HPC that I have ever witnessed before. Maybe CERN does more, but I don't quite know the details of it.
E.g. saying that "the switch to the heliocentric worldview was a major breakthrough for geocentric theory" sounds really weird to me.
1) The reactors have been failing to generate sustained plasma due to specific failures.
2) The simulation models of the reactors did not correctly predict those failures.
3) The new model correctly predicts those failures.
4) Now you can iterate designs using the model with the expectation that these particular failures can be identified before building a full reactor.
The heliocentric theory was a breakthrough for a variety of technologies. We certainly would have found going into space much more difficult if we still followed the geocentric theory.
Another way to put it is that they improved the model.
That _does_ bring us closer to improving actual reactors.
All told, the mainstream scientific community has been very responsible lately in what claims they make about fusion. It's the contractors and startups who think career scientists don't know what they're doing that are over-hyping their pet schemes.
I honestly wish them the best of luck, but what I resent is the credibility gap that fusion scientists work under due to their broken promises.
If you can simulate a design hypothesis instead of having to build an actual reactor, you can make a lot more progress. Understanding the phenomena should also inform the design process.
It seems to be a step toward making our models agree with actual observations, which in general is a very good thing. As to whether it's a big deal WRT actual fusion, that will have to wait until someone thinks of a way to apply this result to present tokamak designs.
And from what I've read, don't fusion reactors also become radioactive from the high-energy neutrons? Yes the fuel isn't radioactive... but they also need to be scrapped/reprocessed after (pulling number out of butt) five years?
The great thing about pressurized-water reactors and boiling water reactors is that they work on water, which is easy to handle and doesn't itself become highly radioactive. The radioactive portions of commercial nuclear reactors are very simple, with very few moving parts. The complexity is outside the reactor vessel.
Most of the alternative reactor designs have much more complex radioactive portions. Pebble bed reactors get pebble jams. (An experimental pebble-bed reactor in Germany has been jammed for decades, and still can't be decommissioned.) Helium-cooled reactors leak. (Ft. St. Vrain was so promising.) Sodium cooled reactors have sodium fires.
That's why alternative nuclear technologies haven't caught on. If you need 40 years of trouble-free operation to make a plant pay, none of those technologies qualify.
The waste coming out of LFTRs has to be stored ~400 years (which is actually a sane timespan, and it's easy to find places underground that stay stable for 400 years).
Another bonus is the fact that Thorium is much easier to find than Uranium, and that an LFTR can be built more or less failsafe.
That said, I'm convinced we'll have to deal with it. Fusion will be ready when the uranium runs out.
Nuclear waste can and should be recycled, which would reduce the amount of waste.
Eventually it will be possible to use most of the waste as fuel.
>...That said, I'm convinced we'll have to deal with it. Fusion will be ready when the uranium runs out.
With breeder reactors, we could run the world on nuclear power and have enough fuel for tens of thousands of years. By then I would hope fusion would be ready.
Frederick de Moleyns, who invented the electric light bulb in 1841, had the right idea - use a metal wire with a really high melting point. He used platinum. Worked, but cost far too much. Edison and Swan figured out how to make a cheap electric light bulb, using carbonized paper. There was a long detour through various forms of carbonized cellulose, including paper, bamboo, and extruded cellulose. Bulb life was short and efficiency was low, but it worked. Then there was a brief detour into tantalum wire around 1902. Finally, Coolidge's process for making ductile tungsten wire was developed at General Electric, and thin tungsten wire could not only be made, but worked easily. Filaments could be coiled up into compact forms. Ductile tungsten lamps came out in 1908. That was it. Incandescent lamps didn't change much over the next century.
Yes, and this result suggests that we now have a model that's better at making such a prediction -- more finely tuned to agree with nature.
Finally, how is this research money zero-sum? It's not like nuclear is completely broke or something because we're spending all this money on fusion.
By analogy: the first fusion reactor is Napster. Obviously the traditional oil/gas co's play the part of the record labels, trying to shut it down. And then who will become Apple/iTunes of the energy industry?
> simulation required 15 million hours of computation, carried out by 17,000 processors over a period of 37 days ...
> Now, researchers at General Atomics are taking these new results and using them to develop a simplified, streamlined simulation that could be run on an ordinary laptop computer
> they show that this is a general phenomenon, not one specific to a particular reactor design.
>In a result so surprising that the researchers themselves found it hard to believe their own results at first, it turns out that interactions between turbulence at the tiniest scale, that of electrons, and turbulence at a scale 60 times larger, that of ions, can account for the mysterious mismatch between theory and experimental results.
So basically they forgot the fact that a single pebble thrown into a pond causes ripples, or that a candle can light a room? Sounds like they forgot to apply the KISS principle and overcomplicated things a bit; which I can safely say I probably do at least once a day.
Damnit, Steve. How many times do we have to tell you?