
New finding may explain heat loss in fusion reactors - MVBaks
http://news.mit.edu/2016/heat-loss-fusion-reactors-0121
======
drjesusphd
Note what is required to do something like this:

You need to solve a 5-dimensional (+ time) nonlinear integro-differential
equation. Not only that, but you need to keep it going for a long enough time
for turbulent fluctuations to average out, and cover both ion and electron
scales (that last bit is the big new contribution here).

And once you're done with that, you need to repeat it at even higher
resolution as evidence that what you're seeing is physical.

Bravo!

~~~
wnevets
I know some of those words. Anyone have an ELI5?

~~~
schwarrrtz
Take a simple differential equation as an example, like dy/dx = x. To solve
this we need to find a function y(x) whose derivative is equal to x. This
particular differential equation is well-behaved, and the solution is simply
0.5*x^2, which you can verify using basic calculus.

Not all differential equations are nicely behaved. Many do not have analytic
solutions at all, meaning that we can't even write down a function that
satisfies the equation. However, such equations can still be solved
approximately using computational techniques. This can be incredibly difficult
and computationally expensive (note the huge resources required to achieve
this breakthrough) but can produce very useful results.
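
To make the "solved approximately using computational techniques" part concrete, here is a minimal sketch (my own toy example in Python, not anything from the paper) of the simplest such method, forward Euler, applied to the dy/dx = x equation above:

```python
# Forward Euler for dy/dx = x: step an approximate solution from x=0 to x=2.
# The exact answer is y = 0.5 * x**2, so the error can be checked directly.

def euler_solve(f, y0, x0, x1, n_steps):
    """Advance y in small steps h, using y_{k+1} = y_k + h * f(x_k, y_k)."""
    h = (x1 - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

approx = euler_solve(lambda x, y: x, y0=0.0, x0=0.0, x1=2.0, n_steps=1000)
print(approx, 0.5 * 2.0**2)  # ~1.998 vs. 2.0; the gap shrinks as n_steps grows
```

The plasma simulation in the article uses far more sophisticated schemes in five dimensions plus time, but the underlying idea of marching forward in small steps is the same.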

~~~
stanmancan
How many 5 year olds do you know that understand "well behaved differential
equations" and derivatives?

~~~
andrepd
ELI5 is not for actual 5 year olds.

~~~
stanmancan
Agreed, but I was implying that he didn't simplify it nearly enough. I
actually understood the parent post more clearly than this ELI5 version.

~~~
lisper
The real math that describes the behavior of fusion reactors is too
complicated to actually carry out, so physicists have to make some simplifying
assumptions. They try to choose assumptions so that the results they get are
still valid, kind of like when you say, "Keep the change" at the grocery
store. You assume that giving up small amounts of money on occasion won't make
you go bankrupt, even though you're not actually adding it up all the time to
keep track of how much you've given up.

It turns out that one of the simplifying assumptions that physicists have been
making over the years _does_ change the result. Giving up the "spare change"
_does_ make you "go bankrupt." In this case, ignoring small-scale turbulence
makes you lose a significant amount of heat. But figuring that out required
some really hairy math and a butt-load of computing power.

Is that simple enough for you?

~~~
stanmancan
Yes! That was _way_ better, thank you!

~~~
lisper
You bet.

------
rwbhn
MIT article this points to: [http://news.mit.edu/2016/heat-loss-fusion-
reactors-0121](http://news.mit.edu/2016/heat-loss-fusion-reactors-0121)

~~~
dang
Ok, we changed to that from [http://futurism.com/nuclear-fusion-breakthrough-
mit-experime...](http://futurism.com/nuclear-fusion-breakthrough-mit-
experiments-help-reveal-source-heat-loss/). Thanks!

------
matt2000
I'm always suspicious of "breakthrough" announcements when it comes to fusion,
especially ones that originate from a college's PR office. Can someone put
this in context for us? Is it a big deal?

~~~
ogrisel
It's a theoretical breakthrough that explains an unexpectedly bad behavior of
current designs that was not understood until now: a discrepancy between
theory and experimental observations.

Now that the model is fine-grained enough to explain the turbulence observed
in the experiments, one might hope this will help suggest design changes to
deal better with the turbulence. However, the MIT article does not suggest any
practical design modification yet.

~~~
semi-extrinsic
It is hard enough to write the code for, and then run, a single simulation of
coupled plasma turbulence taking 15 million CPU-hours to explain a phenomenon.

But it is another thing entirely to take such huge simulations and try to use
many of them to do tokamak design optimization.

~~~
S_A_P
I would be curious to know the process for even modeling this sort of thing.
How do they know they got it right? How accurately does this model existing
experiments? I'm also wondering if they model to a result vs. modeling to what
should actually be happening. Would anyone learned on this subject please
chime in? I've always wondered how modeling these types of complex systems
works.

~~~
lutusp
> I would be curious to know the process for even modeling this sort of thing.
> How do they know they got it right?

It helps that the model creates the same outcomes as the actual observations
(and the old model didn't). This isn't an ironclad guarantee that the model
reflects reality, but it suggests that we're moving toward a more accurate
reflection of nature.

> I've always wondered how modeling these types of complex systems works.

To explain I will use a simpler example -- weather forecasting. In weather
forecasting, we need to model the atmosphere. It's not possible to model the
entire atmosphere directly, instead we break the atmosphere into a bunch of
cubes, each with an initial temperature and humidity and a few other things.

Then we process the cubes in a supercomputer consisting of many processors
running in parallel -- one processor per cube. The actual computation consists
of seeing what effect a cube's neighbor's pressures, temperatures, etc. have
on the tested cube, for a brief interval of time. Then we repeat the process
for another slice of time. _Ad infinitum._
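
A toy illustration of that cube-by-cube updating (a minimal sketch of my own, not a weather code): a single row of cells, each holding a temperature, where every time slice each cell is nudged by its neighbours and the process repeats.

```python
# Toy version of the "cubes" idea: a 1-D row of cells, each holding a
# temperature. Each time step, a cell relaxes toward its neighbours
# (simple explicit diffusion). Real models track pressure, humidity,
# wind, etc. on a 3-D grid, but the step-and-repeat structure is the same.

n_cells, n_steps, alpha = 50, 500, 0.2   # alpha < 0.5 keeps the scheme stable
cells = [0.0] * n_cells
cells[n_cells // 2] = 100.0              # one hot cell in the middle

for _ in range(n_steps):
    new = cells[:]
    for i in range(1, n_cells - 1):      # each cell only sees its neighbours
        new[i] = cells[i] + alpha * (cells[i - 1] - 2 * cells[i] + cells[i + 1])
    cells = new                          # advance one slice of time

print(max(cells))  # the hot spot has spread out and cooled
```

In a real parallel run, the grid is split into chunks and each processor handles one chunk, swapping boundary values with its neighbours every step.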

This is obviously a numerical process, with no overarching equation that
explains it all, but with some basic first principles at work (a theme in
modern physics). Such a process becomes more accurate if we can create more,
smaller cubes, and for that we need more and faster processors. So most of
modern supercomputer design involves thinking of ways to acquire more and
faster parallel processors.

The kind of model that led to the discussed result follows the same basic
pattern. The irony is that the most often cited reason for having
supercomputers -- to be able to say whether it will rain 72 hours from now --
is also the least likely to succeed, because of the role played by quantum
mechanics (i.e. by chance processes) in earth's atmosphere.

~~~
semi-extrinsic
Some corrections/nitpicks on your weather forecasting example:

* there are most definitely some overarching equations that explain it all. We just can't solve them analytically. But we use them to write the code.

* when you say "one processor per cube": if you mean cube = grid cell you're way off. You want to have at least ~ 10 000 grid cells per processor to have any level of performance.

* it is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".

* getting more performance for this type of application is not about getting more cores, it's about getting more memory bandwidth. That's why e.g. GPUs are useless for speeding up weather forecasting.

* funfact: weather simulations today are not running any faster than in 1994; it's at about 15 minutes per simulation. The increases in computing power are used to a) increase grid resolution and b) run more replicas.

* The last point is what makes modern weather simulations tick. You don't run one simulation, you run 1000, each with slightly different initial data, and you analyse the ensemble results (a toy sketch of the idea follows below).
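
A minimal sketch of those last two points (my own toy, using the Lorenz-63 system, a standard stand-in for atmospheric chaos rather than an actual weather model): perturb the initial state by a millionth and the ensemble members end up in completely different places, so it is the spread you analyse, not any single run.

```python
import random

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 toy convection model."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n_steps=2000):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# An "ensemble": the same model started from almost identical initial data.
members = [run((1.0 + random.uniform(-1e-6, 1e-6), 1.0, 1.0)) for _ in range(20)]

xs = sorted(m[0] for m in members)
print(xs[0], xs[-1])  # after 20 model time units the members disagree wildly
```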

~~~
lutusp
> * there are most definitely some overarching equations that explain it all.

Not so, not for Earth's atmosphere, the topic I was discussing. There are
first principles, but no overarching equations, analytically expressible or
not. Prove me wrong -- show me the equation that tracks arctic temperature,
rainfall, and snow cover, including the feedback effects that join them all,
_as an equation_.

> * when you say "one processor per cube": if you mean cube = grid cell you're
> way off. You want to have at least ~ 10 000 grid cells per processor to have
> any level of performance.

It was part of a simple explanation, but in the future, in the name of
increased throughput and as processor costs continue to fall, it will be
literally true. I say this on a day when the Raspberry Pi Zero is so much in
demand that I can't find one for sale.

> * it is definitely not quantum mechanics that makes weather hard to predict.
> It's what's popularly called "chaos", or mathematically, "exponential
> sensitivity to variations in initial data".

Both play a part. Quantum mechanics is thought to be the final obstacle to
long-term weather forecasting, and chaos theory (extreme sensitivity to
initial conditions) is the mechanism by which small initial causes lead to
large effects.

Your last three points weren't corrections, so no need for comment.

~~~
semi-extrinsic
> show me the equation that tracks arctic temperature, rainfall, and snow
> cover, including the feedback effects that join them all, as an equation.

The reader is referred to "Chapter IV: The Governing Equations" of
Richardson's classical book "Weather Prediction by Numerical Process" [1].

> It was part of a simple explanation, but in the future, in the name of
> increased throughput and as processor costs continue to fall, it will be
> literally true.

No. You didn't read what I wrote. Unless some radical paradigm shift occurs in
both hardware and algorithm design, strong scaling is never going to pay off
once you get fewer than about 10 000 grid cells per CPU. Core cost is not the
issue; power usage over the cluster lifetime is already a huge cost. Memory
bandwidth is the issue; Linpack (peak float) performance is so irrelevant that
even management has started to ignore it.

> Quantum mechanics is thought to be the final obstacle to long-term weather
> forecasting

[By whom?][Citation needed]. Of course QM is behind how atoms and molecules
behave, but that doesn't mean any QM result will ever improve weather
prediction. And, why stop at quantum mechanics? Why not invoke quarks and
gluons and the Higgs field if you're going all the way down to fundamental
particles?

[1]
[https://books.google.com/books?hl=en&lr=&id=D52d3_bbgg8C&oi=...](https://books.google.com/books?hl=en&lr=&id=D52d3_bbgg8C&oi=fnd&pg=PA3&dq=equations#v=onepage&q&f=false)

~~~
lutusp
>> show me the equation that tracks arctic temperature, rainfall, and snow
cover, including the feedback effects that join them all, as an equation.

> The reader is referred to "Chapter IV: The Governing Equations" of
> Richardson's classical book "Weather Prediction by Numerical Process" [1].

I will say this one more time: "As an equation". There is no equation that
describes the system I described, there are only algorithms that cannot be
expressed in closed form. The fact that they cannot be expressed or solved in
closed form is revealed by the language of your reference: "by Numerical
Process".

Numerical algorithms are used -- _must be used_ -- for systems that don't have
a closed-form solution, i.e. an equation that describes the system. Weather
and atmospheric physics are in that category. My specific example -- arctic
temperature, rainfall and snow cover -- represents a system that we can't even
model with any reliability, much less hope to express as an equation.

Another example, one that surprises many people, is the fact that there is no
equation able to describe an orbital system with more than two masses. Those
systems must also be solved numerically.
([https://en.wikipedia.org/wiki/Three-
body_problem](https://en.wikipedia.org/wiki/Three-body_problem))

> ... but that doesn't mean any QM result will ever improve weather prediction

So? I made the exact opposite claim -- that the effect of QM on weather
systems may _prevent_ accurate forecasts past a certain time duration.

>> It was part of a simple explanation, but in the future, in the name of
increased throughput and as processor costs continue to fall, it will be
literally true.

> No. You didn't read what I wrote.

Who cares what you wrote? You tried to say that 10,000 cells per processor is
required for reasonable throughput. This is already false, and it will become
more false in the future.

>> Quantum mechanics is thought to be the final obstacle to long-term weather
forecasting

> [By whom?][Citation needed].

I didn't make an evidentiary claim, I expressed a commonly heard opinion, so I
don't have to provide a citation.

> Why not invoke quarks and gluons and the Higgs field if you're going all the
> way down to fundamental particles?

The evidentiary burden is yours to try to support the idea that weather isn't
affected by fundamental particles.

~~~
semi-extrinsic
So the terminology you are using is what confuses me. You first said "there is
no equation describing the system", but now you're expanding this to mean "the
equation describing the system has no closed form solution". These are two
very different things. Take your three-body problem example. That wiki page
lists a set of coupled second order ODEs that completely describes the system.
Yes, we must solve it numerically in the general case. The system is still
described by an equation (or three), don't you agree?
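
For reference, those coupled ODEs are just Newton's gravitation written out for each body, with $\mathbf{r}_i$ the positions and $m_i$ the masses:

$$
\ddot{\mathbf{r}}_i \;=\; \sum_{j \neq i} \frac{G\, m_j \,(\mathbf{r}_j - \mathbf{r}_i)}{\lvert \mathbf{r}_j - \mathbf{r}_i \rvert^{3}},
\qquad i = 1, 2, 3.
$$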

> You tried to say that 10,000 cells per processor is required for reasonable
> throughput. This is already false, and it will become more false in the
> future.

Can you show me a strong scaling plot for any PDE solver that shows good
scaling significantly beyond 10k DOFs per processor?

> the effect of QM on weather systems may prevent accurate forecasts past a
> certain time duration.

No, and there are two reasons why, one practical and one fundamental.
Practical first: The key here is that "exponential sensitivity to initial
conditions" means you have to have precise measurements of the weather at
close to the scale where QM is significant before QM can have an effect. To
even think about approaching a description of that precision would require
measurements of wind, temperature etc. at a grid resolution of much less than
1 mm. Leaving the huge practical problems with a measurement like that aside,
if you were to measure with an (insufficient) 1 mm grid just air velocity and
temperature for the Continental US, the data storage required would be 1 000
000 000 000 000 Terabytes of data for a single point in time. This is 1 000
000 000 times the total storage capacity of the largest supercomputer in the
world. For a single second, and what you want is a time series spanning many
days. And you're not even beginning to approach the regime where QM becomes
important, that would require a storage capacity 10^18 times larger than this
already absurd storage capacity. We're talking about 10^30 Terabytes of data
at each instant in time!
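
A rough back-of-the-envelope check of that figure (my own assumed column height and bytes per value, so only the order of magnitude matters):

```python
# Sanity check of the ~10^15 terabyte estimate (assumed numbers, 1 mm grid).
area_km2 = 8.1e6                # Continental US land area, approximately
column_height_km = 10          # assume we only store the troposphere
cells = (area_km2 * 1e12) * (column_height_km * 1e6)   # number of 1 mm^3 cells
bytes_per_cell = 4 * 4         # 3 velocity components + temperature, float32
terabytes = cells * bytes_per_cell / 1e12
print(f"{terabytes:.1e} TB")   # ~1.3e15 TB, i.e. the figure quoted above
```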

The other and more fundamental objection to your assertion is the vast
discrepancy between the Kolmogorov length scale, i.e. the smallest scale of
variations in the flow, and the scale of QM. The Kolmogorov length scale for
atmospheric motion lies in the range 0.1 to 10 mm. At scales much smaller than
this, such as in QM, the flow is locally uniform everywhere.

~~~
lutusp
> So the terminology you are using is what confuses me. You first said "there
> is no equation describing the system", but now you're expanding this to mean
> "the equation describing the system has no closed form solution".

 _There is no equation that describes that system_. Which part of this is
confusing you? A numerical algorithm is not an equation. My other example,
easier to grasp, was the three-body problem -- the existence of a numerical
solution doesn't mean there's an equation that describes a three- (or more)
body orbit, quite the contrary (it has been proven that no such equation can
exist) -- such orbits must be solved numerically, and there is no overarching
equation, only an algorithm.

The presence of an algorithm doesn't suggest that there's an equation behind
it. Here's another example -- the integral of the error function used in
statistics. It's central to statistical calculations, there is no closed form
(i.e. _no equation_ ), consequently it must be, and is, solved numerically
everywhere. This is just one of hundreds of practical problems in many
disciplines for which _there is no equation_ , only an algorithm. Reference:

[https://en.wikipedia.org/wiki/Error_function](https://en.wikipedia.org/wiki/Error_function)

We can locate/identify prime numbers with reasonable efficiency. Does this
mean there's an equation to locate prime numbers? Well, no, there isn't --
there's an algorithm (several, actually).

We can compute square roots with reasonable efficiency. Does this mean there's
an equation that produces a square root for a given argument? As Isaac Newton
(and many others) discovered, no, there isn't -- there's an algorithm, a
sequential process that ends when a suitable level of accuracy has been
attained.
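
A minimal sketch of that square-root example, Newton's (Heron's) iteration: there is an update rule you repeat until the guess is good enough, not a closed-form expression you evaluate once.

```python
def newton_sqrt(a, tol=1e-12):
    """Square root of a positive number by repeating x -> (x + a/x) / 2."""
    x = a if a > 1 else 1.0     # any positive starting guess will do
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))  # 1.4142135623730951, reached by iteration, not formula
```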

I could give hundreds of examples, but perhaps you will think a bit harder and
arrive at this fact for yourself.

> To even think about approaching a description of that precision would
> require measurements of wind, temperature etc. at a grid resolution of much
> less than 1 mm.

Your argument is that, because we can't measure the atmosphere to the degree
necessary to associate changes with the quantum realm, we therefore can rule
it out as a cause. Science doesn't work that way. Remember that I didn't say
it was so, I said it was a matter of active discussion among professionals, as
it certainly is.

> At scales much smaller than this, such as in QM, the flow is locally uniform
> everywhere.

What an argument. It says that, even though at larger length scales there is
turbulence that prevents closed-form solutions (and in this connection
everyone is waiting for a solution to the Navier-Stokes equations, which may
ultimately be a pipe dream), as the length scale decreases things smooth
out and become uniform (I would have added "predictable" but you had the good
sense not to make that claim). This contradicts everything we know about
nature in modern times, and contradicts the single most important property of
QM.

> Can you show me a strong scaling plot for any PDE solver that shows good
> scaling significantly beyond 10k DOFs per processor?

Would you like to make the argument that, as time passes and as processors
become less expensive, faster and more numerous, any such argument isn't
undermined by changing circumstances?

This paper (PDF warning):

[http://www.shodor.org/media/content/petascale/materials/UPMo...](http://www.shodor.org/media/content/petascale/materials/UPModules/beginnersGuideHPC/moduleDocument_pdf.pdf)

It makes the unsurprising argument that, as time passes and as processor costs
fall, matrices are broken into more and more, smaller, parallel subsets in the
name of rapid throughput (with appropriate graphics to demonstrate the point).
The end result of that process should be obvious, and at the present time,
10,000 serial computations per processor is absurd -- this is not a realistic
exploitation of a modern supercomputer. In reality, more processors would each
be assigned fewer cells, because that produces a faster result. This is not a
difficult concept to grasp.

------
jobu
On his death bed, Werner Heisenberg is reported to have said, "When I meet
God, I am going to ask him two questions: Why relativity? And why turbulence?
I really believe he will have an answer for the first."

~~~
drjesusphd
I think that quote is a myth. It's attributed to multiple people.

------
mikeyouse
For those asking about an ELI5 explanation, MIT released a video on Youtube
that's very clear:

[https://youtu.be/RLI6QW2x4Lg](https://youtu.be/RLI6QW2x4Lg)

------
cowardlydragon
... aaand meanwhile we continue to ignore LFTR / Thorium as a present-day
practical, demonstrated in the 1960s scalable super safe non-proliferation
nuclear energy option that seems to be held up by bureaucracy or lack of
patentability.

And from what I've read, don't the fusion reactors also become radioactive
from the high energy neutrons in fusion reactors? Yes the fuel isn't
radioactive... but they also need to be scrapped/reprocessed after (pulling
number out of butt) five years?

~~~
SeanDav
The idea of Thorium reactors as a better nuclear fuel option gets proposed
frequently on HN. There are real issues with it though. There are reasons why
it is not a fuel of choice today that have nothing to do with conspiracy
theories. A small amount of searching will reveal this.

~~~
rst
You may be right, but without references this is just a content-free sneer.

~~~
SeanDav
10 seconds on a search engine will supply 1000's of links to issues with
Thorium and LFTR, some of which are high quality and some of which are
garbage. Note that the OP did not provide links in support of their position.

~~~
mfringel
You have a chance to add more to conversation than the OP, then. For myself,
I'd be very interested in two or three relevant links, if you can provide
them.

------
moogly
I thought we had already known, ever since the 90s, that current tokamak
designs are very poor at containing plasma due to unpredictable turbulence (I
remember writing that in a senior high school-equivalent report on fusion).
Hence the reignited interest in stellarators.

~~~
stephengillie
This sounds like telling Edison that you can't draw tungsten wires thinly
enough to make the lightbulb work, and that he should just go back to gas
lamps. The whole article was about learning to predict the turbulence. Now
that we have some understanding of the turbulence in a tokamak, should we just
continue with stellarators anyway?

~~~
Animats
Edison didn't figure out how to draw tungsten. Edison figured out a way to
make a cheap lamp without tungsten.

Frederick de Moleyns, who invented the electric light bulb in 1841, had the
right idea - use a metal wire with a really high melting point. He used
platinum. Worked, but cost far too much. Edison and Swan figured out how to
make a cheap electric light bulb, using carbonized paper. There was a long
detour through various forms of carbonized cellulose, including paper, bamboo,
and extruded cellulose. Bulb life was short and efficiency was low, but it
worked. Then there was a brief detour into tantalum wire around 1902. Finally,
Coolidge's process for making ductile tungsten wire was developed at General
Electric, and thin tungsten wire could not only be made, but worked easily.
Filaments could be coiled up into compact forms. Ductile tungsten lamps came
out in 1908. That was it. Incandescent lamps didn't change much over the next
century.

[1]
[https://archive.org/stream/menandvoltsstory00hammrich/menand...](https://archive.org/stream/menandvoltsstory00hammrich/menandvoltsstory00hammrich_djvu.txt)

~~~
harywilke
interesting stuff. I'd also add that light bulbs got deliberately worse for a
while.
[https://en.wikipedia.org/wiki/Phoebus_cartel](https://en.wikipedia.org/wiki/Phoebus_cartel)

~~~
TazeTSchnitzel
Did they recover subsequently, though?

------
spdegabrielle
I'm reminded of the recent interview where Freeman Dyson calls fusion projects
'welfare for engineers'

[http://m.theregister.co.uk/2015/10/11/freeman_dyson_intervie...](http://m.theregister.co.uk/2015/10/11/freeman_dyson_interview/)

------
garyclarke27
Billions and billions of our hard earned cash wasted on this ever continuing
pipe dream - attempting to recreate, and then to control, a man-made sun here
on earth is truly ridiculous - even cold fusion is a better idea than this. I
would like to create a factory for small portable nuclear reactors: a
standard, tested, proven design with inherent safety and molten salt coolant -
possibly thorium fuel - with a submarine option (out of sight, out of mind,
and protected from the weather) and desalination and aluminium smelter
attachments to fully utilise continuous, reliable power.

~~~
mattsouth
an average of $500 million a year for 60 years seems reasonable to me given
the potential reward to society

[https://www.technologyreview.com/s/541636/weighing-the-
cost-...](https://www.technologyreview.com/s/541636/weighing-the-cost-of-big-
science/)

------
sooper
Tangential question: How much of what we learn about fusion in reactors can we
apply to understanding the sun/stars or vice versa?

------
bricemo
As we get closer to "limitless energy" the cynical side of me always starts
thinking about the business/political/power forces that will come into play.
It's not going to be easy going once we have it.

By analogy: the first fusion reactor is Napster. Obviously the traditional
oil/gas co's play the part of the record labels, trying to shut it down. And
then who will become Apple/iTunes of the energy industry?

------
basicplus2
Finally something to be excited about!

------
guelo
The article at MIT has a few more details.

> simulation required 15 million hours of computation, carried out by 17,000
> processors over a period of 37 days ...

> Now, researchers at General Atomics are taking these new results and using
> them to develop a simplified, streamlined simulation that could be run on an
> ordinary laptop computer

> they show that this is a general phenomenon, not one specific to a
> particular reactor design.

[http://news.mit.edu/2016/heat-loss-fusion-
reactors-0121](http://news.mit.edu/2016/heat-loss-fusion-reactors-0121)

~~~
dang
We changed to that URL from [http://futurism.com/nuclear-fusion-breakthrough-
mit-experime...](http://futurism.com/nuclear-fusion-breakthrough-mit-
experiments-help-reveal-source-heat-loss/). HN prefers original sources.

------
dementis
>A long-standing discrepancy between predictions and observed results in test
reactors has been called “the great unsolved problem” in understanding the
turbulence that leads to a loss of heat in fusion reactors

>In a result so surprising that the researchers themselves found it hard to
believe their own results at first, it turns out that interactions between
turbulence at the tiniest scale, that of electrons, and turbulence at a scale
60 times larger, that of ions, can account for the mysterious mismatch between
theory and experimental results.

So basically they forgot the fact that a single pebble thrown into a pond
causes ripples; or that a candle can light a room? Sounds like they forgot to
apply the KISS principle and overcomplicated things a bit, which I can safely
say I probably do at least once a day.

------
hinkley
Steve left the door open again, didn't he?

Damnit, Steve. How many times do we have to tell you?

