New finding may explain heat loss in fusion reactors (mit.edu) - 258 points by MVBaks on Feb 15, 2016 | 103 comments

 Note what is required to do something like this: you need to solve a 5-dimensional (+ time) nonlinear integro-differential equation. Not only that, but you need to keep it going for long enough for the turbulent fluctuations to average out, and cover both ion and electron scales (that last bit is the big new contribution here). And once you're done with that, you need to repeat it at even higher resolution as evidence that what you're seeing is physical. Bravo!
 I know some of those words. Anyone have an ELI5?
 Take a simple differential equation as an example, like dy/dx = x. To solve this we need to find a function y(x) whose derivative is equal to x. This particular differential equation is well behaved, and the solution is simply 0.5*x^2 (plus an arbitrary constant), which you can verify using basic calculus.

Not all differential equations are so nicely behaved. Many have no analytic solution at all, meaning that we can't even write down a function that satisfies the equation. However, such equations can still be solved approximately using computational techniques. This can be incredibly difficult and computationally expensive (note the huge resources required to achieve this breakthrough) but can produce very useful results.
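To make the "solved approximately using computational techniques" part concrete, here is a minimal sketch of the simplest such method (forward Euler) applied to the same equation dy/dx = x; the step size and interval are arbitrary choices for illustration:

```python
# Forward Euler for dy/dx = x with y(0) = 0.
# The exact solution is y(x) = 0.5 * x**2, so we can measure the error.

def euler_solve(h, x_end):
    steps = round(x_end / h)
    x, y = 0.0, 0.0
    for _ in range(steps):
        y += h * x      # advance y using the slope dy/dx = x at the current point
        x += h
    return y

approx = euler_solve(h=0.001, x_end=1.0)
exact = 0.5 * 1.0**2
print(approx, abs(approx - exact))  # the error shrinks as h shrinks
```

Shrinking h makes the answer better but costs more steps - the same trade-off, scaled up enormously, behind the supercomputer runs discussed in the article.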
 How many 5 year olds do you know that understand "well behaved differential equations" and derivatives?
 ELI5 is not for actual 5 year olds.
 Agreed, but I was implying that he didn't simplify it nearly enough. I actually understood the parent post more clearly than this ELI5 version.
 The real math that describes the behavior of fusion reactors is too complicated to actually carry out, so physicists have to make some simplifying assumptions. They try to choose assumptions so that the results they get are still valid, kind of like when you say, "Keep the change" at the grocery store. You assume that giving up small amounts of money on occasion won't make you go bankrupt, even though you're not actually adding it up all the time to keep track of how much you've given up.

It turns out that one of the simplifying assumptions that physicists have been making over the years does change the result. Giving up the "spare change" does make you "go bankrupt." In this case, ignoring small-scale turbulence makes you lose a significant amount of heat. But figuring that out required some really hairy math and a butt-load of computing power.

Is that simple enough for you?
 Yes! That was way better, thank you!
 You bet.
 People don't say "keep the change" at a grocery store.
 I do. I don't like small coins.
 I'm reminded of this clip of Feynman talking about magnets: https://www.youtube.com/watch?v=wMFPe-DwULM
 > ELI5 is not for actual 5 year olds.

True in general, but it would be nice to meet the five-year-olds for whom it's the right forum.
 In this case it should be ELIHAHSPE: Explain Like I Have A High School Physics Education.
 They teach diffy-q in high school now?
 EDIT: Ordinary differential equations, yes - we do here in .au, at least in Maths C (advanced maths) in Queensland. Partial differential equations, the kind that are very useful in the sorts of physics simulations discussed - not so much.

And even with that background, our foreign lecturers were regularly disappointed by our lack of mathematical competency at university ("Now I have to waste the next 4 weeks teaching you things we learnt in high school in my home country before I can move on to what this subject is actually about"). The lecturers were Asian, Indian, and Ukrainian, as I recall.
 Ah Russian math lecturers in first year courses. Always making it perfectly clear that as far as they're concerned they might as well be lecturing to a bunch of 8 year olds.
 I always wondered if they just said that as part of the bluster. It's hard to believe entire classes of graduate math students are at a lower level than all 11th-grade Russians. (It wasn't just the first-year undergrads I heard of getting scolded by Russian lecturers.)

I know Russia values math highly and does push its students, but there's a limit to believability.
 Having worked with a few asian/eastern engineers, I do believe that a number of things conspire against western maths education: our culture doesn't value mathematics competency (in some circles people are even afraid to admit to an interest); the way we teach maths here in .au is bloody awful (standardized tests make teachers hate their jobs); and teaching generally is a much less respected/lower-paid profession relative to other occupations compared to better-performing countries (so our quality of teachers is much worse).
 I did derivatives and integrals in my high school calc course, circa 2007.
 Which order were they taught in?

I had the same curriculum but was utterly bewildered by derivatives being the first topic, for the simple reason that until that point I had habitually looked at "wholes" in the external world, only then to wonder what kinds of things progressively make them up.

I feel like to differentiate something, you start with the assumption that you already know the whole you're talking about. Yet the thing is actually the sum of its parts. Therefore integration seems more natural at first, and only then can things be differentiated with confidence.

So the teaching order seemed backwards. I wonder if anyone has felt the same way.
 Every school I've been at in British Columbia (high school, college, university) starts with derivatives and then moves on to integral calculus.

They start by calculating the slope of a curve f(x) as lim h->0 [f(x+h) - f(x)]/h, spend a few weeks deriving various derivatives, show how you can find the tops/bottoms of curves (derivative = 0), talk about limit theory a bit, and take derivatives of a bunch of more complicated equations. Then, once that's figured out, they take the inverse and look at areas under curves, volumes, etc. My brain lost it at integration by parts (so many parts) - too much memorization - and so came an end to my mathematics education in that space.

I'm happy they were taught in that order, mostly from being exasperated at having to memorize all the integration by parts formulas.
 Thankfully, you don't have to remember the integration by parts formula, just the usual derivative of a product and rearrange.
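For reference, the rearrangement being described: integrate the product rule and move one term across.

```latex
\frac{d}{dx}(uv) = u\,\frac{dv}{dx} + v\,\frac{du}{dx}
\quad\Longrightarrow\quad
uv = \int u\,dv + \int v\,du
\quad\Longrightarrow\quad
\int u\,dv = uv - \int v\,du
```

So the "formula" is just the product rule, solved for the integral you want.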
 Yep! Archimedes was essentially doing integrals 2,000 years ago. He got close to infinitesimals, and centuries later we had derivatives and limits. It's strange to teach them in the reverse order (i.e. the most sophisticated theory first, like teaching the reals before the integers).

I have an online series that starts with the historical order, which you might like:
 Thanks!
 I had differential equations in high school. I must say it proved to be very helpful later at university.
 I don't know, but I would imagine most high schoolers could at least understand what a differential equation is, if explained - whether or not they could solve one.
 Maybe it's 5 in dog years
 As you might know, turbulence is when simple streams of laminar flow (in, say, a plasma) turn into chaotic filaments and vortices. As a filament or vortex separates/stretches from your laminar flow region, it steals away heat/energy.

Scientists often assume that you can accurately approximate stuff by neglecting irrelevant parts of the physics (basketballs vs ping pong balls; protons vs electrons). Turns out that the ping pong balls greatly affect the outcome and amount of heat/energy dissipation in the basketballs.

This doesn't surprise me too much, since critical behavior often happens in regimes where two separate aggregate effects become relevant. But the details are all still just a big mystery :)

P.S. Someone posted a much better description than the OP. I'll reiterate: http://news.mit.edu/2016/heat-loss-fusion-reactors-0121
 A fusion reactor needs a lot of energy (heat & electricity) in order to make more. But a big problem is that the heat wants to escape. (Think of when you hold something warm - the warmth you feel is the heat escaping into your hands! How would you stop it from doing that?) Different kinds of reactors try to keep the heat in in different ways, but several of them do it with big magnets, which can basically keep the fuel floating so it doesn't touch anything.

Now there are two main parts to the stuff inside the reactor, ions & electrons. Think of them as mac & cheese. When the reactor gets really hot, these bits start jostling around. (It's in the nature of hot things to want to move more, and really "hot" means that the little atomic components of everything are wiggling and moving faster and more excitedly.) Think of boiling the mac and cheese. For a long time scientists thought that the wiggles of the ions would diminish the wiggles of the electrons, because 1) the ions & electrons want to be near each other so they form a clump (like how the noodles absorb the cheese, but waaaay stronger) and 2) each ion/noodle is way bigger than each piece of cheese powder/electron, so its wiggles should be way stronger.

Now, to figure these things out, scientists have to use computers to predict how it all might work. But the formulas they use don't have a single answer they can just solve by hand. Instead the computer takes a guess, spends a lot of time seeing how good the guess was, and tries to make it better. A computer can do things fast, but it's hard because when it does the math, it's only figuring out how a single piece of cheese dust moves at a time! There are so many little pieces, and knowing how a few bits of cheese dust move won't give us a good idea of what a big pot of mac n' cheese is like. (Could you learn how sand feels by holding one grain?) So they have to keep the computer running until it can tell them how a lot of cheese moves.

Even harder, they need to watch the cheese move for a long time to make sure it's "stable". (For example, if you put mac and cheese back on a hot stove it might be quiet for a bit before suddenly a big cheese bubble bursts and splatters everywhere. We need to know if there are bursting bubbles or not, so just looking for a second won't be enough to tell us.) So the computer also has to do this guessing for many tiny moments, until they add up to a longer stretch of time. Then they can figure out how the sauce behaves when you leave it on the stove for a while.

Ultimately these scientists found that the ions & electrons don't act exactly like we thought. The heat actually causes the electrons/cheese to stretch out into long strands that pull away from the main blob! Also, when the cheese starts swirling around, instead of bumping into the noodles and stopping, it can bump hard enough that the noodles and cheese all start swirling together crazily. These two things give us big clues about why our reactors won't stay as hot as we want them to, because they explain why the mac and cheese gets so wiggly and wild, and with this discovery we can now try to find new ways to build reactors that work better.

All the guessing by the computers was so much work it would've taken a single computer 15 million hours (over 1,700 years!), but they got 17,000 computers to all work as a team and give them answers in about a month. Then the scientists could get a couple of answers in a year, which made them feel more confident that they probably did things right.

NOTE: Totally not a physicist or anything, and I probably started with a bad analogy, but I wanted to try this as a writing challenge. Also, I know you probably didn't want it this simplified, but I tried to take "ELI5" at face value for once.
 I don't know if an actual 5-year-old would be able to understand this, but kudos for the attempt!
 Haha I agree. ELI9ish-and-already-quite-into-science?
 Fusion is hard.
 [meta] I love the cross-pollination between HN and Reddit on many things. The thought that some random person on Reddit invented the idea of ELI5, and that it's now a valid comment request, is fucking awesome. The internet evolved a bit when they invented that.
 The "Explain it like I'm five" idiom was not invented by Reddit.
 I didn't know this, thanks. Where was it invented?
 I don't know, I've just heard it before Reddit existed.
 You know that opening a Poke Ball releases a tremendous amount of energy, because on television they show a big flash of light. Imagine if you could open a thousand poke balls every second. We call that fusion.What MIT did was make it easier to open all these poke balls in a sustained way. This is important because once fusion is sorted out, our political leaders like Ted Cruz could use all that energy to do really interesting things.One side effect is that a lot of poke monsters will be released as well, which could cause problems...and so we as a society need to think about the long term ramifications.
 MIT article this points to: http://news.mit.edu/2016/heat-loss-fusion-reactors-0121
 Ok, we changed to that from http://futurism.com/nuclear-fusion-breakthrough-mit-experime.... Thanks!
 I'm always suspicious of "breakthrough" announcements when it comes to fusion, especially ones that originate from a college's PR office. Can someone put this in context for us? Is it a big deal?
 Yes, this is progress in understanding plasma behavior. No, it's not Commercial Fusion Real Soon Now.MIT did not overhype this. The MIT news office used the title "New finding may explain heat loss in fusion reactors".[1] The site from which this was taken added the clickbait title.
 It's a theoretical breakthrough that explains an unexpectedly bad behavior of current designs that was not understood until now: a discrepancy between theory and experimental observations.

Now that the model is fine-grained enough to explain the turbulence observed in experiments, one might hope this will help suggest design changes to better deal with that turbulence. However, the MIT article does not suggest any practical design modification yet.
 It is one thing - a hard thing - to write the code for, and then run, a coupled plasma-turbulence simulation taking 15 million CPU-hours to explain a phenomenon. It is another thing entirely to take such a huge simulation and try to run many of them to do tokamak design optimization.
 Indeed, I found this quote from the original article relevant:

>> Now, researchers at General Atomics are taking these new results and using them to develop a simplified, streamlined simulation that could be run on an ordinary laptop computer
 Maybe this could be done with machine learning to learn an approximate function to predict the outcome of the paired simulation from the outcome of the isolated simulations.
 I would be curious to know the process for even modeling this sort of thing. How do they know they got it right? How accurately does this model existing experiments? I'm also wondering if they model to a result vs. modeling what should actually be happening. Would anyone learned on this subject please chime in? I've always wondered how modeling these types of complex systems works.
 I don't do plasma physics, but I do other sorts of HPC numerical physics. The answer is a myriad of ways:

- Check small numerical cases on an already-validated code.
- Check problems for which you have an analytic solution (in this case, maybe many laminar flow scenarios).
- Check problems where the 'weird part' (i.e. the turbulence in this case) is limited to one small part of the simulation.
- Compare only moderately complicated simulations to experiments that are already entirely understood.
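As a toy version of the "check against an analytic solution" step (and of the article's "repeat at higher resolution" check): solve dy/dt = -y, whose exact solution is e^(-t), at two step sizes and confirm the error shrinks as the grid is refined. The equation and step sizes are illustrative stand-ins, obviously nothing like a real gyrokinetic code:

```python
import math

# Forward-Euler solver for dy/dt = -y, y(0) = 1; the exact answer is exp(-t).
def solve(h, t_end=1.0):
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * (-y)
    return y

err_coarse = abs(solve(0.01) - math.exp(-1.0))
err_fine = abs(solve(0.005) - math.exp(-1.0))
# For a first-order method, halving the step should roughly halve the error.
print(err_coarse, err_fine, err_coarse / err_fine)
```

If refining the grid did not shrink the error at the expected rate, that would be a red flag that the code (not the physics) is wrong - the same logic, at vastly larger scale, as rerunning the plasma simulation at higher resolution.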
 > I would be curious to know the process for even modeling this sort of thing. How do they know they got it right?

It helps that the model produces the same outcomes as the actual observations (and the old model didn't). This isn't an ironclad guarantee that the model reflects reality, but it suggests that we're moving toward a more accurate reflection of nature.

> I've always wondered how modeling these types of complex systems works.

To explain, I'll use a simpler example -- weather forecasting. In weather forecasting, we need to model the atmosphere. It's not possible to model the entire atmosphere directly; instead we break the atmosphere into a bunch of cubes, each with an initial temperature, humidity, and a few other things.

Then we process the cubes in a supercomputer consisting of many processors running in parallel -- one processor per cube. The actual computation consists of seeing what effect a cube's neighbors' pressures, temperatures, etc. have on the tested cube, for a brief interval of time. Then we repeat the process for another slice of time. Ad infinitum.

This is obviously a numerical process, with no overarching equation that explains it all, but with some basic first principles at work (a theme in modern physics). Such a process becomes more accurate if we can create more, smaller cubes, and for that we need more and faster processors. So most of modern supercomputer design involves thinking of ways to acquire more and faster parallel processors.

The kind of model that led to the discussed result follows the same basic pattern. The irony is that the most often cited reason for having supercomputers -- to be able to say whether it will rain 72 hours from now -- is also the least likely to succeed, because of the role played by quantum mechanics (i.e. by chance processes) in Earth's atmosphere.
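A heavily simplified cartoon of the cube-updating idea above, in one dimension: each cell's next temperature depends only on its neighbors, repeated over many small time slices. Real atmospheric codes evolve many coupled fields with far more physics; this only shows the neighbor-exchange pattern:

```python
# A 1-D "atmosphere" of cells, each holding a temperature.
# Each time step, a cell moves toward the average of its neighbors
# (a crude explicit diffusion update; alpha is a made-up mixing rate).

def step(cells, alpha=0.1):
    new = cells[:]
    for i in range(1, len(cells) - 1):
        new[i] = cells[i] + alpha * (cells[i-1] - 2*cells[i] + cells[i+1])
    return new

cells = [0.0] * 20
cells[10] = 100.0            # a single hot spot
for _ in range(500):         # many brief slices of time
    cells = step(cells)
print(max(cells) - min(cells))  # the hot spot spreads out over time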
 Some corrections/nitpicks on your weather forecasting example:

* There are most definitely some overarching equations that explain it all. We just can't solve them analytically. But we use them to write the code.
* When you say "one processor per cube": if you mean cube = grid cell, you're way off. You want at least ~10 000 grid cells per processor to have any level of performance.
* It is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".
* Getting more performance for this type of application is not about getting more cores, it's about getting more memory bandwidth. That's why, e.g., GPUs are useless for speeding up weather forecasting.
* Fun fact: weather simulations today are not running any faster than in 1994; it's about 15 minutes per simulation. The increases in computing power are used to a) increase grid resolution and b) run more replicas.
* The last point is what makes modern weather simulations tick. You don't run one simulation, you run 1000, each with slightly different initial data, and you analyze the ensemble results.
 > There are most definitely some overarching equations that explain it all.

Not so, not for Earth's atmosphere, the topic I was discussing. There are first principles, but no overarching equations, analytically expressible or not. Prove me wrong -- show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.

> When you say "one processor per cube": if you mean cube = grid cell, you're way off. You want at least ~10 000 grid cells per processor to have any level of performance.

It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true. I say this on a day when the Raspberry Pi Zero is so much in demand that I can't find one for sale.

> It is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".

Both play a part. Quantum mechanics is thought to be the final obstacle to long-term weather forecasting, and chaos theory (extreme sensitivity to initial conditions) is the mechanism by which small initial causes lead to large effects.

Your last three points weren't corrections, so no need for comment.
 > Show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.

The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classic book "Weather Prediction by Numerical Process" [1].

> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.

No. You didn't read what I wrote. Unless some radical paradigm shift occurs in both hardware and algorithm design, strong scaling is never going to pay off once you get below about 10 000 grid cells per CPU. Core cost is not the issue; power usage over the cluster's lifetime is already a huge cost. Memory bandwidth is the issue -- Linpack (peak float) performance is so irrelevant that even management has started to ignore it.

> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting

[By whom?] [Citation needed.] Of course QM is behind how atoms and molecules behave, but that doesn't mean any QM result will ever improve weather prediction. And why stop at quantum mechanics? Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?
 >> Show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.

> The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classical book "Weather Prediction by Numerical Process" [1].

I will say this one more time: "as an equation". There is no equation that describes the system I described; there are only algorithms that cannot be expressed in closed form. The fact that they cannot be expressed or solved in closed form is revealed by the language of your reference: "by Numerical Process".

Numerical algorithms are used -- must be used -- for systems that don't have a closed-form solution, i.e. an equation that describes the system. Weather and atmospheric physics are in that category. My specific example -- arctic temperature, rainfall, and snow cover -- represents a system that we can't even model with any reliability, much less hope to express as an equation.

Another example, one that surprises many people, is the fact that there is no equation able to describe an orbital system with more than two masses. Those systems must also be solved numerically. (https://en.wikipedia.org/wiki/Three-body_problem)

> ... but that doesn't mean any QM result will ever improve weather prediction

So? I made the exact opposite claim -- that the effect of QM on weather systems may prevent accurate forecasts past a certain time duration.

>> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.

> No. You didn't read what I wrote.

Who cares what you wrote? You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.

>> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting

> [By whom?] [Citation needed.]

I didn't make an evidentiary claim, I expressed a commonly heard opinion, so I don't have to provide a citation.

> Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?

The evidentiary burden is yours to try to support the idea that weather isn't affected by fundamental particles.
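A minimal sketch of what "solved numerically" means for the three-body case mentioned above: step the Newtonian equations forward in small time slices. This uses a naive Euler step with made-up masses, positions, and velocities (G = 1 units), purely for illustration:

```python
# Three point masses under Newtonian gravity; all values are invented.
masses = [1.0, 1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]]

def accelerations(pos):
    # Sum the inverse-square attraction from every other body.
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx*dx + dy*dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

dt = 0.001
for _ in range(300):            # advance the system one small slice at a time
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += dt * acc[i][0]
        vel[i][1] += dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]

print(pos)  # no closed-form expression hands you these coordinates
```

There is no formula you can evaluate to get the state at time t; you can only march the algorithm forward, which is exactly the point being argued.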
 So the terminology you are using is what confuses me. You first said "there is no equation describing the system", but now you're expanding this to mean "the equation describing the system has no closed-form solution". These are two very different things. Take your three-body-problem example. That wiki page lists a set of coupled second-order ODEs that completely describes the system. Yes, we must solve it numerically in the general case. The system is still described by an equation (or three), don't you agree?

> You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.

Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?

> the effect of QM on weather systems may prevent accurate forecasts past a certain time duration

No, and there are two reasons why, one practical and one fundamental. Practical first: the key here is that "exponential sensitivity to initial conditions" means you have to have precise measurements of the weather at close to the scale where QM is significant before QM can have an effect. To even think about approaching a description of that precision would require measurements of wind, temperature, etc. at a grid resolution of much less than 1 mm. Leaving the huge practical problems with a measurement like that aside, if you were to measure with an (insufficient) 1 mm grid just air velocity and temperature for the continental US, the data storage required would be 1 000 000 000 000 000 Terabytes of data for a single point in time. This is 1 000 000 000 times the total storage capacity of the largest supercomputer in the world. For a single instant -- and what you want is a time series spanning many days. And you're not even beginning to approach the regime where QM becomes important; that would require a storage capacity 10^18 times larger than this already absurd figure. We're talking about 10^33 Terabytes of data at each instant in time!

The other and more fundamental objection to your assertion is the vast discrepancy between the Kolmogorov length scale, i.e. the smallest scale of variations in the flow, and the scale of QM. The Kolmogorov length scale for atmospheric motion lies in the range 0.1 to 10 mm. At scales much smaller than this, such as those of QM, the flow is locally uniform everywhere.
 > So the terminology you are using is what confuses me. You first said "there is no equation describing the system", but now you're expanding this to mean "the equation describing the system has no closed-form solution".

There is no equation that describes that system. Which part of this is confusing you? A numerical algorithm is not an equation. My other example, easier to grasp, was the three-body problem -- the existence of a numerical solution doesn't mean there's an equation that describes a three- (or more) body orbit, quite the contrary (it has been proven that no such equation can exist) -- such orbits must be solved numerically, and there is no overarching equation, only an algorithm.

The presence of an algorithm doesn't suggest that there's an equation behind it. Here's another example -- the integral of the error function used in statistics. It's central to statistical calculations, there is no closed form (i.e. no equation), and consequently it must be, and is, solved numerically everywhere. This is just one of hundreds of practical problems in many disciplines for which there is no equation, only an algorithm. Reference: https://en.wikipedia.org/wiki/Error_function

We can locate/identify prime numbers with reasonable efficiency. Does this mean there's an equation to locate prime numbers? Well, no, there isn't -- there's an algorithm (several, actually).

We can compute square roots with reasonable efficiency. Does this mean there's an equation that produces a square root for a given argument? As Isaac Newton (and many others) discovered, no, there isn't -- there's an algorithm, a sequential process that ends when a suitable level of accuracy has been attained.

I could give hundreds of examples, but perhaps you will think a bit harder and arrive at this fact for yourself.

> To even think about approaching a description of that precision would require measurements of wind, temperature etc.
at a grid resolution of much less than 1 mm.

Your argument is that, because we can't measure the atmosphere to the degree necessary to associate changes with the quantum realm, we can therefore rule it out as a cause. Science doesn't work that way. Remember that I didn't say it was so; I said it was a matter of active discussion among professionals, as it certainly is.

> At scales much smaller than this, such as in QM, the flow is locally uniform everywhere.

What an argument. It says that at larger length scales there is turbulence that prevents closed-form solutions (and in this connection everyone is waiting for a solution to the Navier-Stokes equations, which may ultimately be a pipe dream), but that as the length scale decreases, things smooth out and become uniform (I would have added "predictable", but you had the good sense not to make that claim). This contradicts everything we know about nature in modern times, and contradicts the single most important property of QM.

> Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?

Would you like to make the argument that, as time passes and processors become less expensive, faster, and more numerous, any such argument isn't undermined by changing circumstances?

This paper (PDF warning): http://www.shodor.org/media/content/petascale/materials/UPMo...

makes the unsurprising argument that, as time passes and processor costs fall, matrices are broken into more and more, smaller, parallel subsets in the name of rapid throughput (with appropriate graphics to demonstrate the point). The end result of that process should be obvious, and at the present time 10,000 serial computations per processor is absurd -- this is not a realistic exploitation of a modern supercomputer. In reality, more processors would each be assigned fewer cells, because that produces a faster result. This is not a difficult concept to grasp.
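The square-root example mentioned above, as code: Newton's iteration x <- (x + a/x)/2, which converges to sqrt(a) as a sequence of steps, with no closed-form expression ever evaluated:

```python
def newton_sqrt(a, tol=1e-12):
    # Newton's method for f(x) = x**2 - a: repeat the update until
    # successive estimates agree to within the requested accuracy.
    x = a if a > 1 else 1.0
    while True:
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt

print(newton_sqrt(2.0))  # approximately 1.41421356...
```

The loop itself is the "sequential process that ends when a suitable level of accuracy has been attained" described in the comment above.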
 I think the answer is "with great difficulty". When you're developing something like this, you might have an issue with:

- Experiment
- Model
- Approximation to the model
- Numerics
- Code
- (Compiler & hardware)

So your answer might not match experiment. Maybe the experiment is wrong. Maybe it's an expected effect of an approximation you used. Maybe your integration grid isn't sufficiently converged. You just have to systematically eliminate these, often by comparing to more expensive but verified simulations, or to analytic solutions of simplified models.
 In addition to the points mentioned by others, for PDE based codes like this we have the very powerful "method of manufactured solutions".
 I looked up how much that is on a single CPU: 625,000 days, or about 1,711 years. This is beyond any HPC workload I have ever witnessed. Maybe CERN does more, but I don't know the details.
 That's why Fortran is still relevant.
 So the current designs don't work? Sounds more like a step back not a "breakthrough"
 Discovering your believed correct model is wrong is a scientific breakthrough, as is discovering how to fix it.
 OK, I thought a "breakthrough" brings you closer to something. E.g., saying "the switch to the heliocentric worldview was a major breakthrough for geocentric theory" sounds really weird to me.
 The breakthrough would be more like saying, "the switch to the heliocentric worldview was a major breakthrough for solar system modeling."

1) The reactors have been failing to generate sustained plasma due to specific failures.
2) The simulation models of the reactors did not correctly predict those failures.
3) The new model correctly predicts those failures.
4) Now you can iterate designs using the model with the expectation that these particular failures can be identified before building a full reactor.
5) Breakthrough.
 Better understanding does bring you closer to making something tangible happen.The heliocentric theory was a breakthrough for a variety of technologies. We certainly would have found going into space much more difficult if we still followed the geocentric theory.
 The step is that they figured out _why_ reality wasn't as good as the model.Another way to put it is that they improved the model.That _does_ bring us closer to improving actual reactors.
 We already know the current designs "don't work". If it weren't for turbulence, JET would have broken even a long time ago. Now there are better tools to predict when they won't work.
 "Breakthrough" as in progress, not solution. It's a meaningful and useful step forward, but it does not directly suggest any design changes.
 It could prompt a change to the design of the model and/or modeling software/hardware, no?
 In terms of modeling software, it's really expensive to run. So it can help validate a design, but it's not the kind of thing you want running after every incremental change.
 As you should be. There is a lot of press out there about "get rich quick" schemes for fusion which have little to no foundation in plasma physics. This is an example of real progress being made in magnetic confinement. It's not promising a burning reactor in less than 10 years (ahem...).

All told, the mainstream scientific community has been very responsible lately in what claims it makes about fusion. It's the contractors and startups who think career scientists don't know what they're doing that are over-hyping their pet schemes. I honestly wish them the best of luck, but what I resent is the credibility gap that fusion scientists work under due to those broken promises.
 This makes a major design challenge tractable. Now modelling can be used to predict plasma behavior, whereas before it did not reflect experimental observations. If you can simulate a design hypothesis instead of having to build an actual reactor, you can make a lot more progress. Understanding the phenomena should also inform the design process.
 > Is it a big deal?It seems to be a step toward making our models agree with actual observations, which in general is a very good thing. As to whether it's a big deal WRT actual fusion, that will have to wait until someone thinks of a way to apply this result to present tokamak designs.
 On his death bed, Werner Heisenberg is reported to have said, "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first."
 I think that quote is a myth. It's attributed to multiple people.
 For those asking about an ELI5 explanation, MIT released a video on Youtube that's very clear:
 ... aaand meanwhile we continue to ignore LFTR / Thorium as a practical present-day option: a scalable, super-safe, non-proliferation nuclear energy technology demonstrated in the 1960s, which seems to be held up by bureaucracy or lack of patentability.

And from what I've read, don't fusion reactors also become radioactive from the high-energy neutrons? Yes, the fuel isn't radioactive... but don't they also need to be scrapped/reprocessed after (pulling a number out of my butt) five years?
 You are right in saying that Thorium reactors receive too little attention. However, due to their breeder nature they also produce nuclear waste which needs to be stored for hundreds of years. The main advantage of fusion is that you only get light radioactive isotopes of Helium or Lithium, which decay within hours or days. The reactor walls are activated through neutron capture, which renders them highly radioactive for 50 to 100 years [1]. In comparison to fission reactors, that's not very long.
 The idea of Thorium reactors as a better nuclear fuel option gets proposed frequently on HN. There are real issues with it, though. There are reasons why it is not a fuel of choice today that have nothing to do with conspiracy theories. A small amount of searching will reveal this.
 Thorium molten salt reactors need an attached chemical reprocessing plant which runs on radioactive fluoride fuel salts. Chemical processing plants for radioactive materials are a huge headache. Most of the older ones are now Superfund toxic sites.

The great thing about pressurized-water reactors and boiling-water reactors is that they work on water, which is easy to handle and doesn't itself become highly radioactive. The radioactive portions of commercial nuclear reactors are very simple, with very few moving parts. The complexity is outside the reactor vessel.

Most of the alternative reactor designs have much more complex radioactive portions. Pebble-bed reactors get pebble jams. (An experimental pebble-bed reactor in Germany has been jammed for decades, and still can't be decommissioned.) Helium-cooled reactors leak. (Ft. St. Vrain was so promising.) Sodium-cooled reactors have sodium fires.

That's why alternative nuclear technologies haven't caught on. If you need 40 years of trouble-free operation to make a plant pay, none of those technologies qualify.
 You may be right, but without references this is just a content-free sneer.
 Ten seconds on a search engine will supply thousands of links to issues with Thorium and LFTR, some of which are high quality and some of which are garbage. Note that the OP did not provide links in support of their position either.
 You have a chance to add more to conversation than the OP, then. For myself, I'd be very interested in two or three relevant links, if you can provide them.
 LFTRs are very safe, but the current technical challenge is actually building containments that can handle heat plus fluoride salts. The tritium formed during the nuclear reactions is also challenging to handle, as it passes through those walls via diffusion. Until we find materials that can cope with these conditions, it will remain a dream. Advances in nanotechnology will bring us further here; I totally agree with you that LFTRs get far too little attention, as today's nanotechnology could solve this for us.

The waste coming out of LFTRs has to be stored ~400 years (which is actually a sane timespan, and it's easy to find places underground that stay stable for 400 years). Another bonus is the fact that Thorium is much easier to find than Uranium, and that an LFTR can be built more or less failsafe.
 Yes, neutron activation is a source of radioactivity, but the results are short-lived (~10 yrs) radioactive isotopes. The problem with fission waste isn't necessarily the quantity of radiation, but the fact that the timescales are far beyond what humanity can count on.That said, I'm convinced we'll have to deal with it. Fusion will be ready when the uranium runs out.
 >...The problem with fission waste isn't necessarily the quantity of radiation, but the fact that the timescales are far beyond what humanity can count on.

Nuclear waste can and should be recycled, which would reduce the amount of waste: https://en.wikipedia.org/wiki/Radioactive_waste

Eventually it will be possible to use most of the waste as fuel: http://en.wikipedia.org/wiki/Integral_fast_reactor

>...That said, I'm convinced we'll have to deal with it. Fusion will be ready when the uranium runs out.

With breeder reactors, we could run the world on nuclear power and have enough fuel for tens of thousands of years. By then I would hope fusion would be ready.
 I thought we already knew that current tokamak designs were very poor in containing plasma due to unpredictable turbulence — ever since the 90s (I remember writing that in a senior high school-equivalent report on fusion), even. Hence the reignited interest in stellarators.
 Containing the plasma isn't really the problem, it's containing the heat. Because of the way turbulence works, you can pump as much energy into the plasma as you want, and all you'll get is more energy leaving the system instead of heating it up.
 This sounds like telling Edison that you can't draw tungsten wires thinly enough to make the lightbulb work, and that he should just go back to gas lamps. The whole article was about learning to predict the turbulence. Now that we have some understanding of the turbulence in a tokamak, should we just continue with stellarators anyway?
 Edison didn't figure out how to draw tungsten. Edison figured out a way to make a cheap lamp without tungsten.Frederick de Moleyns, who invented the electric light bulb in 1841, had the right idea - use a metal wire with a really high melting point. He used platinum. Worked, but cost far too much. Edison and Swan figured out how to make a cheap electric light bulb, using carbonized paper. There was a long detour through various forms of carbonized cellulose, including paper, bamboo, and extruded cellulose. Bulb life was short and efficiency was low, but it worked. Then there was a brief detour into tantalum wire around 1902. Finally, Coolidge's process for making ductile tungsten wire was developed at General Electric, and thin tungsten wire could not only be made, but worked easily. Filaments could be coiled up into compact forms. Ductile tungsten lamps came out in 1908. That was it. Incandescent lamps didn't change much over the next century.
 Interesting stuff. I'd also add that light bulbs got deliberately worse for a while: https://en.wikipedia.org/wiki/Phoebus_cartel
 Did they recover subsequently, though?
 > I thought we already knew that current tokamak designs were very poor in containing plasma due to unpredictable turbulence ...Yes, and this result suggests that we now have a model that's better at making such a prediction -- more finely tuned to agree with nature.
 I'm reminded of the recent interview where Freeman Dyson calls fusion projects 'welfare for engineers'
 Billions and billions of our hard-earned cash wasted on this ever-continuing pipe dream. Attempting to recreate, and then control, a man-made sun here on earth is truly ridiculous; even cold fusion is a better idea than this. I would like to create a factory for small portable nuclear reactors: a standard, tested, proven design with inherent safety, molten salt coolant, possibly thorium fuel, and a submarine option (out of sight, out of mind, and protected from the weather), with desalination and aluminium smelter attachments to fully utilise continuous reliable power.
 An average of $500 million a year for 60 years seems reasonable to me given the potential reward to society.
 Setting aside the fact that, if it actually works, it would be clean, cheap, reliable, and nearly limitless energy for all, what about all the tech and science we're picking up just following this line of inquiry? Also, the entire budget stacks up pretty well against Social Security, Medicare, or Defense, all of which are basically helping old people, helping poor people, or helping us kill people (kind of wasteful).

Finally, how is this research money zero-sum? It's not like nuclear is completely broke or something because we're spending all this money on fusion.
 Tangential question: How much of what we learn about fusion in reactors can we apply to understanding the sun/stars or vice versa?
 As we get closer to "limitless energy" the cynical side of me always starts thinking about the business/political/power forces that will come into play. It's not going to be easy going once we have it.

By analogy: the first fusion reactor is Napster. Obviously the traditional oil/gas co's play the part of the record labels, trying to shut it down. And then who will become the Apple/iTunes of the energy industry?
 Finally something to be excited about!
 The article at MIT has a few more details.> simulation required 15 million hours of computation, carried out by 17,000 processors over a period of 37 days ...> Now, researchers at General Atomics are taking these new results and using them to develop a simplified, streamlined simulation that could be run on an ordinary laptop computer> they show that this is a general phenomenon, not one specific to a particular reactor design.
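Those figures are self-consistent; multiplying them out recovers the quoted 15 million hours of computation:

```python
processors = 17_000
wall_days = 37
core_hours = processors * wall_days * 24  # processor-hours accumulated
print(f"{core_hours:,}")  # 15,096,000, i.e. ~15 million
```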
 We changed to that URL from http://futurism.com/nuclear-fusion-breakthrough-mit-experime.... HN prefers original sources.
 >A long-standing discrepancy between predictions and observed results in test reactors has been called "the great unsolved problem" in understanding the turbulence that leads to a loss of heat in fusion reactors

>In a result so surprising that the researchers themselves found it hard to believe their own results at first, it turns out that interactions between turbulence at the tiniest scale, that of electrons, and turbulence at a scale 60 times larger, that of ions, can account for the mysterious mismatch between theory and experimental results.

So basically they forgot the fact that a single pebble thrown into a pond causes ripples, or that a candle can light a room? Sounds like they forgot to apply the KISS principle and over-complicated things a bit, which I can safely say I probably do at least once a day.
 Steve left the door open again, didn't he?Damnit, Steve. How many times do we have to tell you?
