
Google and a nuclear fusion company have developed a new algorithm - jonbaer
https://www.theguardian.com/environment/2017/jul/25/google-enters-race-for-nuclear-fusion-technology
======
abefetterman
This is actually a really exciting development to me. (Note: what is exciting
is the "optometrist algorithm" from the paper [1], not necessarily Google's
involvement as pitched in the Guardian.) Typically a day of shots would need
to be programmed out in advance, usually scanning over one dimension (out of
hundreds) at a time. It would then take at least a week to analyze the results
and create an updated research plan. The result is poor utilization of each
experiment in optimizing performance. The 50% reduction in losses is a big
deal for Tri Alpha.

I can see this being coupled with simulations as well to understand sources of
systematic errors, create better simulations which can then be used as a
stronger source of truth for "offline" (computation-only) experiments.

The biggest challenge of course becomes interpreting the results. So you got
better performance, but which parameters really made a difference, and why?
That is at least a more tractable problem than "how do we make this better in
the first place?"

[1]
[http://www.nature.com/articles/s41598-017-06645-7](http://www.nature.com/articles/s41598-017-06645-7)

~~~
amelius
Perhaps a stupid question, but why can't the whole experiment be run as a
simulation?

~~~
zaph0d_
Even though it would be the dream of many theoretical physicists to replace
experiments with simulations, this must not happen! Ever! Even if every
complex system in the world could be simulated in reasonable time, it would
still require experiments to verify or falsify the simulation results. A
simulation is essentially just a calculation from a model someone came up with
to describe a system. To check how good the model is, one has to check it
against experimental data. Just expanding the models without experimental
verification will not necessarily result in a good theoretical description. It
would be like writing software without testing the components and expecting it
to work correctly when you're done. There was recently an article on HN where
economists were described as the astrologers of our time [1], since they do
not verify their mathematical models to an extent where they can predict
economic systems. This is another example where more experimental data should
be considered in order to falsify certain theories.

Those are the reasons why string theorists will not (and should not) get any
Nobel Prize in the next decades. Since the theory's predictions are hard to
measure at such small scales, there's no way of telling if the model is any
good until it is compared against suitable experimental data.

[1] [https://aeon.co/essays/how-economists-rode-maths-to-become-our-era-s-astrologers](https://aeon.co/essays/how-economists-rode-maths-to-become-our-era-s-astrologers)

~~~
amelius
I believe this is more about solving an engineering/mathematics problem, than
about fundamental physics and the scientific process.

~~~
Retric
Physics is a lot more than just fundamental physics. H-bomb designs, for
example, get hundreds of hours of supercomputer time to simulate a few pounds
of material for 1/1,000th of a second, and even then they are approximations
which need to be validated.

------
briankelly
From the actual journal article:

> Two additional complications arise because plasma fusion apparatuses are
> experimental and one-of-a-kind. First, the goodness metric for plasma is not
> fully established and objective: some amount of human judgement is required
> to assess an experiment. Second, the boundaries of safe operation are not
> fully understood: it would be easy for a fully-automated optimisation
> algorithm to propose settings that would damage the apparatus and set back
> progress by weeks or months.

> To increase the speed of learning and optimisation of plasma, we developed
> the Optometrist Algorithm. Just as in a visit to an optometrist, the
> algorithm offers a pair of choices to a human, and asks which one is
> preferable. Given the choice, the algorithm proceeds to offer another
> choice. While an optometrist asks a patient to choose between lens
> prescriptions based on clarity, our algorithm asks a human expert to choose
> between plasma settings based on experimental outcomes. The Optometrist
> Algorithm attempts to optimise a hidden utility model that the human experts
> may not be able to express explicitly.

I haven't read the full article, nor do I understand the problem space, but
the novelty seems overstated based on this. Maybe they can eventually collect
metadata to automate the human intuition.

Edit: here's their formal description of it:
[https://www.nature.com/articles/s41598-017-06645-7/figures/2](https://www.nature.com/articles/s41598-017-06645-7/figures/2)
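For a sense of how simple that loop is: it's essentially stochastic hill climbing with a human as the comparator. A minimal sketch — the settings, perturbation scheme, and function names here are my own illustrative assumptions, not taken from the paper:

```python
import random

def optometrist_step(current, run_experiment, human_prefers, scale=0.05):
    """One iteration: perturb the current settings, run both shots,
    and keep whichever one the human expert prefers."""
    candidate = {k: v * (1 + random.uniform(-scale, scale))
                 for k, v in current.items()}
    outcome_a = run_experiment(current)
    outcome_b = run_experiment(candidate)
    # The expert judges outcomes, not settings: the utility stays implicit.
    return candidate if human_prefers(outcome_b, outcome_a) else current

# Toy stand-in: the "expert" simply prefers lower plasma loss, and the
# "experiment" is a made-up loss function with an optimum at (1.3, 0.8).
loss = lambda s: (s["field"] - 1.3) ** 2 + (s["pressure"] - 0.8) ** 2
settings = {"field": 1.0, "pressure": 1.0}
for _ in range(200):
    settings = optometrist_step(settings, run_experiment=loss,
                                human_prefers=lambda b, a: b < a)
```

The whole point of the real algorithm is that `human_prefers` cannot be written down as code; it's mocked with a loss function here only to show the control flow.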

~~~
pm90
I mean, if it has not been done before, it doesn't look like they're
overstating the novelty. Most algorithms look "obvious" in hindsight :).

~~~
euyyn
It's a well-known technique in the out-of-fashion world of knowledge-based
systems: to create an expert system, your experts often won't be able to
articulate their utility function, so you extract it by presenting them with
A/B choices.

~~~
regularfry
Doing an old thing on a new problem counts as novel.

~~~
euyyn
Sure. Just pointing out that this isn't a "new algorithm that only looks
obvious because of hindsight". There's an almost endless supply of problems,
like this one, that benefit from automating the assessment task of human
experts.

------
dwaltrip
There was a talk about the state of nuclear fusion by some MIT folks linked
here on HN a few days ago. One of the biggest takeaways was that many fusion
efforts are very far away (3 to 6+ orders of magnitude) on the most important
metric, Q, which is energy_out / energy_in. Additionally, much of the press
and public discussion completely fails to mention this and the other core
factors that actually matter for making fusion viable.

I remember Tri-alpha being listed on one of the slides near the bottom left of
the plot, 4 or 5 orders of magnitude away from break even, where Q = 1
(someone please correct me if I'm remembering incorrectly).

Is the 50% improvement described in the article meaningful, given that it
would only be a fraction of an order of magnitude?

I understand the broader concept of combining experts and specialized software
on complex problems is a powerful idea -- I'm just wondering if this specific
result actually changes the game for Tri-alpha.

~~~
hyperbovine
But hey, string ~20 consecutive 50% improvements together and you're at four
orders of magnitude :-)
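(Pedantically, the compounding works out to needing about 23 such steps for
four orders of magnitude; ~20 gets you roughly three and a half:)

```python
import math

# n consecutive 50% improvements multiply performance by 1.5 ** n.
factor_after_20 = 1.5 ** 20                         # ~3.3e3, i.e. ~3.5 orders
steps_for_4_orders = math.log(1e4) / math.log(1.5)  # ~22.7 steps
```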

~~~
markvdb
You might be allowed to be much more optimistic.

A 50% increase could be much, much more significant depending on the parameter
optimised. Tokamak magnetic field strength, for example, has roughly a
fourth-power effect on net energy.
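To make that concrete: under the fourth-power scaling above, even a modest
field increase compounds quickly (numbers purely illustrative):

```python
# If net fusion power scales roughly as B**4, then:
gain_from_1p5x_field = 1.5 ** 4  # a 50% stronger field -> ~5x the power
gain_from_2x_field = 2.0 ** 4    # doubling the field   -> 16x the power
```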

Have a look at
[https://www.youtube.com/watch?v=L0KuAx1COEk](https://www.youtube.com/watch?v=L0KuAx1COEk)
, as previously discussed here:
[https://news.ycombinator.com/item?id=14834390](https://news.ycombinator.com/item?id=14834390)
.

------
EternalData
Google might try to become the conglomerate of all forward-facing things, but
it is somewhat funny to see how, through it all, it's their advertising
revenue that forms the core of the business.

~~~
zitterbewegung
This pattern happens more often than you think.

Microsoft: they make an operating system and office suite. Microsoft Research
has labs on quantum computing and employs five Turing Award winners (one is
Leslie Lamport, who developed TLA+ while employed there).

Facebook: a social network that funds a bunch of deep learning and NLP
research.

Elon Musk: helped create PayPal, now does electric cars and rockets (Tesla,
SpaceX).

NVIDIA: made graphics cards for video games. Now those same devices allow for
deep learning.

~~~
eternauta3k
Bell Labs, over 1k PhDs at some point.

------
ZenoArrow
Sounds like some promising results, hopefully this approach will continue to
be useful.

Addressing the wider article, it always surprises me that the focus fusion
approach is never mentioned in fusion articles put out by the mainstream
media. I don't know what to attribute that to, but it's surprising that one of
the most promising fusion approaches is constantly overlooked.

To give an idea how drastically overlooked focus fusion is, here's a graph
showing R&D budgets for different fusion projects...

[http://lppfusion.com/wp-content/uploads/2016/05/fusion-funds-pie-chart.png](http://lppfusion.com/wp-content/uploads/2016/05/fusion-funds-pie-chart.png)

... and here's a graph showing energy efficiency of fusion devices (running on
deuterium I believe)...

[http://lppfusion.com/wp-content/uploads/2016/05/wall-plug-chart.png](http://lppfusion.com/wp-content/uploads/2016/05/wall-plug-chart.png)

You'd think that the second most efficient device would've gotten more than $5
million in funding over 20 years (I think the original funding was from NASA
back in 1994).

------
mtgx
I think their universal quantum computer (to be announced later this year)
could accelerate fusion research even more, as I imagine it could more
accurately simulate the atomic reactions and experiments. Practical quantum
computers may be just what we were missing to finally be able to build working
fusion reactors.

The millions of possible "solutions" and algorithms for working fusion
reactors may be what has made fusion research so expensive and fusion reactors
seem so far away. Quantum computers may be able to cut right through that hard
problem, although we may have to wait a bit more until quantum computers are
useful enough to make an impact on fusion research. I don't know if that's
reaching 1,000 qubits or 1 million qubits.

~~~
_FKS_
Even if you had the computing power, AND you were simulating your fusion
reactor's plasma in realtime while it's running, AND you knew / could predict
the plasma instabilities in realtime (under a few ms), you would still need a
way to "counter" those instabilities in said plasma. And you need to counter
fast, before the instability "poisons" the entire plasma, something that can
happen within a few ms. If you don't, your entire experiment stops, and it
takes a while (minutes) to get it back. Currently: 1. nobody really
understands the instabilities, why and when they happen; 2. there's no way to
"counter" them. So it's not only about the computing power.

------
yousefvi
As a psychologist, this looks an awful lot like computerized adaptive testing
methods, only instead of estimating some parameter vector about a person,
you're estimating some parameter vector about plasma.

Even the title "optometrist algorithm" is telling, because that paradigm is a
basic model for how a lot of testing is done, except that it's not the
optometrist doing it, it's a computer.
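The simplest form of that adaptive-testing paradigm is a staircase procedure,
which is also roughly what an eye exam does. A toy sketch (the 1-up/1-down
rule and the numbers are illustrative, not from the paper):

```python
def staircase(respond, start=1.0, step=0.1, trials=30):
    """1-up/1-down staircase: lower the stimulus level after a 'yes',
    raise it after a 'no'; the level oscillates around the threshold."""
    level = start
    for _ in range(trials):
        level += -step if respond(level) else step
    return level

# Toy subject whose true detection threshold is 0.5.
estimate = staircase(lambda level: level > 0.5)
```

A real adaptive test would use a probabilistic response model and a smarter
update rule, but the descend-until-they-can't-tell structure is the same.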

------
DrNuke
Diversification of the business, methinks... nuclear is so big (but slow)
that a penny invested today may become a tenner tomorrow, just in case.

------
siscia
I do have a naive question.

Suppose a big breakthrough comes out of a private company, and such an
innovation is necessary to use nuclear fusion.

Will the company be free to do whatever it pleases with the technology, or
will it somehow be "forced" to let others use it, perhaps in exchange for
some royalties?

~~~
markvdb
A related economics question, a thought experiment really...

Suppose a private company manages to lower the cost of electric energy by 90%,
using a self-produced device that can be built quickly with virtually no
capex. From an economics point of view, they have essentially built free money
printing presses.

How would they benefit from this the most? Selling the energy? At what price?
Taking over sectors of the economy where the electricity price generates the
most added value? Aluminium production? Data centers? ...

~~~
DennisP
With costs that low, it'd make sense to replace even brand-new fossil plants
just to save the cost of fuel. You could eliminate fossil fuels for
electricity production in a very short period of time.

Selling the energy mostly wouldn't make sense, since it means replacing
regulated utilities. You could sell the reactors themselves but you'd have to
be good at scaling up factories fast; unless that's a core competency you'll
probably make money faster by licensing to people who are good at it. Or, just
outsource the manufacturing.

Either way, if you can churn out lots of reactors fast, just sell them to
everyone. Don't bother trying to take over e.g. aluminum production; what do
you know about producing aluminum? How long will it take you to learn? Just
sell the reactor to the aluminum producer.

------
rurban
No, they have not. They developed a very useful new program.

But simple assisted hill climbing is not a new algorithm; you might call it
"Wizard", though. That would attract the right audience.

------
janemanos
Maybe I'll see commercial fusion within my lifetime... how nice is that!

------
j7ake
How does this nuclear fusion company hope to make money? Their product is
decades in the future.

~~~
detaro
Quite possibly it never makes any money at all, but getting the world closer
to usable fusion is an acceptable outcome for investors like Paul Allen. And
it could make money by being the first to sell said product, decades in the
future, or by at least holding valuable IP at that point?

~~~
j7ake
One possible alternative would be to put the money into academic research, so
there is no pressure from investors to have a return on investment.

~~~
DennisP
I'm not convinced that having no pressure to get a return on investment is a
net benefit. I've seen arguments by fusion scientists that the field has
historically been much too focused on pure plasma physics, with too little
emphasis on practical results or on the economics of reactor designs.

People invest in Tri Alpha because if things work out, a practical reactor is
more like one decade in the future, and it would be very economical. The
return on investment would be enormous.

------
suzzer99
Am I the only one who never reads these articles but just goes straight to the
comments? It seems like reporters always get the facts bungled and go for the
simple story, out of necessity of course.

~~~
trhway
For me it is about page loading: pretty much straightforward, successful, and
predictable on HN, versus the source page, which is slow and/or heavy, jerks
the current position/scroll around, and is full of whatever other surprises. I
want to know what it is about immediately, i.e. basically it is an issue of
instant gratification for me :) If the information in the HN comments isn't
enough (which is rare), or the source is really vouched for/confirmed to be
interesting by itself, then I take the bullet.

------
JohnJamesRambo
Google didn't enter the race. They helped a company with some calculations.

~~~
dang
Ok, we changed the title to the first sentence of the article, which basically
says that.

~~~
grayhatter
thank you! I was very confused... twice...

------
Necromant2005
It's nothing. Even if Google invented something, we will never see a product a
customer can purchase.

------
grnadav1
You just KNOW Elon Musk is gonna beat 'em to it ;)

~~~
dokem
Electric cars and rockets have existed for decades.

------
MrQuincle
There are two directions within the energy world that I don't completely get.
One of them is hydrogen storage, the other nuclear fusion.

From what I always understood, the high-energy neutrons produced by the fusion
reaction irradiate the surrounding structure, and there is still considerable
nuclear waste (although lifetimes are better than with nuclear fission). Do
scientists not care, or is this outdated info?

~~~
openasocket
You're thinking of
[https://en.wikipedia.org/wiki/Neutron_activation](https://en.wikipedia.org/wiki/Neutron_activation)

You need to use materials that stand up well to neutron bombardment. Many
materials upon neutron capture have a half life measured in seconds, which
isn't a big deal. As nuclear waste disposal goes, this really isn't a concern.

~~~
MrQuincle
On this EU site they state that a site remains active for 50-100 years:
[https://www.euro-fusion.org/faq/does-fusion-give-off-radiation/](https://www.euro-fusion.org/faq/does-fusion-give-off-radiation/).

I know it's better than fission, but still not nice.

If it is indeed seconds, then it doesn't matter, of course. I was kind of
hoping to understand more about materials design in the recent scientific
past with this question.

------
hailmike
I want to start placing "Google and " before stating my accomplishments.

"Google and a nuclear fusion company have developed a new algorithm"

sounds way better than:

"Nuclear fusion company has developed a new algorithm using Google"

They may not mean the same, but in today's world faking it until you make it
might pay off.

------
quickben
Outside of the title being misleading, I'm sceptical. It's one thing to have
the hardware for research, and completely another to have the expertise for
the research.

Google entered self-driving-car research, and we have yet to see the cars
driven around.

This heavily reminds me of Intel and their diversification: up until recently,
they were in IoT, the maker market and what not. One solid push from AMD and
they jumped out of everything too fast to track.

Google seems the same with nuclear fusion. They have the advertising money to
throw around, but that's just it: they are in a different segment, and from
the investing side I'm more inclined to stay away from their stock than to
buy it.

~~~
The_Sponge
>Google entered the self driving cars research, and we have yet to see them
driven around.

You see their working prototypes zipping around Mountain View all the time.
And they've been transparent with their progress.

People have been working on this since the 80s.

~~~
jsmthrowaway
Considering “the whine of the electric motor in Waymo’s 25mph prototype is
mildly annoying when my window is open and they drive by several times an
hour” is a real thing in my own life as a resident of Mountain View, it’s odd
to see the assertion that they don’t drive around.

And I live on a side street.

