
AI in physics: are we facing a scientific revolution? - ezrakewa
https://www.4alltech.com/2020/07/ai-in-physics-are-we-facing-science.html
======
currymj
Some applications in computational physics involve solving a "variational"
problem, where you have some parameterized function and try to numerically
find the parameters that minimize energy or error. This does not necessarily
involve supervised learning from outside data as in this article -- it can be
purely an optimization problem.

But neural networks are very good parametric function approximators, generally
better than what traditionally gets used in physics (B-splines or whatever).
So people have started to design neural networks that are well-suited as
function approximators for specific physical systems.

It's fairly straightforward -- it's not an "AI" that has "knowledge" of
"physics" -- just using modern techniques and hardware to solve a numerical
minimization problem. I think this will probably become pretty widespread. It
won't be flashy or exciting though -- it will be boring to anyone but
specialists, as the rest of machine learning ought to be.
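
To make that concrete, here is a minimal sketch of the variational idea (my
own illustration, assuming PyTorch; the problem, trial form, and
hyperparameters are invented for demonstration): solve -u'' = f on [0,1] with
u(0) = u(1) = 0 by minimizing the energy E[u] = ∫ (u'²/2 - f·u) dx over an NN
ansatz, with no outside data involved:

```python
import torch

# NN ansatz for the solution; x * (1 - x) * net(x) bakes in the boundary conditions.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)  # Monte Carlo points in [0, 1]
    u = x * (1 - x) * net(x)                    # trial solution satisfying the BCs
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du**2 - u).mean()           # MC estimate of E[u] with f ≡ 1
    opt.zero_grad()
    energy.backward()
    opt.step()
```

The loss here is pure physics (an energy functional), not a fit to labeled
data, which is exactly the distinction being drawn above.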

~~~
ChrisRackauckas
Yes, I think this is a great use for neural networks since they are
effectively high-dimensional function approximators, and something like
Schrödinger's equation is a PDE where the number of dimensions is the number
of observables, so it can get very high dimensional very fast. Classical
methods don't necessarily scale well in high dimensions (curse of
dimensionality: cost is exponential in the number of dimensions), but neural
networks handle this regime very well. This gives rise to the physics-informed
neural network and deep backward stochastic differential equation approaches,
which will likely drive a lot of future HPC applications in a way that blends
physical equations with neural network approaches. We recently released a
library, NeuralPDE [1], which utilizes a lot of these approaches to solve what
were traditionally difficult equations in an automated form. I think the
future is bright for scientific machine learning!

[1] [https://neuralpde.sciml.ai/dev/](https://neuralpde.sciml.ai/dev/)

~~~
wenc
This is fascinating. ELI5: how does this work? (I couldn't find references
on the linked site.)

Let's say I supply a high-dimensional DAE, f(x', x, z) = 0, x(0) = x₀, where
classical methods like quadrature are unwieldy. Does the algorithm generate n
samples in the solution space by integrating n times and then fitting an NN?
With different initial conditions? Or does it perform quadrature with NNs
instead of polynomial basis functions?

~~~
ChrisRackauckas
A lot of these methods utilize the universal differential equation
framework described here:
[https://arxiv.org/abs/2001.04385](https://arxiv.org/abs/2001.04385) .
Specifically, the last example in this preprint describes how high-dimensional
parabolic PDEs can be solved using neural networks inside of a specific SDE
(derivation in the supplemental). Discrete physics-informed neural networks
are also a subset of this methodology.

The other subset of methods, continuous physics-informed neural networks, are
described in
[https://www.sciencedirect.com/science/article/pii/S002199911...](https://www.sciencedirect.com/science/article/pii/S0021999118307125)
.

For a very basic introduction, I wrote some lecture notes on how this is done
for a simple ODE with code examples:
[https://mitmath.github.io/18S096SciML/lecture2/ml](https://mitmath.github.io/18S096SciML/lecture2/ml)
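
For anyone who doesn't click through: the core trick in those notes is to
minimize the squared ODE residual at collocation points. A rough sketch of
that idea (the notes use Julia; this PyTorch analogue, with its toy problem
u' = -u and u(0) = 1, is my own, not the notes' code):

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    t = torch.rand(128, 1, requires_grad=True)  # collocation points in [0, 1]
    u = 1.0 + t * net(t)                        # trial form satisfies u(0) = 1 exactly
    dudt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    loss = ((dudt + u) ** 2).mean()             # squared residual of u' = -u
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No integrator generates samples; the network itself is the solution, and the
"data" are just the points where the equation is enforced.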

~~~
jedbrown
These methods are really interesting for high-dimensional PDE (like HJB), but
there's a ton of skepticism about the applicability of NN models for solving
the more common PDE that arise in physical sciences and engineering.

The tests are rarely equivalent, in that standard PDE technology can move to
new domains, boundary conditions, materials, etc., without new training
phases. If one needs to solve many nearby problems, there are many established
techniques for leveraging that similarity. There is active research on ML to
refine these techniques, but it isn't a silver bullet.

Far more exciting, IMO, is to use known methods for representing (reference-
frame invariant and entropy-compatible) constitutive relations while training
their form from observations of the PDE, and to do so using multiscale
modeling in which a fine-scale simulation (e.g., atomistic or grain-resolving
for granular/composite media) is used to train/support multiscale constitutive
relations. In this approach, the PDEs are still solved by "standard" methods
such as finite element or finite volume, and thus can be designed with desired
accuracy and exact conservation/compatibility properties and generalize
immediately to new domains/boundary conditions, but the trained constitutive
models are better able to represent real materials.

A good overview paper on ML in the context of multiscale modeling:
[https://arxiv.org/pdf/2006.02619.pdf](https://arxiv.org/pdf/2006.02619.pdf)
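
A stripped-down illustration of that division of labor (my own sketch,
assuming PyTorch; the toy problem is invented): solve u_t = (k(u) u_x)_x with
a standard conservative finite-volume update, where only the constitutive law
k(u) is a trainable network. Conservation holds by construction no matter
what the network learns:

```python
import torch

# Learned conductivity k(u); Softplus keeps it positive.
k_net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1), torch.nn.Softplus()
)

def fv_step(u, dx=0.1, dt=1e-3):
    """One explicit finite-volume step of u_t = (k(u) u_x)_x with no-flux BCs."""
    grad = (u[1:] - u[:-1]) / dx                     # gradients at interior faces
    u_face = 0.5 * (u[1:] + u[:-1])                  # face-averaged state
    flux = k_net(u_face.unsqueeze(1)).squeeze(1) * grad
    flux = torch.cat([flux.new_zeros(1), flux, flux.new_zeros(1)])  # boundary fluxes = 0
    return u + dt / dx * (flux[1:] - flux[:-1])      # conservative update
```

Training would fit k_net by differentiating a rollout of fv_step against
observed (or fine-scale simulated) profiles, while the discretization keeps
its conservation and generalization properties.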

~~~
ChrisRackauckas
Yes, and our recent work
[https://arxiv.org/abs/2001.04385](https://arxiv.org/abs/2001.04385) gives a
fairly general form for how to mix known scientific structural knowledge
directly with machine learning. In fact, some of these PDE solvers are just
instantiations of specific choices of universal differential equations. I
agree that in many cases the "fully uninformed" physics-informed neural
network won't work well, but we need to fully optimize a library with all of
the training techniques possible in order to prove that, which is what we plan
to do. In the end, I think PINNs will be most applicable to (1) non-local PDEs
where classical methods have not fared well, so things like fractional
differential equations, and (2) very high dimensional PDEs, like hundreds of
dimensions, but paired with constraints on the architecture to preserve
physical quantities and relationships. But of course, something like a
fractional differential equation is not an example for the first pages of
tutorials since they are quite niche equations to solve!

~~~
jedbrown
You've got a lot of broken references (??) in that preprint, BTW.

I think I understand why you're putting in the learned derivative operator,
but I think it's rarely desirable. Computing derivatives with compatibility
properties is a well-studied domain (e.g., finite element exterior calculus),
as is tensor invariance theory (e.g., Zheng 1994, though this subject is
sorely in need of a modern software-centric review). When the exact theory is
known and readily computable, it's hard to see science/engineering value in
"learned" surrogates that merely approximate the symmetries.

More generally, it is disheartening to see trends that would conflate
discretization errors with modeling errors, lest it bring back the chaos of
early turbulence modeling days that prompted this 1986 Editorial Policy
Statement for the Journal of Fluids Engineering.
[https://jedbrown.org/files/RoacheGhiaWhite-
JFEEditorialPolic...](https://jedbrown.org/files/RoacheGhiaWhite-
JFEEditorialPolicyStatementControlOfNumericalAccuracy-1986.pdf)

~~~
ChrisRackauckas
>When the exact theory is known and readily computable, it's hard to see
science/engineering value in "learned" surrogates that merely approximate the
symmetries.

I completely agree, which is why the approach I am taking is to only utilize
surrogates for terms which are unknown or do not have an exact theory. I don't
think surrogates will be more efficient than methods developed to exploit
specific properties of the problem. In fact, I think the recent proof of
convergence for PINNs simultaneously demonstrates this might be an issue
(there was no upper bound on the provable convergence rate, but the rate they
could prove was low order).

>More generally, it is disheartening to see trends that would conflate
discretization errors with modeling errors, lest it bring back the chaos of
early turbulence modeling days that prompted this 1986 Editorial Policy
Statement for the Journal of Fluids Engineering.
[https://jedbrown.org/files/RoacheGhiaWhite-
JFEEditorialPolic...](https://jedbrown.org/files/RoacheGhiaWhite-
JFEEditorialPolicyStatementControlOfNumericalAccuracy-1986.pdf)

Agree, this is a difficult issue with approaches that augment numerical
approaches with data-driven components. There are ways to validate these
trained components independent of the training data (i.e. by using other
data), but validation will always be more difficult.

~~~
jedbrown
With enough coaxing, we can get the optimizer to converge to known methods
(high-order, conservative, entropy-stable, ...), and I'm sure this tactic will
lead to more papers, though they'll be kind of empty unless we're really
discovering good methods that were not previously known.

I presume you meant "verify" in the last sentence.

~~~
ChrisRackauckas
No, what I am doing is using high order, conservative (universal DAEs),
strong-stability preserving, etc. discretizations for the numerics, but
utilizing neural networks to represent unknown quantities, transforming it
into a functional inverse problem. In the discussion of the HJB equation, we
mention that we solve the equation by writing down an SDE such that the
solution to the functional inverse problem gives the PDE's solution, and then
utilize adaptive, high order, implicit, etc. SDE integrators on the inverse
problem. Essentially the idea is to utilize neural networks in conjunction
with all of the classical tricks you can, making the neural network perform
as small a job as possible. It does not need to learn good methods if you
have already designed the training problem to utilize those kinds of
discretizations: you just need a methodology to differentiate through your
FEM, FVM, discontinuous Galerkin, implicit ODE solver, Gaussian quadrature,
etc. algorithms to augment the full algorithm with neural networks, which is
precisely what we are building.
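
A toy version of that recipe (my own sketch, assuming PyTorch; the real work
uses Julia's SciML stack, and the problem here is invented): embed a neural
term in a known right-hand side, discretize with a classical RK4 loop written
in a differentiable framework, and fit by backpropagating through the entire
solve:

```python
import torch

nn_term = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

def rhs(u):
    return -u + nn_term(u)             # known physics -u plus a learned correction

def solve(u0, dt=0.05, steps=40):      # classical RK4; every op is differentiable
    u, traj = u0, [u0]
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + dt / 2 * k1)
        k3 = rhs(u + dt / 2 * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(u)
    return torch.stack(traj)

opt = torch.optim.Adam(nn_term.parameters(), lr=1e-2)
data = torch.exp(-1.5 * torch.linspace(0, 2, 41)).unsqueeze(1)  # synthetic observations
for step in range(500):
    loss = ((solve(torch.ones(1)) - data) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The network only has to learn the -0.5u discrepancy between the assumed and
true dynamics; the integrator does everything else.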

So I completely agree with you that throwing away classical knowledge won't go
very far, which is why that's not what we're doing. We utilize neural networks
within and on top of classical methods to try to solve problems where those
methods have not traditionally performed well, or to cover epistemic
uncertainty from model misspecification.

~~~
fluffything
This looks really interesting.

I think it would be a good topic for a blog post or teaching paper that shows
how to do this for very simple problems "end-to-end" (e.g. the advection
equation, diffusion equation, advection-diffusion, Burgers' equation, Poisson
equation, etc.).

I see the appeal in showing that these can be used for very complex problems,
but what I want to understand is what the trade-offs are for the most basic
hyperbolic, parabolic, and elliptic one-dimensional problems. What's the
accuracy? What's the order of convergence in practice? Are there tight upper
bounds (and does that even matter)? What's the performance, and how does it
scale with the number of degrees of freedom? What does a good training
pipeline look like, and what's the cost of training and inference?

There are well-understood methods that are optimal for all of the problems
above. Knowing that you can apply these NNs to problems without optimal
methods is good, but I'd be more convinced that this is not just "NN-all-the-
things hype" if I understood how these methods fare against problems for
which optimal methods are indeed available.

~~~
ChrisRackauckas
No, it will not work well without the optimal method. But the method is no
longer optimal if, say, a nonlinear term is added to these equations, so you
can use the "optimal" method as a starting point and then try to nudge towards
something better. Don't throw away any information that you have.

------
cameronperot
I'm studying at the intersection of physics and data science, and I think
there are a number of places where physics can benefit from ML. From my
current point of view, though, most of these applications lie more on the
experimental/computational sides of physics than on the theoretical side. One
of the current use cases is using ML to aid in the processing and analysis of
data obtained from experiments.

I would like to see more truly innovative work done on the theoretical side,
but I don't think we'll see "AI" bridge the gap between QFT and GR any time
soon. I think in order for something like that to happen we need a new
approach, as the current approach of throwing deep learning models at it
doesn't feel like the right answer.

On a more general note, the SciML organization [1] has been quite successful
in helping incorporate more ML into science.

[1] [https://sciml.ai/](https://sciml.ai/)

~~~
md2020
I agree that the potential impact of ML on the theoretical side is very
exciting. I think there’s a lot of bridging to be done between the most
advanced mathematics and the most advanced physics that could lead to new
insight, but it’s a hard problem for humans to tackle since we have very few
people who are deeply proficient in both—although it is becoming more common.
I’m thinking something like GPT-3 trained on literature in both fields could
be the kind of thing we want, but like you I still doubt that a DL system is
likely to come up with any real insight. I’d like to be proven wrong, though.

~~~
spyder
GPT-3 is already not too bad with basic physics:

[https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/is-
gpt-3-c...](https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/is-
gpt-3-capable-of-reasoning)

And this is without training on the specific task. It's getting scary...

------
BrandoElFollito
I am actually surprised this is not more mainstream.

20 years ago I wrote my PhD thesis in physics, using genetic algorithms and
neural networks to "guess" some basic physical behaviour in particle physics.

It was difficult to find good thesis reviewers because the application was
quite exotic, but I felt that this was something worth investigating. I quit
academia afterwards and did not come back -- but I am happy to see that this
road is back on the radar.

~~~
blablabla123
I wrote my diploma thesis 10 years ago and had to do a lot of pen-and-paper
calculations. Actually it was kind of standard stuff (Lagrangians of the
Standard Model, calculating parametrized decay widths). At that time I really
hoped I could automate the error-prone steps of plugging in and simplifying
equations, but I found nothing except for isolated steps. Maybe this is also
due to the fact that the most powerful tools for manipulating symbolic
expressions are closed source. Not sure how it is now, but as long as these
tools are not expressive enough to work "end-to-end" with SM Lagrange
densities, I doubt anything innovative could be done by automating that with
AI.

~~~
physicsgraph
That problem of pen-and-paper calculations featuring unintended errors is what
I try to address in a project I work on [1]. My approach is to use SymPy
(which has a lot of physics support) to validate expressions entered by a
human. Not quite the AI focus of this thread, but still a machine augmenting
the work of researchers. To your point about the complexity of the math, the
Physics Derivation Graph currently handles simple inference rules, but there's
nothing preventing more advanced use.

[1] [https://derivationmap.net/](https://derivationmap.net/)

------
wenc
There's a ML group at Fermilab just outside Chicago working on ML applications
in high energy physics and astrophysics.

[https://computing.fnal.gov/machine-
learning/](https://computing.fnal.gov/machine-learning/)

One of the "AI" applications I remember seeing -- potentially applicable
outside physics -- involved using CNNs to read a 2D graph (as in a graphical
plot, not G = (V,E)) in order to visually detect certain patterns/aberrations.
(Probably many physics groups around the world are doing the same.)

At first glance this sounds kind of silly and trivial -- one might say, why
not just detect those patterns from the data arrays directly? Instead of from
a bitmap image of a plot of the data?

Unfortunately some patterns are contextual. A trained human eye can detect
them easily, while writing a foolproof mathematical algorithm is difficult:
e.g. it has to pick out the pattern, apply a bunch of exclusion rules, etc.

(One instance of this, for example, is an old mechanic telling you what's
going on under the hood just from listening to the vibrations of a car, while
a traditional DSP algorithm might not be able to do it as reliably because it
hasn't seen all the patterns and _contexts_ in which those sounds arise.)

This is a domain where neural networks/transfer learning really shines. It can
capture "intuition" by learning the surrounding context, rather than relying
on handcrafted features.

So Fermilab has an AI algorithm that looks at millions of graphs via a CNN,
which replicates the work of thousands of human physicists looking for
patterns. We've already seen examples of this in radiology.
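
The generic recipe behind this kind of system (a hedged sketch, assuming
PyTorch/torchvision; this is not Fermilab's actual code, and the two-class
labeling is hypothetical) is transfer learning: freeze a pretrained CNN
backbone and retrain only a small head on rendered plot images:

```python
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                          # freeze the backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new head: pattern vs. background

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
# training loop over labeled (plot_image, label) batches would go here
```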

~~~
sjg007
Makes sense. A graph can be represented by a matrix, which is what an image is.

~~~
oivey
Images and matrices are 2D data structures of numbers, but that is where the
similarities end. An image is more like a vector, which matrices can be
applied to. You would never matrix multiply an image onto another vector.
Still, it isn’t uncommon to visualize matrices as images.

~~~
sjg007
Well a matrix is a collection of vectors so... I guess I somewhat agree... You
can certainly apply projections to images, I mean this is what photoshop does.

~~~
gspr
> Well a matrix is a collection of vectors so

That's like saying "a matrix is a collection of real numbers, so anything you
say about one applies to the other".

> You can certainly apply projections to images, I mean this is what photoshop
> does.

This doesn't seem to refer to anything in the comment you're replying to.

~~~
sjg007
Would you please elaborate on your last point?

~~~
gspr
In reply to a comment that said nothing about projections, you wrote:

> You can certainly apply projections to images, I mean this is what photoshop
> does.

What's the relationship of this to anything in the comment you replied to?

~~~
sjg007
"You would never matrix multiply an image onto another vector."

~~~
gspr
> "You would never matrix multiply an image onto another vector."

That wasn't me. But I can still elaborate: while you can certainly consider a
non-color image as a matrix, the operation of multiplying this matrix with a
vector is rather meaningless.

While a lot of things can be made into or viewed as matrices, a matrix is
typically only meaningful as a representation of a linear map.

------
fmakunbound
> If AI is like Columbus, computing power is Santa Maria

Does that mean when AI finally arrives, it slaughters all of us?

~~~
SiempreViernes
A lot of the death was due to the introduction of new diseases, so maybe AI is
to blame for Covid-19?

Also, in the simile I think humanity is supposed to be the Old World, so I'm
really wondering _who_ we're supposed to find and enslave ...

~~~
jessaustin
Everybody thinks they're the center of the universe. Sometimes they're right,
for a time. When they decide they were wrong, they tear down the old statues,
if only to make the new overlords feel welcome...

------
LatteLazy
I'm not working in AI, so I only know what I read here or on other sites.
There seems to be a lot of buzz around AI and ML, but where are these
technologies actually succeeding right now? I feel like there's supposed to be
a revolution going on everywhere, but anywhere I look, it's just plans and
press releases...

~~~
Veedrac
Siri, Google Assistant, speech detection, speech generation, textual photo
library search, similar data augmentations for web search, Google Translate,
recommendation algorithms, phone cameras, server cooling optimization, touch
detection on phone screens, video game upscaling, noise reduction in web
calls, file prefetching, Google Maps, OCR, etc.

AI has already won, most people just don't realize it.

~~~
goatlover
Won what? Is there a competition? Humans still have jobs. Humans are still
politicians, judges, CEOs, generals. They even still play chess!

All those successful forms of AI are narrow, not the AGI of science fiction
(like Data, Skynet, HAL) or Ray Kurzweil predictions. AI is a tool humans use
to extend human capabilities. It always has been. Maybe someday it will be
something more.

~~~
Veedrac
Won a place in the software stack, alongside traditional software approaches,
much like the GPU won, and became a second pillar of computing.

------
jhrmnn
IMO not a revolution, but I can see a solid evolution. My reading of the work
on embedding ML into physical models so far is that the best strategy is to
take it as far as possible with the standard physics approach of abstraction
and reduction, and once you exhaust that, apply ML to solve the remaining
(often crucial) complex behavior.

------
ylem
There are a lot of cool advances in AI and physics. In my particular field of
condensed matter physics, a number come to mind. One is trying to
automatically extract synthesis recipes from the literature. Imagine that you
want to see how people have synthesized a given solid state compound. Then
searching through the literature can be painful. A great collaboration from
MIT/Berkeley did this using NLP. I don't know what blood oaths they signed,
but they were able to obtain a huge corpus of articles. But, how to know if an
article contains a synthesis recipe? They set up their internal version of
Mechanical Turk and had their students label a number of articles. Then they
had to find the recipes, represent them as DAGs, etc. They have now
incorporated the results into the Materials Project
([https://materialsproject.org/apps/synthesis/#](https://materialsproject.org/apps/synthesis/#)).

There are groups that are using graph neural networks to understand
statistical mechanics and microscopy. There are also a number of groups
working on trying to automate synthesis (most of it is Gaussian process based,
a handful of us are trying reinforcement learning--it's painful). On the
theory side, there is work speeding up simulation efforts (ex. DFT
functionals) as well as determining if models and experiment agree (Eun Ah Kim
rocks!).

Outside of my field, there has been a push with Lagrangian/Hamiltonian NNs
that is really cool in that you get interpretability for "free" when you
encode physics into the structure of the network. Back to my field, Patrick
Riley (Google) has played with this in the context of encoding symmetries in a
material into the structure of NNs.

There are of course challenges. In some fields there is a huge amount of data
-- in others, we have relatively small data but rich models. There are
questions about which representations are the right ones to use. Not to mention the
usual issues of trust/interpretability. There's also a question of talent
given opportunities in industry.

------
pjc50
GPT-3 + replication crisis = a huge volume of scientific papers produced, but
nobody can know whether they're accurate or not.

The landmark to watch for will be when the first GPT-generated paper gets a
citation in a human-authored paper without the human realising.

------
visarga
> For this they use so called neural graph networks (GNN). These neural
> networks rely on graphs instead of layers arranged one after the other.

This statement shows the author has little idea about GNNs. GNNs have layers,
and each layer is a graph. To implement the graph structure, GNNs use the
adjacency matrix to propagate information along the edges. But there are
multiple layers in a GNN; without multiple layers they would not be able to do
multi-hop inferences.
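
In the simplest (GCN-style) form, "each layer uses the adjacency matrix"
looks like this (an illustrative sketch, assuming PyTorch; not from the
article):

```python
import torch

def gnn_layer(A, H, W):
    """One message-passing layer: A propagates node features one hop along edges."""
    return torch.relu(A @ H @ W)

A = torch.tensor([[1., 1., 0.],   # toy 3-node graph, self-loops included
                  [1., 1., 1.],
                  [0., 1., 1.]])
H = torch.randn(3, 4)             # node features
W1, W2 = torch.randn(4, 4), torch.randn(4, 2)
H_out = gnn_layer(A, gnn_layer(A, H, W1), W2)  # two layers = two-hop inference
```

One application of A mixes information only between direct neighbors, which
is exactly why multi-hop inference requires stacking layers.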

------
test6554
Columbus is probably not the best character to use for analogies...

"If AI is like Columbus, computing power is Santa Maria"

and intractable physics problems are like... indigenous people?

------
staycoolboy
As someone who has worked on ADAS software and seen a simple un-optimized ML
object detector beat a custom hardware solution on both speed and accuracy, I
can honestly say machine learning is amazing.

Just in this domain alone, excluding the 100 other applications of ML, and the
fact that we haven't even begun optimization in earnest, I certainly believe
ML will change the direction of computing. It already has: look at where
investment and research dollars have gone. (not to say that trends don't
happen, but when I saw the performance results I thought: sh*t, this is big.)

Add to this the rise of the qubit, and the next 50 years are going to be even
crazier than the last 50.

Yes, I am a proselytizer of the school of James Gleick. "Faster" was a
prophecy[1].

[1] [https://www.amazon.com/Faster-Acceleration-Just-About-
Everyt...](https://www.amazon.com/Faster-Acceleration-Just-About-
Everything/dp/067977548X)

------
tim333
I've often thought that maybe the reason we can't get a quantum theory of
gravity is that it's too complicated for human brains, rather than that we
need a bigger accelerator. You might be able to get somewhere with a
brute-force approach of almost randomly coming up with equations for a theory
and then trying to see if they make any sense and predict anything
interesting. I suspect a breakthrough may be like AlphaGo's move 37, where it
left the humans saying: wow, what happened there?
[https://www.huffpost.com/entry/move-37-or-how-ai-can-
change-...](https://www.huffpost.com/entry/move-37-or-how-ai-can-change-the-
world_b_58399703e4b0a79f7433b675)

------
ylem
Shameless plug--The American Physical Society has a topical group on Data
Science. Since our annual meeting was cancelled due to Covid, we've been
running a free series of webinars on data science and physics:
[https://www.youtube.com/channel/UCfPG-
nSsgnFeWuzgPcbKlCw/vid...](https://www.youtube.com/channel/UCfPG-
nSsgnFeWuzgPcbKlCw/videos)

If anyone is interested, we have one on data science in industry coming up:
[https://attendee.gotowebinar.com/register/604483936035643777...](https://attendee.gotowebinar.com/register/6044839360356437776)

------
Myrmornis
> If you want to read a linear function from the data in a two-dimensional
> coordinate system in math lessons, you can do it in five minutes - or
> quickly watch a video on YouTube.The situation is different for more complex
> tasks: Physicists, for example, have been trying to combine quantum theory
> and relativity theory for almost a hundred years. And if this succeeds, it
> could take generations to clarify the effects, says physicist Lee Smolin .

What on Earth does that paragraph mean? Parts of the article read to me like
they were generated automatically, but other parts don't.

------
dkural
The author has no idea what he's writing about, calling it a "graphene"
network, with several awkward phrasings about dark matter, etc. Read the
papers instead.

------
tabtab
Re: "scientific progress could be bound by Moore's law and increase so much."

Moore's law appears to be slumping lately.

Re: "This coincides with our previous experience in physics, says Cranmer:
"The language of simple symbolic models describes the universe correctly."

As an approximation, yes, but that doesn't mean a "true" formula has
necessarily been found.

~~~
Veedrac
> Moore's law appears to be slumping lately.

Not so.
[https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrh...](https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrhppW7PT6GCfnrVGhxhLA5PVw)

------
mola
I think it's more of an engineering revolution. The opaqueness of (at least
current) machine learning means we won't really enhance our understanding of
the universe, just our ability to predict it.

Some people would argue that those are one and the same; I think otherwise.

------
andrewon
Not sure if this is really science. Physical formulas are derived from known
physical laws in order to understand the origin of the phenomenon. If
theorists are allowed to make up arbitrary formulas, of course they can fit
the data with less error.

------
jshaqaw
The article confuses me. I was doing symbolic genetic algorithms to derive
formulas back in the mid-90s, so that's not new. But this seems to suggest a
combined genetic algorithm/NN approach is being used. Curious to see the
underlying paper.

~~~
jackcosgrove
I'm also interested to see how this is different from generalized additive
models (GAMs - not GANs). It seems to be the same principle except with a
genetic mutation and selection aspect.

------
ricksharp
What is the purpose of the neural network and how does that help generate the
symbolic regression using genetic algorithms?

Are they somehow using the parameters of the ANN to seed the genetic
algorithms (and their structure)?

------
norcon4
The site is throwing a security error for me: PR_CONNECT_RESET_ERROR. Anybody
else have the same issue? Or is the site just being hugged to death?

~~~
pmontra
Do you use (willingly or not) any proxy, including an antivirus? The problem
might be there.

------
timwaagh
I think this is pretty significant. I would have guessed this to be among the
very last things to be automated.

~~~
ben_w
I was expecting AI to become an indispensable part of science well before it
was able to turn natural language descriptions into functional code, but:
[https://mobile.twitter.com/sharifshameem/status/128410376521...](https://mobile.twitter.com/sharifshameem/status/1284103765218299904)

------
nestorD
TLDR: Using neural networks to model physical systems as black boxes and
then, later, using symbolic regression (a genetic algorithm that finds a
formula fitting a function) on the model to make it explainable and improve
its generalization capabilities.

The system managed to reinvent Newton's second law and find a formula to
predict the density of dark matter.

(note that symbolic regression is often said to improve explainability but
that, left unchecked, it tends to produce huge unwieldy formulas full of
magical constants)
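
For anyone unfamiliar with the term, here is a toy sketch of symbolic
regression (plain random search over expression trees; real systems,
including the one in the article, use genetic algorithms with populations,
crossover, and parsimony pressure, and the target function here is invented):

```python
import random

OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def random_expr(depth=3):
    """Build a random expression tree over x, constants, +, and *."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else ('const', random.uniform(-5, 5))
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(e, x):
    if e == 'x':
        return x
    if e[0] == 'const':
        return e[1]
    return OPS[e[0]](evaluate(e[1], x), evaluate(e[2], x))

xs = [i / 10 for i in range(20)]
target = lambda x: 2 * x + 3          # the "unknown law" to rediscover
best, best_err = None, float('inf')
for _ in range(20000):
    e = random_expr()
    err = sum((evaluate(e, x) - target(x)) ** 2 for x in xs)
    if err < best_err:
        best, best_err = e, err
```

The unwieldy-formula problem mentioned above shows up immediately: without a
complexity penalty, the search happily returns deep trees stuffed with
arbitrary constants.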

------
fxtentacle
No, we have merely found a new and slightly better way of interpolating
between (slow and properly calculated) known data points.

------
godelski
I work in this space (intersection of science and ML) and I can say with high
certainty that Betteridge's Law[0] is likely accurate.

But then again, pretty much any article that uses AI instead of ML is hogwash
too. Are we crediting someone with this one?

[0]
[https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines)

------
LoSboccacc
Curve fitting is not science, no matter how deep the net goes. It's great for
calculation, and for obtaining numerical models of what we can already
measure, but every correlation would require a human to verify it and a theory
to be synthesized after the fact, especially if there's a margin of error or
confidence involved, as generating infinitely many correlations would only
result in finding models that are not there.

This shows the effect of endlessly dissecting data without a theory pretty
well: [https://xkcd.com/882/](https://xkcd.com/882/)

