Hacker News
Machine learning accelerates cosmological simulations (phys.org)
24 points by dnetesn on May 6, 2021 | hide | past | favorite | 10 comments


I see a lot of problems with such approaches. Firstly, the results may look similar, but they certainly are not the same, so using them to get an idea/a hunch is OK, but using them directly for scientific purposes is questionable. Secondly, NNs often extrapolate badly; at best they are unreliable because we don't know why/how they work. Object recognition is the prominent counter-example here. Even with state-of-the-art networks, while the performance is usually very good, they sometimes surprise us with "idiotic" errors and can be easily confused by occluding part of the picture or simply changing the scale/perspective. We're still far, far away from robust networks, so I'd be cautious about using such techniques. Besides, one important goal of simulations is to help validate theories. If the simulation result is faked, it might help to demonstrate the idea, but not the theory itself.
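
As a concrete illustration of the occlusion point, here's a minimal sketch (assuming PyTorch/torchvision are installed; the model choice and the input image "cat.jpg" are hypothetical stand-ins) showing how zeroing out one patch can flip a pretrained classifier's prediction:

    # Sketch: probe a pretrained classifier's sensitivity to occlusion.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(pretrained=True).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical image
    occluded = img.clone()
    occluded[:, :, 80:144, 80:144] = 0.0  # zero out a 64x64 patch

    with torch.no_grad():
        before = model(img).argmax(1)
        after = model(occluded).argmax(1)
    print("label changed by occlusion:", (before != after).item())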


It has multiple advantages:

Your ML model can be improved.

It can help you find the things on which you then spend the real effort.

You can share your ML model.

The end results need to be verified, of course.


So here's a thing: I recently talked to the head of a new mathematics "excellence cluster" at a big German university. His opinion was that AI is a fundamentally new approach to solving differential equations -- in contrast to the classical semi-linear methods in numerics. That's what it looks like when even scientists "go with the hype", and that's why in academia it (sometimes) feels like you can apply for any grant as long as you say you do AI.


I may have missed it while skimming the article, but I saw no mention of how accurate the ML model was.

It may look like the results of a high-fidelity simulation, but how useful is that?


I feel skeptical as well.

> "We couldn't get it to work for two years," Li said, "and suddenly it started working. We got beautiful results that matched what we expected."

How many times have you gotten an algorithm to work, only to realize later that you had an off-by-one error, or a double-free somewhere in your code that you didn't catch until you exposed your program to more situations?

I am curious about how they validated the ML approach. The advantage of simulation is the ability to uncover emergent phenomena that are difficult to predict and are not expected. It could be that they’re averaging a lot of common situations and they may miss novel outcomes.


They mention a GAN, which is a generative model for which there is currently no good evaluation measure (besides metrics like the Fréchet inception distance).
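
For reference, the Fréchet inception distance itself is simple enough to sketch; this toy version (my own sketch, not from the paper) computes it between two sets of feature vectors, which in practice would come from a pretrained Inception network:

    import numpy as np
    from scipy import linalg

    def fid(feats_real, feats_fake):
        mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
        c1 = np.cov(feats_real, rowvar=False)
        c2 = np.cov(feats_fake, rowvar=False)
        covmean = linalg.sqrtm(c1 @ c2)
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        return float(((mu1 - mu2) ** 2).sum()
                     + np.trace(c1 + c2 - 2 * covmean))

    # Toy usage with random features; real use would extract
    # Inception activations from real vs. generated images.
    print(fid(np.random.randn(500, 64), np.random.randn(500, 64)))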

The article only mentions a qualitative comparison, with no incorporation of the causal / physics-based modeling that I would imagine is important in astronomy.

It's easy enough for a GAN to synthesize realistic, high-resolution images without any underlying model of reality/causality.


Umm. Is this accelerated due to the inherent fuzziness of neural-style calculation, or because they have the models running on NN/TF/GPU-style hardware? I skimmed and couldn't find an answer. As for the "is it accurate?" question: no, no it isn't, but then neither is any other model -- it just has to be good enough over the time frames concerned. This is more like weather forecasting than chess.


From the article, it seems like what they're doing is equivalent to typical "super-resolution" tasks, where you refine a coarse grid into a (consistent) fine one using AI heuristics. This one even has a GAN to make sure the difference between original and super-resolved data isn't easy to spot (which it should be if the physics was wrong).
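
In a minimal PyTorch sketch, that setup would look something like this -- the architectures are placeholders I made up, not the paper's actual networks, but they show the generator/discriminator split described above:

    import torch
    import torch.nn as nn

    G = nn.Sequential(               # coarse field -> refined field
        nn.Upsample(scale_factor=2),
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1))
    D = nn.Sequential(               # field -> real/fake logit
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 32 * 32, 1))

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

    coarse = torch.randn(8, 1, 32, 32)  # stand-in: low-res simulation batch
    fine = torch.randn(8, 1, 64, 64)    # stand-in: matching high-res runs
    real_lbl, fake_lbl = torch.ones(8, 1), torch.zeros(8, 1)

    # D learns to tell real high-res fields from super-resolved ones.
    fake = G(coarse)
    loss_d = bce(D(fine), real_lbl) + bce(D(fake.detach()), fake_lbl)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # G learns to make its output indistinguishable from the real thing.
    loss_g = bce(D(fake), real_lbl)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()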

This is fine in physics generally: since the initial conditions aren't exact anyway, often you just want some plausible result rather than the one that corresponds exactly to the (microphysical) details of your input.

If someone else wants to actually read the paper and double check, that would be great.


I think the question is less "is it accurate?" and more "do we know how inaccurate it is?"

Inaccurate models can still be useful as long as we have an idea of how inaccurate they are in the worst case. If we have no idea how accurate something is, we can't tell whether it's sufficient for its purpose.
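
For example, a rough sketch of what that validation could look like -- comparing a hypothetical surrogate against the full computation on held-out inputs and reporting the worst case (the function names here are illustrative, not from the paper):

    import numpy as np

    def worst_case_error(surrogate, full_simulation, validation_inputs):
        errors = []
        for x in validation_inputs:
            approx = surrogate(x)
            exact = full_simulation(x)
            errors.append(np.max(np.abs(approx - exact)))  # sup-norm per case
        return max(errors), float(np.mean(errors))

    # Toy usage: pretend the surrogate drops a small correction term.
    full = lambda x: np.sin(x) + 0.01 * x
    approx = lambda x: np.sin(x)
    worst, mean = worst_case_error(approx, full,
                                   [np.linspace(0, 10, 100)] * 5)
    print(f"worst-case error: {worst:.3f}, mean error: {mean:.3f}")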


Differential equations vs. machine-learning-based models/data -- I am wondering about the trend of scientific progress in the next few years. Which one is better? Let time tell us the truth. Science is not the truth itself, but a journey to find the truth.



