Neural networks meet space (symmetrymagazine.org)
74 points by qubitcoder on Aug 31, 2017 | 30 comments



'“The neural networks we tested—three publicly available neural nets and one that we developed ourselves—were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” says the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.'

Pretty fascinating stuff. Once you think about it, applying NNs to space makes a lot of sense. There is a ton of data to sift through and find patterns in. Amazing to think neural nets could crunch through this data in seconds and point out areas of interest immediately. I wonder if NNs have been used in the search for exoplanets yet.


> I wonder if NNs have been used in the search for exoplanets yet.

Might be kind of overkill. The patterns being looked for in exoplanet searches are periodic dimmings of stars, AFAIK. I don't think you necessarily need a neural network to sift through that.
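To illustrate (a minimal sketch with made-up numbers, assuming numpy; real survey pipelines are far more careful about noise and systematics): a simple phase-folding search can already flag a periodic dip without any machine learning.

    import numpy as np

    # Toy light curve: a constant star plus noise and a periodic transit dip
    # (all numbers are made up for illustration).
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 90.0, 0.02)                  # observation times in days
    flux = 1.0 + 0.001 * rng.standard_normal(t.size)
    true_period, duration, depth = 3.7, 0.12, 0.01  # days, days, fractional dip
    flux[(t % true_period) < duration] -= depth

    def fold_and_score(trial_period, n_bins=100):
        """Phase-fold the light curve and return the depth of the deepest bin."""
        phase = (t % trial_period) / trial_period
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        counts = np.bincount(bins, minlength=n_bins)
        sums = np.bincount(bins, weights=flux, minlength=n_bins)
        binned = np.where(counts > 0, sums / np.maximum(counts, 1), np.median(flux))
        return np.median(binned) - binned.min()

    trial_periods = np.linspace(1.0, 5.0, 2000)
    scores = [fold_and_score(p) for p in trial_periods]
    best = trial_periods[int(np.argmax(scores))]
    print(f"best trial period: {best:.2f} days (true: {true_period})")

The score peaks near the true period because only then do all the dips pile up in the same phase bins.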


yes and no. Most exoplanet detections so far have been indirect: people search for the dimming from the eclipse, or use other indirect methods, so you're right about that. But recently we've started taking images sensitive enough to see planets directly (see http://planetimager.org). They aren't using NNs yet, but there have been discussions of using NNs for various aspects of those searches.


Detecting dimming is actually surprisingly difficult, and currently it's partly done by humans using a Mechanical Turk-like platform: https://www.planethunters.org/


Dude, stars don't have constant brightness...


They are also very very far away, and only a few photons are making it to the lens every so often; it's quite a difficult task.


Astronomy strives to be a science, so ultimately it needs to tune causal models with data. NNs will by their nature never be causal (their entire point is that they can approximate everything), so they will be used, as here, to find candidates to investigate with real models.


The article says they use the neural net to find lenses which match the model the astronomers develop:

To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day.

I'm not sure what you mean by "real models" in any case.

Things like NN-based generative models combined with model selection certainly can build models that discover real-world behavior. There's a long history of this in the disease epidemiology field. In those cases it usually isn't neural-network based, but that is mostly a question of which learning algorithm is most appropriate for the data available.


"NNs will by their nature never be causal (their entire point is that they can approximate everything)".

An NN can surely result in a traditional causal model.

For example: build an NN that has the same computational structure as some simple physics law. Given training data, it then figures out the necessary constants. That may converge to the traditional model, which is mirrored in the network architecture anyway.
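A minimal sketch of that idea (my own toy example, assuming numpy; the "law" and all numbers are made up): treat Hooke's law F = -k*x as a one-parameter network and let gradient descent recover the constant from noisy data.

    import numpy as np

    # A "network" whose only architecture is F = -k * x, i.e. the
    # computational structure of Hooke's law. Training just recovers
    # the constant k from noisy (x, F) pairs.
    rng = np.random.default_rng(1)
    true_k = 4.2
    x = rng.uniform(-1.0, 1.0, 500)
    F = -true_k * x + 0.05 * rng.standard_normal(x.size)

    k = 0.0                       # the single learnable "weight"
    lr = 0.5
    for _ in range(200):
        pred = -k * x
        grad = np.mean(2.0 * (pred - F) * (-x))   # d/dk of mean squared error
        k -= lr * grad

    print(f"learned k = {k:.2f} (true k = {true_k})")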

So I really can't support saying NNs will by their nature NEVER be causal.


I'm not saying an NN can't reproduce analytical expressions, rather that "take everything and mix it together a bunch of times using this recipe" is not the form the laws of nature take.

I'm not sure what you think explicit construction proves; it is clearly a case of taking preexisting knowledge and expressing it in what is doubtless a more awkward form.


Especially since humans arguably use wetware neural nets, and don't we like to think we come up with causal models?


Well, "neural nets that come up with causal models" is a research problem and we don't know how to do so.


Wouldn't it take too long to train with that amount of data? I'm thinking the NN would have to be in training constantly, because you never want to miss data sets that contain special categories and then fail to catch them at the inference stage.


hi, Yashar here (one of the authors). These NNs were really fast to train. A day or so. We trained them on half a million simulated images. Then they're good to go for the analysis of any new data. We don't need to keep on training them as we get more data.


How long did it take you to get them to train in a day?


surprisingly not that long. We started this in Feb without any expectation that it would work at all, and after 2-3 weeks of playing with things they were working great.


I love how machine learning went to statistical learning and then back to machine learning. Playing is exactly the word to describe the discovery process.


I guess you train it on things you know and use it to tell if a new observation looks familiar or not. You can't use the NN for the final analysis because it's just a pile of linear algebra shaped like a causal theory: there is no actual physics in it.


yes, in principle you're right. But in this case the answer from the neural nets is so incredibly close to the true answer that we think we can trust it for most purposes. If someone really wanted the most accurate answer, they could start from the NN answer and do a proper model-fitting procedure, which of course fits a simulated model with all the appropriate physics in it to the data.
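A sketch of what that refinement step might look like (my own illustration, assuming numpy and scipy; the forward model here is a stand-in Gaussian, not the actual lensing simulator): the NN estimate seeds a conventional least-squares fit.

    import numpy as np
    from scipy.optimize import least_squares

    def forward_model(params, grid):
        """Stand-in for a physical simulator: a 1-D Gaussian blob whose
        amplitude and width play the role of lens parameters."""
        amplitude, sigma = params
        return amplitude * np.exp(-(grid ** 2) / (2.0 * sigma ** 2))

    # Fake "observation" (made-up numbers, purely illustrative).
    grid = np.linspace(-5, 5, 200)
    true_params = np.array([1.3, 0.8])
    rng = np.random.default_rng(2)
    data = forward_model(true_params, grid) + 0.02 * rng.standard_normal(grid.size)

    # Pretend this is the neural net's fast estimate -- already close.
    nn_estimate = np.array([1.25, 0.85])

    # Refine with a conventional fit, using the NN output as the starting point.
    result = least_squares(lambda p: forward_model(p, grid) - data, x0=nn_estimate)
    print("NN estimate:", nn_estimate, "-> refined:", result.x)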


Well, in the important case of finding something new and interesting you need a proper MC to verify your understanding. In practice you are right that using the NN as if it actually speaks about reality will be common and not too harmful; people do lots of dubious least-squares fits and astronomy still survives!


dimensionality reduction is very powerful when you need to traverse a large configuration space and look for things that seem salient.
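For instance (a toy sketch assuming numpy, with a planted outlier): project onto the top principal components and the salient sample stands out immediately.

    import numpy as np

    # Minimal PCA via SVD: project high-dimensional samples onto the few
    # directions that carry most of the variance, then look for outliers.
    rng = np.random.default_rng(3)
    data = rng.standard_normal((1000, 50))        # 1000 samples, 50 features
    data[7] += 8.0                                # plant one "salient" sample

    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ vt[:2].T               # keep the top 2 components

    distances = np.linalg.norm(projected, axis=1)
    print("most salient sample:", int(np.argmax(distances)))   # -> 7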


Naive question: is the exponential increase in performance talked about in this article unique to neural nets? Or are there other techniques for writing classifiers that yield the same performance increase, given advances in hardware?


Neural networks are function approximators. So if you 1) know an algorithm that is computationally expensive but not highly random and 2) have a lot of inputs and outputs of that algorithm, you can usually train a neural network to approximate it with what amounts to a closed-form formula. That boils down to a bunch of matrix multiplies with some standard non-linear functions in between.
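Concretely, the forward pass really is just that (a minimal sketch assuming numpy, with made-up layer sizes and random, untrained weights):

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    # A tiny two-layer network: matrix multiply, nonlinearity, matrix multiply.
    rng = np.random.default_rng(4)
    W1, b1 = rng.standard_normal((16, 3)), np.zeros(16)   # 3 inputs -> 16 hidden
    W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)    # 16 hidden -> 1 output

    def forward(x):
        """Approximate some target y = f(x); training would adjust
        W1, b1, W2, b2 to minimize the error on known input/output pairs."""
        return W2 @ relu(W1 @ x + b1) + b2

    print(forward(np.array([0.1, -0.4, 2.0])))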


Is that anything like polynomial fitting? What with PTIME and NP-completeness?


Kind of - but instead of computational complexity in the "NP" sense, you have lots of data. It's often so hard to get good training data that just waiting for the big algorithm to finish can be cheaper, so you have to weigh that.


well sure, you said as much before. But I was also thinking, what if P~=NP, in the sense that any function can be approximated by a polynomial of sufficiently high degree.
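As a quick illustration of the approximation intuition (a toy fit assuming numpy, unrelated to the lensing work): a polynomial of high enough degree gets arbitrarily close to a smooth function on a bounded interval.

    import numpy as np

    # Fit polynomials of increasing degree to a smooth nonlinear function
    # and watch the approximation error shrink on a bounded interval.
    x = np.linspace(-1.0, 1.0, 400)
    y = np.exp(np.sin(3.0 * x))

    for degree in (3, 7, 15):
        coeffs = np.polyfit(x, y, degree)
        error = np.max(np.abs(np.polyval(coeffs, x) - y))
        print(f"degree {degree:2d}: max error {error:.2e}")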


No, you can of course write some simple stretchy model that fits fast. The thing with NNs is that you don't have to do much work to get a good fast model; you use CPU cores for that instead.


Nice. I'm friends with the authors; I'll try to bring them here to answer questions.


Bob?


If I understand it correctly, it sounds like they used a NN to fit a surrogate model to the kind of analytical physics-based pipeline that they had been using before?



