
Recreating Our Galaxy in a Supercomputer - chmaynard
http://www.caltech.edu/news/recreating-our-galaxy-supercomputer-51995
======
throwaway_yy2Di
Apropos, the Gaia space telescope's first data release is next week. It's an
extremely large dataset of the kinematics (3d position + velocity) of Milky
Way stars.

[http://www.cosmos.esa.int/web/gaia/dr1](http://www.cosmos.esa.int/web/gaia/dr1)

------
hossbeast
I'm always a bit uncomfortable when I read about findings from these
simulations. To say they have "answered questions" about galactic evolution
seems too strong a statement. How about, "created a more accurate model"?

~~~
VikingCoder
"answered questions"

Yes. Like, "Given everything we understand, how could it be that in the
physical world, there aren't more dwarf galaxies?"

And there was no answer to that question, until now.

Now they come up with a new model, and run it, and they end up with a number
of dwarf galaxies that much more closely matches the physical world. No one
had done that before, so we didn't know how it could be.

This has answered the question, "HOW COULD IT BE?"

The answer is, "If reality matches this new model."

~~~
a_throwaway3838
Saying "it has been answered" is a strong definitive statement, though. It's
rather arrogant to say "Yes, thanks to this model, we absolutely know <xyz>."
Are we absolutely certain that this model is correct? Are we absolutely
certain that what we can extrapolate from this model accurately applies to the
physical world?

Granted, yes, questions were answered... I guess. But saying that "The
simulation solves a decades-old mystery" is uncomfortably confident and
counter to how the scientific process works.

It's definitely catchy and clickbaity, though.

~~~
lutorm
Some people like to say things like that, but it's more accurate to say "this
proves our current model is consistent with data". I only skimmed the article
(and haven't read the actual paper) but it focused on surrounding dwarf
galaxies which have traditionally been a problem with these simulations. I'm
sure there are still plenty of ways in which these sims do not match our
actual universe.

------
fiatjaf
Nice, you can't model a worm, but you can model a galaxy.

~~~
misja111
[http://www.openworm.org/](http://www.openworm.org/)

~~~
FeepingCreature
Still don't have a working integrated model.

------
jessriedel
For context, this question of whether vanilla dark matter models are in
conflict with galactic observations has been raging for many years. (It
includes both the dwarf galaxy problem mentioned in the article, and things
like the "core-cusp" problem concerning the radial dark matter density
distribution.) These numerical simulations are fiendishly complicated, and
there's always a risk that you just keep adding new tweaks or effects until
you get the results you're looking for. I'd be interested in reading a
skeptical assessment from the folks on the other side of this question.

------
starmole
Can somebody who knows about this tell me how much of the novelty in this is
CS and how much physics? For example in graphics all the physics of light is
well understood - it's just infeasible to compute - so all advances are in CS
and how to sample better. Is this the case here? Or is it adding new models?

~~~
sillysaurus3
_For example in graphics all the physics of light is well understood - it's
just infeasible to compute - so all advances are in CS and how to sample
better._

Nitpick: The physics of light is understood. Human vision is also understood.
But critically, human vision is completely overlooked in most graphical
applications. It's not accurate to say that "the path toward making a
computer-generated video indistinguishable from a camcorder recording is just
a matter of sampling better." There will need to be a fundamental shift in the
industry's thinking before realism will be achieved.

It's a fascinating topic that occupied many years of my life, though this is a
bit off-topic.

~~~
Le_SDT
I'm curious. What do you mean by "Human vision is completely overlooked"? Are
you talking about color perception? 3d rendering?

Thanks :)

~~~
sillysaurus3
There's pretty much no way to avoid writing a large comment in response to
this. So, with apologies...

It was my focus for a long time to achieve perfectly photorealistic rendering.
I come from a gamedev background, and I've been fascinated since around age
ten about how to get a computer to paint pictures. By 17 I was writing game
engines mostly equivalent to Quake, and used this portfolio to get into the
industry. The next three years were spent getting as close as possible to the
bleeding edge of real-time graphics development.

I remember having long debates with a colleague about what high-dynamic range
lighting "meant." "If we spin around in our office chair, our brains do not
suddenly change the overall brightness of this office. Why should we be
programming games to do this? Why is everyone doing things that way?"

My concerns were of a more fundamental nature as well. What is a diffuse
texture? A diffuse texture is a well-understood concept in realtime graphics.
Anyone with a basic knowledge of shaders should immediately recognize:

    
    
      color = lighting * diffuse + ambient
    

Trouble is, it doesn't correspond to reality even slightly. It's not even
_roughly_ close. It happens to look good to humans, and that's why we use it.
But the trouble was, the further I tried to probe the mystery of realism in
computer graphics, the more I ran against this phenomenon of "We use X
technique because it looks good."
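
Spelled out, that line is roughly the following (a sketch in plain Python
rather than shader code; the function and variable names are illustrative,
not any particular engine's API):

```python
# Minimal Lambertian-style diffuse shading: the classic "looks good"
# model, not a physical one. Colors are RGB triples in [0, 1].

def shade(normal, light_dir, light_color, diffuse, ambient):
    # Lambert's cosine term: brightness falls off with the angle
    # between the surface normal and the direction to the light.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # color = lighting * diffuse + ambient, componentwise per channel
    return tuple(lc * n_dot_l * d + a
                 for lc, d, a in zip(light_color, diffuse, ambient))

# A reddish surface facing a white light head-on:
print(shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
            (1.0, 1.0, 1.0), (0.8, 0.2, 0.2), (0.1, 0.1, 0.1)))
```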

So I turned to research papers. Books. The medical field. Everywhere that was
remotely related to possible breakthroughs in photorealistic rendering.
Research papers are excellent for assembling techniques, but not results. The
books in human vision and color science were more promising, yet most of the
industry seemed (and still seems) to pay little attention to them. Compare a
book about color perception to, say,
[http://www.pbrt.org/](http://www.pbrt.org/) and you'll see a stark
difference. Flip through the table of contents and you get transformations,
shapes, primitives, color and radiometry, sampling, reflection, materials,
texture, scattering, light sources, Monte Carlo integration...

And for what? We know that these techniques simply do not produce computer-
generated videos that a human will identify as a real-life image. It's not for
lack of processing power. There is a disconnect between the old rules and
those that will ultimately result in real-time realism, and you won't find it
in that table of contents.

Now, the trouble with writing all of this is that if I knew how to do it, I'd
have done it already. It's a life-long search, and it's not so easy to refute
an entire industry without being (rightly) dismissed. But if you wish to know
what I suspect are the ways forward, it's this: Get a camera. Take photos.
Compare these photos to the results of the algorithms you write. Iterate on
your algorithms until they are producing results that match something that
already captures nature, not our beliefs about how we ought to be able to
capture nature. "Just throw in physical models and presto!" has not thus far
been true.

You'll notice, for example, that computer graphics in videogames have
plateaued. They get more impressive with each generation, but that
impressiveness does not get them progressively closer to looking _real_. Nor
should it. A computer game tells a story. The closer it looks to real life,
the more restricted the artists are, along with the rest of the design of the
game.

So we turn to the movie industry for hope. But it's restricted in exactly the
same way. The research papers are all along the lines of new techniques to
try, or studies of existing pipelines and how to deal with their complexities.
It's not fundamental research.

As someone who has spent his life in pursuit of realism in computer-generated
video, my recommendation is this: Read da Vinci's journal. Pay attention to
what each page is saying. He had to discover from first principles what makes
a painting look real, and why. You'll notice that he spends most of his time
talking about human vision and our perceptions of color.

If someone is going to make this development happen, it's not going to come
from the game industry, and it won't come from the movie industry. That leaves
you. Hopefully this will encourage some of you to pursue this. Once you accept
that most of the computer graphics industry isn't actually focused on
achieving realism, you'll start to develop your own techniques. My hope is
that this will eventually lead to a breakthrough.

~~~
formula1
I appreciate that you probably put a lot of effort in, but your post largely
sums up to "They are wrong, I've done extracurricular research to prove it,
and I have no alternative."

While noble, I can't help wanting to understand what alternatives you were
leading yourself toward. As an example, light can arguably be simplified to
intersecting cylinders and spheres that bounce off surfaces to create new 3d
shapes. Each shape would also have an origin 2d shape based upon what's
reflecting it. An "eye" reads shape intersections with itself and can also
filter those intersections with respect to the origin shape. After each
bounce, the new shape takes on the bouncing light's color multiplied by the
color of the bounced object. In low-light situations, subtle luminosity
differences can be enhanced.

What I did was offer an example. Perhaps you'll one day be successful, but I
got the impression you are some kind of renegade with a mission. While I can
certainly relate to that, I view science and building the future as quite far
from renegade status. And in the meantime, you gave me a sob story with no
algorithms/solutions except for "take real pictures and compare them". As a
lazy programmer, walking outside and discovering the world doesn't interest
me too much.

~~~
sillysaurus3
We can come up with hundreds of simplified lighting models. Have you tried the
one you mentioned? Does it correspond to photographs, video? That's the hard
work, and it's why I listed none.

Here's something specific: What did you mean by "multiply"? You cannot
"multiply" colors. Not unless you concede that your model has nothing
whatsoever to do with physical reality. And at that point, why not use a photo
of nature (or your eyes' perception of nature) as a baseline comparison?
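
Here's a toy numerical illustration of the point (the sensor sensitivities,
spectra, and reflectance are made up for the demo, not real colorimetry):
two spectra can produce identical RGB responses yet reflect off the same
surface into different RGB responses, and componentwise RGB multiplication
is blind to the difference.

```python
import numpy as np

# Toy "eye": 3 sensor sensitivities over 4 wavelength bins (invented,
# not real cone responses -- just enough structure to show the point).
M = np.array([[1.0, 0.0, 0.0, 0.0],   # "R" sensitivity
              [0.0, 1.0, 0.0, 0.0],   # "G" sensitivity
              [0.0, 0.0, 1.0, 1.0]])  # "B" sensitivity

def rgb(spectrum):
    return M @ spectrum

# Two different light spectra the eye cannot tell apart (metamers):
s1 = np.array([1.0, 1.0, 1.0, 1.0])
s2 = np.array([1.0, 1.0, 1.5, 0.5])
assert np.allclose(rgb(s1), rgb(s2))  # identical RGB responses

# A surface reflectance that absorbs only the last wavelength bin:
r = np.array([1.0, 1.0, 1.0, 0.0])

# Physically, reflection multiplies per *wavelength*, and the two
# metamers now produce different RGB responses:
print(rgb(r * s1))
print(rgb(r * s2))

# Componentwise RGB "multiplication" cannot see that difference --
# it produces the same answer for both, which is wrong for s2:
print(rgb(r) * rgb(s1))
print(rgb(r) * rgb(s2))
```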

From
[http://www.feynmanlectures.caltech.edu/I_35.html](http://www.feynmanlectures.caltech.edu/I_35.html):

"The phenomenon of colors depends partly on the physical world. We discuss the
colors of soap films and so on as being produced by interference. But also, of
course, it depends on the eye, or what happens behind the eye, in the brain.
Physics characterizes the light that enters the eye, but after that, our
sensations are the result of photochemical-neural processes and psychological
responses.

There are many interesting phenomena associated with vision which involve a
mixture of physical phenomena and physiological processes, and the full
appreciation of natural phenomena, as we see them, must go beyond physics in
the usual sense. We make no apologies for making these excursions into other
fields, because the separation of fields, as we have emphasized, is merely a
human convenience, and an unnatural thing. Nature is not interested in our
separations, and many of the interesting phenomena bridge the gaps between
fields."

Walking outside and discovering how the world looks is exactly how to improve
your techniques as a graphics programmer.

~~~
tizzdogg
I work in film vfx, and if you think we don't compare our rendered images to
real photographs and video constantly, then you don't really know how modern
graphics are produced. We shoot reference for everything, and use things like
gonioreflectometers to sample real-world BSDFs. Yes, lighting models are
always necessarily simplified from the real-world physics. But the thing is
that 99% of the time that doesn't matter, because most common types of
surfaces can be reproduced accurately enough to fool the majority of people.
I would wager that most of the CG things you see these days on TV or in
movies you have no idea were CG... it's only the bad stuff (or the obviously
impossible) that stands out.

I'm actually a bit confused about what you're arguing here... all of these
problems you're mentioning have been well understood by graphics researchers
for the past couple of decades.

~~~
sillysaurus3
The main point was that film isn't interested in _exact_ photorealism. As you
said, it doesn't matter, because the simplified models are good enough.
Therefore it's unlikely that the film industry will be the first to produce a
fully computer-generated video that will be indistinguishable from a camcorder
capture.

The reason most of the CG you see in TV or movies looks very good is because
they take place within real video. We're not looking at a completely CG scene
-- it's mixed with video from the real world. And that's a perfectly valid
technique, but my comment was talking about 100% CG.

A secondary point re: the film industry is that artists must necessarily
retain control of the art pipeline in order to create scenes that advance the
plot. That requires the art pipeline to be flexible. The more flexible your
art pipeline, the more productive your studio is. Yet that flexibility is
precisely opposite to realism. Obviously, the more realistic a purely CG scene
looks, the _less_ flexibility you get, otherwise it wouldn't appear real;
hence the argument that the vfx industry won't be the ones to produce the
elusive fully-CG fully-realistic video. (It doesn't make financial sense for
them to do so, if nothing else.)

~~~
tizzdogg
Again, I would wager that you have seen many, many shots on TV and in movies
which were fully, 100% CG already. You just didn't notice because they really
were indistinguishable. We've definitely already crossed that boundary in many
areas, and the main thing that still stands out is just bad work.

------
mikekij
It'd be pretty cool for us to discover that there are tiny virtual humans in
that simulation, complete with consciousness, contemplating their own
existence.

~~~
BurningFrog
But what if they are wicked?

~~~
treehau5
Define wicked

~~~
macintux
Every single male has a goatee.

------
DigitalPhysics
I think Stephen Wolfram is right: physics is going through a paradigm shift,
albeit one that might last decades. Digital Physics is here!

[https://vimeo.com/ondemand/digitalphysics/](https://vimeo.com/ondemand/digitalphysics/)

------
MarlonPro
Title should be: "Creating a Model of the Known Galaxy in a Supercomputer"

------
shireboy
See, this is how universes get created.
[https://en.wikipedia.org/wiki/Simulation_hypothesis](https://en.wikipedia.org/wiki/Simulation_hypothesis)

------
nsxwolf
I thought the Milky Way was a barred spiral.

------
Sir_Cmpwn
I wonder if any aliens out there have taken pictures of the real deal yet.

