
100,000 stars - davex
http://workshop.chromeexperiments.com/stars/
======
equalarrow
WOW, that is amazing. I'm always blown away by stuff like this, where you can
actually get a sense of how small we all are and how distant even the closest
neighbor stars are.

I just close my eyes for a minute and think (or try to) what it would be like
for those people who are finally able to reach, say, Vega (I know it's not
the closest). Sure, this is not a big deal in sci-fi, but in reality it's
pretty mind-blowing. This is 100% why I seriously want to live for a few
hundred years: to have the opportunity to see the first time we actually go to
the nearest star.

In the meantime, I guess this will have to suffice.

I also love this image, which is not interactive like this, but still mind-
blowing:
[http://en.wikipedia.org/wiki/File:Earths_Location_in_the_Uni...](http://en.wikipedia.org/wiki/File:Earths_Location_in_the_Universe_\(JPEG\).jpg)

~~~
incision
>WOW, that is amazing. I'm always blown away by stuff like this, where you can
actually get a sense of how small we all are and how distant even the closest
neighbor stars are.

Same.

If you don't already own a pair, I'd recommend getting some basic binoculars
and doing some backyard astronomy. You'd be amazed how much more you can see
with a basic 10x50 pair, even in thoroughly light-polluted skies.

Also SpaceRip [1] collects hundreds of interesting, easily digestible and
pretty timely videos.

1: <http://www.youtube.com/user/SpaceRip>

~~~
alwaysinshade
> I'd recommend getting a basic pair of binoculars and doing some backyard
> astronomy

If you've got the spare cash, get image stabilized ones. I could clearly see
the moons around Jupiter with my Canon 12x36 IS binos the other night despite
my hand tremors. The real party trick is handing them to a friend and telling
them to look at the moon. Blows them away every time - to most people it's
just a yellowish glowing thing in the sky, rather than a scarred rocky globe.

------
shanelja
This is without a doubt one of the coolest and most beautiful WebGL
experiments I have seen in the last few years. It actually struck a real chord
with me: the music, lighting, effects, the zoom and the sheer beauty of it.

For those unlucky enough not to be able to load this app (it took me quite a
while), here is a particularly fantastic image I took (without asking or any
right to, of course):
[http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_co...](http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_cosmos.png)
I apologize for any slow load times.

~~~
altrego99
I am guessing you have liked this too -
<http://www.youtube.com/watch?v=17jymDn0W6U>

Being a video, it's not interactive, but it definitely strikes something in
me. It's almost the Total Perspective Vortex.

~~~
shanelja
Until now I had not seen this video, but thank you.

This really puts into perspective the brevity of human life and how little we
have achieved so far: from leaving the primordial soup, to firing Glee across
our television networks to entertain teenage girls, to travelling to and from
the moon.

We are irrelevantly small and unimportant, and yet we have already done the
hardest thing that we know of for sure. There are an estimated 8.5 million
species on the planet, and we are the only constructively intelligent one
present. There are about 400 known satellites in our solar system. Assume that
every solar system in the Milky Way, which holds around 300 billion stars, has
a similar number of planetoids on which some minor form of life could have
grown. I'm going to take a _complete_ guess that only 1 in 10,000 of those
contain a similar amount of life, which could be light years off, or could be
spot on, or could even be far, far less than the actual number; we simply
don't know yet.

The maths is breathtakingly overwhelming:

(1 / ((8500000 * 400) * 300000000000)) * 10000 ≈ 0.0000000000000000098

Our significance in the Milky Way is about 0.0000000000000000098.

We account for only about 0.00000000000000098% of potential life in this
galaxy.

But we survived. We made it this far. From here the only way is up, or down,
or left or under (depending on the location of the camera when we finally make
it far enough off this rock to consider it interstellar travel.)

 _Disclaimer: This maths is about as good as I could manage at 5.30am and is
the product of a Google search of the accumulated human knowledge of the last
few thousand years._
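
For anyone who wants to check the arithmetic, here is the same estimate in a
couple of lines of Python. Every input is one of the guesses from the text
above, not an established figure:

```python
species = 8_500_000        # rough estimate of species on Earth
moons = 400                # known satellites in the solar system
stars = 300_000_000_000    # rough star count in the Milky Way
guess = 10_000             # pure guess: 1 in 10,000 systems hosts similar life

share = (1 / (species * moons * stars)) * guess
print(share)               # roughly 9.8e-18
```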

~~~
nathan_long
>> We are irrelevantly small and unimportant

That we are relatively small is undeniable. That we are unimportant is an
emotional judgement that I think is unwarranted. Size != importance.

Importance is a value judgement made in a mind. My left thumb is more
important to me than Alpha Centauri is. As far as I know, Alpha Centauri has
no opinion on the matter.

Are we important to one another? To yet-unknown sentient creatures? To God?

The answers to those questions will probably not depend on whether we are 1
meter or 1 parsec tall.

------
mey
_Warning: Scientific accuracy is not guaranteed. Please do not use this
visualization for interstellar navigation._

~~~
typpo
If you are interested in scientifically accurate models in webgl, you may like
this asteroid simulation, which models objects in our solar system in an
astronomically accurate manner: <http://asterank.com/3d/>

~~~
batgaijin
that's a really cool simulator! what's the license for it? I can't find it on
your github account.

------
secondForty
This is REALLY COOL!

Could someone explain how this is built or give an overview of how it works?
In the 'about' page <http://www.chromeexperiments.com/detail/100000-stars/> it
says WebGL and CSS3D, but I'm wondering how they fit together and what does
what.

Is there a better way to view the source than just 'view source' in chrome?

I know a number of programming languages and I'd like to learn more about how
this project works. [Saw the link to book on graphic programming in other
comments below <http://www.arcsynthesis.org/gltut/index.html>, but how to
"take apart and study" this project? ] Kudos to anyone who can point me in the
right direction. Thanks!

~~~
krsunny
<https://github.com/mrdoob/three.js/> - That should get you going. Download
that and take a look at all the examples and how they are made. Here is a good
book on WebGL too: [http://www.amazon.com/WebGL-Up-Running-Tony-
Parisi/dp/144932...](http://www.amazon.com/WebGL-Up-Running-Tony-
Parisi/dp/144932357X/)

~~~
secondForty
Thanks krsunny! It really helps to have some advice on what's current and
where to start.

------
rogerbinns
Here is a great YouTube video showing the sizes of objects, starting with the
moon and working its way up to the largest known star. (Our Sun is a rounding
error at that point!) <https://www.youtube.com/watch?v=HEheh1BH34Q> - if
YouTube refuses to play because of audio, try
<https://www.youtube.com/watch?v=fKTu6B4Rgek>

Here is another one showing an animation of asteroids discovered in our solar
system from 1980 to 2011. It starts off pretty tame, and by the end gets
scary! <https://www.youtube.com/watch?v=ONUSP23cmAE>

------
ajross
Amusingly, for a "Chrome Experiment", it refuses to run on my Chrome (Chrome
23 on Fedora 17 with Ivy Bridge graphics).

Firefox on the same machine works flawlessly.

------
juanre
For those of you interested in the topic, the best publicly available database
I've found is <http://www.astronexus.com/node/34>, and The Book for
astronomical computation is Astronomical Algorithms, by Jean Meeus,
<http://www.willbell.com/math/mc1.HTM>

(Shameless plug: I used both to implement the Common Lisp sky rendering
engine for my startup, <http://greaterskies.com>, which makes pretty
personalized posters out of thousands of stars.)

------
scrumper
It's like the galaxy map in Frontier Elite II was, in my imagination.

~~~
tanepiper
This is exactly what I was thinking.

------
ynniv
On my MBP scrolling is backwards (swipe up moves closer), and mouse motion
controls the camera offset angle. It's easy to write the code this way, but
awkward and surprising during use. It's better to pretend that the hand
manipulates the model, with a swipe up pushing it away and a swipe right
turning the model counterclockwise around the vertical axis of the view.

~~~
duaneb
Swiping up to move closer feels natural to me. Perhaps it is the other
platforms that are wrong?

~~~
ynniv
It would take a proper user interaction study to find out, but I don't think
that I am wrong. Imagine a photograph on a table that you want to see better
(make larger). Your natural motion is to touch and pull, which is dragging
down.

~~~
Domenic_S
Hold control on your MBP and scroll up. It zooms in.

command and '+' zooms text.

Scrolling down generally scrolls to the end of a page. Scrolling down here
scrolls to the end of the galaxy.

Hold out your hand and make the "unpinch" gesture, which enlarges photos in
iOS and Android. Which direction do your scrolling fingers move?

~~~
ynniv
_Hold control on your MBP and scroll up. It zooms in._

This is a rare interaction, which is arguably backwards.

 _command and '+' zooms text. Scrolling down generally scrolls to the end of a
page. Scrolling down here scrolls to the end of the galaxy._

Those are all logical, not physical mappings. They are not relevant here.

 _Hold out your hand and make the "unpinch" gesture, which enlarges photos in
iOS and Android. Which direction do your scrolling fingers move?_

Left and right? You're seriously stretching to make your argument. People pull
things towards them to get a better view. They push them away to see the
bigger picture. Done.

~~~
Domenic_S
You can't be serious. Put your hand to your monitor, and with your thumb and
forefinger, unpinch. Your thumb and finger move left and right?

So my counterexamples are irrelevant, and you unpinch in a way nobody else
does. Sounds like you just get off on being contrarian.

------
epsylon
Absolutely amazing. By the way, it also works on Firefox. (Though performance
isn't... _stellar_ )

~~~
narag
FF 16.0.2 on Windows 7 64 here... it works smoothly, but that's no surprise,
this machine is very powerful.

The page is so beautiful! Until now I've never felt the need to say this: wish
I could upvote it more :-)

------
mratzloff
This is incredible, and with all the positional, magnitude, and spectral
information publicly available, anyone could do it.

I would really love to see a search box that would allow me to jump to a
specific star.

------
Detrus
MBP Chrome here; I see maybe the 50 closest stars. When I zoom out, the stars
in the galaxy are just transparent squares. Doesn't look like
[http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_co...](http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_cosmos.png)
or
[http://www.chromeexperiments.com/detail/100000-stars/img/ahZ...](http://www.chromeexperiments.com/detail/100000-stars/img/ahZzfmNocm9tZXhwZXJpbWVudHMtaHJkchgLEg9FeHBlcmltZW50SW1hZ2UY27DmBQw/large)

~~~
owenjones
Same, on a Mac Mini.

------
shn
67:2 [He] who created death and life to test you [as to] which of you is best
in deed - and He is the Exalted in Might, the Forgiving -

67:3 [And] who created seven heavens in layers. You do not see in the creation
of the Most Merciful any inconsistency. So return [your] vision [to the sky];
do you see any breaks?

67:4 Then return [your] vision twice again. [Your] vision will return to you
humbled while it is fatigued.

------
malbs
Nothing interesting to add except it was an enjoyable way to start the day,
thanks. Always good to have something remind you how insignificant you are!
The music was great too

edit: loaded it in chrome instead, even better (should have been obvious given
it's located on chromeexperiments.com)

------
dennish00a
I need to learn 3D graphics for some of my scientific projects. Specifically,
I want to rotate clouds of points just as shown here. I have no idea where to
start with doing this, however. Can somebody point me to a good tutorial or
other resource in 3D graphics?

~~~
kragen
I don't know of a good one, so I'll write one here. This focuses on how things
work, rather than how to use existing APIs, because I've basically never used
GL. What's below is short and doesn't have much math, but it should be enough
to allow someone who knows linear algebra and 2-D graphics to both understand
and rederive most of 3D graphics.

To rotate a point cloud, you multiply each point by a rotation matrix to get
the rotated point. A rotation matrix that rotates around the X-axis looks like

    
    
        [[1  0  0]
         [0  c  s]
         [0 -s  c]]
    

where s and c are the sin and cos of the angle you want to rotate. Then you
can do an orthographic projection by just dropping the Z coordinate, leaving
just X and Y coordinates (which you may need to scale to your screen), or a
perspective projection by dividing X and Y by Z. (Be wary of division by
zero.)
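
As a minimal sketch of those two steps (assuming the column-vector convention,
so the matrix shown above multiplies points on the left):

```python
import math

def rotate_x(point, angle):
    """Rotate a 3-D point around the X-axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    # Rows of the matrix above: y' = c*y + s*z, z' = -s*y + c*z
    return (x, c * y + s * z, -s * y + c * z)

def project(point, perspective=True):
    """Drop Z (orthographic) or divide X and Y by Z (perspective)."""
    x, y, z = point
    if perspective:
        if z == 0:
            raise ZeroDivisionError("point lies in the plane of the camera")
        return (x / z, y / z)
    return (x, y)
```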

The usual approach is to maintain the original points unrotated and make a
rotated copy of them for every frame, instead of overwriting them with a
rotated version every frame, so that numerical errors don't accumulate and you
can get away with single-precision floating-point. Also, conventionally,
positive Z coordinates are in front of the camera and negative Z coordinates
are behind it.

If the above isn't sufficiently clear, there's some code I wrote to generate
an ASCII-art animation of a perspective-projected point cloud (the corners of
a cube) at [http://lists.canonical.org/pipermail/kragen-
hacks/2012-April...](http://lists.canonical.org/pipermail/kragen-
hacks/2012-April/000540.html). It's 15 lines of code, and the only library
facilities it depends on are Python's functions to sleep for a fraction of a
second, write to stdout, and round to integer.

EXTRAS:

DISTANCE: For things that aren't points, you might be interested in how far
away they are from the camera, too, like to scale them or figure out which
ones are in front. That's the Z-coordinate after you rotate into camera space.

TRANSFORM COMPOSITION: If you want to rotate around two axes, it's probably
better to multiply the two rotation matrices together, then multiply each
point by the resulting transformation matrix, rather than doing two matrix
multiplies for each point. You can also scale camera space to screen
coordinates this way.
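
A sketch of that composition idea (sign conventions for the Y rotation vary
between texts; this uses one common choice, with column vectors):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, p):
    """Multiply a 3-D point (column vector) by a 3x3 matrix."""
    return tuple(sum(M[i][k] * p[k] for k in range(3)) for i in range(3))

# Compose once, then do a single multiply per point:
M = matmul(rot_x(0.3), rot_y(0.7))
```

Applying `M` to each point gives the same result as applying the two rotations
in sequence, for the cost of one matrix-vector multiply instead of two.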

TRANSLATION: If you want to move the camera, you probably want to translate
your points so the camera is at the origin before rotating them. If you
represent your transformations as 4x4 matrices, with a possibly implicit
fourth element in each point vector that is 1, you can represent translation
in your transformation matrices too.
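
A sketch of the 4x4 homogeneous form, with the implicit fourth element filled
in:

```python
def translation(tx, ty, tz):
    """4x4 matrix that moves points by (tx, ty, tz)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply4(M, point):
    """Apply a 4x4 transform to a 3-D point, treating it as (x, y, z, 1)."""
    v = (*point, 1)
    out = [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])
```

So putting a camera at (cx, cy, cz) amounts to transforming every point with
`translation(-cx, -cy, -cz)` before rotating.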

MULTIPLE SEPARATELY MOVING OBJECTS: A point cloud is a single rigid object.
But whether you're drawing point clouds or something more complicated, it's
often interesting to be able to move multiple objects separately. The usual
way is to go from two coordinate systems, camera and world, to N: camera,
world, and one for each object. Each object has a transformation matrix that
maps its object space into world space. You move the object by changing its
transformation matrix.

POLYGONS: If you're drawing polygons, straight lines are still straight lines
when you rotate them, and in either perspective or orthographic projections,
so you can just rotate and project the corners of the polygons into your
canvas space, and then connect them with 2-D straight lines (or fill the
resulting 2-D triangle).

FLAT SHADING: The color resulting from ordinary illumination ("diffuse
reflection") is the underlying color of the polygon, multiplied by the cosine
of the angle between the normal (perpendicular) to the surface and the
direction of illumination; it's easiest to compute that cosine by taking a
dot-product between two unit vectors, and to compute the normal by normalizing
a cross-product between two of the sides. If you have more than one lighting
source, add together the colors generated by each lighting source. You
probably want to treat negative cosines as zero, or you'll get negative
lighting when faces are illuminated from behind.
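
A sketch of that diffuse calculation, with RGB components as 0-1 floats (all
the helper names here are made up for illustration):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0] / n, v[1] / n, v[2] / n)

def face_normal(p0, p1, p2):
    """Normal of a triangle: cross-product of two of its sides."""
    return cross(sub(p1, p0), sub(p2, p0))

def diffuse(base_color, normal, light_dir):
    """Lambertian term: color * cos(angle), negative cosines clamped to 0."""
    k = max(0.0, dot(normalize(normal), normalize(light_dir)))
    return tuple(c * k for c in base_color)
```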

BACKFACE REMOVAL: If you're drawing a single convex object made of polygons,
you can do correct hidden surface removal just by not drawing polygons whose
normal points away from the camera (has a positive Z component). This is a
useful optimization even if your object is more complicated, because it halves
the load on the heavier-weight algorithms below.

HIDDEN SURFACE REMOVAL: If your polygons don't intersect, or only intersect at
their edges, you can use the "painter's algorithm" to get correctly displayed
hidden surfaces by just drawing them in order from the furthest to the
closest; if they do intersect, you can either cut them up so they don't
intersect any more, or you can use a "Z buffer" which tells you which object
is closest to the camera at each pixel --- as you draw your things, you check
the Z buffer to see what's the currently closest Z coordinate at each pixel
you're drawing, and if the relevant point on that object has a lower Z
coordinate, you update that pixel in both the Z buffer and the canvas.
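
A minimal sketch of the Z-buffer test, assuming (as above) that a smaller Z
means closer to the camera:

```python
INF = float("inf")

def make_buffers(width, height, background=(0, 0, 0)):
    """A depth buffer initialized to 'infinitely far' plus a blank canvas."""
    zbuf = [[INF] * width for _ in range(height)]
    canvas = [[background] * width for _ in range(height)]
    return zbuf, canvas

def plot(zbuf, canvas, x, y, z, color):
    """Draw only if this point is nearer than whatever is already there."""
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        canvas[y][x] = color
```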

SMOOTH SHADING: You can get apparently smooth surfaces out of quite rough
polygon grids by storing a separate surface normal at each vertex, and then
instead of coloring the whole polygon a single flat color, interpolate. You
can either compute the colors at the corners of the polygons and interpolate
the colors at each point you draw (Gouraud shading) or you can interpolate the
normals and redo the lighting calculation for each point (Phong shading),
which gives you dramatically better results if you have specular highlights.

SPECULAR HIGHLIGHTS: The diffuse-illumination calculation explained in "FLAT
SHADING" above is sufficient for things that aren't shiny at all. For things
that are somewhat shiny, you want "specular highlights", and the usual way to
do those is to do the lighting calculation a second time, but instead of
directly using the cosine of the angle between the light source direction and
the surface normal, you take that cosine to some power (called the "shininess"
or "Phong exponent") first. The 5th power is pretty shiny.

FOG: Faraway things fade exponentially. That is, you take the density of the
fog (a fraction slightly less than 1) to the power of the Z coordinate of the
point on the object, and multiply that by the color of the object.
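
As a sketch, with the fog density as a fraction just under 1:

```python
def apply_fog(color, z, density=0.97):
    """Exponential fog: attenuate color by density ** z."""
    k = density ** max(0.0, z)   # don't brighten things behind the camera
    return tuple(c * k for c in color)
```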

TEXTURE MAPPING: If you want your surfaces not to be a single solid color, you
can use a raster image (called a "texture") to map colors onto the surface.
You just figure out where you are on the surface (by doing a matrix multiply
from your surface point into "texture space") and figure out which texture
pixel ("texel") you're at, or which ones you should interpolate between. (You
can also use some other function to generate the color, rather than having an
explicitly stored texture. The important thing is that it maps a 3-D point in
object space to a color.) This is the start of the whole universe of
"shaders", which represents a big part of current 3-D work. Another
application of shaders is bump mapping:

BUMP MAPPING: If you're doing Phong shading, you can get apparent texture (in
the usual sense: something you could feel if you could touch the object) on
your surfaces without having to transform more points by simply perturbing the
interpolated surface normals you're using to do your shading calculations.
It's helpful if you perturb them in a deterministic way so that the texture
moves with the surface.

~~~
danenania
Wow, thanks for posting this!

~~~
kragen
Sure, I hope it's helpful!

------
gerhardi
I wonder if there are clusters where distance between neighbouring stars are
much less than with Sun and Proxima Centauri. Would these locations provide
better opportunities for space faring civilizations to reach other star
systems?

~~~
nyrath
I'm no expert but I think the answer is "No", due to other problems.

Distances are closer in Globular Clusters. Alas, those are "metal poor", so
the only planets are gas giants like Jupiter.

Distances are closer in Open Galactic Clusters like the Pleiades.
Unfortunately those clusters tend to disperse. By the time a space faring
civilization has evolved, the stars are no longer close.

Distances are closer in the Galactic Core. Unfortunately that is a high
radiation environment due to Sagittarius A* (the supermassive black hole at
the center of the galaxy) and all the nebulae the hole is dragging in.

Short answer: places where the distance between stars is closer are unlikely
to have space faring civilizations.

Of course there is always Zeta Reticuli A and B.

------
Osmium
Eventually crashed Safari on my rMBP, but this seems to be a trend, so it's
possibly Safari's fault (which would explain why WebGL is not enabled by
default).

In any case, great visualisation. Would be a perfect use for a 3D monitor.

------
esusatyo
As cool as this is, if you have an iPad, check out Star Walk. It's really,
really cool, and if you hold your iPad up to the sky it shows you what the
sky would look like if there weren't any light pollution.

------
astrobiased
Awesome! The only issue is that the visualization makes it look like we are in
a cluster of stars. That is not correct. We're part of the diffuse field star
population in the Milky Way.

------
TeMPOraL
I love the way they made the stars: from up close, the Sun actually looks
like a pile of hot, glowing, boiling gas, and not like the cold orange sphere
you can see pretty much everywhere else.

------
mamu95
If you're interested in simulating the physics of the universe, check out
Universe Sandbox: <http://universesandbox.com/>

------
tnash
Was I the only one really hoping for "Reticulating Splines"?

~~~
TazeTSchnitzel
I see I'm not the only one here who was reminded of Spore when they saw
this...

------
swang
I am on a rMBP and it hard-locked Chrome after I navigated towards the Sun
and clicked the information button about it. Worked fine until then; really
cool.

------
tudorw
OMG, someone pinch me, nexus 10 was born for this :)

------
chucknelson
Wow, this is great. Reminds me of Mass Effect and its presentation of planets
and their descriptions. At least this stuff is real! :)

------
codyromano
Small typo: should be "farthest man-made object" instead of "furthest."
Otherwise, awesome job.

------
ronyeh
Beautiful. Someone needs to make a turn-based strategy game out of this
visualization.

------
xvolter
This is amazing. Now to figure out how to make this my desktop background...

------
suyash
That is very cool. How was the glow created using CSS3 for the Sun?

~~~
cleverjake
this is all canvas and webgl, no css

~~~
dtf
Not that I've actually checked the code yet, but the project page states
"CSS3D" as one of the technologies used.

<http://www.chromeexperiments.com/detail/100000-stars/>

------
hayksaakian
Aaaaaaaaaand chrome mobile fails once again. What's the point of chrome mobile
if it's never updated? The stock aosp browser has seen more updates than
Google's own browser. The only feature of this god damn browser is idiocy.
/rage

------
Jarihd
Wow, awesome visualization.

I always wondered how scientists determine the position of the Earth in our
galaxy and the center of the galaxy. Can somebody throw some light on this?

------
comex
This is what EVE Online's map ought to look like. :)

Works fine in Safari.

------
roundfounder
a. This is so beautiful and well done... what we've come to expect of Chrome
Experiments.

b. If you haven't already, toggle the spectral index... so sick.

------
TomGullen
That is incredible, wasn't expecting much.

------
tluyben2
What was this optimized with? Closure?

------
pardner
It's a small world after all.

------
suyash
Slow as hell while loading up, but in the end worth it.

------
tydok
"There's a lot of space out there to get lost in."

------
sonabinu
Thumbs up!!!!

------
Roybatty
I have my first child coming in a couple of weeks, and this is the type of
stuff that I'm going to love showing him or her.

Awesome

------
TheAmazingIdiot
Who made that music? And is there more where that came from?

~~~
davex
Music by Sam Hulick, whose work you may have heard in the video game, Mass
Effect. The track is titled “In a Strange Land” and is used with his
permission.

~~~
chewxy
Ya. This would have been perfect: <http://www.youtube.com/watch?v=BwJtYfF72Io>

------
jpxxx
Unfortunately, it looks wonky as hell on Chrome for Mac.

Edit: I'm sorry for my misleading and value-free comment. Allow me to clarify:
THIS EXPERIMENT LOOKS LIKE HOT BROKEN GARBAGE ON CHROME FOR MAC, A FACT WHICH
MAY BE OF INTEREST TO PERHAPS HALF OF THE HACKER NEWS READERSHIP WHO WILL MOST
LIKELY EXPERIENCE THE SAME VISUAL CORRUPTION AT VARIOUS VIEW LEVELS. SOME MAY
VIEW THIS AS UNFORTUNATE, AS THE INTENDED EXPERIENCE IS A WORTHY ONE THAT
EXERCISES A NUMBER OF CUTTING EDGE WEB PRESENTATION TECHNIQUES THAT ARE LIKELY
TO GAIN SIGNIFICANT TRACTION IN THE NEAR FUTURE.

~~~
daeken
I'm guessing for your edit that you feel you're being downvoted for a negative
comment about a cool demo. However, you're being downvoted for not giving any
actionable information, or even a screenshot; you're not making it possible
for the author to make the project better, you're just making a valueless
comment (and making it worse with your edit).

~~~
jpxxx
TinyGrab is down again, I have a headache, etc. And I felt this thread is a
bit more interesting with a hand-crafted ALL-CAPS punching bag. :)

Ever at your service! -JPXXX

