100,000 stars (chromeexperiments.com)
746 points by davex on Nov 14, 2012 | 109 comments



WOW, that is amazing. I'm always blown away by stuff like this, where you can actually get a sense of how small we all are and how distant even the closest neighbor stars are.

I just close my eyes for a minute and think (or try to) about what it would be like for the people who are finally able to reach, say, Vega (I know it's not the closest). Sure, this is not a big deal in sci-fi, but in reality it's pretty mind-blowing. This is 100% why I seriously want to live for a few hundred years: to have the opportunity to see the first time we actually go to the nearest star.

In the meantime, I guess this will have to suffice.

I also love this image; it's not interactive like the demo, but still mind-blowing: http://en.wikipedia.org/wiki/File:Earths_Location_in_the_Uni...



Pannable version (the original file was too big for me, and I remembered this tool from another HN post):

http://hugepic.io/7c2aaf4a6


>WOW, that is amazing. I'm always blown away by stuff like this, where you can actually get a sense of how small we all are and how distant even the closest neighbor stars are.

Same.

If you don't already own a pair, I'd recommend getting a basic pair of binoculars and doing some backyard astronomy. You'd be amazed how much more you can see with even a basic 10x50 pair, even in thoroughly light-polluted skies.

Also SpaceRip [1] collects hundreds of interesting, easily digestible and pretty timely videos.

1: http://www.youtube.com/user/SpaceRip


> I'd recommend getting a basic pair of binoculars and doing some backyard astronomy

If you've got the spare cash, get image stabilized ones. I could clearly see the moons around Jupiter with my Canon 12x36 IS binos the other night despite my hand tremors. The real party trick is handing them to a friend and telling them to look at the moon. Blows them away every time - to most people it's just a yellowish glowing thing in the sky, rather than a scarred rocky globe.


Be sure to check out the middle "star" in Orion's sword.


Yes, this is great


Obligatory buzz-kill, Charles Stross on why we're unlikely to leave the solar system: http://news.ycombinator.com/item?id=2639456


He's wrong.

For starters, there's this assertion: "The far-fetched version is to use black holes as power sources [1] as this is, as far as I've read anyway, the only remotely viable method of providing propulsion without reaction mass to speak of and reaction mass is the death of any form of interstellar propulsion."

Not true. We can definitely build something with today's technology that allows for propulsion without reaction mass: light-sails pushed by lasers[1]. I could address some of his other points, but it's not necessary. If you crunch the numbers, it should be doable, within about 150 years, to mount a mission to another star.

[1] See Humble's canonical text on space propulsion design: http://www.amazon.com/Propulsion-Analysis-Design-Ronald-Humb...


So you're suggesting two almost completely undeveloped technologies (light sails and extremely high-powered lasers in space) and telling us that interstellar propulsion is a solved problem? That's a tad optimistic :)

Btw, if you'd read 'Accelerando' you'd know that Charlie is fully aware of the possibilities offered by laser powered light sails.


Stross and cletus both say we will never leave the solar system and travel to another star. Never!

I'm trying to show that notion is dead wrong. I didn't say it was a solved problem, if you mean that the engineering is basically done and we're launching something tomorrow. But it is certainly very reasonable to think that within 150 years we will have both the technological and economic means to do a mission to another star. We know how we'd do it, and it uses real physics, engineering, materials, etc. that already exist.

The main reason Stross is wrong though, is he says we'd need free energy to do it. He's right that the limiting factor is the cost of energy, but he's overlooking what you get for 150 years of economic growth. World GDP has grown at an average annual rate of 3.5% over the last 100 years. World GDP is currently at about $80 trillion. If it grows at 3.5% for another 150 years, it will be $14,000 trillion. You can buy a lot of energy with that. And a lot of spaceships.
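
The compounding is easy to sanity-check; here's a rough sketch in Python, using the figures above:

    # Assumes the figures above: $80T starting GDP, 3.5% annual growth.
    gdp = 80e12                 # world GDP in dollars
    for _ in range(150):        # compound 150 years of growth
        gdp *= 1.035
    print(f"${gdp / 1e12:,.0f} trillion")   # ~= $13,900 trillion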


50 years ago it was very reasonable to think that we'd have worked out how to make a fusion reactor too.

Just because something uses real physics and engineering materials that already exist doesn't mean that it's actually achievable in the real world, and getting to the stars requires that we solve three or four very hard problems.

Impossible? Of course not. Much, much harder than the 'whee, we're all going to space!' crowd likes to think? I'm afraid so.

NB: GDP is not a good proxy for available energy, for hopefully obvious reasons: a high GDP doesn't necessarily mean that you have a lot of energy available; it may mean that you use the energy you have very effectively.


>50 years ago it was very reasonable to think that we'd have worked out how to make a fusion reactor too.

Most of the reason we haven't is economic; modern fission reactor designs are far more efficient than even the most optimistic estimates of 50 years ago.


Most of the reason we haven't is because it turns out to be really, really hard to contain a high energy plasma long enough to get useful levels of fusion out of it. We still haven't effectively solved the 'how do we efficiently extract the generated energy' problem either.

Perhaps these problems will be solved in the future, but so far we've spent billions and billions of $ with (relatively) little to show for it in terms of output.


  World GDP has grown at an average annual rate of 3.5% over the last 100 years.

That's not actually very relevant. According to DeLong[1], the rate of growth has been anything but constant over the last 100 years, nor has it always been positive. You may well expect the industrial revolution to carry us forward for another 150 years, but I'm not sure I do.

[1] http://img641.imageshack.us/img641/1946/gwp.png

edit: corrected the 2000-2011 segment for inflation with CPI data (not int$).


How powerful do the lasers need to be? How are you going to power it for 150 years?


The mission would leave in 150 years. It would last for 20-30 years one way.

The laser (a bank of lasers, more likely) would need to consume energy at a rate approximately equal to the world's entire current electricity production, basically continuously for the duration of the mission.

That seems absurd, of course. But again, you shouldn't underestimate compound growth applied to GDP (see the comment above). In 150 years' time, the cost of the mission could easily be the same percentage of US GDP as the Apollo program was in the 1960s.

Of course if the assumption of continuous future economic growth doesn't hold up, this isn't going to happen. But in that case the world will have much bigger problems... this will be something of a non-issue.


You're in "magic wand" territory, and in agreement with Stross' arguments.


Same here, but I doubt we'll ever be able to do that without some serious genetic enhancements, to the point where we might not be recognizable as humans. An even better solution: transferring our minds to machines, and then being able to live thousands of years or more, which would make it far more feasible for us to travel there.

But it seems a lot more likely that we'll just send some strong-AI robots to beam the data back to us in the meantime.

EDIT: I suggest checking out Space Engine. It seems to cover more than one galaxy, and being able to change the viewing/moving speed feels like being in a Star Trek starship. Going through space like that feels a bit unsettling:

http://en.spaceengine.org


You could also take a look at http://htwins.net/scale2/. It's also a very nice demonstration of the scale of the universe from the smallest to the largest objects.


Yeah, makes us feel small.


Life on Earth has existed for about 26% of the age of the Universe. Feel any bigger?

(3.6 billion years / 13.75 billion years)


Indeed, that was a bit of an epiphany for me recently when I read a site that listed major astronomical and terrestrial events in parallel. Despite knowing many of the numbers, somehow my gut feeling had always been that since Earth is so tiny in size compared to the universe at large, the same must be true of the timescales.


Then if you imagine how long the Internet has existed compared to the universe, all of this hullabaloo on HN becomes even funnier.


Hmmm... but what about intelligent life? Or technologically sophisticated (relatively speaking) intelligent life. Still feeling small here.


This is without a doubt one of the coolest and most beautiful WebGL experiments I have seen in the last few years; it actually struck a real chord with me: the music, lighting, effects, the zoom, and the sheer beauty of it.

For those people unlucky enough to not be able to load this app (it took me quite a while) here is a particularly fantastic image I took (without asking or any right to, of course) - http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_co... - I apologize for any slow load times.


I am guessing you would like this too - http://www.youtube.com/watch?v=17jymDn0W6U

Being a video it is not interactive, but definitely does strike something in me. It's almost the Total Perspective Vortex.


I hadn't seen this video until now; thank you.

This really puts into perspective the brevity of human life and how little we have achieved so far, from leaving the primordial soup to firing Glee across our television networks to entertain teenage girls, and travelling to and from the moon.

We are irrelevantly small and unimportant, and yet we have already done the hardest thing we know of for sure: of the 8.5 million species on the planet, we are the only constructively intelligent one. There are 400 known satellites in our solar system. Assume that every solar system in the Milky Way, with its roughly 300 billion stars, has a similar number of planetoids on which some minor form of life could have grown. I'm going to take a complete guess that only 1 in 10,000 of those contain a similar amount of life, which could be light years off, or could be spot on, or could even be far, far less than the actual number; we simply don't know yet.

The maths is breathtakingly overwhelming:

(1 / ((8500000 * 400) * 300000000000)) * 10000 ≈ 0.0000000000000000098

Our significance in the Milky Way is about 0.0000000000000000098.

We account for only about 0.00000000000000098% of potential life in this galaxy.

But we survived. We made it this far. From here the only way is up, or down, or left or under (depending on the location of the camera when we finally make it far enough off this rock to consider it interstellar travel.)

Disclaimer: this maths was about as good as I could manage at 5.30am and is the product of a Google search of the accumulated human knowledge of the last few thousand years.


>> We are irrelevantly small and unimportant

That we are relatively small is undeniable. That we are unimportant is an emotional judgement that I think is unwarranted. Size != importance.

Importance is a value judgement made in a mind. My left thumb is more important to me than Alpha Centauri is. As far as I know, Alpha Centauri has no opinion on the matter.

Are we important to one another? To yet-unknown sentient creatures? To God?

The answers to those questions will probably not depend on whether we are 1 meter or 1 parsec tall.


I agree this is a fantastic demonstration of technology and art.


Warning: Scientific accuracy is not guaranteed. Please do not use this visualization for interstellar navigation.


If you are interested in scientifically accurate models in webgl, you may like this asteroid simulation, which models objects in our solar system in an astronomically accurate manner: http://asterank.com/3d/


That's a really cool simulator! What's the license for it? I can't find it on your GitHub account.


I used to think, if kidnapped by interstellar travelers, I might be able to find my way home. Now I know I never could, not in a million years.


Sure you could, if you have the Pioneer map with you. Some people have it tattooed just in case. http://en.wikipedia.org/wiki/Pioneer_plaque


I agree with you. I'm calling total bullshit on the scientific part.

They could at least try to put Alpha Cassiopeiae and Beta Cassiopeiae in the same "general" direction from the Sun. It would fool more people.

Distances seem to be correct but the coordinates aren't true at all.

Regardless, it looks beautiful. But it would be more beautiful if you could actually see the stars' true locations. Without that, it is just a game UI demo.


May I suggest taking a look at http://en.spaceengine.org/

Edit: and my apologies if I have wasted your afternoon.


Also the cross-platform Celestia: http://shatters.net/celestia/


Just keep your towel on you and you'll be alright.


This is REALLY COOL!

Could someone explain how this is built or give an overview of how it works? In the 'about' page http://www.chromeexperiments.com/detail/100000-stars/ it says WebGL and CSS3D, but I'm wondering how they fit together and what does what.

Is there a better way to view the source than just 'view source' in chrome?

I know a number of programming languages and I'd like to learn more about how this project works. (I saw the link to the book on graphics programming in other comments below, http://www.arcsynthesis.org/gltut/index.html, but how does one "take apart and study" this project?) Kudos to anyone who can point me in the right direction. Thanks!


https://github.com/mrdoob/three.js/ - That should get you going. Download that and take a look at all the examples and how they are made. Here is a good book on WebGL too - http://www.amazon.com/WebGL-Up-Running-Tony-Parisi/dp/144932...


Thanks Krsunny! Really helps to have some advice on what's current and where to start.


Here is a great YouTube video showing the sizes of objects, starting with the moon and working its way up to the largest known star. (Our Sun is a rounding error at that point!) https://www.youtube.com/watch?v=HEheh1BH34Q - if YouTube refuses to play it because of the audio, try https://www.youtube.com/watch?v=fKTu6B4Rgek

Here is another one showing an animation of asteroids discovered in our solar system from 1980 to 2011. It starts off pretty tame, and by the end gets scary! https://www.youtube.com/watch?v=ONUSP23cmAE


Amusingly for a "Chrome Experiment" it refuses to run on my chrome (Chrome 23 on Fedora 17 with Ivy Bridge graphics).

Firefox on the same machine works flawlessly.


For those of you interested in the topic, the best publicly available database I've found is http://www.astronexus.com/node/34, and The Book for astronomical computation is Astronomical Algorithms, by Jean Meeus, http://www.willbell.com/math/mc1.HTM

(Shameless plug: I used both to implement the Common Lisp sky rendering engine for my startup, http://greaterskies.com, which makes pretty personalized posters out of thousands of stars.)


It's what the galaxy map in Frontier: Elite II looked like in my imagination.


This is exactly what I was thinking


Absolutely amazing. By the way, it also works on Firefox. (Though performance isn't... stellar)


FF 16.0.2 on Windows 7 64-bit here... it works smoothly, but that's no surprise; this machine is very powerful.

The page is so beautiful! Until now I've never felt the need to say this: wish I could upvote it more :-)


This is incredible, and with all the positional, magnitude, and spectral information publicly available, anyone could do it.

I would really love to see a search box that would allow me to jump to a specific star.


MBP Chrome here; I see maybe the 50 closest stars. When I zoom out, the stars in the galaxy are just transparent squares. It doesn't look like http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_co... or http://www.chromeexperiments.com/detail/100000-stars/img/ahZ...


Same, on a Mac Mini.


67:2 [He] who created death and life to test you [as to] which of you is best in deed - and He is the Exalted in Might, the Forgiving -

67:3 [And] who created seven heavens in layers. You do not see in the creation of the Most Merciful any inconsistency. So return [your] vision [to the sky]; do you see any breaks?

67:4 Then return [your] vision twice again. [Your] vision will return to you humbled while it is fatigued.


On my MBP scrolling is backwards (swipe up moves closer), and mouse motion controls the camera offset angle. It's easy to write the code this way, but awkward and surprising during use. It's better to pretend that the hand manipulates the model, with a swipe up pushing it away and a swipe right turning the model counterclockwise around the vertical axis of the view.


Swiping up to move closer feels natural to me. Perhaps it is the other platforms that are wrong?


It would take a proper user interaction study to find out, but I don't think that I am wrong. Imagine a photograph on a table that you want to see better (make larger). Your natural motion is to touch and pull, which is dragging down.


Hold control on your MBP and scroll up. It zooms in.

command and '+' zooms text.

Scrolling down generally scrolls to the end of a page. Scrolling down here scrolls to the end of the galaxy.

Hold out your hand and make the "unpinch" gesture, which enlarges photos in iOS and Android. Which direction do your scrolling fingers move?


> Hold control on your MBP and scroll up. It zooms in.

This is a rare interaction, which is arguably backwards.

> command and '+' zooms text. Scrolling down generally scrolls to the end of a page. Scrolling down here scrolls to the end of the galaxy.

Those are all logical, not physical mappings. They are not relevant here.

> Hold out your hand and make the "unpinch" gesture, which enlarges photos in iOS and Android. Which direction do your scrolling fingers move?

Left and right? You're seriously stretching to make your argument. People pull things towards them to get a better view. They push them away to see the bigger picture. Done.


You can't be serious. Put your hand to your monitor, and with your thumb and forefinger, unpinch. Your thumb and finger move left and right?

So my counterexamples are irrelevant, and you unpinch in a way nobody else does. Sounds like you just get off on being contrarian.


I agree that scrolling should be inverted. Scrolling down moves the zoom slider on the right side of the page up, which seems counterintuitive to me.


I agree that the nav interface is a bit wonky, but it is a good start!


Nothing interesting to add, except that it was an enjoyable way to start the day, thanks. Always good to have something remind you how insignificant you are! The music was great too.

edit: loaded it in chrome instead, even better (should have been obvious given it's located on chromeexperiments.com)


I need to learn 3D graphics for some of my scientific projects. Specifically, I want to rotate clouds of points just as shown here. I have no idea where to start with doing this, however. Can somebody point me to a good tutorial or other resource in 3D graphics?


If you just want to rotate point clouds, it might be hard to do better than taking apart and studying a project that does that (like this one).

http://www.arcsynthesis.org/gltut/index.html will give you a good general background, though.


I don't know of a good one, so I'll write one here. This focuses on how things work, rather than how to use existing APIs, because I've basically never used GL. What's below is short and doesn't have much math, but it should be enough to allow someone who knows linear algebra and 2-D graphics to both understand and rederive most of 3D graphics.

To rotate a point cloud, you multiply each point by a rotation matrix to get the rotated point. A rotation matrix that rotates around the X-axis looks like

    [[1  0  0]
     [0  c  s]
     [0 -s  c]]
where s and c are the sin and cos of the angle you want to rotate. Then you can do an orthographic projection by just dropping the Z coordinate, leaving just X and Y coordinates (which you may need to scale to your screen), or a perspective projection by dividing X and Y by Z. (Be wary of division by zero.)

The usual approach is to maintain the original points unrotated and make a rotated copy of them for every frame, instead of overwriting them with a rotated version every frame, so that numerical errors don't accumulate and you can get away with single-precision floating-point. Also, conventionally, positive Z coordinates are in front of the camera and negative Z coordinates are behind it.

If the above isn't sufficiently clear, there's some code I wrote to generate an ASCII-art animation of a perspective-projected point cloud (the corners of a cube) at http://lists.canonical.org/pipermail/kragen-hacks/2012-April.... It's 15 lines of code, and the only library functions it depends on are Python's for sleeping a fraction of a second, writing to stdout, and rounding to integer.
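
For a self-contained version, here's a minimal sketch in Python of the rotate-then-project loop described above (the cloud, axis, and angle are arbitrary; it uses the column-vector convention, so the matrix is the transpose of the one written above):

    import math

    def rotate_x(p, angle):
        # Rotate point p = (x, y, z) around the X-axis.
        s, c = math.sin(angle), math.cos(angle)
        x, y, z = p
        return (x, y * c - z * s, y * s + z * c)

    def project(p, viewer_distance=4.0):
        # Perspective projection: divide X and Y by Z. The offset keeps
        # the cloud in front of the camera (and Z away from zero).
        x, y, z = p
        z += viewer_distance
        return (x / z, y / z)

    # Keep the original points unrotated; make a rotated copy per frame.
    cloud = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    frame = [project(rotate_x(p, math.pi / 6)) for p in cloud]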

EXTRAS:

DISTANCE: For things that aren't points, you might be interested in how far away they are from the camera, too, like to scale them or figure out which ones are in front. That's the Z-coordinate after you rotate into camera space.

TRANSFORM COMPOSITION: If you want to rotate around two axes, it's probably better to multiply the two rotation matrices together, then multiply each point by the resulting transformation matrix, rather than doing two matrix multiplies for each point. You can also scale camera space to screen coordinates this way.

TRANSLATION: If you want to move the camera, you probably want to translate your points so the camera is at the origin before rotating them. If you represent your transformations as 4x4 matrices, with a possibly implicit fourth element in each point vector that is 1, you can represent translation in your transformation matrices too.
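
A quick sketch of composition and translation with 4x4 homogeneous matrices (NumPy assumed; the camera placement numbers are arbitrary):

    import numpy as np

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = (tx, ty, tz)
        return m

    def rotation_x(angle):
        s, c = np.sin(angle), np.cos(angle)
        m = np.eye(4)
        m[1:3, 1:3] = [[c, -s], [s, c]]
        return m

    # Compose once, then apply the single result to every point,
    # instead of doing two matrix multiplies per point.
    view = rotation_x(0.5) @ translation(-10.0, 0.0, -40.0)

    p = np.array([1.0, 2.0, 3.0, 1.0])   # the trailing 1 enables translation
    camera_space = view @ p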

MULTIPLE SEPARATELY MOVING OBJECTS: A point cloud is a single rigid object. But whether you're drawing point clouds or something more complicated, it's often interesting to be able to move multiple objects separately. The usual way is to go from two coordinate systems, camera and world, to N: camera, world, and one for each object. Each object has a transformation matrix that maps its object space into world space. You move the object by changing its transformation matrix.

POLYGONS: If you're drawing polygons, straight lines are still straight lines when you rotate them, and in either perspective or orthographic projections, so you can just rotate and project the corners of the polygons into your canvas space, and then connect them with 2-D straight lines (or fill the resulting 2-D triangle).

FLAT SHADING: The color resulting from ordinary illumination ("diffuse reflection") is the underlying color of the polygon, multiplied by the cosine of the angle between the normal (perpendicular) to the surface and the direction of illumination; it's easiest to compute that cosine by taking a dot-product between two unit vectors, and to compute the normal by normalizing a cross-product between two of the sides. If you have more than one lighting source, add together the colors generated by each lighting source. You probably want to treat negative cosines as zero, or you'll get negative lighting when faces are illuminated from behind.
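
In code, the diffuse calculation is just a cross product, a dot product, and a clamp; a sketch (NumPy assumed; the triangle and light direction are arbitrary examples):

    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    def flat_shade(tri, base_color, light_dir):
        # Normal = normalized cross product of two sides.
        normal = unit(np.cross(tri[1] - tri[0], tri[2] - tri[0]))
        # Clamp negative cosines to zero so back-lit faces go dark.
        lambert = max(0.0, float(np.dot(normal, light_dir)))
        return base_color * lambert

    tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    color = flat_shade(tri, np.array([1.0, 0.5, 0.2]),
                       unit(np.array([0.3, 0.3, 1.0])))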

BACKFACE REMOVAL: if you're drawing a single convex object made of polygons, you can do correct hidden surface removal just by not drawing polygons whose normal points away from the camera (has a positive Z component). This is a useful optimization even if your object is more complicated, because it halves the load on the heavier-weight algorithms below.

HIDDEN SURFACE REMOVAL: If your polygons don't intersect, or only intersect at their edges, you can use the "painter's algorithm" to get correctly displayed hidden surfaces by just drawing them in order from the furthest to the closest; if they do intersect, you can either cut them up so they don't intersect any more, or you can use a "Z buffer" which tells you which object is closest to the camera at each pixel --- as you draw your things, you check the Z buffer to see what's the currently closest Z coordinate at each pixel you're drawing, and if the relevant point on that object has a lower Z coordinate, you update that pixel in both the Z buffer and the canvas.
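
The painter's-algorithm half of that is just a sort; a sketch, where draw_polygon and project stand in for whatever 2-D routines you already have:

    def painters_sort(polygons):
        # Sort by mean camera-space Z, most distant first (positive Z
        # is in front of the camera, per the convention above).
        return sorted(polygons,
                      key=lambda poly: sum(v[2] for v in poly) / len(poly),
                      reverse=True)

    # for poly in painters_sort(polygons):
    #     draw_polygon([project(v) for v in poly])   # hypothetical helpers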

SMOOTH SHADING: you can get apparently smooth surfaces out of quite rough polygon grids by storing a separate surface normal at each vertex, and then instead of coloring the whole polygon a single flat color, interpolate. You can either compute the colors at the corners of the polygons and interpolate the colors at each point you draw (Gouraud shading) or you can interpolate the normals and redo the lighting calculation for each point (Phong shading), which gives you dramatically better results if you have specular highlights.

SPECULAR HIGHLIGHTS: The diffuse-illumination calculation explained in "FLAT SHADING" above is sufficient for things that aren't shiny at all. For things that are somewhat shiny, you want "specular highlights", and the usual way to do those is to do the lighting calculation a second time, but instead of directly using the cosine of the angle between the light source direction and the surface normal, you take that cosine to some power (called the "shininess" or "Phong exponent") first. The 5th power is pretty shiny.
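
As a sketch, the specular term described here is one line on top of the diffuse cosine:

    def specular(cosine, shininess=5.0):
        # Clamp, then raise to the Phong exponent; 5 is "pretty shiny".
        return max(0.0, cosine) ** shininess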

FOG: Faraway things fade exponentially. That is, you take the density of the fog (a fraction slightly less than 1) to the power of the Z coordinate of the point on the object, and multiply that by the color of the object.
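
Again as a one-line sketch:

    def fog(color, z, density=0.97):
        # Exponential falloff: density (just under 1) to the power of Z.
        return color * (density ** z)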

TEXTURE MAPPING: If you want your surfaces not to be a single solid color, you can use a raster image (called a "texture") to map colors onto the surface. You just figure out where you are on the surface (by doing a matrix multiply from your surface point into "texture space") and figure out which texture pixel ("texel") you're at, or which ones you should interpolate between. (You can also use some other function to generate the color, rather than having an explicitly stored texture. The important thing is that it maps a 3-D point in object space to a color.) This is the start of the whole universe of "shaders", which represents a big part of current 3-D work. Another application of shaders is bump mapping:

BUMP MAPPING: If you're doing Phong shading, you can get apparent texture (in the usual sense: something you could feel if you could touch the object) on your surfaces without having to transform more points by simply perturbing the interpolated surface normals you're using to do your shading calculations. It's helpful if you perturb them in a deterministic way so that the texture moves with the surface.


Wow, thanks for posting this!


Sure, I hope it's helpful!


I wonder if there are clusters where the distances between neighbouring stars are much smaller than the distance between the Sun and Proxima Centauri. Would such locations provide better opportunities for space-faring civilizations to reach other star systems?


I'm no expert but I think the answer is "No", due to other problems.

Distances are closer in Globular Clusters. Alas, those are "metal poor", so the only planets are gas giants like Jupiter.

Distances are closer in Open Galactic Clusters like the Pleiades. Unfortunately, those clusters tend to disperse. By the time a space-faring civilization has evolved, the stars are no longer close.

Distances are closer in the Galactic Core. Unfortunately, that is a high-radiation environment due to Sagittarius A* (the supermassive black hole at the center of the galaxy) and all the nebulae the hole is dragging in.

Short answer: places where the distances between stars are smaller are unlikely to have space-faring civilizations.

Of course there is always Zeta Reticuli A and B.


Eventually crashed Safari on my rMBP, but this seems to be a trend, so possibly Safari's fault (and would explain why WebGL is not enabled by default).

In any case, great visualisation. Would be a perfect use for a 3D monitor.


As cool as this is, if you have an iPad check out Star Walk. It's really, really cool, and if you hold your iPad up to the sky it shows you what the sky would look like without any light pollution.


Awesome! The only issue is that the visualization makes it look like we are in a cluster of stars. That is not correct. We're part of the diffuse field star population in the Milky Way.


I love the way they made the stars: up close, a sun actually looks like a mass of hot, glowing, boiling gas, and not like the cold orange sphere you can see pretty much everywhere else.


If you're interested in simulating the physics of the universe, check out Universe Sandbox: http://universesandbox.com/


Was I the only one really hoping for "Reticulating Splines"?


I see I'm not the only one here who was reminded of Spore when they saw this...


I am on rMBP and it hardlocked Chrome after I navigated towards the sun and clicked the information button about the sun. Worked fine until then, really cool.


OMG, someone pinch me, nexus 10 was born for this :)


Wow, this is great. Reminds me of Mass Effect and its presentation of planets and their descriptions. At least this stuff is real! :)


Small typo: should be "farthest man-made object" instead of "furthest." Otherwise, awesome job.


Beautiful. Someone needs to make a turn-based strategy game out of this visualization.


This is amazing. Now to figure out how to make this my desktop background...


That is very cool. How was the glow for the Sun created using CSS3?


http://www.w3.org/TR/SVG/filters.html

Not sure if that's what you're looking for, but it should contain the info on what is being used. I took a quick glance, and it looked like it was using at minimum the following three SVG filters:

‘feGaussianBlur’ ‘feOffset’ ‘feMerge’


This is all canvas and WebGL, no CSS.


Not that I've actually checked the code yet, but the project page states "CSS3D" as one of the technologies used.

http://www.chromeexperiments.com/detail/100000-stars/


Aaaaaaaaaand Chrome mobile fails once again. What's the point of Chrome mobile if it's never updated? The stock AOSP browser has seen more updates than Google's own browser. The only feature of this goddamn browser is idiocy. /rage


Wow, awesome visualization.

I always wondered how scientists determine the position of the Earth in our galaxy, and the center of the galaxy. Can somebody throw some light on this?


This is what EVE Online's map ought to look like. :)

Works fine in Safari.


a. This is so beautiful and well done... what we've come to expect of Chrome Experiments.

b. If you haven't already, toggle the spectral index... so sick.


That is incredible, wasn't expecting much.


What was this optimized with? Closure?


It's a small world after all.


Slow as hell while loading up, but worth it in the end.


"There's a lot of space out there to get lost in."


Thumbs up!!!!


I have my first child coming in a couple of weeks, and this is the type of stuff that I'm going to love showing him or her.

Awesome


Who made that music? And is there more where that came from?


Music by Sam Hulick, whose work you may have heard in the video game, Mass Effect. The track is titled “In a Strange Land” and is used with his permission.


Ya. This would have been perfect: http://www.youtube.com/watch?v=BwJtYfF72Io


Unfortunately, it looks wonky as hell on Chrome for Mac.

Edit: I'm sorry for my misleading and value-free comment. Allow me to clarify: THIS EXPERIMENT LOOKS LIKE HOT BROKEN GARBAGE ON CHROME FOR MAC, A FACT WHICH MAY BE OF INTEREST TO PERHAPS HALF OF THE HACKER NEWS READERSHIP WHO WILL MOST LIKELY EXPERIENCE THE SAME VISUAL CORRUPTION AT VARIOUS VIEW LEVELS. SOME MAY VIEW THIS AS UNFORTUNATE, AS THE INTENDED EXPERIENCE IS A WORTHY ONE THAT EXERCISES A NUMBER OF CUTTING EDGE WEB PRESENTATION TECHNIQUES THAT ARE LIKELY TO GAIN SIGNIFICANT TRACTION IN THE NEAR FUTURE.


I'm guessing for your edit that you feel you're being downvoted for a negative comment about a cool demo. However, you're being downvoted for not giving any actionable information, or even a screenshot; you're not making it possible for the author to make the project better, you're just making a valueless comment (and making it worse with your edit).


TinyGrab is down again, I have a headache, etc. And I felt this thread was a bit more interesting with a hand-crafted ALL-CAPS punching bag. :)

Ever at your service! -JPXXX


I'm going to go out on a limb and assume you have an Intel or Nvidia chipset in your Mac. See http://code.google.com/p/chromium/issues/detail?id=159275 (which references http://code.google.com/p/chromium/issues/detail?id=137303, but we can't see it).

There is a security issue in the OSX drivers for certain chipsets, and AA is broken. Google Chrome has taken the step to disable AA on these systems to prevent an exploit of the OS.


Sounds right, good sleuthing, thank you. This machine has an Intel 3000.

The stars appear properly placed, but instead of a point there's an effect-ruining translucent square around each one. The single-pixel distance lines also look wrong during transitions.

Since this mystery ends in a Radar ticket about GPU drivers, I won't hold my breath for a fix. As far as Apple is concerned, it ain't broke until Final Cut is broke.


Looks pretty good on Chrome on a retina MBP (15"). Also looks fine on firefox. You could at least say which model of mac you're on.


MacBook Pro 8,1 (13-inch, Late 2011); 2.4 GHz Intel Core i5; Intel HD Graphics 3000, 512 MB

As surmised below, this appears to be an OpenGL driver issue with Nvidia and Intel GPUs.


It doesn't work at all for me in Chrome on Win XP.


It works perfectly for me in Chrome in Win XP. Probably a hardware glitch.



