Animating quasicrystals in parallel Haskell (mainisusuallyafunction.blogspot.com)
73 points by dons 2102 days ago | 13 comments

My WebGL version: http://zokier.net/stuff/webgl-quasicrystal/

Almost literal translation from the Haskell version.

edit: And in amazing technicolor: http://zokier.net/stuff/webgl-quasicrystal/color.html
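
For context, the quasicrystal pattern being translated is just a sum of cosine plane waves at evenly rotated angles, averaged per pixel. A rough, hypothetical C++ sketch of that per-pixel math (the wave count, scale, and phase values here are illustrative, not taken from either implementation):

```cpp
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Quasicrystal intensity at (x, y): sum of `waves` cosine plane waves,
// each rotated by k*pi/waves, rescaled from [-waves, waves] into [0, 1].
double quasicrystal(double x, double y, int waves, double phase) {
    double s = 0.0;
    for (int k = 0; k < waves; ++k) {
        double angle = k * PI / waves;
        // Project (x, y) onto the wave's direction, then take a cosine.
        s += std::cos(std::cos(angle) * x + std::sin(angle) * y + phase);
    }
    return (s + waves) / (2.0 * waves);
}

// Fill one width*height frame of grey values in [0, 1].
std::vector<double> render(int width, int height, int waves, double phase) {
    std::vector<double> frame(width * height);
    const double scale = 0.3; // pixel -> wave coordinates (arbitrary choice)
    for (int j = 0; j < height; ++j)
        for (int i = 0; i < width; ++i)
            frame[j * width + i] =
                quasicrystal(i * scale, j * scale, waves, phase);
    return frame;
}
```

Animating `phase` over time is what produces the shimmering effect; an odd wave count (e.g. 7) gives the non-repeating quasicrystal symmetry.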

Was playing with this a bit at lunchtime...

Modifying your main routine to do 3x3 super-sampling reduces flashing/aliasing quite a bit. Check it out.

    void main(void) {
        float s = 0.0;
        for (float xx = 0.0; xx < 1.0; xx += 1.0/3.0) {
            for (float yy = 0.0; yy < 1.0; yy += 1.0/3.0) {
                s += combine(vec2(uv.x*xpixels + xx, uv.y*ypixels + yy));
            }
        }
        gl_FragColor = vec4(vec3(s/9.0), 1.0);
    }
4x4 is better, but my puny Mac Mini GPU starts to hiccup at that many samples when rendering 1600x800.

Nice, updated the colored version with that.

Congratulations on making the first WebGL demo I've seen that didn't turn my macbook into a mobile space heater.

W00t! That is awesome.

Looking forward to someone porting this to WebGL. This animation could be realtime in your browser.

Done! Paste this [1] into Shadertoy. (And I de-lurked on HN after 3 years to do this; who knew.) Nice effect, but it pains me that a multicore CPU implementation can be SO SLOW. Modern PCs are fast, you know? Not just the GPU... oh well.

[1] https://gist.github.com/f448ba84e94c61ab5924

Thank you! While modern CPUs are definitely fast, they are not as fast as GPUs for code like this. Dynamic & realtime FTW.

<ramble> True, true! And I apologise for sounding whingey before; I don't mean to rag on you or the OP (I know nothing about what is good/idiomatic Haskell and how that relates to efficient Haskell). But it still feels damn slow: multiple seconds to make that image!

To put money where my (gut's?) mouth is: the dumb transliteration of my WebGL shader to C++, compiled by MSVC in release mode on my Win32 machine, takes 100ms to compute a frame at 800x600, on a single core, with precisely no tuning or effort.

With #pragma omp magic, equivalent in pain to the OP's point about almost-free parallelisation in GHC, I imagine that would drop to around 20ms on 8 cores. And if I used an SSE vector class, probably another 2x, but that could legitimately be disallowed as overly complex.
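
That almost-free OpenMP parallelisation amounts to a single pragma on the pixel loop. A minimal sketch, with a made-up stand-in for the real per-pixel work (names and constants here are hypothetical):

```cpp
#include <cmath>
#include <vector>

// Stand-in for the real per-pixel quasicrystal computation.
double compute_pixel(int i, int j, double phase) {
    return 0.5 + 0.5 * std::cos(0.3 * i + 0.3 * j + phase);
}

std::vector<double> render_frame(int width, int height, double phase) {
    std::vector<double> frame(width * height);
    // One pragma spreads the row loop across all cores; rows are
    // independent, so no synchronisation is needed. Without -fopenmp
    // (or /openmp on MSVC) the pragma is ignored and the loop runs serially.
    #pragma omp parallel for
    for (int j = 0; j < height; ++j)
        for (int i = 0; i < width; ++i)
            frame[j * width + i] = compute_pixel(i, j, phase);
    return frame;
}
```

Each iteration writes to a distinct slice of `frame`, which is what makes the loop embarrassingly parallel and the pragma safe.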

My point being: you're right, GPUs stomp all over CPUs for this kind of work! But my gut told me that this image should not take long for 'even' a CPU to produce; 10 or 20ms without effort, sub-millisecond with effort (bytes rather than floats, asm, etc.).

Maybe I'm just lamenting the abuse of our modern CPUs, which are fantastically fast machines, even for stuff that they are not designed to excel at, like this. </ramble>

An optimized Haskell version running in real time, by Ben Lippmeier, http://www.youtube.com/watch?v=v_0Yyl19fiI

"Embarrassingly parallel floating point operations" "for the win". The things that GPUs are better at than CPUs are very poorly modeled by the words "dynamic" or "realtime". They are happy to do long-term batch computations (and getting happier about it), and there are plenty of dynamic real-time things they are bad at, because they involve lots of branching.
