Planet Shadertoy (shadertoy.com)
420 points by ttsiodras on Feb 24, 2018 | 121 comments


It reminds me of two other recent demoscene productions, also shader-based:

Waillee, in 4kB: http://www.pouet.net/prod.php?which=71873

Fermi Paradox, in 64kB: http://www.pouet.net/prod.php?which=67113

Here's a presentation from one of the mercury guys explaining how to do that kind of thing: https://www.youtube.com/watch?v=s8nFqwOho-s

Also, elevated from RGBA should probably be mentioned since iq (https://www.shadertoy.com/user/iq) is the guy behind Shadertoy. (Also because it's pretty.) http://www.pouet.net/prod.php?which=52938

FYI I downloaded the Fermi Paradox intro from that website (the actual executable one), and Windows Defender quarantined it, claiming it contains a Win32/Tiggre!rfn trojan.

I don't know enough about this stuff to figure out if it's a false positive, but there you go.

Some of the strategies that demoscene authors use to shrink their executable size (think packing, self-modifying code, etc...) are also incredibly common in malware.

Some antivirus complain about any file compressed using kkrunchy (http://www.farbrausch.de/~fg/kkrunchy/). This is a known issue.

Some background here: https://conspiracy.hu/about/antivirus/

The only solution is to report the false positive to antivirus vendors, but it takes a lot of time.

Alternatively, stop using antivirus. The amount of stuff that gets flagged as false positives is incredible once you go outside of the applications written by big corporations and the like. It's almost as if they use a whitelist...

Also, seeing that some AV will alert on even the presence of an empty folder that happens to share the same name as actual malware[1] or a completely innocuous Hello World[2][3], it's hard to recommend any. That and the detection of cracks and keygens (which goes beyond "antimalware", IMHO) further strengthens my opposition against what is essentially censorware.

[1] https://news.ycombinator.com/item?id=2390907

[2] https://www.csoonline.com/article/3216765/security/heres-why...

[3] https://stackoverflow.com/questions/22926360/malwarebytes-gi...

Thank you for the explanation.

I’m not sure what I’m more impressed by: the shader itself, or the fact that it runs at 60fps on an iPhone 7.

- Westmere Xeon at 3.33GHz

- Radeon RX 580

- 1440p Resolution

- Linux 4.12 with Mesa 17.3.2

At first I thought the demo was entirely broken, or was designed as some stress test, because Chrome churned between 0.6 and 0.9 fps. It never cracked 1.0 (yes, never passed one-point-zero).

Then I opened Firefox, and it not only ran, it never dropped below 30 fps, mostly hovering between 40 and 50.

This will be Chrome falling back to its "3D software rasterizer" mode, since it tends to produce bad artifacts when run on those kinds of GPUs with Mesa.

- Threadripper 1900x

- Radeon Vega 64

- 4k resolution

- Linux 4.15.4 and Mesa 17.3.5 (Fedora 27)

Firefox locked to 60 fps (is there a way to disable vsync?); Chrome hovering somewhere between 9 and 16 fps.

30fps on my old xperia z3. Very impressed. I'm slowly starting to feel like web 3D on mobile is a thing and not just a happy accident when it works.

Meanwhile I'm on 4 fps on a laptop while pushing the laptop to thermal limits.

That's on Chrome. On FF with same laptop...butter-smooth 60fps with no heat.

Whatever this thing is it definitely doesn't like Chrome.

Check chrome://gpu; I bet it's not using all of the hardware features. Open chrome://flags, enable "Override software rendering list", and restart. Then thank me later.

Works beautifully in Chrome on my iPhone.

Note that Chrome on an iPhone is still just Safari but with different chrome (no pun intended).

And a solid 5 fps on Chrome on Linux on Haswell, heh.

If you have a nvidia GPU, then it may well be the fault of the latest drivers (390.25), which have been buggy, and it's especially notable under Chrome/Chromium (video/audio stuttering, problems with vsync, general slowness, high CPU usage, etc..).

Arch has a recent thread about it: https://bbs.archlinux.org/viewtopic.php?id=234241

I got solid 60FPS using Firefox Developer Edition on Arch, it would be interesting to see what the issue is for you (might be hardware, I'm running a Vega.)

Between 30 and 50 fps on Firefox 58, on Kubuntu, on a slow-ass Intel Celeron 3205U (1.50ghz) laptop.

Amazing. Just amazing.

Re: Vega — I think it's pretty clearly some sort of software issue — other people are having no trouble with mobile/embedded/iGPU hardware. This machine has an Nvidia GT 610 in it, which isn't high end gaming equipment. But it should be adequate for this demo.

You might wanna try activating the #ignore-gpu-blacklist in chrome://flags

But be aware that it can cause some collateral damage, like broken websites (if that happens and it bothers you, you can simply turn the flag off again). For example, with my Haswell Intel GPU I never had any problems, but with my Radeon it results in strange textures on some websites. Firefox has no problems with either of them.

Same here, 3 to 5 fps on Chrome and a consistent 60 fps on firefox 58. There's definitely something wrong with Chrome for linux. Try the Stripe docs page https://stripe.com/docs/api#error_handling , same issue. Buttery smooth scrolling on firefox but unusable on Chrome.

Anything under 10fps on a modern machine is almost certainly not benefitting from GPU acceleration.

Unless you try running a demanding AAA game in 8K resolution :D

60 FPS + negligible CPU on Chrome OS with Haswell though, interesting.

Yeah, clearly Haswell (+ my GPU) is capable of much more. Just not being utilized well.

I get ~60 fps on Broadwell Xeon, Xubuntu, Chromium, and a 1060 GPU.

I'm getting 30-60 FPS (depending on the scene) on my iPhone 5s, so even more impressive.

On an LG Power X it doesn't even start, beyond freezing Chrome.

Or a minimum of 26fps on iPhone 5S and mostly higher than 40fps.

Seriously? I'm getting 32-36 on the iPhone X. I don't think it's resolution dependent, so I'm pretty interested in why the performance is so much less.

Fragment shaders (which this is) are quite resolution dependent: they are programs which execute at each pixel, so the input size to the algorithm contained in the shader is essentially equal to the resolution (it's not strictly, since other data can be passed in which the program iterates over—but it is fairly uncommon for shaders to make heavy use of loops or recursion from what I've seen).
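The per-pixel execution model described above can be sketched on the CPU. This is only an illustration (not how a GPU actually schedules work, and `shade` is a made-up stand-in for a real GLSL fragment shader), but it shows why a fragment shader's cost scales directly with resolution:

```javascript
// One "shader" invocation per pixel: the same function, run at every (x, y).
function shade(x, y, width, height) {
  // A trivial per-pixel computation: distance from the screen center.
  const u = x / width, v = y / height;
  return Math.hypot(u - 0.5, v - 0.5);
}

// Render a full frame and count invocations.
function render(width, height) {
  const image = new Float32Array(width * height);
  let invocations = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      image[y * width + x] = shade(x, y, width, height);
      invocations++;
    }
  }
  return invocations;
}

const work1080 = render(1920, 1080); // 2,073,600 invocations
const work1440 = render(2560, 1440); // 3,686,400 invocations, ~1.78x the work
```

That ~1.78x jump in work lines up with the frame rates roughly halving that people in this thread report when going from 1080p to 1440p.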

I understand that, I'm quite acquainted with 3D programming and shaders. But I was under the impression that they were dependent on the resolution of the renderview, not the device.

It's definitely resolution dependent. On my Galaxy S7, the frame rate is 40-45 when the screen is set to 1080p, and drops to 20fps if I increase the resolution to 1440p.

Make sure Low Power Mode is off, if it's on it'll have a big effect on performance.

That was it. Didn't even remember that I have it on.

Why wouldn't it be resolution dependent unless you specifically knew otherwise?

I understood it to be dependent on the resolution of the renderview, not the device. Perhaps I'm incorrect, but the problem in this case was low power mode.

I’m getting 60 on X. Are you using safari?

Irrelevant, as on iOS Apple does not allow any rendering engine other than WebKit.

Not sure if it’s still the case, but there were limitations around JS optimisations in webviews outside Safari, so the question might actually be relevant.

This is a shader, there is absolutely no JS code involved. The code runs directly on the GPU

Right. My point was that, given that Apple have historically limited the optimisations available to webviews, it’s not obvious that WebGL would be exempt of similar limitations.

I’m getting 60 fps on my iPhone X.

I am getting 60 on an iPhone 6s

30-35fps on 6S here. Weird.

Edit: 60fps with Low Power Mode off.

I really admire the value this piece has as a composition. That is, the clouds, rings, and water terrain each work individually as cool demoscene shaders, but flipping them on/off depending on where the camera is gives it a great sense of scale.

ShaderToy is the new demoscene, in my opinion. It's cool to figure out what you can do running the exact same shader code for every fragment on the screen exactly once per frame.

It’s better than demoscene. Demoscene is super secretive, Shadertoy is open and lets me study the shit out of shaders.

The demoscene thinking is - I've been led to believe - that since most of the coding is done at the assembly level, anyone with the binary already has the code.

Apart from old-school productions, there's very little assembly in modern demoscene. 64kB intros use mostly C++ (or Rust). Even for 4kB intros, assembly is not used very often (and the interesting code is in the shaders anyway).

Who's writing demos in rust?

Logicoma's demos are almost entirely written in Rust, last I heard they were only using C++ for their audio synthesizer. Examples:

http://www.pouet.net/prod.php?which=69658 / http://www.pouet.net/prod.php?which=68375

Their coder Ferris answered some questions about the implementation here:


Surely the assembly code has comments, names, etc. The code might be closer to the binary in most programming languages, but it isn't the same.

It depends what year we are talking about. C64 demos were mostly assembly; Farbrausch was C++, IIRC.

It was started by demoscene people.

Did this cause noise in anyone else's audio output? I have a GTX1070, running on Arch w/ Chromium. I am using the onboard sound card, not even the GTX for that.

Yes, it's supposed to; those are "wind noises". Check the audio tab of the shader.

If you stick a "vol = 0.;" before the final line of the shader (the one that says "return Wind(time*.05)*vol;"), it should be silent.

or you simply click the mute button in the GUI ;-)
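For anyone wondering how a shader produces sound at all: Shadertoy's sound tab works like a fragment shader over time, evaluating a function once per audio sample and returning a stereo pair. Here's a rough CPU sketch of that model (the names and the noise formula below are made up for illustration, not the demo's actual Wind() code), which also shows why forcing the volume to zero silences it:

```javascript
// Cheap deterministic pseudo-noise in [-1, 1), standing in for the wind sound.
function noise(t) {
  const s = Math.sin(t * 12.9898) * 43758.5453;
  return 2 * (s - Math.floor(s)) - 1; // fractional part, remapped to [-1, 1)
}

// Shadertoy-style sound entry point: called once per sample, returns [L, R].
function mainSound(time, vol) {
  const sample = noise(time * 0.05) * vol;
  return [sample, sample];
}

// With vol = 0, every sample is exactly 0: silence.
```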

If you're looking for a high-res video of this awesome shader, look at the intro of my recent Shadertoy Best Of video on YouTube: https://www.youtube.com/watch?v=7BB8TkY4Aeg

The democratization of non-gaming 3D content, together with powerful and cheap consumer electronics, is a long-term driving force.

I am both biased and hopeful, since I am working on something very related, but my take is that the 3D part of the web will grow much faster than the non-3D part [1]. This growth will mainly be driven by non-gaming 3D content that does not need high-quality graphics to be relevant or entertaining.

VR and AR are better understood when we realize that they are just means to the end of consuming a wider variety of 3D content in a more engaging fashion.

The perceived lack of interest in VR is absolutely not about technical limitations (such as HMD weight, resolution, controllers, wires, or whatever)[2]; it is clearly about the layman having no 3D content as relevant to his daily life as, say, Facebook, YouTube, LinkedIn, Amazon, etc. The first big VR company will launch a product that is useful both in and out of VR, and the web is the platform most likely to host it.

[1] Most (say over 70%) of the web will stay 2D for a VERY long time, though.

[2] Most people who try VR HMDs enjoy the experience just fine (at the least). They just have no reason to try it again, and even less reason to pay money for it.

I was already sufficiently impressed... and then I realized that by default it's in "VERY_LOW_QUALITY" mode. Select "HIGH_QUALITY" using the #defines at the top of the file if your machine can handle it :)

For a second there I thought it was going to be a cut for cut remake of the Star Trek TNG intro.

It’s not working on my iPhone. But, I think this is what you are looking for


Shadertoy is the new WinAmp visualizations.

Wow, I miss those... Software from that era was just more fun... Things have gotten a little sterile since

I still run Winamp with Milkdrop on an old laptop. It’s still a lot of fun too. It really whips the llama’s ass.

Very impressive. Any chance the author would be interested in porting Oolite[1] to the web? I started to work on it[2] but gave up eventually[3].

1. http://www.oolite.org/

2. http://grondilu.github.io/oolite/

3. http://aegidian.org/bb/viewtopic.php?f=5&p=260273

The best way to do it would be with Emscripten. You just need to make sure it uses at most GLES3 GL calls, and also replace the sound/input/controller APIs with Emscripten/web ones.

Remember that WebGL is also a security nightmare. Shaders are fed to the GPU driver. The driver contains a compiler and compiles the shaders into the GPU specific ISA. The GPU that runs that code is a PCIe device with full DMA access. What could possibly go wrong?

(I'm aware that at least Chrome does some syntactic checks on the shader syntax)

GPUs can only access pinned memory that is intentionally mapped into their address space. Also, each context gets its own virtual address space on the GPU, isolated from other contexts.

There can still be issues, but it isn't quite as much of a free for all as the above comment sounds.

Fun fact, the Raspberry Pi's GPU can access everything. And to deal with that, the Mesa VC4 driver validates every shader to prevent reading other processes' stuff.

Isn't that because that SoC uses unified memory, where GPU and CPU memory are the same? This does not apply to most desktop computers or mobile phones...

Are there examples where this model has been abused to steal real data?

A long time ago, a bug in Firefox let a screenshot be taken of data outside the browser window:

"This issue allows attackers to capture screen shots of private or confidential information"


I suppose this is more about reading another texture than the one you are supposed to use. GPU memory is flat, and there is no concept of process memory up there.

In the early days of WebGL some browsers leaked information via uninitialized GPU memory, so an attacker could potentially read texture data left behind by other processes.

Today's WebGL implementations take care to wipe new memory allocations with zeros before letting the untrusted script do anything with them, though.

That's the same as starting a process and mallocing some memory: you get the garbage of the previous process... Because you have no idea who owned that memory and what it was used for, it would be hard to build something on top of it. That said, it's not a bad idea to zero things before you start using them.

Which is why WebGL is only a subset of native GL ES and will never be as good.

Truly amazing work, especially considering the whole thing isn't running off of a game engine. Everything was made from scratch.

That being said, I wish I had the patience to do something like this :)

It isn't completely from scratch; I recognize several noise functions. Of course, everyone uses them.

Marginally related: is there a native equivalent to Shadertoy (for Linux/Unix)? When debugging a shader it's quite handy, but the web interface is just too laggy for me.

I'm currently using my own simple test rig, but I'd like something more refined.

Bonzomatic is an equivalent of Shadertoy designed for live-coding competitions: https://github.com/Gargaj/Bonzomatic/blob/master/README.md

What is a good introductory book to start learning stuff like this?

Books might not be the best resource for Shadertoy-type stuff. Almost all of Shadertoy's 3D shaders use a technique called raymarching with signed distance functions. If you Google it, you should find good resources. Also, someone on Shadertoy made a very good tutorial using Shadertoy itself, which I think is kind of amazing... https://www.shadertoy.com/view/4dSfRc There are other tutorial shaders on Shadertoy, and I always try to make mine readable and heavily commented... https://www.shadertoy.com/user/otaviogood

wow, a shader tutorial and it's written in shaders!

Someone else already mentioned the book of shaders (which is the single best introductory resource IMO) - aside from that, I've found that reverse-engineering existing shaders and reapplying the learnings to my own shaders has been very helpful. With time, you start developing an eye for which shaders are just one or two steps beyond your understanding. You'll also start noticing that certain users (such as @Shane) on the site are really good about commenting their code, while others treat it like a game of code golf.

When studying existing shaders, it's best to focus on the well-documented shaders that are a few steps beyond your current capabilities, rather than the ones that consist of hundreds of lines of single letter variables and incomprehensible math. As far as open source repositories go, it doesn't get much better than shadertoy (in terms of pedagogy) since you can easily tweak values and comment out pieces of code right there in the browser if you're trying to figure out what a certain line of code does. The in-browser editor makes reverse-engineering very efficient and reduces friction as much as possible, which is really helpful for this kind of dense mathematical code.

Once you get used to the whole process of reverse-engineering shaders, you'll quickly come to see shadertoy as the perfect place to learn how different visual effects and graphics techniques are achieved. I don't know of anywhere else on the web (except perhaps codepen) where you can so immediately go from viewing a visual effect in a gallery to messing around with the code in nearly the exact environment that it was created in.

Edit: the best resource I've come across for learning raymarching (the 3d rendering technique used in the shader that is the subject of this submission) is this tutorial by Jamie Wong: http://jamie-wong.com/2016/07/15/ray-marching-signed-distanc...
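To make "raymarching with signed distance functions" concrete, here is a minimal CPU sketch of the sphere-tracing loop those tutorials teach. This is not the Planet Shadertoy source; the scene (a single sphere) and all names are made up for illustration:

```javascript
// Signed distance from point p to the scene: a sphere of radius 1 at (0,0,5).
// Negative inside the sphere, positive outside, zero on the surface.
function sceneSDF(p) {
  return Math.hypot(p[0], p[1], p[2] - 5) - 1;
}

// March from `origin` along unit vector `dir`. At each step the SDF tells us
// the distance to the nearest surface, so we can safely advance that far
// without overshooting. Returns the travel distance on a hit, or -1 on a miss.
function raymarch(origin, dir, maxSteps = 128, epsilon = 1e-4) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    const d = sceneSDF(p);
    if (d < epsilon) return t; // close enough: we hit the surface
    t += d;
    if (t > 100) break;        // ray escaped the scene
  }
  return -1;
}

// A ray fired straight down +z hits the sphere's near side at distance ~4.
const hit = raymarch([0, 0, 0], [0, 0, 1]);
```

A real fragment shader runs this loop once per pixel, deriving `dir` from the pixel coordinates and camera; the only difference here is that it's JavaScript instead of GLSL.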

One of the co-founders of Shadertoy is a legend in the field and has the most relevant resources for procedural generation and distance field raymarching: http://iquilezles.org/www/index.htm

May I suggest this free and open resource: https://thebookofshaders.com

The ShaderX series is probably a good start: http://www.realtimerendering.com/resources/shaderx/

What I find most incredible is that consumer-grade computer hardware has gotten so fast this kind of thing can be done in a web browser.

I'm honestly not sure if that's an attempt at sarcasm or whether you're really serious.

Isn't it just rendering two triangles with a fragment (pixel) shader executing purely on the GPU. I mean, why would that be any slower in a web browser than anywhere else? (Unless the shader compiler is pretty bad?)

Are pure WebGL fragment shader demos really significantly slower in a web browser? If so, why?

I'm not being sarcastic.

Viewed through the lens of someone who lived through PCs requiring programs to be written in x86 assembly, running as the only thing on the bare metal, just to achieve anything close to 60FPS for full-screen faux-3D at 320x200 in 256 colors... well, yes, it is absolutely incredible that a damn web browser can do this stuff in a tab, and it's entirely due to how fast processors (including GPUs) have become.

I'm aware the shader is being thrown at the GPU and that's the majority of the complexity, but the GPU is part of the incredible progress consumer hardware has made.

The browser being in the loop just furthers the impressiveness; there's a bunch of other software running on the computer while this is going on in a damn tab.

This[0] is a great book for people who do not know the history of gaming and the struggles devs faced in early PC game development, when you had no GPUs and very crappy graphics cards.

[0] https://www.amazon.com/gp/aw/d/B0768B3PWV/ref=mp_s_a_1_1?ie=...

Yes, coming from the days of 6502 8-bit assembly with 64k of memory total for the operating system, the video memory, and whatever was left over for a program to run on the Apple ][+, this IS amazing.

45-60 fps on my $120 Xiaomi Redmi phone. This is amazing work! I couldn't even dream of the amount of math in this code!

It doesn't render properly on my Samsung S8+, Chrome/Firefox. Weird!

Huh, this works in Safari but not in Safari Technology Preview. Weird.

:O 24fps on the all mighty Intel HD3000 :O

Can someone make a video and upload it to youtube please? While slowly scrolling through the shader code? I'm on a linux laptop :0

Have you tried firefox? I get between 25-50 fps on it and just 1 fps on Chrome on my fedora laptop.

Stable 60 FPS on high setting here with Firefox 58, Linux, open source drivers (mesa-17.3.5) and an AMD Radeon RX 460 (passive).

So it's neither a Linux nor an open-source-driver problem. Maybe your hardware is in fact not up to the job, or you need to install some updates ;-)

Btw. I also tested chrome and as long as I run it with default settings I get around 3-5fps, but when I activate #ignore-gpu-blacklist in chrome://flags it reaches 60 fps there too.

I am also on a linux laptop, with integrated graphics but working drivers ;-)

The code copied to a pastebin: https://pastebin.com/f4uFYMzy

Recorded webm with the shadertoy-integrated record tool: https://gfycat.com/CrazyMassiveBluewhale or https://webmshare.com/vPx9d (sorry for having the tab in the background while recording :P)

The author made a video of it: https://www.youtube.com/watch?v=VVrPhvfAXko

The shadertoy version is missing the mountain terrain scene present in this video.

Only when using the VERY_LOW_QUALITY preset.

There is a video posted in the comments.

But it worked fine on my Debian Stretch laptop with Intel integrated graphics. MED was pretty slow, but VERY_LOW and LOW detail levels were fine.

It's also raytraced/raymarched, right? Reducing the resolution where possible might help with performance. I'm not sure if there's an easy way to do that via Shadertoy, though.

Not clicking the full screen button is a great way to reduce the resolution :)

So am I - Chromium on Arch. i7 Kaby Lake. Out of the box wrt graphics, I have made no changes at all. #ignore-gpu-blacklist is still disabled. I suppose I could get the Nvidia GPU fired up but it isn't needed for this.


Yes! The intro of this video: https://www.youtube.com/watch?v=7BB8TkY4Aeg

I was able to view the demo with Chromium/Lubuntu on an integrated Intel graphics, if that helps!

It runs at 60 fps on my Arch + Chrome laptop.

That’s so incredibly impressive, and the frame rate is pretty stable too! I’d love to see a similarly detailed dive into a black hole with these tools.

They should apply at SpaceX.

They write pretty 3D graphics raytracing software; how does that make them good for SpaceX? Pixar or Nvidia, maybe.

SpaceX does make promotional 3d animations for their projects/concepts.

Here's an example:


I'm sure they outsource that.

I'm also sure they at least use vertex shaders :)

