KABOOM in 180 lines of bare C++ (github.com/ssloy)
854 points by haqreu on Jan 27, 2019 | hide | past | favorite | 63 comments



Add in some lookup-tables from experimental nuclear interaction probability tables and you've got yourself a Monte Carlo radiation transport system capable of simulating in precise geometric detail all sorts of systems. This was indeed one of the early uses of computers back in the 1950s.

For example: https://en.wikipedia.org/wiki/Monte_Carlo_N-Particle_Transpo...


Wow, this is very similar to the way I use procedural 3D textures and displacement maps to draw planets and things. So much so that I don't even see the explosion, just a growing planet (really, just tweak the mapped color gradient a bit).

Example:

https://www.friendlyskies.net/images/266.jpg

So much of 3D graphics is "hmm, this random thing I just made slightly resembles an $X". "OK, so let's say I just created a method for modeling $X." :-) Really fun stuff.


That's a common trick in PC 4k intros.

Despite the apparent variety, most 4k intros rely on a single effect, maybe two. There isn't enough space for more. The only things that change are the parameters.

Interestingly, the technique used here is signed distance field raymarching, a technique used by maybe 90% of all 4k intros today.

So basically, write the code in GLSL instead of C, add music, play a bit with the parameters, use a good exe packer like crinkler and you have a nice 4k intro.


For reference http://www.pouet.net/prodlist.php?order=release&type%5B%5D=4...

Besides artistic direction, the effects make the difference: materials, blur (bokeh), the SDF geometries (often just mere balls and cubes, but perhaps a twister, a nice water surface, or a fractal), just for starters. Just from my outside perspective.


That's really cool, in part because it's comforting to hear. :-) Sometimes I feel a bit guilty about how much variety can be had from abusing a single distance function + various texture maps, camera angles and focal settings, environment colors, etc.


The signed distance function also finds application in real physics simulations of fluids, where it goes under the name of the "level set method". Ron Fedkiw, who has worked a lot on it, is one of the very few computational physicists who also hold an Academy Award. His homepage is very interesting:

http://physbam.stanford.edu/~fedkiw/


Very interesting indeed!

Off-topic: May I suggest that you switch to using imgur instead of the image host you chose? The ads on imgur are much less intrusive.


Imgur doesn't work on the old iPod I was using while my phone charged. It never loads fully, but TBF I never upgrade iOS past 2 major versions. So iOS 7 and the first "plain" image host search result it is... :-)

Edit: Found the image link on my own site:

https://www.friendlyskies.net/images/266.jpg


It'd work if people simply posted the direct image link, so you don't have to load their massively bloated website. Then again, a direct link wouldn't support them through their monetization methods (ads), but if I'm already running an ad blocker..


While I argued for Imgur over the hosting site used by the parent of my comment with regard to the intrusiveness of ads, I should note something important about Imgur with regard to the bloat you mentioned.

The image subdomain servers of Imgur, i.imgur.com, look at the referer of your request and will conditionally redirect you to the web app.

Basically, direct links to images will only give you the image directly if the page you came from is on their whitelist.

Reddit is on said whitelist but I don’t think HN is.

And to further complicate the matter, keep in mind that browser caching might make it look to you as though a direct link you posted somewhere really is direct, but anyone who hasn't visited the image already will be redirected the first time they visit that direct link.

For example, here is a direct link to an image hosted on Imgur: https://i.imgur.com/6O265V5.jpg

For me it will look like this direct link serves the image only, even when I click my own link in this comment after I've posted it and am visiting it with HN as referer.

But if HN is not on their whitelist then you and everyone else clicking the link in my comment will be redirected to their web app, provided you didn’t happen to have the image in your cache already.

Edit: Yup, visited the link in this comment from another computer and am indeed redirected as I expected.

Edit 2: Am also redirected even on the device I posted from when following the link in my comment. So even browser caching didn’t stop that in this case.


> The image subdomain servers of Imgur, i.imgur.com, look at the referer of your request and will conditionally redirect you to the web app.

Fortunately, if you send no referer it seems you get the image directly. This is easy to do with a browser setting.


> This is easy to do with a browser setting.

On your desktop browser yes. On mobile devices not as easy.

And even on desktop you might not want to globally disable sending referer because other sites might break.

And even if you use a browser add-on to only block referers for i.imgur.com, most other people haven’t so everyone else is still being redirected.


> On your desktop browser yes. On mobile devices not as easy.

What's even worse, on mobile, imgur heavily downscales and compresses images. Which is fine for kitten photos, but completely destroys any of its utility for screenshots, as you can't read normal-sized text on the phone.


Which makes me think that it would be nice if someone set up an image hosting site that allows hotlinking for HN but not for others (in order for bandwidth requirements to be reasonable and for your server to not be overloaded).

But running an image hosting service that anyone may post to is a lot of work.

Firstly you have your run-of-the-mill DMCA takedown notices, both the legitimate and the bogus ones. So you need to deal with those. And if you are unfortunate with your choice of hosting provider or registrar, they might not forward DMCA takedown notices to you as they should, but instead just terminate service.

And DMCA takedown notices aren’t even the worst part. Sooner or later someone might post illegal photos to your server depicting sexual abuse and other atrocities, and you absolutely need to figure out the proper procedures for dealing with that.

On top of that you have trolls abusing whatever form of report functionality you create.

This leads to a kind of Catch-22. You need for reports to be legitimate in order to provide service to the honest part of your user base, but at the same time you neither can nor want to look at the worst kind of images that someone could post.

So you need systems that can automatically identify those sorts of images without human interaction. And it will certainly be very difficult to get such a system right so that it has no false negatives and a limited number of false positives (the latter meaning the system removes images that shouldn't be removed), because once again you neither can nor want to look at the images that it should be able to identify and remove.

Obviously, it’s not impossible — otherwise there wouldn’t be any image hosting sites in existence — but like I said, it’s a lot of work.

Additionally, even if you do get all of that right, the utility of an HN-specific image hosting site is very limited. For example, if the HN community were to adopt it, then suddenly the referer check that was there to make the service feasible in terms of bandwidth cost and server load would result in seemingly broken links for anyone on HN who wants to share any of the images outside of HN.


Ah, thank you for the informative reply! I admit that's clever of them, even though it's not to my benefit.


What's weird is, I'm pretty sure I originally posted the "direct" image link that PostImage provided, among the options given post-upload. When I tested the link from HN, I just saw the image on Safari's black background. But minutes later I tested the same link here at HN and saw the image buried in a page of ads.

I wonder if they have some kind of "hey, non-members get a limited number of directs" policy.


Understandable. Unfortunate but understandable.


Found a copy on my web server and linked above.


Thanks :)


As pretty as this is I recommend reading all of his tutorial. Lots of it is old stuff that most of you will have seen takes on before, but he's got some great angles and insights hidden in each post.

Start here: https://github.com/ssloy/tinyrenderer/wiki/Lesson-1:-Bresenh...


I was hoping that this was a clone of the Atari game.

https://en.wikipedia.org/wiki/Kaboom!_(video_game)


Me too!


Really well written. The other articles in the series on computer graphics are excellent too. The use of GitHub to show diffs of each step is quite effective.


> The use of GitHub to show diffs of each step is quite effective.

Little off-topic, but that's one of the reasons I like to use Magit on Emacs, you can see the diff of each commit interactively and intuitively.

[1]:https://magit.vc/


Git can show you the diff of any commit as well, though I guess maybe the interactivity you mentioned is important.

Anyhow, with just git.

Diff of most recent commit:

  git show HEAD
Diff of preceding commit:

  git show HEAD^
Diff of commit before that again

  git show HEAD^^
Of course you don’t want to type ^ 200 times. Fortunately you don’t have to. Diff of the commit three steps back from the most recent:

  git show HEAD~3
Or you can look at the log first

  git log
And find a specific commit and then use the first few characters of the id of that commit, e.g.

  git show af4c
And of course I would be remiss to not mention

  git diff
And

  git diff --cached
These two show you unstaged and staged changes before you commit. I use these commands all the time, so much so that they are two of the commands I have created very short two-letter aliases for in my .bashrc:

  alias di="git diff --cached"
  alias dp="git diff"


Alternatively, you can use `tig` which allows you to browse the commit log, diffs, and blames interactively. It's a terminal ui tool.


I’m a noob on a new team with git (well versed on TFS though) thank you for posting this.


This technique (distance field sphere tracing) was popularized in a big way by the community at https://www.shadertoy.com/


Inigo Quilez created that site, he could be said to have been the one who really popularised it all. He has a bunch of articles on his site about distance fields:

https://www.iquilezles.org/www/articles/distfunctions/distfu...

https://iquilezles.org/www/articles/raymarchingdf/raymarchin...

He also has videos on YouTube, some of which show the creation of a scene starting from the basics.


I ported this [1], as well as ssloy's previous [2] article that was featured recently on HN, to Go.

I wanted to say his is an excellent introduction to graphics. As a sysadmin, I had never drawn more than simple lines, and the whole thing seemed daunting. Now I see how fun it can be, and I'm waiting for next weekend to pick up some more!

[1] https://github.com/tpaschalis/go-tinykaboom

[2] https://github.com/tpaschalis/go-tinyraytracer


This is amazing, thanks for sharing! One particularity of fire is that it does not generate shadows, while smoke does (might be useful for more realistic effects):

https://physics.stackexchange.com/questions/372117/shadow-of...


I suppose this shows a bit of a sickness inside me, because I have a strong aversion to reading these articles out of pure distaste for C++, which is stupid because this article is great.

Still, it's hard for me to not want to create a series of blog posts under the title "All those cool graphics tutorials in <language I like better>".


Brings back fond memories from the demoscene days back in the 90's! Cheers!


Oh. I thought it was the Atari 2600 game.


Kaboom! was awesome... need paddle controllers to really appreciate it though.


Aw, here I thought it was an implementation of the 1980's Atari 2600 Activision game :/

https://en.wikipedia.org/wiki/Kaboom!_(video_game)


Reminds me of a patch to the MacOS WindowMgr that made an explosion when you closed a window. It didn't win MacHack in Ann Arbor, MI that year, but it was fun.


This is pretty cool. Reminds me of my old escapades on the demoscene (The Party!) in the late '90s.


Nice. Does Visual Studio support the level or version of OpenMP used in this code? (see the pragma)


Yes it does, however you'd need to modify CMakeLists.txt, as it is written for g++


SDFs are amazing.


I would like to learn the C and C++ languages. Who can help me, please?


Impressive effect.

I didn't realize openmp was so easy to use. It isn't realtime but you could bake up some cool effects with this.

AMD FX8320 3.5GHz

    $ time ./tinykaboom

    real    0m4.176s
    user    0m28.631s
    sys     0m0.012s


Running it on the GPU would probably get to realtime speeds.

This technique is used a lot in demoscene demos, which certainly do run in realtime.


I saw the OpenMP pragma and thought to myself "neat! should be fun to watch the cores work hard at this", went ahead and compiled and ran it, and smiled at the 400% CPU usage in top.

    $ time ./tinykaboom 
    ./tinykaboom  78.08s user 0.02s system 369% cpu 21.159 total
Then I wondered how it would fare if I were to port it to Go, went ahead and hastily ported it, and thought, "hmmm, this should run a bit slower than the C++ version", but surprisingly it ran more than twice as fast:

    $ go build ./tinykaboom.go
    $ time ./tinykaboom 
    ./tinykaboom  34.32s user 0.03s system 368% cpu 9.315 total
https://github.com/holygeek/tinykaboom/blob/master/tinykaboo...

Here's the corresponding perf report:

Go:

    Samples: 103K of event 'cycles:pp', Event count (approx.): 37252033995665
    Overhead  Command     Shared Object      Symbol
      32.17%  tinykaboom  tinykaboom         [.] math.sin
      28.80%  tinykaboom  tinykaboom         [.] main.hash
      11.81%  tinykaboom  tinykaboom         [.] main.rotate
       7.76%  tinykaboom  tinykaboom         [.] math.Min
       5.18%  tinykaboom  tinykaboom         [.] main.lerpFloat64
       4.25%  tinykaboom  tinykaboom         [.] main.noise
       2.59%  tinykaboom  tinykaboom         [.] runtime.mallocgc
       2.59%  tinykaboom  tinykaboom         [.] main.fractal_brownian_motion
       2.58%  tinykaboom  tinykaboom         [.] main.signed_distance
c++:

    Samples: 234K of event 'cycles:pp', Event count (approx.): 86721459552303
    Overhead  Command     Shared Object        Symbol
      67.93%  tinykaboom  libm-2.23.so         [.] __sin_avx
      30.80%  tinykaboom  tinykaboom           [.] _Z5noiseRK3vecILm3EfE
       1.27%  tinykaboom  libm-2.23.so         [.] __floorf_sse41
       0.00%  tinykaboom  tinykaboom           [.] _Z23fractal_brownian_motionRK3vecILm3EfE
       0.00%  tinykaboom  tinykaboom           [.] floorf@plt
If anyone can give suggestions on how to make the tinykaboom.cpp faster that would be neat!


There are a few potential improvements here: 1) Use a look up table for 'sin' rather than using 'std::sin'. 2) Tell the compiler what instruction sets to use; for example, tell GCC to use 'skylake' instructions (https://gcc.gnu.org/onlinedocs/gcc-6.2.0/gcc/x86-Options.htm...). 3) Many of the functions could be 'inline constexpr'. 4) Although 'ofs <<' is buffered, it can still be very slow. Create the output in memory and use a lower level function like 'fwrite' to write it to file. 5) Use 'std::thread' or 'std::async'. It makes the multi-threading more portable and clear.


What were your compilation flags?


I used the default one in CmakeLists.txt (-O3).

I ran the comparison again on another machine that I have and this time their performances are about the same:

c++:

    $ time ./tinykaboom
    ./tinykaboom  46.72s user 0.01s system 364% cpu 12.804 total
go:

    $ time ./tinykaboom     
    ./tinykaboom  42.50s user 0.07s system 350% cpu 12.161 total


i7-3770:

    real    0m6.800s
    user    0m6.695s
    sys     0m0.034s

2x e5-2667 v2:

    real    0m2.217s
    user    0m56.088s
    sys     0m0.016s

Seems like it's pretty inefficient with the dual CPU setup.


Weird result. I guess it makes sense that it could use twice as much CPU to finish in half the time but looking at the numbers doesn't feel intuitive.

I wonder how many shaders this would keep busy. There is probably a class of GPUs and above that this could work on rather well alongside an already large workload.


Impressive

I was prepared to criticize the line count if it had a bunch of dependencies, but 180 lines seems accurate, and I'll give a pass to the obligatory core C++ include statements

Way to go!


Modern GPUs are just too powerful to ignore.

Especially for graphics-related stuff.

Yet HN community seems to ignore GPUs.

When I recently published my hobby project that renders much more advanced procedurally-generated stuff, I only got a single upvote: https://news.ycombinator.com/item?id=18921046


You completely missed the point of why people like this.


The OP's 180 lines of C++ could be rewritten in a similar number of lines of HLSL or GLSL. The resulting code would render in realtime while also consuming less electricity.

Indeed, I don't see why people like this. In modern world, doing graphics on CPU is very inefficient.


You miss the point. The point is not doing it efficiently, but explaining the concepts without being distracted by getting it running on specific hardware and the like. Read the introduction to the overall series.


> The point is not doing it efficiently

Have you read the linked article? It says "I want to have a simple stuff applicable to video games."

> without being distracted by getting it running on specific hardware

Just target Windows and use Direct3D; 99% of PC game developers do just that. The last GPU that didn't support D3D feature level 11.0 was Intel Sandy Bridge from 2011. Everything newer than that supports 11.0, and unlike OpenGL with its extensions, the majority of features are mandatory. I've very rarely seen compatibility issues across GPUs in recent years, and when I did it was a driver bug.


The algorithm is applicable to video games. As you yourself point out it'd not be a lot of effort to rewrite. I suggest you read the very next paragraph, which ends:

> I do not pursue speed/optimization at all, my goal is to show the underlying principles.

...

> Just target Windows and use Direct3D, 99% of PC game developers do just that.

Misses the point of the series. From the introduction (linked at the top of the page):

> I do not want to show how to write applications for OpenGL. I want to show how OpenGL works. I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.

The exact same could be said for Direct3D. He gives his students a class to read/write TGA images and set pixels for that article, which should make it exceedingly clear that the point is to ensure the focus is the algorithm and no irrelevant details to teach the principles without people being sidelined by worrying about libraries, and differences between platforms and the like.

And in any case, I explained to you what the appeal of this to people here is. That you think it could be done differently does not change that the appeal to people is exactly that there are no dependencies like Direct3D or Windows or anything else (I don't have Windows anywhere, so for me that would have made it relatively uninteresting; as it would for a lot of other people here). I don't care about the performance; I care about the concepts.


> I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.

What he explains is almost irrelevant for efficiency. Other things are relevant: early Z, tiled rendering, and other aspects of GPU architecture: resource types, cache hierarchy, fixed-function pipeline stages, warps, and many others.

> I care about the concepts.

I've been programming C++ for a living since 2000, about half of that time on something relevant to 3D graphics, both games and CAD. You no longer need a deep understanding of the rasterizer. A vague understanding of what the hardware does, and how to control it, is already enough. Only people working in companies like nVidia or Chaos Group need that info, IMO.


Yes, irrelevant, because the article is simply describing how to combine a sphere+bumps+noise to look like an explosion.

Your project seems to be interesting in its own way, but I don't see why you'd juxtapose it with this tutorial, other than that they both involve pixels.


The project was just an example of HN bias against GPUs. At least for doing graphics on GPUs.

Try searching "GPU" on this site. 100% of the first page of results are about using GPGPU in the clouds. Do you think that matches what people buy GPUs for, or amount of code developers white for them?


I searched "GPU" and almost all of the top results were people doing weird/unusual things with the GPU: terminal emulator, Postgres query acceleration, stripped down HTML engine, APL compiler, etc. A search of "Direct3D" suggests HN doesn't have much interest in Direct3D, though.


> weird/unusual things with the GPU: terminal emulator, Postgres query acceleration, stripped down HTML engine, APL compiler

Exactly. People here mostly do general-purpose computations on them, even though the “G” stands for graphics.

Search for “Graphics”, and the majority of top results are for CPU-based stuff: pixel graphics in the terminal, Python, Skia. There are some results for GPU-based graphics, like the graphics studies for MGS and GTA, but they’re a minority.

I think graphics is what the majority of users use their hardware for, but it’s under-represented here.


Funny how you did not notice that the project is focused on the GPU :)



