Graveyard Keeper: How the graphics effects are made (gamasutra.com)
559 points by thrower123 3 months ago | 43 comments



Note on the usage of lookup tables: Color transitions are hard - their example here looks pretty good, but in general, tweening between two colors in a linear manner can produce unwanted "washing out" or dull in-between states. The problem is not just within the transition of individual colors, but in how the relative values of nearby colors affect our perception. For example, see this illusion in action: https://twitter.com/AkiyoshiKitaoka/status/10284735661933158...


Color transitions are hard, and that illusion is pretty trippy. Beyond that, my sense is another big problem most people run into when they try to light pixel art (or any hand-painted asset) is that the asset is almost always "pre-tonemapped" by the artist. By first inverse-tonemapping the asset before applying any lighting, and then re-tonemapping afterwards, you can get pretty good results (assuming you can work out your artist's intuitive tonemapping function). Here's a fun article which explores the idea a bit more: http://www.codersnotes.com/notes/untonemapping/
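Roughly, the pipeline looks like this (a minimal TypeScript sketch; ARTIST_GAMMA, untonemap, and retonemap are hypothetical names, and the constant is just a stand-in for whatever curve your artist is intuitively applying):

    // Hedged sketch: undo an assumed artist tonemap, do the lighting in linear
    // space, then re-apply the tonemap. ARTIST_GAMMA is a guess, not a known value.
    const ARTIST_GAMMA = 2.2;

    function untonemap(c: number): number {
      // map a "pre-tonemapped" channel value in [0, 1] back to linear light
      return Math.pow(c, ARTIST_GAMMA);
    }

    function retonemap(c: number): number {
      return Math.pow(c, 1 / ARTIST_GAMMA);
    }

    function lightPixel(channel: number, lightIntensity: number): number {
      const linear = untonemap(channel);
      const lit = linear * lightIntensity;   // lighting math happens in linear space
      return retonemap(Math.min(lit, 1));    // clamp and go back to display space
    }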


I think the cause of the problem in this article is not that textures require some "untonemapping", but that they are typically stored with sRGB gamma, and thus should be converted to linear for lighting and converted back to sRGB gamma in a final pass (either by explicitly using 2.2 gamma in the shaders, or via graphics API conversion, e.g. in GL by using the GL_SRGB8_ALPHA8 format and an sRGB framebuffer with GL_FRAMEBUFFER_SRGB enabled).
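For reference, the exact sRGB transfer functions look like this; a plain TypeScript sketch of the math that the GL_SRGB8_ALPHA8 / GL_FRAMEBUFFER_SRGB path does for you in hardware:

    // sRGB <-> linear conversion (the exact piecewise form; pow(c, 2.2) is the
    // common approximation mentioned above). Channel values are in [0, 1].
    function srgbToLinear(c: number): number {
      return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    function linearToSrgb(c: number): number {
      return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
    }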


That’s a really cool idea. Thanks for the article too, it provided for some excellent illumination. (Badum-tish. Sorry.)


Thanks, I will take a look :)


I find that linear interpolation in RGB color space produces lots of washed-out colors, but other color spaces like HSV or HCL don't have the same issue. I tend to use those color spaces when creating gradients for charts with D3. I don't know if any games make use of them.
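In D3 it's basically just a question of which interpolator you pick; a quick sketch using d3-interpolate:

    import { interpolateRgb, interpolateHcl } from "d3-interpolate";

    // Same endpoints, different interpolation space: the RGB path tends to pass
    // through muddy midpoints, while HCL keeps perceived lightness smoother.
    const rgbMid = interpolateRgb("steelblue", "orange")(0.5);
    const hclMid = interpolateHcl("steelblue", "orange")(0.5);
    console.log(rgbMid, hclMid);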


Agreed, here is one good comparison of color spaces: http://davidjohnstone.net/pages/lch-lab-colour-gradient-pick...

Things like CIELAB are also interesting in that they provide more perceptually uniform color spaces, but at a greater computational cost.


There's also a good discussion of colorspaces and perceptual linearity in "A Better Default Colormap for Matplotlib": https://bids.github.io/colormap/


Agreed. I found the YIQ color space for perceived luminance to be extremely useful for foreground/background color readability. http://blog.nitriq.com/BlackvsWhiteText.aspx
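The core of that trick is just the Y (luma) row of the YIQ conversion; something along these lines (the 128 threshold is a matter of taste):

    // Perceived brightness via the YIQ luma coefficients; pick black text on
    // bright backgrounds and white text on dark ones. r, g, b are in 0..255.
    function textColorFor(r: number, g: number, b: number): "black" | "white" {
      const yiqLuma = (r * 299 + g * 587 + b * 114) / 1000;
      return yiqLuma >= 128 ? "black" : "white";
    }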

There are other tricks you can use, like stroking/bordering the text with the opposite color, which provides contrast when white text sits on a light background or black text on a dark one. You can also have the text "glow" or use a drop shadow. But these don't work that great when your text is small. However, you pretty much have to use these tricks when you are putting text on top of a gradient.


HSV has been my color space of choice when making simulators for government and industry; that's been my technique since 2007. Lerping through colors that way always looks at least fine.


I faced what I think is a similar issue on Android: colors in RGB are gamma-encoded, which makes linear animations between 2 RGB values not linear at all.

There I solved it, once I realized what was going on, by converting the start and end colors to linear sRGB values and then interpolating between those two.
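Roughly, the idea was something like this (sketched in TypeScript rather than the actual Android code, and assuming a plain 2.2 gamma rather than the exact sRGB curve):

    // Interpolate two 8-bit sRGB channel values in linear light instead of
    // interpolating the gamma-encoded values directly.
    function lerpChannel(a: number, b: number, t: number): number {
      const toLinear = (c: number) => Math.pow(c / 255, 2.2);
      const toSrgb = (c: number) => Math.round(Math.pow(c, 1 / 2.2) * 255);
      return toSrgb(toLinear(a) + (toLinear(b) - toLinear(a)) * t);
    }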

Would this work here too?


Even worse, directly tweening between two colors in RGB space can produce other unwanted colors.

One has to switch to HSV for proper tweening.
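The one detail worth getting right when tweening in HSV is interpolating the hue along the shorter arc of the color wheel; a small sketch (hue in degrees, s and v in 0..1):

    // Tween in HSV, taking the short way around the hue circle so that, e.g.,
    // red (350 deg) to orange (20 deg) doesn't sweep through green and blue.
    function lerpHsv(a: [number, number, number], b: [number, number, number], t: number): [number, number, number] {
      let dh = b[0] - a[0];
      if (dh > 180) dh -= 360;
      if (dh < -180) dh += 360;
      const h = (a[0] + dh * t + 360) % 360;
      const s = a[1] + (b[1] - a[1]) * t;
      const v = a[2] + (b[2] - a[2]) * t;
      return [h, s, v];
    }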


I worked on a game with very similar dynamic lighting. I wrote a blog post that goes into more detail code-wise on how I did it if anyone is interested: http://www.mattgreer.org/articles/dynamic-lighting-and-shado...


Hi there! Just wanted to say thanks for your write-up. I recently implemented this approach (thanks to your notes) in the Godot engine for a game I'm working on. https://twitter.com/FlyingBastion/status/1054394571948417024


Oh nice, you were able to use my post? That's awesome. Your game is looking good, I just followed you guys.


At first I thought the approach to dynamic lighting was overdone ("they had to paint light sources manually 4 times?"), but then I remembered the world is not 3D, so there's no way to do it automatically without drawing defined boundaries among objects. Painting 4 directional light sources and blending them together is super smart.
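My guess at what the blending boils down to: weight each of the four pre-painted frames by how close its direction is to the actual light direction. A hypothetical sketch, not the game's actual code:

    // Weight the four hand-painted frames (lit from right, top, left, bottom)
    // by angular closeness to the real light direction, then blend the samples.
    function directionalWeights(lightAngle: number): number[] {
      const twoPi = 2 * Math.PI;
      const a0 = ((lightAngle % twoPi) + twoPi) % twoPi;           // normalize to [0, 2*PI)
      const frameAngles = [0, Math.PI / 2, Math.PI, (3 * Math.PI) / 2];
      return frameAngles.map(a => {
        let delta = Math.abs(a0 - a);
        if (delta > Math.PI) delta = twoPi - delta;                // shortest way around the circle
        return Math.max(0, 1 - delta / (Math.PI / 2));             // only the two nearest frames contribute
      });
    }
    // final lit color = sum over i of weights[i] * sample(frame[i], uv)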

I totally envy pixel artists, especially those from the earlier days of the art. What a different world.


I got my start as a pixel editor/animator using Deluxe Paint and Deluxe Animator for DOS at Virgin Games.

Hand anti-aliasing, selecting your limited palette, and working with the hardware and screen constraints was fun. But at the time (early 90s) we couldn't wait to have more colors, more resolution, more framerate, and more memory.

So, folks today get to choose their constraints, which is important to most art, and the results are lovely.


Just making sure you've already seen Mark Ferrari's GDC talk: https://www.youtube.com/watch?v=aMcJ1Jvtef0 :)


Another way to do it, which does work, is to make both the normal maps and the diffuse maps from an underlying 3D model, and then render them out to make them look "pixely". That's how Dead Cells does it: the "source" of the sprites is a 3D model which is then rendered out as pixel art.


I played Stardew Valley and Terraria a lot, but I do find pixel games hard on the eyes.

I have no idea why you would want to go from a 3D model back to a pixel look. Yooka-Laylee has just done this to make it N64-retro-esque but ... no thanks. If anything make it higher res, more beautiful.


The reason they want to do it is because they find it an appealing art-style, though obviously that's up to personal taste. I think the style really works for Dead Cells. You should check out the Gamasutra article on the graphics pipeline of Dead Cells if you're interested in how they did it: https://www.gamasutra.com/view/news/313026/Art_Design_Deep_D...


Always surprised by how well effects like this work. Wouldn't expect the wind shader to look as good as it does, for instance. Games (especially older games...) are full of so many elegant hacks!

One that I think about a lot is how in order to animate a 'rain' effect, you can actually just spawn randomly positioned 'impacts' on the ground and randomly positioned raindrops that disappear 'behind' the ground. You would think you might need to resolve each raindrop's collision and animate an 'impact' at the right point but it turns out our brain is pretty good at being fooled.
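A tiny sketch of that trick (the ground is treated as flat here for simplicity); note that no splash corresponds to any particular drop:

    // Each frame: spawn falling streaks that simply vanish at ground level, and
    // completely independent splash animations at random ground positions.
    type Particle = { x: number; y: number; kind: "streak" | "splash"; life: number };

    function rainTick(particles: Particle[], dropsPerFrame: number, screenW: number, groundY: number): void {
      for (let i = 0; i < dropsPerFrame; i++) {
        particles.push({ x: Math.random() * screenW, y: 0, kind: "streak", life: 1 });       // falls, then disappears "behind" the ground
        particles.push({ x: Math.random() * screenW, y: groundY, kind: "splash", life: 1 }); // unrelated impact ring
      }
    }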


The beginning of "A Link to the Past" went even simpler than that and just looped a transparent rain layer over everything: https://i.imgur.com/W4fdw19.gif

I think what helped sell it, though, was the rain and thunder sound effects.


This is really neat. I was amused by this, though:

>I tried to add a skew there, but it turned out to be unnecessary.

and then

>It should be noted here, by the way, that the sprite is highly distorted vertically (the shadow sprite original looks like a circle). That’s why its rotation looks like not just a simple rotation but also like a distortion.

…which, if I recall my affine transforms, would also be called a "skew". I think I get the point though. This let them construct the skew in a more intuitive way than trying to make it an explicit parameter.
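One way to see it: assuming the rotation happens in the sprite's local space and the heavy vertical squash is applied afterwards in screen space, the composite transform is

    S(1,k)\,R(\theta)
      = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}
        \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
      = \begin{pmatrix} \cos\theta & -\sin\theta \\ k\sin\theta & k\cos\theta \end{pmatrix}

and for k ≠ 1 its column vectors are no longer orthogonal (their dot product is (k² − 1)·sinθ·cosθ), i.e. a shear is folded into the transform even though it was never an explicit parameter.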


>The problem “how to find the closest 3 light sources and to calculate the distance and an angle” was solved with a script running in the Update() loop.
>
>Yes, it’s not the quickest way considering how much math is involved. If I programmed it today, I would use that modern Unity Jobs System. But when I did it there was no such thing so I had to optimize regular scripts we had.

I wish they'd let you in on a little more. Because my numb mind says this isn't something you should ever need to optimize. How many light sources do they have, dozens at most? And how many sprites? Dozens/hundreds? That is nothing... what is this heavy math? I figured all they'd need is a vector from light source to object, and the vector's magnitude.


Doing it the "naive" way would be: for every object that needs to be rendered like this, loop through all the lights and find the closest 3 of them. That would be O(n * m) where n is the number of lights and m is the number of objects. If you have hundreds of lights and thousands of objects, it's not insignificant, especially since you're doing it on the main thread.

I had to do this for a Unity game which had a similar "custom" light solution, and on lower-end devices it was a problem. It was easily solved using a simple spatial data structure, but doing it the naive way did present some issues.
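Something like the following is what I mean by a simple spatial structure (sketched in TypeScript rather than Unity C#; the cell size needs to be at least the light falloff radius for the 3x3 lookup to be safe):

    // Bucket lights into a uniform grid so each object only tests lights in its
    // own and neighbouring cells instead of every light in the scene.
    type Light = { x: number; y: number };

    class LightGrid {
      private cells = new Map<string, Light[]>();
      constructor(private cellSize: number) {}

      insert(light: Light): void {
        const key = `${Math.floor(light.x / this.cellSize)},${Math.floor(light.y / this.cellSize)}`;
        const bucket = this.cells.get(key);
        if (bucket) bucket.push(light);
        else this.cells.set(key, [light]);
      }

      // Gather candidates from the 3x3 block of cells around (x, y), then rank by
      // squared distance (no sqrt needed just to pick the closest three).
      closestThree(x: number, y: number): Light[] {
        const cx = Math.floor(x / this.cellSize);
        const cy = Math.floor(y / this.cellSize);
        const candidates: Light[] = [];
        for (let dx = -1; dx <= 1; dx++) {
          for (let dy = -1; dy <= 1; dy++) {
            candidates.push(...(this.cells.get(`${cx + dx},${cy + dy}`) ?? []));
          }
        }
        return candidates
          .map(l => ({ l, d2: (l.x - x) ** 2 + (l.y - y) ** 2 }))
          .sort((a, b) => a.d2 - b.d2)
          .slice(0, 3)
          .map(e => e.l);
      }
    }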


Agreed.

I've written a lighting system exactly like this one (it's a very common 2D lighting concept; basically any sprite-based game with dynamic lighting will use exactly this). The only difference was that I used 24 light-direction frames instead of 4, because I was pre-rendering 3D models to 2D sprites, so I didn't have to have an artist draw each frame but could just export any level of precision, limited not by the artist but by filesize/memory.

The calculations he's talking about are as simple as it gets and were never really a problem for me, even on low-end machines. (I was building a top-down space RPG; basically any of the 100 plasma discharges, engine exhausts, particles, etc. was a dynamic source of light.)


I would hope that they aren't doing that calculation for every sprite every frame. It strikes me that most of the sprites and probably most of the light sources are static, so most of the information could be precalculated and cached, which would leave just the moving sprites and lights. I would probably look at some kind of spatial partitioning and marking sections dirty to prune things further. If it turned out to be significant enough that it was hurting framerate, anyway.


Indeed. And you don't even need square roots for identifying the three closest, since squared distances are just as good for that. So it's, what, a few instances of very basic arithmetic and then three roots? I don't see the complication either. It would be nice to know which part of their process introduced the difficulty. Was it the update frequency?


I might even avoid the squares and use a manhattan distance metric instead. No multiplication at all, just absolute value and addition.


Finding an approximate root for these purposes can be really quite fast, as was demonstrated in Quake III. https://en.wikipedia.org/wiki/Fast_inverse_square_root
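For the curious, the Quake III trick translates to typed arrays fairly directly; a sketch (you'd use it as dist ≈ d2 * fastInvSqrt(d2)):

    // The classic bit-level approximation of 1/sqrt(x), with one Newton step.
    function fastInvSqrt(x: number): number {
      const buf = new ArrayBuffer(4);
      const f32 = new Float32Array(buf);
      const u32 = new Uint32Array(buf);
      f32[0] = x;
      u32[0] = 0x5f3759df - (u32[0] >>> 1);   // the "magic constant" initial guess
      const y = f32[0];
      return y * (1.5 - 0.5 * x * y * y);     // one Newton-Raphson iteration
    }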


This could _possibly_ be because Unity's scripting engine is Mono/C# which might not be optimal for doing large amounts of math in tight loops in this particular case.

100% supposition.


I mean, my point was that a few dozen light sources times a few hundred sprites isn't large amounts of math, unless they're doing something really fancy with each pair. Language speed definitely isn't key here. It's got to be something else.


Finding the nearest three light sources is the main bottleneck, and in scenarios with many lights, is highly dependent on having a broadphase algorithm to filter them out early. If they did it the naive way, the algorithm becomes O(number of sources * number of lit objects).

But that doesn't seem to be their concern here, either. Optimizing broadphase collision is a well-understood problem, and the scenario of finding the nearest is only slightly different from that.


In more recent versions it is actually IL2CPP for most targets, and with the addition of former Insomniac devs, their C# subset (HPC#) has been taking a greater role in the engine.


I started porting my 2D roguelike game to a 2D engine like this one and got so bogged down in lighting difficulties that I gave up and abandoned the project.

This is really impressive, and a succinct write up of what can take a very long time to implement successfully. 2D/3D hybrid engines are hard!


Gorgeous. Did you consider doing things in 3d with an isometric camera?


Another recent game, 'Brigador', has also earned much praise for its lighting work, and as a game in general.


That game got a very interesting review in Japanese which I was compelled to translate:

https://gamedaisukinahito.blog.fc2.com/blog-entry-954.html

https://pbs.twimg.com/media/Dja-bHqU0AEYJYe.jpg:large


Neat. Now I understand why modern "2D" pixel art games still sometimes need some GPU and CPU power.


Images have been consistently failing to load on this article for the last few hours.


Same here; `wget -HkKp` worked, though it took about a minute to actually download everything (and the CSS broke).


What an interesting read. Thanks for finding and sharing.



