Things like CIELAB are also interesting in that they provide more perceptually uniform color spaces, but at a greater computational cost.
There are other tricks you can use, like stroking/bordering the text with the opposite color: it provides contrast when white text sits on a light background or black text on a dark one. You can also make the text "glow" or use a drop shadow. But these don't work that well when your text is small. However, you have to use these tricks when you're putting text on top of a gradient.
I solved it there, once I realized what was going on, by converting the start and end colors to linear sRGB values and then interpolating between the two.
Would this work here too?
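For what it's worth, the linear-light interpolation described above might look like this (a sketch, assuming channel values are 0..1 floats; the transfer-function constants are the standard sRGB ones):

```python
def srgb_to_linear(c):
    """Map one gamma-encoded sRGB channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transform: linear light back to gamma-encoded sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp_srgb_linear(a, b, t):
    """Interpolate two sRGB triples in linear light, then re-encode."""
    return tuple(
        linear_to_srgb(srgb_to_linear(x) + (srgb_to_linear(y) - srgb_to_linear(x)) * t)
        for x, y in zip(a, b)
    )

# Midpoint of a black->white gradient: re-encoded, half the *light*
# is about 0.735, noticeably brighter than the naive 0.5.
mid = lerp_srgb_linear((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5)
```

The point is that blending happens on light intensities, not on the gamma-encoded bytes, which is what kills the muddy band in the middle of a gradient.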
One has to switch to HSV for proper tweening.
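An HSV tween, for comparison, could be sketched like this with the stdlib `colorsys` module (the hue wrap-around is the part that's easy to get wrong):

```python
import colorsys

def lerp_hsv(rgb_a, rgb_b, t):
    """Interpolate two RGB triples (0..1 floats) through HSV,
    taking the short way around the hue circle."""
    ha, sa, va = colorsys.rgb_to_hsv(*rgb_a)
    hb, sb, vb = colorsys.rgb_to_hsv(*rgb_b)
    dh = hb - ha
    if dh > 0.5:        # hue is 0..1 in colorsys; wrap the short way
        dh -= 1.0
    elif dh < -0.5:
        dh += 1.0
    h = (ha + dh * t) % 1.0
    return colorsys.hsv_to_rgb(h,
                               sa + (sb - sa) * t,
                               va + (vb - va) * t)
```

Halfway between pure red and pure green this passes through yellow rather than the dark olive you get from a naive RGB lerp; whether that's "proper" compared to the linear-sRGB approach above is exactly the debate in this thread.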
I totally envy pixel artists, especially those from the earlier days of the art. What a different world.
Hand anti-aliasing, selecting your limited palette, and working with the hardware and screen constraints was fun. But at the time (the early '90s), we couldn't wait to have more colors, more resolution, more framerate, and more memory.
So, folks today get to choose their constraints, which is important to most art, and the results are lovely.
I have no idea why you would want to go from a 3D model back to a pixel look. Yooka-Laylee has just done this to make it N64-retro-esque, but... no thanks. If anything, make it higher-res and more beautiful.
One that I think about a lot: to animate a 'rain' effect, you can just spawn randomly positioned 'impacts' on the ground and randomly positioned raindrops that disappear 'behind' the ground. You would think you'd need to resolve each raindrop's collision and animate an 'impact' at the right point, but it turns out our brain is pretty good at being fooled.
I think what helped sell it, though, was the rain and thunder sound effects.
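The trick above could be sketched like this (a toy model, not from the article; the speeds and lifetimes are made-up constants):

```python
import random

class Rain:
    """Fake rain: streaks fall and simply vanish at a random 'ground'
    depth, while impact ripples spawn at unrelated random ground
    positions. No per-drop collision is ever resolved."""

    def __init__(self, width, ground_y):
        self.width, self.ground_y = width, ground_y
        self.drops = []    # [x, y, vanish_y]
        self.impacts = []  # [x, y, age]

    def update(self, dt, spawn_rate=5):
        for _ in range(spawn_rate):
            x = random.uniform(0, self.width)
            # each drop picks its own fake landing depth up front
            self.drops.append(
                [x, 0.0, random.uniform(self.ground_y, self.ground_y + 40)])
            # impacts spawn independently -- nothing checks they match a drop
            self.impacts.append(
                [random.uniform(0, self.width),
                 random.uniform(self.ground_y, self.ground_y + 40), 0.0])
        for d in self.drops:
            d[1] += 600 * dt                       # fall speed, px/s
        self.drops = [d for d in self.drops if d[1] < d[2]]  # vanish 'behind' ground
        for i in self.impacts:
            i[2] += dt
        self.impacts = [i for i in self.impacts if i[2] < 0.3]  # ripple lifetime
```

Because drops vanish at a depth chosen when they spawn, and impacts appear at independently random spots, the eye stitches the two populations together into "rain hitting the ground" on its own.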
>I tried to add a skew there, but it turned out to be unnecessary.
>It should be noted here, by the way, that the sprite is highly distorted vertically (the shadow sprite original looks like a circle). That’s why its rotation looks like not just a simple rotation but also like a distortion.
…which, if I recall my affine transforms, would also be called a "skew". I think I get the point though. This let them construct the skew in a more intuitive way than trying to make it an explicit parameter.
I wish they'd let you in on a little more, because my numb mind says this isn't something you should ever need to optimize. How many light sources do they have, dozens at most? And how many sprites? Dozens, maybe hundreds? That is nothing; what is this heavy math? I figured all they'd need is a vector from light source to object and that vector's magnitude.
I had to do this for a Unity game which had a similar "custom" light solution, and on lower-end devices it was a problem. It was easily solved using a simple spatial data structure, but doing it the naive way did present some issues.
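A "simple spatial data structure" here might be nothing fancier than a uniform grid (a sketch, not the actual Unity code; assumes the cell size is on the order of the largest light radius, so the 3x3 neighborhood check is sufficient):

```python
import math
from collections import defaultdict

class LightGrid:
    """Uniform grid so each sprite only tests lights in nearby cells
    instead of every light in the scene."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def add_light(self, x, y, radius):
        key = (int(x // self.cell), int(y // self.cell))
        self.buckets[key].append((x, y, radius))

    def lights_near(self, x, y):
        """Lights whose radius reaches (x, y), checking the 3x3 cell block."""
        cx, cy = int(x // self.cell), int(y // self.cell)
        hits = []
        for gx in (cx - 1, cx, cx + 1):
            for gy in (cy - 1, cy, cy + 1):
                for lx, ly, r in self.buckets.get((gx, gy), ()):
                    if math.hypot(lx - x, ly - y) <= r:
                        hits.append((lx, ly, r))
        return hits
```

That turns the per-sprite cost from "all lights" to "lights in nine cells", which is usually the difference between fine and not fine on low-end hardware.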
I've written a lighting system exactly like this one (it's a very common 2D lighting concept; basically any sprite-based game with dynamic lighting will use exactly this). The only difference was that I used 24 light-direction frames instead of 4, because I was pre-rendering 3D models to 2D sprites: I didn't have to have an artist draw each frame, but could export any level of precision, limited not by the artist but by filesize/memory.
The calculations he's talking about are as simple as it gets and were never really a problem for me, even on low-end machines. (I was building a top-down space RPG; basically any of the ~100 plasma discharges, engine exhausts, particles, etc. was a dynamic source of light.)
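The per-sprite math really is tiny: pick which pre-rendered direction frame to draw from the angle to the light. Something like this (a sketch of the idea, not the actual game code):

```python
import math

def light_frame(sprite_pos, light_pos, frames=24):
    """Index of the pre-rendered lighting frame for the direction
    from sprite to light. With 24 frames each covers 15 degrees;
    with 4 frames, 90 degrees."""
    dx = light_pos[0] - sprite_pos[0]
    dy = light_pos[1] - sprite_pos[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)   # normalize to 0..2pi
    # snap to the nearest frame center, wrapping at 2pi
    return int(round(angle / (2 * math.pi) * frames)) % frames
```

One `atan2`, one multiply, one round per sprite-light pair; hard to see that being a bottleneck at these object counts.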
But that doesn't seem to be their concern here, either. Optimizing broadphase collision is a well-understood problem, and the scenario of finding the nearest is only slightly different from that.
This is really impressive, and a succinct write up of what can take a very long time to implement successfully. 2D/3D hybrid engines are hard!