This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.
This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)
But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.
* * *
To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check out this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/
OS X dilates glyphs when using (linear) LCD antialiasing to counteract this effect. Chrome was having problems getting consistent text rendering in different contexts because of this dilation: https://lists.w3.org/Archives/Public/www-style/2012Oct/0109....
By the way, as I mentioned in my article, Photoshop has an option to use a custom gamma for text antialiasing, which is set to 1.42 by default (check out the antialiasing section). Vector graphics programs could adopt a similar workaround and things would be mostly fine.
Be sure to check you're not using gamma-incorrect scaling algorithms, they'd have that effect as well.
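For anyone unsure what "gamma-correct scaling" means in practice, here's a minimal sketch in Python/NumPy (my own illustration, not from the article) of a 2x box downscale done in linear light versus directly on the encoded values, using a pure power-law approximation of sRGB for brevity:

    import numpy as np

    def srgb_to_linear(u):
        return u ** 2.2          # power-law approximation of the sRGB decode

    def linear_to_srgb(u):
        return u ** (1.0 / 2.2)

    def downscale2x(img, correct=True):
        """2x box downscale of an HxWx3 float array in [0, 1] (even H and W)."""
        if correct:
            img = srgb_to_linear(img)  # average actual light, not encoded values
        img = (img[0::2, 0::2] + img[1::2, 0::2] +
               img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        return linear_to_srgb(img) if correct else img

With correct=False you get the usual too-dark midtones: a 50/50 black-and-white region averages to encoded 0.5, which displays at only ~22% of white's intensity instead of the correct 50%.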
f(x+eps)/f(x) ~= eps f'(x)/f(x) + 1
f(x) = x^2.2
f'(x) = 2.2x^1.2
f(x+eps)/f(x) ~= 2.2 eps/x + 1
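A quick numeric sanity check of that approximation, with illustrative values:

    x, eps = 0.5, 0.01
    f = lambda v: v ** 2.2
    print(f(x + eps) / f(x))   # ~1.0445
    print(1 + 2.2 * eps / x)   # 1.044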
Human response to light is not particularly well-modeled by a logarithmic response. It's --- no big surprise --- better modeled by a power law.
This stuff is confusing because there are two perceptual "laws" that people like to cite: Weber-Fechner, and Stevens's. Weber-Fechner is logarithmic; Stevens's is a generalized power-law response.
Neither would seem that useful, given the eye's very uneven weighting toward certain wavelengths, and the fact that women's color perception differs from men's (and that doesn't get into those who have had cataract surgery and can now see into the UV range, which throws perception off dramatically from what most people consider 'normal'). Those generalizations have been outdated by science from roughly the 1970s on. Stevens's power law was shown not to hold up very well when considering individual respondents.
"Essentially, all models are wrong, but some are useful" and all that.
Let y = ax^n
Then log(y) = log(ax^n) = log(a)+log(x^n) = log(a) + log(x)*n
i.e. an equation for a line when plotting log(y) vs. log(x).
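Which also gives you a cheap way to estimate an unknown exponent from data; a sketch with NumPy (values made up):

    import numpy as np

    a, n = 3.0, 2.2
    x = np.linspace(0.1, 1.0, 50)
    y = a * x ** n

    # The slope of log(y) vs. log(x) recovers n; the intercept recovers log(a).
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    print(slope, np.exp(intercept))   # ~2.2, ~3.0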
Except that for me it isn't. The first one, graded by emission rather than perception, appears more evenly graded to me. There is no setting I can find using the Apple calibration tool (even in expert mode) that does anything but strengthen this perception.
This raises only questions. Is this discrepancy caused by my Apple Thunderbolt Display? By my mild myopia? The natural lighting? My high-protein diet? The jazz on the stereo? The NSA? Or do I really have a different perception of light intensity?
And is anyone else getting the same?
Note: I have always had trouble with gamma correction during game setup; there has never been a setting I liked. Typically there'll be a request to adjust gamma until a character disappears, but however I fiddle things it never does.
Try a few different gamma calibration images from other sources (Google Images -> "gamma calibration") and if they consistently indicate that your monitor is miscalibrated, then you have your answer.
And if it _is_ plausible that four random monitors are all miscalibrated in the same way, why should we optimize for well-calibrated monitors?
Looking closer, it seems like my computer is, like, anti-aliasing the image itself. In Digital Color Meter, the white and black pixels are both grays. See screenshot below, the magnified area is from square A in the browser. When I downloaded the image and opened it in Preview it is black and white like it's supposed to be.
Screenshot: https://cl.ly/2T2U2J0A3v31 (If you're on a Mac maybe try opening the image in an image viewer)
Anyone know what's going on? I also can't distinguish between the first few black bars in Figure 2.
EDIT: Sorry, the above applies to my non-retina external display that I have hooked up to my retina macbook. When I view the test images on my internal retina display, I do see the issue you describe (pattern matches B). If I press cmd+- (command minus) a few times until I'm at a 50% zoom level, the issue is resolved and the pattern matches C! Makes some sense actually, since showing a normal dpi photo at 50% on a retina makes for a 1:1 pixel mapping :) Showing an image at 100% on a retina makes for a 1:2 pixel mapping (each pixel from the image ends up being 2x2=4 physical pixels), which disobeys the don't-rescale directive.
The extreme for me was figure 12. A and B are so similar I can't see the line between them, but C (the "corrected" square) is a completely different shade.
I'm viewing on a data projector. That's probably the reason. Still, it makes me skeptical that there's anything display-agnostic you can do for gamma.
All these algorithms assume they are performing math on linear scale measurements of physical light. However, most image data is not encoded as linear scale samples of light intensity. Instead they are gamma encoded.
What the article gets slightly wrong, though, is that images are not gamma encoded to deal with the non-linear response of the human eye to intensity. Instead, it's to deal with the non-linear response of CRT displays to the linear voltages produced by camera sensors. The gamma encoding adjusts the image data so that a display will correctly produce linear scales of light intensity to match the physical light measured from a scene.
You are rightly skeptical that gamma encoding can really deal with the broad variety of different displays. However, it is still the case that most images are gamma encoded with roughly gamma 2.2, and that all image processing algorithms on the other hand assume gamma 1.0, and misbehave on data that is gamma 2.2.
It is, of course, still the case that by chance, human visual response is roughly the inverse of gamma 2.2. But bringing this up while trying to make a point about performing operations on linear gamma data is somewhat distracting.
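To make the "misbehave on gamma-2.2 data" point concrete, here's a toy example (mine, with a pure power-law stand-in for sRGB) of a simple 50/50 blend done both ways:

    def to_linear(u):
        return u ** 2.2           # approximate sRGB decode

    def to_encoded(u):
        return u ** (1.0 / 2.2)

    black, white = 0.0, 1.0       # encoded pixel values

    naive = (black + white) / 2   # blending the encoded values directly
    correct = to_encoded((to_linear(black) + to_linear(white)) / 2)

    print(naive, correct)         # 0.5 vs. ~0.73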
* The author only said "browser", but actually everything in the chain matters. If you're not at a 1:1 pixel mapping you're resampling, and resampling breaks the checkerboard example. That includes digital keystone correction (though not optical keystone done with tilting lenses).
If you work on game textures, and especially for effects like particles, it's important that you change the photoshop option to use gamma correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in photoshop.
This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.
In my university's computer vision course (which I help teach), we teach this stuff to make students understand the physics, but at the end of the lecture I'd always note that for vision it's largely irrelevant and isn't worth the cycles to convert the image to a linear scale.
For the linear case a "tent filter" instead of a box filter would be really correct; a similar filter function exists for the cubic case. Both of these receive an inverse Jacobian matrix indicating the shape and extent of the surrounding area in the source image to sample from.
Yes, a linear interpolation is the same as a tent filter. See http://stackoverflow.com/a/12613415/5987
To eliminate moire artifacts you need to remove all frequencies above the Nyquist limit. In resizing applications there are two Nyquist limits, one for the input and one for the output; you need to filter for whichever is smallest. When upsizing the input limit is always smallest, so the filters can be constant. When downsizing the output limit is smallest so that's the one you need to tune your filter for. That's why I suggest widening the interpolation formula when downsizing.
I've been meaning for years to make a blog post on this subject. I don't think many people realize that an interpolation formula is also a filter formula, and that it can be manipulated and analyzed as such.
I have my own implementation of the classic filters that I use for my resizing tasks. It works in linear gamma space as suggested in the article. I've implemented lots of different algorithms, and I've settled on Lanczos-5 as the best overall compromise. One interesting observation: Catmull-Rom bicubic interpolation is nearly indistinguishable from Lanczos-2.
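For readers who haven't seen it, a sketch of the idea (not the parent's code; plain Python/NumPy, 1-D only) showing the Lanczos kernel and the widening-when-downsizing trick described above:

    import numpy as np

    def lanczos(x, a=5):
        """Lanczos-a kernel: sinc(x) * sinc(x/a), zero outside |x| < a."""
        x = np.asarray(x, dtype=np.float64)
        return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

    def resample_row(row, new_len, a=5):
        """1-D Lanczos resample. When downsizing, the kernel is stretched by
        the scale factor, moving its cutoff down to the output Nyquist limit."""
        row = np.asarray(row, dtype=np.float64)
        old_len = len(row)
        scale = max(old_len / new_len, 1.0)   # > 1 only when downsizing
        out = np.empty(new_len)
        for i in range(new_len):
            center = (i + 0.5) * old_len / new_len - 0.5
            taps = np.arange(int(np.floor(center - a * scale)),
                             int(np.ceil(center + a * scale)) + 1)
            w = lanczos((taps - center) / scale, a)
            out[i] = np.dot(w, row[np.clip(taps, 0, old_len - 1)]) / w.sum()
        return out

(This should of course run on linearized data, per the article.)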
But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.
Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.
So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?
I wonder if relying just on gamma (which maps only brightness) is not enough, but whether there are equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?
Until then, have a look at this online gradient generator tool, which can use five different algorithms (there's some explanation provided for each method as well):
I also recommend reading the superb 'Subtleties of Color' NASA article series on the matter:
PS: I'm slightly confused about your comment on the black to white gradient though. Are you saying the gradient on Figure 1 looks more even to you than on Figure 2? In that case, I think your monitor is miscalibrated, so try gamma-calibrating it first then have another look. Your opinion might change :)
All of your color gradients use a constant light output. You transition from one fully saturated primary to another, or two fully saturated primaries to two others. For that to look natural, the interpolated values should also produce a constant light output.
The grayscale ramp is not a constant light output, and the light intensity needs to follow a perceptual curve to look natural.
This is not how light behaves (it doesn't divide by two on a wall), but it's what you need to make a gradient.
When selecting your initial color palette, though, it is true hue and saturation do have nonlinear perception.
There are (somewhat complicated) color spaces designed to deal with that: https://en.wikipedia.org/wiki/Lab_color_space
http://www.husl-colors.org/ lets you play with a compromise color space.
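If you want to experiment yourself, here's a minimal linear-light gradient sketch in Python/NumPy (mine; pure power-law approximation of sRGB):

    import numpy as np

    def gradient(c0, c1, steps, gamma=2.2):
        """Interpolate two gamma-encoded RGB colors in linear light."""
        lin0, lin1 = np.asarray(c0) ** gamma, np.asarray(c1) ** gamma
        t = np.linspace(0.0, 1.0, steps)[:, None]
        return ((1 - t) * lin0 + t * lin1) ** (1.0 / gamma)

    # Red to green, channel values in [0, 1]; note the midpoint stays bright
    # (~0.73 per active channel) instead of dipping toward a dark band.
    print(gradient([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 5))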
Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e. yep, you have to know something about gamma.
In the 3D printing world, we have a filament of PLA (corn-based plastic) with around 20% wood dust mixed into the plastic. This gives a filament that feels, machines, and colors like wood.
And if you extrude at varying temps, you can give the part lighter or darker colors (darkness dependent on higher temp).
I can almost guarantee that there's no gamma correction, let alone darkness calculation, being done for filaments like that. And it would follow an inverse power distribution - the more heat, the higher the power and the lower the final energy result (burnt = lower potential energy in wood).
So yeah, even if you don't do graphics, it still applies. Damn...
Only if it's a fancy commercial toaster with optical feedback would gamma correction matter.
FIGURES 1 & 2. On monitor A, all bands of color in figure 1 were easily discernible. The first four bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor B, all bands of color in figure 1 were easily discernible. The first five bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor C, all bands of color except the last two in figure 1 were easily discernible. The first three bands of color in figure 2 looked identical. Figure 1 looked about as evenly spaced as figure 2. The result from monitor D was the same as the result from monitor A.
FIGURE 12. On monitors A and B, the color of (A) was closer to (B) than to (C). On monitor C, (A) appeared equally close in color to (B) and (C). On monitor D, the color of (A) was exactly identical to (B).
CONCLUSION: On monitor C, gamma correction had a neutral effect. On all other monitors, the effects were negative. Unfortunately, I was unable to find a standalone PC monitor for my comparison. It is entirely possible that a PC monitor would give a different result. However, since most people use laptops and tablets nowadays, I doubt the article's premise that "every coder should know about gamma".
Practically all the problems described in the article (which BTW has a few factual inaccuracies regarding the technical details on the how and why of gamma) vanish if graphics operations are performed in a linear connection color space. The most robust choice would have been CIE1931 (aka XYZ1931).
Doing linear operations in CIE Lab also avoids the gamma problems (the L component is linear as well); however, the chroma transformation between XYZ and the ab components of Lab is nonlinear. Still, from an image processing and manipulation point of view doing linear operations also on the ab components of Lab will actually yield the "expected" results.
The biggest drawback with connection color spaces is that 8 bits of dynamic range are insufficient for the L channel; 10 bits is sufficient, but in general one wants at least 12 bits. In terms of 32 bits per pixel, a practical distribution is 12L 10a 10b. Unfortunately current GPUs experience a performance penalty with this kind of alignment, so in practice one is going to use a 16 bits per channel format.
One must be aware that, aside from the linear XYZ and Lab color spaces, images are often stored with a nonlinear mapping even when a connection color space is used. For example, DCI-compliant digital cinema package video essence encoding is specified to be stored as CIE1931 XYZ with D65 whitepoint and a gamma = 2.6 mapping applied, using 12 bits per channel.
Nope. As you point out, if you use 8-bit integers to represent colors, you absolutely want to use a gamma-encoded color space. Otherwise you’re wasting most of your bits and your images will look like crap. In the 80s/90s, extra bits per pixel were expensive.
Linear encoding only starts to be reasonable with 12+ bit integers or a floating point representation.
> The most robust choice would have been CIE1931
RGB or XYZ doesn’t make any difference to “robustness”, if we’re just adding colors together. These are just linear transformations of each other.
> (the L component is linear as well) [...] from an image processing and manipulation point of view doing linear operations also on the ab components of Lab will actually yield the "expected" results.
This is not correct.
It is true that the errors you get from taking affine combinations of colors in CIELAB space are not quite as outrageous as the errors you get from doing the same in gamma-encoded RGB space.
What I meant was that there are so many different RGB color spaces that just converting to "RGB" is not enough. One picture may have been encoded in Adobe RGB, another one in sRGB, and even after linearization they're not exactly the same. Yes, one can certainly bring them into a common RGB, but then you may as well transform into a well-defined connection color space like XYZ.
> It is true that the errors you get from taking affine combinations of colors in CIELAB space are not quite as outrageous as the errors you get from doing the same in gamma-encoded RGB space.
That's what I meant by "expected" results. In general Lab is a very convenient color space to work with. I was wrong though about L being linear. It's nonlinear as well, but not in such a nasty way as sRGB is.
Another important aspect to consider is that using just gamma is not the most efficient way to distribute the bits. You want a logarithmic mapping for that; which also has the nice side effect that a power-law gamma value ends up as a constant scaling factor on the logarithmic values.
Now, it's also important to understand that these days the bread-and-butter color space is sRGB, and that complicates things. sRGB has the somewhat inconvenient property that for the lower range of values it's actually _linear_, and only after a certain threshold does it continue (differentiably) with a power-law curve. That's kind of annoying, because with that you can no longer remap logarithmically. And converting from and to sRGB can be a bit annoying because of that threshold value; you certainly can no longer write it as a convenient one-liner in a GPU shader, for example. That's why modern OpenGL profiles also have special sRGB framebuffer and image formats, and reading from and writing to them will perform the right linearization mapping.
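For reference, the piecewise conversion looks like this (scalar Python sketch; the threshold and constants are from the sRGB spec):

    def srgb_to_linear(u):
        # Linear toe below the threshold, offset power law above it.
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(v):
        return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055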
However, whatever the explanation for gamma is, the important takeaway is that to do image processing properly, the values have to be converted into a linear color space for things to work nicely. Ideally a linear connection color space.
There are some (in my opinion) better explanations in books, but it’s hard to link people to books. :-)
I investigated and wrote a post called "Computer color is only kinda broken".
This post includes visuals and investigates mixing two colors together in different colorspaces.
If you're reading comments, I just thought you should know that the link to w3.org in the (color) "Gradients" section is broken.
It should point to https://lists.w3.org/Archives/Public/www-style/2012Jan/0607.... but there's an extra "o" at the end of the URL in your page's link.
Many TVs have horribly excessive color saturation and contrast settings by default, in order to look sharp and colorful (specifically, more sharp and colorful than the next brand) at the store. Maybe that applies to computer monitors too.
One thing not discussed, though, is what to do with values that don't fit in the zero-to-one range. In 3-D rendering, there is no maximum intensity of light, so what's the ideal strategy to truncate to the needed range?
This depends on your aesthetic goals. There’s no single right answer here.
There’s been a large amount of academic research into “high dynamic range imaging”. If you do a google scholar search, you can find hundreds of papers on the subject, I recommend you start with the most cited ones, and then follow the citation graph where your interests lead you.
Or start with Wikipedia, https://en.wikipedia.org/wiki/High-dynamic-range_imaging
I'm more interested in the computational and algorithmic side of ray-tracing, so I care more about things like constructing optimal bounding volume hierarchies than getting all the physically-correct-rendering details right to produce the absolute best possible output. I just don't want the output to be ugly for easily fixable reasons.
So, the short answer to your real question is 'tone mapping' which...is kind of dumb, imho. Clamping is probably fine.
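For the record, both options are a couple of lines each; a sketch (mine) of plain clamping next to the simple global Reinhard operator, applied to linear HDR values:

    import numpy as np

    def clamp(hdr):
        return np.clip(hdr, 0.0, 1.0)    # hard clip everything above 1

    def reinhard(hdr):
        return hdr / (1.0 + hdr)         # compress highlights smoothly toward 1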
The important thing to remember is that, while ray tracing is cool and fun to code, it has no basis in physical reality. It's no more or less of a hack than scanline polygon rendering (which is to say, you could possibly look at them as approximate solutions to the 'rendering equation' with important things variably ignored? but that's like saying y=x is a cheap approximation of y=x^2...)
One cool hack to take a description of a scene graph as a set of polygons and end up with an image of the scene is to be like "ok what polygon is under this pixel in the target view? Which way is it facing? Which way are the lights facing relative to it? What color is this material? Ok multiply that crap together, make the pixel that color". That's good old fashioned scanline 'computer graphics'. Another cool hack is "well, what if we followed a 'ray' out from each pixel, did angle-of-incidence-equals-angle-of-reflection, did some csg for intersecting the rays with surfaces, see if we end up with a light at the end and then multiply through the light and the material colors blah blah blah" but its also just a hack.
I mean, it takes some loose inspiration from the real world I guess, but it's not physically correct at all.
I mention this because I totally get where you are coming from. You might want to check out some techniques that are physically-based though, because they also have interesting implementations (mlt, photon mapping, radiosity)....you might even find it useful to drive your physically-based renderer's sampling bias from intermediate output of your ray tracer!
My goal is real-time rendering. I've met with some success; it's definitely nowhere close to the fastest ray tracers around, but I can manage to pull off around 30 fps on a well-behaved static scene with maybe a few tens of thousands of triangles at 720x480 on a dual-socket broadwell Xeon setup with 24 cores. This means that it's fast enough to make simple interactive simulations and/or games, which is what I mostly care about.
Ray tracing has a lot of advantages when it comes to building applications. I can define my own geometric primitives that are represented by compact data structures. I can do CSG. I can create "portals" that teleport any ray that hits them to another part of the scene and use that to build scenes that violate conventional geometry. I can trace rays to do visibility calculations, collision detection, and to tell me what I just clicked on. I can even click on the reflection of a thing and still be able to identify the object. There may be ways to do some of these things in scanline renderers, but I find it satisfying to be able to do them purely in software with a relatively simple codebase.
I don't have the CPU resources or the skill at optimization to attempt global illumination in real time, but there are other projects that are working on that sort of thing. I have done non-real-time photon mapping before in an earlier incarnation of my ray-tracer; maybe I'll port that forward some day.
(In case anyone is curious, my ray-tracer minus a lot of changes I've made in the last month or so and haven't bothered to push yet can be found here: https://github.com/jimsnow/glome)
By the way, I've written about this exact same topic in my initial ray tracing post:
The easiest way is to make a copy of the image, subtract 1.0 from the copy, blur that a bit and then add it on top of the original. This should of course be done before you go into gamma space, at which point you do clamp to 1.0.
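One possible reading of that recipe in Python/NumPy/SciPy (mine; treating "subtract 1.0" as keeping only the positive excess):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bloom(linear, sigma=8.0):
        """Spill over-bright energy into neighbors, then clamp. `linear` is an
        HxWx3 float array of linear-light values; run this before gamma encoding."""
        excess = np.maximum(linear - 1.0, 0.0)              # the part that doesn't fit
        spill = gaussian_filter(excess, sigma=(sigma, sigma, 0))
        return np.clip(linear + spill, 0.0, 1.0)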
As long as you're rendering in a linear space (which means simply that you've gamma-corrected all your inputs) and displaying with a display gamma applied, then you're fine.
Beyond that you might choose to emulate a "film look" by applying an S-curve to your image.
EDIT: also, you really don't want to be clamping at all when saving images - just write out a floating-point OpenEXR file.
This is just because most of the academic researchers in the field (basically mathematicians/programmers) have horrible aesthetic taste and poor understanding of human vision, not because the concept of local contrast control is inherently bad.
HDR “tonemapping” when well done doesn’t call attention to itself, and you won’t notice it per se.
This is a problem with image processing in general. Researchers are looking for something to demo at SIGGRAPH to non-artists. They want something very obvious and fancy on a few hand-picked images for a 5 minute talk. They aren’t necessarily trying to build general-purpose tools for artists.
Real photographers use all kinds of trickery to control contrast and dynamic range. Ansel Adams tweaked the hell out of his photographs at every stage from capture to print, using pre-exposed negatives, non-standard chemistry timings, contrast-adjusting masks sandwiched with negatives, tons of dodging and burning, etc.
The old term was “printing”, but to a layman that now connotes pressing a button to get an inkjet or whatever.
You can certainly come up with automatic operators which look better than “hard clip everything brighter than 1.0”.
I agree though that there should be more work put into making usable tools for artists instead of “let’s magically fix the picture in one click”.
Check out these photos - https://imgur.com/a/cisJY - this was shot with a DSLR with 14 stops of dynamic range. My eyes couldn't see much in the dark spots when I was looking at the bright buildings.
These same RAW photos look horrible (and I mean really, unbearably bad) with "linear mapping".
ACES has a decent enough transform.
Ultimately it depends on what aesthetic you are aiming for and the context you are displaying it in.
You don't truncate, you map.
If you want to be really sure you're correct do everything in the XYZ colour space and then finally convert to the correct RGB space. Which is pretty much unknown for most displays by the way, if you're lucky it's something similar to sRGB.
What gets super confusing is that you have a bunch of different stuff flying around. You have textures in different formats and render targets in different formats (some are in sRGB, some are in HDR 16-bit floating-point, some are other random formats somewhere in-between). You need to set up your shader state to do the right thing for both the input texture and the render target, and the nuances of how to do this are going to change from system to system. Sometimes if you make a mistake it is easily spotted; other times it isn't.
And then there are issues of vertex color, etc. Do you put your vertex colors in sRGB or linear space? Well, there are good reasons for either choice in different contexts. So maybe your engine provides both options. Well, now that's another thing for a programmer to accidentally get wrong sometimes. Maybe you want to introduce typechecked units to your floating-point colors to try and error-proof this, but we have not tried that and it might be annoying.
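A sketch of what those typechecked units might look like (hypothetical names, Python for brevity; in an engine you'd do the same with distinct structs):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SRGBColor:       # gamma-encoded, as stored in most textures
        r: float; g: float; b: float

    @dataclass(frozen=True)
    class LinearColor:     # linear light, safe for arithmetic
        r: float; g: float; b: float

    def decode(c: SRGBColor) -> LinearColor:
        f = lambda u: u ** 2.2   # power-law approximation, for brevity
        return LinearColor(f(c.r), f(c.g), f(c.b))

    def blend(a: LinearColor, b: LinearColor, t: float) -> LinearColor:
        # A type checker now rejects blend(SRGBColor(...), ...) outright.
        lerp = lambda x, y: (1 - t) * x + t * y
        return LinearColor(lerp(a.r, b.r), lerp(a.g, b.g), lerp(a.b, b.b))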
All that said, everyone is about to rejigger their engines somewhat in order to be able to output to HDR TVs (we are in the process of doing this, and while it is not too terrible, it does involve throwing away some old stuff that doesn't make sense any more and replacing it with stuff that works a new way).
I can't find it, but I was also reading an article a few days ago about why that person would choose to write their own engine over a third-party one. In that article (maybe it was the comments), they specifically called out the game mechanics of The Witness and Braid as examples of things traditional engines don't do very well.
"..when I'm making these games, they're not just commercial products. They're expressive works that we're working very hard to make, and we want them to have as long of a lifetime as possible. So I would like this game to be relevant 20 years from now, 40 years from now. How do you do that? Well, you don't necessarily do that by building it on top of a very complicated system, that you don't own, that somebody else will cease to support at some point in the future."
Given that UE4 is open source I wonder if he still has the same sentiment.
As someone that has been working on a game in UE4 for 2.5 years the idea of rolling and supporting my own engine/tools is dizzying.
Look at the particles inside the pit at the bottom of the screen. http://imgur.com/BQjJD25
Can you see what is wrong?
Now take a look at what it used to look like: http://imgur.com/9z9b1cq
After seeing the correct version the problem is really obvious.
The particular problem here is that at some point in the pipeline the colour of the particles was stored linearly instead of in sRGB. That results in the colour banding.
A programmer who was working on the engine managed to accidentally break this, and the thing that really sucks about it is that the issue is really subtle and nobody noticed it for quite some time. Thankfully it didn't get to production before I noticed it, but that was complete chance. It could easily have gone out like that.
I have worked with them a lot and they are very powerful (if a bit verbose). Despite this, I was able to write a fairly nice implementation of async await, and just last weekend I made writing both synchronous and asynchronous IO code much easier using a ``multisync`` macro. This only took me ~100 lines.
1: Still a WIP, but already reduces code duplication A LOT: https://github.com/nim-lang/Nim/blob/devel/lib/pure/httpclie...
This is how a debug macro, a very simple one, is implemented in Nim (taken from the tutorial):
    macro debug(n: varargs[expr]): stmt =
      # Build a statement list that, for each argument, writes the argument's
      # source text, then ": ", then its value at runtime.
      result = newNimNode(nnkStmtList, n)
      for i in 0..n.len-1:
        result.add(newCall("write", newIdentNode("stdout"), toStrLit(n[i])))
        result.add(newCall("write", newIdentNode("stdout"), newStrLitNode(": ")))
        result.add(newCall("writeLine", newIdentNode("stdout"), n[i]))
    (lambda (e i c)
      (let ((stmts (cdr e)))
        (map (lambda (stmt)
               `(begin (write ',stmt)
                       (display ": ")
                       (write ,stmt)
                       (newline)))
             stmts)))
I don't blame Nim for being bad at manipulating its own AST. Lisp is pretty bad at it, too (no, lists are not the Lisp AST, not in any Lisp suitable for the Real World. I don't know who told you that, but it's a lie: most Lisp ASTs are just as complex as Nim's). What I do blame Nim for is not being potentially homoiconic so there's no easy data structure to represent code in, or at least providing a data structure that's easier to manipulate, or at the very least providing some mechanisms to make the existing structures easier to manipulate.
    (lambda (e i c)
      (let ((stmts (cdr e)))
        (cons 'begin (map (lambda (stmt)
                            `(begin (write ',stmt)
                                    (display ": ")
                                    (write ,stmt)
                                    (newline)))
                          stmts))))
If you mean that whatever AST is generated by calling the macro gets put in the enclosing scope - then yes, AFAIK you're right, but this can be fixed by wrapping the macro in a proc (function).
I'll agree that forward declaration is really clumsy; I hope that is fixed before 1.0.
That's pretty bad.
I also fail to see how wrapping a macro in a proc would help, unless you mean the call to the macro at the expansion site, which is a pretty clumsy thing to do, and actually doesn't fix most of the problems.
You mean this? http://nim-lang.org/docs/macros.html#genSym,NimSymKind,strin...
That makes things a little bit better, but still.
It's a small thing, but it's nice to do. Or at least detect overflows and move to bignum in the default numerical implementation. It's not end-of-the-world if you don't, but it helps avoid a lot of bugs...
It's really problematic to use them, though. Every integer would turn into a pointer, and the O(N) thing really is a problem. If you're doing secure coding you need careful control of data dependencies, since they create side-channel leaks, and nobody who makes "safe" languages appreciates this.
Plus most people don't need numbers that big. I think it was even a mistake to make size_t 64-bit.
The way a lot of HLLs do it is to detect a potential overflow, and convert to bignum if needed.
As others commented the gamma scaling issues seem even more relevant.
Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of colors as is.
Interesting excursion: historically, default viewing gammas seem to have decreased, because broadcasting assumed dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.
I'm troubleshooting a problem right now where two separate applications blend images together and the result is different. Both results have been shipped to clients for years and fixing it is more of "tidying things up."
But doing it right the first time or being aware helps things look right more often and can enable you to do more complex things without constantly having to tweak the results.
As I mentioned in the article, you can get away with working directly in sRGB in some cases (general image processing, non-photorealistic rendering), but in some other cases it's a must (physically accurate photorealistic rendering). But ignoring it will always produce wrong results. You might actually like those results, but the important point I was trying to get across was to be aware of these potential issues; then you may ignore them at your own peril.
One thing the article misses is a process or checklist to discover your requirements for color spaces. Characterizing it as a gamma-only problem isn't entirely correct, since we also have completely different models of color (e.g. Lab, HSV, YUV) that make tradeoffs for different applications. So, something like:
1. Input data: if it contains gamma data or specifies a color space use that, else assume linear sRGB.
2. Output device: is gamma correction done for you/can you access information about the device capabilities and lighting situation? This can inform viewing conditions. For example, if your smartphone has a light sensor this can be used to adjust gamma as well as backlighting to achieve a near-linear perceptual response. Most apps wouldn't consider doing this of course. If the output is a file or stream determine the most likely use cases and convert as necessary.
3. Internal processing. For each image or signal you process, determine its input color space, and the ideal color space to run it in. Then decide which approximation is an appropriate trade-off for your application(since many color space conversions are compute-intense), and implement conversions as necessary. For many situations, gamma-corrected sRGB is "good enough" hence its emphasis in the article.
Actually, no emulator does gamma-correct scaling, and when I've tried adding some in the past a lot of games looked quite different and much darker. So different that I don't think most players, who grew up on emus instead of the real thing, would actually accept it.
It was hard enough getting them to accept correct aspect ratios, since NES pixels weren't square either.
Since doing limited ranges on sRGB looks so weird (mainly, 16 is surprisingly bright for being black), some people gamma correct for 1.8 instead of 2.2... which is a hilarious value when you realize that is the gamma value for classic Macs.
I mean, I don't like running with the accurately rainbowy and scanliney composite cable mode filters myself, and I thought I cared.
To do this naively: mathematically, 5 bits is 32 values (0-31). Multiply each value by 8 and you get 0, 8 ... 248. Multiply each value by 7 and you get 0, 7 ... 217; offset each by 16 and you get 16, 23 ... 233, which approximates the 16-240 limited range.
Also, some emulators do offer chroma subsampling effects just to make certain things look more accurate.
That said, yes, all my emulation days involved "incorrect" SNES rendering: 5 bit per channel SNES RGB, with each step being +32, with no NTSC adjustment.
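The two mappings from the naive math above, as code (mine), including the common bit-replication shortcut for full range:

    def expand_5bit_full(v):
        """5-bit channel (0-31) to full-range 8-bit (0-255)."""
        return (v << 3) | (v >> 2)      # close approximation of round(v * 255 / 31)

    def expand_5bit_limited(v):
        """5-bit channel to the limited 16-240 range."""
        return 16 + (v * 224) // 31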
I guess that the required number of bits to encode physical intensity values depends on the operations performed. The author suggests using floats, but this means 3x4 bytes, or 4x4 bytes with an alpha channel. Would 16-bit unsigned integers be enough? Floats are OK when using graphics cards, but not OK when using the processor.
Floating point is much easier to work with, and sometimes makes a difference with quality. On modern CPUs it's also about as fast as integer processing, if not a bit faster sometimes.
On GPUs, it might actually be the other way around -- image processing tasks tend to be limited by memory bandwidth, so you might get better performance with 16-bit integers. But I haven't tried it.
It needs something that not only permits comparable overlays, but (perhaps with a third diff layer) also highlights the ugly/wrong pixels with a high-contrast paint.
A handful of images are only somewhat obviously problematic, but for most of the images, I really had to struggle to find undesirable artifacts.
If it's that difficult to discern inconsistent image artifacts, one can understand why so little attention is often paid to this situation.
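Something like this would do as a starting point; a sketch (mine) in Python/NumPy that paints differing pixels magenta:

    import numpy as np

    def highlight_diff(a, b, tol=2):
        """Copy of image `a` (HxWx3 uint8) with pixels differing from `b`
        by more than `tol` in any channel painted high-contrast magenta."""
        ai, bi = np.asarray(a, np.int16), np.asarray(b, np.int16)
        bad = (np.abs(ai - bi) > tol).any(axis=-1)
        out = np.asarray(a, np.uint8).copy()
        out[bad] = (255, 0, 255)
        return out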
Not just OS X. The majority of Linux games from the past two decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.
You can try the calibration instructions here, which does a decent job of calibrating the monitor, and showing you whether yours is good or not: http://www.lagom.nl/lcd-test/
Ok, does that mean that the device performs the gamma-transformation for me, and I don't need to worry about gamma?
(and if not, why not?)
Another option is to save the image from the webpage to disk and then open that image in a basic image editor (making sure the editor is not zoomed in at all). I'm not sure how feasible this is if you're on a phone though.
What would have been ideal is if the author of the article had included srcset alternatives for these images to cater for some of the more common high-DPI devices. It would then have just worked automatically for most people and caused a lot less confusion.
It's also possible that the image file is a PNG with a gAMA chunk, which sometimes gets rendered incorrectly by browsers.
You keep saying this, but many people don't have a real desktop monitor. Laptop sales exceed desktop sales and both are dwarfed even by tablets alone.
So whether it's technically the correct approach is irrelevant for a large proportion of potential end-users, for whom it will look broken.
Anyway, I think you have bigger problems than gamma with those devices anyway (glare, reflections etc.), so that's probably the least of your concerns.
My main problem is that I'm not good at on-the-fly encoding and outputting frame by frame feels a bit excessive.