I'm not judging which is better or worse; it would indeed be nice if the calibration could be controlled by the OS. I'm just saying that adjusting the monitor hardware directly is what's being done in the pro Linux content creation world. Also, I imagine that certain brightness and color gamut controls can only be done from the bezel controls. DreamColors can switch between sRGB & P3, for example.
it's good to know that you've found the controls sufficient on the systems you have adjusted!
When it comes time to implement the color management stuff into Wayland, it's declared out of scope for the core Wayland protocol, but don't worry guys -- someday, somehow, a consortium led by GNOME will implement it on a D-Bus interface that all compositors will understand. At least two mutually incompatible such protocols will emerge, neither of which will fix the underlying issues, and both of which will be tied to particular compositors.
Meanwhile, we are told that X11 is still horribly, irredeemably broken in this regard and if we haven't yet switched to Wayland, we really should by now.
The professional colorist industry goes on using Xorg on Linux.
Right now, Linux just can't do that consistently. Most of the existing solutions want me to buy a very expensive color calibration tool that I can't justify or afford.
Why is that? I like to have my monitors with a bit of a warmer colour balance, since it is nicer on my eyes, and I have no need for 100% colour reproduction. I think that everyone should strive for the most comfortable colours as long as they aren't doing photo editing or something.
But this should be explicit, and the software should know about it.
Everyone should strive to have colors that are as accurate as possible.
No, people have different needs and set their monitors' brightness and contrast accordingly. It's only the mentioned industries which require that accuracy --- and the associated, often very expensive, monitors and calibration equipment.
Except for accessibility reasons (e.g. high contrast for the visually impaired), there are no "different needs" that dictate that people should see colors rendered falsely compared to their reference if they're not in the creative professions.
Or any user of an audio equalizer set to a genre preset.
While I understand what you mean (having the screen properly calibrated out of the box would sure be nice), you might be using too strong words to express it :)
Then the layers up the stack like Wayland/X/GNOME/KDE are just messengers to/from the bottom @ drm.
We also need floating-point framebuffers to be first-class citizens at the KMS level. I don't want to be forced into OpenGL/Vulkan just to have the hardware apply gamma correction to a software-rendered framebuffer, and if the hardware is doing color correction, it kind of needs the dynamic range of floats, not uchars - and I don't think today's libdrm dumb buffers support anything else. If not floats, then at least more than 8 bits per color component.
Programs like Plymouth or other embedded-style applications running directly on libdrm should be able to have color-corrected output without needing bespoke software implementations and their own calibration tables. I should be able to hand the kernel the correction table, maybe compiled in, or as another payload on the boot cmdline.
Hell, there are fairly well-known, simple algorithms for generating approximate correction tables from a single gamma value. If I only want to make things look "better" on my Linux kiosk and don't care about absolute color correctness, let me stick a drm.gamma=.55 field on the kernel command line to generate the correction table in lieu of a fully calibrated one.
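For what it's worth, here's a minimal sketch of what that looks like from userspace today, using the legacy KMS gamma ramp in libdrm (the 256-entry size and the crtc_id are assumptions here; a real program would query both from the driver):

    /* Sketch: build a per-channel ramp from a single gamma value and
     * hand it to the CRTC through the legacy KMS gamma API. Newer
     * kernels expose the same idea (with bigger tables) through the
     * GAMMA_LUT atomic property. */
    #include <math.h>
    #include <stdint.h>
    #include <xf86drmMode.h>

    #define LUT_SIZE 256 /* assumed; query the real size from the CRTC */

    int set_simple_gamma(int drm_fd, uint32_t crtc_id, double gamma)
    {
        uint16_t r[LUT_SIZE], g[LUT_SIZE], b[LUT_SIZE];

        for (int i = 0; i < LUT_SIZE; i++) {
            /* Power curve over [0,1], scaled to the 16-bit range
             * the KMS gamma ramp expects. */
            double v = pow((double)i / (LUT_SIZE - 1), gamma);
            r[i] = g[i] = b[i] = (uint16_t)(v * 65535.0 + 0.5);
        }
        return drmModeCrtcSetGamma(drm_fd, crtc_id, LUT_SIZE, r, g, b);
    }

A drm.gamma= knob would essentially just do this in the kernel at boot instead.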
Or is your point that it's substantially more complicated than it appears, and I'd know that if I tried doing it myself?
However I think complaining about open source software has its place. Sometimes the developer has never thought of adding the feature that the end user can't get along without.
(not challenging, just asking, I know next to nothing about color correction)
Most desktop environments and applications assume that the source is sRGB, unless specifically tagged (in image metadata).
So you have two points: the source (image, video) and the reproduction device (display, printer, etc.), and you convert colors from the former to the latter. sRGB may not even come into play if the source image is AdobeRGB and the display is a wide-gamut pro LCD!
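As a rough sketch of that source-to-device pipeline (the 3x3 matrix below is a placeholder; a real one would be computed from the primaries and white points in the two profiles, and the display's power-law gamma is an assumption too):

    /* Sketch of a source->display conversion: decode the source
     * transfer curve, change primaries with a 3x3 matrix, re-encode
     * for the display. Matrix and display gamma are assumed inputs. */
    #include <math.h>

    static double srgb_decode(double v) /* sRGB EOTF, per the spec */
    {
        return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
    }

    void convert_pixel(const double in[3], double out[3],
                       const double m[3][3], double display_gamma)
    {
        double lin[3];
        for (int i = 0; i < 3; i++)
            lin[i] = srgb_decode(in[i]); /* to linear light */
        for (int i = 0; i < 3; i++) {
            double v = m[i][0] * lin[0] + m[i][1] * lin[1] + m[i][2] * lin[2];
            out[i] = pow(v < 0 ? 0 : v, 1.0 / display_gamma); /* re-encode */
        }
    }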
If by "video LUT" you mean something at the CRTC or even after it on some external device, if the software producing the visuals has already reduced the pixels down to 8-bits-per-component before they hit the LUT, then you've lost accuracy particularly in the small values.
This is why it's desirable to do one of the following:
1. inform the software of the LUT and let it perform the transform before it packs the pixels for display
2. change the entire system to have more bits per color component all the way down to the framebuffer, then the per-component LUTs at the CRTC can profitably contain > 256 entries.
I'm not an expert in this field at all, just play with graphics hacks. But this is what I've come to understand is the nature of the issue.
To clarify, the reality implied by the need for correction is that some areas of the 0-255 range of values are more significant than others. When you do a naive linear conversion from whatever precision the application is operating in down to the 24-bit RGB framebuffer, you've lost the extra accuracy precisely in the regions that happen to be more significant on a given display. So you'd much rather do the conversion before throwing away the extra precision, assuming the application was working with greater than 24-bit pixels.
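Here's a toy C illustration of the ordering problem, assuming a simple gamma-2.2 correction curve: correcting after the 8-bit quantization collapses many distinct dark inputs onto the same output code, while correcting at full precision first keeps them apart.

    /* Toy demo: in the darkest 10% of a 12-bit source range, count how
     * many distinct 8-bit output codes survive when a 1/2.2 gamma
     * correction runs before vs. after quantization to 8 bits. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int seen_pre[256] = {0}, seen_post[256] = {0};

        for (int i = 0; i < 4096; i++) {
            double v = i / 4095.0;
            if (v > 0.1)
                break; /* darkest 10% only */

            /* Correct at full precision, then quantize. */
            seen_pre[(int)(pow(v, 1.0 / 2.2) * 255.0 + 0.5)] = 1;

            /* Quantize to 8 bits first, then correct -- what a LUT
             * sitting after an 8-bit framebuffer effectively does. */
            double q = (int)(v * 255.0 + 0.5) / 255.0;
            seen_post[(int)(pow(q, 1.0 / 2.2) * 255.0 + 0.5)] = 1;
        }

        int pre = 0, post = 0;
        for (int i = 0; i < 256; i++) {
            pre += seen_pre[i];
            post += seen_post[i];
        }
        /* Prints roughly 84 vs. 26 distinct codes. */
        printf("distinct dark output codes: %d pre, %d post\n", pre, post);
        return 0;
    }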
Thus your software has to know what your display is capable of. It can use this information to show you which parts of the photograph you edit are not shown accurately.
Sounds like an easy thing to fix. I'd suggest the author try making some patches - I don't know about GNOME, but KDE is pretty friendly and easy to contribute to.
It's not just saying "Hey you, go fix that", it's saying "Hey, everyone viewing this comment on a public page: this exists and needs to be fixed. Someone grab a wrench."
So sure, things are fixed at an astronomical rate, but that's because things are also decaying at an astronomical rate, so there's a constant supply of low-hanging fruit.
Or if you prefer to jump on Apple's bandwagon about how regular computers are outdated, a detachable keyboard and mouse for an Android 10"/12" tablet.
Huh? Linux is behind both Windows and OS X in most non-server related areas. They can't even agree on a good compositor...
Given that there is no such thing as just "the Linux desktop" (I mean, my setup can be completely different from another person's), it's hard to say whether it's ahead or behind unless every component option is known to be one or the other.
Well, at least when ignoring the UI/UX mention, which has absolutely nothing to do with the discussion here (color correction is required in many fields, but UI and UX aren't really among them).
The author seems technical enough to make a nice analysis of what's done and what needs to be done. I suspect he might be able to provide this particular fix himself - or, if he can't, he might be able to contribute a well-detailed feature request for others to implement (with that, even I might be able to go and fix it, while without it I surely won't, as I lack the knowledge, the hardware, and in fact even awareness of this problem; only reading this post shed some light on it for me).
Some people don't make the mental switch between "I'll wait for a fix" and "I'll fix it" if they're not used to it, even if they are perfectly capable of fixing it and have time for it. I've seen it in my own case: there were some parts of the stack I never really considered digging into to fix stuff myself, and when I finally tried, it turned out there was no reason to keep myself restrained. It's just a friendly reminder that you can often fix such stuff yourself, and it might not be as hard as it seems.
Source: colord author.
At the other end of the spectrum, when the target audience is a small number of professionals who don't code - for example, advanced graphics or music editors, or an engineering toolbox - open source struggles to keep up with proprietary software because the economic model is less adequate: each professional would gladly pay, say, $200 to cover the development costs of a fantastic product they could use forever, but there is a prisoner's dilemma in that your personal $200 donation does not make others pay and does not directly improve your experience. Because the userbase is small and not software-oriented, occasional contributions from outside are rare, so the project is largely driven by the core authors, who lack the resources to compete with proprietary software that can charge $200 per seat. And once the proprietary software becomes entrenched, there is a strong tendency toward monopolistic behavior (Adobe) because of the large moat and the lack of any opportunity to fork, so people will be asked to pay $1000 per seat every year by the market leader simply because it can.
A solution I'm brainstorming could be a hybrid commercial & open source license with a limited, 5 year period where the software, provided with full source, is commercial and not free to copy (for these markets DRM is not necessary, license terms are enough to dissuade most professionals from making or using a rogue compile with license key verification disabled).
After the 5-year period, the software reverts to an open source hybrid, and anyone can fork it as open source or publish a commercial derivative with the same time-limited protection. The company developing the software gets a chance to cover its initial investment and must continue to invest in it to warrant the price of the latest non-free release, or somebody else might release another free or cheap derivative starting from a 5-year-old release. So the market leader could periodically change, and people would only pay to use the most advanced and innovative branch, ensuring that development investment is paid for and then redistributed to everybody else.
I've wanted to know whether two colors I'm displaying actually get distinguished by the monitor, or whether the LUT maps them to the same output value.
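For a 1D ramp that's easy to check, assuming you can read the per-channel table back (e.g. with drmModeCrtcGetGamma from libdrm); for a monotonic ramp, comparing neighbours finds every collision:

    /* Sketch: given one channel of a 256-entry gamma ramp, report
     * adjacent inputs that collapse to the same output value. */
    #include <stdint.h>
    #include <stdio.h>

    void report_lut_collisions(const uint16_t lut[256])
    {
        for (int i = 1; i < 256; i++)
            if (lut[i] == lut[i - 1])
                printf("inputs %d and %d both map to output %u\n",
                       i - 1, i, (unsigned)lut[i]);
    }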
What platforms do they all use? Not Macintosh, despite its reputation for being the platform for "graphics professionals" (it was missing 10 bit/channel color until very recently). And not Linux, despite its use in render farms.
They use Windows 10 and HP DreamColor monitors. That's the only platform that works and works well for people who need to care about color.
Further, HP DreamColors have tons of problems and aren't considered solid for color critical work (but are fine for semi-color-accurate stuff like intermediate comps, etc.). Color accurate work is done over SDI with dedicated LUT boxes handling the color transforms, and the cheapest monitors being $7500 Flanders Scientific Inc 25" OLED panels.
A while ago I had a look at Eizo's 10-bit/channel TFTs, which looked impressive to me (from a layman's perspective); do you have any opinions on those?
For displays that don't need to be color critical, Eizo is about the best. Lots of "good enough" panels from LG, Acer, and Dell, though. I actually have a gaming panel that calibrated surprisingly well and holds those numbers.
The best consumer displays by far, though, are the LG OLED televisions. They're so good that we're installing them in lots of mid-level suites as client monitors (aka close enough to our color critical panels).
I've owned it for about a year and the red channel on mine exhibits painfully obvious burn-in patterns.
This sounds pretty interesting. What do you actually do, in layperson-y terms?
It's more stuff like 'what is it about this process that makes a dedicated colour specialist necessary?', 'what are the things they're supposed to accomplish?', 'what are their technical and creative constraints/inputs/deliverables?', etc.
The inputs are roughly "An ordered sequence of footage clips" and the output is roughly "A final projection/broadcast/download-ready movie"
There are exceptions - some apps (ZBrush) don't run on Linux, so there are Windows machines around, but in general >= 95% of machines the artists and developers use are Linux at the big places.
And most of those apps use OpenColorIO as a framework for handling colourspaces.
The lineage is from SGI, where many of these applications were born, but as the company faltered and consumer graphics hardware took off thanks to gaming, Linux became the natural home.
(I've worked at a few of the larger VFX studios mentioned throughout the thread)
That's for rendering, where the OS and Desktop experience doesn't really matter, and the cheaper it is the better.
Few pros do the actual editing and color work (where the decisions are made, not the rendering part) on Linux.
Every major color house I've worked in runs Linux exclusively in their suites (CO3, The Mill, Technicolor, etc).
That's not to say Windows and OS X suites don't exist - I use them, and my own suite runs Windows - but the highest end of color is basically Linux only.
The recommended setup is a Supermicro chassis with dual Xeons (12-core CPUs minimum recommended, 20-core preferred), minimum 32GB RAM (usually at least 64, with 128+ common on high-end systems), an SSD for the OS, Thunderbolt (minimum)/PCIe/10GbE/fibre (preferred) attached storage - usually an 8-bay RAID 6 or similar at minimum - and almost always NVIDIA GPUs, with 8x 1080 Tis or the latest Titans being the most common setup I see.
This runs on CentOS or RHEL 6.8 or 7.3.
Video signal is output over SDI from a PCIe card to a LUT box (for color transforms), then to a color critical display (FSI, Sony, or Dolby typically, with the best suites using cinema projectors). A second SDI feed runs out to a box showing video scopes. Everything is usually calibrated with Light Illusion software and a Minolta colorimeter probe (typically a 3rd-party service does this every few months).
The GUI monitor(s) are usually just regular consumer whatever.
The software is controlled by a large, $30K control panel that looks similar to an airplane cockpit.
That's most of the important stuff, but I can fill in details where you're curious.
Can you help me answer two things, as both have bugged me for years..?!
How do they achieve a look of tinted monochrome in films that are still actually in colour? If that doesn't make sense - I'm thinking of films like Heat, where there is often a strong blue tint which gives the feel of monochrome, but it is all in colour. I found I was able to replicate it somewhat by combining the image with a quadtoned version, but it was still fairly far off tbh.
The other question is - how does colour gamut relate to the brightness of the display? Is it all to do with the dynamic range of each channel - i.e. the difference between black and, say, red - rather than overall brightness? I was at a photography show recently and was blown away by some of the prints made by some of Fuji's printers. Is it ever possible to match the gamut our eyes can see? And what colour space/gamut do you usually work in? Sorry, two extra Qs there...
Thanks, and thanks for the fascinating info already.
At the risk of asking a silly question, what does the LUT-box do that couldn't be done in software (or, I guess, why isn't it done in software)?
This stuff is fascinating to me.
Do you know of any good YouTube videos on colorist hardware? I've seen a couple of videos on workflow, but neither went into the guts of the machines and LUT-boxes.
If you need color accuracy then you're calibrating your monitor hardware.
People using Macs certainly do care about ColorSync. That’s the name of the software which uses the display characterization to keep colors looking as expected throughout the operating system and most applications.
Using LUTs either at the application or OS level to adjust colour information is a big no-no, although that doesn't stop some people from doing it. You simply don't want to change your colour space until you absolutely have to.
The point of calibrating your monitor (which is a hardware + firmware level problem) is to see how your RGB image will look on a colour space restricted piece of hardware (for example in video this is often 12-bit RGB --> Rec709).
Same story if you want to show your image on a display with a different gamut.
Most gamut mapping algorithms used in practice (whether on a display or in software) are actually pretty mediocre in my opinion. It would be possible to do substantially better by writing your own code, at the expense of being a bunch of work. Alas.
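For a sense of what "mediocre" looks like here, the most naive fallback, and still a common default, is clipping each channel independently after conversion to the display primaries; it's fast, but it shifts hue and flattens detail wherever a color lands out of gamut. A sketch:

    /* Naive per-channel gamut clip: anything outside [0,1] in the
     * display's linear RGB just gets clamped. Cheap, but hue-distorting
     * for out-of-gamut colors; smarter mappings desaturate toward the
     * white point while holding hue constant. */
    static inline double clamp01(double v)
    {
        return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
    }

    void clip_to_gamut(double rgb[3])
    {
        for (int i = 0; i < 3; i++)
            rgb[i] = clamp01(rgb[i]);
    }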
P.S. The Wikipedia article about color space (and articles about many other color-related topics) is pretty terrible, but I’ve been too lazy to rewrite it.
I currently work for a media conglomerate where colour tends to matter a lot to both the print and digital channels. I'm not sure which monitors they use, as I tend to work rather separately from that group, but they all work on Macs - that's been the case since the late '80s/early '90s in publishing, and there seems to be no move to stray.
Higher end displays are already pretty decently calibrated out of the factory, but if you want to be exact you will need to buy an external piece of hardware that will measure your display’s colors and tell you how off they might be.
The author bought a piece of color calibrating hardware that was meant to be open in design and work with Linux, as presumably he wants to support these efforts.
But he encountered a bevy of problems, ranging from packages not updated in a while to things that just plain don’t work as documented, and got frustrated.
Understandable, since on macOS or Windows with proprietary hardware, this would have been a 5 minute process. The author is sad and frustrated that the open source alternatives aren’t there.
This is true with even the highest quality color critical displays like we use in film/tv color correction ($5-$50k+ for 25" panels).
Displays aren't calibrated at the factory. To use an LED-backlit display as an example, not every single LED in the world is created equally. Not every LED is going to give off the exact same wavelength for the same current value.
Extrapolate this out to all the other components and this is the reason that your monitor has built-in physical controls for changing RGB/contrast/brightness values to begin with.
Calibration accounts for this.
As for why ICC profiles are used instead of just changing settings on the monitors: the OSD options usually don't offer enough fine-grained control to get things just right. Display makers are typically targeting mainstream consumers, so they provide simple adjustment controls.
Standard values of the voltages, currents, timings, etc. that are applied to the LEDs, liquid crystals, and other electronic components of the display in order to get the desired colors are only a starting point; a calibration that measures the differences between devices and compensates for them is needed because of manufacturing variation and other accidental differences.
In practice, monitors change color over time (much more common in CCFL-backlit monitors, I think) and even shift with brightness, so we have to do it “at runtime”.
In addition to that, the built-in configuration options are often very simplified, and the controls are coupled to each other. Low-resolution control plus simplified options means that you often can't dial in perfect color reproduction. Hence ICC profiles.
It is physically impossible to get a single, uniform colour space across all surfaces.
Spending €100 also isn't an insignificant amount, and for that price I would expect what I'm buying to work properly.
Source: Person that designed the ColorHug hardware.
I'd like to add I'm completely appreciative and thankful of what you're doing.
Source: Am the person that sits in a shed and builds each ColorHug.
Better support? Open Hardware means Open Hardware, and nothing more - you get access to the schematics and documentation, and sometimes also the right to produce similar devices yourself. You can definitely expect greater hackability, but an "Open Hardware" sticker means nothing in terms of support or reliability. It might be better, it might be worse; you can't tell.
The price in such projects is directly related to the production scale. How many EX3s, Spyders, and ColorHugs have been produced? Open Hardware projects (especially the equivalents of already-available non-free devices) are often costlier because they initially attract only the people who really care about hackability, which keeps volumes low, which keeps prices high, which further strengthens that relation, and the circle is closed.
With the userbase kept small, the users who do contribute usually keep the firmware/software just good enough to scratch their own itches.
Please remember that hardware is not software; open hardware comes with a completely different set of challenges than free (open) software, and when it comes to hardware, you often really need to pay extra for the freedom - not just with your time, as we were used to with early FLOSS, but also with your money. If you choose a project because of its "Open Hardware" sticker, it's more than likely that it will be costlier and rough around the edges, because it's usually harder to roll with such projects than with closed competitors, and the ROIs are usually way smaller too. That's just how it is, and there's nothing surprising about it; if you care about openness, you have to accept it, otherwise it will never get better.
As for software, DisplayCAL is actually very well regarded in pro color and considered one of the only three serious choices, the others being CalMAN and the big dog, Light Illusion.
tl;dr : you get what you pay for.
Ok, but I'm not sure how that explains it. You'll need to go into more detail.
We’ve seen this play out dozens of times since the ‘90s, and the startups keep making the same mistakes.
They should at least read their predecessors' retrospectives and strive to make different mistakes.