A single wavelength can't reproduce all visible colors. These pixels are variable wavelength, but can only produce one at a time, so you'd still need at least 2 of these pixels to reproduce any visible color.
The fundamental problem is that color space is 2D[1] (color + brightness is 3D, hence 3 subpixels on traditional displays), but monochromatic light has only 1 dimension to vary for color.
[1]: https://en.wikipedia.org/wiki/Chromaticity
Ha, yea, in particular these monochromatic pixels can't simply be white. Notably ctrl-f'ing for "white" gives zero results on this page.
Relatedly, the page talks a lot about pixel density, but this confused me: if you swap each R, G, or B LED with an adjustable LED, you naively get a one-time 3x boost in pixel area density, which is a one-time sqrt(3)=1.73x boost in linear resolution. So I think density is really a red herring.
But they also mention mass transfer ("positioning of the red, green and blue chips to form a full-colour pixel") which plausibly is a much bigger effect: If you replace a process that needs to delicately interweave 3 distinct parts with one that lays down a grid of identical (but individually controllable) parts, you potentially get a much bigger manufacturing efficiency improvement that could go way beyond 3x. I think that's probably the better sales pitch.
It would be interesting to plot all of the achievable colors of this LED on the chromaticity diagram. Presumably it'd be some sort of circle/ellipse around white but might have some dropouts in certain parts of the spectrum?
Pure wavelengths are on the horseshoe-shaped outline of the CIE 1931 space. The straight line connecting the ends of the horseshoe is the line of purples, which also isn't monochromatic.
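If anyone wants to actually plot it, here's a rough Python sketch of the spectral locus using the piecewise-Gaussian fits to the CIE 1931 color matching functions from Wyman/Sloan/Shirley (2013) - I'm quoting the fit coefficients from memory, so treat them as approximate:

    import numpy as np

    def g(x, mu, s1, s2):
        # Piecewise Gaussian: different widths left/right of the peak.
        s = np.where(x < mu, s1, s2)
        return np.exp(-0.5 * ((x - mu) / s) ** 2)

    def cmf(wl):
        # Approximate CIE 1931 color matching functions.
        x_bar = 1.056*g(wl, 599.8, 37.9, 31.0) + 0.362*g(wl, 442.0, 16.0, 26.7) - 0.065*g(wl, 501.1, 20.4, 26.2)
        y_bar = 0.821*g(wl, 568.8, 46.9, 40.5) + 0.286*g(wl, 530.9, 16.3, 31.1)
        z_bar = 1.217*g(wl, 437.0, 11.8, 36.0) + 0.681*g(wl, 459.0, 26.0, 13.8)
        return x_bar, y_bar, z_bar

    def chromaticity(wl):
        X, Y, Z = cmf(np.asarray(wl, dtype=float))
        s = X + Y + Z
        return X / s, Y / s

    # Sweep pure wavelengths: the (x, y) points trace the horseshoe, never its interior.
    wls = np.arange(400, 701, 5)
    x, y = chromaticity(wls)

Everything a single tunable pixel can show is somewhere on that curve; anything inside the horseshoe needs a mixture.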
Presumably they wouldn't need to do a pixel-to-pixel mapping, but could account for the wavelengths of neighbouring pixels to produce a more faithful colour reproduction at an effectively lower resolution.
The key to this is using the same process to get all the colors. For separate R, G, B pixels you need 3 different processes and can't build them on the same chip - you have to assemble them. Using one process for everything is what allows the vast improvement in pixel density.
This is definitely a problem; if the control circuitry is up for it you could PWM the pixel color, basically dithering in time instead of space to achieve white or arbitrary non-spectral colors.
It also can't produce white or anything else in the interior of this diagram (as well as, as you mention, shades of magenta and purple that lie on the flat lower edge):
Single chip DLP projectors strobe red, green, blue, white sequentially. Modern DLPs use separate light sources (LED/Laser) and pulse them at a high frequency - kilohertz I assume. Before we had high-power LEDs DLP projectors used a xenon lamp and a color wheel (https://www.projectorjunkies.com/color-wheel-dlp/) spinning at as little as 60 revolutions per second. This caused a "rainbow effect" which was very annoying to some people, but apparently enough people didn't notice it that those products got sold anyway. So somewhere around 180Hz is the bare minimum.
According to this, humans can't see flicker above 100 Hz for most smooth images, but if the image has high frequency spatial edges then they can see flicker up to 500-1000 Hz. It has to do with saccades.
This reminds me of the observation I had in high school that I could immerse LEDs in liquid nitrogen and run them at higher than usual voltage and watch the color change.
I got a PhD in condensed matter physics later on but never got a really good understanding of the phenomenon but I think it has something to do with
> I got a PhD in condensed matter physics later on but never got a really good understanding of the phenomenon but I think it has something to do with
The color of most* LEDs is controlled by the band gap of the semiconductor they're using. Reducing the temperature of the material widens the band gap, so the forward voltage of the diode increases and the wavelength of the emitted light gets shorter.
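Back-of-the-envelope version of that relationship, with purely illustrative numbers:

    # lambda (nm) ~ hc / E_gap ~ 1240 / E_gap(eV): a wider gap at low temperature
    # means a higher forward voltage and a shorter (bluer) emission wavelength.
    def peak_wavelength_nm(bandgap_ev: float) -> float:
        return 1239.84 / bandgap_ev  # hc expressed in eV*nm

    print(peak_wavelength_nm(1.9))  # ~652 nm, red
    print(peak_wavelength_nm(2.0))  # ~620 nm, orange: cooling widened the gap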
No, they're extremely common. Every white LED in the market is phosphor-converted: they're blue LEDs, usually ~450nm royal blue, with yellow-emitting phosphors on top. Different phosphors and concentrations give different color temperatures for the final LED, from about 7500K through 2000K. (Last I looked, anything below about 2000K didn't look right at all, no matter what its manufacturer claimed.)
Bigger LEDs are often phosphor-converted as well. Most industrial grow lamps use this type of LED. So they're around! You're probably looking at some right now!
I'm assuming that in most cases they'll just make these act as RGB displays, either by sequentially tuning the wavelength of each pixel to red, green, blue in a loop, or by assigning each pixel to be red, green, or blue and just having them act as subpixels.
Surely the 'tricks' we have for RGB displays would be more effective when every element has the same color range as every other. For example, the subpixel rendering of typography for RGB displays had an unavoidable rainbow halo that would no longer be an issue for most colors of text with polychromatic pixels.
This seems like a non-problem, cut the display resolution in half on one axis and reserve two 'subpixels' for each pixel. Then you have a full color display with only one physical pixel type and that needs one less subpixel. These displays could even produce some saturated colors with specific wavelengths that can't be represented on regular rgb displays.
Assuming they can PWM the brightness while getting consistent color (seems reasonable since microLEDs have extremely fast response time) then I think what you're saying would work great. It would be akin to 4:2:2 chroma subsampling where luminance (which we have higher acuity for) gets more fidelity and the resulting image quality is closer to full-res than half-res.
Human eyes have three different color receptors, each tuned to its own frequency, so it's already 3D. However, apart from human perception, color, just like sound, can have any combination of frequencies (when you split the signal with a Fourier transform), and many animals do have more receptors than us.
Humans perceive all stimulation in the same ratio of the L, M, and S cones to be the same color, but with different brightnesses. So only two dimensions are necessary to represent human-visible colors, hence HSV or L*a*b* space.
There is a fair point there, but a few things - HSV and Lab are only models, they don’t necessarily capture all visible colors (esp. when it comes to tetrachromats). Brightness is a dimension, and can affect the perception of a color, esp. as you get very bright - HSV and Lab are 3D spaces. Arguing that brightness should be ignored or factored out is problematic and only a small step from arguing that saturation should be factored out too and that color is mostly one dimensional.
According to the opponent-process model of colour perception you need three axes to represent all colours: luminosity [L+M+S+rods], red-green [L-M] and blue-yellow [S - (L+M)].
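As a rough linear map (unit weights here are a simplification; real opponent models weight the cones unequally and fold in the rod signal differently):

    import numpy as np

    # Rows map (L, M, S) cone responses onto the three opponent axes.
    LMS_TO_OPPONENT = np.array([
        [ 1.0,  1.0,  1.0],   # luminosity  ~ L + M + S
        [ 1.0, -1.0,  0.0],   # red-green   ~ L - M
        [-1.0, -1.0,  1.0],   # blue-yellow ~ S - (L + M)
    ])

    def opponent(lms):
        return LMS_TO_OPPONENT @ np.asarray(lms, dtype=float)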
You only need to mix two different wavelengths to render any human perceptible color. They give you four parameters to work with (wavelength1, brightness1, wavelength2, brightness2) which makes it an underdetermined system with an infinite number of solutions for all but the pure, spectral boundary of the gamut.
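To make that concrete: once you fix the pair of wavelengths, the two brightnesses fall out of a tiny least-squares solve. Rough sketch, where xyz_of is a stand-in for a lookup into the CIE 1931 color matching tables:

    import numpy as np

    def brightnesses_for(target_xyz, wl_a, wl_b, xyz_of):
        # Columns: XYZ of unit-power monochromatic light at each wavelength.
        A = np.column_stack([xyz_of(wl_a), xyz_of(wl_b)])
        p, *_ = np.linalg.lstsq(A, target_xyz, rcond=None)
        exact = np.all(p >= 0) and np.allclose(A @ p, target_xyz, atol=1e-3)
        return p, exact  # exact=False: this particular pair can't reach the target

Sweep (wl_a, wl_b) pairs and you'll find many that reproduce a given color, which is the infinite number of solutions I mentioned.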
Total tangent, but is that because of the wavelengths involved? I imagine a “sound camera” would have to be huge to avoid diffraction (but that’s just intuition), requiring impractically large ears. Likewise I imagine that perceiving “chords” of light requires sensing on really tiny scales, requiring impractically small complex structure in the eyes?
There are plenty of monochromatic cases. Right now HN has a lot of orange.
Dynamic resolution / subpixel rendering. Retina looks really good already, not sure if the effect would be relevant or interesting but it might open up something new
You make it sound like there is still an easy-to-spot difference. When I look at the print quality of pictures in a newspaper, it's the opposite, and at least for me, I don't need more than retina, and I was very eager to switch to 4k to have higher dpi.
But at 14" with retina I'm very happy.
I'm actually more surprised by HDR on my LG OLED 4K. It's actually quite nice when done well.
Newspapers are famously printed on the lowest quality recycled paper and cheapest print process available, because they're disposable. Compare a retina screen to a coffee-table-style reference book with high-resolution photos - the kind you can use a magnifying glass on - and you'll still notice differences.
Or just look at what companies do when manufacturing technologies allow them to push for higher densities: iPhones now exceed 450 dpi, and the 8" iPads exceed 300; if the technology allowed it, Apple would most likely introduce higher densities on larger iPads and Macbooks as well.
One thing I noticed is that they were talking about demoing 12,000 ppi displays, which is way more resolution than you're going to resolve with your eye. So using 2 pixels is still probably a win.
> These pixels are variable wavelength, but can only produce one at a time
Citation needed. The article doesn't say anything about how the colors are generated, and whether they can only produce one wavelength at a time.
Assuming they are indeed restricted to spectral colors, dithering could be used to increase the number of colors further. However, dithering needs at least 8 colors to cover the entire color space: red, green, blue, cyan, magenta, yellow, white, black. And two of those can't be produced using monochromatic light -- magenta and white. This would be a major problem.
Dithering just black, red, green, and blue is sufficient to produce a full-colour image. Everything else is a combination of those. That's effectively how normal LCD or OLED monitors work!
No, normal monitors use additive color mixing, but dithering isn't additive, it's averaging. With just red, green, blue, black you couldn't dither cyan, magenta, yellow, white, just some much darker versions of them. E.g. you get grey instead of white.
You can check this by trying to dither a full color image in a program like Photoshop. It doesn't work unless you use at least the 8 colors.
In fact, ink jet printers do something similar: They use subtractive color mixing to create red, green and blue dots (in addition to cyan, magenta, yellow and black ink and white paper), then all the remaining shades are dithered from those eight colors. It looks something like that: https://as2.ftcdn.net/v2/jpg/01/88/80/47/1000_F_188804787_u1... (though there black is also created with subtractive color mixing).
The color mixing type used by dithering is sometimes called "color blending". Apart from dithering it's also used when simulating partial transparency (alpha).
Emissive means additive, not averaging. Cyan, magenta and yellow are not primaries here. Red and green light adds up to perceptual yellow. Red, green and blue adds up to perceptual white (or grey, at very low luminance). Treating each of these pixels like subpixels (which is arguably a form of dithering) will produce a full color image (at a lower resolution), but given that they did not demonstrate it, color reproduction and/or luminance likely is far from competitive at this point.
That's not true. Dithering can be used in emissive screens, but dithering is not additive. If you mix red and green with color blending (e.g. by dithering), you get less red and less green in your mix, and therefore the resulting mix (a sort of ochre) is different from additive color mixing (yellow), where the amount of red and green stays the same. Or when you mix black and white, you get white with additive color mixing, but grey with blending. You also get grey when blending (dithering) red, green and blue. You can test this in software like Gimp, you won't be able to dither a full color image without at least the eight colors I mentioned.
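Toy numbers for what I mean, in linear RGB:

    import numpy as np

    full_red   = np.array([1.0, 0.0, 0.0])
    full_green = np.array([0.0, 1.0, 0.0])

    # Spatial dithering: half the pixels red, half green. The area-averaged
    # emission is the mean of the two patches, not their sum.
    dithered = 0.5 * full_red + 0.5 * full_green  # [0.5, 0.5, 0.0] -> darker yellow
    additive = full_red + full_green              # [1.0, 1.0, 0.0] -> full-brightness yellow

The hue mixes fine, but the luminance is just the average of the patches - that's why you need yellow/white in the palette to hit those colors at full brightness.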
I am not saying you can use the exact same math as in an image manipulation program, these work with different assumptions. Mixing colors in those is usually not correct anyway.
I am saying you can think of subpixels, which already exist, as a form of dithering. Most displays use just three primaries for subpixels - red, green and blue. Their arrangement is fixed, but that is not a limitation of this new technology.
This vaguely reminds me of "CCSTN" (Color Coded Super Twisted Nematic) LCD displays, which were used in a few Casio calculators to produce basic colour output without the usual RGB colour filter approach.
Hm, thinking about this further, this would need dithering to work properly (which probably works fine, but the perceived quality difference would mean pixel density comparisons aren't apples-to-apples)
Presumably, you get to control hue and brightness per-pixel. But that only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out), but dithering can solve that. Coming up with ideal dithering algorithms could be non-trivial (e.g. maybe you'd want temporal stability).
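For example, an ordered (Bayer) dither is temporally stable by construction, since each pixel always compares against the same threshold - rough sketch:

    import numpy as np

    BAYER_4 = (1 / 16.0) * np.array([[ 0,  8,  2, 10],
                                     [12,  4, 14,  6],
                                     [ 3, 11,  1,  9],
                                     [15,  7, 13,  5]])

    def ordered_dither(channel):
        # channel: float image in [0, 1]; returns a per-pixel on/off mask.
        # The threshold pattern is fixed, so it doesn't shimmer between frames.
        h, w = channel.shape
        thresholds = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (channel > thresholds).astype(np.uint8)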
You really can't think about single-wavelength tunable pixels as anything except the edge of HSL.
I think about it from the CIE "triangle" where wavelength traces the outer edge, or even the Lab (Luminance a-green/red b-yellow/blue) color space since it's more uniform in perceivable SDR color difference (dE).
One key realization is that although 1 sub-pixel can't cover the gamut of sRGB (or Rec2020), you only need 2 with wavelength and brightness control rather than 3 fixed RGB ones. Realistically, this allows something like super-resolution because your blue (and red) visual resolution is much less than your green (e.g. 10-30 pix/deg rather than ~60 ppd). However, your eye's sensitivity off the XYZ peaks is lower and perceived brightness would fall.
I guess what I'm saying is that a lot of the assumptions baked into displays have to be questioned and worked out for these kinds of pixels to get their full benefit.
Sure, but PPI/DPI headline figures are usually counted per-pixel, not per-subpixel, so the raw density numbers aren't directly comparable (and I'm not really sure what a fair "adjustment factor" would be)
> only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out)
Note that even if we restrict our attention to the max-saturation curve, these pixels can't produce shades of purple/magenta (unless, as you say, they use temporal dithering or some other trick).
You could use several pixels as sub-pixels or if the color shift time is fast enough, temporal dithering.
Even if these could produce just three wavelengths, if you can pulse them fast enough and accurately, the effect would be that color reproduction is accurate (on average over a short time period)
I'm not sure why saturation couldn't be controlled.
I probably missed something in the article, though I do see ex. desaturated yellow in the photographs so I'm not sure this is accurate.
If you can't control saturation, I'm not sure dithering won't help - I don't see how you'd approximate a less saturated color from a more saturated color.
HSL is extremely misleading, it's a crude approximation for 1970s computing constraints. An analogy I've used previously is think of there being a "pure" pigment, where saturation is at peak, mixing in dark/light (changing the lightness) changes the purity of the pigment, causing it to lose saturation.
Any desaturated colors I saw were also very bright, so I blame it on overexposure of the camera. Probably looked totally different in person.
Unsaturated colors aren't a problem, you just need to mix a bit of the opposite color. Unsaturated purples will be a challenge because you need to mix 3 wavelengths rather than just 2.
Saturation can't be controlled on a per-pixel basis because, per the article, they're tuned to a specific wavelength at any given time.
You're right though, there appear to be yellows on display. Maybe they're doing temporal dithering.
Edit: Oh wait, yellow doesn't need dithering in any case. Yellow can be represented as a single wavelength. Magenta on the other hand, would (and there does seem to be a lack of magenta on display)
Honestly might just be the limits of photography, there's so much contrast between the ~97 L* brightness of pure yellow and black that the sensor might not be able to capture the "actual" range.
I've been called a color scientist in marketing, but sadly never grokked the wavelength view of color. It sounds off to me, that's a *huge* limitation to not mention. But then again, if they had something a year ago, it's unlikely that e.g. Apple folds the microLED division it's been investing in for a decade. Either A) it sucks or B) it doesn't scale in manufacturing or C) no one's noticed yet. (A) seems likely given their central claim is that (B) is, at the least, much improved.
That's not hugely surprising given that (I believe) LEDs have always shifted spectrum-wise a bit with drive current (well, mostly junction temperature, which can be a function of drive current.)
I guess that means they're strictly on/off devices, which seems furthered by this video from someone stopping by their booth:
You can clearly see some pretty shit dithering, so I guess they haven't figured out how to do PWM based brightness (or worse, PWM isn't possible at all?)
I guess that explains the odd fixation on pixel density that is easily 10x what your average high-dpi cell phone display has (if you consider each color to be its own pixel, ie ~250dpi x 3)
It seems like the challenge will be finding applications for something with no brightness control etc. Without that, it's useless even for a HUD display type widget.
In the meantime, if they made 5050-sized LEDs, they would probably print money...which would certainly be a good way to fund further development of brightness control.
I doubt they can. Probably the process only works (or yields) small pieces, otherwise they'd be doing exactly what you suggest.
I also notice that their blues look terrible in the provided images. Which will be a problem. I don't think they get much past 490nm or so? That would also explain why they don't talk at all about phosphors, which seem like a natural complement to this tech... I don't think they can actually pump them. Which is disappointing :(
I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others. Ultimately the brightness of the whole display is constrained by the least bright pixels because the rest have to be dimmed to match. Judging by their pictures they have not solved this problem.
> I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others.
I instead understand that this is false. Available MicroLED screens (TVs) are in fact brighter than normal screens.
The issue with MicroLED is instead that they are extremely expensive to produce, as the article points out, due to the required mass transfer. Polychromatic LEDs would simplify this process greatly.
I should have specified that I was talking about microLED microdisplays, as shown in the article. Sounds redundant but there are also large format microLED displays which are manufactured by individually cutting LEDs from a chip and placing them on a different substrate with bigger spacing. This process allows replacing the ones with poor brightness during assembly. For microdisplays, on the other hand, the LEDs are fabricated in place and not individually moved afterwards. The chip is the display.
Would be fun if displays come full circle with variable addressable geometry/ glowing goo too.
Not quite a vector display, but something organic that can be addressed with some stimulators like reaction-diffusion or Gaussian, FFT, Laplacian, Gabor filters, Turing patterns, etc.
Get fancy patterns with the lowest amount of data.
I didn't realize we even had a discrete LED tunable across the visible spectrum, let alone a Micro-LED array of them. Anybody know where I can buy one? I want to build a hyperspectral imager.
An imager/camera: by illuminating a scene (or light box) solely with the tunable LED, sweeping it across the spectrum, and capturing it with an achromatic camera.
Asking because I have a 410x410px hyperspectral imager that has an aligned 1886x1886px panchromatic imager that is used to perform pan-sharpening of the HSI data, bringing it up to 1886x1886. I'd never heard of a panchromatic camera before I got involved in this business and I've never heard of an achromatic camera either. All I seem to find is achromatic lenses.
Yes, "panchromatic" is probably the more accurate term for it. It's just a camera with no color filters and a known spectral response curve that's high enough across the frequencies being imaged.
Ah, yeah, I'd say that fits 'panchromatic camera' then. The panchromatic imager on my setup uses the exact same CCD and covers the exact same spectral range (350nm-1000nm), but it doesn't have the HSI lenses/filters. The company actually sells a smaller unit that is made from the same imager, but with the HS lens/filters.
Btw, is that still reasonably effective if the scene has ambient illumination, but (in addition to shining each wavelength at it) you take a monochrome photo in only the ambient light and you subtract that out from all your other images?
Sure that would work. The higher the ratio of controlled/ambient light, and the slower you can do the sweep, the better for SNR of the hyperspectral image.
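Roughly like this, where set_led_wavelength and capture_frame are stand-ins for whatever LED-driver and camera APIs you actually have:

    import numpy as np

    def capture_hypercube(wavelengths_nm, set_led_wavelength, capture_frame):
        # LED off first: record the ambient-only frame to subtract later.
        set_led_wavelength(None)
        ambient = capture_frame().astype(np.float64)

        bands = []
        for wl in wavelengths_nm:
            set_led_wavelength(wl)
            frame = capture_frame().astype(np.float64)
            bands.append(np.clip(frame - ambient, 0, None))  # remove ambient light

        return np.stack(bands, axis=-1)  # shape (H, W, n_bands)

If the ambient light drifts over the course of the sweep, re-taking the dark frame every few bands helps.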
I think a lot of these comments are missing the point-even if you have to reduce their reported density numbers by half, they made a display with dimensions of "around 1.1 cm by 0.55 cm, and around 3K by 1.5K pixels", which is insane! All without having to dice and mass-transfer wafer pieces, since every pixel is the same.
A lot of the article is focused on how this matters for the production side of things, since combining even 10 um wafer pieces from 3 different wafers is exceedingly time consuming, which I think is the more important part. Sure, the fact that each emitter can be tuned to "any colour" might be misleading, but even if you use rapid dithering like plasma displays did, and pin each emitter to one wavelength, you suddenly have a valid path to manufacturing insanely high density microLED displays! Hopefully this becomes viable soon, so I can buy a nice vivid and high contrast display without worrying about burn in.
I'm really curious about the reproducibility. The color is decided by the bandgap and the bandgap is tunable by voltage, but how temperature dependent is it, and how much does production variability impact it?
I imagine these displays could have color sensors attached to self-calibrate.
Or the variability is low and all you need is very precise voltages.
I think the first versions will be RGB displays with fixed colors, just no longer needing mass transfer. You could use tens of subpixels per pixel, reducing all worries about color resolution.
Make these into e.g. 1x1cm mini displays and mass transfer those into any desired display size.
OLED tech has been very transformative for lots of my old gear (synthesizers and samplers mostly) that originally came with backlit LCD displays. But the OLEDs are offered in static colors, usually blue or amber. Sometimes white, red, or green.
It would be very cool to have a display with adjustable color.
The promotional document focuses on wavelength tunability but I imagine brightness at any one wavelength suffers because to emit at one wavelength requires an electron to lose the amount of energy in that photon by transitioning from a high to low energy state. Maximum brightness then corresponds to how many of these transitions are possible in a given amount of time.
Some states are not accessible at a given time (voltage can tune which states are available) but my understanding is the number of states is fixed without rearranging the atoms in the material.
These still produce a single [adjustable] wavelength, which means some colors that are displayable on displays of today are not representable using just one of these, and multiples will be required.
Yes, it’d be two subpixels instead of the current three. It’s not clear that that’s worth the added complexity of having to control each subpixel across two dimensions (brightness and wavelength) instead of just one (brightness).
Yes, mix two complementary colors like orange and cyan. You just need two wavelengths that hit all three cone types [0] in the right ratio. There’s the possibility that it’s subject to more variation across individuals though, as not everyone has exactly the same sensitivity curves.
Human vision in the yellow (~590nm) region is known to be extremely sensitive to particular wavelengths. Observe how quickly things go from green through yellow to amber/orange!
Every single white LED bulb you buy for your light fixtures is a mix of blue LED and yellow phosphor, so in practice it's no problem at all. Although I do concede that the yellow is probably not monochromatic.
It is 100% not monochromatic and that makes all the difference.
Here's one model I'm fairly familiar with, having evaluated it for design-in to a product a few years back: https://www.lightstec.com/wp-content/uploads/2018/10/Philips... (apologies for the non-authoritative link, their entire datasheet server appears to be down....)
Take a look at page 8 (PDF page 9), Figure 4, "Relative Spectral Distribution vs. Wavelength". Look at those spectral curves and what that phosphor really does. See that nice broad peak, that's pretty insensitive to the exact details? A little shift in the peak doesn't change the output much. And yet, they still bin white LEDs intensively!
These things just do not work with monochromatic emission in the orange. And the phosphor isn't even that good at low color temperatures (CCTs). Below about 2000K-2400K (ish), this approach doesn't work: the resulting LED looks like yellow trash, not like you'd expect (it should look something like a candle flame). So even phosphors can't get you down all that far in CCT. (There are probably expensive phosphors that can do it... but none were in mass production five or six years ago when I did a deep search.)
Or if pixel density is high enough, adjacent pixels could display the colors to combine with no flickering. Unlike regular RGB subpixels, this would only be needed for areas where the color cannot be displayed by an individual pixel alone.
Yeah, and both techniques can be combined, which is common with LCD screens, although it does sometimes lead to visible moving patterns when viewed close up.
There’s more flexibility with tunable wavelengths, though, since there will often be multiple solutions for what colors and intensities can be combined to create a particular photoreceptor response. By cycling through different solutions, I wonder if you could disrupt the brain’s ability to spot any patterns, so that it’s just a very faint noise that you mostly filter out.
Sure, but that’s assuming you need a higher rate than is already used for brightness. That’s a question I think can only be determined experimentally by putting real human eyes on it, although I think you could do the experiment with traditional RGB LEDs. But the other question is whether the wavelength tuning can be changed at the same rate as intensity.
I bet you might run into some interesting problems trying to represent white with two wavelengths. For example, colorblind people (7% of the population) might not perceive your white as white. And I wonder if there is more widespread variation in human eye responses to single wavelengths between primary colors that is not classified as colorblindness but could affect the perception of color balance in a 2-wavelength display.
LEDs are somewhat temperature-sensitive devices, and getting repeatable high-granularity bit-depth may prove a difficult problem in itself.
There are ways to compensate for perceptual drift like modern LCD drivers, but unless the technology addresses the same burn-in issues with OLED it won't matter how great it looks.
You may want to look at how DMD drivers handled the color-wheel shutter timing to increase perceptual color quality. There are always a few tricks people can try to improve the look at the cost of lower frame rates. =)
Incredible accomplishment, but the question remains what this will look like at the scale of a display on any given consumer device.
Of course, it's only just now been announced, but I'd love to see what a larger scale graphic looks like with a larger array of these to understand if perceived quality is equal or better, if brightness distribution across the spectrum is consistently achieved, how pixels behave with high frame rates and how resilient they are to potential burn-in.
They already have these, but people need to modify the GPU designs before it is really relevant. The current AI hype cycle has frozen development in this area for now... so a super fast 1990's graphics pipeline is what people will iterate on for awhile.
Nvidia is both a blessing and a curse in many ways for standardization... =3
I can certainly see these being useful in informational displays, such as rendering colored terminal output. The lack of subpixels should make for crisp text and bright colors.
I don't see this taking over the general purpose display industry, however, as it looks like the current design is incapable of making white.
My ultimate hope is that this will allow us to store and display color data as Fourier series.
Right now we only represent colour as combinations of red, green, and blue, when a colour signal itself is really a combination of multiple "spectral" (pure) colour waves, which can be anything in the rainbow.
Individually controllable microLEDs would change this entirely. We could visualize any color at will by combining them.
It's depressing that nowadays we have this technology yet video compression means I haven't seen a smooth gradient in a movie or TV show in years.
The human eye can't distinguish light spectra producing identical tristimulus values. Thus for display purposes [1], color can be perfectly represented by 3 scalars.
[1] lighting is where the exact spectrum matters, c.f. color rendering index
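i.e. all a display has to get right per pixel is the integral of the spectrum against the three matching functions. Sketch below; cmf and the sampling grid are stand-ins, not anything from the article:

    import numpy as np

    def tristimulus(wavelengths_nm, spectrum, cmf):
        # cmf(wl) -> (xbar, ybar, zbar), e.g. from the CIE 1931 2-degree tables;
        # spectrum is sampled on the same wavelength grid.
        xbar, ybar, zbar = np.array([cmf(wl) for wl in wavelengths_nm]).T
        dl = np.gradient(np.asarray(wavelengths_nm, dtype=float))
        return (np.sum(spectrum * xbar * dl),
                np.sum(spectrum * ybar * dl),
                np.sum(spectrum * zbar * dl))

    # Two physically different spectra with the same (X, Y, Z) are metamers:
    # they look identical, so three numbers per pixel suffice for display.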
Color data has three components for the simple reason that the human eye has three different color receptors. You can change the coordinate system of that color space, but three components will remain the most parsimonious representation.
I started working with a hyperspectral imager a while back and the idea of storing image data in 3 wide bands seems so odd to me now. Just the fact that my HSI captures 25 distinct 4nm bands inside a single 100nm band of what we are used to with a 3-band image is awesome.
Sorry, I get excited every time I work with hyperspec stuff now and love talking about it to anyone that will listen.
Hyperspectral imaging has its applications. A hyperspectral display on the other hand makes no sense (unless your target audience consists of mantis shrimps).
> I get excited every time I work with hyperspec stuff now and love talking about it to anyone that will listen.
Color is widely taught down to K-2, but content and outcomes are poor. So I was exploring how one might better teach color, with an emphasis on spectra. Using multispectral/hyperspectral images of everyday life, objects, and art, seemed an obvious opportunity. Mousing over images like[1] for example, showing spectra vaguely like[2]. But I found very few (non-terrain) images that were explicitly open-licensed for reuse. It seemed the usual issue - there's so much nice stuff out there, living only on people's disks, for perceived lack of interest in it. So FWIW, I note I would have been delighted to find someone had made such images available. Happy to chat about the area.