Hacker News
Polychromatic Pixels (compoundsemiconductor.net)
203 points by bluehat974 4 months ago | 142 comments



A single wavelength can't reproduce all visible colors. These pixels are variable wavelength, but can only produce one at a time, so you'd still need at least 2 of these pixels to reproduce any visible color.

The fundamental problem is that color space is 2D[1] (color + brightness is 3D, hence 3 subpixels on traditional displays), but monochromatic light has only 1 dimension to vary for color.

[1]: https://en.wikipedia.org/wiki/Chromaticity
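To make the dimensionality argument concrete, here's a toy sketch in Python. It uses crude Gaussian stand-ins for the CIE 1931 color matching functions (the peak positions and widths are rough assumptions, not the real tabulated data), maps each wavelength to an (x, y) chromaticity, and shows that sweeping wavelength traces only a 1-D curve that never reaches the white point:

```python
import math

def cmf_approx(wl):
    """Crude Gaussian stand-ins for the CIE 1931 x-bar, y-bar, z-bar
    color matching functions (peaks/widths are rough assumptions)."""
    g = lambda mu, sigma: math.exp(-0.5 * ((wl - mu) / sigma) ** 2)
    X = 1.06 * g(599, 38) + 0.36 * g(446, 19)  # x-bar has a blue lobe
    Y = 1.00 * g(556, 46)
    Z = 1.78 * g(449, 25)
    return X, Y, Z

def chromaticity(wl):
    X, Y, Z = cmf_approx(wl)
    s = X + Y + Z
    return (X / s, Y / s)

# One knob (wavelength) traces a 1-D curve of (x, y) points; the
# 2-D interior of the diagram -- including white -- is unreachable.
locus = [chromaticity(wl) for wl in range(400, 701, 10)]
```

Even with these very rough matching functions, every point on the locus stays far from the (1/3, 1/3) white point.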


Ha, yea, in particular these monochromatic pixels can't simply be white. Notably ctrl-f'ing for "white" gives zero results on this page.

Relatedly, the page talks a lot about pixel density, but this confused me: if you swap each R, G, or B LED with an adjustable LED, you naively get a one-time 3x boost in pixel area density, which is a one-time sqrt(3)=1.73x boost in linear resolution. So I think density is really a red herring.

But they also mention mass transfer ("positioning of the red, green and blue chips to form a full-colour pixel") which plausibly is a much bigger effect: If you replace a process that needs to delicately interweave 3 distinct parts with one that lays down a grid of identical (but individually controllable) parts, you potentially get a much bigger manufacturing efficiency improvement that could go way beyond 3x. I think that's probably the better sales pitch.


It would be interesting to plot all of the achievable colors of this LED on the chromaticity diagram. Presumably it'd be some sort of circle/ellipse around white but might have some dropouts in certain parts of the spectrum?


Pure wavelengths are on the horseshoe-shaped outline of the CIE 1931 space. The straight line connecting the ends of the horseshoe is the line of purples, which also isn't monochromatic.

https://en.wikipedia.org/wiki/Chromaticity#/media/File:Planc...


Presumably they wouldn't need to do a pixel-to-pixel mapping, but could account for the wavelengths of neighbouring pixels to produce a more faithful colour reproduction at an effectively lower resolution.


It's going to be the spectral locus.


The key to this is using the same process to get all the colors. For separate R,G,B pixels you need 3 different processes and can't build them on the same chip, you need to assemble them - that's what allows the vast improvement in pixel density.


Don't forget about bond wires that need to be run to each die and/or connected to a backplane.


Doesn't the fact they have successfully demonstrated displays at 2000, 5000 and 10000 DPI alleviate those concerns a little bit?


It's not really meant as a concern, more a supporting argument: If every subpixel is identical, you can use simpler wiring patterns.


The subpixels don't need bonding wires, they have dedicated connections just like any transistor on a regular IC.


Would one not just use a few pixels to create white?

That does mean a variable resolution scenario.


This is definitely a problem; if the control circuitry is up for it you could PWM the pixel color, basically dithering in time instead of space to achieve white or arbitrary non-spectral colors.
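A minimal sketch of that time-dithering idea, assuming the control circuitry can retune the wavelength per subframe (the schedule format and all names here are made up for illustration, not from the article):

```python
from itertools import islice

def temporal_dither(wl_a, wl_b, weight_a, period=8):
    """Yield an endless per-subframe wavelength schedule.  Over one
    period the eye averages the two spectral endpoints in the ratio
    weight_a : (1 - weight_a).  All names are illustrative."""
    on_a = round(weight_a * period)
    while True:
        for i in range(period):
            yield wl_a if i < on_a else wl_b

# e.g. 75% 620 nm (red-ish) + 25% 530 nm (green-ish), 8 subframes
sched = list(islice(temporal_dither(620, 530, 0.75), 8))
```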


Yep. DLP color wheels come to mind.


It can produce all the colors of the rainbow. But no magenta. Perhaps they can quickly pulse the LED enough between multiple wavelengths.


It also can't produce white or anything else in the interior of this diagram (as well as, as you mention, shades of magenta and purple that lie on the flat lower edge):

https://upload.wikimedia.org/wikipedia/commons/b/ba/Planckia...


The human eye will see white when a pixel flashes through all of the colors quickly in time.


You don't need all the colors. As every household white LED bulb proves, you can get it with just a combination of blue and yellow.


You'll get atrocious CRI/sickly skin tones that way. There's a much more fleshed-out spectrum in today's LEDs, especially warm white variants.


But that means it has reduced refresh rate.


The two are not related at all. Refresh rate is how fast it can accept input, whereas this is how fast it can do TDM of colors and intensities.


How quickly? Surely well above 1 kHz (1000 FPS). Otherwise you will see flickering.


Single chip DLP projectors strobe red, green, blue, white sequentially. Modern DLPs use separate light sources (LED/Laser) and pulse them at a high frequency - kilohertz I assume. Before we had high-power LEDs DLP projectors used a xenon lamp and a color wheel (https://www.projectorjunkies.com/color-wheel-dlp/) spinning at as little as 60 revolutions per second. This caused a "rainbow effect" which was very annoying to some people, but apparently enough people didn't notice it that those products got sold anyway. So somewhere around 180Hz is the bare minimum.


According to this, humans can't see flicker above 100 Hz for most smooth images, but if the image has high frequency spatial edges then they can see flicker up to 500-1000 Hz. It has to do with saccades.

https://www.nature.com/articles/srep07861


See also https://en.wikipedia.org/wiki/Spectral_color

This reminds me of the observation I had in high school that I could immerse LEDs in liquid nitrogen and run them at higher than usual voltage and watch the color change.

I got a PhD in condensed matter physics later on but never got a really good understanding of the phenomenon but I think it has something to do with

https://www.digikey.com/en/articles/identifying-the-causes-o...

Here is a video of people doing it

https://www.youtube.com/watch?v=5PquJdIK_z8


> I got a PhD in condensed matter physics later on but never got a really good understanding of the phenomenon but I think it has something to do with

The color of most* LEDs is controlled by the band gap of the semiconductor they're using. Reducing the temperature of the material widens the band gap, so the forward voltage of the diode increases and the wavelength of the emitted light gets shorter.

https://www.sciencedirect.com/science/article/abs/pii/003189...

*: With the exception of phosphor-converted LEDs, which are uncommon.
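The temperature dependence described above is commonly modeled with the empirical Varshni relation, Eg(T) = Eg(0) - alpha*T^2 / (T + beta). Here's a sketch with rough GaN-like parameters (ballpark values of my own, not taken from the linked paper):

```python
def varshni_gap(T, Eg0=3.47, alpha=7.7e-4, beta=600.0):
    """Band gap (eV) at temperature T (K), Varshni empirical form:
    Eg(T) = Eg(0) - alpha*T^2 / (T + beta).  Defaults are rough
    GaN-like values -- ballpark assumptions for illustration."""
    return Eg0 - alpha * T**2 / (T + beta)

def emission_nm(T):
    """Peak emission wavelength via E(eV) ~ 1240 / lambda(nm)."""
    return 1240.0 / varshni_gap(T)

# Cooling from room temperature (300 K) to liquid nitrogen (77 K)
# widens the gap, so the emitted light shifts toward shorter
# (bluer) wavelengths -- matching the LN2 observation above.
```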


> phosphor-converted LEDs, which are uncommon

No, they're extremely common. Every white LED in the market is phosphor-converted: they're blue LEDs, usually ~450nm royal blue, with yellow-emitting phosphors on top. Different phosphors and concentrations give different color temperatures for the final LED, from about 7500K through 2000K. (Last I looked, anything below about 2000K didn't look right at all, no matter what its manufacturer claimed.)

Bigger LEDs are often phosphor-converted as well. Most industrial grow lamps use this type of LED. So they're around! You're probably looking at some right now!


I'm assuming that in most cases they'll just make these act as RGB displays, either by sequentially tuning the wavelength of each pixel to red, green, blue in a loop, or by assigning each pixel to be red, green, or blue and just having them act as subpixels.


However, you would have more flexibility to do sub-pixel tricks to improve resolution?


Surely the 'tricks' we have for RGB displays would be more effective when every element has the same color range as every other. For example, the subpixel rendering of typography for RGB displays had an unavoidable rainbow halo that would no longer be an issue for most colors of text with polychromatic pixels.


This seems like a non-problem, cut the display resolution in half on one axis and reserve two 'subpixels' for each pixel. Then you have a full color display with only one physical pixel type and that needs one less subpixel. These displays could even produce some saturated colors with specific wavelengths that can't be represented on regular rgb displays.


You'd still be unable to produce different brightness pixels. You'd get white but no grayscale.

I guess you could cheat it by moving the wavelength outside the visible spectrum?


I hate to think of the damage large amounts of IR or especially UV would do to the eye.


Assuming they can PWM the brightness while getting consistent color (seems reasonable since microLEDs have extremely fast response time) then I think what you're saying would work great. It would be akin to 4:2:2 chroma subsampling where luminance (which we have higher acuity for) gets more fidelity and the resulting image quality is closer to full-res than half-res.
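For reference, the 4:2:2 idea boils down to keeping luma at full resolution while averaging chroma over horizontal pixel pairs. A toy sketch (plain lists of floats, not any real codec API):

```python
def subsample_422(luma, cb, cr):
    """4:2:2-style chroma subsampling sketch: luma keeps full
    resolution, each chroma row is averaged over horizontal pairs.
    Inputs are lists of equal-length rows of floats (toy format)."""
    def half(row):
        return [(row[i] + row[i + 1]) / 2
                for i in range(0, len(row) - 1, 2)]
    return luma, [half(r) for r in cb], [half(r) for r in cr]

# 4-pixel-wide test row: luma stays 4 wide, chroma shrinks to 2.
luma = [[0.1, 0.9, 0.4, 0.6]]
cb = [[0.2, 0.4, 0.6, 0.8]]
cr = [[0.5, 0.5, 0.1, 0.3]]
y_out, cb2, cr2 = subsample_422(luma, cb, cr)
```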


> color space is 2D

Human eyes have three different color receptors, each tuned to its own frequency, so it's already 3D. However, apart from human perception, color, just like sound, can have any combination of frequencies (when you split the signal with a Fourier transform), and many animals do have more receptors than us.


Humans perceive all stimulation in the same ratio of the L, M, and S cones to be the same color, but with different brightnesses. So only two dimensions are necessary to represent human-visible colors, hence HSV or L*a*b* space.


There is a fair point there, but a few things - HSV and Lab are only models, they don’t necessarily capture all visible colors (esp. when it comes to tetrachromats). Brightness is a dimension, and can affect the perception of a color, esp. as you get very bright - HSV and Lab are 3D spaces. Arguing that brightness should be ignored or factored out is problematic and only a small step from arguing that saturation should be factored out too and that color is mostly one dimensional.


According to the opponent-process model of colour perception you need three axes to represent all colours: luminosity [L+M+S+rods], red-green [L-M] and blue-yellow [S - (L+M)].


You only need to mix two different wavelengths to render any human perceptible color. They give you four parameters to work with (wavelength1, brightness1, wavelength2, brightness2) which makes it an underdetermined system with an infinite number of solutions for all but the pure, spectral boundary of the gamut.
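A quick numerical sanity check of the two-wavelength claim, using crude Gaussian stand-ins for the L/M/S cone sensitivities (the peaks and widths are rough assumptions of mine): scanning the brightness split between a cyan-ish and an orange emitter lands very close to an equal-excitation "white" target.

```python
import math

def cones(wl):
    """Crude Gaussian stand-ins for L, M, S cone sensitivities;
    peak positions and widths are rough assumptions."""
    g = lambda mu, s: math.exp(-0.5 * ((wl - mu) / s) ** 2)
    return (g(565, 50), g(540, 45), g(445, 25))

def mix(wl1, b1, wl2, b2):
    """Cone response to two superposed monochromatic sources."""
    c1, c2 = cones(wl1), cones(wl2)
    return tuple(b1 * a + b2 * b for a, b in zip(c1, c2))

def chrom(lms):
    s = sum(lms)
    return tuple(v / s for v in lms)

# "White" target: equal normalized cone excitation.  Two wavelengths
# plus two brightnesses give 4 knobs for a 3-D target, so solutions
# are plentiful; here we just scan a cyan wavelength and the
# brightness split against a fixed 600 nm orange, keeping the best.
target = (1 / 3, 1 / 3, 1 / 3)
best = min(
    ((wl_c, b, chrom(mix(wl_c, b, 600, 1 - b)))
     for wl_c in range(460, 501, 2)
     for b in [i / 100 for i in range(1, 100)]),
    key=lambda t: sum((u - v) ** 2 for u, v in zip(t[2], target)),
)
```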


In this sense our hearing is much better than our color vision.

We can distinguish combinations of a huge number of frequencies between 20-20000 Hz.

But we can only distinguish 3 independent colors of light.

Of course our vision is vastly better than hearing for determining where the sound/light comes from.


Total tangent, but is that because of the wavelengths involved? I imagine a “sound camera” would have to be huge to avoid diffraction (but that’s just intuition), requiring impractically large ears. Likewise I imagine that perceiving “chords” of light requires sensing on really tiny scales, requiring impractically small complex structures in the eyes?

Anybody know the answer?


There are plenty of monochromatic cases. Right now HN has a lot of orange.

Dynamic resolution / subpixel rendering. Retina looks really good already, not sure if the effect would be relevant or interesting but it might open up something new


What Apple sells as "retina" still doesn't match common print densities, there's definitely room for improvement.


You make it sound like there is still an easy-to-spot difference. When I look at the print quality of pictures in a newspaper, it's the opposite; at least for me, I don't need more than retina, and I was very eager to switch to 4K to get higher DPI.

But at 14" with retina I'm very happy.

I'm actually more surprised by HDR on my LG OLED 4K. It's actually quite nice when done well.


Newspapers are famously printed on the lowest quality recycled paper with the cheapest print process available, because they're disposable. Compare a retina screen to a coffee-table-style reference book with high-resolution photos, the kind you can use a magnifying glass on, and you'll still notice differences.

Or just look at what companies do when manufacturing technologies allow them to push for higher densities: iPhones now exceed 450 dpi, and the 8" iPads exceed 300; if the technology allowed it, Apple would most likely introduce higher densities on larger iPads and Macbooks as well.


One thing I noticed is that they were talking about demoing 12,000 ppi displays, which is way more resolution than you're going to resolve with your eye. So using 2 pixels is still probably a win.


Those are the densities needed for near eye displays. The best displays can still show pixelization to the human eye up close.


> These pixels are variable wavelength, but can only produce one at a time

Citation needed. The article doesn't say anything about how the colors are generated, and whether they can only produce one wavelength at a time.

Assuming they are indeed restricted to spectral colors, dithering could be used to increase the number of colors further. However, dithering needs at least 8 colors to cover the entire color space: red, green, blue, cyan, magenta, yellow, white, black. And two of those can't be produced using monochromatic light -- magenta and white. This would be a major problem.


Dithering just black, red, green, and blue is sufficient to produce a full-colour image. Everything else is a combination of those. That's effectively how normal LCD or OLED monitors work!


No, normal monitors use additive color mixing, but dithering isn't additive, it's averaging. With just red, green, blue, black you couldn't dither cyan, magenta, yellow, white, just some much darker versions of them. E.g. you get grey instead of white.

You can check this by trying to dither a full color image in a program like Photoshop. It doesn't work unless you use at least the 8 colors.

In fact, ink jet printers do something similar: They use subtractive color mixing to create red, green and blue dots (in addition to cyan, magenta, yellow and black ink and white paper), then all the remaining shades are dithered from those eight colors. It looks something like this: https://as2.ftcdn.net/v2/jpg/01/88/80/47/1000_F_188804787_u1... (though there, black is also created with subtractive color mixing).

The color mixing type used by dithering is sometimes called "color blending". Apart from dithering it's also used when simulating partial transparency (alpha).
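The averaging-vs-additive distinction is easy to check numerically. A toy sketch in linear RGB (my own illustration, not from any of the linked pages): a fine 50/50 dither of full red and full green averages to a darker yellow, while additive mixing of co-located emitters sums to full yellow.

```python
def dither_average(colors):
    """Perceived color of a fine spatial dither: the area-weighted
    mean of the emitted linear-RGB values (each cell shows one color)."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def additive_mix(colors):
    """Additive mixing: co-located emitters, intensities sum."""
    return tuple(sum(c[i] for c in colors) for i in range(3))

red, green = (1, 0, 0), (0, 1, 0)
# Dithering R and G gives half-brightness yellow; adding them gives
# full yellow.  Dithering R, G and B gives grey, not white.
```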


The article is talking about microLEDs, which are an emissive light source.


You can dither not just in print but also on illuminated screens. For example:

http://caca.zoy.org/study/out/lena6-1-2.png

This picture has only pixels of the aforementioned eight colors.


Emissive means additive, not averaging. Cyan, magenta and yellow are not primaries here. Red and green light adds up to perceptual yellow. Red, green and blue adds up to perceptual white (or grey, at very low luminance). Treating each of these pixels like subpixels (which is arguably a form of dithering) will produce a full color image (at a lower resolution), but given that they did not demonstrate it, color reproduction and/or luminance likely is far from competitive at this point.


That's not true. Dithering can be used in emissive screens, but dithering is not additive. If you mix red and green with color blending (e.g. by dithering), you get less red and less green in your mix, and therefore the resulting mix (a sort of ochre) is different from additive color mixing (yellow), where the amount of red and green stays the same. Or when you mix black and white, you get white with additive color mixing, but grey with blending. You also get grey when blending (dithering) red, green and blue. You can test this in software like Gimp, you won't be able to dither a full color image without at least the eight colors I mentioned.


I am not saying you can use the exact same math as in an image manipulation program, these work with different assumptions. Mixing colors in those is usually not correct anyway.

https://www.youtube.com/watch?v=LKnqECcg6Gw

I am saying you can think of subpixels, which already exist, as a form of dithering. Most displays use just three primaries for subpixels - red, green and blue. Their arrangement is fixed, but that is not a limitation of this new technology.


Well I disagree with that. It's two different ways of mixing colors (additive vs blending), with different results and different requirements.


This vaguely reminds me of "CCSTN" (Color Coded Super Twisted Nematic) LCD displays, which were used in a few Casio calculators to produce basic colour output without the usual RGB colour filter approach.

https://www.youtube.com/watch?v=quB60FmzHKQ

https://web.archive.org/web/20240302185148/https://www.zephr...



For some reason I find those displays' shades of orange and green to be SUPER appealing. The blue is nice enough.


I had a feeling the YouTube link would be Posy and was delighted when it was. His videos on display technologies are top notch.


Hm, thinking about this further, this would need dithering to work properly (which probably works fine, but the perceived quality difference would mean pixel density comparisons aren't apples-to-apples)

Presumably, you get to control hue and brightness per-pixel. But that only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out), but dithering can solve that. Coming up with ideal dithering algorithms could be non-trivial (e.g. maybe you'd want temporal stability).
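As a baseline, here's a plain Floyd-Steinberg error-diffusion sketch with a restricted palette; real spectral-pixel dithering would presumably quantize in a perceptual color space and add the temporal-stability constraints mentioned above, so treat this as a generic textbook starting point, not the vendor's method.

```python
def floyd_steinberg(img, palette):
    """Error-diffusion dither of an image (rows of floats in [0, 1])
    onto the nearest palette values.  Generic textbook sketch."""
    h, w = len(img), len(img[0])
    px = [row[:] for row in img]          # working copy
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            out[y][x] = new
            err = old - new
            # push the quantization error onto unvisited neighbors
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50% field dithered to an on/off palette comes out ~half lit.
field = [[0.5] * 8 for _ in range(8)]
dithered = floyd_steinberg(field, [0.0, 1.0])
```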


You really can't think about single wavelength tunable pixels as something except at the edge HSL.

I think about it from the CIE "triangle" where wavelength traces the outer edge, or even the Lab (Luminance a-green/red b-yellow/blue) color space since it's more uniform in perceivable SDR color difference (dE).

https://luminusdevices.zendesk.com/hc/article_attachments/44...

One key realization is that although 1 sub-pixel can't cover the gamut of sRGB (or Rec2020), 2 with wavelength and brightness control can, rather than the usual 3 for RGB. Realistically, this allows something like super-resolution, because your blue (and red) visual resolution is much lower than your green (e.g. 10-30 pix/deg rather than ~60 ppd). However, your eye's sensitivity off the XYZ peaks is lower, and perceived brightness would fall.

I guess what I'm saying is that a lot of the assumptions baked into displays have to be questioned and worked out for these kinds of pixels to get their full benefit.


Good point, the HSL edge includes magenta which is of course not a wavelength.



Dithering is at worst equivalent to subpixels, which we already use.

If you take the "no subpixels" claim out of the article, this technology still seems useful for higher DPI and easier manufacture.


Sure, but PPI/DPI headline figures are usually counted per-pixel, not per-subpixel, so the raw density numbers aren't directly comparable (and I'm not really sure what a fair "adjustment factor" would be)


"Fair" has nothing to do with it, the adjustment factor will be whatever the marketing folks think they can get away with.


> only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out)

Note that even if we restrict our attention to the max-saturation curve, these pixels can't produce shades of purple/magenta (unless, as you say, they use temporal dithering or some other trick).


You could use several pixels as sub-pixels or if the color shift time is fast enough, temporal dithering.

Even if these could produce just three wavelengths, if you can pulse them fast enough and accurately, the effect would be that color reproduction is accurate (on average over a short time period)


I'm not sure why saturation couldn't be controlled.

I probably missed something in the article, though I do see ex. desaturated yellow in the photographs so I'm not sure this is accurate.

If you can't control saturation, I'm not sure dithering would help; I don't see how you'd approximate a less saturated color from a more saturated one.

HSL is extremely misleading, it's a crude approximation for 1970s computing constraints. An analogy I've used previously is think of there being a "pure" pigment, where saturation is at peak, mixing in dark/light (changing the lightness) changes the purity of the pigment, causing it to lose saturation.


Any desaturated colors I saw were also very bright, so I blame it on overexposure of the camera. Probably looked totally different in person.

Unsaturated colors aren't a problem, you just need to mix a bit of the opposite color. Unsaturated purples will be a challenge because you need to mix 3 wavelengths rather than just 2.


Saturation can't be controlled on a per-pixel basis because, per the article, they're tuned to a specific wavelength at any given time.

You're right though, there appear to be yellows on display. Maybe they're doing temporal dithering.

Edit: Oh wait, yellow doesn't need dithering in any case. Yellow can be represented as a single wavelength. Magenta, on the other hand, would (and there does seem to be a lack of magenta on display).


> Saturation can't be controlled on a per-pixel basis because, per the article, they're tuned to a specific wavelength at any given time.

Where does the article say this? I couldn't find it.


Honestly might just be the limits of photography, there's so much contrast between the ~97 L* brightness of pure yellow and black that the sensor might not be able to capture the "actual" range.

I've been called a color scientist in marketing, but sadly never grokked the wavelength view of color. It sounds off to me; that's a *huge* limitation to not mention. But then again, if they had something a year ago, it's unlikely that ex. Apple folds the microLED division they've been investing in for a decade. Either A) it sucks, or B) it doesn't scale in manufacturing, or C) no one's noticed yet. (A) seems likely given that their central claim is that (B) is, at the least, much improved.


This appears to be done by varying current, from a slide in this 'webinar': https://youtu.be/MI5EJk8cPwQ?t=238

That's not hugely surprising given that (I believe) LEDs have always shifted spectrum-wise a bit with drive current (well, mostly junction temperature, which can be a function of drive current.)

I guess that means they're strictly on/off devices, which seems furthered by this video from someone stopping by their booth:

https://youtu.be/f0c10q2S_PQ?t=107

You can clearly see some pretty shit dithering, so I guess they haven't figured out how to do PWM based brightness (or worse, PWM isn't possible at all?)

I guess that explains the odd fixation on pixel density that is easily 10x what your average high-dpi cell phone display has (if you consider each color to be its own pixel, ie ~250dpi x 3)

It seems like the challenge will be finding applications for something with no brightness control etc. Without that, it's useless even for a HUD display type widget.

In the meantime, if they made 5050-sized LEDs, they would probably print money...which would certainly be a good way to fund further development of brightness control.


> if they made 5050-sized LEDs

I doubt they can. Probably the process only works (or yields) small pieces, otherwise they'd be doing exactly what you suggest.

I also notice that their blues look terrible in the provided images. Which will be a problem. I don't think they get much past 490nm or so? That would also explain why they don't talk at all about phosphors, which seem like a natural complement to this tech... I don't think they can actually pump them. Which is disappointing :(


I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others. Ultimately the brightness of the whole display is constrained by the least bright pixels because the rest have to be dimmed to match. Judging by their pictures they have not solved this problem.


> I understand that one of the big issues with microLED is huge brightness variation between pixels. Due to some kind of uncontrollable (so far) variations in the manufacturing process, some pixels output 1/10 the light (or less) as others.

I instead understand that this is false. Available MicroLED screens (TVs) are in fact brighter than normal screens.

The issue with MicroLED is instead that they are extremely expensive to produce, as the article points out, due to the required mass transfer. Polychromatic LEDs would simplify this process greatly.


> Available MicroLED screens (TVs) are in fact brighter than normal screens.

Does that in any way contradict the claim that there are large variations in brightness between microLED pixels on the same screen?


I should have specified that I was talking about microLED microdisplays, as shown in the article. Sounds redundant, but there are also large-format microLED displays which are manufactured by individually cutting LEDs from a chip and placing them on a different substrate with bigger spacing. That process allows replacing the ones with poor brightness during assembly. For microdisplays, on the other hand, the LEDs are fabricated in place and not individually moved afterwards. The chip is the display.


It is solvable with enough capital investment though; the question is how much it will cost to solve.


Is it? I feel like there has already been a lot of capital investment by the various organizations working on microLED.


Would be fun if displays come full circle with variable addressable geometry / glowing goo too.

Not quite a vector display, but something organic that can be addressed with stimulators like reaction-diffusion or Gaussian, FFT, Laplacian, or Gabor filters, Turing patterns, etc. Get fancy patterns with the lowest amount of data.

https://www.sciencedirect.com/science/article/pii/S092547739... https://onlinelibrary.wiley.com/doi/10.1111/j.1755-148X.2010...


I didn't realize we even had a discrete LED tunable across the visible spectrum, let alone a Micro-LED array of them. Anybody know where I can buy one? I want to build a hyperspectral imager.


Do you mean hyperspectral imager (i.e., camera), or a hyperspectral display?


An imager/camera: by illuminating a scene (or light box) solely with the tunable LED, sweeping it across the spectrum, and capturing it with an achromatic camera.


> achromatic camera

Is that the same as a panchromatic camera?

Edit:

Asking because I have a 410x410px hyperspectral imager that has an aligned 1886x1886px panchromatic imager that is used to perform pan-sharpening of the HSI data, bringing it up to 1886x1886. I'd never heard of a panchromatic camera before I got involved in this business and I've never heard of an achromatic camera either. All I seem to find is achromatic lenses.


Yes, "panchromatic" is probably the more accurate term for it. It's just a camera with no color filters and a known spectral response curve that's high enough across the frequencies being imaged.


Ah, yeah, I'd say that fits 'panchromatic camera' then. The panchromatic imager on my setup uses the exact same CCD and covers the exact same spectral range (350nm-1000nm), but it doesn't have the HSI lenses/filters. The company actually sells a smaller unit that is made from the same imager, but with the HS lens/filters.


Ahh, that makes sense. Thanks!

Btw, is that still reasonably effective if the scene has ambient illumination, but (in addition to shining each wavelength at it) you take a monochrome photo in only the ambient light and you subtract that out from all your other images?


Sure that would work. The higher the ratio of controlled/ambient light, and the slower you can do the sweep, the better for SNR of the hyperspectral image.
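That sweep-and-subtract scheme might look something like this (all names and the toy scene are hypothetical, just to show the bookkeeping):

```python
def capture_hyperspectral(shoot, wavelengths):
    """Build a hyperspectral cube with a tunable illuminant and a
    panchromatic camera: one ambient-only frame is subtracted from
    each illuminated frame.  `shoot(wl)` returns a 2-D list of
    intensities; shoot(None) means ambient light only."""
    ambient = shoot(None)
    cube = {}
    for wl in wavelengths:
        frame = shoot(wl)
        cube[wl] = [[f - a for f, a in zip(fr, ar)]
                    for fr, ar in zip(frame, ambient)]
    return cube

# Toy scene: flat ambient of 0.2 plus a wavelength-dependent signal.
def fake_shoot(wl, size=4):
    signal = 0.0 if wl is None else (wl - 400) / 300
    return [[0.2 + signal] * size for _ in range(size)]

cube = capture_hyperspectral(fake_shoot, range(400, 701, 100))
```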


I think a lot of these comments are missing the point: even if you have to reduce their reported density numbers by half, they made a display with dimensions of "around 1.1 cm by 0.55 cm, and around 3K by 1.5K pixels", which is insane! All without having to dice and mass-transfer wafer pieces, since every pixel is the same.

A lot of the article is focused on how this matters for the production side of things, since combining even 10 um wafer pieces from 3 different wafers is exceedingly time consuming, which I think is the more important part. Sure, the fact that each emitter can be tuned to "any colour" might be misleading, but even if you use rapid dithering like plasma displays did, and pin each emitter to one wavelength, you suddenly have a valid path to manufacturing insanely high density microLED displays! Hopefully this becomes viable soon, so I can buy a nice vivid and high contrast display without worrying about burn in.


I'm really curious about the reproducibility. The color is decided by the bandgap and the bandgap is tunable by voltage, but how temperature dependent is it, and how much does production variability impact it?

I imagine these displays could have color sensors attached to self-calibrate.

Or the variability is low and all you need is very precise voltages.

I think the first versions will be RGB displays with fixed colors, just no longer needing mass transfer. You could use tens of subpixels per pixel, reducing all worries about color resolution.

Make these into e.g. 1x1cm mini displays and mass transfer those into any desired display size.


> 6,800 pixel-per-inch display (around 1.1 cm by 0.55 cm, and around 3K by 1.5K pixels)

That sounds like it's getting close to being a really good screen for a VR headset.


Nice, that's double of what the Vision Pro has.


OLED tech has been very transformative for lots of my old gear (synthesizers and samplers mostly) that originally came with backlit LCD displays. But the OLEDs are offered in static colors, usually blue or amber, sometimes white, red, or green.

It would be very cool to have a display with adjustable color.


The promotional document focuses on wavelength tunability but I imagine brightness at any one wavelength suffers because to emit at one wavelength requires an electron to lose the amount of energy in that photon by transitioning from a high to low energy state. Maximum brightness then corresponds to how many of these transitions are possible in a given amount of time.

Some states are not accessible at a given time (voltage can tune which states are available) but my understanding is the number of states is fixed without rearranging the atoms in the material.


These still produce a single [adjustable] wavelength, which means some colors that are displayable on displays of today are not representable using just one of these, and multiples will be required.


Yes, it’d be two subpixels instead of the current three. It’s not clear that that’s worth the added complexity of having to control each subpixel across two dimensions (brightness and wavelength) instead of just one (brightness).


Can you produce "white" with just two wavelengths?


Yes, mix two complementary colors like orange and cyan. You just need two wavelengths that hit all three cone types [0] in the right ratio. There’s the possibility that it’s subject to more variation across individuals though, as not everyone has exactly the same sensitivity curves.

[0] https://upload.wikimedia.org/wikipedia/commons/f/f1/1416_Col...


Human vision in the yellow (~590nm) region is known to be extremely sensitive to particular wavelengths. Observe how quickly things go from green through yellow to amber/orange!

So this is probably a nonstarter.


Every single white LED bulb you buy for your light fixtures is a mix of blue LED and yellow phosphor, so in practice it's no problem at all. Although I do concede that the yellow is probably not monochromatic.


It is 100% not monochromatic and that makes all the difference.

Here's one model I'm fairly familiar with, having evaluated it for design-in to a product a few years back: https://www.lightstec.com/wp-content/uploads/2018/10/Philips... (apologies for the non-authoritative link, their entire datasheet server appears to be down....)

Take a look at page 8 (PDF page 9), Figure 4, "Relative Spectral Distribution vs. Wavelength". Look at those spectral curves and what that phosphor really does. See that nice broad peak, that's pretty insensitive to the exact details? A little shift in the peak doesn't change the output much. And yet, they still bin white LEDs intensively!

These things just do not work with monochromatic emission in the orange. And the phosphor isn't even that good at low color temperatures (CCTs). Below about 2000K-2400K (ish), this approach doesn't work: the resulting LED looks like yellow trash, not like you'd expect (it should look something like a candle flame). So even phosphors can't get you down all that far in CCT. (There are probably expensive phosphors that can do it... but none were in mass production five or six years ago when I did a deep search.)


If the refresh rate is high enough, a single LED could flip between multiple wavelengths to dither to non-spectral colors.


Or if pixel density is high enough, adjacent pixels could display the colors to combine with no flickering. Unlike regular RGB subpixels, this would only be needed for areas where the color cannot be displayed by an individual pixel alone.


Yeah, and both techniques can be combined, which is common with LCD screens, although it does sometimes lead to visible moving patterns when viewed close up.

There’s more flexibility with tunable wavelengths, though, since there will often be multiple solutions for what colors and intensities can be combined to create a particular photoreceptor response. By cycling through different solutions, I wonder if you could disrupt the brain’s ability to spot any patterns, so that it’s just a very faint noise that you mostly filter out.


Higher refresh/modulation rates imply higher power consumption. It’s already a trade-off in current display tech for mobile.


Sure, but that’s assuming you need a higher rate than is already used for brightness. That’s a question I think can only be determined experimentally by putting real human eyes on it, although I think you could do the experiment with traditional RGB LEDs. But the other question is whether the wavelength tuning can be changed at the same rate as intensity.


Two adjustable-wavelength emitters should be sufficient, right? So the picking-and-placing problem gets easier by a factor of 3:2 rather than 3:1.


I bet you might run into some interesting problems trying to represent white with two wavelengths. For example, colorblind people (7% of the population) might not perceive your white as white. And I wonder if there is more widespread variation in human eye responses to single wavelengths between primary colors that is not classified as colorblindness but could affect the perception of color balance in a 2-wavelength display.


The whole point of this technology is that you don't need picking-and-placing anymore, it's all built on the same wafer.


I suppose anything besides the edge of the CIE horseshoe will need multiples.


Does this need a very accurate DAC to cover the entire color spectrum? Maybe even fine-tuning on each pixel?


LEDs are somewhat temperature-sensitive devices, and getting repeatable high-granularity bit depth may prove a difficult problem in itself.

There are ways to compensate for perceptual drift, as modern LCD drivers do, but unless the technology addresses the same burn-in issues as OLED it won't matter how great it looks.

You may want to look at how DMD drivers handled the color-wheel shutter timing to increase perceptual color quality. There are always a few tricks people can try to improve the look at the cost of lower frame rates. =)


Are they able to adjust the color and brightness simultaneously? Or would brightness be controlled with PWM?


Brightness is PWM controlled, but likely at the microsecond-to-millisecond level. The required brightness range is about 100k:1.

Black levels would be determined more by reflectivity of the display than illumination.
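Back-of-envelope arithmetic for what a 100k:1 range implies for the PWM driver (the 120 Hz refresh rate here is my assumption, not from the thread):

```python
import math

# PWM resolution needed for a 100k:1 brightness range, and the resulting
# counter clock at a given refresh rate. Illustrative numbers only.
dynamic_range = 100_000
bits = math.ceil(math.log2(dynamic_range))   # 17 bits of PWM resolution
refresh_hz = 120                             # assumed refresh rate
pwm_clock_hz = refresh_hz * 2**bits          # ~15.7 MHz counter clock
shortest_pulse_s = 1 / pwm_clock_hz          # ~64 ns minimum on-time
```

So even a modest refresh rate already pushes the per-pixel PWM counter into the tens of MHz and the shortest pulses into the tens of nanoseconds, which is where driving microLEDs gets tricky.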



Incredible accomplishment, but the question remains what this will look like at the scale of a display on any given consumer device.

Of course, it's only just now been announced, but I'd love to see what a larger scale graphic looks like with a larger array of these to understand if perceived quality is equal or better, if brightness distribution across the spectrum is consistently achieved, how pixels behave with high frame rates and how resilient they are to potential burn-in.


I imagine color consistency will be such a pain here.


I'd hope that per-pixel calibration would solve that, but I wonder how much that calibration would drift over time.


Whatever the drift would be, inorganics would drift less than organic materials.


This sounds awesome for future VR gear, where you need small displays with more pixels than is currently possible.

4K virtual monitors, here we come!


They already have these, but people need to modify GPU designs before it is really relevant. The current AI hype cycle has frozen development in this area for now... so a super-fast 1990s graphics pipeline is what people will iterate on for a while.

Nvidia is both a blessing and a curse in many ways for standardization... =3


Did anybody notice just how fast their website loads? I didn’t even look at the content yet and I’m already impressed.


This is super cool!

I can certainly see these being useful in informational displays, such as rendering colored terminal output. The lack of subpixels should make for crisp text and bright colors.

I don't see this taking over the general purpose display industry, however, as it looks like the current design is incapable of making white.


Any idea if there is a plan to produce discrete LEDs that are tunable?


I wonder if these would improve VR/AR headset displays.


I hope this will go into AVP3


Alien Vs Predator 3?


Apple Vision Pro 3


My ultimate hope is that this will allow us to store and display color data as Fourier series.

Right now we only represent colour as combinations of red, green, and blue, when a colour signal itself is really a combination of multiple "spectral" (pure) colour waves, which can be anything in the rainbow.

Individually controllable microLEDs would change this entirely. We could visualize any color at will by combining them.

It's depressing that nowadays we have this technology yet video compression means I haven't seen a smooth gradient in a movie or TV show in years.
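As a toy sketch of what "color as a series" could look like: a smooth emission spectrum compressed to a handful of Fourier coefficients instead of 3 RGB values. All numbers here are made up for illustration.

```python
import numpy as np

# A smooth made-up spectrum peaking near 550 nm, sampled at 64 points.
wl = np.linspace(400, 700, 64)
spectrum = np.exp(-((wl - 550) / 40) ** 2)

# Keep only the first 8 Fourier terms as the stored representation.
coeffs = np.fft.rfft(spectrum)
coeffs[8:] = 0
approx = np.fft.irfft(coeffs, n=64)

# For a smooth spectrum the truncation error is tiny.
err = np.max(np.abs(spectrum - approx))
```

Of course, this only helps if the display can actually emit arbitrary spectra; with three fixed primaries the 3-scalar representation is all you can reproduce anyway.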


What would be the purpose of this?

The human eye can't distinguish light spectra producing identical tristimulus values. Thus for display purposes [1], color can be perfectly represented by 3 scalars.

[1] lighting is where the exact spectrum matters, c.f. color rendering index
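The metamerism point can be shown directly with linear algebra: any response matrix with 3 rows has a large null space, and adding a null-space vector to a spectrum yields a physically different spectrum with identical tristimulus values. A sketch with made-up Gaussian cone curves (not real CIE data):

```python
import numpy as np

# Toy cone sensitivity curves (Gaussians), sampled at 31 wavelengths.
wl = np.linspace(400, 700, 31)
def g(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)
S = np.stack([g(565, 40), g(535, 35), g(445, 25)])  # L, M, S rows (3 x 31)

# A spectrum and its 3-scalar tristimulus response.
spec1 = g(550, 60)
t1 = S @ spec1

# Perturb along a null-space direction of S: different spectrum, same response.
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]            # orthogonal to all three cone rows
spec2 = spec1 + 0.5 * null_vec
t2 = S @ spec2               # identical (up to rounding) to t1
```

(The perturbed spectrum can go negative, so a realizable metamer pair needs a null-space direction that keeps both spectra non-negative, but the point about 3 scalars sufficing is the same.)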


Color data has three components for the simple reason that the human eye has three different color receptors. You can change the coordinate system of that color space, but three components will remain the most parsimonious representation.


I started working with a hyperspectral imager a while back and the idea of storing image data in 3 wide bands seems so odd to me now. Just the fact that my HSI captures 25 distinct 4nm bands inside a single 100nm band of what we are used to with a 3-band image is awesome.

Sorry, I get excited every time I work with hyperspec stuff now and love talking about it to anyone that will listen.


Hyperspectral imaging has its applications. A hyperspectral display on the other hand makes no sense (unless your target audience consists of mantis shrimps).


> I get excited every time I work with hyperspec stuff now and love talking about it to anyone that will listen.

Color is widely taught down to K-2, but content and outcomes are poor. So I was exploring how one might better teach color, with an emphasis on spectra. Using multispectral/hyperspectral images of everyday life, objects, and art, seemed an obvious opportunity. Mousing over images like[1] for example, showing spectra vaguely like[2]. But I found very few (non-terrain) images that were explicitly open-licensed for reuse. It seemed the usual issue - there's so much nice stuff out there, living only on people's disks, for perceived lack of interest in it. So FWIW, I note I would have been delighted to find someone had made such images available. Happy to chat about the area.

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html [2] https://imgur.com/a/teaching-color-using-spectra-zOtxQwe


With two wavelength-tunable LEDs you should be able to cover the entire CIE colorspace.

That's because the points on the outer edge of the CIE diagram are pure wavelengths, and you can get to any point inside by interpolating between two of them.
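The inverse problem is a one-line projection: given a target chromaticity on the chord between two spectral chromaticities, solve for the mixing proportion. The coordinates below are illustrative, roughly corresponding to 480 nm and 590 nm.

```python
import numpy as np

# Chromaticities of the two spectral endpoints (illustrative values).
c1 = np.array([0.091, 0.133])   # ~480 nm
c2 = np.array([0.575, 0.424])   # ~590 nm

def mix_fraction(target):
    """Fraction of c1 in the mix, in (X+Y+Z)-normalized units,
    assuming target lies on the chord from c1 to c2."""
    d = c2 - c1
    return np.dot(c2 - target, d) / np.dot(d, d)

# The chord midpoint needs an equal split of the two endpoints.
alpha = mix_fraction(0.5 * (c1 + c2))
```

One subtlety: chromaticities average with weights proportional to X+Y+Z, not radiant power, so converting the fraction back to LED drive levels needs the luminance terms from the color-matching functions.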


How do you make white?


E.g. mix 480nm cyan and 590nm orange.


Would this be practical? Or would it be similar to how printers have separate black ink, which is theoretically unnecessary?


By mixing two complementary colors.



