Very neat. The outcome probably isn't what you were hoping for, but it has a beautiful and ethereal quality to it. It's like an art piece.
From the photos and video, only a small section of the screen has colour. Does that change as you move around, or is the coloured section static?
Off topic, but the visual effect seen here is a lot like how my memories and dreams feel; only the thing I'm currently focussing on has any colour or detail, whilst everything else is muted and fuzzy.
Yeah I would like to improve the effect. I think the main colour region seems to stay the same.
I'm wondering if I can get acetate that's ever so slightly adhesive, so it sticks better to the monitor; that might help the colours show better all over.
Acetate sheets for use on overhead projectors (OHP) used to come on a plain white backing sheet. The top was glossy smooth, and the back was a little tacky; if you can find those ...
Also curious to know if you can get slightly better color spread by looking at it from far away; I assumed the misaligned light or distortion through the glass might be enough to throw things off if you're working with tolerances as small as it looks.
Or, maybe the electron beam scatters more as it's deflected to hit the outer pixels, possibly giving you a signal too muddy for your magic potato pixels to work with.
(these are obviously just vague guesses, I didn't even know this was possible before today)
Tektronix did it with a totally wild set of LCD color filter shutters over the whole face of the CRT, and then sequential color fields. It meant you got the whole spatial resolution of the display, but one third the temporal resolution. Similar to color-wheels in DLP projectors, really.
The system was called NuColor, if you want more detail.
The Bayer filter method in TFA is sensitive to perfect alignment between the filter and the image, which proved impractical in the real world. Sure makes for a neat experiment, though.
Man, that was the golden age of analog video right there. In the 90s it became just barely possible to do a lot of processing of analog video using a shitload of expensive electronics, and people did. Those were the days of "line doubling" deinterlacers that were as big as shoeboxes and as expensive as cars.
Can someone explain how this works? I have frustratingly been googling for 30 minutes now trying to learn how autochrome works. Being tech oriented, it's so frustrating when people sum it up in one sentence: "it's made of potato starch and magically it makes color!"
How does it work? Do they hand paint the screen? How does it actually add red or blue to an arbitrary shade of gray which lacks any chroma? Does it work with video too?
The trick is that the shade of grey DOES have chroma. You can't just apply any autochrome panel to the image (unlike what the OP is trying to do with Rainbow); instead, the negative is generated through the chrominance filter itself.
Imagine that each grain is blocking a 'pixel' of the negative. For simplicity of this explanation, I'll pretend the grains are red, green, and blue. If there's a blue grain, only blue light can get through, and thus the luminance is baked in specific to the amount of blue in that region. The same is true for the red, and the green.
Thus, each 'pixel' of the negative isn't determining a monochromatic luminosity (ie black and white), but instead telling the luminosity of a single channel of red, green, or blue.
When the grain filter and the developed negative are combined, we get natural colour. The same filter is used for generation and display.
Another way to think of it is like this: Each pixel on our screen is actually 3 subpixels, each defining red, green, and blue, and we format our image to alternate between telling the red, green, and blue channels how bright to be. This is doing the same, but the subpixels are distributed randomly because that's much easier to make.
EDIT:
If you organise the grains in a geometric pattern, you get Dufaycolor[1].
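The capture-and-view explanation above can be sketched in a few lines. This is a toy simulation, not anything from the article: it assumes a random one-of-R/G/B "grain" per pixel and uses NumPy to show that recording a greyscale negative through the filter, then viewing it back through the same filter, recovers each pixel's colour in the channel its grain sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy colour "scene": height x width x 3 (RGB), values in [0, 1].
scene = rng.random((4, 4, 3))

# Random mosaic filter: each pixel passes exactly one of R, G, B,
# like the randomly scattered starch grains on an autochrome plate.
channel = rng.integers(0, 3, size=scene.shape[:2])
mosaic = np.eye(3)[channel]               # one-hot, shape (h, w, 3)

# "Capture": the monochrome negative records only the luminance of
# whichever channel each grain lets through.
negative = (scene * mosaic).sum(axis=-1)  # greyscale, shape (h, w)

# "View": shining light back through the SAME filter tints each
# greyscale pixel with its grain's colour.
viewed = negative[..., None] * mosaic     # shape (h, w, 3)
```

Each viewed pixel is zero outside its grain's channel and matches the scene exactly inside it; your eye then averages neighbouring grains into a full-colour impression.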
What TFA is doing is basically the inverse of a digital camera sensor. They're essentially putting a coloured tint in front of each pixel (actually a block of 4 pixels) in a Bayer pattern (https://en.wikipedia.org/wiki/Bayer_filter), and using software to mosaic a colour image into its Bayer components, so that a block of four pixels under a green tint, for example, is lit only with its green channel, and so on for each colour. You lose some spatial resolution this way, of course.
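The mosaicing step described here can be sketched as follows. This is my own guess at the approach, not the article's actual code: it assumes an RGGB cell layout and a configurable cell size (the `block` of pixels under each tint), and keeps only the channel matching each pixel's tint.

```python
import numpy as np

def mosaic_bayer(img, block=2):
    """Keep only the channel matching each pixel's Bayer tint.

    img: (h, w, 3) RGB array. Assumes an RGGB layout of tinted
    cells, each cell `block` pixels square (a guess at the
    article's layout, where one tint covers a block of pixels).
    """
    h, w, _ = img.shape
    # Channel index per Bayer cell:  R G      0 1
    #                                G B  ->  1 2
    rggb = np.array([[0, 1], [1, 2]])
    ys = (np.arange(h) // block) % 2
    xs = (np.arange(w) // block) % 2
    chan = rggb[ys[:, None], xs[None, :]]   # (h, w) channel index
    # Zero out the two channels each tint would block anyway.
    return img * np.eye(3)[chan]
```

On a pure white input, each pixel keeps exactly one channel, its tint; the filter sheet then passes only that light, and at a distance the blocks blend back into colour.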
Autochrome was an early technique for color photography. Wikipedia has a pretty good explanation [1].
The starch grains were randomly scattered on one side of the photographic plate, and acted as filters so that small portions of the plate would capture certain colors. The plate was then developed as a positive image.
When you would shine light through the developed plate it would go through the starch grains (still on the plate) as well as the developed image. The developed image controls the luminosity while the colored starch grains again act as a filter and control the hue of the transmitted light, roughly reproducing the original scene.
> go through the starch grains (still on the plate)
This part is key to understanding. The plate with the starch-based colored mosaic is not simply a filter used upon capture to be removed after (as most filters are today), but also a filter to be used upon viewing. Since the same exact “filter” is used for both capturing and viewing of a given photograph, it allows the mosaic to be random, simplifying creation of the filter.
So... I wonder if the Bayer matrix on digital sensors was inspired by this. If each sensor were also a display, and the Bayer matrix were random, it would be the same thing...
Autochrome is pretty much just subpixels. You capture and view a greyscale image through the same dots-of-color filter plate, and thus you see color, just like an image sensor uses a Bayer filter to deconstruct color into greyscale (with three times as many pixels), and then your screen has three subpixels per pixel which you perceive as a solid color because they're sorta far away and kinda small.
Thanks for posting that. My grandmother told me about having that type of "color" tv before and I had never been able to find it. I had decided it must have just been some little cereal box kit with stick on color filters or something.
Oh, I hoped to see something based on Fechner colours (https://en.m.wikipedia.org/wiki/Fechner_color). It gave me quite some creeps when a misconfigured XFree86 server managed to draw a bright red line right across the black and white LCD screen of my first laptop...
Does anyone have an explanation of what I should be seeing here? I've been staring at it for a while (both full-screen and in the article), and I haven't noticed anything other than black and white.
In the spinning animation, the black bands in the white parts quickly turn another colour (brown for me, apparently blue or other colours for others) while the image is spinning.
The effect is much reduced if I tap on the image to make it full screen on mobile. The larger size seems to reduce the velocity of the spin, which seems to be an essential part of it. Could this be the reason you don't see it? Are you perhaps on a low-performance device?
Just for the sake of completeness, the effect is incomplete for me: the outer black bands are always brown, but the inner two switch between black and brown. The middle one is more often black than brown, but it seems when that one's brown, the innermost one instead turns black.
When I had a Commodore PET computer I tried making colour by flashing areas at different rates. You can get a very limited colour by changing the length of flashes but not only did I get it to work (poorly) I also got terrible eye strain looking at those flashing squares.
I had no idea B&W LCD monitors even existed. It looks like they are primarily marketed for medical imaging purposes; I'm curious what advantage they have over displaying grey-scale images on a good color monitor. I'm guessing by the sub-pixels it is made from mostly the same panel as a color model.
I perused Eizo's website, and found three B&W monitors they currently sell, their "GX" series. Looking at the specs, the thing that stands out to me is that they're really bright: 1,200 cd/m^2 for the GX240 and GX340, and a whopping 2,500 cd/m^2 for the GX560! I bet these monitors are brighter than equivalently-backlit color displays from not having a Bayer filter in front.
Speculations on why medical monitors might be that bright: they can probably be better read in brightly-lit medical settings, shine through overlays and negatives placed over them, and be more easily read by some patients with poor vision.
They also offer palettized grayscale: up to 1,024 tones from a palette of 16,369. I'm not sure how useful that is in practice, but it is a unique feature.
It reminds me of the 68k Mac days, when you could set your screen to 256 colors, thousands of colors, or thousands of greys. You had to choose a tradeoff against resolution, and the greyscale mode was primarily for desktop publishing applications à la QuarkXPress and PageMaker.
From their website --
> 10-Bit Simultaneous Grayscale Display
> 10-bit (1,024 tones) simultaneous grayscale display extends grayscale fidelity to the boundaries of human visual perception abilities and helps radiologists discern the finest nuances within an image.
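To make the quoted spec concrete, here's a minimal sketch (my own, not from Eizo's docs) of squeezing higher-bit-depth source data into the 1,024 simultaneous tones a 10-bit panel can show. It assumes 12-bit input and a plain linear mapping; real medical displays instead calibrate tones to the DICOM grayscale standard display function.

```python
import numpy as np

def to_10bit_tones(pixels, src_bits=12):
    """Quantise higher-bit-depth greyscale data (e.g. 12-bit
    medical pixel values) to the 1,024 tones of a 10-bit panel.

    Simple linear truncation for illustration; a real pipeline
    would map through a perceptual (DICOM GSDF) curve instead.
    """
    pixels = np.asarray(pixels, dtype=np.uint32)
    return (pixels >> (src_bits - 10)).astype(np.uint16)
```

With only 8 bits (256 tones) you'd throw away four times as much of that range, which is where the banding radiologists care about comes from.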
The gentle gradients and transformations from black & white to color are charming. It made me think of the scene in Wizard of Oz where Dorothy steps into Technicolor.
One reason color TV wasn't implemented like this is the need for a backwards-compatible signal and the ability to decode that signal with analog circuitry.
I bet you could get a lot better color if you used the screen itself to generate the bayer pattern on film rather than printing it with a printer.
Get some large format color negative film and thin cyan, magenta, and yellow filters. Put a cyan filter on the LCD, followed by the film on top. In a darkroom, light up the "red" pixels to expose that part of the film. Repeat with the magenta filter and "green" pixels, and finally the yellow filter and "blue" pixels. Develop the film and reattach to the LCD.
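The three darkroom passes above each need a full-screen frame that lights only one channel's pixels. A quick sketch of generating those frames (assuming an RGGB cell layout and cell size, neither of which the comment specifies):

```python
import numpy as np

def exposure_masks(h, w, block=2):
    """Three greyscale frames for the darkroom procedure: frame k
    is white only where the Bayer cell belongs to channel k
    (0 = "red" pixels for the cyan-filter pass, 1 = "green" for
    magenta, 2 = "blue" for yellow). RGGB layout is an assumption.
    """
    rggb = np.array([[0, 1], [1, 2]])
    ys = (np.arange(h) // block) % 2
    xs = (np.arange(w) // block) % 2
    chan = rggb[ys[:, None], xs[None, :]]
    return [(chan == k).astype(np.uint8) * 255 for k in range(3)]
```

Since each pixel is lit in exactly one pass, the three exposures tile the film with no gaps or double exposures, and alignment comes for free because the screen itself is the light source.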
This is kind of how the shadow mask was made in color CRT monitors. A photographic process was used to put the red, green, and blue phosphor dots on the screen by shining light through the shadow mask from the same angles the electron guns would later illuminate the phosphors. This ensures that the red, green, and blue phosphor dot pattern lines up (fairly) accurately with the shadow mask.
An aside: I saw "Up" (the movie excerpted in the demo video) years ago, but I never stopped and thought about the physics until today. The house would have started rising as soon as sufficient balloons were inflated inside the house. Releasing the balloons doesn't magically start making them buoyant-- they're already buoyant when they're inside the house.
I think the limiting factor of the color intensity is the dye/ink used to make the mask. Probably each pixel is only blocking like 50% of the light in its stop band.