Suppose you have 6 bits per pixel. One thing you could do is have 64 colors. Another thing you could do is use 3 bits for one color palette, giving you 8 colors, but designate one of those colors as a transparency value. If the 3 bits match the transparency color, then refer to the other 3 bits.
This allows you to do stuff like redraw or scroll a lower or upper layer without changing the bits for the other layer. It takes less processing power and memory bandwidth to update 3 bits than it does to update all 6.
Oh also, even if the hardware doesn't have an overlay mode, you can make this happen by filling up your 64-entry color palette carefully.
For example, if you want 000 to be the 3-bit value indicating transparency, and you want 001 to be pink, you copy the RGB for pink into the palette at every index that begins with 001. So you set pink in 8 color palette entries (001000, 001001, 001010, 001011, 001100, 001101, 001110, and 001111). For the 8 colors that make up the other level of your overlay, you just set them once in the 8 positions that start with your transparency color.
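A minimal sketch of that fill, assuming the upper layer lives in the high 3 bits and value 000 is the transparent one (the function and type names are mine):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* Fill a 64-entry palette so a 6bpp screen acts like two 3-bit
   overlay layers. Layout assumed: upper layer in bits 5-3, lower
   layer in bits 2-0, upper value 000 = transparent. */
void build_overlay_palette(RGB palette[64],
                           const RGB upper[8], const RGB lower[8])
{
    for (int u = 0; u < 8; u++)
        for (int l = 0; l < 8; l++)
            /* Upper color 0 is "transparent", so the lower layer
               shows through; any other upper color wins outright. */
            palette[(u << 3) | l] = (u == 0) ? lower[l] : upper[u];
}
```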
Back in my greenhorn days on the Amiga, I remember feeling very pleased with myself for "inventing" (hah!) a cheesy semi-dynamic lighting system using a 1-bit stippled gradient plane to effectively switch the main bitmap between "bright" and "dark" sub-palettes, at no performance cost.
I'm sure this wasn't original, of course, but back before the Web most hobbyists were trapped in their own little silos reinventing the wheel over and over again. It was fun. Then Quake came out with its -ahem- rather better lightmapping and... yeah, couldn't really kid yourself any more.
Of course this also made the Amiga's HAM mode even more impressive: they squeezed 4096-color (effective 12bpp) imagery out of the slow RAM of the day by encoding each successive pixel in six bits that could modify only one component of the previous pixel (the famous Hold-and-Modify technique).
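For the curious, here's roughly what a HAM6 decoder does with those six bits. The 2-control/4-data split and the modify rules are the documented HAM6 scheme; the C names are just illustrative:

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB444;  /* each component 0-15 */

/* Decode one HAM6 scanline. Each 6-bit pixel: top 2 bits are the
   control, bottom 4 bits the data.
   00 = load from 16-entry base palette, 01 = modify blue,
   10 = modify red, 11 = modify green (the other two components
   are "held" from the previous pixel). */
void ham6_decode_line(const uint8_t *pix, int width,
                      const RGB444 base[16], RGB444 *out)
{
    RGB444 cur = base[0];   /* line starts from the border/background color */
    for (int x = 0; x < width; x++) {
        uint8_t ctrl = (pix[x] >> 4) & 3;
        uint8_t data = pix[x] & 0x0F;
        switch (ctrl) {
        case 0: cur = base[data]; break;   /* set: palette lookup */
        case 1: cur.b = data; break;       /* hold R,G; modify B  */
        case 2: cur.r = data; break;       /* hold G,B; modify R  */
        case 3: cur.g = data; break;       /* hold R,B; modify G  */
        }
        out[x] = cur;
    }
}
```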
The major downside was that you had to do some computational contortions to build an on-screen image out of computed pixel data, but of course if you were lucky enough to have a hardware blitter (pc suxx0rz!), planar graphics was no impediment to drawing lines or compositing images at arbitrary positions quickly.
>Imagine a 320x200 screen with 32 colors. If we had padded each pixel out, it would require 64,000 bytes.
>but you cannot really cut up an array of bytes into 5-bit chunks easily. You could pad it
Sure you can! Nothing forces you to put a full 8 bits of actual memory behind every video address. You could transparently translate memory accesses between 5/6 bit and 8 bit in the background, mapping a 40-48KB area of real RAM into a 64KB byte-addressable window. You could even transparently translate between planar and "chunky" (Akiko). Those were never technical problems, but the result of a lack of technically minded leadership. Projects at Commodore were mainly driven by accountants (let's get rid of all the outdated garbage in storage = C128) and ignorant non-technical execs.
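For the quoted 5-bit point specifically, the software version of that unpacking is fiddly but mechanical. A sketch, assuming LSB-first packing and a buffer padded with one spare byte at the end:

```c
#include <stdint.h>

/* Read pixel i from a buffer of packed 5-bit pixels. Pixels can
   straddle byte boundaries, so assemble 16 bits and shift.
   (Buffer must be padded by one extra byte for the final pixel.) */
uint8_t get_pixel_5bpp(const uint8_t *buf, uint32_t i)
{
    uint32_t bit  = i * 5;
    uint32_t byte = bit >> 3;
    uint16_t w = (uint16_t)(buf[byte] | (buf[byte + 1] << 8));
    return (w >> (bit & 7)) & 0x1F;
}
```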
If I understand correctly, the performance impact is complicated, and is both worse and better than your quick explanation suggests.
An example of how it might be worse: suppose I'm drawing a one-pixel-wide vertical line on a 32-color (5-plane) screen. I not only need to do 5 writes, I also need to do 5 reads, because it's a byte-addressable system and I need to change 1 bit while leaving the other 7 bits unchanged. So I must read, apply a mask, and write.
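A sketch of that inner loop, assuming 5 planes of a 320-pixel-wide screen (40 bytes per row) and MSB-first pixel order; the names are made up:

```c
#include <stdint.h>

#define PLANES    5
#define ROW_BYTES 40   /* 320 pixels / 8 */

/* Draw a 1-pixel vertical line on a planar screen: every pixel
   touched needs a read-modify-write in every plane. */
void vline(uint8_t *plane[PLANES], int x, int y0, int y1, uint8_t color)
{
    uint8_t mask = 0x80 >> (x & 7);   /* leftmost pixel = MSB */
    int     off  = x >> 3;
    for (int y = y0; y <= y1; y++) {
        for (int p = 0; p < PLANES; p++) {
            uint8_t *b = &plane[p][y * ROW_BYTES + off];
            if (color & (1 << p)) *b |=  mask;   /* read, OR, write  */
            else                  *b &= ~mask;   /* read, AND, write */
        }
    }
}
```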
On the other hand, if I'm copying a bitmap or filling an area, it doesn't create much of a performance penalty at all. Who cares whether I copy 500 bytes from one memory location to another, or 5 sets of 100 bytes from 5 locations to 5 others?
It also doesn't create much of a performance penalty if I'm drawing a horizontal line because those neighboring pixels are neighboring bits and you can do them all in one go. (You can get crazy and apply this to nearly-horizontal lines too.)
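Sketching the horizontal case under the same assumptions (reusing PLANES and ROW_BYTES from the sketch above): interior bytes are written whole, 8 pixels at a time, and only the ragged ends need the read-mask-write dance:

```c
/* Fill pixels x0..x1 on row y with `color`, one plane at a time.
   Interior bytes are written whole (8 pixels per access); only the
   partial bytes at the ends need a read-modify-write. */
void hline(uint8_t *plane[PLANES], int x0, int x1, int y, uint8_t color)
{
    uint8_t lmask = 0xFF >> (x0 & 7);        /* ragged left edge  */
    uint8_t rmask = 0xFF << (7 - (x1 & 7));  /* ragged right edge */
    int b0 = x0 >> 3, b1 = x1 >> 3;
    if (b0 == b1) lmask &= rmask;            /* span fits in one byte */
    for (int p = 0; p < PLANES; p++) {
        uint8_t *row = &plane[p][y * ROW_BYTES];
        uint8_t  set = (color & (1 << p)) ? 0xFF : 0x00;
        row[b0] = (row[b0] & ~lmask) | (set & lmask);
        for (int b = b0 + 1; b < b1; b++) row[b] = set;  /* whole bytes */
        if (b1 > b0) row[b1] = (row[b1] & ~rmask) | (set & rmask);
    }
}
```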
An interesting corollary of this is that horizontal lines are way faster to draw than vertical ones.
As you gain more bandwidth, memory, and computing power you want different tradeoffs. Look at the history of DirectX for example and you see a huge range of different approaches show up and then get discarded.
Trying to keep up with this curve takes a lot of resources for R&D which is why so much consolidation happened.
This trick becomes far less useful if you have cycles to spare.
I disagree with the categorical dismissal. IMO it was a good idea in 1985 with 256k chip RAM, not so much in 1994 with 2M chip RAM, post-Doom.
EDIT: Oops, I partially messed up this description. The MSB in each byte of the display memory did set the "palette" (phase, actually, according to Wikipedia) for each set of 6 pixels. So you could have either blue/orange or green/purple in that set, with the exact color depending on which alternating bit pattern you used.
Also on the CoCo 3, there was the 640 x 192 4-color "hires" mode. Way, way back - I believe published in Hot CoCo - someone figured out that using a television, and setting those 4 colors to the grayscale in the 64 color palette (black, dark grey, light grey, and white) - and with proper 4-bit pixel settings, you could get a virtual 128x192 image with colors.
Basically the same technique (called artifact colors, or artifacting) - but applied to the high-res screen of the CoCo 3.
Unfortunately - he sent it to Hot CoCo! Had he sent it to the Rainbow instead...things might have been different. You see, by sending it to that magazine, it didn't reach a large audience, so only a few people saw it, nobody much played with it, and it faded into history.
On the Rainbow side of things, there were a couple of articles about using patterns of color to create "virtual colors" - but it was geared toward the RGB monitor (CM-8), and not really the TV - and using colors, not the gray scales...
So - what am I rambling about then?
Well - long story short - this:
A virtually unknown "256 color" mode on the Color Computer 3 that was lost to history.
Now - this isn't the mystery 256 color mode that has been described elsewhere hiding in the GIME chip (and most people believe that it is a false rumor; but the Microware CoCo 3 prototype holds out hope) - but it is a working "hi color" mode available on the CoCo 3.
Unfortunately, today, it is only a curiosity more than anything, and it is unknown whether it could have been used back then; from what I recall, they had to use a PC to figure out the proper bit patterns for each of the colors, and that kind of processing would have been difficult at best back then.
EDIT: This really shows off what is possible with the mode:
Sure wish it had been used more back in the day. I always thought the Sierra games were a good fit.
The CoCo3 also does 50Hz NTSC. Almost all US displays will take it, and it gives one a bit more time to push pixels.
One only needed a PC to derive a palette. The bit patterns are straight up binary. One byte per pixel, $0 to $ff.
No color redirection though. It is an absolute value to color display.
A palette, arranged nicely by hue and luma, takes just a single 256-byte page of RAM. Get the raw values into a PC, sort them, save off as byte data organized by hue and shade...
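A sketch of that PC-side step, assuming you've measured an RGB value for each of the 256 byte patterns; the hue formula and all names here are just one way to do it:

```c
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

typedef struct { uint8_t pattern; float hue, luma; } Entry;

static int by_hue_then_luma(const void *a, const void *b)
{
    const Entry *x = a, *y = b;
    if (x->hue != y->hue) return (x->hue < y->hue) ? -1 : 1;
    return (x->luma < y->luma) ? -1 : (x->luma > y->luma);
}

/* rgb[i] = measured color of byte pattern i; out[] = the 256-byte
   "palette page" ordered by hue, then brightness. */
void sort_palette(const uint8_t rgb[256][3], uint8_t out[256])
{
    Entry e[256];
    for (int i = 0; i < 256; i++) {
        float r = rgb[i][0], g = rgb[i][1], b = rgb[i][2];
        e[i].pattern = (uint8_t)i;
        e[i].luma = 0.299f * r + 0.587f * g + 0.114f * b;   /* BT.601 luma */
        e[i].hue  = atan2f(1.7321f * (g - b), 2.0f * r - g - b); /* crude hue angle */
    }
    qsort(e, 256, sizeof e[0], by_hue_then_luma);
    for (int i = 0; i < 256; i++) out[i] = e[i].pattern;
}
```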
On 6809, 1 byte per pixel is sweet. Like the chip was made for it. If you want to abuse the stack, compiled sprites are fast.
It could be used back in the day. I did that on my CoCo in the 80's and 90's. Never developed anything big with it, but it was not difficult.
Found it one day when I did not configure the GIME chip properly. I just happened to write binary sequences... saw the rainbow, knew about artifact color on the Apple 2 and the rest was easy.
Later, with microcontrollers, I duplicated the CoCo 3 and then doubled the bit resolution and or depth. It is possible to get very full color with just monochrome bits running fast relative to the NTSC signal.
Similar to the Amiga's HAM mode, where you could get 4,096 simultaneous colors, but with the restriction that each pixel could differ from the preceding one in only one of the R, G, or B values.
So I am a little fuzzy on the NTSC standard, because it has been a while since I looked at it... but the hack goes something like this:
Monochrome TV puts video on an upper vestigial sideband with the carrier at (IIRC) 1.25 MHz up from the lower edge of the 6 MHz channel. (The high frequency parts of the lower sideband of the video signal are whacked off with a filter.) The FM sound subcarrier is up closer to the top edge, I forget what the offset is exactly.
Then color came along... so they added a color subcarrier just below the sound subcarrier. This reduced the bandwidth available for luminance, but got you color in return. The color subcarrier IIRC is kind of a flavor of independent sideband, but is really two double sideband signals on the same carrier, 90 degrees out of phase with each other, one skinny, and the other fat but with a notch filter applied to allow the skinny signal to ride in the middle. Or something like that.
Anyway... if you send a monochrome signal with high-frequency components (high frequency == high dot rate) that fall in the range of the color sub-carrier, you will get color artifacts. This is because the color decoder simply responds to the rf it is getting, whether on a proper subcarrier or not.
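You can watch this happen numerically: model the dot stream as a luma waveform and correlate it against the subcarrier, which is essentially what the TV's chroma demodulator does. A toy model, assuming a dot clock of exactly twice the subcarrier (as on several home machines of the era); magnitude of I/Q maps to saturation, phase to hue:

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FSC 3579545.0   /* NTSC color subcarrier, Hz */

/* Correlate a 1bpp pixel stream against the subcarrier, the way a
   chroma demodulator would, even though we only sent "monochrome" bits. */
void artifact_iq(const int *bits, int n, double *i_out, double *q_out)
{
    double dot_rate = 2.0 * FSC, I = 0, Q = 0;
    for (int k = 0; k < n; k++) {
        double t = k / dot_rate;
        I += bits[k] * cos(2 * M_PI * FSC * t);
        Q += bits[k] * sin(2 * M_PI * FSC * t);
    }
    *i_out = I / n; *q_out = Q / n;
}

int main(void)
{
    int a[] = {1,0,1,0,1,0,1,0};   /* alternating dots: strong chroma */
    int b[] = {1,1,1,1,1,1,1,1};   /* solid: pure luma, zero chroma   */
    double I, Q;
    artifact_iq(a, 8, &I, &Q); printf("10 pattern: I=%.2f Q=%.2f\n", I, Q);
    artifact_iq(b, 8, &I, &Q); printf("11 pattern: I=%.2f Q=%.2f\n", I, Q);
    return 0;
}
```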
Another thing I used to know but am fuzzy about is I am pretty sure you need to put out the standard color burst on the back porch of the horizontal sync. Otherwise, the TV receiver will not sync to the "color subcarrier" that you are faking with artifacts.
I remember wondering how the game "knew" what kind of screen it was on. How did it change from a solid color (for the lucky kids) to a pattern (for the rest of us)?? I just shrugged it off as the computer knowing the display type from the connection.
Those two images in your link made it all just click right now, 30 years later.
Using a graphics card to do a byte-per-pixel main display (polygon + scaled-sprite 3D), and three monochrome displays taken from the red, green, and blue outputs of the Amiga's standard video out. Each of the monochrome displays used a single bitplane plus a shared random-static bitplane. By adjusting the palette you could individually control which monitor the image appeared on, how bright it was, and how much static interference to display. Various other failure modes were also added via copper tricks (roll, glitching, etc.) without modifying the base bitmap.
It also means you can build things like a pixel copy for arbitrary-depth pixels out of a single one-bit copy engine run 'n' times.
With memory and transistors getting cheap though, all of that became unnecessary it seems.
It makes sense for colour modes below 8 bits per pixel, but not for 256 colours.
Pondering this question after 20 years.
There's a bit about this on the Wikipedia page: https://en.wikipedia.org/wiki/Mode_13h
But you can do RGB 565, which is exactly what mobile GPUs do (or used to do, back when memory cost and bandwidth were more limited).
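The packing itself is just bit shifts; a standard RGB565 pack looks like this:

```c
#include <stdint.h>

/* Pack 8-bit-per-channel RGB into a 16-bit RGB565 pixel:
   5 bits red, 6 bits green (eyes are most sensitive to green),
   5 bits blue. No padding wasted, at the cost of precision. */
static inline uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```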
RAM size only had to double a few times for this to become an entirely outdated concern, but that's what they had to work with at the time they designed the chipset. Still no excuse for AGA!
As mentioned, with a SoA approach it's much easier to replace a subset of the entire data. If you have control over the array pointers you could simply swap pointers, leading to an essentially free update.
Downside is that it's a lot more work (and usually less cache friendly) if you need to update all the data.
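A sketch of the pointer-swap idea in SoA form (the particle layout and names are invented for illustration):

```c
#include <stddef.h>

/* Struct-of-arrays particle store: each attribute lives in its own
   array, like a "plane" of the data set. */
typedef struct {
    float *x, *y;     /* positions  */
    float *vx, *vy;   /* velocities */
    size_t count;
} Particles;

/* Replace just the velocity data: no per-element copy, no touching
   the position arrays. With AoS you'd rewrite part of every element. */
void swap_in_velocities(Particles *p, float *new_vx, float *new_vy)
{
    p->vx = new_vx;
    p->vy = new_vy;
}
```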
Back in the days of planar graphics, usually cache didn't matter because there wasn't one. The difference between CPU and memory speeds was not so lopsided as it is today. Or you could say, memory was fast enough because CPUs were very slow.
Isn't a column-oriented database pretty much just an SoA on disk? And a row-oriented database pretty much an AoS on disk, modulo some complexity around indexes?
So I'd imagine it's just that a) very few people care any more and b) if you do care, it's trivial to do it yourself with a dependent read in your fragment shader.
Planes are a nice abstraction, because they allow you to use the concept of a raster surface for all kinds of things. These things might be bits of colour, but they might be metadata in an image analysis program, or a masks in a graphics editing program, z-buffers for 3-d graphics, or really anything.
(Admittedly, I am talking about an abstract concept of "plane", whereas the original article is presumably talking about the concrete behaviour of hardware.)
What do you mean? The NES never changed, and the Amiga was always compatible with itself; nothing changed.
Together with the two posts below about RAM speed and >64K, it seems quite clear. Thx.
As for 4k displays - most of those cones are in the center of your FOV, and eye movement is part of human vision (saccades). That's why 4k VR headsets still look pixelated. Also, you don't just sit in front of your screen staring straight ahead like a zombie (at least, I don't). You move your head. You lean in. By your logic, the edges of a screen shouldn't even be in color!
It's like saying a keyboard with more than 10 keys doesn't make sense because we only have 10 fingers.