Oldschool planar graphics and why they were a good idea (codeplusplus.blogspot.com)
124 points by bibyte 14 days ago | 65 comments



Another trick I believe you can do with planar graphics is overlays.

Suppose you have 6 bits per pixel. One thing you could do is have 64 colors. Another thing you could do is use 3 bits for one color palette, giving you 8 colors, but designate one of those colors as a transparency value. If the 3 bits match the transparency color, then refer to the other 3 bits.

This allows you to do stuff like redraw or scroll a lower or upper layer without changing the bits for the other layer. It takes less processing power and memory bandwidth to update 3 bits than it does to update all 6.

Oh also, even if the hardware doesn't have an overlay mode, you can make this happen by filling up your 64-entry color palette carefully.

For example, if you want 000 to be the 3-bit value indicating transparency, and you want 001 to be pink, you copy the RGB for pink into the palette at every index that begins with 001. So you set pink in 8 color palette entries (001000, 001001, 001010, 001011, 001100, 001101, 001110, and 001111). For the 8 colors that make up the other level of your overlay, you just set them once in the 8 positions that start with your transparency color.
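Filling the palette for that trick is just a nested loop. A rough sketch (hypothetical types; how the palette actually gets loaded depends on the hardware):

    #include <stdint.h>

    /* Plane bits 5..3 hold the "upper" layer, bits 2..0 the "lower" layer;
       upper value 000 means "transparent, show the lower layer". */
    typedef struct { uint8_t r, g, b; } Rgb;

    void build_overlay_palette(Rgb palette[64],
                               const Rgb upper[8],   /* upper-layer colors (index 0 unused) */
                               const Rgb lower[8])   /* lower-layer colors */
    {
        for (int u = 0; u < 8; u++) {
            for (int l = 0; l < 8; l++) {
                int i = (u << 3) | l;
                if (u == 0)
                    palette[i] = lower[l];   /* transparent: lower layer shows through */
                else
                    palette[i] = upper[u];   /* opaque: same color in all 8 slots      */
            }
        }
    }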


Even without overlays, being able to reuse the same bitmap with multiple stretches of the palette (by combining it with a solid colour as palette offset) was nice. It got you a bit more variety in the scene when memory was tight, and memory was always tight.

Back in my greenhorn days on the Amiga, I remember feeling very pleased with myself for "inventing" (hah!) a cheesy semi-dynamic lighting system using a 1-bit stippled gradient plane to effectively switch the main bitmap between "bright" and "dark" sub-palettes, at no performance cost.
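The gist, as a sketch rather than the original code: the 16 base colours get darkened copies in the upper half of a 32-entry palette, so the extra plane effectively becomes a per-pixel light/shadow bit for free.

    #include <stdint.h>

    typedef struct { uint8_t r, g, b; } Rgb;

    /* Entries 0..15 = "bright" sub-palette, 16..31 = darkened copies.
       A fifth bitplane then selects bright vs. dark per pixel. */
    void build_light_dark_palette(Rgb palette[32], const Rgb base[16])
    {
        for (int i = 0; i < 16; i++) {
            palette[i] = base[i];
            palette[16 + i].r = base[i].r / 2;
            palette[16 + i].g = base[i].g / 2;
            palette[16 + i].b = base[i].b / 2;
        }
    }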

I'm sure this wasn't original, of course, but back before the Web most hobbyists were trapped in their own little silos reinventing the wheel over and over again. It was fun. Then Quake came out with its -ahem- rather better lightmapping and... yeah, couldn't really kid yourself any more.


Yes, I did the same on my Atari, fun times :-)


Planar graphics was also a thing because of the slow RAM bandwidth of the day. RAM was fast enough to allow the video hardware to stream a 1bpp image at the rate needed by the electron beam of a typical CRT -- but higher rates like 4 or 8bpp were troublesome and 24bpp was out of the question. However, by arranging the data in planar format, you could pull the bits for a single pixel from more than one RAM chip at a time, and through such parallelism achieve the much higher throughput it took to display high-color images. Innovations in the late 80s/early 90s, like high-speed VRAM, made true-color "chunky" graphics practical.

Of course this also made the Amiga's HAM mode even more impressive, as they managed to squeeze out 4096-color (effective 12bpp) imagery out of the slow RAM of the day by allowing each successive pixel to be encoded in six bits by modifying only one component of the previous pixel (the famous Hold and Modify technique).
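Roughly, the HAM6 decode rule looks like this (a sketch; it assumes each line starts from palette colour 0, whereas the real hardware starts from the border/background colour):

    #include <stdint.h>

    typedef struct { uint8_t r, g, b; } Rgb12;   /* 4 bits per component, 0..15 */

    void ham6_decode_line(const uint8_t *pix6, int width,
                          const Rgb12 palette[16], Rgb12 *out)
    {
        Rgb12 prev = palette[0];                 /* assumption: start from colour 0 */
        for (int x = 0; x < width; x++) {
            uint8_t v   = pix6[x] & 0x3F;
            uint8_t ctl = v >> 4;                /* top two bits: what to do          */
            uint8_t val = v & 0x0F;              /* bottom four bits: index or value  */
            switch (ctl) {
                case 0: prev = palette[val]; break;   /* set from base palette  */
                case 1: prev.b = val; break;          /* hold R,G; modify blue  */
                case 2: prev.r = val; break;          /* hold G,B; modify red   */
                case 3: prev.g = val; break;          /* hold R,B; modify green */
            }
            out[x] = prev;
        }
    }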

The major downside was that you had to do some computational contortions to build an on-screen image out of computed pixel data, but of course if you were lucky enough to have a hardware blitter (pc suxx0rz!), planar graphics was no impediment to drawing lines or compositing images at arbitrary positions quickly.


Yep, this is consistent with my understanding, I'd never heard the rationale put forth in the linked post.


They weren't, and they pretty much signed the Amiga's death warrant. Planar is a cute trick when you are moving around static, limited-color shapes on the screen (fewer colors, fewer writes), but it fails miserably at single-pixel addressing. All of a sudden updating one pixel costs you 5 (OCS), 6 (ECS) or 8 (AGA) individual writes! Thus no Wolfenstein 3D/Doom on the Amiga. Commodore finally tackled this problem 7 months before going bankrupt, just as it ran out of money to actually manufacture the new product, while still paying its CEO better than IBM paid theirs.

>Imagine a 320x200 screen with 32 colors. If we had padded each pixel out, it would require 64,000 bytes.

>but you cannot really cut up an array of bytes into 5-bit chunks easily. You could pad it

Sure you can! Nothing forces you to put a full 8 bits of actual memory inside the video address space. You could transparently translate memory accesses between 5/6 bits and 8 bits in the background, mapping a 40-48KB real RAM area into a 64KB byte-addressable window. You could even transparently translate between planar and "chunky" (Akiko). Those were never technical problems, but the result of a lack of effective leadership. Projects at Commodore were mainly driven by accountants (let's get rid of all the outdated garbage in storage = C128) and ignorant non-technical execs.


If it signed Amiga's death warrant, it was one of a long list of signatories.

If I understand correctly, the performance impacts are complicated - both worse and better than your quick explanation suggests.

An example of how it might be worse: suppose I'm drawing a one-pixel-wide vertical line on a 32-color (5-plane) screen. I not only need to do 5 writes, but I also need to do 5 reads, because it's a byte-addressable system and I need to change 1 bit while leaving the other 7 bits unchanged. So I must read, apply a mask, and write.
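Something like this per pixel, as a sketch, assuming a 320-pixel-wide, 5-plane layout (40 bytes per row per plane):

    #include <stdint.h>

    /* Plot one pixel of colour c: one read-modify-write per plane. */
    void plot_planar(uint8_t *planes[5], int x, int y, uint8_t c)
    {
        int     offset = y * 40 + (x >> 3);   /* byte holding this pixel */
        uint8_t mask   = 0x80 >> (x & 7);     /* bit within that byte    */

        for (int p = 0; p < 5; p++) {
            uint8_t old = planes[p][offset];
            if (c & (1 << p))
                planes[p][offset] = old | mask;
            else
                planes[p][offset] = old & ~mask;
        }
    }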

On the other hand, if I'm copying a bitmap or filling an area, it doesn't create much of a performance penalty at all. Who cares whether I copy 500 bytes from one buffer memory location to another, or 5 sets of 100 bytes from 5 memory locations to 5 others?

It also doesn't create much of a performance penalty if I'm drawing a horizontal line because those neighboring pixels are neighboring bits and you can do them all in one go. (You can get crazy and apply this to nearly-horizontal lines too.)

An interesting corollary of this is that horizontal lines are way faster to draw than vertical ones.


EGA and up handles this by beefing up hardware support for read-modify-write operations and setting/unsetting bits in bulk, eg. you can XOR 8 pixels with a bit pattern on three out of four planes simultaneously with a single memory write.
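A rough sketch of how that looks in practice (outb() here stands in for whatever port-output routine the DOS-era compiler provides - outportb, outp, or inline asm):

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t value);  /* placeholder */

    /* XOR a bit pattern into 8 pixels at once on the planes selected by
       plane_mask (e.g. 0x07 = planes 0-2), with a single memory write.
       vram should point into the A000 video segment. */
    void xor_8_pixels(volatile unsigned char *vram, unsigned offset,
                      uint8_t pattern, uint8_t plane_mask)
    {
        outb(0x3C4, 0x02); outb(0x3C5, plane_mask);  /* Sequencer Map Mask       */
        outb(0x3CE, 0x03); outb(0x3CF, 0x18);        /* GC Function Select = XOR */

        (void)vram[offset];         /* dummy read loads the plane latches */
        vram[offset] = pattern;     /* one write XORs 8 pixels            */

        outb(0x3CE, 0x03); outb(0x3CF, 0x00);        /* restore defaults */
        outb(0x3C4, 0x02); outb(0x3C5, 0x0F);
    }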


Like many optimizations it’s hardware dependent.

As you gain more bandwidth, memory, and computing power you want different tradeoffs. Look at the history of DirectX for example and you see a huge range of different approaches show up and then get discarded.

Trying to keep up with this curve takes a lot of resources for R&D which is why so much consolidation happened.


Yeah. I remember supporting a swathe of color depths for 3dfx and early nvidia & ati. (Eg rgba 5551, rgb 565 or rgba 4444) ... and these were just the more ordinary ones.


"mapping 40-48KB real ram area into 64KB byte addressable window" is couple hundred transistors to implement. VGA came out in 1987.


That's not really the issue. With a 16-color palette and planar graphics you can update one byte to change 8 pixels from one palette to another. This also means you can really cheaply compress images if you only need, say, 8 colors.

These tricks become far less useful if you have cycles to spare.


If you had 5-bit memory, could you just leave 3 data lines disconnected?

> They weren't, and they pretty much signed the Amiga's death warrant.

I disagree with the categorical dismissal. IMO it was a good idea in 1985 with 256k chip RAM, not so much in 1994 with 2M chip RAM, post-Doom.


The Apple II's somewhat exotic "hires" mode didn't even HAVE an indexed palette. Instead you just wrote 1s and 0s for pixels directly to the display memory, and their arrangement on the screen determined their color. This all owing to the fact that the Apple II only had color at all due to some crazy phase-based hacking of the NTSC color standard.

EDIT: Oops, I partially messed up this description. The MSB in each byte of the display memory did set the "palette" (phase, actually, according to Wikipedia) for each set of 7 pixels. So you could either have blue/orange or green/purple in that set, with the exact color depending on whether you had one alternating pattern of bits or the other.


The TRS-80 Color Computer had a similar trick used by lots of commercial games, but you couldn't predict when the effect might result in red or blue. You'd have to reset your CoCo until an example graphic was the right color before proceeding.

http://vdgtricks.blogspot.com/


On the CoCo 3 you didn't have to hit reset; there was a way to switch it in software.

Also on the CoCo 3, there was the 640 x 192 4-color "hires" mode. Way, way back - I believe it was published in Hot CoCo - someone figured out that by using a television, setting those 4 colors to the grayscales in the 64-color palette (black, dark grey, light grey, and white), and using the proper pixel bit patterns, you could get a virtual 128x192 image with colors.

Basically the same technique (called artifact colors, or artifacting) - but applied to the high-res screen of the CoCo 3.

Unfortunately, he sent it to Hot CoCo! Had he sent it to the Rainbow instead... things might have been different. You see, by sending it to that magazine, it didn't reach a large audience, so only a few people saw it, nobody played with it much, and it faded into history.

On the Rainbow side of things, there were a couple of articles about using patterns of color to create "virtual colors" - but it was geared toward the RGB monitor (CM-8), and not really the TV - and using colors, not the gray scales...

So - what am I rambling about then?

Well - long story short - this:

http://richg42.blogspot.com/2014/02/the-little-known-color-c...

A virtually unknown "256 color" mode on the Color Computer 3 that was lost to history.

Now - this isn't the mystery 256 color mode that has been described elsewhere hiding in the GIME chip (and most people believe that it is a false rumor; but the Microware CoCo 3 prototype holds out hope) - but it is a working "hi color" mode available on the CoCo 3.

Unfortunately, today it is more of a curiosity than anything, and it is unknown whether it could have been used back then; from what I recall, they had to use a PC to figure out the proper bit patterns for each of the colors, and that kind of processing would have been difficult at best on the hardware of the day.

EDIT: This really shows off what is possible with the mode:

http://atariage.com/forums/blog/105/entry-6693-color-compute...


Hey, I did that in collaboration with Jason Law.

Sure wish it had been used more back in the day. I always thought the Sierra games were a good fit.

The CoCo 3 also does 50 Hz NTSC. Almost all US displays will take it, and it gives one a bit more time to push pixels.

One only needed a PC to derive a palette. The bit patterns are straight up binary. One byte per pixel, $0 to $ff.

No color redirection though. It is an absolute value to color display.

A palette, arranged nicely by hue and luma, takes just one 8-bit page of RAM. Get the raw values into a PC, sort them, save them off as byte data organized by hue and shade...

On 6809, 1 byte per pixel is sweet. Like the chip was made for it. If you want to abuse the stack, compiled sprites are fast.

It could be used back in the day. I did that on my CoCo in the 80's and 90's. Never developed anything big with it, but it was not difficult.

Found it one day when I did not configure the GIME chip properly. I just happened to write binary sequences... saw the rainbow, knew about artifact color on the Apple 2 and the rest was easy.

Later, with microcontrollers, I duplicated the CoCo 3 and then doubled the bit resolution and/or depth. It is possible to get very full color with just monochrome bits running fast relative to the NTSC signal.

Fun days.


You have any links for this? It sounds both terrifying and clever as hell.

Similar to the Amiga's HAM mode[1], where you could get 4,096 simultaneous colors, but the restriction was that the succeeding pixel could only differ from the preceding in just one of the R, G, or B values.

[1] https://en.wikipedia.org/wiki/Hold-And-Modify


"terrifying and clever" describes a lot of the Woz's work...

So I am a little fuzzy on the NTSC standard, because it has been a while since I looked at it... but the hack goes something like this:

Monochrome TV puts video on an upper vestigial sideband with the carrier at (IIRC) 1.25 MHz up from the lower edge of the 6 MHz channel. (The high frequency parts of the lower sideband of the video signal are whacked off with a filter.) The FM sound subcarrier is up closer to the top edge, I forget what the offset is exactly.

Then color came along... so they added a color subcarrier just below the sound subcarrier. This reduced the bandwidth available for luminance, but got you color in return. The color subcarrier IIRC is kind of a flavor of independent sideband, but is really two double sideband signals on the same carrier, 90 degrees out of phase with each other, one skinny, and the other fat but with a notch filter applied to allow the skinny signal to ride in the middle. Or something like that.

Anyway... if you send a monochrome signal with high-frequency components (high frequency == high dot rate) that fall in the range of the color sub-carrier, you will get color artifacts. This is because the color decoder simply responds to the rf it is getting, whether on a proper subcarrier or not.

Another thing I used to know but am fuzzy about is I am pretty sure you need to put out the standard color burst on the back porch of the horizontal sync. Otherwise, the TV receiver will not sync to the "color subcarrier" that you are faking with artifacts.


You are correct. Something like a colorburst must be there. Turns out you can abuse that very considerably. And still get stable, useful displays.


The bit patterns are described in Wikipedia: https://en.wikipedia.org/wiki/Apple_II_graphics#High-Resolut...


Yes and the 64:1 interleave pattern on the display is also another "fun" feature to code around.


and this goes into detail about how the bit patterns worked: https://www.xtof.info/blog/?p=768


Thank you. I just had an epiphany. In elementary school computer lab, we'd all fight for the color machines. There were two of those and like 25 of the green screens.

I remember wondering how the game "knew" what kind of screen it was on. How did it change from a solid color (for the lucky kids) to a pattern (for the rest of us)?? I just shrugged it off as the computer knowing the display type from the connection.

Those two images in your link made it all just click right now, 30 years later.


For a bit of completeness' sake - it was also possible on the PC CGA card to gain artifact colors in certain modes, using a composite monitor. This led to some interesting possibilities, which were later exploited:

https://int10h.org/blog/2015/04/cga-in-1024-colors-new-mode-...


I only know of one way to output accurate color from Apple II's to HDMI. This is new as of last year. You have to email the developer and he'll add you to the waiting list. http://atariage.com/forums/topic/285010-vidhd-hdmi-board-upd...


Interesting! Sounds a bit like ClearType in reverse - there, the color determines the (perceived) position on the screen.


I once worked on a project that ran 4 monitors off one Amiga.

Using a graphics card to do a byte-per-pixel main display (polygon + scaled-sprite 3D), and three monochrome displays taken from the red, green and blue outputs of the Amiga's standard video out. Each of the monochrome displays used a single bitplane plus a shared random-static bitplane. By adjusting the palette you could individually control which monitor the image appeared on, how bright it was, and how much static interference to display. Various other failure modes were also added via copper tricks (roll, glitching etc.) without modifying the base bitmap.


This arrangement is highly questionable, but it does have one advantage, touched on in the last paragraph: you can write a 1bpp drawing routine that does whatever you need, and it can deal with any number of planes. Like, if you've got 4 planes, and you need to draw a line in colour 7, you just use your 1bpp line drawing routine to draw a 1 in planes 0, 1 and 2, and a 0 in plane 3.
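As a sketch, the wrapper is just a loop over planes feeding the right colour bit to the 1bpp routine:

    #include <stdint.h>

    /* Some existing 1bpp primitive: draws a line of 1s or 0s into one plane. */
    void line_1bpp(uint8_t *plane, int bytes_per_row,
                   int x0, int y0, int x1, int y1, int set);

    /* Draw a line in colour c on an n-plane screen using the 1bpp routine. */
    void line_planar(uint8_t *planes[], int nplanes, int bytes_per_row,
                     int x0, int y0, int x1, int y1, uint8_t c)
    {
        for (int p = 0; p < nplanes; p++)
            line_1bpp(planes[p], bytes_per_row, x0, y0, x1, y1, (c >> p) & 1);
    }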


I remember working with the Amstrad PCW, which had an interesting layout for the video memory. The pixels in the 90x32-character monochrome display were stored in character order: the first 8 bytes were the leftmost character, the next 8 were the character to the right, and so on. This wasn't the only complexity, as there was also a lookup table giving the memory location of each of the 32 screen lines. The advantage of this table was very quick character scrolling, at the expense of computing the location of individual pixels. It was rather tricky to write fast Z80 code to draw lines with this quirky setup, but some people managed some rather nice vector games, such as Starglider.
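Based purely on that description, pixel addressing would have looked something like this (names and the exact table format are illustrative, not the real firmware interface):

    #include <stdint.h>

    /* 8 bytes per character cell (one byte per pixel row), cells stored
       left-to-right, plus a per-character-line table giving each line's base. */
    uint8_t *pcw_pixel_byte(uint8_t *vram, const uint16_t line_base[32], int x, int y)
    {
        int char_line = y >> 3;        /* which of the 32 character lines */
        int char_col  = x >> 3;        /* which cell along that line      */
        int row       = y & 7;         /* row within the 8x8 cell         */
        return vram + line_base[char_line] + char_col * 8 + row;
        /* the pixel itself is bit (7 - (x & 7)) of that byte */
    }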


The BBC Micro had a similar arrangement in some of its screen modes.

I think both machines must use a Motorola 6845 CRT controller, which is intended for character-based screens. It does not scan the video RAM sequentially, but in the way described by the GP (all first char rows, then all second char rows, etc., with each char's rows stored sequentially in RAM).

I spent some time looking into this and found I was wrong - CRTC does generate linear addresses but the beeb (and possibly the CPC) translates them to the character-based screen format in hardware.

Bit planes are pretty easy to implement in hardware. One of the nice things is that the bit calculation is the same for each plane, so you feed that value, the bit offset, and the plane start address into a barrel shifter and out pops the bit you want. Those you gang together into a register and poof, you can read them all at once. The ARM Cortex-M bit-band memory region does this kind of computation.
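For reference, the standard Cortex-M bit-band mapping for the SRAM region looks roughly like this:

    #include <stdint.h>

    /* Every bit of 0x20000000-0x200FFFFF gets its own word-sized alias
       address in the 0x22000000 alias region. */
    static volatile uint32_t *bitband_alias(uint32_t byte_addr, uint32_t bit)
    {
        uint32_t offset = byte_addr - 0x20000000u;
        return (volatile uint32_t *)(0x22000000u + offset * 32u + bit * 4u);
    }

    /* Writing 0 or 1 to *bitband_alias(addr, n) clears or sets bit n of the
       byte at addr with a single word access. */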

It also means you can build things like pixel copy of arbitrary width pixels with a single bit copy engine running 'n' times.

With memory and transistors getting cheap though, all of that became unnecessary it seems.


Does anyone know why the VGA "hack" modes 320x240x256 and 320x400x256 operated planar instead of paged like 320x200x256?

It makes sense for less-than-8-bit-per-pixel colour modes, but not for 256 colours.

Pondering this question after 20 years.


The VGA CRTC could only address 64KB of RAM at a time, so while you could program a chunky 320x240 mode, it would run out of RAM and most likely repeat the top of the picture somewhere around 85% of the way down the screen. Planar hacks it into drawing pixels from different pages, something impossible to accomplish in chunky mode.


I think it had something to do with the 64K segment limit in real mode x86...?


If anything it would be the ISA VGA being visible through a 64KB aperture at A0000, and even that wasn't it. Why would you specifically program the VGA into planar mode instead of nice "chunky" pages? It was an artifact of the way the VGA CRTC worked.


I'd almost managed to forget about that. Almost.


Yup, 320*240=76800 > 2^16=65536


The planar modes are how the chip operates naturally, and chain 4 is the hack (sort of! - I mean, chain 4 is of course how the system is designed to be used).

There's a bit about this on the Wikipedia page: https://en.wikipedia.org/wiki/Mode_13h
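In unchained mode each byte address covers four pixels (one per plane), which is how 320x240 fits under the 64KB limit. A pixel write looks roughly like this (outb() is a placeholder for the compiler's port-output routine):

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t value);  /* placeholder */

    /* Unchained ("Mode X") 320x240x256: a 320x240 screen needs only
       19,200 bytes per plane. vga should point into the A000 segment. */
    void putpixel_modex(volatile unsigned char *vga, int x, int y, uint8_t colour)
    {
        outb(0x3C4, 0x02);               /* Sequencer Map Mask register       */
        outb(0x3C5, 1 << (x & 3));       /* enable only the plane for pixel x */
        vga[y * 80 + (x >> 2)] = colour; /* 80 bytes per row = 320/4 pixels   */
    }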


Excellent! So it was related to how the underlying memory chips are wired.

> but you cannot really cut up an array of bytes into 5-bit chunks easily

But you can do RGB 565, which is exactly what mobile GPUs do (or used to do, back when memory cost and bandwidth were more limited).
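Packing and unpacking RGB565 is just shifts and masks, e.g.:

    #include <stdint.h>

    /* Pack 8-bit-per-channel RGB into a 16-bit RGB565 pixel, and back. */
    static inline uint16_t pack565(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    static inline void unpack565(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
    {
        *r = (uint8_t)(((p >> 11) & 0x1F) << 3);   /* 5 bits of red   */
        *g = (uint8_t)(((p >>  5) & 0x3F) << 2);   /* 6 bits of green */
        *b = (uint8_t)(( p        & 0x1F) << 3);   /* 5 bits of blue  */
    }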


Two bytes per pixel for, say, a single 640x256 pixel screen would already use more RAM than available on a 256k Amiga 1000.

RAM size only had to double a few times for this to become an entirely outdated concern, but that's what they had to work with at the time they designed the chipset. Still no excuse for AGA!


This is like Array of Structures vs Structure of Arrays[1].

As mentioned, with a SoA approach it's much easier to replace a subset of the entire data. If you have control over the array pointers you could simply swap pointers, leading to an essentially free update.

Downside is that it's a lot more work (and usually less cache friendly) if you need to update all the data.

[1]: https://en.wikipedia.org/wiki/AOS_and_SOA
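In C the two layouts look something like this (illustrative types; planar graphics is essentially the SoA version):

    #include <stdint.h>

    #define N 1024

    /* Array of Structures: one pixel's channels are adjacent ("chunky"). */
    struct PixelAoS { uint8_t r, g, b; };
    struct PixelAoS image_aos[N];

    /* Structure of Arrays: each channel is its own contiguous array
       ("planar"); replacing one channel means touching or swapping
       just one array. */
    struct ImageSoA {
        uint8_t r[N];
        uint8_t g[N];
        uint8_t b[N];
    };
    struct ImageSoA image_soa;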


I guess column-oriented databases are also conceptually similar, though structure of arrays is certainly a closer analog.

Back in the days of planar graphics, usually cache didn't matter because there wasn't one. The difference between CPU and memory speeds was not so lopsided as it is today. Or you could say, memory was fast enough because CPUs were very slow.


> I guess column-oriented databases are also conceptually similar, though structure of arrays is certainly a closer analog.

Isn't a column-oriented database pretty much just an SoA on disk, and a row-oriented database pretty much an AoS on disk, modulo some complexity around indexes?


How times change. Back in the day OpenGL had a paletted texture extension (EXT_paletted_textures), but nothing supports it anymore. I guess the extra indirection doesn't work well with GPU hw architectures?


OpenGL actually had support for color index rendering. See glIndexi(), glClearIndex(), etc. I think it was primarily used for hardware overlay planes on graphics systems like RealityEngine.

That was back in the fixed-function pipeline day.

So I'd imagine it's just that a) very few people care any more and b) if you do care, it's trivial to do it yourself with a dependent read in your fragment shader.

https://www.khronos.org/opengl/wiki/Common_Mistakes#Paletted...


It's trivially done in shaders; there's nothing there that needs an extension.


Do game engines still use palettized textures to save memory?

This takes me back many years. I remember writing code to display PCX images from the PCX spec for EGA and VGA displays. This was pre-web, so no help there. Some of the best fun I've ever had coding.


I am surprised that the reason for planes was to fit odd bit sizes. Although I suppose that is a reason.

Planes are a nice abstraction, because they allow you to use the concept of a raster surface for all kinds of things. These things might be bits of colour, but they might be metadata in an image analysis program, or masks in a graphics editing program, z-buffers for 3D graphics, or really anything.

(Admittedly, I am talking about an abstract concept of "plane", whereas the original article is presumably talking about the concrete behaviour of hardware.)


I'm glad someone finally explained this! It makes some sense, but I'm curious: was the combining of the bits of the planes to get the look-up index done in hardware? Because it sounds like storing 3 5-bit pixels in the expected way in 2 bytes (with a bit left over) wouldn't be any harder in software than combining 5 bits from different planes. But if the hardware handled assembling the various bits, then I guess it would probably be faster. Am I understanding that correctly?


On the Amiga it was definitely hardware, the Denise chip, which could do all sorts of weird and (debatably) wonderful things in addition to the straightforward use case you describe. Hardware sprites being at the useful end of the scale, HAM being at the not-so-much one.

https://en.wikipedia.org/wiki/Original_Chip_Set#Denise


Even if it was done in software, it was still easier to write against planes, because you had to run on hardware with wildly different capabilities, so drawing routines were far easier to parameterize over planes than over various bit-packing mechanisms.
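For illustration, gathering one pixel's palette index out of N planes in software looks roughly like this (a sketch, not any particular machine's routine):

    #include <stdint.h>

    /* Rebuild one pixel's palette index from N separate bitplanes. */
    uint8_t get_pixel_planar(uint8_t *const planes[], int nplanes,
                             int bytes_per_row, int x, int y)
    {
        int     offset = y * bytes_per_row + (x >> 3);
        uint8_t mask   = 0x80 >> (x & 7);
        uint8_t index  = 0;

        for (int p = 0; p < nplanes; p++)
            if (planes[p][offset] & mask)
                index |= (uint8_t)(1 << p);
        return index;
    }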


>wildly different hardware

what do you mean? NES never changed, Amiga was always compatible with itself, nothing changed.


Pfft, you kids with your frame buffers. Try having to reload the display line-by-line during the blanking interval, like on an Atari 2600.


I believe this is the same technique that was covered in the excellent series "How Oldschool Graphics Worked"[1]

[1] https://www.youtube.com/watch?v=Tfh0ytz8S0k


Planar graphics are still a thing, in YUV (YCbCr) formats. The Y component is usually stored separately and UV are interleaved in a second plane.
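For example, the common semi-planar 4:2:0 layout (NV12) puts the full-resolution Y plane first, followed by a half-resolution plane of interleaved Cb/Cr pairs; mapping it is just pointer arithmetic (a sketch):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint8_t *y;    /* width x height bytes              */
        uint8_t *uv;   /* (width x height) / 2 bytes, CbCr   */
    } SemiPlanarFrame;

    SemiPlanarFrame map_nv12(uint8_t *buf, int width, int height)
    {
        SemiPlanarFrame f;
        f.y  = buf;
        f.uv = buf + (size_t)width * height;   /* UV plane starts right after Y */
        return f;
    }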


Interesting. This is the first time I've seen graphics code clearly explained with no graphics and no code, just English.

Together with the two posts below about RAM speed and the >64K limit, it seems quite clear. Thx.


We stopped where we are because the eye cannot see more colors. The same goes for 4K displays: more resolution doesn't make sense. A 4K display has more pixels than there are cones in your eyes.


Go look at any CIE chart. The human eye is far more sensitive than the standard sRGB colorspace. For that matter, it's impossible to represent the full perceptible colorspace with just three colors. And even if you use a wide gamut, 8 bits per channel is not enough to prevent banding. Particularly over the full brightness range the eye can perceive, which is considerable. That's why "HDR" screens are at least 10 bit nowadays. We didn't stop at all.

As for 4k displays - most of those cones are in the center of your FOV, and eye movement is part of human vision (saccades). That's why 4k VR headsets still look pixelated. Also, you don't just sit in front of your screen staring straight ahead like a zombie (at least, I don't). You move your head. You lean in. By your logic, the edges of a screen shouldn't even be in color!

It's like saying a keyboard with more than 10 keys doesn't make sense because we only have 10 fingers.





