How “oldschool” graphics worked [video] (youtube.com)
434 points by brk on Aug 18, 2015 | hide | past | favorite | 72 comments

In the Legend of Zelda, the environment's color rules were even stricter: you could use one palette for all the tiles at the screen's edge or one tile away, and a second palette for all the tiles in the middle.

That's why, if you zoom in on the water in this screenshot, you'll see that the edge of the lake has brown sparkles and the inner lake has green ones:


Note, though, that this extra restriction isn't a technical limitation of the NES; rather, it's a choice by the developers of Legend of Zelda to shrink the number of bytes needed for each "screen" of the world.

By doing this, you only needed to store 2 palettes per "screen", rather than one per tile (16 tiles x 11 tiles = 176 palettes per "screen").
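A back-of-the-envelope sketch of that bookkeeping (just the comparison made above, not the actual cartridge format):

```python
# Palette bookkeeping per overworld screen: the naive scheme stores one
# palette assignment per tile; Zelda's scheme stores just two per screen.
TILES_PER_SCREEN = 16 * 11     # 176 tiles per screen

naive_entries = TILES_PER_SCREEN   # one palette index for every tile
zelda_entries = 2                  # one edge palette + one middle palette

print(naive_entries, zelda_entries)   # 176 vs. 2 entries per screen
```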

A clever hack! It exploits the tendency of each screen to have impassable obstacles (rocks, trees, walls) around the 4 edges of the screen, with optional openings. By using a separate palette for the middle area, the screen as a whole looks more colorful.

This is so fascinating. Thank you for this knowledge bomb.

Any chance you are aware of any other hacks from old school video games?

Did you ever notice that every item, human, and non-boss enemy in Zelda is made of one of exactly four palettes, each of which is made of only three colors (plus transparent)?

    1. Link's tunic color, his skin tone, and his hair color
    2. White, orange, red
    3. White, light blue, dark blue
    4. Black, dark teal, and red
For example, #1 is used for Link, Zelda, the merchants, and the raft, which is why when you get the blue ring or the red ring and your tunic color changes, it also changes the color of the other people's clothes and the binding on the raft.

Here's a graphic I made of every sprite in the ROM six times -- one with the three versions of palette #1 (no ring, blue ring, red ring) and one for each of the three other palettes: http://i.imgur.com/omBWovb.png

It allows you to see that, e.g., Wizzrobes are made of only three colors, and they're the exact same three colors as the red candle, heart containers, tektites, like-likes, octoroks, rupees, the boomerang, the map, fire, the life potion, fairies, and the master sword.

It seems this limitation/concept is something I haven't seen much in modern pixel art. We often see modern pixel art using larger palettes, just at low resolutions.

This is the problem I have with the widespread use of pixel art. Most of it is ignorant of why the old graphics looked the way they did. Like you said, it adopts one of the limitations but ignores the others.

The developers of Shovel Knight actually emulated the way NES graphics work (and added some enhancements to it) and the result is a much more convincing retro feel.

IMO this is also due to modern pixel-art actually emulating 16 bit pixel art.

The 16 bit computers already had graphics capabilities much more similar to modern graphics cards than their predecessors.

>the edge of the lake has brown sparkles, and the inner lake has green ones

Thank you for answering a question I've had for 26 years or so now

Hahaha...I had always chalked that up to my old CRT TV and the hardware of the time.

I've always had a lot of respect for Woz's color hack on the Apple, which will apparently be covered in a later part. But I didn't see IBM PC hacks on the list, so here's an overview of old & new tricks to get colors out of an old CGA card. http://8088mph.blogspot.com/2015/04/cga-in-1024-colors-new-m...

Oh very cool. That was eye-opening. I have been running that demo occasionally, since I happen to have my first computer: an original IBM PC with a CGA card and the standard PC beeper/speaker (and no monitor, but this demo outputs over NTSC to a TV anyway!).

Fortunately I also had modem software (Telix) on the PC's 20MB hard drive, so I could copy the demo over a null-modem cable with the Zmodem protocol from a Win2k box that also had a serial port, a USB port, and HyperTerminal on it! And I already had PKUNZIP on it to unzip the file. It was so slow! I forgot how slow it was just to unzip files back then. (And to get the file onto THAT computer, I had to put it on a USB stick from my Windows 7 box, because it's the one with wifi and USB but no RS-232 serial port.)

Now I get why the demo sometimes comes out in black and white when it tries to do 1024 colors. Somehow it's not triggering the "color burst" correctly.

Old story: I was excited the first time I realized flight simulator 3 could show more colors (blue sky, green grass) if I selected the right mode and hooked it up to a TV. Same for old Sierra games like King's Quest: https://m.youtube.com/watch?v=Km7UB9CRMyE

Apple //e graphics were weird from what I remember.

Hi-res (HGR) gave you purple, green, blue, and orange, plus two whites and two blacks. Certain colors next to each other caused weirdness; I think that's why there were two black/white variants.

You could dump the video memory to disk and load it back -- kind of like a modern screenshot with no compression. The disks couldn't hold a lot of images, though, which made young me frustrated trying to make a graphical adventure. I did make a maze game with the 16-color low-res graphics.

You could directly manipulate the graphics with the poke command (push values into memory).

"CALL 62454" inexplicably changed the whole screen to the current color. You could have fun with a loop through the 7 colors.

good times

Yep. There were two whites and two blacks due to the pixel shift triggered by the high bit of the screen memory byte.

Unlike most every other machine, the Apple 8 bit computers offered 7 pixels per byte, with the high order bit shifting those just a little bit.

That shift presented the two sets of colors.

At the pixel clock of the Apple, artifacting basically offered a 4 color display. Two pixels on = white, two pixels off = black, and 01, 10 patterns were color.

With the pixel shift, the Apple 8 bit computers offered up a 6 color display. While the two blacks and whites seem redundant, they are actually necessary to get bit patterns lined up precisely, or to avoid additional color artifacts along some image boundaries.

And thus the Apple II is the only computer that was ever capable of shifting something half-a-pixel to the left.

Note: The Apple /// could probably do that too.

I think it could.

That little shift ended up being double high resolution in the later //e machines.


Apple video is a total hack through and through.

To add to this, and to go a little later, the PC Game Programmer's Encyclopedia (PCGPE) was basically my bible in the mid 90s and was packed with awesome articles about doing tricky stuff with PC graphics: http://www.oocities.org/siliconvalley/2151/pcgpe.html

As a companion piece I recommend Fuck the Super Game Boy http://loveconquersallgam.es/post/2350461718/fuck-the-super-...

I thought this was certainly interesting as a companion piece (for a decidedly different era of graphics). I can't know this for sure, but I suspect a brief summary of the article -- in particular, an indication that it's not merely lambasting the Super Game Boy -- would have been helpful for folks who were turned off by the article's title.

Yes, the title is extremely unfortunate for such an interesting piece of technical history :(

Great article. Don't miss the next pages! The clever use of tiles in e.g. DK '94 is really wonderful. That the Picross title screen uses such a unique trick is crazy.

I almost got more enjoyment out of this article than out of the Super Game Boy I bought 20 years ago.

Very cool. Undersampling of color still exists in modern codecs: if you see something like YUV420, that means there are 2 chroma (color) samples for every 4 luminance (brightness) pixels.


Since our eyes are more perceptive of detail than of absolute color, you can lower your bitrate without a perceived loss in video quality.
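For example, the sample-count arithmetic works out like this (frame size is an arbitrary example):

```python
# Bytes per frame at 8 bits per sample: full-resolution RGB vs. YUV 4:2:0,
# where each 2x2 block of luma pixels shares one Cb and one Cr sample.
width, height = 1920, 1080

rgb_bytes = width * height * 3
yuv420_bytes = width * height + 2 * (width // 2) * (height // 2)

print(rgb_bytes, yuv420_bytes, yuv420_bytes / rgb_bytes)  # ratio is 0.5
```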

A lot of these "rules" with color use some broad generalizations. When you work with video compression a lot, you start training yourself to see color differently and the illusions begin to fall apart.

For me: I really don't enjoy looking at 4:2:0 or 4:1:1 chroma subsampling (4:2:2 usually doesn't cause problems)

4:2:0 sort of works for live scenes which don't typically have sharp chroma boundaries but when you see solid red/blue graphics superimposed over a scene, the blocky bleed of color across the scene is like knives in my eyes.

> but when you see solid red/blue graphics superimposed over a scene, the blocky bleed of color across the scene is like knives in my eyes.

I still wonder why that happens to red, mostly because it looks like someone's bleeding everywhere. I don't seem to notice it for green or blue. At first I thought it was the downsampling, but it has to be something else (a combination, maybe?), since JPEG seems to handle downsampled red comparatively better when highly compressed. Right now, I just think it's a flaw in H.264, its encoders, and/or its decoders.

Bleed definitely happens with blue too, but solid blue chrominance is less common in graphics (graphics tend to use brighter sky/azure/royal blues), whereas solid red is quite common.

But bleed won't happen with green because green is basically the luminance channel in YCbCr/YUV. This means that green runs at full sample resolution compared to red and blue which run at 50% to 25% resolution (depending on subsampling).

I'm interested in this. When you said, "When you work with video compression a lot," what did you mean? Working with video compression at an algorithm level, or applying different compressions to the same video and comparing the results, or something else?

The middle one ("applying different compressions to the same video and comparing the results").

I write streaming video servers. An aspect of the job is continuously optimizing parameters and codecs to satisfy PSNR and perceptual video quality test cases across a large library of test files. It's not a fun aspect of the job.

Sun's forgotten (I believe) Cell codec used the colour cell idea even more directly:


"A cell encoder breaks the video into cells. A cell is 16 pixels, arranged in a 4x4 group (Figure B-1). Cells are encoded into the bytestream in scanline order, from left to right and from top to bottom.

The basic encoding scheme used in both versions of Cell is based on an image coding method called Block Truncation Coding (BTC). The 16 pixels in a cell are represented by a 16-bit mask and two intensities or colors. These values specify which intensity to place at each of the pixel positions. The mask and intensities can be chosen to maintain certain statistics of the cell, or they can be chosen to reduce contouring in a manner similar to ordered dither.

The primary advantage of BTC is that its decoding process is similar to the operation of character fonting in a color framebuffer. The character display process for a framebuffer takes as input a foreground color, a background color, and a mask that specifies whether to use the foreground or background color at each pixel. Because this function is so important to the window system, it is often implemented as a display primitive in graphics accelerators. The Cell compression technique leverages these existing primitives to provide full-motion video decoding without special hardware or modifications to the window system."
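The BTC decode described in the quote can be sketched like this (the bit ordering and field layout here are guesses for illustration; the real Cell bytestream format differs):

```python
def decode_btc_cell(mask16, color_a, color_b):
    """Expand one 4x4 Block Truncation Coding cell.

    mask16  -- 16-bit mask, one bit per pixel in scanline order
    color_a -- color used where the mask bit is 0 ("background")
    color_b -- color used where the mask bit is 1 ("foreground")
    """
    pixels = []
    for y in range(4):
        row = []
        for x in range(4):
            bit = (mask16 >> (y * 4 + x)) & 1
            row.append(color_b if bit else color_a)
        pixels.append(row)
    return pixels

# A diagonal mask: bits set where x == y, so the foreground color
# runs down the diagonal of the 4x4 cell.
cell = decode_btc_cell(0b1000010000100001, 0, 255)
```

As the quote notes, this is exactly the foreground/background/mask shape of a character-fonting primitive, which is why existing framebuffer hardware could accelerate it.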

This is very, very similar to DXT encoding that all GPUs use (and is also how the ZX Spectrum's hardware worked).

More specifically, our eyes see luminance more than hue or saturation, and changes in gradients more than gradients or absolute values.

This guy's videos are great. Just got sucked into a youtube hole for an hour.

Not the same era but if you like that, you might also love this dude: https://www.youtube.com/watch?v=HQYsFshbkYw

Me too, and I love this era.

Total tangent but the color cells reminded me of Fair Isle knitting, in which some of the most colorful knitted sweaters you've likely seen are done using only two colors on any given row. https://en.wikipedia.org/wiki/Fair_Isle_(technique)

Wow, I love this. The graph-paper sprite design thing brought back a flood of memories from the mid 80s, as that's exactly how I learned to make sprites.

Me too :)

I remember programming custom characters into my 9-pin IBM ProPrinter so that I could print certain logos and symbols directly as characters. It was the same layout on graph paper and convert to binary format. I think that was how I was first introduced to binary in general.

It's interesting how the colour palettes of those old computers went a long way to define their character. I was an Amstrad CPC owner, and its games tended to have a bright, saturated look whereas C64 games always looked a bit brown and washed-out to me[1]. Later on, as an Amiga owner I was always a little jealous of the SNES for the quality of its colours.

It would be cool to see a video on Amiga graphics and the crazy things you can do with bitplanes and copper lists.

[1] check out these shots of Trantor on different 8bit platforms. You can tell straight away which one is CPC or C64 (http://frgcb.blogspot.co.uk/2014/09/trantor-last-stormtroope...)

Great video. Is there any reason why the color cells had to be split into an even grid? It seems like if the API allowed the software developers to define custom grids on the fly (perhaps with different scenes or levels), it would open up lots of creativity (though I suppose that would also use some memory).

The C64's default video mode was a 40x25 grid of 8x8 monospaced text characters. The video chip was designed around this: each scanline it would look at the characters occupying the current row, then use that plus how many times it'd displayed that row as an index into the 1k of character ROM to pull out the appropriate byte, and display its individual bits as pixels.

You could also tell the video chip to look somewhere in RAM for these character definitions. Maybe just to replace the standard font with something cooler; maybe some of your font would contain little 8x8 building blocks you could use to make bigger images. Most games would build their backgrounds this way -- you'd only have so much variation available, but you also only had to move about 900 bytes around to scroll the screen, rather than 8k. Which sounds like nothing to a modern machine but was a serious difference on a 1 MHz computer.

The bitmap mode jiggled with this a little, and ignored the 'look at the characters on this row' part in favor of just stepping through an 8k chunk of RAM over the course of the display. (Well, the multicolor bitmap mode looked at the characters in the row as well, but used them for color data rather than an index to the character map.)
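The character-mode fetch described above can be sketched as follows (a simplified software model, not cycle-accurate, and with the memory handed in as plain lists rather than real C64 addresses):

```python
def char_mode_byte(screen_ram, char_rom, scanline):
    """Fetch one scanline's worth of pixel bytes in C64 standard
    character mode: 40 columns, 8-pixel-tall glyphs, 8 bytes per glyph."""
    char_row = scanline // 8      # which text row this raster line is in
    glyph_line = scanline % 8     # which of the glyph's 8 lines to show
    row = []
    for col in range(40):
        code = screen_ram[char_row * 40 + col]        # character code 0-255
        row.append(char_rom[code * 8 + glyph_line])   # one byte = 8 pixels
    return row
```

Each returned byte's individual bits are then shifted out as pixels, exactly as described above.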

If you want to know about this in more detail then try firing up a c64 emulator and going through the 'graphics' chapter of the c64 Programmer's Reference Guide: http://www.commodore.ca/manuals/c64_programmers_reference/c6...

I'm not sure the acronym "API" even existed at this point in time. I sure never heard it when I was fooling around with 6502 assembly.

tl;dr: severely limited hardware, programmed right on the bare metal. API? What's that?

The video chip was designed around this: each scanline it would look at the characters occupying the current row, then use that plus how many times it'd displayed that row as an index into the 1k of character ROM to pull out the appropriate byte, and display its individual bits as pixels.

The C64 reads the current characters every 8 scanlines, not each scanline (well, you can get it to read on arbitrary scanlines by manipulating the vertical scroll register, but that's beside the point here). Due to the way the C64's data bus works, the main processor has to stop executing instructions while the graphics chip reads the character data.

You have to account for this if you're trying to synchronize code to particular scanlines -- on the scanlines where character data is read, you'll only get 23 cycles instead of the usual 63 (on a PAL machine, at least). For even more info, see http://www.zimmers.net/cbmpics/cbm/c64/vic-ii.txt

Yeah, I should have been more clear on that. I didn't feel like explaining the care and feeding of badlines. grin

The cells were split into an even grid because that's how the video chip was designed. There was no other grid size and no customization. There was no room on the silicon to do other grid sizes.

There is no graphics API in the Commodore 64. You programmed the hardware directly in assembly. If you wanted an API you wrote it yourself.

Ha. This took me back to being 8 years old again. My mom loved/hated the fact that her kids had graph paper ALL OVER the house because we were building sprites for games on the C64.

Boy, assembly was a bitch at first.

Wonderful production value and nice presentation. Looking forward to more of these.

It's worth noting that when he says the NES had a limit of 64 sprites the limit is 64 per screen not 64 total. Each being 8x8 pixels in size. There was another limitation though, you could only have 8 sprites displayed on the same horizontal line (scanline). Games got around this by "flickering" the sprites so it would appear that more were displayed at once.
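The flicker trick amounts to rotating which sprites get the 8 per-scanline hardware slots each frame, so no sprite is dropped permanently (an illustrative model, not actual NES code):

```python
def visible_sprites(sprites_on_line, frame):
    """Pick at most 8 of the sprites sharing a scanline, rotating the
    starting offset each frame so every sprite gets drawn most frames."""
    n = len(sprites_on_line)
    if n <= 8:
        return list(sprites_on_line)
    start = frame % n
    return [sprites_on_line[(start + i) % n] for i in range(8)]

# With 10 sprites on one line, a different pair is hidden each frame,
# so all 10 flicker instead of 2 vanishing entirely.
```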

One thing about computer graphics is that full color didn't really exist for the average computer until the mid 90's. During the 80's, full color 24-bit graphics were strictly the domain of workstations and high-end Macs with 24-bit graphics cards.

The average computer couldn't display a proper 24-bit photo until about when Windows 95 came out...

Wow that was a pretty cool video. I have a lot of respect and admiration for the guys and gals who worked on these things back in the day.

But I am lost on the math part about the color cells. Why does each color cell only need 1 byte? If each cell is 8 bits wide and 8 bits deep, wouldn't that be 8 bytes?

You need one byte for the colors (4 bits to specify each of foreground and background) and 8 bytes for the on/off; if you have 1000 cells, then you need 9000 bytes in total.
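In code, that arithmetic looks like this (a sketch of the layout described above, using a 40x25 screen of 8x8 cells):

```python
CELLS = 40 * 25              # 1000 color cells on screen

bitmap_bytes = CELLS * 8     # 8 bytes of on/off pixel bits per 8x8 cell
color_bytes = CELLS * 1      # 1 byte per cell: two 4-bit color nibbles

total = bitmap_bytes + color_bytes
print(total)                 # 9000 bytes
```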

Another common thing to do, especially on machines just a little bit earlier, was to use one byte for the colors, one byte for a glyph identifier, and then put the 8 bytes per glyph somewhere else, maybe ROM, which was way cheaper than RAM. You could only have 256 distinct glyphs on the screen at a time, and maybe you couldn't even reprogram them, but you could do quite a bit of useful graphics inside those constraints. Text especially, obviously. And you could do it with a lot less memory.

Nonprogrammable fonts were mostly just idiocy, because a quad-input NAND gate would have been sufficient to distinguish a 16-glyph RAM area from the ROM, and 128 bytes of RAM would have been sufficient for 16 8×8 glyphs; also, 5×8 glyphs were actually pretty common at the time. In the VT-52 and ADM-3A era, you had a good excuse; by the time of the VT-100 and H-19, it was just dumb. For better or worse, the less-idiotic Apple and Commodore mostly swept those pointlessly-crippled devices aside.

I hardly ever subscribe to youtube channels, but I clicked it as soon as I finished watching part one.

This was very awesome!! Thank you!

Very cool! I wonder if the rise of isometric games has something to do with the pixel downscaling. Isometric games do make use of the 'horizontal pixels'.

This is very similar to some of the tricks used in text mode[1] to give "pseudographics". You can basically view each grid square as a character position, and the definition of the tiles as a font.

[1] https://en.wikipedia.org/wiki/VGA-compatible_text_mode#Fonts

I really need to play around with some of the more bizarre graphics hardware systems, as my junior nerd years were more D&D-centered than fiddling around with my C64, and compared to weird 8-bit systems, forcing the VGA card to switch to planar mode seems positively pedestrian...

I wish modern computers had some reasonably efficient APIs for doing pixel-level screen buffer stuff in a straightforward way. Without using pixel shaders on polygons and other hacks.

Why not Canvas?

Canvas does exactly what I'm talking about, but for my purposes I would prefer something that works outside of a browser.

This is well produced and fun to watch. Great memories, thank you.

Now this just shows the complexity of Metal Slug, Mortal Kombat Trilogy, and SF :)

Pretty good overview! Thanks for posting.

Much like almost any video that's a person talking to the viewer, this would be much better as an article. The average reading speed seems to be about 300 WPM while average talking speed is 110-150, along with talking generally being less information-dense because of filler words and repetition. There is the option of increasing the video's speed at least.

If you prefer reading there are plenty of articles on this subject, for example this one from April: https://news.ycombinator.com/item?id=9454670

I liked David's video though, great production values and good pacing.

I, too, hate videos that consist of the face of some idiot as he blathers. However, this video mostly consisted of animations that illustrated the material much better than still images could have, and I think that probably speech interferes less with understanding the animations than text does, even for me.

You seem to be suggesting a "Snow Fall"-style presentation of this material. If that's what you think would be best, I'd be interested in seeing your remix!

I generally prefer text, but this was a topic best explained through visuals. Works far better in video than in text.

Some people enjoy learning from videos more than learning from articles.

It is a symptom of global shortening of the attention span due to constant distractions and multitasking (on the internet and elsewhere). Watching a movie in a window while browsing the web and chatting on irc seems to be quite usual these days.

Possibly, but I personally prefer videos to articles, so I don't mind.

Plus you can listen to or watch the video while doing otherwise mindless stuff like household chores.

That edge-case doesn't justify video over text in general

That is not an edge case by any stretch of the imagination. Most, if not all, people I know have videos playing while doing other stuff.

I've never done that or seen anyone do it, personally

I guess most such cases for me are watching a video and browsing at the same time, or cooking/cleaning while watching videos. Not once have you done that? I find that odd, but to each their own.

I've followed some bloggers who transitioned to videos. The common reason given was more people are interested in watching videos than reading, and they can monetize better.
