That's why, if you zoom in on the water in this screenshot, you'll see that the edge of the lake has brown sparkles and the inner lake has green ones:
By doing this, you only needed to store 2 palettes per "screen", rather than 16 tiles x 11 tiles = 176 palettes per "screen".
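If it helps to see the trick as data, here's a toy Python sketch of the idea (my own made-up names and layout, not the actual ROM format): each tile stores a tiny palette index instead of its own colors, so a screen only needs the shared palettes plus one index per tile.

    # Toy model of shared-palette tile rendering (hypothetical data,
    # not ROM-accurate).
    SCREEN_W, SCREEN_H = 16, 11

    palettes = [
        ("brown", "tan", "dark_brown", "black"),    # e.g. lake edge / ground
        ("green", "light_green", "blue", "black"),  # e.g. inner lake / grass
    ]

    # One palette *index* per tile (1 bit here), vs. a full 4-entry
    # palette stored for every tile.
    palette_index = [[1 if 4 <= x < 12 and 3 <= y < 8 else 0
                      for x in range(SCREEN_W)] for y in range(SCREEN_H)]

    def tile_color(x, y, pixel_value):
        """pixel_value is the tile's 2-bit color number (0-3)."""
        return palettes[palette_index[y][x]][pixel_value]

    print(tile_color(0, 0, 0))   # edge tile  -> "brown"
    print(tile_color(8, 5, 0))   # inner tile -> "green"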
Any chance you are aware of any other hacks from old school video games?
1. Link's tunic color, his skin tone, and his hair color
2. White, orange, red
3. White, light blue, dark blue
4. Black, dark teal, and red
Here's a graphic I made of every sprite in the ROM six times -- one with the three versions of palette #1 (no ring, blue ring, red ring) and one for each of the three other palettes: http://i.imgur.com/omBWovb.png
It allows you to see that, e.g., Wizzrobes are made of only three colors, and they're the exact same three colors as the red candle, heart containers, tektites, like-likes, octoroks, rupees, the boomerang, the map, fire, the life potion, fairies, and the master sword.
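That palette-swap trick is easy to mock up. A minimal Python sketch (made-up sprite data, not the real ROM): the 2-bit pixel data never changes, only the 3-color palette it indexes into does.

    # Sketch of NES-style sprite palette swapping (hypothetical sprite
    # and color names). A sprite stores 2-bit color numbers; 0 is
    # transparent, and 1-3 index into whichever palette is assigned.
    sprite = [
        [0, 1, 1, 0],
        [1, 2, 2, 1],
        [1, 3, 3, 1],
        [0, 1, 1, 0],
    ]

    palettes = {
        "no_ring":   ["green", "skin", "brown"],   # tunic / skin / hair
        "blue_ring": ["blue",  "skin", "brown"],
        "red_ring":  ["red",   "skin", "brown"],
    }

    def render(sprite, palette):
        return [[None if v == 0 else palette[v - 1] for v in row]
                for row in sprite]

    # The same pixel data, recolored three ways just by swapping palettes:
    for name, pal in palettes.items():
        print(name, render(sprite, pal))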
The developers of Shovel Knight actually emulated the way NES graphics work (and added some enhancements to it) and the result is a much more convincing retro feel.
The 16-bit computers already had graphics capabilities much more similar to modern graphics cards than their predecessors did.
Thank you for answering a question I've had for 26 years or so now.
Fortunately I also had modem software (Telix) on the PC's 20MB hard drive so I could copy the demo over a null modem cable with Zmodem protocol, from a win2k box that also had a serial port, USB port, and hyperterm on it! And I had pkunzip on it already as well to unzip the file. It was so slow! I forgot how it was slow just to unzip files back then. (And to get the file onto THAT computer I had to put it on a USB stick from my windows 7 box because it's the one with wifi and USB but no RS-232 serial port.)
Now I get why the demo sometimes comes out in black and white when it tries to do 1024 colors. Somehow it's not triggering the "color burst" correctly.
Old story: I was excited the first time I realized Flight Simulator 3 could show more colors (blue sky, green grass) if I selected the right mode and hooked it up to a TV. Same for old Sierra games like King's Quest: https://m.youtube.com/watch?v=Km7UB9CRMyE
Hi-res (HGR) gave you 8 color values, though only 6 distinct colors: two whites and two blacks, plus purple, green, blue, and orange. Certain colors next to each other caused weirdness; I think that's why there were the duplicate black/white values.
You could dump the video memory to disk, though, and load it back: kind of like a modern screenshot with no compression. The disks couldn't hold a lot of images, which made young me frustrated trying to make a graphical adventure. I did a maze game with the 16-color low-res graphics.
You could directly manipulate the graphics with the POKE command (push values into memory).
"CALL 62454" inexplicably changed the whole screen to the current color. You could have fun with a loop through the colors (HCOLOR= 0 through 7).
Unlike almost every other machine, the Apple 8-bit computers offered 7 pixels per byte, with the high-order bit shifting those pixels just a little.
That shift selected between the two sets of colors.
At the pixel clock of the Apple, artifacting basically offered a 4-color display: two adjacent pixels on = white, two pixels off = black, and the 01 and 10 patterns were the colors.
With the pixel shift, the Apple 8-bit computers offered up a 6-color display. While the two blacks and whites seem redundant, they are actually necessary to get bit patterns lined up precisely, or to avoid additional color artifacts along some image boundaries.
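Here's a rough Python sketch of that decoding (heavily simplified; real NTSC artifacting depends on analog details this ignores, and the function name is mine):

    # Approximate Apple II hi-res color decode for one byte. Each byte
    # holds 7 pixels; bit 7 is the half-pixel shift that selects the
    # alternate color pair.
    def decode_hgr_byte(byte, even_column=True):
        shift = bool(byte & 0x80)
        pair = ("blue", "orange") if shift else ("purple", "green")
        pixels = [(byte >> i) & 1 for i in range(7)]  # low bit = leftmost
        colors = []
        for i, bit in enumerate(pixels):
            if bit == 0:
                colors.append("black")
            else:
                # Screen-position parity picks which of the pair you get.
                phase = (i + (0 if even_column else 7)) % 2
                colors.append(pair[phase])
        # Two adjacent 'on' pixels fuse into white, as the hardware
        # effectively does.
        for i in range(6):
            if pixels[i] and pixels[i + 1]:
                colors[i] = colors[i + 1] = "white"
        return colors

    print(decode_hgr_byte(0b0101010))  # alternating bits -> one artifact color
    print(decode_hgr_byte(0b1111111))  # all bits on      -> white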
Note: The Apple /// could probably do that too.
That little shift ended up being double high resolution in the later //e machines.
Apple video is a total hack through and through.
I almost got more enjoyment out of this article than out of the Super Game Boy I bought 20 years ago.
Since our eyes are more sensitive to detail than to absolute color, you can lower your bitrate without lowering perceived video quality.
For me: I really don't enjoy looking at 4:2:0 or 4:1:1 chroma subsampling (4:2:2 usually doesn't cause problems)
4:2:0 sort of works for live scenes which don't typically have sharp chroma boundaries but when you see solid red/blue graphics superimposed over a scene, the blocky bleed of color across the scene is like knives in my eyes.
I still wonder why that happens to red, mostly because it looks like someone's bleeding everywhere. I don't seem to notice it for green or blue. At first I thought it was the downsampling, but it has to be something else (a combination, maybe?), since JPEG seems to handle downsampled red comparatively better when highly compressed. Right now, I just think it's a flaw in H.264, its encoders, and/or its decoders.
But bleed won't happen with green because green is basically the luminance channel in YCbCr/YUV. This means that green runs at full sample resolution compared to red and blue which run at 50% to 25% resolution (depending on subsampling).
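A small Python sketch of why (standard BT.601 conversion; the 2x2 edge example is contrived): a saturated red edge carries most of its identity in the quarter-resolution Cr channel, while a green edge keeps a strong edge in full-resolution luma.

    # BT.601 RGB -> Y'CbCr plus a 4:2:0-style average, to show where
    # the bleed comes from.
    def rgb_to_ycbcr(r, g, b):
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
        cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
        return round(y), round(cb), round(cr)

    print(rgb_to_ycbcr(255, 0, 0))  # red:   Y=76,  so chroma does the work
    print(rgb_to_ycbcr(0, 255, 0))  # green: Y=150, edge survives in luma

    # 4:2:0 stores one Cb and one Cr sample per 2x2 block; a red/grey
    # edge inside a block averages to a smeared pink halo.
    cr_samples = [255, 128, 255, 128]   # Cr across a block straddling the edge
    print(sum(cr_samples) / 4)          # 191.5: the single stored "bleed" value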
I write streaming video servers. An aspect of the job is continuously optimizing parameters and codecs to satisfy PSNR and perceptual video quality test cases across a large library of test files. It's not a fun aspect of the job.
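For anyone who hasn't run into it, PSNR is just log-scaled mean squared error. A minimal version (NumPy, 8-bit peak assumed):

    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-shape images."""
        diff = reference.astype(np.float64) - test.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)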
"A cell encoder breaks the video into cells. A cell is 16 pixels, arranged in a 4x4 group (Figure B-1). Cells are encoded into the bytestream in scanline order, from left to right and from top to bottom.
The basic encoding scheme used in both versions of Cell is based on an image coding method called Block Truncation Coding (BTC). The 16 pixels in a cell are represented by a 16-bit mask and two intensities or colors. These values specify which intensity to place at each of the pixel positions. The mask and intensities can be chosen to maintain certain statistics of the cell, or they can be chosen to reduce contouring in a manner similar to ordered dither.
The primary advantage of BTC is that its decoding process is similar to the operation of character fonting in a color framebuffer. The character display process for a framebuffer takes as input a foreground color, a background color, and a mask that specifies whether to use the foreground or background color at each pixel. Because this function is so important to the window system, it is often implemented as a display primitive in graphics accelerators. The Cell compression technique leverages these existing primitives to provide full-motion video decoding without special hardware or modifications to the window system."
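The quoted scheme is easy to demo. A minimal Python sketch of one simple BTC variant (threshold at the cell mean, intensities = per-group means; the original BTC also preserves variance, and Cell layers more on top):

    # Encode one 4x4 cell as a 16-bit mask plus two intensities,
    # per the description above. pixels: 16 grayscale values in
    # scanline order.
    def btc_encode_cell(pixels):
        mean = sum(pixels) / 16
        mask, hi, lo = 0, [], []
        for i, p in enumerate(pixels):
            if p >= mean:
                mask |= 1 << i        # bit set -> use the "foreground" value
                hi.append(p)
            else:
                lo.append(p)
        a = round(sum(lo) / len(lo)) if lo else 0   # background intensity
        b = round(sum(hi) / len(hi)) if hi else 0   # foreground intensity
        return mask, a, b

    # Decoding is exactly the "character fonting" operation: pick fg or
    # bg per pixel according to the mask.
    def btc_decode_cell(mask, a, b):
        return [b if (mask >> i) & 1 else a for i in range(16)]

    cell = [10, 12, 200, 210, 11, 13, 205, 198,
            9, 14, 202, 207, 12, 10, 199, 204]
    print(btc_decode_cell(*btc_encode_cell(cell)))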
I remember programming custom characters into my 9-pin IBM ProPrinter so that I could print certain logos and symbols directly as characters. It was the same process: lay the character out on graph paper, then convert it to binary format. I think that was how I was first introduced to binary in general.
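The graph-paper step translates directly to code. A quick Python sketch (made-up glyph; note that 9-pin printers actually wanted column-wise bytes in graphics mode, so take the packing direction as illustrative):

    # Turn an 8x8 grid of '.'/'X' cells into bytes, one row per byte,
    # MSB = leftmost cell.
    glyph = [
        "..XXXX..",
        ".X....X.",
        "X.X..X.X",
        "X......X",
        "X.X..X.X",
        "X..XX..X",
        ".X....X.",
        "..XXXX..",
    ]

    rows = [sum(1 << (7 - i) for i, c in enumerate(line) if c == "X")
            for line in glyph]
    print([f"{b:08b}" for b in rows])   # the binary you'd read off graph paper
    print(bytes(rows).hex())            # the bytes you'd download to the printer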
It would be cool to see a video on Amiga graphics and the crazy things you can do with bitplanes and copper lists.
Check out these shots of Trantor on different 8-bit platforms. You can tell straight away which one is the CPC or C64 (http://frgcb.blogspot.co.uk/2014/09/trantor-last-stormtroope...)
You could also tell the video chip to look somewhere in RAM for these character definitions. Maybe just to replace the standard font with something cooler; maybe some of your font would contain little 8x8 building blocks you could use to make bigger images. Most games would build their backgrounds this way -- you'd only have so much variation available, but you also only had to move about 1,000 bytes (the 40x25 screen matrix) to scroll the screen, rather than 8K. Which sounds like nothing to a modern machine but was a serious difference for a 1 MHz computer.
The bitmap mode juggled this a little, and ignored the 'look at the characters on this row' part in favor of just stepping through an 8K chunk of RAM over the course of the display. (Well, the multicolor bitmap mode looked at the characters in the row as well, but used them for color data rather than as an index into the character map.)
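A back-of-envelope Python sketch of that cost difference (numbers from the standard 40x25 character mode):

    # Character mode: scrolling moves the 1,000-byte screen matrix, not
    # the 8,000 bytes of pixel data the video chip expands it into.
    SCREEN_BYTES = 40 * 25          # one character code per cell
    BITMAP_BYTES = 40 * 25 * 8      # 8 bytes of pixel data per cell

    screen = bytearray(SCREEN_BYTES)

    def scroll_left(screen, width=40):
        for row in range(0, len(screen), width):
            screen[row:row + width - 1] = screen[row + 1:row + width]
            screen[row + width - 1] = 32        # blank in the new column
        return screen

    scroll_left(screen)
    print(SCREEN_BYTES, "bytes moved per scroll vs", BITMAP_BYTES,
          "in bitmap mode")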
If you want to know about this in more detail then try firing up a c64 emulator and going through the 'graphics' chapter of the c64 Programmer's Reference Guide: http://www.commodore.ca/manuals/c64_programmers_reference/c6...
I'm not sure the acronym "API" even existed at this point in time. I sure never heard it when I was fooling around with 6502 assembly.
tl;dr: severely limited hardware, programmed right on the bare metal. API? What's that?
The C64 reads the current characters every 8 scanlines, not each scanline (well, you can get it to read on arbitrary scanlines by manipulating the vertical scroll register, but that's beside the point here). Due to the way the C64's data bus works, the main processor has to stop executing instructions while the graphics chip reads the character data.
You have to account for this if you're trying to synchronize code to particular scanlines -- on the scanlines where character data is read, you'll only get 23 cycles instead of the usual 63 (on a PAL machine, at least). For even more info, see http://www.zimmers.net/cbmpics/cbm/c64/vic-ii.txt
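Using those numbers, a quick Python estimate of the CPU time badlines eat per frame (PAL, 200-line character area assumed; real timing has more wrinkles, e.g. sprites):

    LINES = 200                 # visible character area
    NORMAL, BADLINE = 63, 23    # CPU cycles per scanline (PAL)

    total = sum(BADLINE if line % 8 == 0 else NORMAL
                for line in range(LINES))
    lost = (LINES // 8) * (NORMAL - BADLINE)
    print(total, "cycles available across the character area;",
          lost, "lost to badlines")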
There is no graphics API in the Commodore 64. You programmed the hardware directly in assembly. If you wanted an API you wrote it yourself.
Boy, assembly was a bitch at first.
The average computer couldn't display a proper 24-bit photo until about when Windows 95 came out...
But I am lost on the math part about the color cells. Why does each color cell only need 1 byte? If each cell is 8 bits wide and 8 bits deep, wouldn't that be 8 bytes?
Another common thing to do, especially on machines just a little bit earlier, was to use one byte for the colors, one byte for a glyph identifier, and then put the 8 bytes per glyph somewhere else, maybe ROM, which was way cheaper than RAM. You could only have 256 distinct glyphs on the screen at a time, and maybe you couldn't even reprogram them, but you could do quite a bit of useful graphics inside those constraints. Text especially, obviously. And you could do it with a lot less memory.
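A sketch of that two-bytes-per-cell scheme in Python (layout and names are mine; real machines packed the color byte differently):

    # One glyph-ID byte and one color byte per cell; the 8-byte glyph
    # bitmaps live in a single shared (possibly ROM) table.
    CHARSET = {65: [0x18, 0x3C, 0x66, 0x66, 0x7E, 0x66, 0x66, 0x00]}  # 'A'

    def render_cell(glyph_id, color_byte, charset=CHARSET):
        fg, bg = color_byte >> 4, color_byte & 0x0F   # packed fg/bg nibbles
        rows = charset[glyph_id]
        return [[fg if (row >> (7 - x)) & 1 else bg for x in range(8)]
                for row in rows]

    # Per-cell cost: 2 bytes (glyph ID + color), vs 8+ bytes if every
    # cell stored its own pixels.
    for row in render_cell(65, 0x1F):
        print("".join("#" if px == 1 else "." for px in row))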
Nonprogrammable fonts were mostly just idiocy, because a quad-input NAND gate would have been sufficient to distinguish a 16-glyph RAM area from the ROM, and 128 bytes of RAM would have been sufficient for 16 8×8 glyphs; also, 5×8 glyphs were actually pretty common at the time. In the VT-52 and ADM-3A era, you had a good excuse; by the time of the VT-100 and H-19, it was just dumb. For better or worse, the less-idiotic Apple and Commodore mostly swept those pointlessly-crippled devices aside.
I liked David's video, though: great production values and good pacing.
You seem to be suggesting a "Snow Fall"-style presentation of this material. If that's what you think would be best, I'd be interested in seeing your remix!