How to Fit a Large Program into a Small Machine (1999) (mud.co.uk)
148 points by tosh 8 months ago | 35 comments



Many of you have on-the-fly code compression in your pocket.

When I worked in Qualcomm's R&D dept, I developed a feature for their modem chips that allowed the code to be shipped in compressed form. I wrote a custom paging handler that could fetch and decompress pages on the fly.

It started with just code and RO data. Then we added RW compression, which meant our handler now had to re-compress evicted pages of RW data that had been modified.

I came up with the eviction scheme. I even added some tooling to our build system that would analyze hardware traces of typical usage and feed the results to the linker, organizing and optimizing the image layout to reduce page fetches (and thus decompressions) in those common use cases.
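
The handler itself is conceptually simple. A toy sketch of the idea in Python, with zlib standing in for the actual codec and all of the names made up:

    import zlib

    class CompressedPager:
        # Demand paging over a compressed image: pages stay compressed
        # until touched, and dirty RW pages are re-compressed on eviction.
        def __init__(self, image, page_size=4096, max_resident=8):
            self.page_size = page_size
            self.max_resident = max_resident
            self.store = [zlib.compress(image[i:i + page_size])
                          for i in range(0, len(image), page_size)]
            self.resident = {}   # page number -> decompressed bytearray
            self.dirty = set()
            self.lru = []        # least recently used page first

        def _fault(self, pno):
            if pno not in self.resident:
                if len(self.resident) >= self.max_resident:
                    victim = self.lru.pop(0)
                    data = self.resident.pop(victim)
                    if victim in self.dirty:
                        # Modified RW page: re-compress before dropping it.
                        self.store[victim] = zlib.compress(bytes(data))
                        self.dirty.discard(victim)
                self.resident[pno] = bytearray(zlib.decompress(self.store[pno]))
            if pno in self.lru:
                self.lru.remove(pno)
            self.lru.append(pno)
            return self.resident[pno]

        def read(self, addr):
            return self._fault(addr // self.page_size)[addr % self.page_size]

        def write(self, addr, value):
            pno = addr // self.page_size
            self._fault(pno)[addr % self.page_size] = value
            self.dirty.add(pno)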

They've since gone wild with it. I even remember there being discussion of stack compression. I think they're working on a hardware compression/decompression engine these days and integrating it with caching.


The crème de la crème of text compression for games back then was Huffman coding. I'm a bit surprised Zork didn't use it, as surely it would have saved space.

Compression algorithms like LZ weren't viable in games until much later, as most computers lacked the memory and clock speed to run them quickly.
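
For reference, building a Huffman code table takes surprisingly little code; a quick Python sketch:

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Heap entries are (frequency, tiebreaker, tree), where a tree is
        # either a symbol (leaf) or a (left, right) pair (internal node).
        heap = [(f, i, sym) for i, (sym, f) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        n = len(heap)
        while len(heap) > 1:
            f1, _, a = heapq.heappop(heap)
            f2, _, b = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, n, (a, b)))
            n += 1
        codes = {}
        def walk(tree, prefix):
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                codes[tree] = prefix or "0"
        walk(heap[0][2], "")
        return codes   # symbol -> bit string

    codes = huffman_codes("WEST OF HOUSE. YOU ARE STANDING IN AN OPEN FIELD.")
    bits = "".join(codes[c] for c in "HOUSE")

Frequent symbols (space, E) get the shortest codes, which is where the savings on prose come from.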


'80s C64 demos already used compression (both Lempel-Ziv style and Huffman). Suppose a demo had three parts: while part 1 was running, parts 2 and 3 sat in memory in compressed form (both code and data). When it was time to run part 2, it would be decompressed over the code and data of part 1, and so on.


Compressing cracked games was a big thing too. There was a lot of competition to produce the smallest version of a game, for faster downloading from a BBS, fitting more games on a floppy disk, etc. Having the best compression software was a sign that you were an ‘elite’ cracker. (Disclaimer: I was part of that scene as a young teen.)


Wow, that is innovative. Something like overlays in certain versions of Turbo Pascal, but better.


Never used Turbo Pascal overlays, but we had a product on OS/2, built with a Realia compiler, that didn't fit when ported to DOS. The overlays destroyed performance. I ended up making a segmenter that split the call graph into mostly-disjoint subtrees by selecting the subroots. A small set of the shared parts got to live in non-overlay memory. There was still a little thrashing because the splits weren't exclusive or of similar sizes, but it worked remarkably well.


>The overlays destroyed performance.

True. A space-time tradeoff.

>I ended up making a segmenter that split the call graph into mostly-disjoint subtrees by selecting the subroots. A small set of the shared parts got to live in non-overlay memory.

I didn't quite follow what the segmenter actually did there.
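
Is it something like this sketch? Pick the subroots, give each subtree its own overlay, and keep anything reachable from more than one subroot resident (Python, and all names are my guess):

    def segment(call_graph, subroots):
        # call_graph: {caller: [callees]}.  Each chosen subroot gets an
        # overlay holding the routines reachable only through it; routines
        # reachable from several subroots stay resident (non-overlay).
        def reachable(root):
            seen, stack = set(), [root]
            while stack:
                fn = stack.pop()
                if fn not in seen:
                    seen.add(fn)
                    stack.extend(call_graph.get(fn, []))
            return seen

        sets = {r: reachable(r) for r in subroots}
        shared = {fn for r, s in sets.items() for fn in s
                  if any(fn in sets[o] for o in subroots if o != r)}
        overlays = {r: s - shared for r, s in sets.items()}
        return overlays, shared

    graph = {"main": ["parse", "report"], "parse": ["lex", "err"],
             "report": ["fmt", "err"]}
    overlays, resident = segment(graph, ["parse", "report"])
    # overlays: {'parse': {'parse', 'lex'}, 'report': {'report', 'fmt'}}
    # resident: {'err'}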


Some of the Infocom games certainly used compression, I believe by doing things like using 4 or 5 bits for characters instead of 8. I couldn't tell you where I read that, though.
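
If it's the scheme I'm thinking of, it's three 5-bit "Z-characters" packed into each 16-bit word, with the top bit of a word marking the end of the string. Roughly like this (simplified; the real encoding adds shift codes, alphabets, and abbreviations):

    # Z-char 0 is a space, 6-31 map into the current alphabet;
    # 5 doubles as padding here.
    A0 = "abcdefghijklmnopqrstuvwxyz"

    def zchar(c):
        return 0 if c == " " else 6 + A0.index(c)

    def pack(text):
        zs = [zchar(c) for c in text]
        zs += [5] * (-len(zs) % 3)                 # pad to a multiple of 3
        words = [(zs[i] << 10) | (zs[i + 1] << 5) | zs[i + 2]
                 for i in range(0, len(zs), 3)]
        words[-1] |= 0x8000                        # end-of-string bit
        return words                               # 2 bytes per 3 characters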



It'd be tricky to use a compression scheme with a history window, since these interpreters also used a virtual memory scheme for static data with page sizes of around 1 KB.


LZ decompression is far faster and simpler than Huffman --- and has at times been seen to be even faster than memcpy().
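
The inner loop is basically nothing but byte copies, which is the whole trick. A toy LZ77-style decoder in Python:

    def lz_decompress(tokens):
        # tokens: (offset, length, literal) triples.  A back-reference
        # copies `length` bytes starting `offset` bytes back in the
        # output, then appends the literal byte.
        out = bytearray()
        for offset, length, literal in tokens:
            for _ in range(length):
                out.append(out[-offset])   # copy from the history window
            if literal is not None:
                out.append(literal)
        return bytes(out)

    # "banana": emit b, a, n, then copy 3 bytes starting 2 back.
    assert lz_decompress([(0, 0, ord("b")), (0, 0, ord("a")),
                          (0, 0, ord("n")), (2, 3, None)]) == b"banana"

No bit extraction, no tree walking, just copies, which is why on simple hardware it can approach the speed of a plain memory move.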


Unisys's US patent on the LZW algorithm was vigorously enforced until it expired on June 20, 2003. Many programs used Huffman coding to avoid paying royalties, and PNG was created specifically to avoid the LZW patent. A lot of compression-technology choices were based on avoiding proprietary algorithms.


I think you've confused LZ with LZW.


Hasn't the relative speed of CPU and memory flipped since then?


Didn't they have ARC compression back then? Not as good as LZ, but good enough to save space.


If you're interested in this kind of thing, Jon Bentley wrote a column called Programming Pearls which was later collected into books. One of the instalments was called Squeezing Space, and talked about stuff like this. He also discussed the trade-offs involved, as sometimes the code introduced to compress the data more than offsets the savings.


Even if most (if not all) of the problems in the book are things nobody in the industry cares about anymore, I still recommend Programming Pearls to anyone writing code.


Later DOS programs used overlays to fit more code within the 640 KiB limit and onto smaller machines. https://en.wikipedia.org/wiki/Overlay_(programming)



A couple of years back in FIRST Robotics, everything was analog and we used these Microchip-branded controllers that had only 32 kB of storage. We were taught pretty quickly how to optimize for memory and storage. I feel like memory and storage are so abundant nowadays that we tend to write bloated software.


Sure, 640kb should be enough for anybody.


What might be interesting to many modern HN readers is that this article (or so I believe) was part of what we today call a “content marketing” campaign.

The aim was to make Byte readers aware of Zork.

IIRC this article is a transcript of an article from an issue of Byte magazine in the early 1980s.


Various sources refer to it as: Blank, Marc, and S. W. Galley, "How to Fit a Large Program into a Small Machine," Creative Computing, vol. 6, no. 7, July 1980, pp. 80-87.

Ahh, here it is: https://archive.org/stream/creativecomputing-1980-07/Creativ...


I managed to get my own empire to fit locally on my 5 MB (4 MB/1 MB) Amiga 500 back in the day when developing MUDs.

My approach was a little bit different: all strings were shared with reference counts, and were copied on write. It took a while to boot and load the world, but once it was loaded it worked well.

That was still a little bit too big, so paging was introduced, which made it run quite nicely in what little RAM I had at the time.
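
Roughly this idea, sketched in Python (a toy version; all names invented):

    class StringPool:
        # Every distinct string is stored once and reference-counted;
        # writers get their own copy (copy-on-write) so sharers are safe.
        def __init__(self):
            self.table = {}   # text -> [refcount, buffer]

        def acquire(self, text):
            entry = self.table.setdefault(text, [0, bytearray(text, "utf-8")])
            entry[0] += 1
            return text       # the handle is just the key here

        def release(self, handle):
            entry = self.table[handle]
            entry[0] -= 1
            if entry[0] == 0:
                del self.table[handle]   # last reference gone

        def mutate(self, handle, new_text):
            # Copy-on-write: drop our reference and take a new entry,
            # leaving any other holders of the old string untouched.
            self.release(handle)
            return self.acquire(new_text)

With thousands of rooms and objects repeating the same descriptions, the duplicate strings collapse to one copy each.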


If I'm not mistaken, the title should actually say "(1980)". Didn't Infocom write this article when they were about to release Zork?


There are what appear to be several OCR bugs in the text. I think this may be a 1999 scan of an earlier print copy.


That's a good question, one I ask myself rather often when I'm on an Edge connection with 2-10kb/s loading a >1MB website, haha.


Wouldn't it be easier to put the repeated operations in subroutines rather than in a VM?


There's another consideration: portability. Competition among home computer platforms was strong in those days, with new ones coming and going every year; Infocom could quickly make their entire catalog available for a new machine because all they had to do was port the VM interpreter, then stick it on a disc along with the already-compiled game files.

New games would come out simultaneously on every platform they supported; Wikipedia lists those as "the Apple II family, Atari 800, IBM PC compatibles, Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Commodore Plus/4, Commodore 128, Kaypro CP/M, Texas Instruments TI-99/4A, the Mac, Atari ST, the Commodore Amiga and the Radio Shack TRS-80."

(Eventually they relaxed the restrictions on the Z-Machine and began creating games too big to fit on the smaller machines, but they were still able to do simultaneous releases on everything with enough room to juggle the larger games.)

Also, the VM implemented a domain-specific language, designed with "writing text adventure games" in mind; this was a huge advantage in a time when most games were programmed in assembly language. Both in terms of a faster development cycle, and in terms of building better text-based games by hiring people who were writers first and foremost rather than programmers.

(The VM interpreter also implemented virtual memory; while a 128k game file doesn't sound like much, it was pretty huge in an era when typical home computers had 32 or 64k of RAM. Swapfiles didn't exist on anything but huge minicomputers.)


With a VM, a single byte can encode one of 256 subroutines/jump targets. On the 6502 processor, whether you're using JSR or JMP, your shortest option is a 3-byte instruction, so that's a 3x code-size reduction. Of course you pay a cost in dispatch overhead. Incidentally, there's also an intermediate option, called a direct-threaded interpreter in the Forth world, where your VM code stream consists of jump targets rather than opcodes. That requires 2 bytes per operation rather than 3, while eliminating the table lookup that maps a 1-byte opcode to a 2-byte jump target.
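
In miniature, the two dispatch styles look like this (Python standing in for 6502 code, so only the structure carries over):

    def op_push1(vm): vm.stack.append(1)
    def op_add(vm):   vm.stack.append(vm.stack.pop() + vm.stack.pop())

    OPTABLE = {0x01: op_push1, 0x02: op_add}

    def run_opcodes(vm, code):
        for byte in code:          # 1 byte per operation...
            OPTABLE[byte](vm)      # ...but a table lookup on every dispatch

    def run_threaded(vm, code):
        for handler in code:       # 2 bytes (an address) per operation...
            handler(vm)            # ...and no table lookup

    class VM:
        def __init__(self): self.stack = []

    vm = VM()
    run_opcodes(vm, bytes([0x01, 0x01, 0x02]))        # push 1, push 1, add
    run_threaded(VM(), [op_push1, op_push1, op_add])  # same program, threaded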

The 3:1 reduction doesn't necessarily apply to operations that would normally map directly to CPU instructions. But especially for that generation of 8-bit microprocessors like the 6502, as soon as you're working with 16-bit data (which you definitely want for something like Zork) you end up spending a lot of code space for even simple operations. There's a famous example of a VM designed for the 6502 explicitly for reducing the code size (and inconvenience) of working with 16-bit operands: https://en.wikipedia.org/wiki/SWEET16


In a way, that is what was done.

"In effect, Z-code is like P-code: a string of subprogram calls, with the bodies of the subprograms executed by a Z-machine or ZIP. Any often-used sequence of pperations [sic] in Zork programs could, in principle, be compressed into a Z-code instruction, thereby moving the sequence of operations into the Z-machine or ZIP, where it needs to appear only once."

The goal wasn't to do it the easy way, but to achieve something that happened to be difficult.

It's also important to remember that loading a program into a computer back then was very different than today. Today, you load a program and it exists along with the operating system and all its support routines, needed or otherwise. For example, you may load a car racing game on Windows or macOS, but the printer drivers are still there.

Back then, most games on most computers would wipe out the entirety of the machine down to the last byte. Even the fonts could be blown out if you needed the space. (Storing changing variables in what we would call the font space was a fun way to create an interesting display while heavy math was being done, but without wasting resources on a "loading..." progress bar.)


Is there a Windows version of Zork I can play? The game looks fun!

Edit: cool, found it.

http://www.infocom-if.org/downloads/downloads.html


You can also play it in your browser:

https://archive.org/details/softwarelibrary_msdos_games?and%...

The archive.org site has a vast collection of old DOS games that are browser-playable:

https://archive.org/details/softwarelibrary_msdos_games


You need a Z-machine interpreter, like Frotz.


[flagged]


Your comments aren't meeting the guidelines, so we've banned the account. We're happy to unban accounts if you email hn@ycombinator.com and we believe you'll start using the site as intended.

https://news.ycombinator.com/newsguidelines.html



