
How to Fit a Large Program into a Small Machine (1999) - tosh
http://mud.co.uk/richard/htflpism.htm
======
matmann2001
Many of you have on-the-fly code compression in your pocket.

When I worked in Qualcomm's R&D dept, I developed a feature for their modem
chips that allowed the code to be shipped in compressed form. I wrote a custom
paging handler that could fetch and decompress pages on-the-fly.

It started with just code and RO data. Then we added RW compression, which
meant our handler now had to re-compress evicted pages of RW data that had
been modified.

I came up with the eviction scheme. I even added some tooling to our build
system that would analyze hardware traces of typical usage in order to provide
input to the linker to organize and optimize the image layout so as to reduce
page fetches (thus decompressions) in those common use cases.

They've since gone wild with it. I even remember there being discussion on
stack compression. I think they are working on a hardware
compression/decompression engine these days and integrating with caching.
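The mechanism described above can be sketched in a few dozen lines. This is a toy model only, with zlib standing in for whatever codec the real firmware used and all names invented for illustration: pages live compressed, a small cache holds decompressed pages, and dirty RW pages get re-compressed on eviction.

```python
import zlib

class CompressedPageStore:
    """Toy pager: pages live compressed; a small cache holds a few
    decompressed pages. Dirty RW pages are re-compressed on eviction."""

    def __init__(self, pages, cache_size=2):
        # pages: dict page_number -> bytes (uncompressed contents)
        self.store = {n: zlib.compress(data) for n, data in pages.items()}
        self.cache = {}        # page_number -> bytearray (decompressed)
        self.dirty = set()     # pages modified since decompression
        self.lru = []          # eviction order, oldest first
        self.cache_size = cache_size
        self.fetches = 0       # decompression count (what layout tuning minimizes)

    def _fault(self, n):
        if n in self.cache:
            self.lru.remove(n)
            self.lru.append(n)
            return
        if len(self.cache) >= self.cache_size:
            victim = self.lru.pop(0)
            if victim in self.dirty:   # RW page: re-compress before discarding
                self.store[victim] = zlib.compress(bytes(self.cache[victim]))
                self.dirty.discard(victim)
            del self.cache[victim]
        self.cache[n] = bytearray(zlib.decompress(self.store[n]))
        self.lru.append(n)
        self.fetches += 1

    def read(self, n, off):
        self._fault(n)
        return self.cache[n][off]

    def write(self, n, off, value):
        self._fault(n)
        self.cache[n][off] = value
        self.dirty.add(n)
```

The `fetches` counter is the quantity the trace-driven linker layout would be trying to minimize: grouping routines that run together onto the same page means fewer faults, hence fewer decompressions.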

------
pubby
The crème de la crème of text compression for games back then was Huffman
coding. I'm a bit surprised Zork didn't use it, as surely it would have saved
space.

Compression algorithms like LZ weren't viable in games until much later, as
most computers lacked the memory and clock speed to run them quickly.
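For reference, here is a minimal Huffman encoder over a line of game-style text (the Z-machine itself actually packed text as 5-bit "Z-characters" plus an abbreviation table rather than using Huffman):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table (char -> bit string) from frequencies."""
    freq = Counter(text)
    # heap entries: (frequency, tiebreak, tree); tree is a char or a pair
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # single-symbol corner case
    walk(heap[0][2], "")
    return codes

def encoded_bits(text, codes):
    return sum(len(codes[ch]) for ch in text)
```

On typical English prose this lands well under the 8 bits per character of raw ASCII, since common letters get short codes, though the decoder and code table eat into that saving on a small machine.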

~~~
toolslive
80s C64 demos already used compression (both Lempel-Ziv style and Huffman).
Suppose the demo had 3 parts. When part 1 was running, part 2 and 3 were
present in memory in compressed form (both code and data). When it was time to
run part 2, it would be decompressed over the code and data of part 1, and so
on.
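The decompress-over-the-old-part trick works because an LZ-style decoder only ever reads bytes it has already written. A toy sketch (the token format here is invented; real C64 packers used much tighter bit-level encodings):

```python
def lz_decompress(tokens, memory, base):
    """Expand LZ77-style tokens into `memory` starting at `base`,
    overwriting whatever (e.g. the previous demo part) lived there.
    Tokens are either ('lit', byte) or ('copy', distance, length)."""
    pos = base
    for tok in tokens:
        if tok[0] == "lit":
            memory[pos] = tok[1]
            pos += 1
        else:
            _, dist, length = tok
            for _ in range(length):    # byte-at-a-time so overlapping copies work
                memory[pos] = memory[pos - dist]
                pos += 1
    return pos - base                  # decompressed size
```

Copying byte-at-a-time matters: a `('copy', 1, 10)` token legally overlaps its own output, which is how runs are encoded cheaply.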

~~~
vram22
Wow, that is innovative. Something like overlays in certain versions of Turbo
Pascal, but better.

~~~
karmakaze
Never used Turbo Pascal overlays but we had a product on OS/2 using a Realia
compiler that when ported to DOS didn't fit. The overlays destroyed
performance. I ended up making a segmented that spilt the call graph into
mostly subtree by selecting the subroots. A small set of the shared pets got
to live in non overlay memory. There was still a little thrashing because the
splits weren't exclusive or of similar sizes but it worked remarkably well.
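As best I can read the description, the split works like this: each chosen subroot's reachable set of routines becomes one overlay, and anything reachable from more than one subroot is "shared" and stays resident. A rough sketch, all names illustrative:

```python
def split_overlays(call_graph, subroots):
    """Partition a call graph into per-subroot overlays plus a resident
    'shared' set of routines reachable from more than one subroot.
    call_graph maps a routine name to the routines it calls."""
    def reachable(root):
        seen, todo = set(), [root]
        while todo:
            fn = todo.pop()
            if fn not in seen:
                seen.add(fn)
                todo.extend(call_graph.get(fn, ()))
        return seen

    trees = {r: reachable(r) for r in subroots}
    shared = {fn for r1 in subroots for r2 in subroots if r1 != r2
              for fn in trees[r1] & trees[r2]}
    overlays = {r: trees[r] - shared for r in subroots}
    return overlays, shared
```

Picking subroots so the overlays come out roughly equal in size (and minimizing the shared set) is the hard part, which matches the comment about splits not being exclusive or similarly sized.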

~~~
vram22
>The overlays destroyed performance.

True. A space-time tradeoff.

>I ended up making a segmented that spilt the call graph into mostly subtree
by selecting the subroots. A small set of the shared pets got to live in non
overlay memory.

I didn't get what you mean by "making a segmented" and "shared pets". Typos?

------
EdwardCoffin
If you're interested in this kind of thing, Jon Bentley wrote a column called
Programming Pearls which was later collected into books. One of the
instalments was called Squeezing Space, and talked about stuff like this. He
also discussed the trade-offs involved, as sometimes the code introduced to
compress the data more than offsets the savings.

~~~
Pamar
Even if most (if not all) problems in the book are something nobody cares
about anymore in the industry, I still recommend _Programming Pearls_ to
anyone writing code.

------
modells
Later DOS programs used overlays to get more code into the limits of 640KiB
and smaller machines.
[https://en.wikipedia.org/wiki/Overlay_(programming)](https://en.wikipedia.org/wiki/Overlay_\(programming\))

------
dang
Short, but nine years ago:
[https://news.ycombinator.com/item?id=1032408](https://news.ycombinator.com/item?id=1032408)

------
syntaxing
A couple years back in FIRST robotics, everything used to be analog and we
would use these Microchip-branded controllers which had only 32kB of storage.
We were taught pretty quickly how to optimize for memory and storage. I feel
like memory and storage are so abundant nowadays that we tend to write
overbloated software.

~~~
the-dude
Sure, 640kb should be enough for anybody.

------
stevoski
What might be interesting to many modern HN readers is that this article (or
so I believe) was part of what we today call a “content marketing” campaign.

The aim was to make Byte readers aware of Zork.

IIRC this article is a transcript of an article from an issue of Byte magazine
in the early 1980’s.

~~~
eesmith
Various sources refer to it as: Blank, Marc and S. W. Galley, "How to Fit a
Large Program into a Small Machine," Creative Computing, vol. 6, no. 7, July
1980, pp. 80-87.

Ahh, here it is:
[https://archive.org/stream/creativecomputing-1980-07/Creativ...](https://archive.org/stream/creativecomputing-1980-07/Creative_Computing_v06_n07_1980_Jul#page/n81/search/%22How+to+Fit+a+Large+Program+into+a+Small+Machine%22)

------
jerrysievert
I managed to get my own empire to fit locally on my 5mb (4mb/1mb) amiga 500
back in the day when developing MUDs.

my approach was a little bit different - all strings were shared with
reference counts, and were copied on write. it took a while to boot and load
the world, but once it was loaded it worked well.

that was still a little bit too big, so paging was introduced, which made it
run quite nicely in what little ram I had at the time.
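The shared-strings scheme described above can be sketched as an interning table with reference counts, where a "write" really means re-pointing at (or creating) a different entry. This is a toy model with invented names, not the actual MUD code:

```python
class SharedString:
    """Interned, refcounted strings with copy-on-write semantics:
    identical texts share one storage entry; writing through one
    handle never disturbs the others."""
    _table = {}    # text -> [refcount, text]

    def __init__(self, text):
        entry = SharedString._table.setdefault(text, [0, text])
        entry[0] += 1
        self._entry = entry

    def get(self):
        return self._entry[1]

    def set(self, new_text):
        # copy-on-write: release our reference, then attach to (or
        # create) the entry for the new text
        self._entry[0] -= 1
        if self._entry[0] == 0:
            del SharedString._table[self._entry[1]]
        entry = SharedString._table.setdefault(new_text, [0, new_text])
        entry[0] += 1
        self._entry = entry
```

With a world full of repeated room descriptions and item names, deduplicating at load time like this trades boot-time hashing for a much smaller resident set, which matches the slow-boot-then-smooth behaviour described.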

------
mwcampbell
If I'm not mistaken, the title should actually say "(1980)". Didn't Infocom
write this article when they were about to release Zork?

~~~
hinkley
There are what appear to be several OCR bugs in the text. I think this may be
a 1999 scan of an earlier print copy.

------
k__
That's a good question, one I ask myself rather often when I'm on an Edge
connection with 2-10kb/s loading a >1MB website, haha.

------
eternauta3k
Wouldn't it be easier to put the repeated operations in subroutines rather
than in a VM?

~~~
egypturnash
There's another consideration: portability. Competition among home computer
platforms was strong in those days, with new ones coming and going every year;
Infocom could quickly make their _entire_ catalog available for a new machine
because all they had to do was port the VM interpreter, then stick it on a
disc along with the already-compiled game files.

New games would come out simultaneously on every platform they supported;
Wikipedia lists those as "the Apple II family, Atari 800, IBM PC compatibles,
Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Commodore
Plus/4, Commodore 128,[3] Kaypro CP/M, Texas Instruments TI-99/4A, the Mac,
Atari ST, the Commodore Amiga and the Radio Shack TRS-80."

(Eventually they expanded the restrictions on the Z-Machine and began creating
games too big to fit on the smaller machines, but they were still able to do
simultaneous releases on everything with enough room to juggle the larger
games.)

Also, the VM implemented a domain-specific language, designed with "writing
text adventure games" in mind; this was a huge advantage in a time when most
games were programmed in assembly language. Both in terms of a faster
development cycle, and in terms of building better text-based games by hiring
people who were _writers_ first and foremost rather than programmers.

(The VM interpreter also implemented virtual memory; while a 128k game file
doesn't sound like much, it was _pretty huge_ in an era when typical home
computers had 32 or 64k of RAM. Swapfiles didn't _exist_ on anything but huge
minicomputers.)

------
fatdickens
Is there a Windows version of Zork I can play? The game looks fun!

Edit: cool, found it.

[http://www.infocom-if.org/downloads/downloads.html](http://www.infocom-if.org/downloads/downloads.html)

~~~
wenc
You can also play it on your browser:

[https://archive.org/details/softwarelibrary_msdos_games?and%...](https://archive.org/details/softwarelibrary_msdos_games?and%5B%5D=zork&sin=)

The archive.org site has a vast collection of old DOS games that are browser-
playable:

[https://archive.org/details/softwarelibrary_msdos_games](https://archive.org/details/softwarelibrary_msdos_games)

