
An old Macintosh IIci 25MHz running Apache under NetBSD - emersonrsantos
http://littledork.err0neous.org
======
vidarh
It's quite amusing that this seems interesting today. Apache works on really
old, slow hardware quite well. E.g. here's Apache, MySQL and PHP for AmigaOS:
[http://www.amigaos.net/software/77/aamp-apache-mysql-php](http://www.amigaos.net/software/77/aamp-apache-mysql-php)

My first job was an ISP I co-founded. In retrospect, the biggest mistake we
made was over-investing in hardware: buying two 120MHz Pentium servers when
we could have gotten away with 486s and a leased Cisco access server or two.

But those two 120MHz Pentium servers, with 128MB apiece, between them served
up dozens of corporate websites, handled 32 dialup connections (we had a
horrific setup with Cyclades serial port multipliers wired up to a large
wooden board with US Robotics Sportster modems wall-mounted - the Sportsters
were cheap consumer-grade modems prone to overheating, so stacking them in
any way was generally a bad idea), ran an NNTP server with 30,000 USENET
groups, mail for all our customers, and shell access.

We got 5-6 25MHz or 33MHz 486s with 16MB RAM to use as X servers, and ran all
the clients (Emacs, Netscape and a bunch of terminals was a typical session)
on the two Pentiums.

~~~
bane
Funny, my 2nd job was as employee #1 at a local ISP. We had a very similar
arrangement (USR modems and all) for dial-in until we got financing and
bought proper hardware; I wish I could remember the details. We hosted
something like 3000 users - all their websites, shell access, Usenet,
everything - off of just 2 or 3 Pentiums with even lower specs than what you
describe. Everything ran quite well. We even ran a couple of very
high-volume corporate websites for some rather famous television shows of
the time off of this arrangement.

I'd point out that it appears even a high volume site like HN hasn't knocked
this ancient system offline.

It drives me nuts when I read about figuring out how to handle some couple
hundred requests a second on racks full of modern hardware with _Gigabytes_ of
RAM. Any clearance bin laptop you can buy at Costco should be able to handle
thousands to tens-of-thousands of requests per second. We're clearly doing
something wrong these days.

~~~
vidarh
The thing is, so much was static. It's easy to shuffle bytes.

There are a few problems: People don't know or care about profiling the basic
stuff like context switches and data copies; people don't have a baseline idea
of what _should_ be possible; abstractions upon abstractions upon abstractions
even when it complicates the code.
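
Even getting that baseline is cheap. As a small illustration (Python here,
on a Unix system), the kernel will tell you how often your process has been
context-switched:

```python
import resource

# getrusage reports, among other things, how often the kernel has
# switched this process out: voluntarily (e.g. blocking on I/O) or
# involuntarily (preempted). A surprising jump in either number is a
# hint that something is worth profiling.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("voluntary context switches:  ", usage.ru_nvcsw)
print("involuntary context switches:", usage.ru_nivcsw)
```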

One of my biggest pet peeves in that respect is that people tend to not even
know how many objects they create in many dynamic languages. Many people don't
even know of any simple ways of finding out.
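
One simple way, sketched here in Python (the idea carries over to most
dynamic runtimes): ask the garbage collector how many tracked objects a
piece of code leaves behind.

```python
import gc

def live_object_delta(fn):
    """Rough count of tracked objects a callable leaves behind (CPython).

    Only container objects (lists, dicts, instances...) are gc-tracked,
    so this is a lower bound - but it's enough to spot code allocating
    far more than you'd expect.
    """
    gc.collect()
    before = len(gc.get_objects())
    result = fn()  # keep the result alive so its objects are counted
    gc.collect()
    return len(gc.get_objects()) - before, result

# Building 1000 one-element lists leaves roughly 1001 tracked objects.
delta, data = live_object_delta(lambda: [[i] for i in range(1000)])
print(delta)
```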

My most successful application speedup (EDIT: measured in speedup per hour
expended - plenty of examples of much greater speedups, but they are rarely as
quick) was spending an afternoon eliminating string copies in a late 90's CMS
written in C++, and cutting page generation time by 30% in a system that was
already heavily optimized (it provided a statically typed C++ like scripting
language that was bytecode compiled at a time when most competing solutions
were using horribly inefficient interpreters, and were often themselves
written in slow interpreted languages).

My second biggest pet peeve is when people think system calls are cheap
because they look like function calls. Sometimes you can increase throughput
by an order of magnitude just by eliminating small read()'s (non-blocking
filling of userspace buffer instead, and "read" from that). When I see slow
throughput, the first thing I do is break out "strace" to look for unnecessary
system calls...
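
The effect is easy to reproduce even without strace. This Python sketch
uses a fake raw stream (the CountingRaw class is invented for illustration)
standing in for a file descriptor, and counts low-level reads with and
without a userspace buffer:

```python
import io

class CountingRaw(io.RawIOBase):
    """Fake raw stream: counts low-level reads, standing in for read()
    system calls on a real file descriptor."""
    def __init__(self, data):
        self.data, self.pos, self.calls = data, 0, 0
    def readable(self):
        return True
    def readinto(self, b):
        self.calls += 1
        chunk = self.data[self.pos:self.pos + len(b)]
        b[:len(chunk)] = chunk
        self.pos += len(chunk)
        return len(chunk)

payload = b"x" * 65536

# One-byte reads straight from the raw stream: one "syscall" per byte.
raw = CountingRaw(payload)
while raw.read(1):
    pass

# Same one-byte reads through an 8KB userspace buffer: a handful of fills.
raw2 = CountingRaw(payload)
buf = io.BufferedReader(raw2, buffer_size=8192)
while buf.read(1):
    pass

print(raw.calls, raw2.calls)  # tens of thousands vs. single digits
```

On a real process, `strace -c` gives you the same per-syscall counts
directly.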

But these problems are often a direct result of abstractions that mean huge
numbers of developers no longer know or understand the cost of a lot of the
functionality they depend on, and they can get away with that "too often"
because they work on hardware that is so fast it is extremely forgiving
(e.g. my new "weird unknown Chinese brand" 8-core phone does about 10 times
as many instructions per second as my ISP's total computing capacity in
1995, and it's nowhere near the peak performance of today's flagship
phones), ignoring the fact that at "web scale", things are not as forgiving
any more: server costs quickly start adding up.

But of course, most of us most of the time work on stuff that needs to handle
little enough traffic that paying an extra $20k for hardware is cheaper than
spending the developer time to optimize this stuff. Ironically this makes the
optimization more expensive: Fewer developers ever get to obtain the
experience to optimize this stuff at scale.

~~~
bane
>My most successful application speedup ever...

>people tend to not even know how many objects they create in many dynamic
languages

One of my biggest speedup success stories was an internal inventory
reporting tool for a large-ish online retailer. The code was written in
Perl, with huge amounts of data aggregation happening in memory, using
loads of Perl hashes pointing to other Perl hashes pointing to others (and
turtles all the way down).
On small subsets of the data, it would run for 4 or 5 hours then spit out some
graphs -- this was acceptable performance for daily reports. But as the site
grew, it started taking 8 hours, then 10 hours and then finally crashing when
the system simply ran out of memory.

So I did a simple conversion: all the hashes pointing to hashes, etc.,
became arrays pointing to arrays. It was pretty simple in the code - only a
few lines here and there, plus a couple of functions to map hash keys to
array indexes.

The next day the entire inventory system did a run (with this one change) and
finished in 10 minutes, and AFAIK they still use that same system even
though the site and user activity have grown a hundredfold.

The developer (self-taught) of the original code asked why this worked. I
walked him through the big-O complexity of arrays vs. hashes, and more
importantly the amount of stuff the hash type was dragging around through
memory. There used to be a fantastic site with visualizations of all the Perl
datatypes and we sat down and started counting bytes until he understood the
difference.
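
The original was Perl, but the idea ports directly. A Python sketch (field
names invented for illustration) that also does the byte counting:

```python
import sys

# Hypothetical fixed report schema: since the keys are known up front,
# each one can be mapped to an array slot, so the key storage is shared
# once instead of dragged around inside every record.
FIELDS = ["sku", "warehouse", "on_hand", "reserved"]
INDEX = {name: i for i, name in enumerate(FIELDS)}

def record_as_dict(sku, warehouse, on_hand, reserved):
    # one hash per record
    return {"sku": sku, "warehouse": warehouse,
            "on_hand": on_hand, "reserved": reserved}

def record_as_list(sku, warehouse, on_hand, reserved):
    # one array per record; field lookup goes through the shared INDEX
    return [sku, warehouse, on_hand, reserved]

d = record_as_dict("A1", 3, 10, 2)
a = record_as_list("A1", 3, 10, 2)

# Same field access either way:
assert a[INDEX["on_hand"]] == d["on_hand"] == 10

# Count the bytes: per-record overhead of the hash vs. the array.
print(sys.getsizeof(d), sys.getsizeof(a))
```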

~~~
dylanrw
It's excellent that you'd do that for him, and he was curious/receptive enough
to ask and step through it all. So many times I've seen people stay ignorant
to save face.

~~~
bane
He's a great guy, really open minded and always learning. He taught me tons
also, and he's moved far beyond where I could ever hope to technically.

~~~
Flow
You sound sane and very competent; what on earth is that guy doing today?

~~~
bane
He works as a department head for a commercial tech contracting company. Not
too sexy, but they get free beer.

------
userbinator
Is this a "load test"? It's certainly handling it quite well! Performance of
the CPU is ~9 Dhrystone MIPS - comparable to a 33MHz 386 (9.9 DMIPS). Serving
static content is really not CPU-intensive; most of the computation would
probably be in TCP/IP processing. I think this, and other examples of old
hardware commonly thought to be too slow for anything yet still doing
something useful, show that much of the software we use today is vastly
less efficient than what's theoretically possible. That is, how efficiently
you make use of limited computing power can be more important than how much
of it there is.

For comparison, here is a static site served by a machine with a _4MHz_ custom
CPU: [http://www.magic-1.org/](http://www.magic-1.org/) (more details at
[http://www.homebrewcpu.com/](http://www.homebrewcpu.com/) )

------
zak_mc_kracken
This thread is going to contain a lot of people complaining that even though
today's hardware is orders of magnitude more powerful than twenty years ago,
our computers seem to be just as slow.

~~~
flomo
I worked in a college lab full of Mac IIcis, and they felt ridiculously slow
even back then. :) I think people's memories get funny and they forget that it
took minutes to launch programs and you could even watch the windows redraw.

~~~
wiredfool
I remember setting up a IIci with a fast scsi drive and system 6. It booted in
about 5 seconds. I don't think anything I've seen has come close to that fast
until the era of SSDs.

Later, when I got my first DSL line, I had a different cast off IIci running
netbsd doing NAT for my little network. That would have been 1997 or so, 640k
symmetric, I think the IIci had 24 megs of memory or so. Old and slow at the
time, but less powerful than this one.

~~~
flomo
IIcis had some sort of onboard video which made them feel sluggish compared to
similar Macs. We retrofitted them with cache cards (?) which helped a little.
But, yep, very slow disks, System 7, and IIRC they had 4MB of memory. (24MB
would have been an unimaginable luxury.)

~~~
wiredfool
Yep, the ci had onboard video; the cx didn't. I think they were the first
color/030 machines with onboard video.

They always seemed fast to me, because I bought a Mac SE just before the
classic/lc/si came out. That was when Apple did a massive sales push on campus
in late August/early Sept, and announced new machines in October.

~~~
flomo
It was an era where you could "feel" things like a few CPU Mhz or a 32KB cache
upgrade. And you had to pay hundreds or thousands for those little upgrades,
so it had to be worth it.

I stuck with a IIfx until Windows 95 came out, and learned that the little
things (e.g. disk and video) to some extent matter more than the big things.

------
rbanffy
The first Unix machine I used had 2 megabytes of RAM and was connected to
two dozen serial terminals. It ran on a 68020. It probably wouldn't be able
to run
Apache, but it ran all our accounting and managed production for a large-ish
corrugated paper factory.

A Mac IIci is ridiculously powerful in comparison.

~~~
vidarh
A 68020 can run Apache just fine. A 68030 is only about 20% faster per cycle
on a lot of stuff than a 68020.

Of course, with two dozen terminals active, things do change a bit...

~~~
whoopdedo
The advantage the 68030 has is the PMMU. I don't think you can use virtual
memory on a 68020.

My first Linux machine was a IIsi, the little-brother to the IIci. But I had
to put NetBSD on it because the Linux keyboard driver was broken.

~~~
spc476
You can use virtual memory on the 68020, but you need a 68851 between the
68020 and memory. The 68851 is an MMU coprocessor, like the 68881/68882 is a
floating-point coprocessor.

~~~
kjs3
Or you could roll your own MMU for the 68020, like Sun and Apollo did.

------
api
I had a Commodore 64 with GEOS as a kid. It always amazes me what you can do
without umpteen gazillioplex levels of abstraction.

~~~
vidarh
Including surfing the web: [http://www.techrepublic.com/blog/classics-rock/surf-the-web-on-your-commodore-64/](http://www.techrepublic.com/blog/classics-rock/surf-the-web-on-your-commodore-64/)

And some people have run web servers on C64's too.

Contiki OS runs on C64 and provides a full TCP/IP stack for the C64 as well as
a simple browser: [http://www.contiki-os.org/](http://www.contiki-os.org/)
(This site will apparently create a customized image:
[http://contiki.cbm8bit.com/](http://contiki.cbm8bit.com/))

~~~
boomlinde
There used to be a quite persistent website hosted from a C64 running Contiki.
The pages were probably cached in RAM, but I still like to imagine that a
beige 1541 disk drive spun up every time someone made a request.

~~~
nkozyra
Or one of those weird cassette systems that I vaguely remember struggling
with.

~~~
vidarh
The thing preventing that is that the C64 "Datasette" couldn't trigger
play/rewind/fast-forward from software, so it would have to show a prompt
to the operator for each file it needed to load... Someone should hack
together an interface - the C64 has usable extra IO pins on the cartridge
port that it should surely be possible to hook them up to.

From a very cursory look at a diagram, it might actually be possible (I
don't know about safe) to hook the input that normally comes from the
play/rewind etc. switch straight to suitable pins on the user port (or you
may fry something, but the C64 is remarkably resilient to crazy people like
me soldering stuff straight onto the PCB without knowing quite what we're
doing, or replacing ICs with "mostly the same but not 100% compatible"
chips just to see what would happen... it's a wonder I never broke my
C64s).

~~~
LeoPanthera
Perhaps you could use a continuous loop of tape and just wait for the page you
want to come around. A bit like how teletext broadcasts work.

~~~
vidarh
Serve up a "Please wait, your page will load in 10 minutes" interstitial...

------
sliverstorm
... and it still loads faster than most of the web today.

~~~
_delirium
A single page load with no AJAX roundtrips, and no client-side scripts running
helps a lot. Was reminded of that when I clicked on a search result to a
mailing list archive today that happened to be hosted at Google Groups (vs the
usual Mailman-style archives), and it took something absurd like ~4-5 seconds
to load.

~~~
sliverstorm
Basically I see it as an example of how we spend performance on complexity and
overall stay at a similar net level of performance.

------
adfm
The Macintosh IIci came out in 1989. Pretty good for a 24-year-old machine! I
pull mine out every once in a while to remind myself that we haven't advanced
that much in a quarter of a century.

------
mixmastamyk
Interesting!

I ran my first webserver on a Mac IIci at work, circa '94. We didn't have a
firewall and were right on the internet. I was soon able to (Apple?)script
up a page with a live QuickCam image, a text box, cgi-bin, and the speech
synth, and let my friends have at it!

I received nothing but innuendo and sheep noises, but it was a blast. This was
a year or so before I started administering the definitive Quake server on the
corporate network, and dare I say... internet. Up to twenty players at a time,
some on dialup.

Ah the good ol' days, hehe. ;)

------
Bud
In 1997 or so, I was running Apache under NetBSD on an SE/30. It ran quite
well and was very stable.

------
rburhum
Too many of you trying to see it. My requests are timing out every single time
:(

~~~
jorgis
They're still working on the Z80-based load balancer so they can bring up
another server.

------
nsxwolf
Static page performance appears very high.

