
Geocities Cage at the Exodus Datacenter - imaginator
http://www.detritus.org/mike/gc/
======
shams93
I worked for GeoCities as a front end person back then. We got to work on some
cool JavaScript stuff for the page builder app that was really pretty far
ahead of its time. I got into trouble for bringing a laptop with SUSE Linux
on it into the office; back then Linux was considered a major security risk,
which is why you'll notice all the machines are from Sun. They were also very
early adopters of server side Java, but most of the plumbing was composed of
C code. The JavaScript people were kept far away from the server side and we
didn't make it up to Yahoo, but they did snatch up almost all of the Unix
admins.

~~~
dkresge
Wow, Page builder (formerly Geobuilder) -- crazy times. I don't remember the
SUSE incident! But your mention of server side Java reminds me that we also
experimented with server side JavaScript via Netscape Enterprise Server.
Compared to the C CGIs that comprised most of Geo's backend at that point, it
seemed a panacea. It had too many bugs and wasn't ready for prime time,
though, so later we stuck w/ Apache (but moved more to handlers).

------
chrissnell
Great memories. During this era, I was at Citysearch and spent a lot of time
in the Orange County Exodus DC. We had a nice rack of Sun gear, Extreme
Networks switches and Alteon AceDirector load balancers.

I remember vividly that there was a cage down the row from us that was
populated entirely by eMachines, which were low end desktop PCs that you
could buy at Circuit City and Best Buy. We laughed at their cage, but the
company, 911gifts.com, ended up getting acquired for a nice sum, while our
site and company were basically gone a few years later.

~~~
alexhawdon
That's a great story. It'd be interesting to know whether that was all they
could afford, or if they were being very forward-thinking in using COTS
hardware and a failure-tolerant architecture.

~~~
beachstartup
probably a combination of both.

i worked at a startup in the late 90's and the price of sun microsystems gear
was mind-blowingly high. from memory: ultra 5 workstation with a scsi card and
disk array was $15,000+. an ultra e450 server was on the order of $100,000 and
went up from there depending on how you wanted to build.

of course, that was exactly the time people started switching to linux on x86
en masse. pentium pros were good and cheap enough to scale out less
expensively on a per-unit basis. today you can buy a 72-core xeon server with
gigabytes of memory, terabytes of ssds, and 4x 10G ethernet for less than
10 grand. amortized over its functional lifetime, it costs less than a cell
phone bill.
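
as a rough sanity check on that last claim (numbers here are ballpark
assumptions, not quotes):

    # hypothetical ~$10k server amortized over an assumed 5-year lifetime
    server_cost = 10_000             # USD, ballpark figure from above
    lifetime_months = 5 * 12         # assumed functional lifetime
    print(server_cost / lifetime_months)   # ~167 USD/month

which is indeed in the same range as a pricey monthly phone bill.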

~~~
kyledrake
While planning our infrastructure, I got the impression that "hardware dogma"
still happens. The datacenter world is still a place where you can spend $36k
for something you can get for $1000 if you're not careful. I didn't use Cisco
for our switches and some IT guys I talked to acted like I was a heretic. But
the decision saved us several thousand dollars and we ended up with a design
that was IMHO more reliable than stacking proprietary switches.

The big trick is to make everything redundant (redundant power, network
bonding, Ceph instead of a NAS) and not have a SPOF. Then it matters a lot
less if anything fails, and you can use cheaper hardware if you need to. That
said, I still prefer server grade hardware - just not always the newest or
the biggest brand name.
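
For the bonding piece, a minimal sketch of an active-backup bond on a
Debian-style box (interface names and addresses are invented for
illustration):

    # /etc/network/interfaces -- requires the ifenslave package
    auto bond0
    iface bond0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bond-slaves eno1 eno2      # hypothetical NIC names
        bond-mode active-backup    # one live link, one on standby
        bond-miimon 100            # check link state every 100ms

With each NIC cabled to a different switch, either link or switch can fail
without taking the host offline.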

~~~
technofiend
Unfortunately storage dogma still seems to be a thing for EMC shops. Just try
suggesting Ceph over ScaleIO. :-)

------
staked
I love seeing stuff like this. Though I'd be a happy man if I never had to see
the words "Veritas Volume Manager" again. Definitely one of my least favorite
pieces of software from that era.

I worked for a company around this same time that had servers in the NJ Exodus
data center. Used to have to head up there once a month to swap out tapes in
the Sun L11000.

~~~
tyingq
Brought back some memories for me as well. In corporate IT, unix sysadmins had
to know how to manage multiple different hardware architectures and operating
systems that all did things differently: AIX, Solaris, HP-UX, Ultrix, etc.,
all with different filesystems, raid hw/sw, command paths, and so on.

That picture of the ethernet hornet's nest too. Ugh. At least it wasn't AUI
cabling.

~~~
setq
Indeed. AUI was a killer. We had a policy of turning it into 10base2 right
away with an adaptor, then 10baseT when that became a thing.

I liked this era so much I used to skip dive all the kit that was chucked out.
Had myself a nice stacked Sun 1000E as a desktop in 1999, until I got the
electricity bill. Must have cost as much as a house when it was new.

Then I found HP-UX was horrid. Had a run-in with some HP N-class systems with
Oracle. Yeuch, and that turned me to open source.

~~~
tyingq
>>Must have cost as much as a house when it was new.

I do remember some $100k - $300k invoices for larger SMP servers from HP, Sun,
and the like. For machines that probably had less overall horsepower than my
current cell phone :)

~~~
setq
Yeah, pricing was awful. I remember someone paying £12k for a single PA-RISC
CPU option, and when they cracked it open to have a look it was 95% heatsink.
Cue the _"bloody expensive heatsink"_ comments.

It had about as much go as one of those "big slab" xeon slot CPUs of the
time, which cost 1/10th as much.

~~~
SSLy
Well, I've seen appliances at over $50k apiece (and you need at least two to
be useful), so systems like that are still here.

~~~
tyingq
Right, because they aren't currently commodity items. It wouldn't take much,
though, to get haproxy to a state where it starts eating F5's lunch: a nicer
UI, etc.
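
For a sense of scale, the haproxy side of a basic load-balancing setup (the
kind of thing an F5 box does) is already just a few lines; the names and
addresses below are hypothetical:

    frontend www
        bind *:80
        default_backend webfarm

    backend webfarm
        balance roundrobin
        server web1 10.0.0.11:80 check   # 'check' enables health checks
        server web2 10.0.0.12:80 check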

It took Linux and commodity servers a while to kill the $100k+/each
proprietary unix server market.

------
chiph
We were hosted at Exodus in the Herndon Virginia area around this same
timeframe. The staff gave us a small tour and showed us a cage about the size
of a double-wide trailer. Inside were two Sun Enterprise 10000 servers and a
whole wall of drive arrays. Plus networking gear, tape drives, etc. Easily
$7-8 million worth of stuff.

They said it was from a search engine we'd probably heard of. Our guess was
that it was AltaVista.

We didn't own our Compaq servers - we leased them from Exodus, like a lot of
firms. And when the bubble popped, all those startups stopped paying for all
that expensive equipment (which was now _used_ and worth much less), and
Exodus was on the hook as the owner of it all. Killed them.

Edit: Found this image from someone who picked one up for a song to add to
their collection. From a prized million-dollar enterprise-class server, to
being hauled around in the back of a pickup truck.

[http://imgur.com/a/lXvOk](http://imgur.com/a/lXvOk)

~~~
drewg123
Hmmm.. I'm pretty sure AltaVista ran on DEC Alpha hardware, not Sun. And
Google used self-built high-density racks of Intel hardware.

Maybe it was Inktomi? I'm pretty sure they used Sun hardware.

~~~
rsync
"Hmmm.. I'm pretty sure AltaVista ran on DEC Alpha hardware, not Sun."

Yes, they did. Remember - the original URL for altavista was:
altavista.digital.com.

------
joezydeco
_" As you can imagine (and see from these pictures), this equates to a whole
bunch of ethernet cables. Cable management gets increasingly difficult to
grasp each time a new box is added to the mix_"

Eeeeeek. I'm not in IT and _I_ got the sinking stomach feeling when looking at
this.

Thankfully there is
[https://www.reddit.com/r/cableporn/](https://www.reddit.com/r/cableporn/) to
cure that.

~~~
tyingq
That mess wasn't common in any environment I worked in during the same time
period. Pretty sure it would have gotten someone sacked in the various
datacenters I worked in.

~~~
acveilleux
It was common in many customer cages. I even saw a cage set up in 2000 that
contained $500k of Sun and Cisco hardware still in its boxes after 8 months.
I guess that startup was growing so fast they hadn't gotten around to
plugging things in yet /s.

More likely they either ran out of funding or never got the growth they
expected. Back then there was often a large disconnect between H/w bought and
H/w needed/used. But money was there and hypergrowth was "just around the
corner."

~~~
flomo
My conspiracy theory was that VC funding required companies to spend X% on
Sun/Cisco equipment, whether they needed it or not. In any case, expensive,
unused Sun servers were lying around everywhere during that period, and you'd
hear things like people bragging about their spare E10K.

------
acveilleux
So many memories from that era. I saw my first 1TB storage array back then;
it was an EMC CLARiiON fibre channel raid box that cost nearly $1M and was a
full rack of 9GB dual-ported 15k RPM disks.

~~~
roganp
And that was EMC's "low-end" array. The Symetrix (Symetrics?) I remember being
much more expensive for the same throughput.

~~~
acveilleux
Symmetrix, two Ms. They were more of a mainframe thing I thought. I never
really got to play much with the really big gear, Starfires, Superdome and so
on. By the time I got to use a 64-way SMP box (Origin 3800) it was already
obsolescent and replaced a few years later with a pair of dual-quad core
Xeons.

------
jsjohnst
I had forgotten that GeoCities started migrating to NetApp filers after the
Yahoo! acquisition. Given how many thousands (tens of thousands?) of filers
they bought from NetApp, I feel Yahoo! really should've bought the company
when they had the chance.

------
Overtonwindow
Does anyone know of an article like this about the dial in modem systems and
infrastructure for some of the early dialup services?

~~~
acveilleux
A few decent things I could find online that jibe with what I remember:

[https://www.patton.com/technotes/build_yourself_an_isp.pdf](https://www.patton.com/technotes/build_yourself_an_isp.pdf)

[http://www.gwi.net/behind-the-scenes-of-a-90s-internet-start-up-2/](http://www.gwi.net/behind-the-scenes-of-a-90s-internet-start-up-2/)

[https://news.ycombinator.com/item?id=8352432](https://news.ycombinator.com/item?id=8352432)

The absolute minimum, and representative of the very first dial-up ISPs:
[http://www.linuxjournal.com/article/2025](http://www.linuxjournal.com/article/2025)

[http://www.datamation.com/erp/article.php/615281/My-own-private-ISP.htm](http://www.datamation.com/erp/article.php/615281/My-own-private-ISP.htm)

There used to be a lot of nice material on this subject, but much of it has
become obsolete and rotted away from the web over the last 15-20 years. The
larger dial-up ISPs used Cisco AS series boxes (or equivalent) with PRI
(i.e., phone over T1) connections (24 lines each) to a centralized RADIUS
server for authentication. They are/were the last holdouts providing dial-up.

Smaller ISPs were more of a '94 to '99 thing. Usually they used Cyclades or
equivalent serial port cards with up to 16 serial ports per card and an
external modem per port. As scale increased, this morphed into boxes with
multiple modems in them and access servers that did the PPP termination and
the authentication (against a RADIUS server). US Robotics was probably the
best-reputed player in the modem space.
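
For flavor, a dial-up user entry in the classic Livingston-style RADIUS users
file looked roughly like this (username, password, and attribute details are
illustrative; the exact syntax varied by server):

    alice   Password = "secret"
            Service-Type = Framed-User,
            Framed-Protocol = PPP,
            Framed-IP-Address = 255.255.255.254

The access server terminated the PPP session and passed the credentials to
the RADIUS server for an accept/reject plus those session attributes.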

~~~
halbritt
Lots of people used the US Robotics Total Control boxes in the early 90s.

[http://www.kmj.com/tcont.html](http://www.kmj.com/tcont.html)

~~~
acveilleux
That's what I had in mind when I said "boxes with multiple modems in them".

------
RamenJunkie_
And then years later you could download all of Geocities as a few hundred GB
Torrent.

~~~
biofox
Which makes me wonder: what would it take to match the capabilities of this
setup with modern hardware?

~~~
freeone3000
Could be done with a moderate-spec single server. Replace the JBOD enclosures
with eight on-server Samsung EVOs in hardware RAID6, and gain a bit in speed
because we don't have drives that slow anymore. A Xeon E5 v4 could replace
the computational power. Network multihoming would be done the same way,
except over 10GBASE-T instead of multiple cables. Run nginx on it and you're
good.
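
A minimal nginx server block for a static-site archive like this might look
as follows (hostname and paths are invented for illustration):

    server {
        listen 80;
        server_name geocities.example;
        root /srv/geocities;             # archived static tree
        index index.html;
        location / {
            try_files $uri $uri/ =404;   # plain static file serving
        }
    }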

Or, one could host the entire thing on S3 with CloudFront at a fraction of the
cost.

~~~
vidarh
> Or, one could host the entire thing on S3 with CloudFront at a fraction of
> the cost.

Or, you could rent capacity somewhere with decent bandwidth prices and pay
2%-20% of CloudFront/S3 bandwidth prices...
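
Rough ratio check with illustrative per-GB prices (both numbers are
assumptions, not quotes):

    cloudfront_per_gb = 0.085   # USD/GB, illustrative CDN egress price
    rented_per_gb = 0.005       # USD/GB, illustrative rented-capacity price
    print(f"{rented_per_gb / cloudfront_per_gb:.0%}")  # ~6%, within 2%-20%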

------
sulam
And right there is why Sun was such a hit in the 90s and got crushed in the
bust.

~~~
acveilleux
Everyone in SV in 1999 used Sun boxes. Racking Ultra-2 Pizza Boxes was being
very Internet Professional and money was flowing freely enough to pay for
them.

~~~
jrnichols
Yep. I was at Netscape in Mountain View around that time, and the data center
was mostly full of SGI stuff and Sun stuff. The new datacenter on Ellis opened
up, the AOL thing happened, and even _more_ Sun hardware started showing up.
E4500s, E450s, you name it. I had an Ultra 5 sitting in my cube just because I
could. And then the Intel stuff started showing up in droves. Compaq servers
running Linux. And I don't remember seeing a single Sun box show up in that
data center after that...

~~~
icedchai
Sun hardware was everywhere in the late 90's / early 2000's. I remember
working at a couple of startups that had E3500s (mini-fridge size machines?)
and a few E450s. We had Ultra 5's and 10's on the developer desktops.

I have an Ultra 10 rotting away in my basement. It cost a pretty penny at the
time, but I haven't booted it up in almost 15 years.

------
stpe
Not sure which is the better find: the data center article or the author's
boy-band-esque photo:
[http://www.detritus.org/mike/pics/inpark.jpg](http://www.detritus.org/mike/pics/inpark.jpg)

------
tenaciousJk
Good memories.

Does anyone remember where an Exodus colo facility was in SF around that same
time (1999-2001)? That was my first visit to the area and I can't seem to
locate the neighborhood now that I live here.

