I remember vividly that there was a cage down the row from us that was populated entirely by eMachines, which were a low end desktop PC that you could buy at Circuit City and Best Buy. We laughed at their cage but the company, 911gifts.com, ended up getting acquired for a nice sum, while our site and company was basically gone a few years later.
i worked at a startup in the late 90's and the price of sun microsystems gear was mind-blowingly high. from memory: ultra 5 workstation with a scsi card and disk array was $15,000+. an ultra e450 server was on the order of $100,000 and went up from there depending on how you wanted to build.
of course, that was exactly the time people started switching to linux on x86 en masse. pentium pros were good and cheap enough to scale out less expensively on a per-unit basis. today you can buy a 72-core xeon server with gigabytes of memory and terabytes of ssds and 4x 10G ethernet for less than 10 grand. amortized over its functional lifetime, it costs less than a cell phone bill.
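The "less than a cell phone bill" claim above checks out with a quick back-of-the-envelope calculation. The $10,000 price is from the comment; the 5-year lifetime is an assumption for illustration:

```python
# Rough amortization of the server cost claimed above.
# $10,000 is from the comment; the 5-year lifetime is an assumption.
server_cost = 10_000          # dollars
lifetime_months = 5 * 12      # assumed functional lifetime

monthly_cost = server_cost / lifetime_months
print(f"${monthly_cost:.2f}/month")  # → $166.67/month
```

Around $167/month is indeed in the range of a (generous) phone bill, and a far cry from a six-figure Sun invoice.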
The big trick is to make everything redundant (redundant power, network bonding, Ceph instead of a NAS) and not have a SPOF. Then it matters a lot less if anything fails, and you can use cheaper hardware if you need to. That said, I still prefer server grade hardware - just not always the newest or the most brand name.
Frankly that's part of what makes working now such fun: the fact these systems just keep getting better and better.
I worked for a company around this same time that had servers in the NJ Exodus data center. Used to have to head up there once a month to swap out tapes in the Sun L11000.
That picture of the ethernet hornet's nest too. Ugh. At least it wasn't AUI cabling.
I liked this era so much I used to skip dive all the kit that was chucked out. Had myself a nice stacked Sun 1000E as a desktop in 1999, until I got the electricity bill. Must have cost as much as a house when it was new.
Then I found HP-UX was horrid. Had a run-in with some HP N-class systems with Oracle. Yeuch, and that turned me to open source.
I do remember some $100k - $300k invoices for larger SMP servers from HP, Sun, and the like. For machines that probably had less overall horsepower than my current cell phone :)
It had about as much go as one of those "big slab" slot-mounted Xeon CPUs of the time, which cost a tenth as much.
It took Linux and commodity servers a while to kill the $100k+/each proprietary unix server market.
I get that Veritas also did logical volume management, RAID, and so on, and that was genuine added value at the time. But so many boxes I saw had Veritas just so they could partition and get a file system that coped with 36GB+ monster SCSI disks.
When you have a $100k+ machine it makes more sense; when you can just rebuild a new instance it makes less sense. I do NOT miss using it, but it was nice for its niche.
They said it was from a search engine we probably had heard of. Our guess was it was Altavista.
We didn't own our Compaq servers - we leased them from Exodus, like a lot of firms. And when the bubble popped, all those startups stopped paying for all that expensive equipment (which was now used and worth much less), and Exodus was on the hook as the owner of it all. Killed them.
Edit: Found this image from someone who picked one up for a song to add to their collection. From a prized million-dollar enterprise-class server, to being hauled around in the back of a pickup truck.
A week later, in Atlanta at QTS metro, they said almost the exact same words.
Maybe it was Inktomi? I'm pretty sure they used Sun hardware.
Yes, they did. Remember, the original URL for AltaVista was altavista.digital.com.
Eeeeeek. I'm not in IT and I got the sinking stomach feeling when looking at this.
Thankfully there is https://www.reddit.com/r/cableporn/ to cure that.
More likely they either ran out of funding or never got the growth they expected. Back then there was often a large disconnect between H/w bought and H/w needed/used. But money was there and hypergrowth was "just around the corner."
Patch panel backs are cabled only once, by electricians. Everything is connected through to another patch panel near the Catalyst, and that's where the mess starts growing.
You could pull trays of drives, one of the RS/6000's, or any of the controllers, and it'd keep on humming along thanks to redundancy pretty much everywhere. If any components went bad it'd call home to IBM via a modem.
And it could do things like automatic mirroring to a remote site.
1.5TB was a common configuration, but you could connect multiple boxes to increase capacity.
The absolute minimum, and representative of the very first dial-up ISPs: http://www.linuxjournal.com/article/2025
There used to be a lot of nice material on this subject, but much of it has become obsolete and rotted away from the web over the last 15-20 years. The larger dial-up ISPs used Cisco AS-series boxes (or equivalent) with PRI (i.e., phone over T1) connections (24 lines each) to a centralized RADIUS server for authentication. They are/were the last holdouts providing dial-up.
Smaller ISPs were more of a '94-to-'99 thing. Usually they used Cyclades (or equivalent) serial port cards with up to 16 serial ports per card and an external modem per port. As scale increased, this morphed into boxes with multiple modems in them, and access servers that did the PPP termination and the authentication (against a RADIUS server). US Robotics was probably the best-reputed player in the modem space.
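To get a feel for the scale described above, here is a rough trunk-sizing sketch. The 24 lines per PRI figure comes from the comments; the 10:1 subscriber-to-line oversubscription ratio is an assumed rule of thumb from the era, not something stated here:

```python
import math

def pris_needed(subscribers, oversubscription=10, lines_per_pri=24):
    """Rough dial-up capacity sizing: how many PRI trunks an ISP
    would need so that peak concurrent callers fit the modem pool.
    The 10:1 oversubscription ratio is an assumed rule of thumb."""
    concurrent_lines = math.ceil(subscribers / oversubscription)
    return math.ceil(concurrent_lines / lines_per_pri)

# e.g. 5000 subscribers -> ~500 concurrent lines -> 21 PRIs
print(pris_needed(5000))
```

At 21 PRIs you can see why the bigger ISPs moved to dense access servers rather than racks of external modems.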
The local telco had problems supplying enough lines from the nearest exchange, so they ended up hanging a thick cable bundle in the trees for several hundred meters, made a hole in one of our windows, and put a large multiplexer cabinet in our office... Then three months later we moved to different offices - they were not pleased.
A typical configuration would terminate the analogue lines over ISDN, which supported up to 30 B channels ('voice calls') over a single cable, running alongside one or two Ethernet cables to a single router with one or more modem option cards installed.
We had Cisco boxes in the last place I worked that handled dialup, looking pretty much like this: http://www.ciscomax.com/datasheets/3600/Cisco_3640.jpg
ISDN is a catch-all term that includes BRI (2 B channels), PRI over T-1 (23 B channels), and PRI over E-1 (30 B channels), so the parent comment is correct.
* ISDN BRI = two 64k B channels and one 16k D channel
* ISDN PRI over T1 = twenty-three 64k B channels and one 64k D channel
* ISDN PRI over E1 = thirty 64k B channels and one 64k D channel (time slot 0 on the E1 was used as a sync channel and wasn't considered a B or D)
A channelized T1/E1 was treated as an analog circuit with 24 or 30 respective 64kbps voice channels using in-band signaling, effectively limiting each channel to 56k. ISDN PRI used the same cabling and channel separation, but instead ran a digital circuit with out-of-band signaling, making the full 64k available for voice and data. ISDN PRI (and BRI) also allowed data to be packet-switched (similar to X.25) rather than circuit-switched.
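The channel counts in the list above translate directly into aggregate bearer capacity. A minimal sketch, using only the figures from the comments (64 kbps per B channel):

```python
# Aggregate bearer capacity for each ISDN flavor listed above.
# Channel counts are taken straight from the comments.
configs = {
    "BRI":         {"b_channels": 2,  "d_kbps": 16},
    "PRI over T1": {"b_channels": 23, "d_kbps": 64},
    "PRI over E1": {"b_channels": 30, "d_kbps": 64},
}

for name, c in configs.items():
    bearer_kbps = c["b_channels"] * 64
    print(f"{name}: {bearer_kbps} kbps bearer + {c['d_kbps']} kbps D channel")
```

So a T1 PRI carries 1472 kbps of bearer traffic and an E1 PRI 1920 kbps, which is why a handful of PRIs into an access server could replace whole racks of individual modems.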
Sorry, no fancy pictures or anything interesting, just reminded me of the time.
Would guess the transition is slow because if it ain't broke, why fix it, and it's often a lot more than just provisioning a new circuit - you have to update process documentation, etc.
Makes me wonder what you'd find wardialing in 2016.
More recently, though, it's all been on Ethernet handoff, either via the DC or a local provider.
This was a few years before everything went digital with PRI lines (T1s that let you do digital modems, basically.)
Or, one could host the entire thing on S3 with CloudFront at a fraction of the cost.
Or, you could rent capacity somewhere with decent bandwidth prices and pay 2%-20% of CloudFront/S3 bandwidth prices...
Or a smallish EC2 instance and the whole of the rest in S3...
I have an Ultra 10 rotting away in my basement. It cost a pretty penny at the time, but I haven't booted it up in almost 15 years.
Does anyone remember where an Exodus colo facility was in SF around that same time (1999-2001)? That was my first visit to the area and I can't seem to locate the neighborhood now that I live here.