
Giving away the company's secret sauce - gmays
https://rachelbythebay.com/w/2018/04/22/sauce/
======
bvinc
I used to work at RackShack during these times, and when it was renamed to
Ev1Servers, which this article is about. I can answer any questions and clear
up any misconceptions.

The company started out pretty much exclusively with Cobalt Raqs. They
eventually bought their own real datacenter, physically moved all of the
servers there, and had a lot more room. I remember when they got the machines
the article is talking about. They sold them as AMD Athlon "white boxes" I
believe. I was a little surprised when I noticed they were literally super
cheap desktop machines, similar to "emachines" back then. They even had CDROM
drives! But it didn't really faze me. Why the hell not? He obviously got a
good deal on them. We just made racks, 5 on each row, 4 rows. They ran Linux
just fine and had 100mbit network cards in them.

And yeah, no firewalls. That's your job on your server. Most people ran
servers on these machines, so why would they want a hardware firewall? The
networking was simple: just 5 daisy-chained switches, one mounted on each
rack. 100 servers would share a block of 512 IP addresses. If your subnet ran
out of IPs, we would literally move your server to a different subnet.
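
For reference, a block of 512 addresses is a /23 in CIDR terms. A quick
sketch in Python (the 10.0.0.0 range here is made up, not their actual
addressing):

    # A block of 512 IPs is a /23; the address range is hypothetical.
    import ipaddress

    block = ipaddress.ip_network("10.0.0.0/23")
    print(block.num_addresses)       # 512
    hosts = list(block.hosts())      # usable addresses (510 of them)
    servers = hosts[:100]            # the ~100 servers sharing the block
    print(servers[0], servers[-1])   # 10.0.0.1 10.0.0.100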

They also saved money because most support techs were 18-20 and made just
above minimum wage. But it was a great opportunity and I learned a lot. And we
really did have a lot of really smart people working there.

Someone else in here said that they would require your root password on
support forums. No one was stupid enough to request that, and customers were
generally not stupid enough to post it, but I'm sure it probably happened once
or twice. It was a similar situation in the official IRC channel. Some
customers didn't grasp the concept and would accidentally paste their root
passwords in there.

The story about their fiber connection being down -- I remember that, I
think, but it was before my time. I believe they only had 3 fiber connections,
and
one of them was severed. They decided that it was because of a dump truck
picking up a garbage can. They were pretty open about it and posted pictures
on their forums. They kept acquiring fiber connections. I believe they
eventually had 10 different 1-gigabit connections at a single DC.

~~~
wink
Would love to know when that was (more precisely than "late 90s"), because at
my first workplace we also paid for Cobalt Raqs (and I'm not sure if "random
desktop PCs in a DC" were available), but since that was the summer of 2001,
iirc, it's maybe a bit too late to draw parallels (3 years back then could
mean a lot more change in hosting than today).

~~~
mbrumlow
RackShack had a few racks of Cobalts too. They were fragile, and somewhat of a
mystery to work with. Once they started offering white boxes on bread racks,
things actually looked up, and support could actually help people because it
was the same standard tools they already knew.

Later on they made big deals with Dell, and even then we were racking Dell
desktops and office servers on bread racks for a long time. I remember when we
got our first few lines of real rack mount servers -- they were sooo sexy.

------
imglorp
Rachel's story aside, I think the idea of commodity PCs for mass storage only
became feasible after the software to handle them as a system arose. The
popular term was RAIN: Redundant Array of Inexpensive (Independent) Nodes --
when one box dies, the software spins up another and assigns some load to it.
The advance was that 5 or 10 sketchy PCs could be had for the price of 1 more
reliable rackmount box, and you no longer had to care much about any single
box's reliability.
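
As a minimal sketch of the RAIN idea (hypothetical names, not any real
system's API): a supervisor health-checks nodes, and when one dies, a spare
is promoted and handed the dead node's load.

    # Toy RAIN-style failover; assumes enough spares are on hand.
    def rebalance(nodes, spares, assignments, is_alive):
        """Replace dead nodes with spares and move their load over."""
        for node in list(nodes):
            if not is_alive(node):          # health check failed
                nodes.remove(node)
                replacement = spares.pop()  # "spin up another" box
                nodes.append(replacement)
                for shard, owner in assignments.items():
                    if owner == node:       # reassign the dead node's load
                        assignments[shard] = replacement
        return nodes, assignments

    nodes, assign = rebalance(["pc1", "pc2"], ["spare1"],
                              {"shard0": "pc1"},
                              is_alive=lambda n: n != "pc1")
    print(nodes, assign)  # ['pc2', 'spare1'] {'shard0': 'spare1'}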

This all got some press after the Bigtable paper came out, OSDI 2006, after
they'd had it in production for a few years and weren't afraid of anyone
catching up.

2006:
[https://research.google.com/archive/bigtable.html](https://research.google.com/archive/bigtable.html)

2004: [https://www.networkworld.com/article/2330258/tech-
primers/ra...](https://www.networkworld.com/article/2330258/tech-primers/rain-
architecture-scales-storage.html)

2004:
[https://patents.google.com/patent/US6704730B2/en](https://patents.google.com/patent/US6704730B2/en)
(search for RAIN)

~~~
chubot
This is a tangent, because hosting providers didn't really use distributed
computing back in the '90s. They didn't really have to solve any coordination
problems, because every customer is separate by definition. As far as I
understand, their conception of distributed computing was limited to sysadmins
maintaining NFS and so forth.

But going off on this tangent: I don't think BigTable was a major milestone in
terms of fault-tolerant clusters of commodity PCs.

For example here is a write-up paper from 2003 that describes Google's search
clusters:

[http://www.eecs.harvard.edu/~dbrooks/cs246-fall2004/google.p...](http://www.eecs.harvard.edu/~dbrooks/cs246-fall2004/google.pdf)

 _Google's racks consist of 40 to 80 x86-based servers mounted on either side
of a custom made rack (each side of the rack contains twenty 2u or forty 1u
servers). Our focus on price/performance favors servers that resemble mid-
range desktop PCs in terms of their components_

This architecture existed for a few years before BigTable did.

Also, Google didn't invent this architecture. I did some research on this
several years ago -- it appears that Eric Brewer's work at Inktomi in the '90s
had an architecture largely similar to Google's. I also think there was a name
for it other than "RAIN", but I can't recall it now.

This isn't the paper I'm thinking of, but it's from 1997 and has a lot of the
ideas from Google (which became a company in 1998):

 _Cluster-Based Scalable Network Services_

[https://people.eecs.berkeley.edu/~brewer/papers/TACC-
sosp.pd...](https://people.eecs.berkeley.edu/~brewer/papers/TACC-sosp.pdf)

 _We observe that clusters of workstations have some fundamental properties
that can be exploited to meet these requirements: using commodity PCs as the
unit of scaling allows the service to ride the leading edge of the
cost/performance curve, the inherent redundancy of clusters can be used to
mask transient failures, and "embarrassingly parallel" network service
workloads map well onto networks of workstations._

----

Ah OK I found it, the term is "Networks of Workstations". From 1994:

 _A Case for NOW (Networks of Workstations)_

[https://scholar.google.com/scholar?cluster=10897873705675337...](https://scholar.google.com/scholar?cluster=1089787370567533783&hl=en&as_sdt=0,5&sciodt=0,5)

I believe this work influenced first Inktomi and then Google... since the last
author is Patterson, that's pretty easy to believe.

There are some stories about the Google prototype at Stanford (backrub) using
every workstation they could cobble together.

 _In this paper, we argue that because of recent technology advances, networks
of workstations (NOWs) are poised to become the primary computing
infrastructure for science and engineering, from low end interactive computing
to demanding sequential and parallel applications. We identify three
opportunities for NOWs that will benefit end users: dramatically improving
virtual memory and file system performance by using the aggregate DRAM of a
NOW as a giant cache for disk; achieving cheap, highly available, and scalable
file storage by using redundant arrays of workstation disks, using the LAN as
the I/O backplane; and finally, multiple CPUs for parallel computing._

~~~
peterwwillis
Hosting providers still don't really use distributed computing. Cloud hosting
providers do, but mainly because they are great at selling the idea that the
customer should help the hosting provider save money by accepting increased
complexity.

What you're describing -- clusters based on commodity PCs -- has existed in
some form since shortly after commodity PCs were introduced. Clusters in
general have existed since 1977. The thing is that most computers were big and
expensive, so there was practically no hardware available to make clusters on
the cheap until the '90s.

The NOW work you reference was the progenitor of Beowulf clusters, developed
at NASA in 1994.
[[https://www.researchgate.net/publication/24321665_Clustered_...](https://www.researchgate.net/publication/24321665_Clustered_Workstations_and_their_Potential_Role_as_High_Speed_Compute_Processors)]
But vendors were making similar clusters at the same time, because they all
had to solve the same problem: handle workloads too big for a single machine.
The _Tandem Himalaya_ and _IBM S/390 Parallel Sysplex_ were also both
introduced in 1994, but those were big commercial products, not commodity
clusters. Many other commercial cluster solutions had been produced in the
previous 10 years. So it's not exactly some mind-blowing paradigm shift, and
more an increased focus on reducing the cost of computing, which has always
been the elephant in the room.

~~~
chubot
Well, I'd argue that Inktomi and Google are qualitatively different from
earlier clusters for 2 of the 3 reasons mentioned in the NOW paper:

- _dramatically improving virtual memory and file system performance by using
the aggregate DRAM of a NOW as a giant cache for disk;_

- _achieving cheap, highly available, and scalable file storage by using
redundant arrays of workstation disks, using the LAN as the I/O backplane;
and_

Also:

- Compute clusters with shared memory are completely different from NOW
clusters.

- Compute clusters existed before, but most were not fault tolerant, as far as
I understand. Commodity disks have a (much?) higher failure rate than
commodity CPUs.

As far as I know, NASA was mainly using compute clusters, and "Beowulf"
clusters were mostly about compute. That is, doing a lot of computation on a
relatively small amount of data. In contrast, Google and Inktomi were about
doing a small amount of computation on huge data, and also about operating
live services.

Though, I'm interested in any specific examples of earlier systems that
unified separate memories and disks from commodity PCs. For example, BigTable
definitely unified distributed memory and disk into a single resource, but it
came much later.

Also, to be truly called "commodity", the memory/disk should be unified over
Ethernet, e.g. using a Unix socket interface and not something custom.

~~~
peterwwillis
The ideas of unifying independent memory, disk, and CPU stretch back to at
least 1988 with LOCUS, which eventually became OpenSSI. Single system image
clusters were designed to manage tasks running on multiple computers as if
they were running inside a single system. There were several SSI cluster
solutions as of 1994.

PVM, beginning in 1989, was a popular choice for parallel processing and
distributed computing. It extended a virtual machine concept to allow
processes to interact between disparate, loosely networked machines, even with
different platforms and architectures.

With increased parallel processing came a need for more distributed file
systems (around in one form or another since the 1960s). They allowed
processes on different nodes to interoperate without caring where the data was
actually stored, and also more efficiently than if it were stored on a single
node or array.

PVFS, a parallel filesystem designed by NASA in 1993 to work with PVM,
distributed storage and access of an object's parts over multiple nodes. This
would later be ported to Linux for use in Beowulf clusters. GPFS, an IBM
filesystem also developed in 1993, worked similarly. Google File System was
eventually designed by the same researchers who created Coda in 1987, an
object data store turned into a distributed [parallel] filesystem.

So distributed filesystems evolved in parallel, developed by different groups
that based their work on earlier research projects such as AFS2 and Vesta.
Google and Inktomi definitely had success with their implementations, but
basically everyone was starting down that path as commodity computers became
more available and parallel processing became more important.

~~~
chubot
Hm, I've heard of PVM but don't know much about it. I don't doubt that there
were projects to unify memory/disk/CPU across distinct machines before
Inktomi/Google.

I don't think they used Ethernet and Unix sockets, but I don't know for sure.
It's debatable how important that distinction is. Although, to me, commodity
clusters also means commodity networking, not just commodity PCs, because
otherwise the clusters wouldn't have been as cost effective overall.

Probably the more important distinction is how many nodes are working
together, which of course correlates with how many faults there are, and the
sophistication of the software that handles the faults.

I imagine earlier systems were in the 10-100 node range. Inktomi and early
Google were more in the 1000 node range as far as I understand, and Google's
clusters are up to at least 10K nodes and 100K CPUs/disks (probably more by
now).
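
To make the scale concrete, a toy calculation (the 3% annual disk failure
rate is an assumed, illustrative number):

    # Expected disk failures scale linearly with fleet size.
    afr = 0.03  # assumed annual failure rate per disk
    for disks in (100, 1_000, 100_000):
        per_year = disks * afr
        print(f"{disks:>7} disks: ~{per_year:.0f} failures/yr, "
              f"one every {365 / per_year:.1f} days")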

At every order of magnitude there are different engineering challenges. There
are always precursors, and the NOW paper cites a couple in section 2.3
(MiLi91, Zho92). A 10 or 100 node cluster is definitely a precursor, but you
could still call it qualitatively different than 1K to 10K nodes.

Also one funny thing is that Google/Facebook no longer use "commodity"
clusters, if you take that word literally. You can no longer buy what they use
in small quantities! e.g. a design from
[http://www.opencompute.org/](http://www.opencompute.org/)

They're back to the model of highly engineered, relatively homogeneous,
relatively reliable hardware. Not a bunch of PCs cobbled together with clever
software!

~~~
peterwwillis
Unix sockets? You mean Unix domain sockets? Those only work on a single host.
Perhaps you meant Berkeley sockets, the API for both Internet and Unix
sockets. This became a POSIX standard, and many systems emulate it, even
Windows. PVM used them, and built abstractions and new methods on top of the
socket APIs to make building distributed parallel-processing applications
easier.
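
For illustration, Python's socket module wraps that same Berkeley API; only
the address family differs between an Internet socket and a Unix domain
socket:

    # Same Berkeley sockets API, two address families.
    import socket

    # Internet socket: works across hosts.
    inet = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    inet.connect(("example.com", 80))
    inet.close()

    # Unix domain socket: bound to a filesystem path, single host only.
    unix = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    # unix.connect("/tmp/app.sock")  # hypothetical local path
    unix.close()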

And Ethernet was, for a time, much more expensive and slower than other
solutions, not to mention buggy. It is still slow in some cases (top
supercomputers use InfiniBand). Adopting Ethernet was just something that
happened due to a confluence of factors (I believe mostly IBM supporting it,
and then everyone else, and it becoming cheaper, faster and more reliable over
time). So this was just an industry thing, not really a distributed processing
thing.

The number of nodes is less important than the number of systems -- that is, if
LLNL wants to do grid computing with CalTech, you're talking about parallel
processing over two distinct systems. Even at Google, they may have multiple
teams, each with a large distributed system on a different network, and they
all have to interoperate. For this, you have to use PVM/MPI, or something
similar. This was a significant step in distributed systems engineering.
Google didn't _need_ to interoperate with _random_ systems, though, so they
didn't _need_ to use PVM/MPI, so their systems were simpler.

I think the commodity computing helped them do massive processing cheaper, but
at a point there's a lot of waste there, so they engineered out the waste in
their new custom systems... which happen to be more expensive and complicated.
Everything old is new again.

------
meirelles
I think she's talking about ev1servers.net. I used to be their client. Yes,
they were crappy white boxes, but they worked okay with the proper setup.

Look what happened with EV1Servers:

EV1servers.net > bought by ThePlanet.com > bought by SoftLayer.com > bought by
IBM.

SoftLayer was formed by former ThePlanet employees.

I was a client of SoftLayer when they started, too. They took the budget
server market to the next level: instead of crappy white boxes, they used
great Supermicro hardware. The secret was that everything was automated, very
well executed by skilled and experienced people.

Today even Joe's DC is automated, and most of them use branded hardware.

~~~
majewsky
> he

Pretty sure that someone called "Rachel" is female, although I cannot find a
full name or imprint on the website.

~~~
pdpi
She's Rachel Kroll, a very very senior production engineer at Facebook.

~~~
bigiain
From her blog recently, I'm 99% sure that's "ex Facebook very very senior
production engineer" now?

~~~
pdpi
Yeah, looks like it. Seems like I was on holiday when she left, and I then
left shortly thereafter, so I missed it.

------
petercooper
_Apparently, once this made the rounds, other people realized that customers
were willing to pay for something so rough and dirty, and so why not create
such a service themselves?_

This idea of getting "permission" to do things after seeing someone else do it
crops up in my thoughts a lot. A lot of new fads in technology aren't
necessarily _that_ novel in any technical sense but are simply people
realizing it's OK to cut a corner, "misuse" something, or otherwise take an
unorthodox approach.

~~~
Regardsyjc
I think this idea might be mimetic theory, possibly one of Peter Thiel's
secrets, that people are mimetic and simply copy one another thus leading to
competition. Another related idea might be blue ocean strategy which is
basically similar to Peter Thiel's zero to one, on innovation.

------
tluyben2
Sorry for the low-res pic, but we ran millions of sites on these things [1].
Much cheaper than any hosting we could find.

It was a nice time, before the real race to the bottom. We hardly had downtime
compared to the massive commercial hosters, because we lived in the server
cages in those days.

I believe much of my tinnitus comes from spending too much time there.

[1] [https://picturepush.com/+154WU](https://picturepush.com/+154WU)

~~~
iagovar
What is that? Servers? I may be blind but I don't what those are.

~~~
tonyarkles
My guess, having done some things like that at the time, is a motherboard,
hard drive, and power supply strapped to a steel tray with no enclosure at
all. The sides look like they may be plywood or 2x4s. Noisy and fragile, but
easy to work on and dirt cheap to put together.

~~~
tluyben2
Yep! Exactly that!

------
inopinatus
I once worked with a hosting provider whose battery backup was that every
server was actually a second-hand ThinkPad.

~~~
hxtk
I can't decide if that's horrifying or brilliant. Leaning towards both.

------
mbrumlow
> I may never know if this picture and/or article ever happened, but it does
> raise an interesting point. If the only thing keeping you the only player in
> a certain market is ignorance of how you built your environment, just how
> long do you think that will last?

As a note, it lasted long enough for the guys to make a very, very healthy
exit, and for the data centers to eventually become part of the backbone
(along with a bunch of other, mostly non-profitable data centers) of
SoftLayer, now owned by IBM.

Edit:

Also here are some pictures of the place.

[http://www.datacenterpics.com/displayimage.php?album=3&pid=6...](http://www.datacenterpics.com/displayimage.php?album=3&pid=614#top_display_media)

Picture in question:
[http://www.cloudcomputingexpo.com/node/102155/print](http://www.cloudcomputingexpo.com/node/102155/print)
[https://www.bizjournals.com/houston/stories/2006/05/08/story...](https://www.bizjournals.com/houston/stories/2006/05/08/story4.html)

~~~
mbrumlow
I should also note that this strategy allowed the company to host about 1% of
the internet at the time.

------
walrus01
ISP here: bread racks and minitower PCs are still a common thing in low end
hosting. In many cheap facilities, the limitation is the total amount of
electricity and cooling available within a low-cost facility's budget, so
there would be no way to densely populate 44U cabinets with 1U servers anyhow.
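
A toy calculation, with both figures invented for illustration:

    # Why density doesn't help when the facility caps your power:
    budget_watts_per_cabinet = 2000  # assumed low-cost power allotment
    watts_per_1u_server = 150        # assumed per-server draw
    usable = budget_watts_per_cabinet // watts_per_1u_server
    print(f"{usable} of 44 slots usable")  # ~13 servers; rest sit empty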

------
rdl
This kind of "ghetto colo" is prevalent in the cryptocurrency mining space
now (where power consumption is probably 10x higher), either using hardware
ASIC miners or PCs with many GPUs. Often the hardware has an in-service
lifetime of months, vs. ~10 years for some enterprise or networking equipment,
and outages cost a lot less (a 75% reliable datacenter would be a joke; a 75%
uptime crypto mine paying $0.01/kWh might be fine, "turn it off during peak
load periods", or otherwise be pre-emptable by the grid).
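
Back-of-the-envelope, with invented numbers (the revenue rate especially is
just an assumption):

    # Toy mining economics: cheap power makes 75% uptime tolerable.
    power_kw = 1000          # a hypothetical 1 MW mine
    price_per_kwh = 0.01     # the $0.01/kWh case above
    revenue_per_hour = 50.0  # assumed gross revenue at full utilization
    for uptime in (1.00, 0.75):
        hours = 24 * uptime
        profit = revenue_per_hour * hours - power_kw * hours * price_per_kwh
        print(f"{uptime:.0%} uptime: ${profit:,.0f}/day")
    # -> 100%: $960/day; 75%: $720/day -- still solidly profitable.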

------
imeron
I guess it depends; I don't see too much competition for Backblaze. Yet they
show you everything they've got and use it as marketing. Link:
[https://www.backblaze.com/b2/storage-
pod.html](https://www.backblaze.com/b2/storage-pod.html)

~~~
baud147258
Because perhaps their "secret sauce" is not in the hardware?

------
AlexB138
This story is possibly about Rackspace. Rachel was a Racker at one point, as
was I, and the origin story matches up.

~~~
baking
Robert Marsh, Head Surfer

[http://web.archive.org/web/20030802132245im_/http://rackshac...](http://web.archive.org/web/20030802132245im_/http://rackshack.net:80/english/images/hs.gif)

~~~
walrus01
Bread racks and ordinary ATX motherboards in mini/midtower cases are a much,
much older idea than 2003... like mid-1999 to 2000 or so.

------
N8works
During those days my company was in a data center in Los Angeles next to "Web
hosting for life", a company whose business model was based on a single
upfront payment that got you hosting for "life" (whatever that means). Their
secret sauce was cages full of 19" telco racks with shelves filled with
motherboards, power supplies, and hard drives. No cases. Some of the hard
drives were zip tied to the shelves. We'd tour potential customers through the
data center and say "this is what shared hosting gets you -- what if your site
was being served from that "server" with its hard drive hanging by a broken
zip tie?" It was a great sales technique.

------
PeterStuer
Related story, including a previous HN link:
[https://web.archive.org/web/20140121020351/http://www.search...](https://web.archive.org/web/20140121020351/http://www.searchenabler.com/blog/build-
your-own-data-center/)

------
chillingeffect
A great story... it leads me to ponder the architecture and extent of
"ignorance," as used in her writing. It encompasses so much... from security
through digital keys, to unknown quantum structures, to unoptimized
algorithms, and especially the obscurity of human biology and subconscious
desires.

------
mbrumlow
My post was too long so I am breaking it down.

Part 1

So...

This is where I got my start in tech. I worked there and supported and racked
many of said bread rack servers. I guess I could be wrong and they could be
speaking of another company, but there was only one company I ever knew to do
this, and it was RackShack, before it became EV1Servers, and then later
ThePlanet and now SoftLayer.

There were a few others in Houston at the time trying to get this working
too, namely redhost (I think -- or was it alphared, or something like that?),
which I had the chance to visit once. This place was much worse: they cooled
the aisles the customers would walk down, while the servers got no AC. They
also shimmied servers into parking lot storage rooms. But that is not likely
the place the article was about.

While what's described in the article sounds horrible, it was not all that
bad. But then again, this was also the early 2000s. While each rack did not
have a dedicated firewall, they did provide some niceties globally to all
servers, such as basic DoS attack prevention and some interesting, but easily
avoidable, hosting panel exploit prevention.

It really was the wild wild west of hosting. You had the tech support role,
called "webtech", which was normally off site and basically a job for anybody
who could spell Linux. They did in-house training to cover the most common
Linux-related issues. When I was there, there was a guy named Bill Badoux who
ran training and had a fairly comprehensive training class for all incoming
techs, but none of that really mattered; most of the real work always filtered
down to a handful of knowledgeable people. What was important was that
somebody answered the phone and was there to listen to the customer -- mostly
complaining. On the other end of things we had DC techs. They were mostly
about rebooting servers, as none of the whitebox servers on bread racks
supported any sort of remote reboot. At the time you would phone in your
reboot, or submit a ticket in this POS in-house ticketing software called
"pipeline" -- a total shitstorm of a system that was more likely to get you
fired for doing something different than you thought it would do than to
actually help you do anything. But us techs made the best of it and often
resorted to IRC to actually get the real work done. The NOC was a separate
entity -- I think their paychecks even came from a different org. They mostly
sat around and watched graphs -- and again, would call somebody who knew what
to do when "the graphs looked bad". I knew one guy nicknamed Brain at the time
who was one of the knowledgeable guys. He ended up writing some software to
display the graphs in real time. This really helped the techs spot things much
faster.

I was there about 4 years. Man, there were stories.

One time I saw a DC tech drop a handful of hard drives on the way to a server
to replace drives that had just failed. He picked the dropped hard drive up
off the floor, dusted it off, and proceeded to replace a failed drive with the
drive he had just dropped. By chance the server booted up and entered an fsck
-- we might be okay! -- but alas, the tech just powered the server off and
powered it back on, and it never saw the light of day again. Back then, and
probably still, interrupting an fsck was most definitely doom for the FS.

Slightly NSFW, but a true story. RackShack hosted 2 things: porn, and Jesus
sites. Neither were all that nice to work with. The Jesus guys would always
find out, or suspect, that their server, hard drive, or IP was used to host
porn in a previous life, and demand we change one or all of the things they
suspected had porn on it. More often than not, this request resulted in your
server being shut down for a few hours and then turned back on, with a note
updating the ticket from the DC stating that the hard drive or whatever had
previously had contact with porn was swapped out. In reality the server just
sat powered off for a few hours. Requests for IP changes did happen and were
actually carried out when possible, but again, these were the early days, so
that often required the customer to migrate their site to a new server in the
data center. But back to the porn. One time I got a call from a guy wanting
to know if his site was up. He asked me to navigate to an obvious porn site
-- sure enough, it was on our network, with a less-than-1ms ping time. He
proceeded to give me login information and navigated me to a specific video,
and asked me to click on it. It played instantly. After a few seconds I
reported back to the customer and he asked a few questions about the speed of
the site. Then he proceeded to tell me the video was of him and his wife....
Now, at that point I kept good notes in the POS ticketing system named
pipeline. Those notes included the login credentials to the site, for future
testing -- I was a noob for sure. The next day I walked in to work and it was
clear everybody had found the login credentials to the site, and that was the
first time I caused a policy change: do not record such things, as the techs
will abuse them for their personal pleasure.

~~~
mbrumlow
Part 2

On another occasion I had to calm a customer down who was clearly about to
have an aneurysm on the phone -- over a faster reboot request. At the time I
was working in webtech. I had no control over reboots and could only forward
the request to the DC techs to take care of when they could. At the time
there was something like a 1-hour wait on reboot requests. This guy demanded
to speak to my supervisor and was yelling and screaming. My supervisor was
already listening in on the phone (shout out to Steve!) and had told me in
IRC to just get him off the phone; there was nothing we could do, and I
should stick to the normal script of "Your server reboot request has been
made and will be processed in the order it was received". And of course this
was not acceptable. After a few more screams of rage from the guy, and me
asking him to calm down, I told him I could not continue the call and that he
should call back once he had a chance to cool off -- and then hung up the
phone.

Once I was working in what was known as DC2, the newest addition at the time
to the data centers, located in an area of town we lovingly called "guns
point" because of the prolific amount of gun violence that occurred at "guns
point mall". Side note: I was actually at the mall once when gun shots went
off. Side note two: I shook the hand of a man who later washed my car, set a
building on fire, and then proceeded to go to guns point mall to work out,
attempt to rape a woman, and then shoot the lady and himself. But alas, we
are talking about DC2. Somebody had taken what I can only describe as a "baby
sized" dump. By baby sized I mean the size of a newborn baby. It towered out
of the water with an impressive girth. Nobody wanted to deal with it, so the
bathroom, which happened to be the women's bathroom, sat shut for about two
weeks. It was mostly fine, but then the complaints from the women in the
office started to roll in about the obvious issues: seat left up, pee on the
seat, having to wait too long, smells. Being the problem solver type I am, I
decided I would rectify this issue. The entire office (about 8 people) were
all on board and decided to help me. We grabbed several paper towel rolls
from the men's bathroom, and proceeded to wrap me up like a mummy -- head to
toe, just tiny slits left for my eyes. Armed with a plunger, they opened the
door to the women's bathroom and I shimmied in, with them closing the door
behind me. I told them not to open the door again until there was a proper
flush. I proceeded to battle this baby sized dump for the next few minutes.
It took about 6 flushes to get the entire thing to go down without flooding
the place. The blame for the baby sized shit always landed on the smallest
girl in the office :/ -- note, nobody's feelings were hurt, it was all fun
and games; for fucks sake, we were making like $7 an hour.

This has gotten much longer than intended, and I have only given a few of the
crazy stories about this place, but it is almost lunch time, so I will leave
you with one last story.

During the DC2 bring-up I volunteered to work on the wiring crew. I thought
it would be a good opportunity to learn more about networking and make a
little extra money. There I learned how to crimp network cable and the art of
zip ties. We were on a super tight deadline, so sometimes corners were cut --
or should I say, not cut. On one rack that me and my coworker were wiring, we
failed to properly measure the correct cable length. Our goal was to work on
both ends at the same time. The end result was that we had a HUGE amount of
cable left over. So, with the art of zip ties, we bundled it up -- about 20
cables -- into a big burrito and placed it on the top center of the bread
rack -- out of the way and all... Little did I realize this rack was right in
view of one of the 2 network cameras that broadcast the DC live on the main
page of the company's web site. The next day I was called to Pete's desk --
the guy running the wiring crew (also known for having a box fan pointed
directly at his junk while at his desk, and responsible for ordering copper
power cables that would not really work for powering the types of servers we
planned on powering). Pete proceeded to pull up the website, point at the web
cam picture (at the time updated once every 1 min), and ask me what was wrong
with that picture. I did not see it at first, and said nothing. Pete then
asked me about the huge blue burrito on top of the rack :(. I spent that day
rewiring that rack.

Among the stories missing are the makeshift data recovery operations on
failed drives, platter swaps, incompetent employees who refused to use
computers and required paper printouts, limo rides, foosball, bike building,
tales of hacking, and the FBI.

All in all, the place was an awesome place to work, with unlimited freedom to
learn or do anything you wanted, so long as it appeared to maybe help the
business and did not interfere with your daily duties. This is the place I
learned to code, and got serious about computer hardware and software. I
often look back at the things I learned there and how they help me do my job
today.

Shout out to Brain, David (both of them), Sammy, Steve, Todd "shorty pants",
Bill, Jason, Curtis, Patrick, Bridget, Phil, Samantha, Cory, Fox, Brandon,
Ken, Jen, and many others I am sure I have not forgotten, just at a loss for
names.

TL;DR: Servers on bread racks, people paid crap, fun place to work.

~~~
peterwwillis
Is it possible for you to edit these posts and put the contents in a
pastebin.ca or similar link? They take up quite a bit of screen real estate,
especially on mobile. Thanks

------
pbreit
What's the takeaway here?

I would say: there's business to be had by questioning all the conventional
wisdom.

And: customers like low prices.

