
Racking Mac Pros - zacman85
http://photos.imgix.com/racking-mac-pros
======
jallmann
This seems risky from a business perspective: it's voluntary vendor lock-in.

What if Apple decides to change the Mac Pro form factor for the next
iteration? Then you have to retool and are left with a bunch of incompatible
chassis. What if Apple stagnates with hardware upgrades? You'd be stuck
running obsolete hardware. What if Apple discontinues the entire Mac Pro line?
Not to mention the price premium of Apple hardware itself, then the time and
expense incurred to design and fabricate this.

The fact that their software depends on Apple's graphics libraries doesn't
seem like a good justification for doing this. What it says is they are
willing to throw a ton of money and effort towards (very cool) custom
hardware, but are unwilling to hire a person to write optimized OpenGL shaders
for Linux, which would work on pretty much any other server they choose to
build/buy/lease/cloudify. Certainly there will be other "debt" to overcome,
especially if much of your codebase is non-portable Objective-C or GCD, but
that has to be weighed against the possibility of your only vendor pulling the
rug out from under you. And looking at Apple's history, that is a very real
possibility...

Owning your hardware like this makes complete sense if your core business is
directly tied to the platform itself, eg an iOS/OSX testing service. But as
far as I can tell, imgix does image resizing and filters... their business is
the software of image processing, and they're disowning that at the expense of
making unrelated parts of the business more complicated. Not a good tradeoff,
IMO.

~~~
MCRed
Why pay 5-10X as much to host on AWS? It's not for free.

Hosts have a really nice markup, compared to hosting yourself. Hosts make a
lot of sense for small companies who can benefit from the aggregated demand
and capital costs being spread over many clients.... but not when you're at
the level of building your own datacenter, or even using a full rack.

It's funny how since 1980 people have been talking negatively about Apple and
"vendor lock-in". For most of that time, the alternative being advocated was
vendor lock-in to Windows.

The thing is when you build your system on an OS or hardware choice you're
making "vendor lockin" to that platform. Build on Linux and you're locked into
just Linux, unless you port.

There is little risk in being "locked in" to the largest, most successful
company in the world. Plus, the radically higher performance of Apple's
technology for this particular service makes the costs dramatically lower,
more than covering the premium (in fact I think one Mac Pro probably replaces
4 or 5 Linux boxes doing this).

If you think Optimized OpenGL shaders would do this, you're not understanding
what it is that they are doing. You're just assuming it's a trivial problem,
it is not.

Owning your hardware makes a great deal of sense when you are operating at
scale.

~~~
frugalmail
>Why pay 5-10X as much to host on AWS?

You have no idea what the comparison is, and I don't either. But again, the
criticism is around running a business off of a bunch of Apple "trash cans".

>advocating vendor lockin to windows.

Linux is no lock-in; Windows is lock-in via software, whereas Apple locks you
in on both hardware and software.

>"vendor lockin" to that platform

Java, Scala, or any other JVM language protects you from that, and to a
lesser degree Python and PHP do as well.

>There is little risk being "locked in" to the largest most successful company
in the world.

Price gouging? Deciding not to support your platform anymore? Forcing you to
upgrade?

>Build on Linux and you're locked into just Linux, unless you port.

Only that there are a bunch of Linux options to choose from, they are all open
source so you can do whatever you want as far as upgrade paths and support,
and if you use the JVM languages this isn't an issue.

>in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this.

There is no fact there, that's your delusional opinion.

>If you think Optimized OpenGL shaders would do this, you're not understanding
what it is that they are doing. You're just assuming it's a trivial problem,
it is not.

It's a CDN + image manipulation tool; you don't need 3D libraries. And if you
use existing libraries or tools, it is quite trivial. Here is their API:
[http://www.imgix.com/docs/reference](http://www.imgix.com/docs/reference)
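For what it's worth, the basic resize-to-width operation a `?w=560` query implies really is a few lines with an existing library. A minimal sketch with Pillow (the function name and resampling choice here are illustrative, not imgix's code):

```python
# Illustrative only -- not imgix's code. A ?w=560-style resize using Pillow.
from PIL import Image

def resize_to_width(img: Image.Image, width: int) -> Image.Image:
    """Scale to a target width, preserving aspect ratio."""
    height = max(1, round(img.height * width / img.width))
    return img.resize((width, height), Image.LANCZOS)
```

This covers the shape of the API; the thread's actual dispute is over doing it fast and well at scale.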

~~~
panglott
Platform lockin isn't exactly vendor lock-in, but there's a kind of lockin
nonetheless. You're going to be dependent to some extent on your platform
whatever your platform happens to be.

~~~
frugalmail
>Platform lockin isn't exactly vendor lock-in, but there's a kind of lockin
nonetheless. You're going to be dependent to some extent on your platform
whatever your platform happens to be.

So are you making your own chips out of beach sand or something? /s After a
certain point this gets ridiculous.

JVM and C/C++ (python and other scripting languages to some degree) are the
options if you want cross platform environments.

But on a scale of suckiness:

1) hardware lock-in

2) vendor lock-in

3) service lock-in

4) OS lock-in

5) app server lock-in

6) framework lock-in

7) library lock-in

8) programming platform lock-in

------
rsync
I have two conflicting responses to what I am seeing here ...

First, this is awesome. Just like I want to live in a world where people are
paying picodollars for cloud storage[1], I also want to live in a world where
a bunch of mac pro cylinders are racked up in a datacenter. Very cool.

Second, this is complete silliness. I'm not going to go down the rabbithole of
flops per dollar, but there is _no way_ you couldn't build a hackintosh 1U
with _dual_ CPUs and multiple GPUs and come out _big_ money ahead. Whatever
management overhead gets added by playing the hackintosh cat-and-mouse game is
certainly less than building new things out of sheet metal.

Let me say one other thing: right around mid-2000 was when certain companies
started selling fancy third-party rack chassis gizmos for the Sun E4500, which
was the Cadillac of datacenter servers at the time. Huge specs on paper, way
underpowered for the money they cost ($250k+), and the epitome of Sun's brand
value. And there were suddenly new and fancy ways to rack and roll them.

This reminds me a lot of that time, and that time didn't last long...

[1] Our esteemed competitor, tarsnap.

~~~
tinco
Obviously, but running OSX on non-Apple hardware is a violation of its EULA.

I consulted a lawyer about this (I wanted to run a Hackintosh in the office),
and the language is very clear. The author of the software has full power to
license its use to you with any restrictions they find necessary, no matter
how ridiculous. If Apple only sells you the license on the condition that you
not run it on a Thursday, you'll be in violation of their terms if you run it
on a Thursday.

~~~
nulltype
Even if that were not the case, I doubt the performance of the OS X image
pipeline is tuned for non-Apple hardware if it supports it at all.

~~~
tinco
Non-Apple hardware does not differ in any meaningful way from Apple hardware.
The performance of anything in OS X is perfectly tuned for any generic desktop
computer. It also supports most hardware straight out of the box.

~~~
KeytarHero
> any generic desktop computer

Any generic desktop computer with the same hardware. It sounds like they're
using Apple's image pipeline, which I imagine would be designed around the
specific graphics hardware in the Mac Pro. Sure it could work on other
hardware, but when you know exactly the hardware you're running on you can do
a lot of low-level optimizations you couldn't otherwise do.

~~~
wtallis
Apple supports Intel, AMD, and NVidia GPUs. Their graphics pipeline has over
the years supported substantially all of the graphics chips produced by those
vendors in the past 10-15 years. Their current full feature set may only be
supported on GPUs that admit an OpenCL implementation, but that's still every
bit as broad as the generic desktop computer GPU market—about two
microarchitectures per vendor. Apple's not getting any optimization benefits
from a narrow pool of hardware, for GPUs or anything else. The _only_ benefit
they get along the lines of narrow hardware support is that they don't have to
deal with all the various motherboard firmware bugs from dozens of OEMs.

------
kaolinite
Three possible reasons I can think of for doing this over using PCs or Linux
servers:

1\. Using the same operating system as the developers of the software, plus
access to Apple's fantastic imaging libraries.

2\. The Mac Pro, whilst expensive, is good value for money. The dual graphics
cards inside it are not cheap at all. As servers with GPUs are fairly niche,
this might actually be a cheaper solution.

3\. The form factor. Even if you could create PCs that are cheaper with the
same spec, they'll use more power, possibly require more cooling (Mac Pro has
a great cooling architecture) and will take up a lot more space.

I'd be very interested in hearing how they manage updates and provisioning,
however. I can't imagine that'd be much fun on OS X but perhaps there's a way
of doing it with OS X Server.

~~~
skuhn
(I'm the datacenter manager at imgix, and I wrote this article)

1\. Yeah, the OS X graphics pipeline is at the heart of our desire to use Macs
in production. It's also pretty sweet to be able to prototype features in
Quartz Composer, and use this whole ecosystem of tools that straight up don't
exist on Linux.

2\. I mentioned this elsewhere already, but it is actually a pretty good
value. The chassis itself is not a terrible expense, and it's totally passive.
It really boils down to the fact that we want to use OS X, and the Mac Pros
are the best value per gflop in Apple's lineup. They're also still a good
value when compared against conventional servers with GPUs, although they do
have some drawbacks.

3\. I would love it if they weren't little cylinders, but they do seem to
handle cooling quite well. The power draw related to cooling for this rack
versus a rack of conventional servers is about 1/5th to 1/10th as much.

In terms of provisioning, we're currently using OS X Server's NetRestore
functionality to deploy the OS. It's on my to-do list to replicate this
functionality on Linux, which should be possible. You can supposedly make ISC
DHCPd behave like a BSDP server sufficiently to interoperate with the Mac's
EFI loader.

We don't generally do software updates in-place, we just reinstall to a new
image. However, we have occasionally upgraded OS X versions, which can be done
with CLI utilities.

~~~
sajal83
Why not disassemble the cylinders and re-assemble them into rectangular
chassis? I'm sure that would give you a denser layout. Sure, it would void the
warranty and resale value... but do you really care?

~~~
JonathonW
The whole machine's custom built to fit inside the cylindrical case... the
best you could do would be to take the outer case off, and then you've just
got a slightly smaller cylinder.

Electrically, everything's built around a round "central" PCB using a custom
interconnect. You're not going to be able to reassemble the thing into a
rectangle and still get a functioning machine (not without tons of custom
design work, at least).

See
[https://www.ifixit.com/Teardown/Mac+Pro+Late+2013+Teardown/2...](https://www.ifixit.com/Teardown/Mac+Pro+Late+2013+Teardown/20778)

------
TD-Linux
Given all of the effort spent to use Quartz's graphics operations, I was
curious as to how they actually performed. I opened an account and tried out
the upsampling, and was a bit disappointed.

[http://chen.imgix.net/rose.png?w=560](http://chen.imgix.net/rose.png?w=560)

What other upsamplers look like: [https://github.com/haasn/mpvhq-
upscalers/blob/master/Rose.md](https://github.com/haasn/mpvhq-
upscalers/blob/master/Rose.md)

Looking at the other operations available, I fail to see what is done better
by Quartz than just by imagemagick.

~~~
angersock
As I recall, the basic idea was for something lighter-weight than spinning up
and spinning down imagemagick.

Then again, one wonders why not just use FreeImage or something?

~~~
72deluxe
Is running imagemagick really that intensive?

Isn't running shaders intensive as it needs to be compiled on the fly and
handed off to the GPU driver?

~~~
angersock
Switching between shaders is orders of magnitude cheaper than spinning up and
spinning down a process.
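The gap is easy to demonstrate. A quick sketch timing process spawn against an in-process call (Python standing in for the general point; exact figures vary by machine):

```python
import subprocess
import sys
import time

def avg_seconds(fn, n=10):
    """Average wall-clock time of n calls to fn."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Spawning a fresh interpreter per request vs. staying in-process.
spawn_avg = avg_seconds(lambda: subprocess.run([sys.executable, "-c", "pass"], check=True))
inproc_avg = avg_seconds(lambda: sum(range(1000)))
```

On typical hardware the spawn path is thousands of times slower than the in-process call, which is the whole argument for keeping the renderer resident.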

------
mmastrac
Building on OSX seems like it must add a ton of complexity to your workflow,
despite getting access to some of Apple's GPU-optimized image code.

Then again, it's often cheaper to throw silicon at problems than people. If
you have in-house expertise in Apple's graphics libraries, that might be
cheaper than hiring someone who could write the whole thing to run under a
lower-cost Linux solution.

Alternatively, OS X might give you automatic access to patent licenses for
some of the more expensive image formats.

Have they ever blogged about why they've gone down this path?

~~~
skuhn
(I'm the datacenter manager at imgix and wrote this post)

From a pure hardware perspective, I would love to move this part of the
service to Linux systems with GPUs. I spent some time evaluating this before
we committed to the Mac Pro solution -- built some prototype hardware and did
a cost analysis. It just wasn't the right move, because of the engineering
cost for us. OS X's graphics pipeline is really strong, and we've built a lot
of cool things with it. There is no analog whatsoever on Linux -- we would
have to commit a lot of resources to re-build what we already have, and it
would in the best scenario not be a customer-visible change. As a lean
startup, we have to be ruthless with the work we do: if it doesn't move the
needle for our customers, it's probably not the right thing to do right now.

So instead, I've spent some time (and engaged with partners like Racklive) to
get the Mac Pros to be as operationally acceptable as possible. This rack
design and the chassis we designed go a long way towards achieving that goal.
Airflow is taken care of, and the rack hits my power quota almost exactly (at
full load). Cabling and networking and host layout follow our patterns from
our conventional server racks. USB and HDMI ports on the front allow me to
easily use a crash cart.

The lack of IPMI is my biggest operational headache. We have individual power
outlet control and can install the OS over the network, so that's something at
least.

The OS itself is also challenging. I'm not a fan of launchd. Finding
legitimate information about how to do something on OS X is pretty tough,
given that most of the discussions are focused around desktop users (who may
be prone to pass on theories of how things work rather than facts). We've
gotten it to a point where things work pretty well -- we disable a lot of
services, run our apps out of circus, use the same config management system as
on Linux, and so forth. We treat the Macs as individual worker units, so
they're basically a GPU connected to a network card from the perspective of
our stack.

~~~
15155
> who may be prone to pass on theories of how things work rather than facts

This is the biggest nightmare about working with OS X, to me.

Any forum discussion you find on Macrumors or the Apple forums is hilariously
misguided with pathetically bad "theories" on why something isn't working and
how to fix it.

"Zap the PRAM!" can be found in any/every thread, and that's a mild example.

~~~
skuhn
Zapping the PRAM is a pretty frequent joke around the office.

There are some OS X groups that are more focused on automated deployments for
IT type stuff, so those can often be a source of more enlightened discourse,
even though it still isn't exactly catering to our niche.

------
bane
It's really kind of mind-boggling that Apple makes and sells the Pro, which
can be upgraded to a really nice high performance GPU workstation, but then
doesn't sell the same hardware in rack mountable forms for clusterable
computing.

I'm sure they've performed some kind of market analysis for this, but there
are enough differences between OSX and Linux solutions that for people who use
HPC solutions (a growing market) a cleaner path from OSX to HPC would be very
helpful.

~~~
lsllc
Apple used to sell the rack-mounted Xserve, but discontinued that a few years
ago.

~~~
bluedino
>> but discontinued that a few years go.

11 years ago.

~~~
grecy
Discontinued Jan 31, 2011

[http://en.wikipedia.org/wiki/Xserve#Intel_Xserve](http://en.wikipedia.org/wiki/Xserve#Intel_Xserve)

~~~
selectodude
They were last updated in 2009 though, so those Xserves they were selling in
2011 were pretty dated.

------
justinph
The mechanics of this are pretty neat. But the photography in the article is
incredibly distracting. Does every shot have to be at an off-kilter angle? If
this is a story about engineering, how about some head-on shots of the
engineered thing.

I get that the Mac Pro is a beautiful object, but this isn't about the mac.
It's about the rack, and none of these photos let me understand it in one
shot.

~~~
damon_c
I agree, I think. To me the interesting part about putting Mac Pros in a rack
is integrating its relatively unusual approach to air flow.

None of these pictures really show how that is accomplished here. In fact many
of them seem to be deliberately hiding that specific aspect.

~~~
skuhn
(I'm the datacenter manager at imgix, and I wrote this article)

I had originally intended there to be a totally disassembled chassis with an
airflow overlay on top, but it turned into a lot of work. All of the chassis
were already assembled by the time we took the pictures.

The high level view is that air is drawn in to the vent on the front right,
which has a separate channel that all 4 Pros sit in. They are sealed in place,
so the air has to pass into each Pro's air intake to go anywhere. The other
side of the chassis is open to the back of the rack and holds each Pro's
exhaust vent.

I'll go through the photos we took and see if there's something that would
help to illustrate this better.

~~~
damon_c
They're really nice pics and you did a great job explaining everything!

------
striking
Apparently you can fit a round peg in a square hole and achieve high
efficiency density while you're at it.

------
windsurfer
Is there a summary somewhere that explains what makes "OS X’s graphics
frameworks" worth going to all this trouble?

~~~
skuhn
We haven't done any in-depth technical articles yet, and there's the worry of
giving away our secret sauce, but it is something that I'd like to explore
more in the future.

------
a-dub
This seems really crazy to me. I get it: when you're a startup, sometimes you
end up with bubblegum-and-scotch-tape solutions like this, and sometimes that
really makes the most sense on many levels.

But usually you keep that to yourself! To me, this reads sorta like: "Well, it
was really hard to find someone who knew how to build a replacement bridge
across the creek. We were pressed for time, and Bob didn't know anything about
bridges, but luckily, he used to be in the Air Force and we have a bunch of
venture capital. ... So we bought a helicopter instead. We only cross a few
times a year, so for now we're coming out ahead and it works out for us. Plus
the pictures are nice..."

------
ilzmastr
Wow, you learn something new every day. I thought everyone who did image
processing and cared about performance used NVidia cards for the CUDA
libraries. I never knew Apple's [GPU image
libraries]([https://developer.apple.com/library/mac/documentation/Graphi...](https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html))
made AMD a competitive choice.

It is much more expensive, though a lot less engineering work, than buying
some used Teslas on eBay:
[http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m5...](http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m570.l1313.TR0.TRC0.H0.Xnvidia+tesla.TRS0&_nkw=nvidia+tesla&_sacat=0)

or even brand new

~~~
listic
How come these used Teslas are so cheap? (e.g. $150 for an M2090)
[http://www.ebay.com/itm/NVIDIA-Tesla-M2090-6GB-
GDDR5-PCIe-x1...](http://www.ebay.com/itm/NVIDIA-Tesla-M2090-6GB-
GDDR5-PCIe-x16-GPU-Computing-Processor-Video-Car-/231500299455)

~~~
skuhn
I theorize that it's because they're server grade equipment, and the used
market for server gear is not that large. Most established businesses don't
want to risk buying something that straight up doesn't work or will fail
later, even at a 50-75% cost benefit. It just isn't worth the time spent
dealing with it.

If you're a one person startup, then you do what you have to do to survive.
Eventually you get to the point where free stuff actually costs you more than
just paying for it in the first place.

------
Gracana
I'm really impressed by the quality of engineers on this forum. It's amazing,
it seems that just about everyone here knows how to do skuhn's job better than
he does!

~~~
qnaal
I think the big picture is that this looks like a joke, and everyone's having
fun trying to articulate their feelings about it.

------
Pfiffer
Some previous discussion here about using OS X, mounting Mac Minis, etc:
[https://news.ycombinator.com/item?id=8138791](https://news.ycombinator.com/item?id=8138791)

~~~
skuhn
Thanks for posting this. This is the second article in the series (although it
took forever to finalize).

We're also working on a third, which I think will be in the format of an
interview with the Mac Pro chassis's designer.

------
msandford
Seems like you could have gotten higher density going vertical instead of
horizontal. It would have been 50% taller (6U instead of 4U) but it could have
held 100% more Mac Pros.

A Mac Pro is 9.9 inches tall and 6.6 inches in diameter. 9.9 / 1.75 = 5.65 and
6.6 / 1.75 = 3.77 [https://www.apple.com/mac-
pro/specs/](https://www.apple.com/mac-pro/specs/)
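The arithmetic above checks out, as a quick sanity check (a rack unit is 1.75 inches):

```python
import math

RU = 1.75                      # one rack unit, in inches
height, diameter = 9.9, 6.6    # Mac Pro dimensions from Apple's spec page

units_standing = math.ceil(height / RU)    # 9.9 / 1.75 ≈ 5.66 -> 6U upright
units_on_side = math.ceil(diameter / RU)   # 6.6 / 1.75 ≈ 3.77 -> 4U on its side
```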

~~~
skuhn
This was one of our initial ideas for the design, but it boiled down to an
airflow concern. There are some products that do this, such as
[http://www.h-sq.com/products/mprack/index.html](http://www.h-sq.com/products/mprack/index.html)

If you look at how the airflow works on that shelf, I think you'll see why I
don't have confidence in that solution. The air paths to each system seem to
be based on wishful thinking.

We also didn't need to go that dense after considering each host's power draw
at full load. I design towards a 208v/3ph/50a circuit on each rack, and 44 Mac
Pros at full load (plus a switch) are about 13.5kW in my testing. So we would
need to build for 60A circuits, or not completely fill the rack, to make the
vertical orientation worthwhile.
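A quick check of that power budget (treating power factor as ~1; the 80% continuous-load derating is my assumption, not from the comment):

```python
import math

volts, amps = 208, 50                              # 208V three-phase, 50A circuit
capacity_kva = volts * amps * math.sqrt(3) / 1000  # ≈ 18.0 kVA total
usable_kw = capacity_kva * 0.8                     # ≈ 14.4 kW at an assumed 80% derating
rack_load_kw = 13.5                                # 44 Mac Pros + switch, per the comment

assert rack_load_kw < usable_kw                    # the measured load fits the circuit
```

So 44 horizontal Pros land just under the circuit's continuous capacity, while a denser vertical layout would overshoot it, exactly as described.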

~~~
msandford
That product you linked to isn't all that bad, provided that you run with a
front plate to actually block off the rest of the cold aisle and force the air
to flow to the hot aisle. But that doesn't seem to be included anywhere. So I
agree that it's wishful thinking.

The reality of the power budget makes the most sense really. There's no point
in cramming extra units in if you're going to have to rewire for them. Systems
engineering!

~~~
skuhn
H-Squared's product isn't terrible by any means, but I see it as phase 1 of at
least a 2 or 3 phase solution. If you were running one shelf of Pros in a
rack, it wouldn't matter much -- but at 10 racks of 88 Pros each, you'll run
into cooling issues unless you put more work into it.

On the topic of density: our chassis was originally specced to support 6 units
rather than 4. I vetoed that because it would require a second top-of-rack
switch, and would have been too power dense for our current site design.

44 turned out to be the magic number this time around. The design is also
flexible enough that if the specification changes dramatically in future Mac
Pros, we can tweak as necessary to achieve ideal density.

------
CPLX
Would it really have killed Apple to keep on making rack mountable OS X
servers? I bought and configured quite a few of them back in the day and was
quite fond of them.

I realize it's not the Apple Way™ but considering just how bizarre and niche
the current trash-can Mac Pro line is, it hardly seems more niche than that.

~~~
alextgordon
Apple doesn't even make standalone desktops anymore. The Mac Mini 2014 is
twice as slow as the Mac Mini 2012!

------
protomyth
At this point, Apple should just license OS X for VMWare installations on non-
Apple hardware so we can skip all this foolishness.

~~~
adamio
They would probably run their own OSX Based EC2 competitor before doing this

~~~
protomyth
I'm an Apple fan (well, actually a NeXT fan who went with the flow), and I
can think of no way I'd trust an EC2 competitor from Apple given their history
with the cloud. I don't doubt you're right, but I just couldn't see using it.
------
acqq
It's deep in the comments and I really like the sentence:

x0054: "You are fitting triangular shaped computers, wrapped into round cases,
into square shaped boxes."

And place them horizontally. And without additional fans!

And surprisingly, if you read skuhn's answers here, it all still makes sense
for them, financially.

And also surprisingly, Apple says it's OK to use the Mac Pros horizontally:

[https://support.apple.com/en-us/HT201379](https://support.apple.com/en-
us/HT201379)

Fascinating.

~~~
skuhn
I wish that I could share some of the internal cost analysis that was a big
part of the decision process; I've dropped breadcrumbs here and there, but
exposing the whole thing just isn't a possibility.

Physically, the Mac Pro itself is really densely constructed. Even with some
empty space inside our Mac Pro chassis, the solution is effectively 1U per 2
GPUs. That's pretty dense, and it hits our power target for the current site
design, so going denser would only lead to stranding space ahead of power
(which leads to cost inefficiencies).

But, let's consider some hypothetical configs with list prices that I just
looked up. Anyone can do this, and these are not reflective of my costs (you
can always do better than list). In reality, I would do a lot more digging on
the Linux side, but this is a reasonable config that is analogous in
performance and fits into my server ecosystem.

I'm excluding costs that would exist either way: the rack itself, CDUs, top-
of-rack switch, cabling, and integration labor are all identical or at least
very similar. Density is very similar, so there's no appreciable difference in
terms of amortized datacenter overhead.

    
    
      Mac Pro config (4 systems in a 4U chassis):
        - 4x Mac Pro ($4600)
          - Intel E5-1650 v2
          - 16GB RAM
          - 256GB SSD
          - 2 x D700
        - Our custom chassis
    
      Capex only: $0.70 per gflop
    
      Linux config (4 systems in a 4U chassis):
        - SuperMicro F627G2-FT+ ($4900)
          - 4x Intel E5-2643 v2 - 1 CPU each ($1600)
          - 8x 8GB DIMMs - 16GB each ($200)
          - 8x 500GB 7200rpm (RAID1) HDD - 500GB RAID1 boot drive ($300)
          - 8x AMD FirePro S9050 - dual GPU ($1650)
    
      Capex only: $1.03/gflop
    

For comparison, I'll give EC2 pricing as well. It's a tad unfair, since we
aren't including on-going maintenance and electricity for the Mac or Linux
options -- but 3 years of power is also not nearly equal to the cost of a
server. EC2's pricing becomes truly atrocious when you consider network costs
-- there is simply no comparison between 95th percentile billing and per-byte
billing.

    
    
      EC2:
        - g2.2xlarge @ 3 year reserved pricing discount ($7410)
    
      Instance operating cost only: $3.23/gflop
    

The Linux config for sure offers many more hardware options and greater
flexibility -- and it also requires us to rewrite our imaging stack that is
working out pretty well for us and our customers.
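As a rough sanity check of the $/gflop arithmetic above, using assumed GPU ratings (≈3500 gflops per FirePro D700, ≈3200 per FirePro S9050) and a guessed chassis cost, none of which come from the article:

```python
# Assumed ratings, not from the article: ~3500 gflops per FirePro D700,
# ~3200 gflops per FirePro S9050; the ~$1200 chassis cost is a guess.
mac_capex = 4 * 4600 + 1200                 # 4 Mac Pros plus the custom chassis
mac_gflops = 4 * 2 * 3500                   # 2x D700 per system
mac_rate = mac_capex / mac_gflops           # lands near the quoted $0.70/gflop

linux_capex = 4900 + 4 * 1600 + 8 * 200 + 8 * 300 + 8 * 1650  # barebones + CPUs + RAM + disks + GPUs
linux_gflops = 8 * 3200                     # 8x S9050 across the 4 nodes
linux_rate = linux_capex / linux_gflops     # in the ballpark of the quoted $1.03/gflop
```

Under those assumptions the Mac config comes out meaningfully cheaper per gflop, consistent with the quoted figures.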

I firmly believe that we've made a pragmatic and sensible choice for our image
rendering platform today. imgix has a number of smart and talented people
constantly evaluating and improving our platforms, and I'm confident we will
keep making the right decisions in the future (regardless of how nicely the
Mac Pro may photograph).

------
intrasight
It is hard not to think that a great deal of time and money would have been
saved by removing the dependency described here: "Parts of our technology are
built using OS X’s graphics frameworks, which offer high quality output and
excellent performance."

~~~
jfb
The video and imaging pipeline on OS X is light years beyond anything you
could roll yourself in a reasonable timeframe. It's really good stuff.

~~~
CarVac
I imagine more advanced things like face recognition and such are not so
simple, but from my experience writing a raw converter, a lot of image
processing is far simpler than you'd expect.

~~~
skuhn
A lot of the complexity comes down to not just doing the operation, but doing
it correctly and quickly. ImageMagick does multiple passes, for instance; this
is sub-optimal for both quality and speed.

------
chx
If that's what you want, it's a fairly ingenious solution. If you look at
[https://macstadium.com/mac-pro](https://macstadium.com/mac-pro) you'd think
you could just build the racks with the Mac Pros opening back and front. But
the Mac Pro is 6.6 inches wide, so you can only fit two across; three would be
more than 19" (and you can't put 19" of equipment inside a 19" rack). This way
you can squeeze four into the same space. And since the cylinder is 9.9" high,
you could squeeze in quite a few external hard disks as well if you needed to,
although that would require some fans to help move air. You have perhaps 6-8
inches of free space; one 3.5" HDD is 5.75 inches long, so you could stand it
on its shorter 4" edge, put in 6 (taking 1" from the 19"), and probably fit
two banks of these to arrive at 3 disks per Mac Pro while still leaving 4-6
inches for moving air. It might not be impossible to squeeze in 6 disks per
Mac Pro, but the cooling would need to be very impressive for that.

~~~
skuhn
For our use case, the disk portion doesn't live on the Mac Pros (we have Linux
systems that act as storage servers).

We toyed with open shelf type solutions that would let us mount the systems
front-to-back, but as you noted, anything above 2 Mac Pros across won't fit in
a 19" rack. We also thought about mounting 23" rails in our standard cabinet,
but ultimately settled on this chassis and orientation.

One of our early design ideas:
[https://www.dropbox.com/s/15u19aivay4hfiu/2014-01-13%2017.14...](https://www.dropbox.com/s/15u19aivay4hfiu/2014-01-13%2017.14.50.jpg?dl=0)

------
exelius
This is interesting -- they actually manage to get greater density out of this
setup than many traditional rack mount systems offer.

And to those questioning "Why would you use such expensive systems when
commodity hardware is just as fast at half the price?" I would reply that the
Mac Pro isn't all _that_ expensive compared to most rack mount servers. If
you're talking about a difference of $2000 per server, even across a full rack
you're talking less than $100k depreciated over 5 years.
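Using the 44-systems-per-rack figure mentioned elsewhere in the thread, that back-of-envelope works out:

```python
premium_per_server = 2000      # exelius's assumed per-server price difference
servers_per_rack = 44          # imgix's per-rack count from elsewhere in the thread
total = premium_per_server * servers_per_rack   # $88,000 per rack
per_year = total / 5                            # ≈ $17,600/year over 5 years
```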

Though Apple is sorely lacking a datacenter-capable rack mount solution. I've
always felt they should just partner with system builders like HP or
SuperMicro to build a "supported" OS X (e.g. certified hardware / drivers,
management backplane, etc.) configuration for the datacenter market. It's kind
of against the Apple way, but if this is a market they remotely care about,
channel sales is the way to go.

~~~
ansible
_This is interesting -- they actually manage to get greater density out of
this setup than many traditional rack mount systems offer._

If they are GPU limited...

A full 4U rack of Mac Pros is 8 AMD Fire GPUs (6GB VRAM each), 256GB main RAM,
48 2.7GHz Xeon cores (using the 12-core option), and 4TB of SSD. 10G Ethernet
via Thunderbolt2.

Let's set aside differences in GPU and processor performance; we're just
looking at the base stats. All for about $36K USD, not including the rack
itself.

An alternative is the SuperMicro 4027GR-TR:

[http://www.supermicro.com/products/system/4U/4027/SYS-4027GR...](http://www.supermicro.com/products/system/4U/4027/SYS-4027GR-
TR.cfm)

So, maxed out, you've got 8 Nvidia Tesla K80 cards (dual GPU), 1.5TB RAM, 28
2.6GHz Xeon cores, and a lot of storage (24 hot-swap bays). That's in a 4U
rack too.

Call it about $13K USD for the server, and $5K per GPU. Plus a little storage,
call it about $56K USD with 10G Ethernet.

The SuperMicro system is designed to be remotely managed. Each GPU has double
the VRAM of the AMD Fire ones (12GB vs. 6GB).

I don't know the exact performance figures of the AMD FirePro vs. the Kepler
GK210, but I'm sure the FirePro isn't nearly as good. And you've got twice as
many Nvidia chips on top of that.

At some point it's going to get cheaper to re-write the software...

~~~
skuhn
The Tesla K80 didn't exist when I started this project, but to do some quick
math:

K80: 8740 gflop/s

2x FirePro D500: 3500 gflop/s

K80 runs about $4900 a card, whereas the entire Mac Pro (list price) is $4000.
So it's 2.5x the performance at easily 2x the cost if not more.

You're right that there is a cost advantage to going with commodity server
hardware, but I don't think it's as great as most people think in this
particular case. It's also far from free for us to do the necessary
engineering work, and not just in terms of money. It would basically mean
pressing pause on feature development at a crucial time in the company's life,
and that just isn't the right move.

~~~
ansible
_2x FirePro D500 gflop/s: 3500_

That 3500 gflop/s figure is for the D700; the D500 is 2200.

[http://www.amd.com/en-
gb/solutions/workstations/d-series](http://www.amd.com/en-
gb/solutions/workstations/d-series)

 _K80 runs about $4900 a card, whereas the entire Mac Pro (list price) is
$4000. So it's 2.5x the performance at easily 2x the cost if not more._

The 6GB VRAM version with the D700 costs another $600 USD each.

The K80 has 12GB VRAM per GPU (24GB total per card).

If your code can use the additional memory, that is a huge difference.

Anyway, 3500 gflop/s times 8 is 28 tflop/s for the Mac Pros.

With 8 K80s, you're at 70 tflop/s. Single precision. So that's 2.5x the raw
performance, and double the per-GPU memory. Actual performance for a given
workload? I wouldn't care to say.
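
The aggregate math here is easy to sanity-check; a quick back-of-envelope
sketch in Python, using only the single-precision gflop/s figures quoted in
this thread (not verified against vendor spec sheets):

```python
# Back-of-envelope aggregate throughput from the figures quoted in this
# thread (single precision; not verified against vendor spec sheets).

MACPRO_GFLOPS = 3500   # quoted above for 2x FirePro D500
K80_GFLOPS = 8740      # quoted above per Tesla K80 card (dual GPU)

macpro_total = 8 * MACPRO_GFLOPS / 1000   # tflop/s
k80_total = 8 * K80_GFLOPS / 1000         # tflop/s

print(f"8x (2x D500): {macpro_total:.1f} tflop/s")       # 28.0
print(f"8x K80:       {k80_total:.1f} tflop/s")          # 69.9
print(f"ratio:        {k80_total / macpro_total:.2f}x")  # 2.50
```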

I'd be concerned about thermal issues too. I wouldn't be surprised if the Mac
Pro gets throttled after a while when running it hard. The kind of server
you can put the K80 in usually has additional (server-grade) cooling.

I'm not disrespecting you guys; if you've got a solution that works and makes
you money, more power to you!

But I stand by my claim that at some point, it will be cheaper to rewrite the
software for the render pipeline. Not this year I guess, and who knows, maybe
not next year either.

~~~
skuhn
Sorry, I do have this evaluation in a spreadsheet somewhere (except against
the Tesla K20, K80 wasn't out then), but I just quickly looked up the Mac Pro
specs. We do use the D500, so I should have quoted those gflops. There is a
benefit to off-the-shelf GPUs, but I don't see it as a make-or-break kind of
situation for imgix right now.

I agree that some day in the future, it does seem like it will make sense to
bite the bullet and rewrite for Linux. It probably won't solely come down to a
cost rationale though, because there are a TON of business risks involved in
hitting pause on new features (or doubling team size, or some combination
thereof).

Fundamentally I don't believe in doing large projects that have a best case
scenario of going unnoticed by your customers (because the external behavior
has not changed, unless you screwed up), unless you absolutely have to.

The real reason to migrate to Linux would have to be a combination of at least
three things:

    
    
      1. Better hardware, in terms of functionality or price/performance
      2. Lower operational overhead
      3. The ability to support features or operations that we can't do any other way
    

Much more likely, we would adopt a hybrid approach where we still use OS X for
certain things and Linux for other things.

~~~
ansible
_We do use the D500, so I should have quoted those gflops._

Well now I'm curious as to why you aren't using the D700s. The extra gflops
seem like a good value to me. Approximately 60% greater GPU performance for a
15% increase in cost, everything else being equal.

But you probably have to get some work done, rather than answer random
questions from the Internet. :-)

Good luck!

~~~
skuhn
It is intriguing, and we have one D700 Mac Pro for test purposes. At the time
we ordered the Pros for the prototype rack that is the subject of this
article, we found that other parts of our pipeline were preventing us from
taking full advantage of the increased GPU performance. So we ratcheted down
to the D500.

Keep in mind that either of them offers significantly higher gflop/s per
system than the best GPU ever shipped in a Mac Mini (480 vs. 2200 vs. 3500).

However, we have fixed bottlenecks in our pipeline as we identified them, so
it is probably time to re-evaluate. I actually just had a conversation with an
engineer a minute ago who is going to jump on this in the next few days.
Higher throughput and better $/gflop is always the goal, just have to make
sure we can actually see the improvement in practice.

------
pjungwir
Just last night I was asking why, with so many mobile app companies, no one is
building their server side in Objective-C. Wouldn't that have the same
personnel advantages as Node.js (supposedly) offers the web world? I haven't
looked to see if there is a decent Objective-C web framework, but if it's just
an API I guess you don't need too much.

I mean I can think of lots of reasons to stick with Rails/whatever (and that's
what _I_'d do), but I'm surprised it is quite so unheard of. You'd get much
better performance. Skipping garbage collection with ARC would be awesome.
Coding time is still pretty fast, and it's not as unsafe as C/C++.

Just a crazy idea for anyone about to start a mobile app company. :-)

------
frugalmail
This seems like a company destined to fail:

1) Massive premium for compute

2) They're at the mercy of Apple, a single completely unpredictable vendor.

3) Apple changes its form factors to chase the latest "design" too frequently

4) Apple hardware sucks to manage en masse

~~~
SG-
1) They did a cost analysis and it was alright, I can't imagine high-end
servers with dual GPUs being much cheaper or much more expensive myself.

2) This isn't the 90s where Apple was at risk of folding and going away.

3) They actually don't other than phones. Go back to all their Pro desktop
lines starting with the PowerMacs. The previous MacPro case lasted quite long
and came from the PowerMac G5.

~~~
skuhn
The longevity of the previous Mac Pro form factor definitely gives me hope,
although one can never be sure when it comes to Apple.

Consider MacRumors' lifespan data for the various Mac Pro models:
[http://buyersguide.macrumors.com/#Mac_Pro](http://buyersguide.macrumors.com/#Mac_Pro)

The previous form factor (silver tower) lived from August 2006 to December
2013. If we see that kind of longevity out of the black cylinder form factor,
I'd be thrilled (although preferably with more internal updates). However,
there's nothing stopping us from adapting our design to whatever new models
come out.

We have current rack designs for Mac Minis and Mac Pros now, and we can add a
third if the need arises.

------
salibhai
Isn't this ridiculously expensive? Couldn't you achieve the same thing using
cheaper PCs?

~~~
smeyer
>Couldn't you achieve the same thing using cheaper PCs?

They say "Parts of our technology are built using OS X’s graphics frameworks,
which offer high quality output and excellent performance". So they couldn't
achieve the "same thing" in the sense of running their software on racked
computers, because it won't run on PCs, and if you're thinking about expense
you'd have to consider the cost of making the software run equivalently well
on PCs.

~~~
kubov
I'm really curious about any study/comparison between OS X's graphics
frameworks and other open/closed source solutions available. How is 'output
quality' measured?

------
ezafer
This reminds me of the scene from the matrix where all those humans are
suspended in cords as their energy is being drained off by a dark power.

------
mschuster91
How do you remotely provision OS X? I mean, how do you get to the boot menu to
choose network boot? With the crash cart?

~~~
skuhn
The crash cart is the method of last resort, and it's come into play a fair
amount as we were figuring out how to do this.

The better solution is to have a NetRestore server on the network, and
configure the Macs to boot from a particular server with the bless and nvram
commands. Then on the server, you control if the image gets served or not
based on some external logic (in my case, an attribute in my machine
database).

At the moment, NetRestore is running on an OS X Server machine hanging out on
the network, but integrating it with our existing Linux netboot environment is
on my to-do list.
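
Roughly, the bless/nvram side of that looks like the following (the server
address below is a made-up placeholder, not our actual setup):

```shell
# Point the Mac's firmware at a specific NetBoot/NetRestore server.
# The bsdp:// address is just an illustrative example.
sudo bless --netboot --server bsdp://10.0.0.5

# Inspect the resulting firmware boot variables:
nvram -p | grep -i boot
```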

~~~
jhickok
Have you considered a thin-imaging solution like DeployStudio?

~~~
skuhn
I think our solution is pretty similar in concept. We just deploy the base OS
(with enough config done to ssh in later) via NetRestore. Then any packages or
setup tasks that are required are handled in a post-install step using
Ansible.

------
mcmullen
Really interesting post; the Mac Mini rack looks insanely cool.

This has me and a colleague wondering what Apple runs in their data
centres. Can anybody hazard a guess? Is it Apple hardware with OSX? Is it
custom/third-party hardware running *nix? I seem to remember somebody
mentioning Azure not too long ago.

~~~
skuhn
I mentioned this in another comment, but my understanding (without having
worked there) is that they do not use OS X or their own hardware internally.

I think that things can be quite different between orgs though, with some
adopting a more enterprise-y appliance setup (NetApp Filers, InfoBlox, etc.)
and others building services more like an Internet company (Linux servers and
open source based services).

------
hdmoore
A beautiful demonstration by people who like macs and don't understand
geometry or economics.

------
batbomb
Reminds me of a tube amp.

~~~
johansch
Right?

Server hardware evaluated by how good it looks in photos.

------
pholz
This seems like an odd choice to me.

I think OS X has been the best all-round Desktop OS for many years now, but
what does it give you as a server that a linux-based system can't, and that's
worth the trouble of custom racks, vendor lock-in and high costs?

In fact, if you're working with OpenGL, OS X can be frustrating since it only
ever supports an OpenGL version a few years behind the latest release - IMO
one of the platform's biggest drawbacks.

Then again, I've seen some pretty strange errors doing GPU-heavy work on
Linux machines with Nvidia cards, and it's probably easier to get support on a
standardised SW/HW system such as the Mac Pro...

------
anoother
It would be interesting, once these have been in use for a while, to see some
stats on the relative temperatures (+ fan speeds) of each machine within the
enclosure.

I can imagine the dynamics of 4 machines scavenging air from a single chamber,
with an opening on one end, will result in the machines nearer the warm aisle
having to work harder to keep cool...

I also wonder what kind of ducting could be implemented to minimize this
effect.

Anyway, a very cool project ending in what looks to be a fantastic end
product. I wish I had the chance to work on something like this!

~~~
skuhn
I'm graphing this type of data, but it's too early to draw firm conclusions. I
was mainly concerned that the upper, rear units might not fit within my
desired thermal envelope, but so far it hasn't been an issue at all. This
might make it into our follow-up post, which is intended to explore the design
decisions behind the chassis.

If heat or airflow did become a problem, we could add fans to the chassis
(either in the intake tunnel or along the exhaust vent). The ideal solution is
probably to also attach a chimney to the rear of the rack, but so far it
hasn't been necessary.

------
ubercow
Ever since the new Mac Pros came out, I was curious how they were going to
solve the rack problem with the crazy round design. Especially with regard to
cooling.

------
boyter
Having just finished rolling out a largish scale Thumbor implementation, has
anyone compared it to Imgix?

The feature set I required is served by both equally, so it comes down to
performance/ddos prevention/cost for me mostly. I am unlikely to modify what I
have just done since it is working fine, but for the future would love to know
if anyone has experience with this.

------
cp9
I'm getting the how, just not the why

~~~
johansch
It's designed for believing Apple fans, silly.

------
lurkinggrue
Wow, that's cool and yet doesn't seem to be the best power vs cost solution.

Vendor lock-in is a bitch.

------
sabujp
8 mp's in a 7u : [https://www.facebook.com/pages/Rack-Your-
Mini/10374582632541...](https://www.facebook.com/pages/Rack-Your-
Mini/103745826325418)

~~~
skuhn
We use MK1's Mac Mini shelf (as seen at [http://photos.imgix.com/building-a-
graphics-card-for-the-int...](http://photos.imgix.com/building-a-graphics-
card-for-the-internet)), which is pictured at that page. I just wasn't in love
with their Mac Pro shelf design, so we went a different direction.

Keep in mind also that 8 Pros in 7U = 48 in a 44U space. So it's a pretty
similar density, but I don't think it is as ideal in terms of airflow.
Instead, it's more ideal for working on the systems individually (such as in a
colocation environment), but that isn't a particular concern of ours.

------
kiddico
I'm sorry, but those look absolutely ridiculous. That being said, I want one.

------
derefr
Apropos of nothing, but when you stuff a bunch of Mac Pros in a box, they
begin to look like enlarged vacuum tubes/capacitors. I can almost imagine them
being "screwed into" the rack chassis.

~~~
skuhn
You do have to employ a bit of a twisting motion to remove them, since they
have some gasketing in place. I wanted to add a dry ice smoke machine and blue
LEDs, but alas...

------
kubov
> Parts of our technology are built using OS X’s graphics frameworks, which
> offer high quality output and excellent performance.

I'm really curious about any study/comparison between OS X's graphics
frameworks vs. other open/closed source solutions available. How is 'output
quality' measured? Is it really that great and unique? I hardly think that
simple image operations like cropping/blurring/masks implemented in an OS X
framework are significantly faster, or of 'better quality', than the same
algorithms implemented on Linux/Windows. Not to mention that you can boost
your computation using CUDA/OpenCL on Linux practically seamlessly. But again,
citation is needed here.

------
istvan__
Very well organized datacenter indeed, clean and well designed.

~~~
skuhn
Thanks! A previous post in the series went into a little more detail about the
datacenter itself: [http://photos.imgix.com/building-a-graphics-card-for-the-
int...](http://photos.imgix.com/building-a-graphics-card-for-the-internet)

------
LukeShu
Heads-up: Using the default Iceweasel (Debian's Firefox) User-Agent, the
content of the article doesn't show up. If I switch to a Firefox User-Agent,
it does.

~~~
skuhn
Thanks, the article is hosted by Exposure (one of our awesome customers), so
I'll pass this note along to them.

------
rplnt
If this is for anything else than PR then good God...

------
JohnDoe365
What a horrible solution! The power to thermodynamic waste (= heat) ratio must
be quite horrible.

------
tgokh
Definitely clicked the link thinking someone had rack mounted an army of
MacBook Pros

------
allochthon
I didn't know about the cylindrical Mac Pros. They are quite attractive.

------
lectrick
As a recent Mac Pro owner, those things are beautiful beasts.

------
freedevbootcamp
Great read. Nice website. Great Photos.

------
sabujp
apple's licensing terms force abominations such as these

------
foton1981
Mac porn :)

------
jkot
LOL

------
CyberDildonics
This is a joke right?

