
(I'm the datacenter manager at imgix, and I wrote this article)

1. Yeah, the OS X graphics pipeline is at the heart of our desire to use Macs in production. It's also pretty sweet to be able to prototype features in Quartz Composer, and use this whole ecosystem of tools that straight up don't exist on Linux.

2. I mentioned this elsewhere already, but it is actually a pretty good value. The chassis itself is not a terrible expense, and it's totally passive. It really boils down to the fact that we want to use OS X, and the Mac Pros are the best value per gflop in Apple's lineup. They're also still a good value when compared against conventional servers with GPUs, although they do have some drawbacks.

3. I would love it if they weren't little cylinders, but they do seem to handle cooling quite well. The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.

In terms of provisioning, we're currently using OS X Server's NetRestore functionality to deploy the OS. It's on my to-do list to replicate this functionality on Linux, which should be possible. You can supposedly make ISC DHCPd behave like a BSDP server sufficiently to interoperate with the Mac's EFI loader.
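For the curious, the rough shape of this on the ISC side is a class that matches Apple's BSDP vendor class. This is an untested sketch with placeholder addresses, and it leaves out the hard part (the vendor-encapsulated-options / option 43 payload that carries the actual BSDP image list):

  # Untested sketch of a dhcpd.conf class for Apple NetBoot/BSDP clients.
  class "AppleNetBoot" {
    # Apple's firmware sends a vendor class starting with "AAPLBSDPC".
    match if substring (option vendor-class-identifier, 0, 9) = "AAPLBSDPC";
    next-server 10.0.0.5;     # placeholder NetBoot/TFTP server
    filename "booter";        # placeholder EFI boot file
    # The BSDP image list itself rides in vendor-encapsulated-options
    # (option 43); getting that encoding right is the real work.
  }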

We don't generally do software updates in-place, we just reinstall to a new image. However, we have occasionally upgraded OS X versions, which can be done with CLI utilities.




Why not disassemble the cylinders and reassemble them into a rectangular chassis? I'm sure that would give you a denser layout. Sure, it would void the warranty and hurt resale value, but do you really care?


The whole machine's custom built to fit inside the cylindrical case... the best you could do would be to take the outer case off, and then you've just got a slightly smaller cylinder.

Electrically, everything's built around a round "central" PCB using a custom interconnect. You're not going to be able to reassemble the thing into a rectangle and still get a functioning machine (not without tons of custom design work, at least).

See https://www.ifixit.com/Teardown/Mac+Pro+Late+2013+Teardown/2...


This actually came up during the design phase, and it was tempting. However, you'd have to figure out how to connect the boards together, and you'd have to figure out where to put heatsinks and where to direct airflow.

Since we were able to get the Pros to the point where they effectively occupy 1U, there wasn't really any incentive to do a disassembly-style integration. Maybe if Apple announces the next Mac Pro comes as a triangle.

To your other point about the warranty and resale: we do care, but only a little. I budget machines to have a usable lifespan of 3 years, but the reality is that Apple hardware historically has significant value on the used market for much longer than that. So if we can recoup $500-1000 per machine after 3 years of service, that would be great.


> The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.

Do you mean your Mac Pros dissipate 1/5th to 1/10th as much heat as other x86 server hardware, or is there some other factor in play that makes your AC 5-10x more power efficient?


I understand "related to cooling" as Mac Pro's cooling in this setup is 5-10x more efficient.


Sorry, just some off-the-cuff math. We use Supermicro FatTwin systems for Linux stuff, and they run a lot of fans at much higher RPMs to maintain proper airflow relative to the Mac Pro design (which runs one fan at pretty low RPMs most of the time).

As a result, I'm calculating that the Mac Pros draw a lot less power for cooling purposes than the Linux systems due to their chassis design. However, serviceability and other factors are definitely superior on the Supermicro FatTwins.


What's so good about this OS X graphics pipeline that isn't on anything else? I'm now super curious.


Core Image:

https://developer.apple.com/library/mac/documentation/Graphi...

http://en.m.wikipedia.org/wiki/Core_Image

I'm not super familiar with it or the competition, but I assume this is what they're talking about.


So, it's basically the MESA Intel graphics pipeline?

EDIT:

For the downvoters and the unclear, the relevant bit talks about compiling exactly the instructions needed to change the image. As I understand it, this JIT recompilation of pixel shaders is effectively what was implemented in the mesa drivers for Intel chipsets.


Compiling the shaders is a big win, since it allows us to do almost all operations in one pass rather than multiple passes. The service is intended to function on-demand and in real time, so latency matters a lot.
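To make the one-pass point concrete, here's a minimal sketch (not our actual pipeline): chained Core Image filters are just a lazy recipe, and the context compiles the whole graph at render time rather than executing one pass per filter.

  // Minimal sketch: two chained CIFilters, rendered with a single request.
  import Foundation
  import CoreImage

  let context = CIContext()  // GPU-backed by default
  let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/example.jpg"))!  // placeholder input

  // Building the chain renders nothing yet; it's just a description of work.
  let exposed = CIFilter(name: "CIExposureAdjust", parameters: [
      kCIInputImageKey: input,
      kCIInputEVKey: 0.5,
  ])!.outputImage!

  let sharpened = CIFilter(name: "CISharpenLuminance", parameters: [
      kCIInputImageKey: exposed,
      kCIInputSharpnessKey: 0.4,
  ])!.outputImage!

  // The render call is where Core Image concatenates the filter kernels
  // and executes them, ideally in a single pass over the pixels.
  let rendered = context.createCGImage(sharpened, from: sharpened.extent)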


Thanks for replying and thanks for the article too - great read with some fantastic photography.

Really interesting to hear how you provision servers, had no idea that OS X Server came with tools for that, but it certainly makes sense. I wouldn't have thought Apple would have put much time or thought into creating tools for large deployments, but glad to hear that they have.


Thanks, the photography was done by our lead designer, Miguel. I am super impressed at what he's been able to capture in an environment that can easily come off as utilitarian and sterile.

He has some other work online that you might enjoy, not related to Macs or imgix: http://photos.miggi.me/


What's the noise level like with these machines? The typical pizza-box servers aren't exactly quiet.


They're pretty much silent relative to datacenter stuff.

One of the goals of the next revision is to have LED power indicators (maybe plugged in to the front USB ports) or LCD panels built into the front of the chassis. Right now you actually can't tell that the rack is powered on unless you walk to the hot aisle and look at the power readouts, it's that quiet.


Is fan failure reported through management APIs?


We wrote a little tool to probe the SMC and graph the output, so we know CPU temp and fan speeds and whatnot. If a fan fails, it shows up as 0 rpm (in my experience thus far), so we can tell and take the host offline.

Even if you can't see that the fan itself has failed, the CPU core temp should eventually go out of the acceptable range without any forced air at all, which also helps determine that hardware maintenance is required.
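The check itself is simple. Roughly (a simplified sketch, not our actual tool, assuming a hypothetical smcprobe helper that prints one fan RPM per line):

  // Rough sketch: read fan RPMs from a hypothetical helper and flag zeros.
  import Foundation

  func fanRPMs() throws -> [Int] {
      let task = Process()
      task.executableURL = URL(fileURLWithPath: "/usr/local/bin/smcprobe")  // hypothetical SMC reader
      task.arguments = ["--fan-rpms"]
      let pipe = Pipe()
      task.standardOutput = pipe
      try task.run()
      task.waitUntilExit()
      let output = String(data: pipe.fileHandleForReading.readDataToEndOfFile(), encoding: .utf8) ?? ""
      return output.split(separator: "\n").compactMap { Int($0.trimmingCharacters(in: .whitespaces)) }
  }

  // A fan reading 0 rpm means the host should come out of rotation.
  if let rpms = try? fanRPMs(), rpms.contains(0) {
      print("fan failure suspected; take this host offline")
  }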

So far nothing has actually failed on any of our Mac Pros though. When and if that happens, the entire Pro will get swapped out as one field replaceable unit, and then put in the repair queue.


BSDPy, AutoNBI, and Imagr provide a bleeding-edge OS X deployment solution that runs entirely on Linux. OS images can be generated with AutoDMG, and Munki will keep them configured and updated afterwards.

Pop into ##osx-server on freenode if you want to talk to the devs.


Thanks, I was aware of AutoDMG and Munki, but the rest are news to me. We'll check them out.


> It really boils down to the fact that we want to use OS X,

How the hell did you guys get funding to do this? I can't imagine any sane person wanting to put money behind this. Could I have their contact information?


The real question to me is: why would anyone fund doing this in EC2?

Here's the quick math on cost per gflop, including all network and datacenter costs:

  Mac Pro: $5/gflop
  EC2 g2.xlarge: $21.19/gflop


Not sure where you got EC2 out of my comment.

I also think you need to redo your math on the price per gflop for a Mac Pro; you seem to be at least half the price of my back-of-the-envelope work. Unless you have some crazy good supplier.


Exposing more detail behind this math is unfortunately not something that I'm ready to do, but I'm pretty comfortable with it in broad strokes. EC2 really is that much more expensive, when you factor in things like network bandwidth.

As I noted elsewhere, I mention EC2 because all of our (funded) competitors run there. We can split hairs over whether I could save 10% on Linux systems vs Mac systems, but the elephant in the room is all of the companies trying to make this sort of service work in EC2. You can't do it and make money at the same time. Even if you can make money at small scale, you will eventually be crushed by your own success.

My overriding goal for imgix's datacenter strategy (and elsewhere in the company) is to build for success. To do that, we have to get the economies of scale right. I believe we have done so.


The choice isn't between a Mac Pro and EC2. You can rack up x86 boxes chock full of GPUs far more easily than Mac Pros.


I mention it because AFAIK, all of imgix's direct competitors run in EC2.


How long will it take to amortize the costs of the hardware based on EC2 g2.xlarge savings?


Not certain if I understand your question, but I'll take a shot at answering:

I expect a useful life span of 3 years for any datacenter equipment. A Mac Pro's list price is about $4000. We pay less, but I'll use public figures throughout. Using equipment leasing, I can pay that $4000 over the 3-year period, with, let's say, a 5% interest rate and no residual value (to keep this simple). So over 3 years, I spend $4315 in total per machine to get 2200 gflop/s.

Over 3 years with EC2, a g2.xlarge is $7410 up front (to secure a 57% discount) for 2300 gflop/s.

So I can pay over time, save $3100 over a 3-year period, and probably still resell the Mac Pro for $500 at the end of its life span. That's pretty compelling math to me. There are costs involved with building and operating a datacenter, and that evens things out a bit. What really kills EC2, though, is the network bandwidth cost. It is just insane.
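If you want to check the lease figure, it's just a standard amortized payment over 36 months using the public numbers above:

  // Back-of-the-envelope check using the public figures quoted above.
  import Foundation

  let principal = 4000.0   // Mac Pro list price
  let months = 36.0
  let r = 0.05 / 12.0      // 5% annual lease rate, monthly

  // Standard amortized payment: P * r / (1 - (1 + r)^-n)
  let monthly = principal * r / (1 - pow(1 + r, -months))
  let macProTotal = monthly * months   // ~$4315 over 3 years

  let ec2Upfront = 7410.0              // 3-year upfront price quoted above
  print(String(format: "Mac Pro lease total: $%.0f", macProTotal))                // ~4315
  print(String(format: "EC2 up front:        $%.0f", ec2Upfront))
  print(String(format: "Difference:          $%.0f", ec2Upfront - macProTotal))   // ~3095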


It'll be REAL f'in expensive in EC2, that's for sure.



