What if Apple decides to change the Mac Pro form factor for the next iteration? Then you have to retool and are left with a bunch of incompatible chassis. What if Apple stagnates with hardware upgrades? You'd be stuck running obsolete hardware. What if Apple discontinues the entire Mac Pro line? Not to mention the price premium of Apple hardware itself, plus the time and expense incurred to design and fabricate this.
The fact that their software depends on Apple's graphics libraries doesn't seem like a good justification for doing this. What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify. Certainly there will be other "debt" to overcome, especially if much of your codebase is non-portable Objective-C or GCD, but that has to be weighed against the possibility of your only vendor pulling the rug out from under you. And looking at Apple's history, that is a very real possibility...
Owning your hardware like this makes complete sense if your core business is directly tied to the platform itself, e.g. an iOS/OS X testing service. But as far as I can tell, imgix does image resizing and filters... their business is the software of image processing, and they're disowning that at the expense of making unrelated parts of the business more complicated. Not a good tradeoff, IMO.
If you're worried about them not matching some piece of client software exactly (Quartz Composer, Photoshop, etc.), you still have options. And those options - e.g., webapp for previewing/something else/etc. - you'll probably want anyway, for the benefit of designers that don't run OS X.
(The filtering aspect of the system I find a little surprising anyway - the idea of an image-focussed client-aware DPI-aware CDN makes sense to me (and I like it!), but something that does your Photoshop filters in the cloud sounds less compelling. I would have expected people to prefer to do that locally, and upload the result. But... while I've worked with many artists and designers, I'm not one myself. So maybe they'll go for that. And/or maybe a lot of their customers take advantage of the fact that the processing appears to be free. And I'm prepared to contemplate the possibility, however unlikely, that they might know their customer base better than I do.)
Uploading pre-edited images takes time/resources, and in general a lot of our customers rely on us to do all of their image processing so that they don't have to.
Additionally, creating edited versions of images in advance presents two problems: 1) Any future site redesigns or edits must now be applied en masse to the existing images or risk older images not complying with the new scheme, and 2) Instead of only managing the one original source image in the origin, now we're talking about maintaining all of the different edited versions, which is very inefficient from a storage and image management perspective.
There are many advantages to applying all of the image transformations on demand, rather than in advance. Keep in mind that we are not simply photo filters, but a full end-to-end image processing offering that works on the fly (everything from simple Photoshop-style edits like cropping, face detection, color correction, and watermarks, to automatic content negotiation and automatic resizing for responsive design); this means that our customers can now make batch edits to their entire corpus of images through a few simple code edits.
This can become extremely cost-effective, and it also helps reduce page weight significantly.
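Roughly, those "few simple code edits" look like this (a hypothetical sketch, not our production code; the host name and helper are made up, though the w parameter is straight from our docs): every image URL on a customer's site is built through one helper, so changing that helper re-renders the whole corpus on the next request.

    import Foundation

    // Hypothetical helper: every image tag on the site goes through this one
    // function, so a single code edit re-renders the whole image corpus on
    // demand -- no batch re-processing of the originals.
    func renderedImageURL(path: String, width: Int) -> URL {
        var components = URLComponents(string: "https://example.imgix.net")!
        components.path = path
        components.queryItems = [
            URLQueryItem(name: "w", value: String(width))  // resize width
        ]
        return components.url!
    }

    // renderedImageURL(path: "/rings_lg_orig.png", width: 400)
    // -> https://example.imgix.net/rings_lg_orig.png?w=400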
It would be interesting to hear how imgix solves this, since you are arguing for the resource savings of the on-demand approach.
The majority of the time taken in the first request is actually in fetching the image from the origin source, so once it's cached in our system it becomes a much faster operation: and of course delivering the cached image without it traversing our stack is even faster.
So yes, the initial request can take time, but all of the subsequent requests of that image are much faster than the alternative. And when you take into account that our service makes it possible to send the correctly sized image for the display (instead of loading a preset size and displaying it smaller), and optimal file types based on device and browser (WebP for Chrome, etc.), load times/page weights on all of those requests are significantly improved.
In general, anyone who serves multiple requests for their images over time will see a marked improvement in page weight/speed, compared to rolling their own solution where they have preset image sizes and deliver JPEG only.
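The content-negotiation piece is conceptually as simple as this (a hypothetical sketch, not our implementation):

    import Foundation

    // Serve WebP only to browsers that advertise it in the Accept header
    // (Chrome does); fall back to JPEG for everything else.
    func negotiatedFormat(acceptHeader: String) -> String {
        return acceptHeader.contains("image/webp") ? "webp" : "jpeg"
    }

    // Chrome sends "Accept: image/webp,image/*,*/*;q=0.8"  -> "webp"
    // Most other 2015-era browsers                          -> "jpeg"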
Look at the current state of Hackintoshes. People are having kernel panics and struggling to keep their machines running with current software. OS X moves pretty fast, possibly faster than Linux, and Apple builds it to support Mac hardware.... the teams who are porting Hackintosh code have to support a lot more hardware variety, and they have fewer resources than, say, Linux.
Running Hackintoshes in production makes no sense.
And I challenge the claim that you would save money.
Looking just at off-the-shelf costs of low-end hardware does not tell you the TCO of serious machines that need to be running all the time.
To get hardware quality comparable to Apple's, you generally have to spend more when going with "commodity" hardware.
The idea that Apple is expensive is a myth, born of two things: people perpetuating it since 1980 (yes, this myth has been spread for 35 years) out of a vested interest in rationalizing their dislike of Apple... and the fact that Apple doesn't compete at the very low end.
In production, TCO is much more about reliability and other things than initial hardware cost.
I look after a mostly Mac environment, and OS X Server is hopeless. No migration path from standalone Macs to networked accounts was the first shocker I found.
And our Macs are less stable than our Windows 8 box used for running Hyper-V VMs.
Not here. My Mac is at about 11 days of uptime and it's under constant use. At this moment, I can't say it's less reliable than my Linux machines.
In this specific case, however, I'd consider ditching the enclosure and ducting cold air through the internal chassis/heatsink. A Mac Pro is, essentially, a heatsink with boards mounted on it, and I'd just let the chassis do that part.
11:28 up 117 days, 19:13, 4 users, load averages: 0.91 0.98 0.95
Still, there's no doubt in my mind that Apple are doing some "move fast and break things" OS development.
A guy I work with has been running several heavily used hackintosh servers without issue. They have been very reliable and he's happily converted existing Linux servers to hackintosh. He's been doing this for a while and knows exactly what hardware to use.
Legally? Yeah, good luck.
Edit: I think the more obvious answer to this is that they would rehouse these babies in a more convenient, albeit likely custom form-factor.
The chassis we designed represents my attempt to re-house the Mac Pros in something more suitable for the datacenter.
It'd be interesting to see someone rip apart a Mac Pro and build an entire form-factor around its setup.
Don't get me wrong, what you guys have done is extremely beautiful in all ways, but I can't help but think that if someone wanted to do this with, say, a Mini... say you take a 1U rack, drill some new holes into it... hmm.
I do have an existing rack design that holds 64 of them (and other people have gone denser, with operational compromises I prefer not to make), so there's no great impetus on my side to rip them out of their enclosures.
My Mac Mini rack design is shown in a little more detail in our previous article: http://photos.imgix.com/building-a-graphics-card-for-the-int...
What other companies utilize Apple hardware in this way at this kind of scale? While not "out of this world" in comparison to some of the big players who have tackled scaling, it's definitely significant considering Apple hardware.
Mac Minis are a little more common than it might seem at first glance, particularly for use cases that some other people have outlined in their comments throughout this thread. Mozilla uses them to test Firefox builds on OS X for instance. I would imagine that places like Sauce Labs must have a Mac Mini farm to facilitate browser tests on OS X.
I'm not aware of any other service that operates in the same space as imgix that runs outside of EC2, so they definitely aren't using OS X there. I think in general there's a sort of disregard for the particular graphics processing benefits that OS X provides (as evidenced by some of the comments in the thread).
I would also be remiss to not mention Mac Mini Colo (http://macminicolo.net/) who do co-located hosting. imgix started out with them, and they did a great job.
There's another interesting use case where you need to have OS X (or iOS): when you want to display photos taken on iOS devices with their applied filters (the images are stored pristine, and the filters are applied on top when you view them). To recreate these photos exactly as they were on the device, you ideally need to render it within an Apple environment. You can probably imagine the use case for a service that stores a lot of user generated photography, in a world where iPhones are the most popular cameras (https://www.flickr.com/cameras).
I also heard through the grapevine today that a certain film studio is interested in getting one of these chassis to test out, because they saw this article. That's pretty exciting to hear, even though we don't profit in any way from the sale of these chassis.
Hosts have a really nice markup, compared to hosting yourself. Hosts make a lot of sense for small companies who can benefit from the aggregated demand and capital costs being spread over many clients.... but not when you're at the level of building your own datacenter, or even using a full rack.
It's funny how, since 1980, people have been talking negatively about Apple as "vendor lock-in". For most of that time, the people saying it were advocating vendor lock-in to Windows.
The thing is, when you build your system on an OS or hardware choice, you're accepting "vendor lock-in" to that platform. Build on Linux and you're locked into just Linux, unless you port.
There is little risk in being "locked in" to the largest, most successful company in the world. Plus, the dramatically lower costs, given the radically higher performance of Apple's technology for this particular service, more than make up for it (in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this).
If you think optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem; it is not.
Owning your hardware makes a great deal of sense when you are operating at scale.
You have no idea what the comparison is, and I don't either. But again, the criticism is around running a business off of a bunch of Apple "trash cans".
>advocating vendor lockin to windows.
Linux is no lock-in; Windows is lock-in via software, vs. Apple for both hardware and software.
>"vendor lockin" to that platform
Java, Scala, or any other JVM language protects you from that, and to a lesser degree so do Python and PHP.
>There is little risk being "locked in" to the largest most successful company in the world.
Price gouging? Deciding not to support your platform anymore? Forcing you to upgrade?
>Build on Linux and you're locked into just Linux, unless you port.
Except that there are a bunch of Linux options to choose from, they are all open source so you can do whatever you want as far as upgrade paths and support go, and if you use JVM languages this isn't an issue.
>in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this.
There is no fact there, that's your delusional opinion.
>If you think Optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem, it is not.
It's a CDN + image manipulation tool, you don't need 3D libraries. And if you use existing libraries or tools, it is quite trivial. Here is their API: http://www.imgix.com/docs/reference
So are you making your own chips out of beach sand or something? /s After a certain point it gets ridiculous.
JVM and C/C++ (Python and other scripting languages to some degree) are the options if you want cross-platform environments.
But on a scale of suckiness:
1) Hardware lock-in
2) Vendor lock-in
3) Service lock-in
4) OS lock-in
5) App server lock-in
6) Framework lock-in
7) Library lock-in
8) Programming platform lock-in
> Why pay 5-10X as much to host on AWS?
Nobody said anything about AWS...
> Hosts make a lot of sense for small companies
Sure, nobody is disputing their choice of colocating themselves.
> OS or hardware choice you're making "vendor lockin" to that platform
It is abundantly clear that the vendor lock-in refers to single sourcing your hardware. That problem is nonexistent on Windows, Linux, BSD, etc.
> I think one Mac Pro probably replaces 4 or 5 Linux boxes
Oh come on, now you're just talking crazy... see other posts in this thread for a cost/performance comparison.
> you're not understanding what it is that they are doing
On the contrary, I think I understand better than you. Do you perform a lot of image processing work on various platforms (including OSX and Linux)? I do.
If you want to run a business that builds/tests using the OS X/iOS ecosystem, this is the only way to do it legally. Apple's licensing terms enforce this. Otherwise we'd be running OS X on generic pizza box servers, since Apple's hardware is truly overpriced and not built efficiently at all for the datacenter (they work fine on desks). Apple really gimped the 2014 Mac Minis, btw. They perform worse than the high-end 2012 Mac Minis.
What barrier to entry? Their customers don't care that OSX is running under the hood. You can offer an image processing service using any platform today. Sure, on Linux it probably wouldn't be as efficient, but it doesn't have to be. Scaling is a Good Problem to have.
Basically, as you grow, it helps to take a critical look at risk factors and the technical debt which contributes to that risk. The longer you wait to pare down that debt, the more expensive it is, and the more exposed you are to that risk. A little more work up-front saves a lot of work later on.
I completely agree with your concerns, and I'm constantly evaluating our business for operational risks and inefficiencies. There's a lot of stuff that I can't share in public about this, but what I can say is that the math works out (for now): OS X graphics processing is worth the downsides. It may not always be the case, and we're built in a flexible way where we can make a change when it makes sense to do so.
How different is it? Aren't they dependent on OpenGL and Nvidia/AMD GPUs/drivers themselves? Wouldn't it make better sense, and be more efficient, to invest in becoming platform-agnostic and optimizing this?
I only say so because it seems like imgix could massively benefit from such a move, and maybe look into other solutions which you currently can't consider (custom ARM silicon, PowerVR-based servers, professional AMD/Nvidia GPUs, etc.).
Those are important components, and we're not talking about splitting the atom here, but Apple has had a number of smart people working on graphics technologies for a long time now. imgix also has a bunch of smart people working on this, but for a much shorter period of time.
It seems as though they're prepared for this. This version 2 of their process is already moving away from an existing Apple form factor to a new one. It doesn't seem to be a leap in logic to consider that, should a new form-factor be released, they'll modify their rack cases again.
What happens if a random upgrade causes major performance issues, or worse, just flat-out breaks their use case?
Looking at you, PS3 clusters.
Given recent history, that's not going to be for a number of years.
"What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify."
Hardware will almost always cost less than engineers.
That is something that no one outside of Apple can say for certain. It doesn't even have to be a major change: something like rearranging ports, or adjusting the taper or extrusions on the chassis. Those kinds of adjustments happen all the time on consumer hardware, and most people don't notice, but they may be an issue if you're trying to fit into precision-machined slots.
> Hardware will almost always cost less than engineers.
For commodity, off-the-shelf hardware, absolutely. This is anything but, and still requires engineering effort to design, fabricate and assemble. And it's not always about the immediate dollars: sometimes a fundamental reworking means sacrificing short-term savings in favor of the long-term: flexibility, risk mitigation, reduced operational complexity, and cost over successive generations of hardware.
About the only thing that Apple could do that would render this chassis obsolete is to substantially change the exterior dimensions of the Mac Pro. Obviously if it's a different shape, we would have to adjust things.
If they kept the same shape but modified it somehow, the only dimensional change that would be truly tough to accommodate is an increase in circumference. This is the dimension with the least wiggle room built in, and it would cause some headache. We would probably have to sacrifice some density by removing 1 chassis from the rack.
Otherwise, changes to ports or minor adjustments to the length of the chassis can all be accommodated in this chassis design.
Yep, I was just responding to the assertion that it wasn't a risk.
For what it's worth, it does sound like you've thought this through really carefully, and thanks for taking the time to explain so thoroughly and respond to everyone.
First, this is awesome. Just like I want to live in a world where people are paying picodollars for cloud storage, I also want to live in a world where a bunch of mac pro cylinders are racked up in a datacenter. Very cool.
Second, this is complete silliness. I'm not going to go down the rabbit hole of flops per dollar, but there is no way you couldn't build a Hackintosh 1U with dual CPUs and multiple GPUs and come out big money ahead. Whatever management overhead gets added by playing the Hackintosh cat-and-mouse game is certainly less than building new things out of sheet metal.
Let me say one other thing: right around mid-2000 was when certain companies started selling fancy third-party rack chassis gizmos for the Sun E4500, which was the Cadillac of datacenter servers at the time. Huge specs on paper, way underpowered for the money they cost ($250k+), and the epitome of Sun's brand value. And there were suddenly new and fancy ways to rack and roll them.
This reminds me a lot of that time, and that time didn't last long...
 Our esteemed competitor, tarsnap.
I have contacted a lawyer about this (I wanted to run Hackintosh in the office), and the language is very clear. The author of the software has the full power to license its use to you with any restrictions they find necessary, no matter how ridiculous. If Apple only sells you the license if you promise not to run it on a Thursday, you'll be in violation of their terms if you run it on a Thursday.
Long story short, Stuart Knightley created a clean-room implementation named js-xlsx to do the same thing, without the lawyer strings attached.
I believe people are questioning that Apple hardware/software is a requirement of your business (and that it's not "some money", but a lot of money you'd be saving).
It's difficult to fathom Apple hardware/software being a hard requirement to operate any business (as in, you can't operate without it). Both Windows and Linux have a plethora of image utilities, audio, etc...
Sure, OSX might have some optimized image processing stuff, but couldn't the massive savings be used to scale wider with more generic hardware and still come out ahead?
The math may not work out this way forever, and when it doesn't, we'll make a change.
"Where the copyright holder makes available to his customer a copy – tangible or intangible – and at the same time concludes, in return form payment of a fee, a licence agreement granting the customer the right to use that copy for an unlimited period, that rightholder sells the copy to the customer and thus exhausts his exclusive distribution right. Such a transaction involves a transfer of the right of ownership of the copy. Therefore, even if the licence agreement prohibits a further transfer, the rightholder can no longer oppose the resale of that copy"
You can even buy the right to download future updates:
"Therefore the new acquirer of the user licence, such as a customer of UsedSoft, may, as a lawful acquirer of the corrected and updated copy of the computer program concerned, download that copy from the copyright holder’s website."
That said, it says on the box that the software is only for Apple hardware, and I think even only as an upgrade for an existing OSX install.
If you're a company, then EULAs are definitely binding, no matter where you are.
Although the "Only run on Apple Hardware" would probably be fine.
Any generic desktop computer with the same hardware. It sounds like they're using Apple's image pipeline, which I imagine would be designed around the specific graphics hardware in the Mac Pro. Sure it could work on other hardware, but when you know exactly the hardware you're running on you can do a lot of low-level optimizations you couldn't otherwise do.
I've alluded to this elsewhere, but the math doesn't add up to your gut reaction. It's cheaper, but not by a significant enough margin relative to the engineering costs, to go with commodity servers and GPUs.
Building things out of sheet metal is actually easier than migrating to Linux, for one big reason: we can pay someone else to do it, because it isn't part of our core competency. In fact, I'm pushing to open source the design of this chassis, in tandem with our design partner (Racklive). Not sure if it will happen, but I'd love to see it.
There are 2 problems I see with this design:
1: You are placing the Mac Pros on their side, which may lead to premature bearing failure on the main cooling fan. Apple designed the cooling fan to be as silent as possible, which means that they optimized the bearing and the fan to work in vertical orientation. Bearings designed for thrust (vertical) orientation may not work so well if placed horizontally for a long time.
2: You are fitting triangular shaped computers, wrapped into round cases, into square shaped boxes, resulting in significant loss of space density.
Considering that Apple is a huge company that owns huge data centers, combined with the fact that it would be simply stupid for a company that makes its own OS to run anything but that OS, and combined with the above-mentioned problems with using Mac Pros as server "logs" (because you can't call them blades), I would assume that Apple internally has OS X servers designed in the traditional blade configuration.
They may not sell or advertise them, but they MUST have them. Given that you guys are buying a ton of hardware, are located nearby, and would be actively promoting running Apple hardware, wouldn't it be wise to at least approach Apple and see if they would be kind enough to sell you some of those blade form-factor servers they simply must have?
I may be completely wrong here, but Apple did brag about how Swift is a new language that's so flexible you can make a mobile app in it, or a desktop app, or even a full-blown social network. If that's the case, they must have some plans for the server market? No?
Anyway, in the end it's a cool design, but I would seriously consider at least stacking the Mac Pros vertically to avoid fan issues. You can actually get a tighter form factor that way as well, unless space is not an issue. And if it's not, then hell, what's wrong with just placing a bunch of Pros on an Ikea shelf in a well air-conditioned room :)
2. True, but 1U per server is not bad density by any stretch. For my app servers, they effectively occupy 0.5U; database and storage effectively occupy 1U. So this puts the Mac Pros on par with the larger server class. Were we to deploy renderers in conventional server chassis, a similar system would occupy at least an effective 1U if not a full 2U.
What Apple does internally is, of course, shrouded in mystery. I know some people there, and we talk to people when we can, but they just aren't the kind of company that is going to tell you how they make the sausage.
From what I've heard and my sense from speaking with them over the years, they do not use OS X in production. They used to use Darwin and Solaris, and now almost exclusively use Linux (presumably Solaris is still around to run Oracle). They did use Xserves internally at one point, but even at their scale it isn't worth building them just for their own use.
It's also fascinating that they are running Linux internally nowadays for their server-side stuff. What next, I find out that all of the Microsoft data centers run Debian :) Considering that they employ all of those Objective-C and Swift engineers, you would think that they would want to leverage that workforce to write Obj-C or Swift backend code as well. For most backend tasks, either Swift or Obj-C is as good a language as any other.
Anyway, rackable OS X systems are a missed opportunity for Apple. They could sell them to a company like yours or to movie production houses, and even design some libraries and make a play for the web app market with Swift. Not sure how successful the last one would be. As for the economies of scale, they don't even need to manufacture or design the system: take an off-the-shelf rack-mount server from another manufacturer, fiddle around with the casing a bit to give it that Apple feel, and load OS X on it. Perhaps the margins in server-side hardware are way too slim.
Not really. For server software, why not run it on a mature, industry-standard server OS?
> What next, I find out that all of the Microsoft data centers run Debian :)
Apple don't sell server software, not really. MS does.
Why would you assume that? There are a ton of things that linux does better than OS X - and it would be extremely stupid for any company regardless of size to not use the right tool for the job. For example, even IBM uses, sells, and supports Linux instead of AIX or OS/360 on their line of servers and mainframes. I think that your assumption is just really old fashioned.
Internally Apple does use Linux, just as Microsoft uses a blend of OS's - supporting Linux on Azure, for example. I read that they actually use Linux as a host for their Hadoop service on Azure.
I am not arguing that OS X is the perfect solution in most circumstances, but it can be a good solution in many situations, especially if you are Apple, and have the full source and the capability to adapt the OS as necessary.
Microsoft, especially nowadays, tries to be very cross compatible, so it's not surprising that Azure supports Linux apps and guests. But Azure RUNS on Windows Server 2008, not Linux, not Unix.
Red Hat/Suse/Oracle etc. all sell tailored solutions for that usage that are Linux specific technology (mostly, some stuff gets ported to other Unix derivatives but most doesn't). Sure Apple could do all that too, but they don't want to. It isn't their market, so why sink money and effort into engineering OS X to do it when they can just buy high quality products ready to go?
Those kinds of products are huge investments. Sure, Apple might be able to market to the enterprise, but they simply don't think there is any money to be made. They used to have, for instance, the Xserve, which tried to stay afloat in that market but made little money. Since they canceled it, Mac OS has only been developed as a small-to-medium server (which it isn't half bad at). But big-time data centers are a different world.
For instance, as a very basic example, does Mac OS support Infiniband or the more exotic high-speed ethernet network interfaces? For Infiniband, the answer is no and in the other case the answer is "kinda, but not really."
In the pipeline: migrating to the shiny new Mac Pros along with OS X Server
Reasons: Thunderbolt 2 connectivity is amazing and works fine to connect FibreChannel RAIDs.
OS X Server: though it's correct that the GUI got simplified a bit, it's the same server package, as complex as it always has been, yet easy enough to support. If configured correctly, it's a solid workhorse for many scenarios: network accounts for lab use, a calendar and contacts server, and with some helper tools it works fine in heterogeneous environments and supports huge numbers of users via LDAP, just to name some reasons. For 20 bucks it's the best server OS to support Mac and iOS clients. And because the underlying foundation is UNIX, it's friendly with any networking stuff such as RADIUS for your WPA2-Enterprise Wi-Fi needs, just to name a few.
One thing that is not quite right in the post above: SAN support exists via XSAN.
Well, it's a supported operating system on machines that aren't cylindrical.
Image processing doesn't require double precision, so we don't need GPUs tuned for it, which means we can use FirePros and similar workstation- or server-grade cards.
Have you ever personally run a Hackintosh, full-time for a prolonged period of time?
It's anecdotal, but I can assure that once you're used to how OS X and the Apple hardware work together and never, ever, ever crash, using a Hackintosh is an exercise in frustration.
I had one of the known-best Hackintosh configurations in existence, and it didn't hold a candle to the MBP I had prior to it in terms of "it just works".
Sure, it was cheaper.
Guess what I did when that Hackintosh needed replacing? I walked in and dropped the coin on genuine Apple hardware without a second thought. I have never regretted it, and I'll never go back.
It's not a matter of cheaper for me, but a matter of fitting my needs. I don't want to run AMD graphics cards, I need PCI-E, I want lots of internal storage, I want really high single threaded CPU performance.
I can't buy that from Apple in a desktop form-factor. So I have my Hackintosh.
That being said, I don't disagree that Apple hardware is nice. I have a rMBP 13 and intend on replacing it with a newer model Apple notebook soon.
I did. But now I'm running Yosemite under KVM, VT-d motherboard, dedicated videocard and USB3 hub.
You can get to a point where "it just works".
For months and months on end of heavy usage without a single restart or issue?
EDIT: to expand a little - I was developing/compiling all day long on my ~2008 MBP with it plugged into an external monitor, network, mouse, kb. I'd close the lid and walk home with it, then watch movies, torrent, develop some more, surf etc. Close lid, and repeat for months on end. The only time I ever restarted was for OS updates, I never had a single app even crash in ~2 years of doing that.
My hackintosh (and the windows 7&8 HP machines here at work) don't hold a candle to that.
Well all I can say that there are no crashes and no causes for frustration on my end.
"It just works" for me is - I don't have to think about it, it does not get in a way.
As a bonus: configuration of the VM can be put in a VCS, whole virtual disk can be snapshotted and reverted if needed.
Regarding Windows machines, I've had desktops that would be used for months at a time (mostly rendering) without a restart and never crash.
A pretty good way to test for reliability is to let Prime95 and Memtest86 run for a week or so and see if it fails somewhere along the line (obviously proper cooling is a must), many consumer machines will fail this test.
Would you found a company and make your primary product hackintosh servers? Are you willing to stand behind your 'perfect' configuration and give those customers years of support?
These guys are running a real startup. A vendor with that exact promise and a failed delivery could tank them.
1. Apple's EULA does not allow OS X on non-Apple hardware.
2. Some major updates can break customizations and require some modifications (bootloaders, etc) to be re-installed
I have no problem helping a friend set up a Hackintosh when they want to save a few thousand dollars (I have set up a few already) with the understanding that they need to backup before doing any system updates and expect things to break after updating.
While Hackintoshes work well for personal use as long as you are somewhat techy and pick the hardware carefully (putting aside the EULA issue), it does not make sense for anything large scale.
An example of the sort of hack I'm talking about would be a graphics driver that says it's for the NVidia model E532D. Your graphics card is an E532E. You looked on the internet, and you found out they are exactly identical except for branding, so you dive in the driver and simply flip a bit to make OSX recognize it.
It's unlikely that this company needs all the hardware features of the Mac Pro - probably just the beefy GPUs. That's combined with the power density problems (and higher monthly costs) of this solution, compared to modern rack or blade servers, making it far worse value.
Compare this also to John Siracusa's woes over buying a new Mac: He wants a graphics card powerful enough to game on, and remain useful for a number of years. He wants to be able to get a retina display.
He's for now stuck with a 2007 Mac Pro as Apple don't sell a suitable machine.
When I was comparing hardware, you really couldn't buy comparable cost/flop GPUs for any significant savings (and you'd spend more on some similar builds), which was my point. No idea if that's still true. The idea that you could get the same thing for half the money just wasn't true.
Your comment about John Siracusa's problem doesn't seem relevant to the OP, although it is something to consider if you were buying a machine for home use.
Not having a screwdriver when you need it in a pinch is penny wise and pound foolish. At best you're now out 30 minutes while you drive to Home Depot, potentially during some sort of catastrophe. At worst maybe you simply cannot do the task that you need to do, because it's 2am and you're in Frankfurt. I've worked in a lot of datacenters that didn't stock basic tools to perform tasks, and frankly it sucked.
I keep a log of all of the purchases I made for the current datacenter build. Non-server / non-structural expenses account for less than $3000, which is less than the cost of a single server. This includes storage bins, carts, shelves, workbenches, chairs, supplies and tools.
A lot of cost estimates have been thrown around (here and elsewhere). The highest that I've seen is $4000 per unit. That is simply absurd. The initial run of prototypes was far less per unit, and this was a small batch made to iron out the kinks. Economies of scale and design tweaks will drive this down even further.
The chassis design is actually quite elegant from a manufacturing standpoint. That's something that I hope will be made evident by follow-up posts that delve into more technical detail.
Licensing costs, and legal costs when you get sued for violating the license?
1. Using the same operating system as the developers of the software, plus access to Apple's fantastic imaging libraries.
2. The Mac Pro, whilst expensive, is good value for money. The dual graphics cards inside it are not cheap at all. As servers with GPUs are fairly niche, this might actually be a cheaper solution.
3. The form factor. Even if you could create PCs that are cheaper with the same spec, they'll use more power, possibly require more cooling (Mac Pro has a great cooling architecture) and will take up a lot more space.
I'd be very interested in hearing how they manage updates and provisioning, however. I can't imagine that'd be much fun on OS X but perhaps there's a way of doing it with OS X Server.
1. Yeah, the OS X graphics pipeline is at the heart of our desire to use Macs in production. It's also pretty sweet to be able to prototype features in Quartz Composer, and use this whole ecosystem of tools that straight up don't exist on Linux.
2. I mentioned this elsewhere already, but it is actually a pretty good value. The chassis itself is not a terrible expense, and it's totally passive. It really boils down to the fact that we want to use OS X, and the Mac Pros are the best value per gflop in Apple's lineup. They're also still a good value when compared against conventional servers with GPUs, although they do have some drawbacks.
3. I would love it if they weren't little cylinders, but they do seem to handle cooling quite well. The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.
In terms of provisioning, we're currently using OS X Server's NetRestore functionality to deploy the OS. It's on my to-do list to replicate this functionality on Linux, which should be possible. You can supposedly make ISC DHCPd behave like a BSDP server sufficiently to interoperate with the Mac's EFI loader.
We don't generally do software updates in-place, we just reinstall to a new image. However, we have occasionally upgraded OS X versions, which can be done with CLI utilities.
Electrically, everything's built around a round "central" PCB using a custom interconnect. You're not going to be able to reassemble the thing into a rectangle and still get a functioning machine (not without tons of custom design work, at least).
Since we were able to get the Pros to the point where they effectively occupy 1U, there wasn't really any incentive to do a disassembly-style integration. Maybe if Apple announces the next Mac Pro comes as a triangle.
To your other point about the warranty and re-sale: we do care, but only a little. I budget machines to have a usable lifespan of 3 years, but the reality is that Apple hardware historically has significant value on the used market for much longer than that. So if we can recoup $500-1000 per machine after 3 years of service, that would be great.
Do you mean your Mac Pros dissipate 1/5th to 1/10th as much heat as other x86 server hardware, or is there some other factor in play that makes your AC 5-10x more power efficient?
As a result, I'm calculating that the Mac Pros draw a lot less power for cooling purposes than the Linux systems due to their chassis design. However, serviceability and other factors are definitely superior on the Supermicro FatTwins.
I'm not super familiar with it or the competition, but I assume this is what they're talking about.
For the downvoters and the unclear, the relevant bit talks about compiling exactly the instructions needed to change the image. As I understand it, this JIT recompilation of pixel shaders is effectively what was implemented in the mesa drivers for Intel chipsets.
Really interesting to hear how you provision servers, had no idea that OS X Server came with tools for that, but it certainly makes sense. I wouldn't have thought Apple would have put much time or thought into creating tools for large deployments, but glad to hear that they have.
He has some other work online that you might enjoy, not related to Macs or imgix: http://photos.miggi.me/
One of the goals of the next revision is to have LED power indicators (maybe plugged in to the front USB ports) or LCD panels built into the front of the chassis. Right now you actually can't tell that the rack is powered on unless you walk to the hot aisle and look at the power readouts, it's that quiet.
Even if you can't see when the fan itself has failed, the CPU core temp should eventually go out of the acceptable range without any forced air at all, which is also helpful to determine that hardware maintenance is required.
So far nothing has actually failed on any of our Mac Pros though. When and if that happens, the entire Pro will get swapped out as one field replaceable unit, and then put in the repair queue.
Pop into ##osx-server on freenode if you want to talk to the devs.
How the hell did you guys get funding to do this? I can't imagine any sane person wanting to put money behind this. Could I have their contact information?
Here's the quick math on cost per gflop, including all network and datacenter costs:
Mac Pro: $5/gflop
EC2 g2.xlarge: $21.19/gflop
I also think you need to redo your math on the price per gflop for a Mac Pro; you seem to be at least half the price of my back-of-the-envelope work. Unless you have some crazy good supplier.
As I noted elsewhere, I mention EC2 because all of our (funded) competitors run there. We can split hairs over whether I could save 10% on Linux systems vs Mac systems, but the elephant in the room are all of the companies trying to make this sort of service work in EC2. You can't do it, and make money at the same time. Even if you can make money at small scale, you will eventually be crushed by your own success.
My overriding goal for imgix's datacenter strategy (and elsewhere in the company) is to build for success. To do that, we have to get the economies of scale right. I believe we have done so.
I expect a useful life span for any datacenter equipment of 3 years. A Mac Pros list price is about $4000. We pay less but I'll use public figures throughout. Using equipment leasing, I can pay that $4000 over the 3 year period, with let's say a 5% interest rate and no residual value (to keep this simple). So over 3 years, I spend $4315 in total per machine to get 2200 gflop/s.
Over 3 years with EC2, a g2.xlarge is $7410 up front (to secure a 57% discount) for 2300 gflop/s.
So I can pay over time, save $3100 over a 3 year period, and probably still resell the Mac Pro for $500 at the end of its life span. That's pretty compelling math to me. There are costs involved with building and operating a datacenter, and that evens things out a bit. What really kills EC2 though is the network bandwidth costs. It is just insane.
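For anyone who wants to check the arithmetic, the lease figure falls out of standard amortization (assuming the stated 5% annual rate paid monthly over 36 months, no residual):

    import Foundation

    // Standard amortization: payment = P * r / (1 - (1 + r)^-n)
    let principal = 4000.0                              // Mac Pro list price
    let r = 0.05 / 12                                   // 5% annual, monthly rate
    let n = 36.0                                        // 3-year lease
    let payment = principal * r / (1 - pow(1 + r, -n))  // ≈ $120/month
    let macProTotal = payment * n                       // ≈ $4,315 over 3 years
    let savings = 7410.0 - macProTotal                  // vs EC2 3-yr upfront ≈ $3,095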
Compare a Mac Pro to an HP DL360 that can hold 4 8-core Xeons (32 cores total) and over 200GB of RAM along with a few FirePro or Titan GPGPUs, and the HP will give you far greater density (though a rack mount system with 4 8-core Xeons and 4 GTX Titans would be a power and cooling nightmare!). That said, the Mac Pro isn't as far behind as I would have expected.
But OS X also kicks ass at multithreading, especially if you use Apple's graphics libraries. It's entirely possible they get much greater performance from OS X than a Linux or Windows based solution could provide.
“OS X does not export interfaces that identify processors or control thread placement—explicit thread to processor binding is not supported. Instead, the kernel manages all thread placement. Applications expect that the scheduler will, under most circumstances, run its threads using a good processor placement with respect to cache affinity.
However, the application itself knows the detailed caching characteristics of its threads and its data—in particular, the organization of threads as disjoint sets characterized by their association with (affinity to) distinct shared data.
While threads within such a set exhibit affinity with each other via shared data, they share a disaffinity or negative affinity with respect to other sets. In other words, a set expresses an affinity with an L2 cache and the scheduler should seek to run threads in a set on processors sharing that L2 cache.”
Also, if you're trying to sync raw images between OS X clients and the cloud, then you're going to need OS X servers in the cloud.
It'll greatly complicate the clients workflow if they can't use their built in raw converters.
I mentioned this elsewhere, but considering alternative solutions was definitely a part of this project. Supermicro's GPGPU chassis was one of them, as well as some of the 2U FatTwin options (which we use for all of our other system types).
While it would probably have long term cost savings, it definitely isn't something that we could realize within deploying just a few systems. It would be a pretty time and labor intensive process on the software side, in order to save labor on the operations side that isn't particularly problematic for us. So, maybe in another few generations of our image renderers this will make sense, but it doesn't today.
If you want a solution that exactly matches OS X client, you need OS X.
E5-2658 v2 (dual cpu): $1440 per part
E5-4650 v2 (quad cpu): $3616 per part
None of these servers are going to be cheap.
It's the worst piece of any kind of hardware I've ever used, hands down.
I'd actually qualify this ever-so-slightly by saying "It's a good value for money if you need the specific features it offers." Which it evidently does to the OP! But many of us would prefer something with, say, one video card, one mainstream-ish desktop processor, and one mechanical hard drive, and way lower costs.
It's also a bit dear for use as a desktop machine, but it is pretty nice to have one hanging out on your desk for a few weeks.
"Building on OS X technologies means we’re dependent on Apple hardware for this part of the service, but we aren’t necessarily limited to Mac Minis. Apple’s redesigned Mac Pro seemed like an ideal replacement, as long as we could reliably operate it in a datacenter environment."
What other upsamplers look like: https://github.com/haasn/mpvhq-upscalers/blob/master/Rose.md
Looking at the other operations available, I fail to see what is done better by Quartz than just by imagemagick.
The downsampling also isn't that great.
Original image: https://raw.githubusercontent.com/haasn/cms/master/rings_lg_...
Downsampled with imgix: http://chen.imgix.net/rings_lg_orig.png?w=400
Downsampled with imagemagick: https://0x0.st/1-.png http://i.imgur.com/Nvl7tAm.png
Downsampled with imagemagick, gamma correct: https://0x0.st/1i.png http://i.imgur.com/Hrm4COb.png
Note how the luminance becomes square in the center (step back a bit if you can't see it), and also the edge pixels on the imgix version.
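If "gamma correct" is unfamiliar: averaging sRGB-encoded pixel values directly produces results that are too dark, because sRGB is nonlinear. A minimal illustration (not imagemagick's or imgix's code, just the standard sRGB transfer functions):

    import Foundation

    // sRGB <-> linear-light conversions (standard transfer functions).
    func srgbToLinear(_ v: Double) -> Double {
        return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4)
    }
    func linearToSrgb(_ v: Double) -> Double {
        return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1 / 2.4) - 0.055
    }

    // Averaging a black pixel and a white pixel down to one pixel:
    let naive = (0.0 + 1.0) / 2                                         // 0.5, too dark
    let correct = linearToSrgb((srgbToLinear(0) + srgbToLinear(1)) / 2) // ≈ 0.735
    // Resampling in gamma space darkens high-contrast detail, which is the
    // kind of luminance artifact visible in the comparison above.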
You don't even have to click the link, just simply get Chrome to load it into memory
edit: looks like something to do with Chrome's pre-fetching and https cert parsing, I think they're literally parsing the "0x0" string within the cert as a memory location
Most importantly, it doesn't have the horrible box window that the imgix resampler has.
Then again, one wonders why not just use FreeImage or something?
Isn't running shaders intensive, since they need to be compiled on the fly and handed off to the GPU driver?
I imagine, especially if the traffic is mainly for downsampling, it'd be sufficient. If it's not, then writing some custom code to do the image transforms on a GPU and bring them back shouldn't be that gnarly--and if you can afford to stick a shitton of macs in a data center, you can afford a graphics programmer to get that done.
Then again, it's often cheaper to throw silicon at problems than people. If you have in-house expertise in Apple's graphics libraries, that might be cheaper than hiring someone who could write the whole thing to run under a lower-cost Linux solution.
Alternatively, OS X might give you automatic access to patent licenses for some of the more expensive image formats.
Have they ever blogged about why they've gone down this path?
From a pure hardware perspective, I would love to move this part of the service to Linux systems with GPUs. I spent some time evaluating this before we committed to the Mac Pro solution -- built some prototype hardware and did a cost analysis. It just wasn't the right move, because of the engineering cost for us. OS X's graphics pipeline is really strong, and we've built a lot of cool things with it. There is no analog whatsoever on Linux -- we would have to commit a lot of resources to re-build what we already have, and it would in the best scenario not be a customer-visible change. As a lean startup, we have to be ruthless with the work we do: if it doesn't move the needle for our customers, it's probably not the right thing to do right now.
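To give a flavor of what the OS X pipeline hands you for free, here's a toy Core Image resample (illustrative only, not our production code; the file path and scale factor are made up):

    import Foundation
    import CoreImage

    // A GPU-backed Lanczos resample through Core Image: build a small filter
    // graph and let a CIContext execute it on the GPU.
    let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/original.jpg"))!
    let resample = CIFilter(name: "CILanczosScaleTransform")!
    resample.setValue(input, forKey: kCIInputImageKey)
    resample.setValue(0.25, forKey: kCIInputScaleKey)        // downsample to 25%
    resample.setValue(1.0, forKey: kCIInputAspectRatioKey)
    let output = resample.outputImage!

    let context = CIContext()                                // GPU-backed by default
    let rendered = context.createCGImage(output, from: output.extent)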
So instead, I've spent some time (and engaged with partners like Racklive) to get the Mac Pros to be as operationally acceptable as possible. This rack design and the chassis we designed go a long way towards achieving that goal. Airflow is taken care of, and the rack hits my power quota almost exactly (at full load). Cabling and networking and host layout follow our patterns from our conventional server racks. USB and HDMI ports on the front allow me to easily use a crash cart.
The lack of IPMI is my biggest operational headache. We have individual power outlet control and can install the OS over the network, so that's something at least.
The OS itself is also challenging. I'm not a fan of launchd. Finding legitimate information about how to do something on OS X is pretty tough, given that most of the discussions are focused around desktop users (who may be prone to pass on theories of how things work rather than facts). We've gotten it to a point where things work pretty well -- we disable a lot of services, run our apps out of circus, use the same config management system as on Linux, and so forth. We treat the Macs as individual worker units, so they're basically a GPU connected to a network card from the perspective of our stack.
This is the biggest nightmare about working with OS X, to me.
Any forum discussion you find on Macrumors or the Apple forums is hilariously misguided with pathetically bad "theories" on why something isn't working and how to fix it.
"Zap the PRAM!" can be found in any/every thread, and that's a mild example.
There are some OS X groups that are more focused on automated deployments for IT type stuff, so those can often be a source of more enlightened discourse, even though it still isn't exactly catering to our niche.
And we actually run a lot of them in production, so I've figured out how to do it and not pull my hair out constantly. That's something I'd like to write on as well, but it would be in a different medium. More technical depth, less pretty pictures.
By the way, thanks for clearly, completely and patiently responding to people in this thread.
I want to explore the design decisions around the chassis in a follow-up, and we have one interview in the can already with the industrial designer. Hopefully that article will be a little faster to get out; this one was written about 3 months ago.
The other angle that I'd love to explore in a more in-depth article is how we actually do this stuff in production, and what we've learned about it. This would delve more into the ugly OS X stuff that we painted over to get things nice and pretty in production.
One project I worked on required proprietary software that only worked on OS X: it would take a video, perform waveform analysis on the audio, and the output would be a properly timed closed-captioned master, with the text having been provided separately.
This was of course a small project, and only had a few Mac Minis rack mounted for the task, but I can easily see situations similar where you're tied to the platform for one reason or another.
If you don't have OS X in the cloud, then you're going to have to write your own raw image converter, and that means you can't sync with the OS X client native raw converter, complicating the workflow...
Not to say you can't run OSX virtualized...
"(iii) to install, use and run up to two (2) additional copies or instances of the Apple
Software within virtual operating system environments on each Mac Computer you own
or control that is already running the Apple Software, for purposes of: (a) software
development; (b) testing during software development; (c) using OS X Server; or (d)
personal, non-commercial use."
might be different for each release though
A process would drop a video file and a text file in a directory, and then a script would execute the MacCaption binary for each file with a list of parameters to get the result we wanted. A captioned video file, as well as a WebVTT caption file, would be the results of the process. Those were then put into another workflow for dissemination.
Straightforward, although MacCaption was a terrible product to work with. They're owned by Telestream now (www.telestream.net/captioning/compare.htm).
Except they needed to build and maintain that silicon.
I'm sure they've performed some kind of market analysis for this, but there are enough differences between OS X and Linux solutions that, for people who use HPC solutions (a growing market), a cleaner path from OS X to HPC would be very helpful.
It is pretty frustrating. We've joked around about how Apple will probably announce a new Xserve at WWDC next month, now that we've done the work to get the Pros happy in production.
I don't really see them re-entering this space though. Apple already has a LOT of businesses that they are clearly bored with. iPods, the Thunderbolt Display, their mice, and so on. They seem to be unable to get engineering motivation behind "unsexy" products, which I definitely think a new Xserve would classify as.
Plus, just making it rack mountable wouldn't necessarily cover our use case. What if it didn't have GPUs, or couldn't fit the ones we wanted? A lot of server class GPUs can't fit in a 1U enclosure, they need 1.5U or 2U chassis for airflow and heatsinks and whatnot.
Buyers of rackmounts require a totally different kind of service. It's not just about the iron, it's a largely separate operation from the consumer PC business. You don't exactly take your Xserve to the Genius Bar...
There simply isn't enough demand for Xserves to make it worth the investment for Apple. (As far as I remember, many companies that bought the original Xserves phased them out again because Apple couldn't deliver that kind of service.)
I try to lean on vendor support as little as possible, because it does me no good to point a finger at a vendor when something goes wrong -- I just want it fixed, even if I have to do it in-house. But you still need someone to go back to when push comes to shove, and I just don't see Apple being set up for that kind of support.
In fact, Apple isn't even set up for the kind of purchasing that goes along with it. They're a really old, staid organization when it comes to the sales structure. We wound up going with a VAR rather than direct, simply to improve the experience.
One can certainly imagine Pixar or whoever having a data-centre of Macs, but at their scale, where they also write all the software for their rendering pipeline, they can easily make that software cross-platform such that developers can test-render on a Mac, then grid-render on a Linux farm without any friction.
I used to have a tape measure from Marathon that was marked out in U, but I haven't seen it in years. They were a pretty cool company at the time.
11 years ago.
I personally felt it was a disgrace to see the Apple logo on Apple's rack-mount servers.
Considering how little rack mounted equipment is replaced versus consumer hardware, I can see why.
Yep, there are lots of other options out there. I considered at least 4 or 5 off-the-shelf ones before committing to designing and building our own.
In Sonnet's case, it is super expensive and no denser than this: http://mk1manufacturing.com/store/cart.php?m=product_detail&...
We're able to achieve twice that density, which put it right on target with where I wanted to be. 44 of 48 switch ports utilized, almost all CDU outlets utilized, and ~13kW out of 14kW utilized under load.
Neither of these is relevant in a rack-mounted environment running heavily custom-written backend/batch software with no user interface.
I get that the Mac Pro is a beautiful object, but this isn't about the mac. It's about the rack, and none of these photos let me understand it in one shot.
None of these pictures really show how that is accomplished here. In fact many of them seem to be deliberately hiding that specific aspect.
I had originally intended there to be a totally disassembled chassis with an airflow overlay on top, but it turned into a lot of work. All of the chassis were already assembled by the time we took the pictures.
The high level view is that air is drawn in to the vent on the front right, which has a separate channel that all 4 Pros sit in. They are sealed in place, so the air has to pass into each Pro's air intake to go anywhere. The other side of the chassis is open to the back of the rack and holds each Pro's exhaust vent.
I'll go through the photos we took and see if there's something that would help to illustrate this better.
But usually you keep that to yourself! To me, this reads sorta like: "Well, it was really hard to find someone who knew how to build a replacement bridge across the creek. We were pressed for time, and Bob didn't know anything about bridges, but luckily, he used to be in the Air Force and we have a bunch of venture capital. ... So we bought a helicopter instead. We only cross a few times a year, so for now we're coming out ahead and it works out for us. Plus the pictures are nice..."
It is much more expensive, though a lot less engineering work, than buying some used Teslas on eBay: http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m5...
or even brand new
The Tesla card does have a significant advantage in terms of double precision math, but that isn't the kind of workload we're doing. If we were to go with GPUs on Linux systems, the NVidia GRID card or AMD FirePro server cards are probably a better fit. Or maybe even NVidia Quadro or GTX, although they don't have the proper fan layout and there would be some tears shed over getting the power sockets cabled.
If you're a one person startup, then you do what you have to do to survive. Eventually you get to the point where free stuff actually costs you more than just paying for it in the first place.
We're also working on a third, which I think will be in the format of an interview with the Mac Pro chassis's designer.
A Mac Pro is 9.9 inches tall and 6.6 inches in diameter. At 1.75 inches per rack unit, that's 9.9 / 1.75 = 5.65U tall and 6.6 / 1.75 = 3.77U across. https://www.apple.com/mac-pro/specs/
If you look at how the airflow works on that shelf, I think you'll see why I don't have confidence in that solution. The air paths to each system seem to be based on wishful thinking.
We also didn't need to go that dense after considering each host's power draw at full load. I design towards a 208V/3-phase/50A circuit on each rack, and 44 Mac Pros at full load (plus a switch) draw about 13.5kW in my testing. So we would need to build for 60A circuits, or not completely fill the rack, to make the vertical orientation worthwhile.
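For anyone who wants to sanity-check that budget, here's a rough sketch in Python. The 80% continuous-load derating and the ~300W per Mac Pro are my assumptions (the per-host number is just the quoted ~13.5kW divided back out), not figures anyone measured for me:

    import math

    # Back-of-envelope check of the rack power budget described above.
    # Assumptions (mine, not from the thread): an 80% continuous-load
    # derating on the breaker, ~300 W per Mac Pro at full load, and a
    # ~350 W top-of-rack switch.

    VOLTS = 208           # line-to-line voltage
    AMPS = 50             # breaker rating per circuit
    DERATE = 0.8          # continuous-load derating factor (assumed)
    WATTS_PER_HOST = 300  # assumed full-load draw per Mac Pro
    SWITCH_WATTS = 350    # assumed switch draw

    budget_kw = VOLTS * AMPS * math.sqrt(3) * DERATE / 1000.0  # usable kW on a 3-phase circuit
    load_kw = (44 * WATTS_PER_HOST + SWITCH_WATTS) / 1000.0    # estimated rack load

    print(f"usable budget: {budget_kw:.1f} kW")   # ~14.4 kW
    print(f"estimated load: {load_kw:.1f} kW")    # ~13.6 kW
    print(f"headroom: {budget_kw - load_kw:.1f} kW")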
The reality of the power budget makes the most sense really. There's no point in cramming extra units in if you're going to have to rewire for them. Systems engineering!
On the topic of density: our chassis was originally specced to support 6 units rather than 4. I vetoed that because it would require a second top-of-rack switch, and would have been too power dense for our current site design.
44 turned out to be the magic number this time around. The design is also flexible enough that if the specification changes dramatically in future Mac Pros, we can tweak as necessary to achieve ideal density.
I realize it's not the Apple Way™ but considering just how bizarre and niche the current trash-can Mac Pro line is, it hardly seems more niche than that.
x0054: "You are fitting triangular shaped computers, wrapped into round cases, into square shaped boxes."
And place them horizontally. And without additional fans!
And surprisingly, if you read skuhn's answers here, for them it all still makes sense, financially.
And also surprisingly, Apple says it's OK to use the Mac Pros horizontally:
Physically, the Mac Pro itself is really densely constructed. Even with some empty space inside our Mac Pro chassis, the solution is effectively 1U per 2 GPUs. That's pretty dense, and it hits our power target for the current site design, so going denser would only lead to stranding space ahead of power (which leads to cost inefficiencies).
But, let's consider some hypothetical configs with list prices that I just looked up. Anyone can do this, and these are not reflective of my costs (you can always do better than list). In reality, I would do a lot more digging on the Linux side, but this is a reasonable config that is analogous in performance and fits into my server ecosystem.
I'm excluding costs that would exist either way: the rack itself, CDUs, top-of-rack switch, cabling, and integration labor are all identical or at least very similar. Density is very similar, so there's no appreciable difference in terms of amortized datacenter overhead.
Mac Pro config (4 systems in a 4U chassis):
- 4x Mac Pro ($4600)
- Intel E5-1650 v2
- 16GB RAM
- 256GB SSD
- 2 x D700
- Our custom chassis
Capex only: $0.70 per gflop
Linux config (4 systems in a 4U chassis):
- SuperMicro F627G2-FT+ ($4900)
- 4x Intel E5-2643 v2 - 1 CPU each ($1600)
- 8x 8GB DIMMs - 16GB each ($200)
- 8x 500GB 7200rpm (RAID1) HDD - 500GB RAID1 boot drive ($300)
- 8x AMD FirePro S9050 - dual GPU ($1650)
Capex only: $1.03/gflop
AWS config:
- g2.2xlarge @ 3-year reserved pricing ($7410)
Instance operating cost only: $3.23/gflop
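For the curious, here's roughly how those $/gflop figures fall out, as a small Python sketch. The GPU ratings are my assumptions from public single-precision spec sheets, and I've left out the unpriced custom chassis, so the results land a touch under the quoted numbers:

    # Rough reproduction of the capex-per-gflop comparison above.
    # Assumed GPU ratings (single precision): 2x D700 ~ 7000 gflop/s per
    # Mac Pro, FirePro S9050 ~ 3400 gflop/s each, and the one GK104 half
    # of a GRID K2 ~ 2300 gflop/s for a g2.2xlarge instance.

    configs = {
        "Mac Pro (4x in 4U)": {
            "capex": 4 * 4600,                                   # 4x Mac Pro at list, chassis excluded
            "gflops": 4 * 7000,                                  # 2x D700 per system
        },
        "Linux (SuperMicro 4U)": {
            "capex": 4900 + 4 * 1600 + 8 * 200 + 8 * 300 + 8 * 1650,
            "gflops": 8 * 3400,                                  # 8x FirePro S9050
        },
        "AWS (g2.2xlarge, 3yr reserved)": {
            "capex": 7410,                                       # per instance over the term
            "gflops": 2300,                                      # one GK104 per instance
        },
    }

    for name, c in configs.items():
        print(f"{name}: ${c['capex'] / c['gflops']:.2f}/gflop")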
I firmly believe that we've made a pragmatic and sensible choice for our image rendering platform today. imgix has a number of smart and talented people constantly evaluating and improving our platforms, and I'm confident we will keep making the right decisions in the future (regardless of how nicely the Mac Pro may photograph).
We toyed with open shelf type solutions that would let us mount the systems front-to-back, but as you noted, anything above 2 Mac Pros across won't fit in a 19" rack. We also thought about mounting 23" rails in our standard cabinet, but ultimately settled on this chassis and orientation.
One of our early design ideas: https://www.dropbox.com/s/15u19aivay4hfiu/2014-01-13%2017.14...
And to those questioning "Why would you use such expensive systems when commodity hardware is just as fast at half the price?" I would reply that the Mac Pro isn't all that expensive compared to most rack mount servers. If you're talking about a difference of $2000 per server, even across a full rack you're talking less than $100k depreciated over 5 years.
Though Apple is sorely lacking a datacenter-capable rack mount solution. I've always felt they should just partner with system builders like HP or SuperMicro to build a "supported" OS X (e.g. certified hardware / drivers, management backplane, etc.) configuration for the datacenter market. It's kind of against the Apple way, but if this is a market they remotely care about, channel sales is the way to go.
If they are GPU limited...
A full 4U rack of Mac Pros is 8 AMD FirePro GPUs (6GB VRAM each), 256GB main RAM, 48 2.7GHz Xeon cores (using the 12-core option), and 4TB of SSD. 10G Ethernet via Thunderbolt 2.
Let's set aside differences in GPU and processor performance; we're just looking at the base stats. All for about $36K USD, not including the rack itself.
An alternative is the SuperMicro 4027GR-TR:
So, maxed out, you've got 8 Nvidia Tesla K80 cards (dual GPU), 1.5TB RAM, 28 2.6GHz Xeon cores, and a lot of storage (24 hot-swap bays). That's in a 4U rack too.
Call it about $13K USD for the server, and $5K per GPU. Plus a little storage, call it about $56K USD with 10G Ethernet.
The SuperMicro system is designed to be remotely managed. Each GPU has double the VRAM of the AMD FirePro ones (12GB vs. 6GB).
I don't know the exact performance figures of the AMD FirePro vs. the Kepler GK210, but I'm sure the FirePro isn't nearly as good. And you've got twice as many Nvidia chips on top of that.
At some point it's going to get cheaper to rewrite the software...
K80 gflop/s: 8740
2x FirePro D500 gflop/s: 3500
K80 runs about $4900 a card, whereas the entire Mac Pro (list price) is $4000. So it's 2.5x the performance at easily 2x the cost if not more.
You're right that there is a cost advantage to going with commodity server hardware, but I don't think it's as great as most people think in this particular case. It's also far from free for us to do the necessary engineering work, and not just in terms of money. It would basically mean pressing pause on feature development at a crucial time in the company's life, and that just isn't the right move.
That 3500 gflop/s for the D700? It is instead 2200 for the D500.
The 6GB VRAM version with the D700 costs another $600 USD each.
The K80 has 12GB VRAM per GPU (24GB total per card).
If your code can use the additional memory, that is a huge difference.
Anyway, 3500 gflop/s times 8 is 28 tflop/s for the Mac Pros.
With 8 K80s, you're at 70 tflop/s. Single precision. So that's double the raw performance, and double the memory. Actual performance for a given workload? I wouldn't care to say.
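Putting the figures quoted in this sub-thread side by side (list prices, single precision, so treat it strictly as back-of-envelope), a tiny Python sketch:

    # ~$36K for a 4U of maxed Mac Pros at ~28 tflop/s, per the numbers above,
    # vs ~$56K for the SuperMicro + 8x K80 build at ~70 tflop/s.

    options = {
        "4x Mac Pro (12-core, D700)": (36_000, 28_000),     # ($, gflop/s)
        "SuperMicro 4027GR-TR + 8x K80": (56_000, 70_000),
    }

    for name, (cost, gflops) in options.items():
        print(f"{name}: {gflops / 1000:.0f} tflop/s, ${cost / gflops:.2f}/gflop")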
I'd be concerned about thermal issues too. I wouldn't be surprised if the Mac Pro gets throttled after a while when running it hard. The kind of server you can put the K80 in usually has additional (server-grade) cooling.
I'm not disrespecting you guys; if you've got a solution that works and makes you money, more power to you!
But I stand by my claim that at some point, it will be cheaper to rewrite the software for the render pipeline. Not this year I guess, and who knows, maybe not next year either.
I agree that some day in the future, it does seem like it will make sense to bite the bullet and rewrite for Linux. It probably won't solely come down to a cost rationale though, because there are a TON of business risks involved in hitting pause on new features (or doubling team size, or some combination thereof).
Fundamentally I don't believe in doing large projects that have a best case scenario of going unnoticed by your customers (because the external behavior has not changed, unless you screwed up), unless you absolutely have to.
The real reason to migrate to Linux would have to be a combination of at least three things:
1. Better hardware, in terms of functionality or price/performance
2. Lower operational overhead
3. The ability to support features or operations that we can't do any other way
Well now I'm curious as to why you aren't using the D700s. The extra gflops seem like a good value to me. Approximately 60% greater GPU performance for a 15% increase in cost, everything else being equal.
But you probably have to get some work done, rather than answer random questions from the Internet. :-)
Keep in mind that either of them offers significantly higher gflop/s per system than the best GPU ever shipped in a Mac Mini (480 vs 2200 vs 3500).
However, we have fixed bottlenecks in our pipeline as we identified them, so it is probably time to re-evaluate. I actually just had a conversation with an engineer a minute ago who is going to jump on this in the next few days. Higher throughput and better $/gflop is always the goal, just have to make sure we can actually see the improvement in practice.
2200 gflop/s and 3500 gflop/s are the specs for just one of the FirePro cards. Whoops, I was writing a lot of comments that day.
So a Mac Pro with D700 GPUs has 7000 gflop/s and runs $4600 (list), whereas the Tesla K80 has 8740 gflop/s and runs $4900 or so. Since you still need a whole server to go with the K80, I stand by my thinking that it's not a great deal. We also don't need 12GB of VRAM for our use case, so that's a bit of a waste.
In Nvidia's product line, price/gflop is not at its best in their highest end cards. AWS uses the Nvidia GRID K2, for instance. You're paying a lot for the double precision performance in the Teslas, and imaging doesn't need it.
You don't even have to rewrite it: Linux ImageMagick + OpenCV can handle the cropping and resizing use cases trivially. They could keep the rest of the code (device mappings and the CDN-related parts, I guess) unless that was also implemented in Objective-C (which is another thing I would think is crazy).
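To illustrate just the trivial cases being talked about here (and only those; this is obviously nothing like a full image pipeline), a minimal crop-and-resize with OpenCV's Python bindings, with placeholder filenames and sizes:

    import cv2

    # Minimal illustration of the "crop and resize on Linux" claim.
    # "original.jpg" and the 400x400 target are placeholders.

    img = cv2.imread("original.jpg")           # BGR ndarray, shape (h, w, 3)

    # Crop: plain NumPy slicing, here a centered square.
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = img[top:top + side, left:left + side]

    # Resize: area interpolation is a reasonable default when downscaling.
    thumb = cv2.resize(crop, (400, 400), interpolation=cv2.INTER_AREA)

    cv2.imwrite("thumb_400.jpg", thumb, [cv2.IMWRITE_JPEG_QUALITY, 85])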
Not to say that it's totally impossible to do these types of operations on ImageMagick, but it wouldn't work nearly as well as our current solution does. ImageMagick is a shockingly awful tool for use in server-land for a variety of reasons, some of which are handled better in GraphicsMagick. IM was the bane of my existence at more than one previous company.
You, as the server guy, hiring a couple of people to figure out how to squeeze another 10% of value out of the system by hacking hardware is not fungible with hiring two more devs to try to avoid racking custom hardware. As if two devs could pull that feat off anyway.
The fact they discontinued it shows that it's clearly not a market - customers didn't want it in enough volume to justify the product.
That said, there's probably not much of a market for it anymore since we've gone a few years without an OS X rack mount machine and people have found other solutions.