
160 Mac Minis, One Rack - barredo
http://hackaday.com/2012/12/09/160-mac-minis-one-rack/
======
jurre
Before the thread becomes cluttered with people suggesting alternatives or
questioning why you wouldn't just run <insert manufacturer, OS, etc>, the
person that did this replied the following:

 _simbimbo says: December 9, 2012 at 11:03 am Thanks for the great write up
Hack A Day. I would like to answer some of the questions posted. @Geebles
these machines all run SSD’s and I ordered them with AppleCare, so I hope to
never have to change a drive ;-)

As for the reason I built this... Well, I guess I just like a challenge ;-),
but seriously, the company I work for has a need to have large numbers of
machines to build and test the software we make.

There were plenty of discussions of Virtual environments and other “Bare
Motherboard”/Google Datacenter-type solutions, but the fact is, the Apple EULA
requires that Mac OS X run on Apple Hardware, since we are a software company
we adhere to these rules without exception. These Mac Machines all run OS X in
a NetBooted environment. We require Mac OS X because the products we make
support Windows, Linux and Mac so we have data centers with thousands of
machines configured with all 3 OS’s running constant build and test operations
24 hours a day 365 days a year.

As for device failure, we treat these machines like pixels in a very large
display, if a few fail, it’s ok, the management software disables them until
we can switch them out. This approach allows us to continue our operations
regardless of machine failures.

@bitbass I tried the vertical approach, but manufacturing the required plenum
to keep the air clean to the rear machines cost too much for this project, but
it’s not off the table for the next rack

@Kris Lee When I open the door I can literally watch the machine temps go up,
but I can keep it open for 15-20 minutes before the core temps reach 180F

@Adam Ahhh.. Nope, you can’t have my job ;-)_

~~~
monochromatic
> these machines all run SSD’s . . . so I hope to never have to change a drive

lol

~~~
sliverstorm
Try actually contributing.

~~~
monochromatic
Pointing out a ridiculous statement is contributing. And it _is_ ridiculous.
See, e.g., <http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html>

~~~
sliverstorm
Quoting the statement and captioning it with "lol" is not contributing, at
least not on HN.

It might have been contributing if you included the codinghorror link up
front.

In response to said link, I will observe that it was written 1.5 years ago,
and SSDs have been progressing very rapidly in both performance and
reliability.

------
patangay
We did something similar at Facebook for iOS and OSX automated testing and a
few of them doing iOS app builds.

Here is a post that Jay Parikh (VP of Infrastructure) made about it.
<http://tinyurl.com/cnvss4v>

Our density isn't as high (we have 64 minis) because of cooling and cabling
that we designed according to our datacenter cooling standards.

@jurre - If you want to chat about our design, message me and I can put you in
touch with our hardware designer.

~~~
jurre
I just copied that reply from the person that built the actual rack, but I
would love to hear more about your iOS testing infrastructure/process!

~~~
jevinskie
Me too! Before we used fruitstrap[0] we used DeviceAnywhere. This was a 2U
rack unit that contained a franken-iPhone. The hardware buttons were soldered
up to GPIO on the DeviceAnywhere.

[0]: <https://github.com/ghughes/fruitstrap>

------
alanctgardner2
It's very cool, but is anyone actually tied to OS X as a server platform?
Couldn't they move to FreeBSD and save a ton of money in an application like
this? I'm wondering if there's a real business case for this, or it's just a
fun hack.

edit: I guess lumped into this is the small market that seems to exist for
colocated Mac Minis. Is there something about them that is better than renting
commodity x64 hardware?

~~~
taligent
I am currently using a cluster of Mac Minis as a server platform. Some
benefits:

* Launchd is a massive improvement over the equivalent mess on Linux. This can't be overstated if you are managing your own hardware.

* You can develop on the same machine you are deploying to.

* You have exactly the same toolchain as on Linux.

* Lots of remote monitoring options that are unique to OSX, e.g. OSX Server.

* The OS is stable and upgrades are safe enough to enable auto update. I could never do that on CentOS.

But really it comes down to hardware and resale value for me. 2 Mac Minis in
1RU is great value.

~~~
cheald
I don't have extensive experience with Launchd, but what makes it better?

OS X doesn't have the same toolchain as Linux - notably, Apple doesn't ship
anything licensed under GPLv3. This leads to OS X versions of common GNU tools
being extremely outdated.

I do agree with the point about developing and deploying in the same
environment. I do both in Linux for that exact reason; far fewer surprises
when your deploy environment matches what you've already debugged.

~~~
taligent
Everything. It is all in one place, very simple to use, has process monitoring
like Monit, and is well supported.

Nobody uses what Apple ships by default. Homebrew or MacPorts and I have
exactly the same setup as I have on my Linux server. Plus lots of amazing GUI
wrappers which can simplify setup and ongoing maintenance.

~~~
chimeracoder
>Homebrew or MacPorts and I have exactly the same setup as I have on my Linux
server.

My biggest problems with the above:

1\. Neither Homebrew nor MacPorts (nor fink, for that matter) has a complete
set of relevant packages

2\. Compilation from source is ridiculously slow.

3\. Certain utilities are best installed from .dmg files instead, and it's a
pain to remember which was installed which way.

~~~
rymith
1\. Name your package manager, and I can list the missing packages, or missing
flags, which have made my life a pain at some point in my career.

2\. Yup, and it's the only way should you need to set any flags.

3\. True. Which is why it's nice that you can mount dmg's and install pkg's
via the command line.

~~~
shredfvz
Pacman [1] [2] [3]

1\. <https://wiki.archlinux.org/index.php/Arch_User_Repository>

2\. <https://www.archlinux.org/packages/>

3\. <https://aur.archlinux.org/>

~~~
rymith
Yea, I like Archlinux a lot; it's been my favourite linux distro for a while
now.

------
rdtsc
Mac Minis are horrible server hardware. We've had a couple running as servers.
They fail randomly. Their hard drives fail. They don't rack mount easily. The
only reason to have them is if you inherit some old ones, don't want to throw
them away, and then don't mind replacing and throwing failed units away pretty
often.

~~~
simbimbo
I agree, Mac Minis are not good servers, but these are simply for testing
software.

~~~
rdtsc
Yep. It really depends on the type of load. In other words, I didn't mean
"servers" as in necessarily web servers, just machines that handle server
loads. In this case that means constant high CPU and disk churn. So if this is
a high-contention test rack running tests non-stop, its hardware will
experience a server-type level of stress.

------
meaty
I'm actually surprised any DC would take that equipment. They, in my
experience at least, are very fussy about what you put in the racks, power
draw, etc.

Oh and we get 640 cores in 20U (8x4 core xeon machines each 1u) and that
leaves enough room for a 32Tb SAN, FC switches and a pair of redundant LAN
switches.

Regarding splitting the power using the hack described: 160 melted minis and a
halon cloud coming up.

Looks pretty though.

~~~
jws
_REgarding splitting the power using the hack described, 160 melted minis and
a halon cloud coming up._

You should have a talk with your power cord provider. You should be using
cables that can handle at least 4 amps in anything with a 110v plug on it. I
don't think you can buy one smaller than 18ga and those are good for 10 amps.
Remember, you have to handle enough current to blow the breaker if something
goes wrong (unless you are British and have your own fuse in the plug).

~~~
simbimbo
We went through several designs with the vendor. This cable is designed to
operate at the circuit rating. The wire from the PDU plug to the split is 12GA
600V and the cable from the split to the mini is 14GA 300V.

------
gonzo
It was when he started talking about using solder on a 220 VAC connection that
I lost faith in him knowing how to do it right.

~~~
wlesieutre
Out of curiosity, what's the problem with that? I assume he'd have the copper
twisted together to make a good connection, with the solder just holding it in
place instead of acting as the conductor.

~~~
zalzane
depending on the resistivity of the copper wire versus the lead-tin mix in the
solder, there may end up being more current going through the solder than the
actual wire.

Even if the resistivity values of the copper and solder are similar (I'm
pretty sure they're at least in the same order of magnitude), you'll still end
up with the same amount of current running through the copper wire and the
solder.

~~~
dalke
You don't know what you're talking about. Copper is very conductive compared
to solder.

Pure copper has a resistivity of 0.0172µΩ⋅m while 63% tin/37% lead solder has
a resistivity of 0.145µΩ⋅m. (<http://alasir.com/reference/solder_alloys/>)
That's almost an order of magnitude difference.

Let's say you had the absurd case of 1/2 copper and 1/2 solder. I = I_cu +
I_solder, and V = I_cu * R_cu = I_solder * R_solder => 8x more current going
through the copper than the solder.

You would need 8x more solder than copper in order to get "more current going
through the solder than the actual wire."
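
The same current-divider arithmetic as a two-line script, using the resistivities from the table linked above:

```python
# Current split between copper and solder joined in parallel with
# equal cross-sections: both see the same voltage drop, so current is
# inversely proportional to resistivity (values from alasir.com).
rho_copper = 0.0172   # µΩ·m, pure copper
rho_solder = 0.145    # µΩ·m, Sn63/Pb37 solder

ratio = rho_solder / rho_copper   # I_copper / I_solder
print(f"copper carries {ratio:.1f}x the solder's current")  # -> 8.4x
```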

~~~
zalzane
Well hey, I guess I hit it on the head with them being at least within an
order of magnitude, eh?

You forgot to take into consideration the thermal conductivity of solder.
Although there's only 1/8th the current going through the solder than that of
the wire, solder has a much much lower thermal conductivity than copper. After
a bit of googling, copper's thermal conductivity is 401W/(mK) and bismuth
solder is 19W/(mK). Although there's less current going through that solder,
it's so much more thermally conductive it'll probably see its temperature
increasing faster than that of the copper.

That certainly doesn't help when solder's melting point tends to be around
140C and copper's is 1000C. Also, while you may have said a 50/50 mix of
solder and copper is extreme, if the OP was awful at soldering and was
using something like 16 gauge wire, it's not so ridiculous that there may be a
50/50 mix of solder to wire, hell that might even be a bit low.

~~~
dalke
Learn some basic physics.

The numbers you yourself reported show that copper is much more thermally
conductive than solder, and not the other way around.

Thermal conductivity describes how quickly the heat conducts through the wire.
Since the wire is uniformly heated, this is a minor detail (assuming that the
wire's thermal conductivity is considerably higher than the electrical
insulation around the wire). Instead, you need to look for the heat transfer
coefficient of insulated wire.

The melting point of Sn63Pb37 is 183C, not 140C. 183 is not "around 140."

If the wire were hot enough to melt the solder in a copper+solder combination
then it would be well more than hot enough to melt the plastic insulation
around just the copper wire itself, which is typically rated for only 90C.

------
madao
Considering you only really have to pay around the 60 dollar mark for the OS
now, I don't think it's much of a big deal. I use one of these at home as a
mini fileserver/wiki; it draws sweet FA, makes little to no noise, and has an
HDMI connector direct into my TV. I would happily deploy one for our company
marketing team or small-scale offices.

~~~
dfc
"Draws sweet FA"?

~~~
alanctgardner2
FA = "Fuck All"

It draws precious little electricity, in other words.

~~~
madao
Sorry, my Aussie is showing again...

~~~
mitchty
No worries, I'm American and caught on.

------
w3pm
I understand the idea of treating them like pixels, so if a fan dies or a NIC
card dies, no problem, just stop using that Mini. But what about memory
corruption or other issues that are more difficult to detect? Normally server
hardware has things like ECC memory to prevent these issues, but in this case
a Mini with bad RAM could intermittently corrupt data for some time before
it's noticed (if ever).

~~~
lallysingh
The machines are for testing. They'll detect those through secondary means. If
a machine's faulty, it'll cause two cases: (1) faulty software will register
as faulty; (2) good software will register as faulty. The third case (faulty
software marked as good) is really unlikely, and any time it does happen, a
later bug report will give a hint.

A test failure will probably bring up an engineer that will track down the
issue, and a re-test will inevitably occur. The faulty machine will eventually
(hopefully) get labeled flaky and will get repaired.

Of course, nobody may care and just use a double-test to verify that an
executable is good.
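
A minimal sketch of that double-test idea (the `run_test` dispatcher is hypothetical, not anything from the actual rack):

```python
import random

def run_test(test, machine):
    """Hypothetical dispatcher: runs `test` on `machine` and returns
    True (pass) or False (fail). A flaky machine may return garbage."""
    raise NotImplementedError

def double_test(test, machines, run=run_test):
    """Run the same test on two randomly chosen machines and only
    trust agreeing results; a disagreement implicates a machine, not
    the build, so re-run on a fresh pair."""
    a, b = random.sample(machines, 2)
    result_a, result_b = run(test, a), run(test, b)
    if result_a == result_b:
        return result_a                          # consistent verdict
    return double_test(test, machines, run)      # flaky pair: retry
```

Bad RAM then shows up as a machine that keeps disagreeing with its peers, rather than as a silently wrong test result.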

~~~
rdtsc
> A test failure will probably bring up an engineer that will track down the
> issue, and a re-test will inevitably occur. The faulty machine will
> eventually (hopefully) get labeled flaky and will get repaired.

Depending on how valuable the engineers' time is. I have seen this play out
like this: hardware gets blamed last, after hours and days of testing have
been wasted. So tests are run and re-run, blame goes all around, until
finally, after hours and hours of testing, it is determined that maybe it is
hardware after all.

In the end an engineer's time is worth a lot more than the savings obtained by
running flaky but cheaper hardware.

------
georgebarnett
Interestingly, it looks like the front fans blow _into_ the rack. This means
that if the door isn't securely closed it'll blow open - being on hinges and
having massive fans attached.

It would be better to have the fans on the back, sucking air through the rack
instead.

That said, DC floor space is cheap compared to power and cooling. I'm
surprised they didn't lower the density so as not to have a massive fire risk.

~~~
simbimbo
I looked into some rack cooling options, but was unable to find a solution
that would provide the amount and wide coverage of airflow I needed to move
air slowly and uniformly through the rack to provide effective cooling. ( I
was designing this to be used with the active cooling rear doors, so I
couldn't overwhelm the door with too much air or it wouldn't cool the air
effectively, and would raise the ambient temp of the room). So the fans move a
high volume of air through the entire cabinet (including the corners) at low
velocity resulting in very effective cooling of each of the 40 shelves.

The fans are large so they move a high volume of air at a low speed; the door
doesn't move even if left unlatched.

Also, can you please explain your "Massive Fire risk" comment? All of the
hardware installed in this rack is UL certified and all of the machines will
simply shut down if they get too hot.

------
frozenport
Why not use 2 racks?

This would solve his thermal dissipation problem and probably be easier when
compared to getting custom hardware.

~~~
liquidise
i would guess floor real estate constraints. Why do in 2 racks what you can do
cost effectively in 1?

~~~
keypusher
Because it's going to be extremely hot.

------
nsxwolf
Curious - what do we call a computer like this? It's obviously not going to
make the TOP500, but is it a "supercomputer"? I thought perhaps
"minisupercomputer" might be fitting, but according to Wikipedia that is a
term for a class of computers that became obsolete in the early 90s.

------
tzaman
I've been a proud owner of a Mini Server (slightly customised: replaced the
memory, and the primary disk with an SSD) for over a year. I use it as my main
workstation and I love it; so small (and relatively cheap, including the
upgrade), yet so powerful.

------
datums
Definitely a fun challenge. If you're going to invest in the hardware and a
custom build, forget the Y cable and figure out a better solution. Rent the
1/2 rack next to it to hold the PDUs. +1 on the massive door fans.

------
dreadsword
Seems like an expensive way to do data center stuff. Why not create a rack
full of alienware laptops or something?

~~~
taligent
Actually it's far cheaper than even an equivalent Supermicro solution, let
alone HP/Dell etc. You are getting at minimum 2 Mac Minis in 1RU, which as of
today could be an 8-core Core i7 / dual SSDs / 16GB RAM.

Plus if you want to upgrade them then you can put them on eBay and get 75% of
the original cost back. Try doing that with a server.

~~~
gonzo
The machine you quote (Mac mini: 8 core i7 / Dual SSD / 16GB ram) does not
exist.

The closest thing to it (Mac mini: 4 core i7 / single SSD / 16GB ram) costs
$1499 from Apple.

Are you proposing that I can't get a dual i7 @ 2.6GHz, dual SSD, 16GB ram from
Supermicro for less than $3,000?

I have a mini on my desk, but even as an Apple fan, I think you're off the
mark here.

~~~
taligent
The Mac mini is hyper-threaded, so it appears in the OS as 8 cores. The OSX
Server edition starts at $999. Replace the two hard drives with two SSDs
($200) and add 16GB RAM ($100). Now show me a Supermicro that will have two
nodes in 1RU for an equivalent price.

Then remind me how much it is going to sell in a year when I decide to
upgrade. I guarantee that the Mac Minis would sell in a day on eBay.

~~~
kalininalex
Hyper-threading doesn't mean you magically get an extra 4 cores. It's a
technology that works well for _some_ workloads, and for those it achieves up
to a 30% performance boost, per Intel's claim
(<http://en.wikipedia.org/wiki/Hyper-threading>). So, at best you could treat
it as approximately 5-core. This may not change the original argument, but the
Mac mini is not an 8-core computer. A true dual 4-core CPU server will have
significantly more CPU capacity.
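
Back-of-envelope, taking Intel's ~30% figure at face value:

```python
# Intel's claimed upper bound for Hyper-Threading is ~30% extra
# throughput per physical core, and only for friendly workloads.
physical_cores = 4
ht_gain = 0.30   # best case, workload-dependent

effective_cores = physical_cores * (1 + ht_gain)
print(f"~{effective_cores:.1f} effective cores, not 8")  # -> ~5.2
```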

~~~
hosay123
Hyperthreading works by duplicating just enough circuitry to allow use of CPU
resources for a thread while another is stalled on a load, nothing more. No "5
cores" about it, it's simply a hardware assisted scheduling trick.

------
lallysingh
I'm surprised both that the high density worked for consumer devices, and that
the rack wasn't prettier.

------
j45
I would love to get the specs on the tray that he put the 4 Mac minis on.

~~~
simbimbo
They're in production now. H-Squared, great vendor.
<http://h-sq.com/products/minirack/index.html>

~~~
j45
Great, thanks for the pointer!

------
mewmoo
Yes.. but why?

------
sproketboy
Yeah but can it run Crysis? /s

