
I have built out a few racks of Supermicro twins. In general I would suggest hiring your ops people first and then letting them buy what they are comfortable with.

C2: The Dell equivalent is C6320.

CPU: Calculate the price/performance of the server, not the processor alone. This may lead you towards fewer nodes with 14-core or 18-core CPUs.
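
To make that concrete, here is a rough sketch of the math (all prices and core counts below are placeholders, not quotes):

    # Back-of-the-envelope $/core at the server level, not the CPU level.
    # All prices here are placeholders -- substitute real quotes.
    fixed_per_node = 4000  # chassis share, RAM, disks, NICs (assumed constant)

    configs = {
        "2x 10-core": {"cpu_cost": 2 * 1200, "cores": 20},
        "2x 14-core": {"cpu_cost": 2 * 2400, "cores": 28},
        "2x 18-core": {"cpu_cost": 2 * 3600, "cores": 36},
    }

    for name, c in configs.items():
        node_cost = fixed_per_node + c["cpu_cost"]
        print(f"{name}: ${node_cost} per node, ${node_cost / c['cores']:.0f} per core")

The chips alone look worse per core as you go bigger, but at the server level the fixed cost amortizes and the denser nodes win (with these made-up numbers, anyway).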

Disk: I would use 2.5" PMR (there is a different chassis that gives 6x2.5" per node) to get more spindles/TB, but it is more expensive.
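
A quick sketch of the spindles/TB tradeoff (drive counts, sizes, and prices here are made up, just to show the shape of it):

    # Spindles per TB: more, smaller drives = more IOPS per TB of capacity.
    # Drive counts, capacities, and prices are hypothetical.
    options = {
        '3x 3.5" 4TB': {"drives": 3, "tb_each": 4, "price_each": 120},
        '6x 2.5" 2TB': {"drives": 6, "tb_each": 2, "price_each": 140},
    }

    for name, o in options.items():
        total_tb = o["drives"] * o["tb_each"]
        print(f'{name}: {total_tb} TB/node, '
              f'{o["drives"] / total_tb:.2f} spindles/TB, '
              f'${o["drives"] * o["price_each"]}/node')

Same capacity per node, twice the spindles (and roughly twice the random IOPS), at a higher disk bill.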

Memory: A different server (e.g. FC630) would give you 24 slots instead of 16. 24x32GB is 768GB and still affordable.

Network: I would not use 10GBase-T since it's designed for desktop use. I suggest ideally 25G SFP28 (AOC-MH25G-m2S2TM), but 10G SFP+ (AOC-MTG-i4S) is OK. The speed and type of the switch need to match the NIC (you linked to an SFP+ switch that isn't compatible with your proposed 10GBase-T NICs).

N1: A pair of 128-port switches (e.g. SSE-C3632S or SN2700) is going to be better than three 48-port. Cumulus is a good choice if you are more familiar with Linux than Cisco. Be sure to buy the Cumulus training if your people aren't already trained.

N2: MLAG sucks, but the alternatives are probably worse.

N4: No one agrees on what SDN is, so... mu.

N5: SSE-G3648B if you want to stick with Supermicro. The Arctica 4804IP-RMP is probably cheaper.

Hosting: This rack is a great ball of fire. Verify that the data center can handle the power and heat density you are proposing.




Wow Wes, these are all awesome suggestions. All of them (CPU, disk, memory, network, hosting) are things we'll consider doing. I added the Dell server with https://gitlab.com/gitlab-com/www-gitlab-com/commit/34bd78d8...

Would you mind if we contact you to discuss?


Go ahead.


I too have built out several racks of SuperMicro twins.

I'm never buying another SuperMicro, for many reasons. The amount of cabling for properly redundant connections is a killer: it's at least three cables per system (five in our case), so a rack will have hundreds of wires to manage. Access to the blades is in the back, where the cables are, and you have to think ahead and route things cleverly if you want to be able to remove blades later.

The comment about doing something other than three 48-port switches is bang on. And if you're running Juniper hardware, avoid the temptation to make a Virtual Chassis (because it becomes a single point of failure, honest, and you will hate yourself when you have to do a firmware upgrade).

19 kW is still a ton of power, and I'm surprised the datacenter isn't worried (none of the datacenters we use worldwide go much above 16 kW usable). Also, you need to make sure you're on redundant circuits, and that things still work with one of the power legs totally off. Make sure you know what kind of PDU you need (two-phase or three-phase), and that you load the phases equally.
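
A rough sanity check on the failover case, if it helps (the circuit size, voltage, and derating below are assumptions, not the actual proposal):

    import math

    # Does the rack still fit on one feed if the other drops? All numbers
    # below are assumptions -- substitute the real circuit specs.
    rack_load_kw = 19.0   # total IT load
    volts_ll     = 208    # line-to-line voltage of a 3-phase feed
    breaker_a    = 60     # PDU input breaker
    derate       = 0.8    # continuous-load derating

    pdu_kw = math.sqrt(3) * volts_ll * breaker_a * derate / 1000
    print(f"Usable per PDU: {pdu_kw:.1f} kW")
    print(f"Both feeds up:  {rack_load_kw / 2:.1f} kW per PDU")
    print(f"One feed down:  {rack_load_kw:.1f} kW on the survivor -> "
          f"{'fits' if rack_load_kw <= pdu_kw else 'does NOT fit'}")

With these assumed numbers a single 208V/60A three-phase PDU tops out around 17 kW, so a 19 kW rack loses redundancy the moment one feed drops; that's exactly the case to check with the datacenter.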


Can you elaborate on what deficiencies 10GBase-T has in server applications?


One, category 6A cables actually cost about two and a half times as much as basic single-mode fiber patch cables. Two, cable diameter: ordinary LC-to-LC duplex fiber cables at 2 mm are much easier to manage than category 6A. Three, choice of network equipment: there is a great deal more equipment that will take ordinary SFP+ 10-gig transceivers than equipment that has 10-gigabit copper ports. As a medium-sized ISP, I rarely if ever see 10-gigabit copper.


SFP+ 10GBase-T third-party modules started hitting the market this year, but none of the major switch vendors offer them yet, so the module coding effectively lies about what cable type it is. Just an option to keep in your back pocket, since onboard 10Gb is typically 10GBase-T. Thankfully, onboard SFP+ is becoming more common.

For short distances of known length, twinax cables (which are technically copper) can be used. They're thinner than regular cat6a, about the same as thin 6a, and thicker than typical unshielded duplex fiber patch cables. Twinax can be handy when connecting Arista switches to anything else that restricts third-party optics, since Arista only restricts other cable types. Twinax is also the cheapest option.


Latency, power and cost.

The PHY has to do a lot of forward error correction and filtering, so it adds latency (for the FEC), power (for all the DSP) and cost (for the silicon area to do all of the above).


The latency is almost certainly immaterial, 0.3uSec v. 2uSec. That's MICROseconds not MILLIseconds. Power draw used to be an issue, but not anymore.

Consider the power draw for copper vs. fiber listed here [2]. The 128-port Arista 7050TX pulls 507W while the 128-port 7050SX pulls 235W. Yes, copper draws more, but we're talking about half a kW for two ToR switches. And for that you get much cheaper cabling, as SFP+ optics are much more expensive (go AOC if you do go fiber, BTW) and you have to worry about keeping the fiber clean, etc.
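
Putting a dollar figure on that delta (the electricity price and cooling overhead below are assumptions):

    # Annual cost of the extra ~272W per switch, using the datasheet
    # numbers above. The $/kWh and PUE figures are assumptions.
    watts_copper = 507    # 7050TX, 10GBase-T (from [2])
    watts_fiber  = 235    # 7050SX, SFP+ (from [2])
    switches     = 2
    usd_per_kwh  = 0.12   # assumed
    pue          = 1.5    # assumed cooling/distribution overhead

    delta_kw = (watts_copper - watts_fiber) * switches / 1000
    kwh_year = delta_kw * pue * 24 * 365
    print(f"Extra draw: {delta_kw:.2f} kW -> ~{kwh_year:.0f} kWh/yr "
          f"-> ~${kwh_year * usd_per_kwh:.0f}/yr")

Call it several hundred dollars a year with these assumptions, which is small next to what you save on optics and cabling.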

Where fiber is REALLY nice is with the density, although the 28AWG [3] almost makes that moot.

There are few DC builds that can't do end-to-end copper anymore, at least for the initial 50 racks.

[1] - http://www.datacenterknowledge.com/archives/2012/11/27/data-...

[2] - https://www.arista.com/en/products/7050x-series

[3] - https://www.monoprice.com/product?p_id=13510


I figure it's simply the availability of datacenter level switches (and other components).


I politely disagree on the 10GBase-T. I built out hundreds of nodes with ToR 10G using Arista switches and it worked great.


With regards to the switches, I would argue that they should skip SDN switches altogether and get some Cisco Catalysts as ToR switches: two 48-port switches for each rack, with redundant core routers in a spine-leaf topology. SDN is cool, but at the current scale it seems like it would be more resource-intensive than it is worth.


This is one rack of equipment; no spines are needed. http://blog.ipspace.net/2014/10/all-you-need-are-two-top-of-...

Cumulus isn't really SDN so I'm not sure what you're saying there.

Traditional networking is fine but it's totally different than Linux so you need dedicated netops people to manage it. And Cisco is the most expensive traditional vendor.


I'm not super well-versed in networking. I misread the posting; I thought they were looking at 64U of servers.

I've just gotten the notion that SDN is almost ready for primetime, just not yet.


It's been almost ready for primetime for over a decade now.


Agreed, skip SDN. Skip MC-LAG.

Juniper, Cisco, and Arista all have solutions for this environment.



