
Minimalist Hyperscale Servers for the Rest of Us - rbanffy
https://www.nextplatform.com/2020/03/26/minimalist-hyperscale-servers-for-the-rest-of-us/
======
rubyn00bie
I have a really hard time seeing how this isn't just an Intel + SuperMicro
sponsored shitpost. There's nary a mention of pricing, CPU specs, or anything
else that might really help determine "for the rest of us."

Unless Intel is cutting the consumer a deal here (fat chance), I think it's
safe to say this is some paid shilling.

~~~
toast0
I basically agree, but they did tell us what CPUs: two sockets for "Cascade
Lake" and "Cascade Lake-R" Xeon SP processors. No pricing or availability info
makes it pretty unactionable.

------
leetrout
Is this the same space as oxide computers?

~~~
wmf
It's hard to tell since Oxide hasn't officially announced anything. Despite
the "computer" talk it sounds like they are working on more of a private cloud
or converged infrastructure product.

The most direct comparison would probably be Supermicro MegaDC and various QCT
servers that also use OCP mezz cards.

------
altmind
"Minimalist servers" don't mention the depth of their 1U offerings. They won't
even fit in your normal rack, yet they are "minimalist".

------
zdw
How is this not just marketing material for Supermicro, pointing out the niche
they fill?

I've used and enjoyed their products, but this seems a bit too close to being
an ad.

------
thedance
It doesn't seem like they took "minimal" very far. It still has dual NICs,
IPMI, and a bunch of expansion ports that most buyers won't need. It's hard
for an OEM to make a minimal server because they have to market to the union
of all feature requirements, whereas every customer only has a few.

~~~
rbanffy
You want dual NICs and IPMI if you have more than a handful of boxes. They
have some trimmed-down boxes with less storage and expandability in the
"compute" series.

Still, the "rest of us" in the title is more like "you're not Facebook" than
home hobbyist or small company.

~~~
erentz
> You want dual NICs and IPMI if you have more than a handful boxes.

It’s more the opposite. If you only have a few servers you might want dual
NICs for multihoming redundancy. OTOH if you have a lot of servers you don’t
do multihoming redundancy because it adds a lot of complexity and cost to the
network, so you’ll just let the whole rack die because you have thousands
more.

~~~
toast0
I've run thousands of servers in an environment with dual NICs and in an
environment with single NICs (or one NIC per 4 machines). I didn't run the
network in either environment, so I can't say exactly, but dual networking
surely costs at least twice as much, and probably closer to three times the
cost; still, the amount of disruption saved from single-network hardware
failures is worth it to me.

Letting the whole rack die and recovering elsewhere sounds nice, but you lose
all of the in-flight requests. You still need to be prepared for losing whole
racks from time to time, but it's way simpler to migrate away from hosts that
lost one NIC out of two than from hosts that lost one out of one.

~~~
erentz
Everyone's experience is different. I've run networks with millions of servers
with single NICs and networks with thousands of servers with dual NICs. Dual
NICs are a cost, complexity, and outage-inducing nightmare that's totally
unjustifiable at any large scale. Yes, it can be warranted for smaller and
specialized use cases, particularly with older software. But you can't push
that kind of complexity into the network at very large scale and have it pay
off. One day your rack will fail and your software will need to deal with it
gracefully enough. IMO, solve it at the application level; don't make the
network try to do it for you.
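
To make "solve it at the application level" concrete, here is a purely
illustrative sketch (the replica addresses, port, and request shape are all
made up): the client keeps a replica list spread across racks and simply moves
on when one host, or its whole rack, stops answering.

    # Illustrative only: client-side failover instead of network redundancy.
    # Replica addresses and the request format are hypothetical.
    import socket

    REPLICAS = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]  # spread across racks

    def send_request(payload, port=9000, timeout=1.0):
        last_error = None
        for host in REPLICAS:
            try:
                with socket.create_connection((host, port), timeout=timeout) as conn:
                    conn.sendall(payload)
                    return conn.recv(4096)
            except OSError as exc:  # host down, rack down, or a network blip
                last_error = exc    # try the next replica
        raise ConnectionError("all replicas failed: %r" % last_error)

The point is that the retry logic lives in the client, so a dead rack costs one
timeout per in-flight request rather than a redundant network.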

~~~
toast0
I don't think I'm going to convince you, but let me try one more time :)

> One day your rack will fail and your software will need to deal with it
> gracefully enough. IMO, solve it at the application level; don't make the
> network try to do it for you.

I agree, the rack is going to fail, and you have to be able to handle it. It's
a question of what the consequences are, and how often you have them.

Without a radically atypical socket setup, you're going to lose at least the
state of clients connected to hosts in the rack that fails. That will, at
minimum, cause significant communication delays while the connection loss is
detected. Probably your clients are well behaved and will reconnect and retry
their requests. Most requests will be retried in a timely fashion, but some of
your clients are going to have an independent network failure around the same
time, and their requests are going to suffer a major delay (I'm
mobile-oriented, so think of someone trying to send a message right before
they go into a dead zone, etc.).

Like you said, it's going to happen sometimes anyway. Redundant networks and
redundant power only mitigate outages, they don't eliminate them, and there's
always room for human error. But I would pay several times the cost of the
network for a significantly reduced frequency of outages.

~~~
erentz
I hear what you're saying :-) It is fairly easy to model once you have good
data, and I can certainly see that, if you do the modeling, some will find
it's worth it for their case.
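
As a rough illustration of what that model looks like (every number below is
made up, not data from either of our environments):

    # Back-of-envelope trade-off model; all inputs are invented assumptions.
    RACKS = 100
    NIC_FAILURES_PER_RACK_PER_YEAR = 0.05  # single-NIC: each failure drains a rack
    DUAL_NIC_FAILURE_REDUCTION = 0.9       # dual NICs mask most of those failures
    COST_PER_RACK_DRAIN = 2_000.0          # lost in-flight work, ops time, etc.
    SINGLE_NETWORK_COST = 100_000.0        # yearly network spend, single-homed
    DUAL_NETWORK_MULTIPLIER = 2.5          # "twice... probably closer to three times"

    single_outage_cost = RACKS * NIC_FAILURES_PER_RACK_PER_YEAR * COST_PER_RACK_DRAIN
    dual_outage_cost = single_outage_cost * (1 - DUAL_NIC_FAILURE_REDUCTION)

    print("single-homed total: $%.0f/yr" % (SINGLE_NETWORK_COST + single_outage_cost))
    print("dual-homed total:   $%.0f/yr" %
          (SINGLE_NETWORK_COST * DUAL_NETWORK_MULTIPLIER + dual_outage_cost))

With these particular assumptions the dual-homed network never pays for
itself; crank COST_PER_RACK_DRAIN up by a couple of orders of magnitude and it
does, which is exactly the point: the answer falls out of your numbers, not
the architecture.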

