I have a really hard time seeing how this isn't just an Intel + SuperMicro sponsored shitpost. There's nary a mention of pricing, CPU specs, or anything else that might really help determine "for the rest of us."
Unless Intel is cutting the consumer a deal here (fat chance), I think it's safe to say this is some paid shilling.
I basically agree, but they did tell us what CPUs it takes: two sockets for "Cascade Lake" and "Cascade Lake-R" Xeon SP processors. No pricing or availability info, though, which makes it pretty unactionable.
It's hard to tell since Oxide hasn't officially announced anything. Despite the "computer" talk it sounds like they are working on more of a private cloud or converged infrastructure product.
The most direct comparison would probably be Supermicro MegaDC and various QCT servers that also use OCP mezz cards.
It doesn't seem like they took "minimal" very far. It still has dual NICs, IPMI, and a bunch of expansion ports that most buyers won't need. It's hard for an OEM to make a minimal server because they have to market to the union of all feature requirements, whereas every customer only has a few.
You want dual NICs and IPMI if you have more than a handful of boxes. They have some trimmed-down boxes with less storage and expandability in the "compute" series.
Still, the "rest of us" in the title is more like "you're not Facebook" than home hobbyist or small company.
> You want dual NICs and IPMI if you have more than a handful of boxes.
It’s more the opposite. If you only have a few servers you might want dual NICs for multihoming redundancy. OTOH if you have a lot of servers you don’t do multihoming redundancy because it adds a lot of complexity and cost to the network, so you’ll just let the whole rack die because you have thousands more.
I've run thousands of servers in an environment with dual NICs and in an environment with single NICs (or one NIC per 4 machines). I didn't run the network in either environment, but I'm sure the dual network costs at least twice as much, and probably closer to three times the cost. Still, the amount of disruption saved from single network hardware failures is worth it to me.
Letting the whole rack die and recovering elsewhere sounds nice, but you lose all of the in-flight requests. You still need to be prepared for losing whole racks from time to time, but it's way simpler to migrate away from hosts that lost one of two NICs than from hosts that lost their only NIC.
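To make the "migrate away" part concrete, here's a rough sketch of what that drain logic could look like; the names and data structures are entirely hypothetical, not from any particular system:

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        nic_status: list[bool]   # True = link up, one entry per NIC
        draining: bool = False

    def should_drain(host):
        # Drain when NIC redundancy is lost but the host is still reachable.
        healthy = sum(host.nic_status)
        return 0 < healthy < len(host.nic_status)

    def reconcile(fleet):
        for host in fleet:
            if should_drain(host) and not host.draining:
                host.draining = True
                # A real system would cordon the host in the scheduler here
                # and let in-flight work finish before reassigning it.
                print(f"draining {host.name}: lost NIC redundancy")

    # Example: one host lost a NIC, the other is fine.
    reconcile([Host("rack1-node07", [True, False]),
               Host("rack1-node08", [True, True])])

With a single NIC per host there's no equivalent graceful path: the first sign of trouble is connections dropping.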
Everyone's experience is different. I've run networks with millions of servers with single NICs and networks with thousands of servers with dual NICs. The dual NICs are a cost, complexity, and outage-inducing nightmare that's totally unjustifiable at any large scale. Yes, it can be warranted for smaller and specialized use cases, particularly with older software. But you can't push that kind of complexity into the network at very large scale and have it pay off. One day your rack will fail and your software will need to deal with it gracefully enough. IMO, solve it at the application level; don't make the network try to do it for you.
I don't think I'm going to convince you, but let me try one more time :)
> One day your rack will fail and your software will need to deal with it gracefully enough. IMO, solve it at the application level; don't make the network try to do it for you.
I agree, the rack is going to fail, and you have to be able to handle it. It's a question of what the consequences are, and how often you have them.
Without a radically atypical socket setup, you're going to at least lose the state of clients connected to hosts in the rack that fails. That will cause significant communication delays while the connection loss is detected. Probably your clients are well behaved and will reconnect and retry their requests. Most requests will be retried in a timely fashion, but some of your clients are going to have an independent network failure around the same time, and their requests are going to suffer a major delay (I'm mobile-oriented, so I'm thinking of someone trying to send a message before they go into a dead zone, etc.).
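For what it's worth, the "well behaved" part is mostly retries with backoff plus an idempotency key so a resend doesn't double-apply; a minimal sketch, where send_once is a stand-in for whatever transport the client actually uses:

    import random
    import time
    import uuid

    def send_with_retries(payload, send_once, max_attempts=5):
        # Idempotency key lets the server dedupe a request that gets resent
        # because the first attempt died with the rack (or with the client's
        # own network).
        key = str(uuid.uuid4())
        for attempt in range(max_attempts):
            try:
                return send_once(payload, key)
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise
                # Exponential backoff with jitter. A mobile client heading
                # into a dead zone can still lose this race, which is the
                # residual risk I'm describing above.
                time.sleep(min(30, 2 ** attempt + random.random()))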
Like you said, it's going to happen sometimes anyway. Redundant networks and redundant power only mitigate outages, they don't eliminate them, and there's always room for human error. But, I would pay several times the cost of the network to be able to have a significantly reduced frequency of outages.
I hear what you're saying :-) It is fairly easy to model once you have good data, and I can certainly see that some who do the modeling will find it's worth it for their case.
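A toy version of that model, just comparing expected abrupt-disconnect events per year for single-NIC vs. dual-NIC racks; every rate below is an invented placeholder, so plug in your own fleet's numbers:

    HOSTS_PER_RACK = 40
    RACKS = 100
    NIC_FAIL_PER_YEAR = 0.02               # per NIC, assumed
    TOR_FAIL_PER_YEAR = 0.05               # per top-of-rack switch, assumed
    REPAIR_WINDOW_YEARS = 4 / (24 * 365)   # assume a 4-hour repair window

    def single_nic_abrupt_events():
        # Any NIC or ToR failure abruptly kills connections on affected hosts.
        per_rack = HOSTS_PER_RACK * NIC_FAIL_PER_YEAR + TOR_FAIL_PER_YEAR
        return per_rack * RACKS

    def dual_nic_abrupt_events():
        # A single failure becomes a graceful drain; an abrupt outage needs
        # the second path to fail before the first is repaired.
        first_failures = (HOSTS_PER_RACK * 2 * NIC_FAIL_PER_YEAR
                          + 2 * TOR_FAIL_PER_YEAR)
        p_second = (NIC_FAIL_PER_YEAR + TOR_FAIL_PER_YEAR) * REPAIR_WINDOW_YEARS
        return first_failures * p_second * RACKS

    print(f"single-NIC abrupt events/year: {single_nic_abrupt_events():.1f}")
    print(f"dual-NIC abrupt events/year:   {dual_nic_abrupt_events():.3f}")

Whether the gap justifies the extra network spend is exactly the per-case judgment call you're describing.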
At the end of the day, if SuperMicro is going to sell these, they need to be able to sell enough volume to make it worth their while.
They're making a single motherboard for the 5 server configurations, so that's got to support everything they think will make it sell. It's more minimal than their normal lineup anyway.