I've been following Oxide for a while and it certainly seems like a really neat operation, but I hope they consider going downmarket someday too. At least around here, a lot of the demand for self-hosted stuff is in SMB, and while it's undoubtedly a much smaller market, it'd be nice if some of this innovation, openness, and self-reliance were available even to those without super deep pockets. I can completely understand starting with the high end: there is much higher margin and a much better revenue:support ratio. But it'd be awesome to have 5-figure or even 4-figure options down the road. Might also, as so often happens, end up helping develop a sales pipeline later on. As it is, it's just fun to read about.
We for sure love the support we've gotten from folks who say "Hey I don't need a full rack but I love what you're doing."
The issue isn't desire, it's more one of focus and design difficulty. The design of the product starts from "what if you're building servers by the rack," so a lot of choices make sense in that context but not necessarily in others. Building something smaller therefore effectively means building a different product. And of course, many companies have multiple product lines, but they're also much more mature than we are: we've only shipped two racks so far (more to come soon!). That's where the "focus" comes in: we need to build a sustainable company first, and then expand second. I can't speak to where the future goes, whether we will eventually go in that direction, or on what timescale, but what I can say is that we hear you.
First, thanks so much for the reply! As I said, I completely understand this as a starting approach; there are lots of good reasons for it. I just hope that if you do well and refine things you'll be able to scale down the approach someday. That said:
>The design of the product starts from "what if you're building servers by the rack,"
As you surely know, there are a lot of racks smaller than 42/48U ;). I'm not at all opposed to your "build by the rack" approach; I'm just hoping there might eventually be options from you that are 12U or 18U racks, or 42U racks where 6/12/18/24/36U/whatever is your stuff and the rest is a bit of extra space for a handful of miscellaneous customer-racked gear that doesn't fit into your buckets (a couple of racked Macs, for example, which surprisingly can still be a thing).
Anyway, absolutely you'd have to make it that far first; I absolutely don't want to see you succumb to the classic startup issue of over-expanding ahead of scale efficiencies and running out of runway. But what you're doing is exciting, and I guess I just hope to see the benefits get more decentralized someday is all. Right now I have a hodgepodge of OPNsense/Omada/UniFi/TrueNAS/Proxmox (formerly VMware, but that's down the toilet now)/etc. deployed with various clients and for myself in rural New England. It works well overall, including when the WAN drops, as it does fairly regularly, but it'd be real helpful to have more refined options someday! So wishing Oxide the best of luck.
You can rent partial rack space from almost any colocation provider, and it comes in units anywhere from 1U to 42U. When I did my first startup, it ran on two 1U servers whose rack space I rented for about $150 a month, including power and bandwidth. My partner and I got access badges, walked in, and racked and cabled them ourselves.
I've exchanged messages a bit with Oxide. I wanted to write about them, but they are many time zones away from me, so a live talk is very hard to schedule.
I couldn't find enough info in text form to research a story. I am a speedreader: I can happily cruise along at 1000 words a minute and can burst up to 3000 wpm for short periods. (I read whole novels this way in childhood: a 1970s novel of 200pp took 25 minutes to half an hour.)
It's how I do my job as a writer: lots and lots of reading. Tens of thousands of words a day.
I can't do this with speech. I can't follow speech at 2x speed, and that's still less than half my normal reading speed. 1.5 to 1.75x is the maximum. I'd need a week to listen to the research material for every short article I write.
No text discussion == no coverage.
I am delighted that my colleague was able to do this story, but he's on the same continent as Oxide.
I can't bear talk radio, it's so slow. I've never listened to a podcast in my life; the mere idea horrifies me. You want me to squander an hour on some unscripted chatter about something I could read in 5-10 minutes? Dear gods, no!
:-(
> Rather than kludging together a bunch of servers, networking, storage, and all of the different software platforms required to use and manage them, Oxide says its rackscale systems can handle all of that using a consistent software interface.
Not really. More like one of those 8U blade boxes which holds >16 servers and reduces cabling by including a backplane - but extended to a full rack.
They have made a lot of fuss about the "hardware root of trust", but that seems more like a fetish of the founders than something regular customers are asking for.
Personally I am more interested in the software stack - a much simpler OpenStack alternative is very welcome, as long as I can use my own hardware.
Your laptop and your phone have hardware roots of trust. All the big cloud vendors have it. Developing modern hardware without it would be insane and irresponsible.
If I am going to invest many millions into infrastructure, you'd better expect that I expect some basic safety features.
I guess you just made my point for me - a hardware root of trust is commonplace in modern computing, so I don't really get why they spent so much time talking about it.
They have a podcast that talks about a lot of different parts of their product. Lots of parts of their product are "commonplace"; that doesn't mean that talking about the specific approach they took can't be interesting. Just because other things have a root of trust doesn't mean they can't be improved. And of course, having a fully open source root of trust isn't common, especially not in servers.
Of all things, they have actually not talked about the hardware root of trust itself that much. There is no full podcast episode on only the root of trust, compared to things like boot firmware, VM migration, network design, and so on.
What they have talked more about is system firmware in general, including their embedded OS. Part of why the chip they used for the root of trust came up is that they found multiple bugs in that chip. It turns out secure silicon is really hard, and if you don't have secure silicon you don't have a great hardware root of trust.
Pretty much everything with Oxide follows the same pattern: generally do what the industry has done before and already uses, just do it better and more openly. The root of trust is one example of that.
I actually wish they would talk about it more, as they have not shared much. I think they have mentioned it will be part of a podcast episode at some point.
Ok thanks, sounds like I should finally check out this podcast. Instances where I have seen the hardware root of trust mentioned were all on Twitter I think.
I didn't say no-one wants it, just that no-one is asking for it - which is exactly what you would expect for a component which is already a standard part of server hardware.
To be fair, a lot of modern mainframe deployments juggle multiple VMs/LPARs at this point, so you can run all sorts of transaction-processing/batch-processing workloads in a single cabinet. Hell, IBM will even sell you extra cores to run Linux VMs if you want, without paying more for support on top.
I am fascinated by these very high-end machines and their more direct competition. My feeling is that they are gunning more towards the Oracle Exadata and SPARC M8 and their Fujitsu counterparts and IBM mainframes than towards racks full of generic Dell boxes (or OCP equivalents).
Wouldn't they be vulnerable, BTW, to an OCP integrator offering the same functionality with plug-compatible parts?
Maybe they aren't targeting IBM customers - but I definitely think they are going to give people used to buying a bunch of Dell boxes, some network gear, and VMware a more IBM-like (but less proprietary) experience. A huge box which just works, reliably, with impressive throughput.
> Wouldn't they be vulnerable, BTW, to an OCP integrator offering the same functionality with plug-compatible parts?
There's such a small market of folks openly selling OCP accepted or inspired (OCP's terminology) systems. You definitely are gonna need to get quotes.
OCP does a ton of super great things, but alas it seems like actually making reference designs for computers has somewhat fallen by the wayside. I could be totally missing the goods here, but I see a bunch of very abstract system configuration specifications in https://www.opencompute.org/wiki/Server/Working . Please, I would love to be wrong here, but I don't see actual plans for building systems, just very abstract specs.
OCP just doesn't seem like it means all that much. Sure, sometimes they have good specs that one can ask for. But in terms of actually building systems it still seems like a free-for-all where OCP offers little guidance.
> OCP just doesn't seem like it means all that much.
Sure, but a pre-assembled rack of OCP inspired (and compatible) hardware combined with the right software would be a significant competitor, with the advantage that parts of different specs (and still compatible) could be sourced from various suppliers allowing the system to be continuously expanded and/or updated.
What they seem to offer is mainframe-like throughput with generic software (mainframes also run Linux, and do that really well) and proprietary hardware made from generic parts.
That's just a fundamentally different company you are describing. That's not the kind of company they wanted to build and that approach simply wouldn't result in the product they envisioned.
If OCP stuff with some extra software could have easily enabled this kind of product, then the company wouldn't have to exist in the first place.
> sourced from various suppliers
They deliberately adopted a single-supplier strategy. Again, this is exactly what they did not want. See their 'Common Wisdom' podcast.
Multiple suppliers have advantages, but also disadvantages.
> What they seem to offer is mainframe-like throughput with generic software
The software isn't generic at all; it's very custom, but unlike with a mainframe it's open all the way down.
And a mainframe is just a very different architecture; theirs isn't really like a mainframe at all.
> and proprietary hardware made from generic parts
It's their own custom board design for both the server and the switch, a very different architecture overall.
It depends what you mean by generic parts. Sure like everybody else they didn't design every single part themselves.
But how the parts are combined and enabled with software is the unique selling point.
Although it's a ton of rambling at times, I love the Oxide and friends podcast. As I'm interested in storage, the episode "Crucible, The Oxide Storage service" was awesome.
P.S. renting physical servers or hosting in your own colo is also still viable, especially with amazing second-hand rack server prices. The cloud isn't the only option.
Counterintuitively, clouds are heavy, in a way, because they're so large.
The Sun is another interesting (size * density) example. The Sun's "power per unit volume" is only about the same as a compost pile's. But because it's so large, this is enough to keep it glowing brightly.
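If anyone wants to sanity-check that, the arithmetic is quick. A rough Python sketch (the solar figures are standard textbook values; the "few W/m^3" for an active compost pile is just the commonly quoted ballpark, not a measurement of mine):

    # Back-of-the-envelope: average power density of the Sun.
    from math import pi

    L_sun = 3.8e26                      # W, total radiated power
    R_sun = 6.96e8                      # m, solar radius
    V_sun = 4 / 3 * pi * R_sun ** 3     # m^3, roughly 1.4e27

    print(L_sun / V_sun)                # ~0.27 W/m^3 averaged over the whole Sun

So averaged over its whole volume the Sun puts out well under a watt per cubic metre, which really is compost-pile territory; it only glows brightly because there are about 1.4e27 of those cubic metres.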
AIUI the 15kW power limit per rack is going to be a constraint down the line, since it appears that newer AMD hyperscale hardware will be built for a higher level of power density, one that pretty much relies on the use of liquid cooling. To some extent, that was the point of those new Zen 4c and 5c core designs. Even with Oxide's use of rack-scale fans, I'm not sure that they'll be able to shed the amount of heat that these newer chips are going to be designed for. Of course there are very similar concerns if you want to do HPC with CPU+GPU compute, too.
Plug is either CS8365C or L22-20P. 15 kW is fairly typical for a 'generic' data centre rack.
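A rough back-of-the-envelope on why those connectors line up with ~15 kW (a sketch only, assuming the usual 80% continuous-load derating and a power factor near 1; real PDU budgets will differ):

    # Approximate capacity of a 3-phase feed: sqrt(3) * V * I * derating.
    from math import sqrt

    def three_phase_kva(volts, amps, derate=0.8):
        return sqrt(3) * volts * amps * derate / 1000

    print(three_phase_kva(250, 50))   # CS8365C: 50 A @ 250 V 3-phase  -> ~17.3 kVA
    print(three_phase_kva(480, 20))   # L22-20P: 20 A @ 277/480 V wye  -> ~13.3 kVA

Both land in the low-to-mid teens of kVA, which is roughly why ~15 kW per rack is such a common budget.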
For something like HPC, many systems are getting into water cooling nowadays: either air-cooled servers with a rear-door radiator (60kW), or with water-cooling right to the servers.
I wonder how much such concerns will matter in real-life scenarios. They mentioned they can use the power-saving features of AMD CPUs better than any other hardware vendor. Interesting offering from Oxide nevertheless. They seem to take a broad system view: it's as if the rack is one very large PC, not a bunch of PCs in a rack. Makes me think of IBM a bit.
"With all the hype around AI, the importance of accelerated computing hasn't been lost on Cantrill or Tuck. However, he notes that while Oxide has looked at supporting GPUs, he's actually much more interested in APUs like AMD's newly launched MI300A.
The APU, which we explored at length back in December, combines 24 Zen 4 cores, six CDNA 3 cores, and 128GB of HBM3 memory into a single socket. "To me the APU is the mainstreaming of this accelerated computer where we're no longer having to have these like islands of acceleration," Cantrill said."
Ok, but does it provide the performance customers want?
From the AWS/Azure perspective, their "large" system is an AZ
Cloud providers use logical abstractions to draw a line around the "system"... Beyond the crush HN has on Oxide, it isn't clear to me how their product is a gain. Who is their customer? Someone with needs bigger than one server but smaller than one rack?
3000 lb sounds like a secondary concern until you consider the wages and benefits of facilities folks.
Some will see this as a strong argument for workers' rights, some will see it as a strong argument for robotic workers. Some will ignore the issues with the servers' physical characteristics because they don't have experience working in server rooms.
The meta-point then would seem to be that material conditions dictate the unfolding of reality. It is the offering of opportunities and the economic conditions by which they become available that determines the destiny of every agent participating in said economy.
Use the right tool for the job.[1] There are battery-powered tugs for moving server racks around. With a 3000 lb, 9-foot-tall rack, this is not optional. The next problem is earthquake resistance: any load that top-heavy needs to be secured to a strong column or wall. And four little casters for a 3000 pound load? Those are far too small for a load heavier than many cars.
Have they shipped this thing, or is that just a render? No cables are attached, so it's not in operation. It's certainly possible to make this work, but they have to get serious about floors, ramps, access, and structural support.
I promise that we have shipped several of them around. We have one parked in a cage in the south bay, and I believe aside from getting the crate on and off the truck, it was almost certainly rolled into the DC on its castors. I have personally helped push one or two of them into the crates we use, which is something of a production but doable. I also occasionally have to shunt a few of them around the office, which I have been able to do on the castors there as well; the floor is relatively flat concrete.
Probably a stupid question, but wouldn't this be assembled on site? I guess 3000 lb is the weight of the full system with all the blades in place and disk bays full, not the weight of the empty rack.