
Can somebody explain in a sentence or two what this actually is and what the benefits are?



I'll second that; they should make it clear what they're selling and what they have to offer. Every company should have this on their main/landing page... If I can't understand what you're selling/offering in 30 seconds, then I'm not interested :-)

If you take some time and read around their site, you'll see that they're offering a ready-to-run (turnkey) server. They have everything packed together; they've integrated everything that's needed (CPU, disk, networking...) into a nice-looking cabinet with not too many wires, and they're selling it as a complete package.

If you're in need of a server (cloud computer) and you don't want to buy separate components, unpack them, and connect them yourself, then this looks like a possible solution.


Literally on the main page there is a picture of a big computer. And then it says:

Oxide Cloud Computer

No Cables. No Assembly. Just Cloud.

Contact Sales

How much easier can they make it? They clearly want to sell computers.


Yes, but... what is a "Cloud Computer"? Is it a "computer in the cloud", like e.g. an AWS EC2 instance? Or is it a fancy name for a good old fashioned server rack (on which you will probably want to run your own cloud, because everybody's doing that nowadays, hence the name), like in this case? And if it's a server rack, how come you don't need any cables? And what do you do with this "cloud computer"? Do they host it in their data center or deliver it to you? So there is some potential for confusion - nothing that can't be mostly cleared up by reading a bit further, but still...


> Or is it a fancy name for a good old fashioned server rack

In a sense. Yes, it is a rack of servers. You're buying a computer. But we've designed it as a full rack of servers, rather than as individual 1U boxes. It comes with software to manage the rack like you would a cloud; you don't think of an Oxide rack as individual compute sleds, you think of it as a pool of capacity.

> And if it's a server rack, how come you don't need any cables?

Because you are buying an entire rack. The sleds are blind mated. You plug in power, you plug in networking, you're good to go. You're not cabling up a bunch of individual servers when you're installing.

> Do they host it in their data center or deliver it to you?

Customers get them delivered to their data center.

Happy to answer any other questions.


> like e.g. an AWS EC2 instance? Or is it a fancy name for a good old fashioned server rack

I mean, the former is just the latter with some of the setup done for you, no? Anyway, it's a full server rack, with tightly vertically integrated hardware and software. Not sure if you've poked around the rest of their site, but it seems like their whole software stack is designed with some really nice usability and integration in mind: there's a little half-snippet there suggesting that provisioning bare-metal VMs out of the underlying hardware could be as trivial as provisioning an EC2 instance with Terraform, and if that's the case, that's _massive_.

> And if it's a server rack, how come you don't need any cables?

Because they've gone to great lengths to design it not to need anything extraneous, IIUC. I think the compute sleds all mount into a shared backplane that presumably gives you power, cooling, and networking, and then, as above, you presumably configure all of that via software, as you would your AWS setup. Not an expert here though, happy to be corrected by anyone who actually knows better.

> Do they host it in their data center or deliver it to you?

Presumably the latter, given they’re a hardware company, but if their software is even a 10th as good as it seems, I fully believe there’ll be a massive market for renting bare-metal capacity from them.


> there's a little half-snippet there suggesting that provisioning bare-metal VMs out of the underlying hardware could be as trivial as provisioning an EC2 instance with Terraform,

We have a Terraform provider, yes: https://github.com/oxidecomputer/terraform-provider-oxide
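
For the curious, a minimal config looks roughly like the sketch below; treat the resource and attribute names as illustrative and check the repo's docs and examples for the exact, current schema.

    terraform {
      required_providers {
        oxide = {
          source = "oxidecomputer/oxide"
        }
      }
    }

    # The rack address and API token can also come from environment
    # variables instead of being set here; see the provider docs.
    provider "oxide" {}

    # Illustrative sketch only: attribute names may not match the
    # current schema exactly.
    resource "oxide_instance" "web" {
      project_id  = "<existing-project-id>"
      name        = "web-01"
      description = "VM provisioned through the rack's API"
      host_name   = "web-01"
      ncpus       = 4
      memory      = 8589934592   # 8 GiB, in bytes
    }

After that, terraform plan and terraform apply against the rack behave much like they would against any other provider.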


That's what some of us are saying: it's not crystal clear what they sell.

You use words like "it seems", "I think", "presumably", "I believe"... This is exactly our point. A company that has raised a $44 million Series A can surely afford to write clearly what they offer.

I understand you can't make everyone happy, and no matter what you do there will always be "weirdos" who don't like your page/design/wording, but hey, at least recognize it :-)


Yeah, but this just describes the AWS UI. I even clicked on their demo and saw some dashboard for creating VMs; pretty unimpressive.

It wasn't obvious to me that you own the hardware it runs on (I hope I understood it correctly).


> Literally on the main page there is a picture of a big computer. And then it says:

So it is big and it looks good, but there are no specs. What architecture? Some "Cinebench" numbers...?



It does look pretty cool.


It appears to be an all-in-one, preconfigured, partially-or-completely-assembled rack server for on-prem hosting, complete with pre-configured software.

Just my perception given the front page. I agree it's a bit vague.


If it is on-prem, calling it a "cloud" computer is IMHO a misleading marketing term.


Every "cloud" computer is on-prem for someone, and "private cloud" has been a term for at least a decade to refer to "API-based provisioning of resources like with a cloud but in your own data centre/office" that may not be "technically" a cloud but carries the meaning clearly for anyone in the business.

What they're selling is building blocks for private/hybrid/public clusters that may or may not fit your definition of cloud depending on where they happen to be located. What the term signifies is that it's built to be a building block of a cloud setup, with features above and beyond a "regular" server to provide what most people tend to associate with a cloud. That is, you're getting APIs to spawn virtualized compute and storage, rather than having to install a hypervisor and management APIs etc. and combine the resources into a cohesive whole yourself.

My guess is that most of them will end up being sold to either hosting providers to provide cloud services to their customers or companies large enough to operate their own data centres where the IT department will use them to offer cloud services to other parts of the business, where the term really is not misleading anyone.

It's too big even for most "private clouds" in smaller companies, because they tend to be too small to order by the rack even when there are APIs etc. offered to developers to provision compute and storage (e.g. years ago I used to operate a hybrid cloud setup with ~1,000 or so VM instances across several countries, and while we had several racks' worth of gear we physically owned in aggregate, we didn't have any full racks in any individual location).


Absolutely. The point of cloud computing is that I don't have to care where something is running and that I (theoretically) have infinite elasticity and can scale in and out as fast and much as I need to.

None of that applies here.


This server is for cloud operators - i.e. for organizations that are building their own cloud infrastructure. That's why Oxide is drawing comparisons to hyperscalers.


If you are building your own cloud infrastructure, you're not doing cloud computing.

Unless you rent it out and are a cloud provider, but the website at least does not seem to target those.


"Private cloud" has been a term for at least a decade to refer to situations where people are building their own cloud infrastructure to provide a "cloud feature set" even though the servers are self hosted.

You may not like it, but it's been a long time since "cloud" has exclusively referred to "someone in another organisation entirely runs the computers" as opposed to virtualised allocation of resources.


"Private cloud" as a term still kinda makes me itch, but I don't recall encountering an alternative that was more obvious and it's clearly the usual term of art at this point.

I suspect that while I do appreciate how some posters upthread find the website a tad on the vague side, the target customers-in-potentia will understand it fine.


There are many “you”s in most enterprises. If your platform engineering team builds their own cloud, and offers an experience similar to other cloud providers (or even better and more targeted), this could be a clear win.


You = the enterprise

The definition of cloud is “out there in the sky - not here”


I understand that’s your definition and I’m saying that there are many companies where the cloud experience (of not worrying about physical infrastructure but having flexibility and elasticity) is offered to product teams by an internal platform team. That gives you an articulation point where you could migrate from AWS to Oxide racks, and yes, lose some functionality and some guarantees, but also gain more control and potentially make huge savings.


Actually, the definition of cloud is "a visible mass of particles of condensed vapor (such as water or ice) suspended in the atmosphere of a planet (such as the earth) or moon".

https://www.merriam-webster.com/dictionary/cloud


So basically hot air. In which case I guess Oxide has a point ;)


At least in the United States we have an official definition of cloud computing: https://csrc.nist.gov/pubs/sp/800/145/final

and that isn't it...


Nonsense. The operative bit of "cloud" is "provision and de-provision instantly via an API, without much concern for what's going on underneath", not "lives in someone else's datacenter".


> provision and de-provision instantly via an API

But that is literally not possible with hardware you purchase yourself.

Sure, you can buy X amount of hardware, and provision up to X amount of virtual hardware via an API, but then what? You can't provision any more until you go and buy more hardware. This is why "cloud, but local" is a contradiction IMO. You can only be "cloud-like" if you're under-provisioning. The moment you want to actually use all of the capacity you already paid for, you're not a cloud any more, because you've provisioned all of it.

No elasticity, no cloud.


> But that is literally not possible with hardware you purchase yourself.

Sure it is. I think TFA is talking about a company selling you exactly that capability.

> You can't provision any more until you go and buy more hardware.

But this is also true of AWS etc. When their estate gets full, they need to go buy more hardware. Regardless of who owns the tin, someone's doing a capacity plan and buying hardware to meet demand.

The point of 'cloud' is that you move that function out of the team that is actually using resources to solve business problems, which is where it traditionally sat. Historically, if you wanted to run a service, you had to go buy some hardware and hire someone to manage it for you.

A cloud-like model means that the application engineers no longer care about servers, disks and switches. Instead, they just use some APIs to request some resources and then deploy a workload onto them. The details of what hardware, where and how is fuzzy and abstracted. Or cloud-like.

> You can only be "cloud-like" if you're under-provisioning

Everyone under-provisions. Nobody runs at 100% utilisation.


It’s called cloud because it’s not in your own data center. Usually cloud symbols were used in network diagrams to depict systems/networks outside of our concern.

Also, you don't really get elasticity with a system like this. If anything, that would be the operative bit for me.


With apologies to everyone who has a "The cloud is someone else's computer" T-shirt, things have changed, and the language has evolved, as it is wont to do.

I've spent the last decade building on-premises systems very much like what Oxide is doing, but I've had to build them out of stacks of servers, switches, storage appliances and VMware licenses. And the network cabling, and fan noise, and the number of power cables, and... oh man, I can't wait to install one of these things myself. Having a single point of responsibility for the whole thing shouldn't be underestimated either; I've spent far too long trying to resolve problems with vendors on both sides blaming each other.

It's worth mentioning too that building something equivalent to this would be across more than one rack, and easily cost in excess of $1M.


That was the case ~15+ years ago. Private cloud has been a term for the majority of that time to evoke the elasticity and virtualisation without the "not in your own data center" bit, because to most users of a cloud the operative bit is that they don't have to worry about where the computer is or talk to someone to provision one, not whether it sits in the corporate data centre or off at Amazon.

Hybrid clouds even mean devs might not know whether it sits in the corporate data centre or a public cloud, because it could be either/or depending on current capacity.

> Also, you don't really get elasticity with a system like this. If anything, that would be the operative bit for me.

"You" as in "the organisation as a whole" don't get elasticity. "You" as in "your department" or "you as an individual" do get elasticity.


> "You" as in "the organisation as a whole" don't get elasticity. "You" as in "your department" or "you as an individual" do get elasticity.

Right, but to the degree that you get elasticity, it starts to look more and more like "someone else's computer", no? If multiple people/departments/etc are provisioning virtual instances on one shared cloud infra, with nobody who's using the provisioning API caring about the underlying capacity (and capacity is planned indirectly by forecasting, etc), then it really starts to sound like "someone else's computer" to me. That "someone else" just happens to be another org within the same company.


Yes, that is why we talk about it as a private cloud -- it looks almost exactly like a public cloud to the people actually using it.


So in other words, “Someone else’s datacenter” continues to be a perfect description of what Cloud is.


As long as everyone has a shared understanding of what "someone else's" means, which this thread shows people don't.


> It’s called cloud because it’s not in your own data center. Usually cloud symbols were used in network diagrams to depict systems/networks outside of our concern.

That might be true historically, because the only way you could get resources provisioned on-demand via an API was from someone else who'd built it. You had to run in someone else's datacenter to get the capability which you actually wanted.

Times have changed. Now, businesses think about "Cloud compute" as being synonymous with "on-demand", "elastic" etc. Where the actual silicon lives is merely an after-thought.

> Also, you don't really get elasticity with a system like this. If anything, that would be the operative bit for me.

Buy enough of them and you will :)


> Where the actual silicon lives is merely an after-thought.

If you have to buy the silicon and plan capacity for it (as in the case with Oxide for example), then it cannot be an afterthought. Which is exactly why I would not consider it cloud computing.


The point is the application team doesn't have to do any of that.

Someone's always got to buy the tin and manage that. Some people are big enough that they might get a benefit from doing that themselves, rather than having Jeff Bezos do it for them.

From the application team's perspective, call API then container go whirr.


I work on Azure. Is it not a cloud for me because of where I work?


> you don't really get elasticity with a system like this

Of course you do, right up until the point where you’ve used all available capacity. Just like with public clouds (ask anyone using meaningful amounts of {G,T}PUs). Elasticity doesn’t imply infinitely elastic, that would be ridiculous.


> Absolutely. The point of cloud computing is that I don't have to care where something is running and that I (theoretically) have infinite elasticity and can scale in and out as fast and much as I need to.

Define "I"?

I-as-developer can call a VMware/OpenStack API with an on-prem/private cloud and get a new instance just as easily as calling an AWS API. I-as-developer does not have to worry about elasticity if the IT hardware folks have the capacity.


Private cloud is a marketing misnomer.


I'm sure all the folks running OpenStack in-house, including NASA and CERN, would disagree.

From a developer's perspective an API call is an API call, and to them it's just another instance.


VPC doubly so then.


How much you care depends on your role.

To an engineer, an Oxide system works like any other cloud provider. You're just interacting with its API and tooling like you would with Google Cloud or AWS.

To someone on the IT/Operations side, obviously there are differences, but there's SIGNIFICANTLY less labor required to build out and operate an Oxide system vs a rack full of servers. The biggest difference for these people is that there's actual hardware vs a cloud provider, but also costs are fixed, so there are likely no monthly or quarterly meetings with finance arguing over the cloud bill, tying people up to try and shave a few thousand off the bill every month.

In finance/accounting, Oxide is probably the most different: now compute is CapEx rather than OpEx. Depending on your company’s stage that can be a wonderful thing for the bean counters.


> To someone on the IT/Operations side, obviously there are differences, but there's SIGNIFICANTLY less labor required to build out and operate an Oxide system vs a rack full of servers.

But it's also gonna be much more restricted. So I guess one could see it as a kind of "Apple for data centers"? Have a nice appliance and be happy as long as it runs as it should (but hope it never stops working as it should).


They have been on HN numerous times over the years iirc. But yeah, I had completely forgotten what they did and had to read through all of it.


They've reinvented the IBM Mainframe. Big rack-sized box with lots of redundant hardware that serves guest VMs. This is basically a zSeries in drag.

The key difference is the price structure -- IBM leases the hardware wherever they can get away with it, and uses a license manager to control how many of the machine's resources you can access (based on how much you can bear to pay). This, however, is like a mainframe you own.


Mainframes also do lots of other things that this doesn't really do, like allowing you to pretend it's a bunch of multi-core machines, or even a single one.


This is exactly what it is, plus custom "cloud" software


The way I interpret it: an integrated stack of compute, storage and networking hardware and monitoring software that you can plug in to your data center (owned or colo) and can e.g. deploy Kubernetes to, and then use as a deployment target for your services.

The main USP: you own it, you're not renting it. You're not beholden to big cloud's pricing strategies and gotchas.

Secondary USP, why you buy this rather than DIY / rack computer vendor: it's a vertically integrated experience, it's preassembled, it's plug and play, compatibility is sorted out, there's no weird closed third party dependencies. Basically, the Apple experience rather than the PC experience.


It's a rack of servers you can buy for a lot of money and put in a datacenter. And once you plug it in and turn it on and do whatever setup is required, you're supposed to be able to spin up vms on it.



The site is really hard to navigate. I eventually looked at the footer and found a link to the technical specifications: https://oxide.computer/product/specifications They give an idea of the beast, especially the dimensions and weight:

  Dimensions H x W x D
  2354mm (92.7") x 600mm (23.7”) x 1060mm (41.8")

  Weight
  Up to ~2,518 lbs (~1,145 kg)
It's a rack.

BTW, they seem to follow a no-pictures policy. I found only a few pictures of boards, but no picture of the product as a whole.


Literally on the main page: https://oxide.computer/


Indeed, which demonstrates that an explicit link to Home in a visible place is never a bad idea. I didn't click on 0xide in the top left, even though that's a common shortcut to the home page, and I didn't notice the link in the footer, which is where you bury the least interesting stuff. I kept clicking on the text links in the top bar.


Can you link the picture? I only see renderings on the main page.



The Oxide rack has everything you need to stand up your own private cloud, and everything works together down to the software. It's a great alternative to buying and managing servers, switches, storage, power management, KVMs, all of the software in between, and the tens of professional services contracts required to glue it all together.

It's the iPhone of hyperconverged infrastructure.

(Sorry; three sentences.)


Modern tech stack for managing data centers using tightly integrated hardware and software.


Like Dell and Bladelogic used to be? Or VMWare ESXi and friends? Or...


I'd say more like Sun, or DEC, at least in terms of the hardware. The software is Free, which wasn't the case with those older companies.


Designing a hardware-software combination to allow for the managing of compute (and networking and storage) to occur at the rack-level rather at the individual-device (server, switch, etc) level, so that large(r)-scale operators can manage cattle more easily (rather than herding cats/pets).


Also this. It looks technically great, but how do you sell this to the business? What's the price range? Is this supposed to be an acceptable solution between on-prem and cloud as we start to realise the true costs of being hooked on cloud when you're a large org and not a startup?


In a data center you have racks of computers performing all of the workloads. At this point these racks are fairly standardized in terms of sizing and ancillary features. These are built out to solve the following:

* Physical space - The servers themselves require a certain amount of room and, depending on the workloads assigned, will need different dimensions. These are specified in rack "units" (U) as the height dimension. The width is fixed and depths can vary but are within a standard limit. A rack might have something like 44U of total vertical space, and each server generally takes anywhere from 1-4U. Some equipment may even go up to 6U or 8U (or more).

* Power - All rack equipment will require power, so there are generally looms or wiring schemes to run all cabling and outlets for all powered devices in the rack. For the most part this can be run on or in the post rails and remains hidden other than the outlet receptacles and mounted power strips. This might also include added battery and power conditioning systems, which will eat into your total vertical U budget. Total rack power consumption is a vital figure.

* Cooling - Most rack equipment will require some minimum amount of airflow or temperature range to operate properly. Servers have fans, but there will also be a need for airflow within the rack itself, and you might have to solve unexpected issues such as temperature gradients from the floor to the ceiling of the rack. Net heat output from workloads is a vital figure.

* Networking - Since most rack equipment will be networked, there are standard ways of cabling and patching in networks built into many racks. This will include things such as bays for switches, some of which may eat into the vertical U budget. These devices typically aggregate all rack traffic into a single higher-throughput network backplane that interconnects multiple racks into the broader network topology.

* Storage - Depending on the workloads involved, storage may be a major consideration and can require significant space (vertical Us), power, and cooling. You will also need to take into account the bus interconnects between storage devices and servers. This may also be delegated out into a SAN topology similar to a network, where you have dedicated switches to connect to external storage networks.
These are some of the major challenges, among many others, with rack-mounted computing in a data center. What's not really illustrated here is that since all of this has become so standardized, we can now fully integrate these components directly rather than buying them piecemeal and installing them in a rack.

This is what Oxide has to offer. They have built essentially an entire rack that solves the physical space, power, cooling, networking, and storage issues by simply giving you a turn-key box you plant in your data center and hook power and interconnects to. In addition it is a fully integrated solution so they can capture a lot of efficiencies that would be hard or impossible in traditional design.

As someone with a lot of data center experience I am very excited to see this. It is built by people with the correct attitude toward compute, imo.


Weird that there's no mention of GPUs at all, when you'd think that's what would prick up the ears of the hosting companies... stack a pile of GPUs in a rack and surely you can sell time on that.

The Oxide machines seem to be aimed at 2020.


Oxide's goal is that they, and by extension their customers, have as much visibility and control over the software stack in these racks as possible, and that includes firmware. They started developing these systems before the current wave of interest in machine learning led by ChatGPT and Stable Diffusion really got underway.

Nvidia GPU drivers are very proprietary, which means that admins and developers have limited visibility into them if they misbehave in any way. This goes against Oxide's philosophy of full visibility into a system that you purchase.

Nvidia's CUDA software has a significant lead over AMD and Intel GPUs, and they're not going to open source it any time soon. But this is a rapidly changing landscape, and AMD, Intel, and others are pouring an enormous amount of research into getting their hardware and software to match what Nvidia has going. Nvidia is in pole position, but they're not guaranteed to stay there.

There's still a large market for the CPU workloads that Oxide is offering. For now, Oxide will be concentrating on meeting this traditional compute demand. But you're right to point out that in 2023, the absence of a top tier GPU in these racks is noticeable. I suspect Oxide will want to include some form of GPU or TPU into the next version of their system, but they won't just grab whatever hardware happens to be in fashion. It needs to work with their system as a whole.


There is still a huge amount of stuff that doesn't use GPUs and never will. Claiming that it's only relevant to 2020 is pure nonsense.

They are likely looking at future racks with GPUs as well, but for a first product, getting the basics right makes more sense.


I don't think they're trying to be AWS. They're trying to sell a product that greatly simplifies doing on-site cloud for companies. So they sell the physical hardware/software bundle and not time.


They want a lot of control over all the hardware in their servers. NVIDIA isn't interested in that, so they can't provide those types of servers.


AMD? Intel?


AMD helped them write their own bootloader:

> “Oxide is a strong believer in the need for open-source software at the lowest layers of the stack -- including silicon initialization and platform enablement. With the availability of AMD openSIL, AMD is showing that they share this vision. We believe that the ultimate beneficiaries of open-source silicon initialization -- as it has been for open-source revolutions elsewhere in the stack -- will be customers and end-users, and we applaud AMD for taking this important and inspiring step!”

* https://community.amd.com/t5/business/empowering-the-industr...

See Bryan Cantrill's (Oxide CTO) presentation on their adventures in this space:

* https://www.osfc.io/2022/talks/i-have-come-to-bury-the-bios-...

* https://news.ycombinator.com/item?id=33145411 (discussion at the time)


Now to see how their development timeframes and synchronization efforts with big hardware companies go.

This is where they will enter the real “hard” part of hardware, with an exec team from software. Can they respond to the market while making hardware?

They seem to have presented themselves as generally thoughtful about their approach. If they can release major variants roughly annually, or even more often, I think that is what will enable them to win.


This seems to be about selling the whole piece of hardware, not just "time on that".

That being said I'd still expect a monthly service fee for networking, electricity and service in general.


Low risk of vendor lock-in.


This thing is the definition of vendor lock-in. It's a rack full of non-commodity hardware using non-standard connectors and a bunch of custom software.

The industry moved to commodity x86 for compute for a reason.




Search: