Oxide Computer Company: Initial boot sequence (oxide.computer)
509 points by steveklabnik 5 days ago | 398 comments

There were three large threads. We merged them. The other URLs:



Comments about the garage were probably in response to the jessfraz.com post. Other than that I didn't see any merged comments that were specific to one URL.

These are just my first impressions, so they may be wrong.

Looking at their blog, their proposition appears to be: AWS, Google, and Azure can build and manage their systems better than Random Corp. can by ordering servers from Dell and managing them itself.

Oxide Computer will try to bridge that gap and let anyone buy efficient, manageable servers.

This is of some interest to me as there are lots of use cases for which cloud isn't practical or has regulatory challenges, and this should give organizations for whom this is an issue some good options. If nothing else, it'll be interesting as a target for Google's Anthos or Azure's On-Premises service.

Edit: This is a good summary[1], I think:

> Hyperscalers like Facebook, Google, and Microsoft have what I like to call “infrastructure privilege” since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else!

[1] https://blog.jessfraz.com/post/born-in-a-garage/

Call me skeptical. Part of what allowed Google/Amazon/Facebook to "build their own servers" is that they're ordering 10,000+ at a time from an ODM like Foxconn who doesn't want to deal with people that want 100 at a time.

But with that comes rigid standardization... which typically hasn't worked well in the enterprise. Everybody wants to be cloud-like until they figure out being cloud-like means you lose all of your hardware-level flexibility. Oh you wanted one with 512GB of memory instead of 768? Too bad.

I'm guardedly optimistic. Some people are nervous about how vague they're being, but I'm gonna wait and see whether they're just playing it very close to the vest. The problem with talking excitedly about the things that are safe to talk about is that it makes what isn't being talked about conspicuously awkward. I'm sure a lot of us got a refresher in how that feels last week.

Backblaze has their own custom server racks and we seem to like them just fine.

Backblaze isn't selling their custom server racks to third parties to make a profit... they built their own initially because they were trying to make it as cheap as humanly possible and cut a ton of corners to get there. They had a ton of growing pains too which is why we're on POD version 6.0, soon to be 7.0. They modified as they ran into hurdles, most of which would've been avoided going with one of the major server vendors. On the flip side, they would have spent a TON more money on features they decided they didn't need, and their business model likely wouldn't have been viable with that overhead.

I HIGHLY doubt Oxide is offering to make small batch custom servers as part of their "bring the cloud on-prem". It's FAR more likely they're talking about taking OCP designs and adding their own software management stack on top.

(What happened last week? Saw a few articles, honestly unsure which applies here)

Oh, Thanksgiving. The most important time of year for Americans not to bring up politics or old hurts.

I disagree. You don't lose hardware level flexibility, you gain hardware level flexibility. "Oh you wanted to only use 256GB memory and X CPU cycles during this time period, unless Y event occurs in which case boost to 1024GB and 4X CPU cycles? And then prioritize which jobs to dynamically scale back in order to make room? Since all your gear is homogeneous and uses a sane management scheduler that's now possible". You don't get that using disparate hardware typically.

> Oh you wanted one with 512GB of memory instead of 768? Too bad.

I believe Enterprises are (slowly) learning this lesson. If nothing else Pivotal Cloud Foundry and various on-prem k8s solutions are softening them up.

>Oxide Computer will try and bridge that gap and let anyone buy efficient, manageable servers.

>This is of some interest to me as there are lots of use cases for which cloud isn't practical or has regulatory challenges, and this should give organizations for whom this is an issue some good options.

Sounds, to me, like the exact premise of OpenStack, no?

The premise maybe, but in reality OpenStack (initially) punted on a lot of essential parts like physical server management, installation, upgrades, monitoring, billing, etc. Just things like firmware updates are super-buggy in existing servers so I imagine Oxide will be working on that.

I think Oxide bought the rights to the PC Weasel, now with thunderbolt support.


It seems like they are dismissing the hyperconverged offerings from SimpliVity, Nutanix, Azure Stack, etc., which are definitely on-premises offerings aiming to do what Oxide is setting out to do.

There's probably room in the market for something in between hyperconverged and "here are some Dell servers, good luck".

A few months back I randomly ran across Cantrill, Frazelle, and Tuck while they were at a FedEx Center printing and signing some docs for Oxide. As a recent convert to Cantrill YouTube fandom, I recognized him and said hello. They were all tremendously welcoming and friendly, and Cantrill took a great deal of time to shoot the breeze with me about Rust, the plans for their new business, and the pain points of working with Xoogler architects (not in reference to Frazelle, I should note).

I've been watching off and on for their launch announcement ever since. Some of the reservations about their business plan expressed in this thread ring true with me, and yet I hope they will succeed in spite of it all. The last thing the world needs is an internet which runs entirely on computers owned by Amazon, Microsoft, and Google.

Here is a relevant quote from Jess' blog:

> Hyperscalers like Facebook, Google, and Microsoft have what I like to call “infrastructure privilege” since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else. [0]

Is this same infrastructure privilege an effective, internal cloud? From my experience with large internal cloud platforms, the problems are almost always organizational ones.

Regardless, I am definitely hoping that Oxide can provide a better experience. Maybe with centralizing all hardware into a single group, rather than have network folks do switches, another team do servers, and a totally separate group to do ACL's, etc. you could improve the experience of managing an entire datacenter.

Edit: Forgot the link: [0] https://blog.jessfraz.com/post/born-in-a-garage/

Source: https://blog.jessfraz.com/post/born-in-a-garage/

In the same post, the author writes: "If you want to read more about some of the deep technical problems we will be solving check out my ACM Queue articles" followed by some links. The first, on open-source firmware, even appears to be freely available: https://queue.acm.org/detail.cfm?id=3349301

So that may help get the necessary context for this company launch.

Ah sorry, forgot to link the post! Thank you.

When investors bet on an early stage startup, they are betting on the team. This is startup fundraising 101, the track record of the team is what shows they will stick it out and find a solution.

These are three people who know product! But they could be totally wrong about how the market will take the product, and have to be willing to work together and possibly pivot until they find product-market fit.

Finding teams like this pre-product is what makes early stage investment so valuable. There should be no confusion about the focus on team and vision.

I am stoked to see cloud scale tooling become even closer to the hardware, and become more accessible. The completeness and openness of the stack is what attracted me to -- and kept me on -- Joyent's products. Best wishes on your new venture!!

I follow Jessie and Bryan both on Twitter and at least with what Jessie has been exploring since she left her last job, this seems like a natural progression.

It trips me out to see other people in the comments that don't know who Jessie and Bryan are. They're legends to me, and I consider myself basically a SRE. I highly recommend following them on Twitter as well as virtually any talk they've given. Just go type their names into YouTube. Steve Tuck doesn't ring a bell for me, but considering the company he's in I want to learn more about Steve.

I'm super interested to see what comes out of this. I think Jessie has a passion for security (also many other things, but it's what comes to mind), and Bryan for debugging (similarly, many other things as well). I think those are really good backgrounds to have when approaching hardware these days, considering the recent security vulnerabilities around speculative execution (Spectre, etc.).

Some of the conversations Jessie started on twitter in the past few months raised some interesting questions around open hardware and open firmware, and the problems with proprietary and closed source hardware and firmware.

It sounds like this Oxide project might be in a similar space as https://www.opencompute.org/. I'll be interested to see if they partner at all. I'm also very interested to see what the actual products are, how open any hardware or firmware ends up being, and if this venture is successful, if they end up at the same place as any other major hardware company or if they tread new ground.

> It trips me out to see other people in the comments that don't know who Jessie and Bryan are.

I always find this funny. Why would everyone know who they are? There are thousands of interesting people in tech. It's impossible to know of everyone.

> There are thousands of interesting people in tech. It's impossible to know of everyone.

Totally! If you aren't familiar with who Jessie or Bryan are yet though I think you're in for a treat. So keeping in mind these are largely impressions from their public personas and I know neither of these people personally...

Bryan is one of my favorite speakers/presenters. Virtually every one of his talks conveys his passion for and knowledge of the subject at hand, presented with as much depth and nuance as the time allows, along with a great sense of humor and little tolerance for BS. Here's a non-comprehensive list of talks Bryan has given, from his blog: http://dtrace.org/blogs/bmc/2018/02/03/talks/ .

Jessie, to me, is like the Adam Savage of tech: curious, interested in a ton of different things, a maker and tinkerer, and just a generally hard working huge talent. I love her sense of humor, and she puts content out into the world just to share that with others. Like she got to go tour CERN back in May and shared a bunch of cool stuff from it like this https://twitter.com/jessfraz/status/1129845849054253056 which I really enjoyed. If you're on Twitter, I highly recommend following https://twitter.com/jessfraz .

If you have recommendations for other awesome interesting people worth knowing, I'd love suggestions!

Jessie’s blog gets posted here with some regularity.

I don't know about everyone else, but I personally read a lot of HN and I probably remember only a handful of names of people whose articles I have read.

"It trips me out to see other people in the comments that don't know who ____ and ____ are..."

That's what is meant by "cult of personality."

Are you really "tripped out"? That phrase is often used as gatekeeping by people who want to shame others into liking what they like.

It's just another way of saying "my own mind is blown". Nothing more.

Given the players here (Brian and Steve are both from Joyent) and what little I can deduce from the marketing fluff on their website, this seems like they're just trying to build a hardware stack to go with something like Joyent's (most awesome) Triton:


Triton is/was the Betamax of on-prem cloud orchestration: better tech than early Kubernetes, but it lost out due to the stigma associated with Joyent/Solaris. Kind of a shame, as it is really good tech.

A bit more non-technical overview: https://www.joyent.com/triton/compute

It's pretty good but if you want to do Docker, Kubernetes, etc, you're stuck using LX branded zones which (my layperson understanding) is a Linux user space on top of a SmartOS kernel with Linux system calls implemented. It works OK for most things, but compatibility is a real issue.

All 3 titles should [probably, IMO] be corrected to say "Server Company". "Computer" is a superset covering servers, desktops, laptops, embedded devices, and mobile devices, so "Server Company" would be more accurate. Currently it is ambiguous what the "computer company" is actually going to do/produce/sell.

I was hoping they were going to go full Gateway 2000 and bring back desktop PCs.

This is what I was hoping/imagining.

Yeah, I was interested when they said "computer company", which the average reader would associate with Apple or Dell but the homepage described something entirely different, and much less interesting to anybody who isn't a sysadmin.

This is like arguing whether Amazon is a tech company or a retail services company. The S&P say it's a retail services company, but we all agree it is powered almost thoroughly by technology in general.

When Amazon formed, Bezos described it as a website you could buy books from. I'm sure the authors could have picked a plain-English description as well.

Although I understand your point, it's like saying this: "Eduardo Saverin, Sean Parker, and Chris Hughes on track to make social company"

Random observations on the garage photo.

Nice collection of floppies (3.5" and 5.25"). I wonder how long floppies stored in a garage actually remain usable?

The cans that say "RePop" and BSW on them are microphone pop filters from Broadcast Supply World. For podcasting?

On the shelf below the floppy shelf looks like some kind of tape drive, and maybe those are tape cartridges to the left of it?

Is that a couple of motherboards in anti-static bags on top of the tape drive?

What is the box to the right of that? It kind of looks like an adjustable DC power supply, common in electronics labs, except I don't see any terminals on front...unless those two round things on the lower right are jacks? But the jacks are usually black and red, and I can't see any color there in the photo.

Based on the box colors, the top box of Coke is twisted mango diet Coke, which was introduced in early 2018, putting an upper limit on the age of the photo.

There is some kind of tablet with pen on the desk. Anyone recognize what kind it is?

That's a lot of pens. They should have bought 3 or 4 fewer pens (which would hardly even be noticed) and used the money for a taller jar.

Interesting file cabinet. What applications is that style of file cabinet designed for? I don't think I've seen one with so many horizontal drawers that are that short before.

I can't recognize anything on the keychain sitting on the desk (well...anything other than keys I mean).

The issue of the ACM is from August of this year:


Interesting idea, and I wish them the best of luck. However this line:

> _Further, as cloud-borne SaaS companies mature from being strictly growth focused to being more margin focused, it seems likely that more will consider buying machines instead of always renting them_

Is not something I think to be true, based on my experience at a large company that has moved away from on-prem hosting to cloud providers. Cloud is cheaper. A lot cheaper, especially if using managed services (with the proper architecture! With the exception of logging, which is surprisingly expensive). The hardware costs of rolling your own hosting are not the expensive part.

The expensive part is what you pay per unit of capacity. It's not cloud vs. on-prem. The real value play is colocation. Imagine operational costs at $100/mo for a terabyte of RAM and 32 cores on unmetered bandwidth, compared to the equivalent at AWS.

Most SV startups don't care about operational costs, but they do care about the capital cost of buying the hardware, primarily due to their very short-term focus. I think engineers also care more about their own resumes than their employers' long-term viability, so everyone pushes AWS or Azure, etc., because it's hot tech.

Thinking long term is not a silicon valley thing. It's a totally different mindset.

At the very lowest end, colocation is a clear winner in value, and even up to a few servers it stands up well. Once you have multiple servers and need to start load-balancing some sets of servers to deal with load spikes, cloud-based infrastructure starts to have some benefits that make it competitive and possibly cheaper in some instances, because of load scaling and ease of turn-up of new systems. If you're increasing load by 20% or more a month, you likely will have major problems provisioning servers fast enough, unless you gamble that the trend will continue for a year or more and make a much larger investment than your normal monthly spend to account for it up front.

Once you get really large, though, you can actually just colocate a bunch of systems and storage, throw a hypervisor manager on them, and get almost all the benefits of a cloud-based system for an up-front cost that's probably not much more than a few months of your cloud costs (it looks like 3-4 months is the break-even on some large AWS instances I just looked at, compared to some 64-core/512GB boxes I specced out recently). You won't be able to scale quite as immediately to large load increases unless you under-provision, but at that point you're large enough that you should have a handle on what your load spikes are like, and maybe even be able to supplement with cloud offerings in a pinch.
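The break-even arithmetic above can be sketched in a few lines. The numbers below are made up for illustration (not actual AWS or vendor pricing), but with a hardware cost around 3.5x the monthly cloud savings they land in the 3-4 month range the comment describes:

```python
# Illustrative break-even sketch: months of cloud spend needed to cover
# the up-front hardware cost, net of ongoing colocation costs.

def breakeven_months(hardware_cost, colo_monthly, cloud_monthly):
    """Months until buying hardware beats renting equivalent cloud capacity."""
    monthly_savings = cloud_monthly - colo_monthly
    if monthly_savings <= 0:
        return float("inf")  # cloud is cheaper every month; never breaks even
    return hardware_cost / monthly_savings

# Hypothetical figures for a 64-core/512GB box vs a comparably sized instance.
months = breakeven_months(hardware_cost=30_000, colo_monthly=500, cloud_monthly=9_000)
print(f"break-even after {months:.1f} months")  # roughly 3.5 months
```

The same function shows the flip side: if colo plus operations costs exceed the cloud bill, there is no break-even point at all.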

In a way, cloud-based offerings allow medium-to-large businesses to have a sort of insurance on scaling, in that they are paying more for it, but they can stop at the right amount for their need or even back-off because of problems (or engineering coming up with a better way to handle some load).

> from being strictly growth focused to being more margin focus

And hilariously, the margin gains are not in COGS (e.g. computing) but in reductions/efficiencies in S&M and R&D. Most modern SaaS companies are 80% gross margins.

> it seems likely that more will consider buying machines instead of always renting them

And the companies buying physical hardware will want to know if the vendor supplying the hardware is going to be around to support the hardware in 5 years when they need to be refreshed.

It seems like they are building a product that is anti-trend. Seems odd.


> Cloud is cheaper. A lot cheaper, especially if using managed services

This is not universally true.

Netflix famously moved their infrastructure from their own data centres to AWS.

Though isn’t that the “commodity” part of their infrastructure? They’re presumably still spending like gangbusters on their “on premises” ISP-local hardware & software.

I imagine that’s the extreme end of the market Oxide is targeting: Provide a HW/SW platform to build your differentiation on, because the capabilities and value (& moat) aren’t there in the commodity cloud.

I really think this shouldn't be an either-or thing, most especially for established companies, but to some extent for growing ones as well.

If you run a data center in the same city/cities as your engineering strengths are located, that doesn't necessarily give you the geographical diversity you need for many companies. But if you don't run any data centers, you're blind to some of the cost structures, and architectural limitations of the system, and you can become soft.

I was going to make a simile to Google and Facebook having their own hardware divisions, but I don't really need to, because they are planning to make cloud-friendly custom hardware for data centers. This seems like an area that the incumbents should be all over but I can't recall the last time I read of innovations from them. Which means there is space for someone new to establish a toe-hold.

If I were based in Chicago I'd want a data center in Chicago, and Cloud Servers on the West Coast, (and Europe, etc as applicable). But the lock-in situation is untenable to me right now. It's pretty easy to end up deploying multiple solutions for the same situation. That just complicates reasoning about the system. It bears a resemblance to the Lavaflow antipattern and I can tell you that either can be no fun at all.

I'm not sure why I chose the phrase 'planning to' about custom hardware, when we know that they've been doing this for a very long time, and FB in particular has not only published designs but multiple cycles of amendments to those designs.

They also have the size and early mover status where they probably inked a pretty sweet deal to do that.

Can someone explain in simple terms what this company will be building? I cannot tell from this post or the other on Oxide.

> Can someone explain in simple terms what this company will be building?

Computers. Literal, physical computers you can buy and stick in a server rack.

Specifically, computers with the hardware/software optimized and tuned for "hyperscaler" uses (think Kubernetes).

This sounds superficially like a company called Nebula from the OpenStack days. https://slashdot.org/story/13/04/03/034217/nebula-debuts-clo...

Sounds like they are building a new OS, written in Rust, sitting at the BIOS level beneath a standard Linux kernel. This would be open-source firmware running at or below the hypervisor ring level.

From almost a year ago, I think he presages this starting at around the 1:00:20 mark:


To the best of my comprehension, Hardware and Software designed hand in hand for a good on-premises “cloud.”

Who uses on-prem cloud?

I've been involved in two investigations into migrating from an incumbent on-prem cloud to The Cloud. Once directly, once tangentially. Both times, the conclusion was that The Cloud would almost certainly increase our IT costs, and also submit us to vendor lock-in.

I see The Cloud as being a great play for new companies, and companies with server needs that are both variable and out of phase with everyone else's. Small startups because time is their most precious resource, and picking a major PaaS provider minimizes the time spent making decisions about things that are tangential to the core business. e.g., if you use AWS then you don't have to hesitate for a moment on your object store; it's going to be S3. If you use on-prem cloud, then you'll likely have to burn a few weeks shopping around for vendors and getting it rolled out before you can be up and running. Multiply that cost by every single decision, and you've got a whole lot of distraction hitting you at the most inconvenient possible time.

Plenty of companies, including medium-sized companies and companies in non-tech industries (finance, healthcare, defense, etc.) Generally for regulatory/security reasons, and also because it can make financial sense.

I can see that Dell, Lenovo, etc. are already offering "Kubernetes" optimized servers; it's not clear how much of it is just marketing, though. Perhaps this new company wants to offer something similar, but be more competitive by taking advantage of superior software design skills (if I understand it correctly). Then it would just be a middleware company disguising itself as a "computer company".

Got it, thanks

"on-prem cloud" is also called "private cloud".

And plenty of people run, e.g., OpenStack on their own hardware in their own server rooms / DCs.

Moving data into and out of a remote cloud is expensive and slow.

The speed of light isn't getting quicker any time soon, so low-latency processing of locally-generated data requires local compute resource.
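As a back-of-the-envelope check on that point, here's a sketch of the minimum round-trip time imposed by signal propagation in fiber (roughly 200,000 km/s, about two-thirds of c; the distances are illustrative):

```python
# Lower bound on network latency from propagation delay alone.
SPEED_IN_FIBER_KM_S = 200_000  # approximate signal speed in optical fiber

def min_rtt_ms(distance_km):
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# A cloud region ~2,000 km away can never answer in under ~20 ms,
# no matter how fast its servers are.
print(f"{min_rtt_ms(2000):.0f} ms")  # 20 ms
```

Real-world latency is higher still once routing, queuing, and serialization are added, which is the argument for keeping latency-sensitive compute local.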

It's a meaningless word and just marketing. On premises cloud is called a network, as it has been for decades.

I wonder if folks that have been "bit" switch to their own cloud. (either by prices or by vulnerabilities/intrusions)

anyone who is stuck on an oracle databases, and sees how horrible "oracle cloud" is?

Hey Jessie, congrats on getting your company up and running!

> setting up infrastructure themselves is in a great deal of pain and they have been largely neglected by any existing vendor

Can you elaborate on what these pains are, and how OEMs aren't helping their customers with this?

I guess what I'm trying to ask is: what will Oxide do different compared to existing OEMs?

> largely neglected by any existing vendor

I am not sure about this. It seems all their experience is geared towards software with little to nobody from a traditional hardware engineering background. Wouldn't they be better off using open compute[1] designs and focusing on the software inside? Building hardware just seems like a stretch trying to compete against Dell, HP, and Supermicro all of which have "Kubernetes" / high density offerings.

[1] https://www.opencompute.org/

Wouldn't they be better off using open compute designs and focusing on the software inside?

How do you know that's not what they're doing? If they sell servers it doesn't mean they designed them. I wouldn't be surprised if they have some hardware changes for secure boot or something though.

Great Hacker News blasting but whats the product?

My initial impression is a return to on-premise data centers but with cheaper, commodity hardware and a software stack on top.

They are doing for on-prem compute, with hot-swappable computers in a cloud, what hot-swappable hard drives did for on-prem storage.

My prediction is that their technology will be used by existing data centers more than it will be used by enterprises wanting to return to on-prem from their current cloud operations. They might help serve internal customers in the enterprise, like developers, and intranets, but the cost of pipes will be prohibitive for companies needing lots of bandwidth to serve external customers.

It's a lot cheaper to get fat pipes than to pay for data transfer in the cloud at scale.

To function at youtube scale, you need access to multiple backbones. Yes, I agree, Sprint will run an OC3, or fiber, what have you, to your office anywhere in America for $15,000 a month, but that will still be several hops from UUNET.

That Sprint line is like a farm to market road to your warehouse, which works fine when you are serving customers from an ecommerce site or sending and receiving emails or even opening files from NYC and your onprem cloud is in NYC.

It breaks down when you need higher throughput to serve millions of customers. At that point, you need to be at the crossroads of a couple interstate highways which would be analogous to a carrier hotel like at 1 Wilshire in Los Angeles.

I expect oxide.computer will open up on demand scalability to existing service providers who already rent the physical space in locations like 1 Wilshire. Most of those providers today buy hardware from HP or Dell, which is expensive and then also have to manage the software layer on top.

From what I can tell, based on the very little actual product marketing I have found, is that oxide will provide a turnkey solution from one shop. Unfortunately, at the moment, it's hard to figure out what the company actually does, because it's about the team right now.

If you are a cloud customer, you probably don't need to function at Youtube scale.

When you need to serve millions of customers, you get space in a datacenter with multiple telecom providers and start peering at various exchanges. You certainly don't pay per byte of data transferred, and you don't start to build your own physical plant.

That sounds exactly like what Nutanix does.

Kind of and after some digging, I think the difference is this: https://www.nutanix.com/products/hardware-platforms#nutanix

Notice how the hardware is actually sold by IBM, Dell, etc.

The difference is oxide is building and selling the physical hardware themselves. When the customer needs more capacity, they could just order 10 new, let's call them "blade" type computers from Oxide and pop them in a rack at much lower cost than a blade from IBM.


No, cloud stuff in a box kind of product, i.e. software, hardware is obviously secondary to it.

People are confused because of the comedic level of bullshit (reminds me of "the box" from a Silicon Valley tv show episode).

Yeah I think they should have saved the HN bombing for when they actually had The Box to sell.

It's likely they are looking for further investors via hackernews.

When the blue checkmark crowd on Twitter thinks everyone knows who they are.

This is great news for the Rust community, as Bryan has been a strong proponent of using Rust at the lower levels of computer systems, such as drivers, and it sounds like Oxide will focus much of its development at this level in order to deliver best-in-class, enterprise-grade servers.

This is a fantastic development for the Rust community, and one that will likely have a positive impact on the Linux ecosystem in the long run if they do this driver development in an open manner.

I look forward to following their developments.

I can't figure where all the negativity is coming from. Envy, sloth, pride? Good luck Oxide founders !!!

To me, and I suspect to many other people outside of Silicon Valley, this just sounds like "those 3 people you should know about just started a company". Fine. But in this case there is absolutely nothing: no product picture or description, etc. So we're left with guess work (which is also fine I suppose).

It's a bit like telling us that Susie and Jim from Facebook just got engaged. It's good news if you know them, but a bit disappointing otherwise.

Every time any company 'starts' and doesn't in detail lay out their 10 year plan to profitability, Hacker News will start shitting on the idea calling it a cheap ripoff of X and it should go away. Same pattern always.

Same! I saw someone post this originally to twitter and thought it sounded neat. Coming here, the reaction has been really weird/negative.

HN doesn't know anything about hardware + Oxide hasn't explained themselves in detail + cool cynicism. HN also thought that Dropbox was basically rsync.

Nice to see Bryan Cantrill going all in on his assertion that containers must run on hardware. https://www.youtube.com/watch?v=xXWaECk9XqM

This post title is a reference to The Soul of a New Machine, which both Cantrill and Frazelle have written posts on:



For those that don't know, "The Soul of a New Machine" is a book by Tracy Kidder about the team at Data General building a new computer. I was skeptical about this book, but it's really quite a good read. (My mom worked at Data General, which is how I ended up with the book.)

The book won a Pulitzer prize for non-fiction.


If you like this book, then you'll also like Reminiscences of a Stock Operator, which is a fictionalized account of an occasionally successful stock trader named Jesse Lauriston Livermore who was born in 1877. It doesn't sound captivating, but it is.


Both my parents also worked at Data General, and my dad was there all the way through the EMC acquisition. Love that book.

Ah, finally we have the business plan:

> desire among customers for a true cloud-like on-prem experience

> it seems likely that more will consider buying machines instead of always renting them

They know companies are insistent on making the mistake of trying to build and run their own platforms, so they're becoming the company that you can throw your money at if you're not willing to pay for IBM/Oracle/etc private cloud.

Old folks remember the revolution in distributed computing came from commodities. The idea was to get a lot of the cheapest resources you could and loosely tie them together. This was fast, cheap and effective, because everything was disposable, flexible, open. Even if you hired a dozen engineers, you saved 3 mil a year on enterprise hardware and software licenses.

What these folks are betting on is that businesses want to "buy a cloud" cheaper than they'd get it from a traditional cloud vendor. But to get cheap hardware and software, you need to strip it down to bare essentials, use the cheapest, crappiest parts, literally throwing away parts rather than fix them. Purpose-built hardware and software is the opposite, and of course ignores the management costs (unless that's going to be part of the payment strategy?).

I love the marketing mumbo jumbo:

True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.

How are they going to do this? Buy equipment from other vendors and resell it? Build their own servers from scratch? Nothing in the above paragraph tells me how they plan to be better than any other company right now, or why, as a business owner, I would want to buy my servers from them.

I'd remind everyone of the Openstack-based Nebula company that went bankrupt overnight and sold the same dream.

You don't bet your on-prem projects on newly born startups.

Seems like they’re building some custom rack servers for container workloads, leaning on Rust for system software.

Sounds cool. Should be a good team to pull it off.

> Don’t cheat: We believe in playing by the rules of the game, abiding by both their letter and their spirit. If we don’t like the rules, we work openly and collaboratively to improve them.

I can't help but think this is a huge swipe at companies like Airbnb, Uber, Lyft, and any of the others that knowingly break laws (or aid others to do so) and hope they can get away with it, or change the law(s) later.

This sounds fairly similar to where Hyve [1] and Penguin Computing [2] fit in, so what does Oxide plan to do differently?

[1] https://hyvesolutions.com/ [2] https://www.penguincomputing.com/

Hyve and Penguin probably have the same shit AMI UEFI and shit AMI BMC firmware as every other vendor. It sounds like Oxide is going to write higher-quality firmware.

I 80% understand what this company is doing, but I 100% know that this team is awesome. I'm very much looking forward to more news and progress as it happens.

After using GCP and AWS for years and paying exorbitant amounts as a student, I was thinking of just switching to some old Dell rack server.

Glad that Oxide will hopefully allow people and companies to utilize their infrastructure on components that may not be so old/may be easier to use.

Well, what I'm getting from Cantrill's article is that they want to build the hardware and the software together. I think maybe on-prem contracts with IBM or Dell might not be the target here. In fact, open firmware got mentioned a few times.

I wish them luck; it's a very interesting approach they're taking here.

I feel like they could incorporate a lot from https://www.openstack.org/ for the software side.

After paying to colocate an old rack server and paying exorbitant amounts as a student, I now enjoy using pay-for-what-you-use AWS and GCP services.

A colo box can be great if you can amortize the cost among a dozen friends. Otherwise, I think your money goes much farther on AWS and GCP, especially if you know some basic cost saving techniques like reserved instances or not using instances at all (Lambda, Fargate, S3 and other on-demand services/their Google equivalents can cover a lot of needs nowadays).

Having done both rack server and cloud... I honestly prefer the rack server. Power is cheap in my area, we have fiber, and I like being able to buy an SSD on Amazon for $100 and walk over and slide it in without having to pay an ongoing increased monthly rate. Uptime isn't as good but that's fine if the stuff isn't critical.

“exuberant”? I think “exorbitant” makes more sense in this context. Edit: Unless you were really really happy about what you paid, I guess.

Yeah, exorbitant. Didn't like paying $300 a year just to run some simple websites/databases

Ok, I edited the GP for you.

$300 a year is nothing compared to colocating your own server, or even getting a business-grade network connection.

If you can get away with hosting on your own computer on a residential connection, that's great for you, but it's a totally different product from what AWS/GCP/colocation offer.

By the way, reserved instance pricing on AWS and GCP starts around $20/year. But if you know how to use S3/GS and Dynamo/Datastore, you don't even have to use an instance.

Can you point me towards getting an AWS/GCP instance for around $20/year? Is it AWS or GCP? Also what type of instance? What are the specs?

At the moment I just use EC2 and GCP instances (and a couple of GCS buckets)

Sure - on AWS, it's the 3 year reserved term for a t3.nano ($51 for 3 years or $27 for 1 year for 512 MB RAM and burstable CPU as described here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstabl...). https://aws.amazon.com/ec2/pricing/reserved-instances/pricin...

On Google, the comparable offering is an f1-micro instance, which goes for $3.88/month (https://cloud.google.com/compute/vm-instance-pricing#pricing).
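To put those quoted prices on a common footing, here's a quick back-of-the-envelope comparison. It uses only the figures quoted above; real pricing varies by region and changes over time:

```python
# Rough monthly-cost comparison using the prices quoted in this thread.
# These figures are from the discussion, not current price lists.

aws_t3_nano_3yr = 51.0   # $ total for a 3-year reserved term
aws_t3_nano_1yr = 27.0   # $ total for a 1-year reserved term
gcp_f1_micro_mo = 3.88   # $ per month, on-demand

per_month = {
    "t3.nano (3yr reserved)": aws_t3_nano_3yr / 36,  # about $1.42/month
    "t3.nano (1yr reserved)": aws_t3_nano_1yr / 12,  # $2.25/month
    "f1-micro (on-demand)":   gcp_f1_micro_mo,
}

# Print cheapest first.
for name, cost in sorted(per_month.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f}/month")
```

So at these prices, the 3-year reserved t3.nano is the cheapest way to keep a tiny instance running year-round, at well under $2/month.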

EC2 instances?

For those confused about the company, this from their homepage :

“Oxide is building a new kind of server.

True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.”

Every thread here about how it’s vanity is bananas. They’re going to build computers! With software to go with them! So that you can have the same experience that the hyperscale companies do. It’s a great bet.

This isn’t a blitz to help with fundraising: they already have the money.

They are also hiring - which is why they have a long list of principles and values, and talk about each other. They’re looking for kindred spirits.

This is a good idea, from people with a history of good ideas.

>They’re going to build computers!

You know they're mostly software people, right?

That's actually irrelevant, though. Even if all three of them were Woz-level hardware hackers, that's no guarantee of a bankable success. Being an impactful software or hardware company takes a team, and in the US at the moment that team is mostly about financial and legal work, not technical.

We've all seen that "best technical idea" does not usually equal "best business success"

I wish these folks well, and I'm sure their fans will enjoy following them, but people acting like this is a major thing for the whole tech industry are gonna be disappointed.

The premise here is that there is a market for folks who want to _preferentially_ be in a private DC, rather than with hyperscalers.

> True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.

Surely, this influences manageability (and $$). Does it influence the on-demand provisioning benefit of hyperscalers (achieved at scale) though?

_One_ of the reasons my team and I chose a hyperscaler is that I get my compute today, now, rather than having to wait weeks/months for my in-house IT to get it for me.

Definitely have a good bit of work done on that whiteboard. I hear naming is one of the hardest problems in CS.

My summary:

1. They believe there will be a shift back to on-prem, due to cost, improved security & latency.

2. They can take commodity hardware designs ( Open Compute Project ) and add their own software for manageability.

3. They are big on values, because in the past Bryan got a bit of a rep for being a bit hard to work with. ( eg https://blog.valerieaurora.org/2016/10/22/why-i-wont-be-atte... )

1. I'm seeing this in big companies - cloud can be an expensive option if you are generating your own data on-prem. While the economics might change, security and latency issues won't so easily. Also, cloud is a big risk: realistically, what is your migration cost to another provider? If that's high, that's how much your cloud vendor can gouge you before you move (I like to call that the Oracle business model).

2. Don't know about the hardware, but those guys have a track record on systems software, and Bryan was involved in building similar stuff before in the storage space (http://www.oracle.com/us/products/servers-storage/zfs-storag...)

3. He is definitely older, perhaps he is wiser.

Cantrill's blog post explicitly references the Open Compute Project's base designs -- which are Facebook's published versions of the design they use for their own on-prem servers and network. They seem to be dropping lots of hints about wanting to add manageability features on top of that, which might come with hardware tweaks of one kind or another, but having proven designs to start with at least mitigates the risks on that side of the project.

More on OCP here: https://www.opencompute.org/

> They are big on values, because in the past Bryan got a bit of a rep for being a bit hard to work with

That's a bold claim. What I've seen in his talks is that he has always been big on values, for example he's been espousing the Sun values that they use here for quite some time, and he has also been very critical of companies with lousy values.

Don't know him personally.

> and he has also been very critical

Exactly. very critical - what if the thing he was being very critical about was you? Read the linked post I sent.

ie it's not about the big ideas 'don't do evil' it's about whether he was hard work personally.

As I said he seems to have recognized that.

BTW I don't know him personally either.

> what if the thing he was being very critical about was you?

I probably wouldn't like it. So?

To evaluate the situation, we would have to know what he was being critical of. For example, his take-down of Uber's "corporate values" was brutal, funny and IMHO fully justified.

> Read the linked post I sent.

I did read the blog post you sent. Had actually read it before. It didn't gain any substance on second reading, unsubstantiated accusations of...what exactly? Not making the poster feel as warmly cuddled as they want to feel. So while I probably don't agree with the poster even if things were exactly as she describes, we are not given any actual real situations to evaluate whether that it actually is the case.

And I've never seen him take down people, except Larry Ellison, maybe; only code and technology.

See: Egoless Programming, or "You are not your code".

I have worked in many different environments, and in the end the ones that were of the "cuddling" kind, particularly when self-proclaimed, were the highly toxic ones, whereas the ones with the refreshing, sometimes-verging-on-brutal candour were the healthy ones.

So while I also don't know the poster you referred to, to me it is those people that are turning this industry toxic.

Regardless of the type of environment me or you thrive in, I was just giving my interpretation of his blog post: http://dtrace.org/blogs/bmc/2019/12/02/the-soul-of-a-new-com...

Here is the particular passage I was referring to:

> And most important of all (with the emphasis itself being a reflection of hard-won wisdom), we three share deeply-held values [....] — and employees will be proud to work for.

> And I've never seen him take down people, except Larry Ellison, maybe, only code, technology,

Somebody wrote that code, or built that technology.

Note this has nothing to do with whether the code is rubbish or not; it's about a communication style and a mindset that doesn't care or think about the person behind that code.

I'm sure he didn't mean it, but the endless taking down appeared to get to some of his colleagues, and perhaps because he didn't mean it, he didn't realize until it was pointed out to him.

That's a pretty bold idea, that there'll be a significant shift back to on-prem, given that companies are still migrating into the cloud and the overall cloud market is predicted to continue growing, but we'll see.


I remember Gartner being real big on SOAP and web services. I was an early REST convert; about 5 years after my conversion, Gartner came to the government agency where I was working and made one of the guys in charge of lots of SOAP-based initiatives cry by flatly announcing that SOAP had lost and REST had won. It was 5 years of "Yes, SOAP" until suddenly it wasn't anymore.

So as the other commenter said skate to where the puck is going to be - and don't rely on Gartner for puck location trends.

Skate to where the puck will be.

I literally know nothing about their product, and it seems like both this site and the various blogs being floated (if not outright promoted, especially here) are about the founders, and not the product.

I don't care about the team behind it. Give me a tangible product. Or at least a sense of what the product will look like.

They've also started a podcast focusing on the hardware/software interface called On the Metal.


The contrasting pictures of the garage and the 'bigger space' are delightful!

Yeah, but what's the product?

"Oxide is building a new kind of server."

Bryan Cantrill is basically my role model. I'll be following this.

> ...the sharpening desire among customers for a true cloud-like on-prem experience (and the neglect those customers felt in the market) made it more in demand than ever.

Great to have options.

I’m very excited about this.

There’s definitely a hole in the market for “turnkey private cloud done right” that isn’t just a massively marked-up bundle of servers that requires consultants to effectively setup and operate.

Also, If CCIX takes off, along with workload-specific accelerators, they’re in a much better position to quickly serve those niches rather than waiting 6mo to rent some cloud instance that might be making all the wrong trade-offs.

Nowhere on their website can I find what they are about, how their product adds business value, or what makes them different from everyone else selling snake oil... basically nothing, just some flashy graphics, $10 words, and a company name that makes me think of oxidation (literally rust). This reeks of marketing.

Unless they really want to have him for the laughs, they need to drop Steve Tuck from the tech podcast. Using "instruction set" in place of "instruction" and not immediately understanding the forward-slash/backslash disconnect between Windows and everyone else really puts him in a different mindset.

Jessie Frazelle is listed as the company's "CPO" - huh? Isn't that a Star Wars character? :-P

(Obligatory clarification: of course I'm joking; but is that "Chief People Officer"?)

My guess would be "Chief Product Officer"

Might be Chief Product Officer

Bryan Cantrill recently interviewed Arthur Whitney, as discussed in another thread:


That was a repost; the interview happened a decade ago.

People on Twitter seem to just joke around about it...

who are these guys?

Cantrill is probably the most well-known. Long-time Solaris advocate, worked on Joyent for a long time, etc. You'll find his fingerprints on a lot of stuff. It'd be hard to find someone more capable of building on-prem cloud infrastructure at a systems level, I think.

Frazelle is pretty well known for Docker and container-oriented stuff. I think she was at Google for a while, but maybe Microsoft? Somewhere big doing cloud things, for sure, though.

I don't know the other person. But, I would guess they have to be reasonably impressive to be on this team.

Steve Tuck was the long-time cornerstone of the sales team at Joyent. He was at Dell before that. He rose to Sales VP, then GM, then COO there. He's not a kernel coder; he's a tech-business-person, and a cool, noble, and legit one at that.

A quick Google shows he was the COO of Joyent, so likely Cantrill's sidekick.

Kinda reminds me of Transmeta. Except they had a product, a shitty one, but a product. Or Ampere, but they only have one product they inherited and no flashy names.

Silicon startup challenges are rough. RISC-V is probably the best hope for a new architecture (hint: it's not a new architecture).

Transmeta was going to be the next big thing, code morphing! Linus was on board. News on slashdot every day! And then ... some product released but no needle moved. I wonder if they built something that got outpaced by intel, needed network effects to take off that never materialized, were ahead of their time, or just no market for their product?

These guys seem like they’ve identified a market and who their first customers are going to be.

Sort of... Transmeta was basically a VLIW chip optimized to recompile x86 code. The problem was by that point all x86 chips (even plucky little VIA/Centaur!) were recompiling x86 instructions into RISC-ish ones, and without sucking up RAM and more importantly memory bandwidth to do it.

So even if the VLIW chip was on paper more powerful/efficient (and I'm not sure it was), it just couldn't keep up with doing it in hardware.

Well, this new company seems to be about building servers (probably with custom mainboards?) with existing chips and open firmware.

Would be fun if they used Ampere :) but my guess would be that the first products at least are gonna be Corebooted Intel platforms.

Almost more interesting than two giants who, for better or worse, have drastically influenced modern computing practices starting a company, is that they started a podcast. The former happens every day, and sure, it's exciting, but how many times have you seen one of these people start a podcast?

Everyone and their mother is starting a podcast at the moment.

This, and most of them are just people who like to hear themselves talk.

Why is that nr 1 on hackernews?

Jessie = CPO = ???

I'd guess Chief Product Officer?

I apologize if this is a silly question, but what exactly does "computer company" mean? I read the link. I see "oxide computer" which makes me think, oh, is this one of those newfangled oxide transistors? But I clicked a few links, like Oxide Computer's principles of operation, hoping to get an idea of what this is, and instead I see some stuff about candor and empathy, which is not what I thought principles meant in this context. I looked through the comments hoping to see someone explain it, but no win there. Oh well. I'm sure someone will explain it.

This is a bad landing page.

1. Bad HN title (doesn't say anything about what they did about the computer they made)

2. Landing page image has some generic background and as far as I saw, no tangible product, other than some shelves of their latest workspace.

3. If they could just include some diagram here or there and put some bold text/say they're building AWS for people/startups, that'd be great and to the point

Not bashing them, but just some thoughts I had about their presentation.

Regardless, I'm extremely excited for what they're building. Been waiting for something like this to pop up.

> Landing page image has some generic background and as far as I saw, no tangible product, other than some shelves of their latest workspace.

From Jessie's blog post [0] it looks like this is the actual garage where she supposedly started the company.

But I agree, there is absolute zero information on the product they are planning to build.

Probably because they are still working on fundraising.

[0] https://blog.jessfraz.com/post/born-in-a-garage/

It's amazing it has >400 upvotes at all.

I suspect upvoters have some context that's implied by knowing the domain of these founders or their previous work.

Their main landing page:

"Oxide is building a new kind of server.

True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure."

They forgot to add “blockchain” and “machine learning” to complete their BINGO card. “cloud hyperscale innovations” is a good one, though—haven’t heard that one yet.

"Hyperscaler" does have a meaning in the data centre world, and their use of it here speaks to the benefits they hope to deliver, so it's not 100% a marketing buzzword.

Add also "built with ❤️"

Their company mission is to kick-butt. ¯\_(ツ)_/¯

It's from a quote by Scott McNealy, the CEO of Sun, and is regarded as the final epitaph of Sun.

>Kick butt, have fun, don’t cheat, love our customers, change computing forever

Hell, I'll kick my own butt for a million less.

I am also quite confused. After looking at the homepage, I have no idea of what this company does.

It is a company whose product is hardware. For example: Sun, HP, Apple.

Looks like vaporware with nothing to distinguish themselves from the incumbents. Are they going to do anything special other than put their branding on Taiwanese boards?

This is the funniest thing I've seen all day. What a hoot.

Look at the job req for "hardware engineer": they have no idea how to solicit EEs or what they are even looking for in the _most_ important roles for doing anything in this space (roles none of the personalities have skills or credibility in). That they duped a VC into funding this is side-splittingly hilarious and a sign of impending correction in tech.

Please don't be a jerk on HN. Even if you don't feel you owe the people you're attacking any better, you do owe this community better if you're posting here.


Are these people of note? The title could be a lot more contextually relevant

Cantrill is one of the people responsible for DTrace, and more or less the only Sun alumnus who's not retired or disappeared (a lot of them stopped blogging when Oracle killed the Sun blogging culture and site/killed Sun). He was also the CTO of Joyent.

Frazelle was one of the people responsible for Docker.

I think Tuck is a finance guy who used to be a sysadmin? And he was COO of Joyent for a while, apparently.

Care to shed some light on internet history?

How did Oracle kill the Sun blogging culture? What/Who was the Sun blogging culture?

Engineers blogging about Solaris etc.: internals, gotchas, upcoming things, heads-ups on problems. The patching guys' blog was particularly useful in the face of Solaris' atrocious packaging utils, for example.

There was a _huge_ amount of official-unofficial documentation in the blogs that disappeared from the Sun site when Oracle ran its bulldozer through it.

Sun very strongly encouraged their employees to have blogs, and was either one of or the first to do so (especially at their level). Basically no restrictions on subject matter. Even their CEO regularly blogged!

A better question would be "What didn't Oracle do to kill the Sun blogging culture?"

Oracle still has plenty of ex-Sun engineers on their Java team, who blog seldom, but still blog.

Jessie used to be on the Docker core team [1]. I am not sure if she still is or not. She is also a very active blogger.

[1] https://blog.jessfraz.com/post/dogfooding-docker-to-test-doc...

She's not. She left Docker a fair few years ago, was at Google for a little while, then Microsoft for a while, then GitHub, and was then -- as far as I know -- freelancing (until this announcement).

EDIT: Got Microsoft and GitHub the wrong way around.

> She's not. She left Docker a fair few years ago, was at Google for a little while, then GitHub for a while, then Microsoft,

She went from Microsoft->Github, not the other way around, but other than that you're correct.

Right now all three accounts of this are adjacent news items on HN. Heh.

I'm really curious why. This seems like an ok enough idea but hardly anything innovative. Is it just because the founders are bloggers and have a large following that it's getting traction? There's barely a description of what the product will be.

Calling these founders bloggers is perhaps true in the strictest sense of the word, as in they surely have blogs but that's not why some people perk up at this news.

Jessie Frazelle did some serious Docker work, went on to work on Kubernetes, Hyper-V, a brief stint at Github saw her working on their Actions product.

Bryan Cantrill was working on the Solaris kernel for a decade; his DTrace is particularly famous. And then he was CTO at Joyent. You may have heard of one of the projects they helped with, called Node.js?

I'm on the comment page here because all of the articles have less information than the comments! Thank you HackerNews users for giving us the skinny on Oxide.

I've read all of the founders' blog posts now and I'm convinced they've packed every anti-pattern they could find into this startup. Its focus is entirely on the personalities involved and not at all on the product. They talk about their cool new office, funding, they have a 50-point list of values, they have a podcast, a newsletter...

Yet there's essentially nothing about what the product is (that being the only thing most people care about). This reminds me of first time founders who can't wait to get "CEO" business cards. Worry about the product first, last and in-between. The rest is just there to signal to us that your priorities aren't in the right place.

Reminds me of the top comment from PragmaticPulp in another HN thread [1], where Pragmatic realized the personality very much is the product:

> I know two people on the “30 under 30” list. Both of them are incredibly charismatic and charming in person. Their Instagram and Twitter accounts churn out constant brand building material. They both have pseudo-startups with noble causes and vibrant websites. Their startups have a list of impressive advisers, including B-list senators and industry executives.

> However, neither of them have made any progress on building an actual business. One of them has supposedly been developing the same simple product for almost 7 years now, but they’ve never been able to produce even a proof of concept prototype.

> I thought I was missing something for the longest time, until I let go of the idea that they were really trying to build a company. They’re not. They’re building their personal brand, and succeeding wildly thanks to publications like the “30 under 30” list that have an insatiable appetite for underdog success stories.

> Surely some of these companies are legitimately successful with great business models, but they’re mixed into these lists with the brand builders who know how to game the system. I’d be interested in reading an honest “Where are they now” follow up series that checks in with these founders at the 5-year mark after they make this list to see who the real successes are.

[1] The Forbes ‘30 Under 30’ Hustle | Hacker News. (2019, December 02). Retrieved from https://news.ycombinator.com/item?id=21082523

Contrast this with the website of CoreOS when they were in start up mode. It was all about their product and why it would change the world. CoreOS was perhaps the best startup of the decade in my opinion.


Haven’t followed CoreOS, but it joined red hat. Does that comport with your view of it?

It means they had a successful exit from/transition out of "startup" status and joined a company with decades of Linux development experience.

So it’s not red hat scooping up competition in the startup phase? Like my Econ background expects.

Early CoreOS employee here. CoreOS technology forms the basis of Red Hat's OpenShift 4. It was definitely a product driven acquisition, not any attempt to kill competition.

And to Rob's point, many of the CoreOS folks were boomerangs in RH parlance (people who rejoined RH, by choice or acquisition).

Technically, there actually weren't that many. More folks were Rackers (ex-Rackspace employees).

Edit: I was an early CoreOS employee who was the first "boomerang"

Success is one of those things that can only be measured with respect to a certain frame of reference. The consumer perspective(s), the investor perspective, and the employee perspective can all lead to wildly different evaluations of a project's success.

On the positive side, you've got the fact that they made it to a profitable exit, which would be hard to ignore if you had a financial stake in the venture. You've got the implication that Red Hat believed that CoreOS was essential to its fortunes in a container-oriented future as a clear vote of confidence that they built something good. The fact that Red Hat kept CoreOS going suggests that they felt it was so good that their existing technology couldn't realistically catch up.

On the negative side, if you don't like Red Hat then, yeah, that certainly counts as selling out.

Red Hat is absorbing some of their technology rather than just firing everyone and terminating the product like Travis CI for example.

Regardless of the acquiring company's motivation, being acquired is one of the textbook criteria for a successful startup. The other is becoming a large company by themselves, but this is rare.

And since Red Hat kept Container Linux/CoreOS open source, this wasn't really a move to eliminate competition. They can support enterprise clients, but the open source nature doesn't stop another company from offering their own enterprise support.

> Regardless of the acquiring company's motivation, being acquired is one of the textbook criteria for a successful startup.

Depends on the acquisition price. Acquihires are failures. Investors might get their money back, and everyone else (including founders) walks away with nothing but a job at some company they didn’t necessarily want to work for.


And Red Hat's OpenShift product massively improved as a result of CoreOS technology.

But the real question is: who are they? I've never heard about them, I don't understand all this hype here

Bryan is an O.G. systems engineer who worked at Sun and who's given many great talks and worked on lots of cool projects, and Jessie rose to prominence as a core docker developer and prolific and talented blogger/speaker, among other things. Both are well-known engineers with major contributions to open and closed source systems infrastructure. Steve Tuck I don't know yet, unfortunately.

Here are some more links if you're interested: https://blog.jessfraz.com/ https://www.youtube.com/results?search_query=bryan+cantrill

* Edit: Rephrased from "O.G. Sun Engineer"; I don't know how accurate "O.G. systems engineer" is, it's all relative anyway. For reference, I'd consider Jess an "O.G. containerization engineer" :)

> Bryan was one of the O.G. Sun engineers

From Bryan's linked blog post - he joined Sun in the mid-90's. Sun was founded in 1982, so he was an (excellent) engineer there, but not one the "O.G.s"

Ah whoops! I meant "O.G. systems engineer" in general, independent of his time at Sun, not O.G. in terms of Sun's history! I don't know much about his history other than what I've gleaned from watching a few of his talks.

> Bryan was one of the O.G. Sun engineers

He's not that old!

(Hehe thanks, fixed.)

Jessie was employed by Docker, Bryan was CTO of Joyent and Steve was COO of Joyent. They all did some important contribution to the container ecosystem and they are now starting what appears to be a company selling custom cloud hardware.

> custom cloud hardware

Uhh, you mean computers?

Looking at their pedigree, I'd hazard that the computers are geared towards running containerized workloads, so "custom cloud hardware" sounds about right.

That seems to be the common theory, but what does that even mean? There's nothing all that special about running containers at the hardware level; they are really just groups of processes that the kernel manages a certain way.
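That "just processes" view is visible from userland: on Linux, the kernel records a container's cgroup membership as plain metadata on an ordinary pid. A minimal sketch (the /proc path is Linux-specific; the fallback covers other platforms):

```python
import os

def cgroup_info(pid="self"):
    """Return the cgroup membership the kernel records for a process.

    Container runtimes work by setting up this kind of bookkeeping
    (cgroups, namespaces) on regular processes; there is no separate
    hardware-level notion of a "container".
    """
    try:
        with open(f"/proc/{pid}/cgroup") as f:
            return f.read().strip()
    except OSError:
        # /proc isn't available outside Linux.
        return "(no /proc cgroup info on this platform)"

# A "containerized" process is still just a pid with extra metadata:
print(os.getpid(), cgroup_info())
```

Run inside a Docker container, the same call shows the container's cgroup path; run on a bare host, it shows the session's. Either way, it's the same process abstraction underneath.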

That's not completely accurate. Hardware & OSes have not been designed for heavy multi-tenancy, leading to (among other issues) cache interference. At Netflix [1], we have to do quite a bit of work to undo that. Bigger orgs like Google have spent more than a decade improving the Linux scheduler in their own fork for similar reasons.

[1] https://medium.com/netflix-techblog/predictive-cpu-isolation...

Very interesting link, thanks. Obviously your use case goes way beyond running a standard Docker installation. It sounds like you really want the kernel to schedule containers instead of processes -- which it doesn't really do by default, hence my comment. Perhaps it shouldn't be surprising, that you're able to get these clear performance gains from a highly optimized special-purpose scheduler. Still, I was a bit surprised. :)

However, that's at the OS level. What can realistically be done at the hardware level? It must be possible in theory to design a CPU that's better at this kind of context switching, but I don't know if a new "computer company" really wants to go there.
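The cpuset-style isolation alluded to above can be sketched as a toy partitioner: hand each workload a disjoint slice of CPUs, the way an isolator would before writing the list into a cgroup's cpuset.cpus. This is illustrative only; a real system would also account for NUMA topology and hyperthread siblings:

```python
# Toy CPU partitioner: disjoint CPU sets per workload, so a noisy
# batch job never shares cache with a latency-sensitive tenant.
def partition_cpus(total_cpus, demands):
    """demands: {workload_name: cpu_count}; returns {workload_name: [cpu ids]}."""
    if sum(demands.values()) > total_cpus:
        raise ValueError("not enough CPUs to isolate every workload")
    assignment, next_cpu = {}, 0
    for name, count in demands.items():
        assignment[name] = list(range(next_cpu, next_cpu + count))
        next_cpu += count
    return assignment

# e.g. partition_cpus(8, {"latency-sensitive": 4, "batch": 2})
# -> {'latency-sensitive': [0, 1, 2, 3], 'batch': [4, 5]}
```

The interesting engineering (as in the Netflix post) is in deciding the demands and placement dynamically, not in the assignment itself.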

While that's true, this particular company doesn't seem to be targeting anything that would improve containers (OS optimizations, new CPU). So I think the OP was correct in that simply changing the BMC or making it more secure on boot won't affect containers.

I'd be surprised if they didn't leverage something like the SmartOS container capabilities https://www.joyent.com/smartos

People who worked at Docker, Joyent, and Sun would be the ones to answer your question :). Consider the possibility that the kernel and the h/w are increasingly at odds on commodity server hardware.


He means: technically and practically, what is the difference?

My watch is a computer, so I don't think they want to make general purpose computers.


Not for a long while.

Bryan Cantrill was involved in Sun Microsystems before they got bought out by Oracle IIRC.

The other folks though, I have no idea.

Ever hear of dtrace?


> This reminds me of first time founders who can't wait to get "CEO" business cards.

I worked for such a CEO. In the first year we had four week-long retreats: to an island in WA state, Palm Springs, Austin, and Banff (that's where the stats team was). TONS of swag. Aeron chairs. New MBPros. Oh, and champagne Fridays. Needless to say, it burned through its goodwill seed round in a year, and the CEO begged friends for another year's worth of money before shuttering. But hey, hype sells in the software world.

Chairs and laptops are fine. Even nice ones. They aren’t what brings down a tech startup. These purchases can speak to the mentality of the founders, but they are also nice all around for quality of life. We spend a lot of time on laptops, sitting in those chairs. The chairs last a long time and hold their value well. Worth every penny. The Trips are kind of telling, though. We are a small consulting firm and have nice laptops, chairs, desks, and keyboards. Those things aren’t the problem :)

Yeah, the red flag in a startup is a founding team who can't explain in a sentence what their product is or why it's different. So far I've read 3 blogs and 4 websites and all I can tell is that they have a garage and also maybe an office.

Their homepage is pretty clear - they are trying to make cloud technologies more accessible in on premises solutions.

I am not an expert in the field, but I work at a place that is very resistant to cloud solutions for certain applications, and getting the same stuff working on prem can be difficult and pricey.

Got to agree here. The idea of a company not providing decent hardware and chairs says a lot. It's a couple thousand per employee at most, for an easy gain in employee satisfaction.

Trips are pretty silly though.

God, I couldn't wait to give up the CEO title. When I brought on a partner I immediately ceded it to him when we discussed roles. He was shocked and thought he would start as the COO for years.

It's such a thankless job and one that has a specific skillset that is fairly rare. You can be the Founder but not the CEO; something people seem to forget in this business.

Agreed. I would just move on to the next article, except that there is a need here for a company I invest in. That company currently manages thousands of hosts, and the cloud is eating into their profits. The hardware out there is way too expensive (or a bad job was done searching) for them to move away from the cloud. If Oxide can provide good hardware that would let a thousands-of-hosts company save money running it themselves, then we would be interested in looking into it.

Off the top of my head, what would be required besides just cheap hardware:

1) Cloud-like software to manage it, or native k8s support. I don't know exactly how that would be done, but the administrative cost of using this hardware can't be so high that the cloud remains the more viable option.

2) Options for network access. Cloud doesn't just provide VMs but the underlying reliable network with multiple redundant pipes. Comcast Internet access might be good enough, but some customers will require large redundant pipes. They might NOT need to solve this themselves, but they should make sure the market provides solutions that, when taken into account, still come in cheaper than the cloud.

3) Physical location: same as item 2, but for where the hardware lives.

I remember back in the day managing colos. It really sucked. It was not just hardware costs that sucked. They should just consider that.

> I would just move on to the next article, except that there is a need here for a company I invest in.

This is exactly how any "WeWork" type company sucks in investors. You'd just ignore the stupid out of hand except that it would be so nice to believe that if this company is on the level and IF their product does what they say it will and IF they deliver it in a reasonable time frame then it will solve the problem and somehow make money for investors.

Forget the names and reputations of the people involved... if three random people came up to you at a conference and said "We've formed a company to solve problem X!", wouldn't you wait until they showed an actual product to even think about potentially making decisions based on what they might do?

You're asking VCs to disregard social proof and FOMO, which will never happen, as those are the main things they rely on.

Depends on the kind of product. Some products take time and money to make. Others take far less.

I think providing reliable compute and storage is not so difficult, and for that there are many options cheaper than AWS or Azure (e.g. Hetzner Cloud, Scaleway, Vultr, or DO). The hard part, in my opinion, is providing managed services like relational databases or key-value stores. There are far fewer vendors for that, so if Oxide manages to build something that makes it easier, I think they'd have a pretty solid business case.

In general I think we're seeing a trend towards simpler IT and system architectures, which I really like, as the complexity has become way too high. For example, I know a company that invested significant resources (people and hardware) in setting up an OpenStack cluster but even after three years never managed to use it in production. From what I've heard, k8s has a similar complexity problem, so I really think there's room for a simpler approach to scalable computing.

> The hard part, in my opinion, is providing managed services like relational databases or key-value stores.

Just moved from AWS to Vultr & DO and it's loads cheaper and faster for my use case (lightweight landing pages basically).

Managed services are still incredibly tough because of client support, and like you said, relational databases and key-value stores.

Honestly I feel that far too many companies are afraid of building their own servers. In addition to getting exactly what you need for each workload, it can save the company a boatload of money over time. System integrators like Dell charge companies millions of dollars to assemble servers that are no better than anything I could build myself.

Another overlooked benefit of building your own hardware is you don’t need to add complex overhead like virtualization or containers. You can “right size” each server with the perfect amount of CPU, IO and RAM for the processes you will run on it. Paired with a kernel compiled specifically for that server, your business's applications will be vastly more performant than ones running on a cloud platform.

Honestly, too many businesses hop on the latest fads. The smart ones see the value in going back to the basics.

In terms of software that enables users to treat bare metal as a "cloud", consider MAAS: https://maas.io/

If SuperMicro plus real estate, power, provisioning, and operating costs isn't competitive with cloud costs, perhaps it might be because the large cloud vendors have entire teams focusing entirely on bringing down the cost of building and operating these things at scale.

Cloud vendors don't have 90% profit margins; the cost of cloud is what it is because operating computers at scale is hard.

Some pretty good points there! Can you think of systematic ways to make colo not suck to operate?

Agreed. I was somewhat confused on why they had more focus on their Podcast than telling me it was an AWS for startups/people.

Maybe sometime in the future when I have time I can go through these long-form types of content, but their landing page says nothing about what they're building and doesn't describe it at the level of detail I assume their podcasts do.

Similar to what you're saying, it's like buying the swag for a company that doesn't exist.

It seems like they plan to do hardware-software co-design. Devices (servers, switches?) and firmware designed specifically for their software stack. A software stack (drivers, OS) designed specifically for their hardware. Really interested in seeing where this goes.

They're apparently doing hardware-software co-design for rack-scale servers, integrating some kind of state-of-the-art systems-management features of the sort that "cloud" suppliers like AWS are assumed to be relying on for their offerings.

Kinda interesting ofc, but how many enterprises actually need something like this? If there was an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) on the "rack scale", we would probably be doing it right and running mainframe hardware for everything.

It starts looking appealing when your monthly AWS bill is deep into the 5 figures and you’re kinda trapped: do you pour your engineering resources into cost-optimizing your infrastructure (doable, but time consuming) or kick the can down the road?

Believe it or not, going private but not having to give up the niceties of AWS or Azure would be quite appealing.

Remember Eucalyptus? It was led by the former CEO of MySQL and built an open-source project that attempted to be compatible with AWS. The project is still alive.


To say the project is still alive is a bit of an exaggeration, don't you think?

Looks like the last release was in 2017. Eight years of releases is a relatively good run. May be of interest to those like Oxide, who are considering similar paths, with the added complexity of firmware, BMCs, roots of trust and OCP-style hardware.


Also, some companies just want their data in-house and not hosted on a remote cloud. Sometimes contracts preclude the use of 3rd party external companies to house critical data.

This is certainly a need in HPC environments. Think of natgas / big oil, finance, science (protein folding, genetic synthesis, genome research, etc). The cloud simply makes no sense for a lot of these industries. We're talking 20k physical computers and using MPI for their research jobs kind of scale. The cloud is not a good fit for those types of envs where they're using 100% of their compute 100% of the time if at all possible.

I would guess the problem with business as usual is that the cost magnitude of doing this means that only companies throwing off serious cash are doing this.

Which has the side effect of their solutions being bespoke, because acceptable cost looks like "Tell me when I need to stop adding zeros to make this happen."

To go another way you either (1) need to be Amazon-scale already (i.e. "there aren't enough zeros to make inefficiency worth our time") or (2) be willing to say no to huge profits out of ideological purity.

The well-funded client / custom trap is real.

I work for one of those HPC / financial firms that I alluded to above. The "well-funded client trap" is also real :)

"Hey look we have this cool solution" -> "Hey look we have a few small to middle tier customers" -> "Hey look we have a big customer" -> "Hey look our big customer bought our company". It's the silicon valley startup shuffle...

But if you have fun and make money in the process, what's wrong with it? Companies are not (or should not) be like children.

First comment talking about what they are doing.

This was in one of the blogs:

"...the sharpening desire among customers for a true cloud-like on-prem experience (and the neglect those customers felt in the market) made it more in demand than ever."

I think that's a good summary description.

In other matters, I went looking for the Rust connection (given a name like Oxide) and was not disappointed.

Does that mean "I have servers... Oxide manages the virtualization"?

Or "I bought a rack full of Oxide and now I have my own puddle of elastic compute on-premises." (If you condense a small cloud, you get a puddle, I presume.)

If hardware is involved it would seem like the latter. In which case they'll be competing with Amazon's on-premises solution and commodity hardware.

I wonder if they're trying to eliminate the VM substrate layer that clouds today currently run on by replacing it with hardware and an OS that is more amenable to running containers natively with good isolation/security properties?

So, like AWS Nitro?

Nitro doubles down on VMs instead of abandoning them, but it is indeed a good example of what integrated hardware and software can do.

What they did with Nitro is develop custom PCIe devices to handle storage and networking, so these devices' virtual functions (SR-IOV) are directly passed through to the VMs and now the hypervisor basically has nothing to do other than switching contexts.
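The SR-IOV arrangement described above is visible in sysfs: a physical function's PCI device directory contains `virtfn*` symlinks, one per virtual function that can be handed straight to a guest. A hedged sketch of how a host might enumerate them (the paths and PCI addresses here are illustrative; it returns nothing without SR-IOV hardware):

```python
from pathlib import Path

# Each SR-IOV virtual function appears as a virtfnN symlink in the
# physical function's sysfs directory, pointing at the VF's PCI address.
def list_virtual_functions(pf_dir):
    pf = Path(pf_dir)
    return [(link.name, link.resolve().name) for link in sorted(pf.glob("virtfn*"))]

# e.g. list_virtual_functions("/sys/bus/pci/devices/0000:3b:00.0")
# might yield [('virtfn0', '0000:3b:02.0'), ('virtfn1', '0000:3b:02.1')]
```

Each of those VF addresses can then be bound to vfio-pci and passed through, which is why the hypervisor ends up with so little left to do.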

> "a true cloud-like on-prem experience"

I'm afraid what people mean by that statement and what the startup is going to try to do are completely different things. As in people mean AWS-like cloud services running on multiple servers they control with all the fault tolerance and geographically distributed. But for the startup it probably means cloud services running on expensive servers in a fast local network.

”I've read all of the founders' blog posts now and I'm convinced they've packed every anti-pattern they could find into this startup .... they [list their values] ... yet there’s essentially nothing about what the product is.”

Imagine you and your friends excitedly announcing on your personal homepages that you are setting out to build your passion project into a going concern “not driven by power or greed, but by accomplishment and self-fulfillment” [1] and being flooded in a community of supposed hackers with derision that your post wasn’t a good enough press release.

For some people, the actual human excitement of friends earnestly getting together to build something new primarily because they want to see it in the world is so foreign that they can only snark about it.

Hopefully that kind of response vindicates their values more than it demoralizes.

[1]: https://blog.jessfraz.com/post/new-golden-age-of-building-wi...

Vaporware has been looked down upon in our industry for decades:


Until you have a product, there's nothing to talk about. Plenty of teams that seem good end up producing nothing. Announcements like this are a huge red flag to people who have seen this pattern many times over.

Until you have a product, there's nothing to talk about.

Yeah ... if you’re a marketer. People who actually make stuff are allowed to be excited about what they‘ve been working on in their garages and get to tell people more about it on their own schedules. They don’t owe you a feature comparison table every time they talk about their project.

In fact, they don’t owe you anything at all.


As this announcement stands, I ... won't be using what they build.

Considering you literally haven’t been told — let alone allowed to use — what they’ve built, that sounds like a reasonable prior.

When I use products for business I expect some level of professionalism. It's way too easy to get hung out to dry with a critical piece of infrastructure and a dead company behind it. What they've done here with the vaporware and vanity posts is a classic misstep. It's not product or customer centered, the two things you absolutely must be to succeed in the enterprise world.


This is literally the company blog that we're commenting on, not their personal blog. There's also 3 posts on the HN front page about this same topic and not one of them has any substance. So yes, I'm going to stick with my decades of experience and say they're producing red flags far faster than they're producing product.

I’m sure your indignation can be better used somewhere else, other than commenting at length on something you’re not interested at all.

I'm very interested in how our industry operates, how purchasing decisions are made and how investors vet opportunities. This intersects all of those areas, I think it's probably worth listening to what people are complaining about instead of just lashing out at them. This isn't just me complaining for no reason, there's a lot wrong with this announcement and it doesn't lead to a healthy industry.

A thing I'm curious about is how they plan on handling the supply chain. As a consumer of server products I observe that running a server company is much less a technical problem and much more of a supply chain management problem. As I understand it, it is one of the many reasons why Tim Cook is the CEO of Apple.

That being said, I wonder if the forcing factor behind building their own servers for the big three companies was mostly the inability to get hardware fast enough. Sure, there are tons of other benefits you get after you start building your own servers, but I wonder if they would have been pursued if Dell could land servers on time. In this sense, building your own machines is a much smaller scale to try to supply chain your way around, even if you're Google. That and private companies can be much more agile since they don't have to support existing workloads. Hard drive shortage? Change the spec last minute to not rely on them. This is pretty exciting to see, and there are plenty of third party vendors making money in the space, but they seem to want to revolutionize the space, and I'm curious to see how that happens.

They’re not selling you anything, they’re raising money. I’m sure investors are getting the details after signing NDAs.

They're converting Twitter fame into capital funding. Not having a product means you technically don't have an obligation to deliver anything in return for that capital.

I was under the impression that potential investors usually don't sign NDAs.

Investors do not often sign NDAs covering "ideas", they will sign (and will often ask you to sign) NDAs with scope limited to investment terms and due diligence information.

Correct. Or rather, I've never seen it. Some are even offended when asked. N=~150

Probably depends on how much leverage those seeking funding have.

There is one post that's marginally helpful: https://sysmgr.org/blog/2019/12/02/a-new-machine/

My guess is they want to sell servers with a first class control plane. Something much less clunky than what's currently possible to cobble together with IPMI, PXE, Intel ME, grub, etc. Similar to what AWS, GCP, etc, use internally. And maybe expanding into routers based on these servers, also with a control plane, etc.

I'm extrapolating a lot out of little though. They are very vague.

Remember 'Color'?

"Color Labs, Inc. was a start-up based in Palo Alto, California. Its main product was the eponymous mobile app for sharing photos through social networking. It allowed people to take photos in addition to viewing other photos also taken in the vicinity."

I wouldn't compare the venture in question at all to 'Color'...

You're missing the point. The only reason Color is even known is because its principal had name recognition before the venture was started.

Wow, that was eight years ago...

I agree. All I see here are developers with huge Twitter followings pitching themselves for capital funding with ideals as the product.

I dunno what your problem with these people is, but Jessie is a Docker maintainer and Bryan built dtrace and went on to do fascinating things with illumos and Joyent. Steve Tuck spent 10 years at Joyent, starting as a Sales Director and ending as President.

These are all serious people, and the first two have reputations built on strong technical contributions to (even ignoring the wild overuse of k8s for resume-driven-development) complex technical products.

I wish it said this on their “about the team” page.

There are plenty of investors who think just having a large Twitter following is reason enough to invest.

They must have defined the problem they're trying to solve if they convinced an investor to fund their company, right? If they want to build any interest in what they're doing, you'd think they'd share it.

But no, we hear how excited they are and their fifty points defining what company they want to be. If the company's product is top secret, then why share anything at all?

Not necessarily. If you ask most investors, they usually will boil down an early-stage investment to "I invested in this team".

They focus on the team because that's how startups raise money these days; investors figure they could always pivot to something else. Obviously this can be problematic, especially when the founders are good at raising money. See WeWork.

Fair point "WeWork", but this early on I think the focus of an investor should be more on the founders vs. product/business idea. This early on there's a potential for pivoting, and you'd rather have smart, talented individuals that can deliver vs. a product idea

Sometimes, though, getting noticed is the hardest challenge a company can face. In these companies, name brand recognition is worth more than a billion dollars of funding. Kylie Cosmetics is just one example.

I think this is more true in the consumer space. If you're asking me to build a business on your product, I'm going to bolt at the first sign of flakiness (of which these vanity blog posts are a big red flag).

Perhaps they do, but they are not ready to share it with the world yet?

It's baby steps all the way at this phase.

Exactly. They are trying to cash out their Twitter fame. This company is essentially a personality cult at this point. It might turn out wonderful and I'm excited to see what the product will eventually be.

Twitter fame? That's like saying Carmack was just trying to cash out his Facebook fame.

Cantrill and Frazelle both have created software used by millions.

> Cantrill and Frazelle both have created software used by millions.

And yet here they are building a server hardware company??

> That's like saying Carmack was just trying to cash out his Facebook fame.

Thats an absurd analogy. Carmack is responsible for arguably the most important advancements in video game design. Specifically for code he personally architected and wrote.

For example - Frazelle is one of 1300 contributors (currently 13th in terms of # of commits) to Docker core. Not to diminish her contribution, but these individuals are contributors, not single handed creators (which was implied), of "software used by millions".

For the record - I enjoy Frazelle's content and personality. I wish her the best, but the commentary here is near cultish.

While I'm not very fond of the cultish commentaries, bashing them as soon as they announce their new venture isn't any better.

I think they have some merit and know a thing or two about computers.

Not sure why you're responding to me specifically, but I agree with your general premise, but disagree with your specific characterization.

The general criticisms here are warranted and I think are constructive enough that they'd be beneficial to these founders and others seeking to do something similar.

That's the point. It doesn't matter what they did previously or who they are. The current product can only be judged on its merits.

It absolutely does matter... someone with a track record of success will and deserves to get the benefit of the doubt when starting something new compared to someone with a poor track record or no record at all.

It's a biased population? Self-selecting? A track record may be "somebody who was at the right place at the right time". A track record of winning the lottery, for instance, doesn't mean squat.

A track record means, at bottom, that they had no disqualifying traits.

I agree, but also “no disqualifying traits” is more valuable than you are giving it credit for. If you want to use the scarce signals available to judge the prospects of a very new startup, biasing for founders with no disqualifying traits is probably not a bad start. There’s a reason YC, TechStars, etc all explicitly say they invest in founders more so than product ideas.

Sure. But the number of that kind of person, number in the thousands. Then hundreds of thousands.

Ironic you should pick winning the lottery, because it absolutely DOES mean squat. For instance, the only person I'm aware of with a "track record" of winning the lottery had a system to consistently win the lottery. Had you given your money to him you'd be rich.


The track record of software startupers/leaders/influencers/whatever going hardware, boasting with overconfidence before even starting, is abysmal.

WRT to Frazelle, are you referring to her contributions on the Docker team as "creating software used by millions"? If so, that seems disingenuous, or perhaps she was more of a core contributor than I realized? If not, what software are you referring to?

Carmack had 5 or 6 game engines under his belt before any social media even existed.

Usenet was one of the first social networks, and Carmack posted there in the 1990s

what software used by millions have they created? Solaris is nearly dead, and Joyent utterly failed to go anywhere.

> Joyent utterly failed to go anywhere

Joyent was acquired by Samsung, and it's not one of the bullshit acquisitions that destroys the projects/products — Samsung is now using Manta/Triton (and ZFS on illumos/SmartOS) at their massive, massive scale.


Docker and DTrace.

Isn’t it usually a red flag when the announcement of a startup talks about who the founder(s) and investors are before the product? Am I seeing shades of Theranos here? Why not focus on how far along the product is or the viability of the idea?

Seems like vaporware. Unless their idea of rack hardware is bending sheet metal into racks, it's not very clear how they would succeed. It's not clear at all what they actually want to do. Maybe selling some kind of custom rack servers with vendor lock-in? Like IBM mainframes, except without doing the advanced chips in-house?

I couldn't agree with you more!

This was the top comment by a long shot and is all of a sudden way down the bottom... hmmmmmm.

It's one of my highest upvoted comments ever with 250 points. Not sure why it dropped to the bottom so fast.

It immediately went from #1 to like #15 behind comments that obviously do not have a lot of votes. Take a guess why. HN fuckery afoot as usual.

We downweight bilious comments, especially at the top of threads. Why? Because the internet tendency is to make everything ill-tempered, peevish, and gross. The spirit of this site is curiosity (https://news.ycombinator.com/newsguidelines.html) and for that to have breathing space, there needs to be an active counterweight.

Call that fuckery if you like. The main thing to understand is that it's a global optimization, not something stupid like supporting particular PR campaigns.

That's very fair and I apologize for the snark.

There's a strong recency component to the comment ordering. New comments get a boost so they are visible enough to get upvotes if they deserve them, and over time the order settles.

It's just more signal not to use whatever they end up producing. Whether it was an astroturf campaign or they used personal connections to get HN to re-rank things, it doesn't speak well to transparency, honesty or being customer-centric. I think my original comment was a very valid criticism of vaporware and vanity blog posts used in lieu of actual product information. Trying to sweep that under the rug is not a good look.

It reminds me of that startup where each of the founders gave themselves a goofy title like "Chief Cleverberry" or something like that. I wish I could remember the details. I think it was ex-BlackBerry employees starting a phone company.

According to other comments on this thread all three of the names mentioned are already highly successful and financially independent individuals. I wonder if the high risk tolerance encourages them to focus more on the lifestyle aspects of this project (and if that is a trend we see in other startups founded by already successful individuals).

There is also something about well-off individuals taking investment money to get a fancy office rather than bootstrapping and keeping all the equity/destiny of the company themselves that smells funny to me. I guess it's "free money", but I have to wonder if the products would be better if it was sweat equity instead.

> There is also something about well-off individuals taking investment money to get a fancy office rather then bootstrapping and keeping all the equity/destiny of the company themselves that smells funny to me. I guess its "free money" but I have to wonder if the products would be better if it was sweat equity instead.

Based on the description of the company, they are working on hardware.

I'd agree with your statement if this was a pure software company, but anything involving tangible products can require initial capital orders of magnitude more than what 3 engineers are worth.

Speaking of that, does anyone here have a ballpark number for what it would cost and what the minimum order quantity would be like, to get one of the major Taiwanese OEM companies to manufacture an initial run of a custom server product? My guess is that the capital requirement would be in the 10s of millions of dollars.

Who tries harder - pro sportsballer with the check in the bank, or the first-year college player?

The pro sportsballer, hands down. His superior work ethic is likely a big part of why he made the pros, and 99% of first-year college players won't.

The average NBA career is about 4.5 years

Why so short? It's not a full-contact sport like the NFL or rugby.

They are starting in a garage https://blog.jessfraz.com/post/born-in-a-garage/

Read up before you make unfounded claims

I was misled by the top comment in this thread “talking about their fancy new office”.

Considering that apparently they will try to do something on the hardware side, additional capital may be necessary. Just guessing.

Real airports have control towers, baggage handlers, and hangars. Seeing them, it is a mistake to assume no flight operations. To assume it's only a cargo cult. Or to assume, in the age of Kickstarters, that it will never be more than marketing. None of which is my prediction of the company's successful shipping of quality product. Some companies never get past the low-hanging fruit. Other companies do.

Airports also have flights going in and out, so you can see that they are operating.
