Comments about the garage were probably in response to the jessfraz.com post. Other than that I didn't see any merged comments that were specific to one URL.
Looking at their blog, their proposition appears to me to be: AWS, Google, and Azure can build and manage their systems better than Random Corp. can by ordering servers from Dell and managing them itself.
Oxide Computer will try to bridge that gap and let anyone buy efficient, manageable servers.
This is of some interest to me as there are lots of use cases for which cloud isn't practical or has regulatory challenges, and this should give organizations for whom this is an issue some good options. If nothing else, it'll be interesting as a target for Google's Anthos or Azure's On-Premises service.
Edit: This is a good summary, I think:
> Hyperscalers like Facebook, Google, and Microsoft have what I like to call “infrastructure privilege” since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else!
But with that comes rigid standardization... which typically hasn't worked well in the enterprise. Everybody wants to be cloud-like until they figure out being cloud-like means you lose all of your hardware-level flexibility. Oh you wanted one with 512GB of memory instead of 768? Too bad.
Backblaze has their own custom server racks and we seem to like them just fine.
I HIGHLY doubt Oxide is offering to make small batch custom servers as part of their "bring the cloud on-prem". It's FAR more likely they're talking about taking OCP designs and adding their own software management stack on top.
I believe Enterprises are (slowly) learning this lesson. If nothing else Pivotal Cloud Foundry and various on-prem k8s solutions are softening them up.
>This is of some interest to me as there are lots of use cases for which cloud isn't practical or has regulatory challenges, and this should give organizations for whom this is an issue some good options.
Sounds, to me, like the exact premise of OpenStack, no?
I've been watching off and on for their launch announcement ever since. Some of the reservations about their business plan expressed in this thread ring true with me, and yet I hope they will succeed in spite of it all. The last thing the world needs is an internet which runs entirely on computers owned by Amazon, Microsoft, and Google.
> Hyperscalers like Facebook, Google, and Microsoft have what I like to call “infrastructure privilege” since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else. 
Is this same infrastructure privilege an effective, internal cloud? From my experience with large internal cloud platforms, the problems are almost always organizational ones.
Regardless, I am definitely hoping that Oxide can provide a better experience. Maybe by centralizing all hardware into a single group, rather than having network folks do switches, another team do servers, and a totally separate group do ACLs, etc., you could improve the experience of managing an entire datacenter.
Edit: Forgot the link:
In the same post, the author writes: "If you want to read more about some of the deep technical problems we will be solving check out my ACM Queue articles" followed by some links. The first, on open-source firmware, even appears to be freely available: https://queue.acm.org/detail.cfm?id=3349301
So that may help get the necessary context for this company launch.
These are three people who know product! But they could be totally wrong about how the market will take the product, and have to be willing to work together and possibly pivot until they find product-market fit.
Finding teams like this pre-product is what makes early stage investment so valuable. There should be no confusion about the focus on team and vision.
I am stoked to see cloud scale tooling become even closer to the hardware, and become more accessible. The completeness and openness of the stack is what attracted me to -- and kept me on -- Joyent's products. Best wishes on your new venture!!
It trips me out to see other people in the comments who don't know who Jessie and Bryan are. They're legends to me, and I consider myself basically an SRE. I highly recommend following them on Twitter, as well as virtually any talk they've given. Just go type their names into YouTube. Steve Tuck doesn't ring a bell for me, but considering the company he's in, I want to learn more about Steve.
I'm super interested to see what comes out of this. I think Jessie has a passion for security (also many other things, but it's what comes to mind), and Bryan for debugging (similarly, many other things as well). I think those are really good backgrounds to have when approaching hardware these days, considering the recent security vulnerabilities around speculative-execution attacks on hardware (Spectre, etc.).
Some of the conversations Jessie started on Twitter in the past few months raised some interesting questions around open hardware and open firmware, and the problems with proprietary, closed-source hardware and firmware.
It sounds like this Oxide project might be in a similar space as https://www.opencompute.org/. I'll be interested to see if they partner at all. I'm also very interested to see what the actual products are, how open any hardware or firmware ends up being, and if this venture is successful, if they end up at the same place as any other major hardware company or if they tread new ground.
I always find this funny. Why would everyone know who they are? There are thousands of interesting people in tech. It's impossible to know of everyone.
Totally! If you aren't familiar with Jessie or Bryan yet, though, I think you're in for a treat. So, keeping in mind that these are largely impressions from their public personas and that I know neither of these people personally...
Bryan is one of my favorite speakers/presenters. Virtually every one of his talks conveys his passion for and knowledge of the subject at hand, presented in as much depth and nuance as the time allows, with a great sense of humor and little tolerance for BS. Here's a non-comprehensive list of talks Bryan has given, from his blog: http://dtrace.org/blogs/bmc/2018/02/03/talks/ .
Jessie, to me, is like the Adam Savage of tech: curious, interested in a ton of different things, a maker and tinkerer, and just a generally hard-working, huge talent. I love her sense of humor, and she puts content out into the world just to share it with others. For example, she got to tour CERN back in May and shared a bunch of cool stuff from it, like this: https://twitter.com/jessfraz/status/1129845849054253056 which I really enjoyed. If you're on Twitter, I highly recommend following https://twitter.com/jessfraz .
If you have recommendations for other awesome interesting people worth knowing, I'd love suggestions!
That's what is meant by "cult of personality."
Triton is/was the Betamax of on-prem cloud orchestration. It was better tech than Kubernetes in the beginning, but lost out due to the stigma associated with Joyent/Solaris. Kind of a shame, as it is really good tech.
A bit more non-technical overview: https://www.joyent.com/triton/compute
Nice collection of floppies (3.5" and 5.25"). I wonder how long floppies stored in a garage actually remain usable?
The cans that say "RePop" and BSW on them are microphone pop filters from Broadcast Supply World. For podcasting?
On the shelf below the floppy shelf there's what looks like some kind of tape drive, and maybe those are tape cartridges to the left of it?
Is that a couple of motherboards in anti-static bags on top of the tape drive?
What is the box to the right of that? It kind of looks like an adjustable DC power supply, common in electronics labs, except I don't see any terminals on front...unless those two round things on the lower right are jacks? But the jacks are usually black and red, and I can't see any color there in the photo.
Based on the box colors, the top box of Coke is Twisted Mango Diet Coke, which was introduced in early 2018, putting an upper limit on the age of the photo.
There is some kind of tablet with pen on the desk. Anyone recognize what kind it is?
That's a lot of pens. They should have bought 3 or 4 fewer pens (which would hardly even be noticed) and used the money for a taller jar.
Interesting file cabinet. What applications is that style of file cabinet designed for? I don't think I've seen one with so many horizontal drawers that short before.
I can't recognize anything on the keychain sitting on the desk (well...anything other than keys I mean).
> _Further, as cloud-borne SaaS companies mature from being strictly growth focused to being more margin focused, it seems likely that more will consider buying machines instead of always renting them_
...is not something I think is true, based on my experience at a large company that has moved away from on-prem hosting to cloud providers. Cloud is cheaper. A lot cheaper, especially if using managed services (with the proper architecture! With the exception of logging, which is surprisingly expensive). The hardware costs of rolling your own hosting are not the expensive part.
Most SV startups don't care about operational costs, but they do care about the capital costs of buying hardware, primarily due to their very short-term focus. I think their own resumes matter to them more than their employers' long-term viability, too, so everyone pushes AWS or Azure etc. because it's hot tech.
Thinking long term is not a Silicon Valley thing. It's a totally different mindset.
Once you get really large, though, you can colocate a bunch of systems and storage, throw a hypervisor manager on them, and get almost all the benefits of a cloud-based system for an up-front cost that's probably not much more than a few months of your cloud costs (it looks like 3-4 months is the break-even on some large AWS instances I just looked at, compared to some 64-core/512GB boxes I specced out recently). You won't be able to scale quite as immediately to large load increases unless you over-provision, but at that point you're large enough that you should have a handle on what your load spikes are like, and maybe even be able to supplement with cloud offerings in a pinch.
In a way, cloud-based offerings allow medium-to-large businesses to have a sort of insurance on scaling, in that they are paying more for it, but they can stop at the right amount for their need or even back-off because of problems (or engineering coming up with a better way to handle some load).
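For concreteness, a back-of-envelope version of that break-even math. Every figure below is an illustrative assumption, not a real quote:

    # Rough colo-vs-cloud break-even sketch. All numbers are
    # illustrative placeholders, not real pricing.
    server_cost = 30_000.0    # one 64-core/512GB box, bought outright
    colo_monthly = 500.0      # rack space, power, bandwidth per box
    cloud_monthly = 8_000.0   # comparable large on-demand cloud instance

    # Break-even month m solves: server_cost + colo_monthly*m == cloud_monthly*m
    months = server_cost / (cloud_monthly - colo_monthly)
    print(f"break-even after ~{months:.1f} months")  # ~4.0 months

Vary the assumptions as you like; the point is just that the amortization window is measured in months, not years, once the instances are big enough.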
And hilariously, the margin gains are not in COGS (e.g. computing) but in reductions/efficiencies in S&M and R&D. Most modern SaaS companies are 80% gross margins.
> it seems likely that more will consider buying machines instead of always renting them
And the companies buying physical hardware will want to know whether the vendor supplying it is going to be around to support it in 5 years, when it needs to be refreshed.
It seems like they are building a product that is anti-trend. Seems odd.
> Cloud is cheaper. A lot cheaper, especially if using managed services
This is not universally true.
I imagine that’s the extreme end of the market Oxide is targeting: Provide a HW/SW platform to build your differentiation on, because the capabilities and value (& moat) aren’t there in the commodity cloud.
If you run a data center in the same city or cities where your engineering strengths are located, that doesn't necessarily give you the geographical diversity many companies need. But if you don't run any data centers, you're blind to some of the cost structures and architectural limitations of the system, and you can become soft.
I was going to draw an analogy to Google and Facebook having their own hardware divisions, but I don't really need to, because Oxide is planning to make cloud-friendly custom hardware for data centers. This seems like an area the incumbents should be all over, but I can't recall the last time I read of innovations from them. Which means there is space for someone new to establish a toehold.
If I were based in Chicago, I'd want a data center in Chicago and cloud servers on the West Coast (and Europe, etc., as applicable). But the lock-in situation is untenable to me right now. It's pretty easy to end up deploying multiple solutions for the same situation, which just complicates reasoning about the system. It bears a resemblance to the Lava Flow antipattern, and I can tell you that either can be no fun at all.
Computers. Literal, physical computers you can buy and stick in a server rack.
Specifically, computers with the hardware/software optimized and tuned for "hyperscaler" uses (think Kubernetes).
I see The Cloud as being a great play for new companies, and companies with server needs that are both variable and out of phase with everyone else's. Small startups because time is their most precious resource, and picking a major PaaS provider minimizes the time spent making decisions about things that are tangential to the core business. e.g., if you use AWS then you don't have to hesitate for a moment on your object store; it's going to be S3. If you use on-prem cloud, then you'll likely have to burn a few weeks shopping around for vendors and getting it rolled out before you can be up and running. Multiply that cost by every single decision, and you've got a whole lot of distraction hitting you at the most inconvenient possible time.
I can see that Dell, Lenovo, etc. are already offering "Kubernetes" optimized servers; it's not clear how much of it is just marketing, though. Perhaps this new company wants to offer something similar, but be more competitive by taking advantage of superior software design skills (if I understand it correctly). Then it would just be a middleware company disguising itself as a "computer company".
And plenty of people run, e.g., OpenStack on their own hardware in their own server rooms / DCs.
The speed of light isn't getting quicker any time soon, so low-latency processing of locally-generated data requires local compute resource.
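To put rough numbers on that: light in fiber propagates at about two-thirds of c, which puts a hard physical floor under round-trip time to any remote region. A quick sketch (the distances are illustrative):

    # Physical lower bound on round-trip time over fiber.
    # Light in fiber travels at roughly 2/3 c, ~200,000 km/s.
    C_FIBER_KM_PER_S = 200_000

    def min_rtt_ms(distance_km: float) -> float:
        """Best-case RTT in milliseconds, ignoring routing detours,
        queuing, and processing delay (real RTTs are always worse)."""
        return 2 * distance_km / C_FIBER_KM_PER_S * 1000

    print(min_rtt_ms(50))     # nearby metro DC:  ~0.5 ms
    print(min_rtt_ms(4000))   # cross-country:    ~40.0 ms

No amount of cloud engineering gets you under that floor, which is the whole case for keeping compute next to the data.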
> setting up infrastructure themselves is in a great deal of pain and they have been largely neglected by any existing vendor
Can you elaborate on what these pains are, and how OEMs aren't helping their customers with this?
I guess what I'm trying to ask is: what will Oxide do different compared to existing OEMs?
I am not sure about this. It seems all their experience is geared towards software, with little to nobody from a traditional hardware engineering background. Wouldn't they be better off using Open Compute designs and focusing on the software inside? Building hardware just seems like a stretch when trying to compete against Dell, HP, and Supermicro, all of which have "Kubernetes"/high-density offerings.
How do you know that's not what they're doing? If they sell servers it doesn't mean they designed them. I wouldn't be surprised if they have some hardware changes for secure boot or something though.
They are doing for on-prem cloud with hot-swappable computers what hot-swappable hard drives did for on-prem storage.
My prediction is that their technology will be used by existing data centers more than it will be used by enterprises wanting to return to on-prem from their current cloud operations. They might help serve internal customers in the enterprise, like developers, and intranets, but the cost of pipes will be prohibitive for companies needing lots of bandwidth to serve external customers.
That Sprint line is like a farm to market road to your warehouse, which works fine when you are serving customers from an ecommerce site or sending and receiving emails or even opening files from NYC and your onprem cloud is in NYC.
It breaks down when you need higher throughput to serve millions of customers. At that point, you need to be at the crossroads of a couple interstate highways which would be analogous to a carrier hotel like at 1 Wilshire in Los Angeles.
I expect oxide.computer will open up on demand scalability to existing service providers who already rent the physical space in locations like 1 Wilshire. Most of those providers today buy hardware from HP or Dell, which is expensive and then also have to manage the software layer on top.
From what I can tell, based on the very little actual product marketing I have found, Oxide will provide a turnkey solution from one shop. Unfortunately, at the moment, it's hard to figure out what the company actually does, because right now it's all about the team.
When you need to serve millions of customers, you get space in a datacenter with multiple telecom providers and start peering at various exchanges. You certainly don't pay per byte of data transferred, and you don't start to build your own physical plant.
Notice how the hardware is actually sold by IBM, Dell, etc.
The difference is that Oxide is building and selling the physical hardware themselves. When the customer needs more capacity, they could just order 10 new, let's call them "blade"-type, computers from Oxide and pop them in a rack at much lower cost than a blade from IBM.
People are confused because of the comedic level of bullshit (it reminds me of "the box" from an episode of the TV show Silicon Valley).
This is a fantastic development for the Rust community that will likely have a positive impact on the Linux ecosystem in the long run, if they do this driver development in an open manner.
I look forward to following their developments.
It's a bit like telling us that Susie and Jim from Facebook just got engaged. It's good news if you know them, but a bit disappointing otherwise.
The book won a Pulitzer Prize for non-fiction.
> desire among customers for a true cloud-like on-prem experience
They know companies are insistent on making the mistake of trying to build and run their own platforms, so they're becoming the company that you can throw your money at if you're not willing to pay for IBM/Oracle/etc private cloud.
Old folks remember the revolution in distributed computing came from commodities. The idea was to get a lot of the cheapest resources you could and loosely tie them together. This was fast, cheap and effective, because everything was disposable, flexible, open. Even if you hired a dozen engineers, you saved 3 mil a year on enterprise hardware and software licenses.
What these folks are betting on is that businesses want to "buy a cloud" cheaper than they'd get it from a traditional cloud vendor. But to get cheap hardware and software, you need to strip it down to the bare essentials, use the cheapest, crappiest parts, and literally throw away parts rather than fix them. Purpose-built hardware and software is the opposite, and of course this ignores the management costs (unless that's going to be part of the payment strategy?).
True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.
How are they going to do this? Buy equipment from other vendors and resell it? Build their own servers from scratch? Nothing in the above paragraph tells me how they plan to be better than any other company right now, or why, as a business owner, I would want to buy my servers from them.
You don't bet your on-prem projects on newly born startups.
Sounds cool. Should be a good team to pull it off.
I can't help but think this is a huge swipe at companies like Airbnb, Uber, Lyft, and any of the others that knowingly break laws (or aid others to do so) and hope they can get away with it, or change the law(s) later.
Glad that Oxide will hopefully allow people and companies to run their infrastructure on components that aren't so old and may be easier to use.
I wish them luck; it's a very interesting approach they are taking here.
A colo box can be great if you can amortize the cost among a dozen friends. Otherwise, I think your money goes much farther on AWS and GCP, especially if you know some basic cost saving techniques like reserved instances or not using instances at all (Lambda, Fargate, S3 and other on-demand services/their Google equivalents can cover a lot of needs nowadays).
If you can get away with hosting on your own computer on a residential connection, that's great for you, but it's a totally different product from what AWS/GCP/colocation offer.
By the way, reserved instance pricing on AWS and GCP starts around $20/year. But if you know how to use S3/GS and Dynamo/Datastore, you don't even have to use an instance.
At the moment I just use EC2 and GCP instances (and a couple of GCS buckets).
On Google, the comparable offering is an f1-micro instance, which goes for $3.88/month (https://cloud.google.com/compute/vm-instance-pricing#pricing).
“Oxide is building a new kind of server.
True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.”
This isn’t a blitz to help with fundraising: they already have the money.
They are also hiring - which is why they have a long list of principles and values, and talk about each other. They’re looking for kindred spirits.
This is a good idea, from people with a history of good ideas.
You know they're mostly software people, right?
That's actually irrelevant, though. Even if all three of them were Woz-level hardware hackers, that's no guarantee that they're a bankable success. Being an impactful software or hardware company takes a team, and in the US at the moment that team is mostly about financial and legal work, not technical work.
We've all seen that "best technical idea" does not usually equal "best business success"
I wish these folks well, and I'm sure their fans will enjoy following them, but people acting like this is a major thing for the whole tech industry are gonna be disappointed.
> True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.
Surely, this influences manageability (and $$).
Does it influence the on-demand provisioning benefit of hyperscalers (achieved at scale) though?
_One_ of the reasons my team and I chose a hyperscaler is that I get my compute today, now, rather than having to wait weeks or months for my in-house IT to get it for me.
1. They believe there will be a shift back to on-prem, due to cost, improved security & latency.
2. They can take commodity hardware designs ( Open Compute Project ) and add their own software for manageability.
3. They are big on values, because in the past Bryan got a bit of a rep for being hard to work with.
( eg https://blog.valerieaurora.org/2016/10/22/why-i-wont-be-atte... )
1. I'm seeing this in big companies - cloud can be an expensive option if you are generating your own data on-prem. While the economics might change, security and latency issues won't so easily. Also, cloud is a big risk: realistically, what is your migration cost to another provider? If that's high, that's how much your cloud vendor can gouge you before you move (I like to call that the Oracle business model).
2. Don't know about the hardware, but those guys have a track record on systems software and Bryan was involved in building similar stuff before in the storage space ( http://www.oracle.com/us/products/servers-storage/zfs-storag... )
3. He is definitely older, perhaps he is wiser.
More on OCP here: https://www.opencompute.org/
That's a bold claim. What I've seen in his talks is that he has always been big on values, for example he's been espousing the Sun values that they use here for quite some time, and he has also been very critical of companies with lousy values.
Don't know him personally.
Exactly. Very critical - what if the thing he was being very critical about was you? Read the linked post I sent.
I.e., it's not about the big ideas ('don't do evil'); it's about whether he was hard work personally.
As I said he seems to have recognized that.
BTW I don't know him personally either.
I probably wouldn't like it. So?
To evaluate the situation, we would have to know what he was being critical of. For example, his take-down of Uber's "corporate values" was brutal, funny and IMHO fully justified.
> Read the linked post I sent.
I did read the blog post you sent; I had actually read it before. It didn't gain any substance on a second reading: unsubstantiated accusations of... what, exactly? Not making the poster feel as warmly cuddled as they wanted to feel. So while I probably wouldn't agree with the poster even if things were exactly as she describes, we are not given any actual, real situations with which to evaluate whether that is the case.
And I've never seen him take down people (except Larry Ellison, maybe), only code and technology.
See: Egoless Programming, or "You are not your code".
I have worked in many different environments, and in the end the ones that were of the "cuddling" kind, particularly when self-proclaimed, were the highly toxic ones, whereas the ones with the refreshing, sometimes-verging-on-brutal candour were the healthy ones.
So while I also don't know the poster you referred to, to me it is those people that are turning this industry toxic.
Here is the particular passage I was referring to:
> And most important of all (with the emphasis itself being a reflection of hard-won wisdom), we three share deeply-held values [....] — and employees will be proud to work for.
> And I've never seen him take down people, except Larry Ellison, maybe, only code, technology,
Somebody wrote that code, or built that technology.
Note this has nothing to do with whether the code is rubbish or not; it's about a communication style and a mindset that doesn't care or think about the person behind that code.
I'm sure he didn't mean it, but the endless taking down appeared to get to some of his colleagues, and perhaps because he didn't mean it, he didn't realize until it was pointed out to him.
So as the other commenter said skate to where the puck is going to be - and don't rely on Gartner for puck location trends.
I don't care about the team behind it. Give me a tangible product. Or at least a sense of what the product will look like.
Great to have options.
There’s definitely a hole in the market for “turnkey private cloud done right” that isn’t just a massively marked-up bundle of servers requiring consultants to effectively set up and operate.
Also, if CCIX takes off, along with workload-specific accelerators, they’re in a much better position to quickly serve those niches rather than waiting six months to rent some cloud instance that might be making all the wrong trade-offs.
(Obligatory clarification: of course I'm joking; but is that "Chief People Officer"?)
Frazelle is pretty well known for Docker and container-oriented stuff. I think she was at Google for a while, but maybe Microsoft? Somewhere big doing cloud things, for sure, though.
I don't know the other person. But, I would guess they have to be reasonably impressive to be on this team.
Silicon startup challenges are rough. RISC-V is probably the best hope for a new architecture (hint: it's not a new architecture).
These guys seem like they’ve identified a market and who their first customers are going to be.
So even if the VLIW chip was on paper more powerful/efficient (and I'm not sure it was), it just couldn't keep up with doing it in hardware.
Would be fun if they used Ampere :) but my guess would be that the first products at least are gonna be Corebooted Intel platforms.
1. Bad HN title (it doesn't say anything about what they did or about the computer they're making)
2. Landing page image has some generic background and as far as I saw, no tangible product, other than some shelves of their latest workspace.
3. If they could just include some diagram here or there and put some bold text/say they're building AWS for people/startups, that'd be great and to the point
Not bashing them, but just some thoughts I had about their presentation.
Regardless, I'm extremely excited for what they're building. Been waiting for something like this to pop up.
From Jessie's blog post, it looks like this is the actual garage where she supposedly started the company.
But I agree, there is absolutely zero information on the product they are planning to build.
Probably because they are still working on fundraising.
"Oxide is building a new kind of server.
True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure."
>Kick butt, have fun, don’t cheat, love our customers, change computing forever
Look at the job req for "hardware engineer": they have no idea how to solicit EEs or what they are even looking for in the _most_ important roles for doing anything in this space (roles none of the personalities have skills or credibility in). That they duped a VC into funding this is side-splittingly hilarious and a sign of an impending correction in tech.
Frazelle was one of the people responsible for Docker.
I think Tuck is a finance guy who used to be a sysadmin? And he was COO of Joyent for a while, apparently.
How did Oracle kill the Sun blogging culture? What/Who was the Sun blogging culture?
There was a _huge_ amount of official-unofficial documentation in the blogs that disappeared from the Sun site when Oracle ran its bulldozer through it.
A better question would be "What didn't Oracle do to kill the Sun blogging culture?"
EDIT: Got Microsoft and GitHub the wrong way around.
She went from Microsoft->Github, not the other way around, but other than that you're correct.
Jessie Frazelle did some serious Docker work and went on to work on Kubernetes and Hyper-V; a brief stint at GitHub saw her working on their Actions product.
Bryan Cantrill worked on the Solaris kernel for a decade; his DTrace is particularly famous. Then he was CTO at Joyent. You may have heard of one of the projects they helped with, called Node.js?
Yet there's essentially nothing about what the product is (that being the only thing most people care about). This reminds me of first time founders who can't wait to get "CEO" business cards. Worry about the product first, last and in-between. The rest is just there to signal to us that your priorities aren't in the right place.
I know two people on the “30 under 30” list. Both of them are incredibly charismatic and charming in person. Their Instagram and Twitter accounts churn out constant brand building material. They both have pseudo-startups with noble causes and vibrant websites. Their startups have a list of impressive advisers, including B-list senators and industry executives.
However, neither of them have made any progress on building an actual business. One of them has supposedly been developing the same simple product for almost 7 years now, but they’ve never been able to produce even a proof of concept prototype.
I thought I was missing something for the longest time, until I let go of the idea that they were really trying to build a company. They’re not. They’re building their personal brand, and succeeding wildly thanks to publications like the “30 under 30” list that have an insatiable appetite for underdog success stories.
Surely some of these companies are legitimately successful with great business models, but they’re mixed into these lists with the brand builders who know how to game the system. I’d be interested in reading an honest “Where are they now” follow up series that checks in with these founders at the 5-year mark after they make this list to see who the real successes are.
Source: “The Forbes ‘30 Under 30’ Hustle,” Hacker News, December 2, 2019: https://news.ycombinator.com/item?id=21082523
Edit: I was an early CoreOS employee who was the first "boomerang"
On the positive side, you've got the fact that they made it to a profitable exit, which would be hard to ignore if you had a financial stake in the venture. You've got the implication that Red Hat believed that CoreOS was essential to its fortunes in a container-oriented future as a clear vote of confidence that they built something good. The fact that Red Hat kept CoreOS going suggests that they felt it was so good that their existing technology couldn't realistically catch up.
On the negative side, if you don't like Red Hat then, yeah, that certainly counts as selling out.
And since Red Hat kept Container Linux/CoreOS open source, this wasn't really a move to eliminate competition. They can support enterprise clients, but the open source nature doesn't stop another company from offering their own enterprise support.
Depends on the acquisition price. Acquihires are failures. Investors might get their money back, and everyone else (including founders) walks away with nothing but a job at some company they didn’t necessarily want to work for.
Here are some more links if you're interested: https://blog.jessfraz.com/
* Edit: Rephrased from "O.G. Sun Engineer"; I don't know how accurate "O.G. systems engineer" is, it's all relative anyway. For reference, I'd consider Jess an "O.G. containerization engineer" :)
From Bryan's linked blog post, he joined Sun in the mid-90s. Sun was founded in 1982, so he was an (excellent) engineer there, but not one of the "O.G.s".
He's not that old!
Uhh, you mean computers?
However, that's at the OS level. What can realistically be done at the hardware level? It must be possible in theory to design a CPU that's better at this kind of context switching, but I don't know if a new "computer company" really wants to go there.
Not for a long while.
The other folks though, I have no idea.
I worked for such a CEO. In the first year we had four week-long retreats: an island in WA state, Palm Springs, Austin, and Banff (that's where the stats team was). TONS of swag. Aeron chairs. New MBPs. Oh, and champagne Fridays. Needless to say, it burned through its goodwill seed round in one year, and the CEO begged for another year's worth of money from friends before shuttering. But hey, hype sells in the software world.
I am not an expert in the field, but I work at a place that is very resistant to cloud solutions for certain applications, and getting the same stuff working on prem can be difficult and pricey.
Trips are pretty silly though.
It's such a thankless job and one that has a specific skillset that is fairly rare. You can be the Founder but not the CEO; something people seem to forget in this business.
Off the top of my head, here's what would be required besides just cheap hardware.
1) Cloud-like software to manage it, or native k8s support. I don't know exactly how that would be done, but the administrative costs of using this hardware can't be so high as to make the cloud a more viable option.
2) Some options for network access. The cloud does not just provide VMs but also the underlying reliable network with multiple redundant pipes. Comcast Internet access might be good enough, but some customers might require large redundant pipes. While Oxide might NOT need to solve this themselves, they should make sure the market provides solutions that, when taken into account, still allow for a cheaper-than-cloud total.
3) Physical location... same as item 2 but for physical location of the hardware.
I remember managing colos back in the day. It really sucked, and it was not just the hardware costs that sucked. They should consider that.
This is exactly how any "WeWork"-type company sucks in investors. You'd just dismiss the stupid out of hand, except that it would be so nice to believe that IF this company is on the level, and IF their product does what they say it will, and IF they deliver it in a reasonable time frame, then it will solve the problem and somehow make money for investors.
Forget the names and reputations of the people involved... if three random people came up to you at a conference and said "We've formed a company to solve problem X!", wouldn't you wait until they showed an actual product to even think about potentially making decisions based on what they might do?
Just moved from AWS to Vultr & DO and it's loads cheaper and faster for my use case (lightweight landing pages basically).
Managed services are still incredibly tough because of client support, and like you said, relational databases and key-value stores.
Another overlooked benefit of building your own hardware is that you don’t need to add complex overhead like virtualization or containers. You can “right size” each server with the perfect amount of CPU, IO, and RAM for the processes you will run on it. Paired with a kernel compiled specifically for that server, your business's applications will be vastly more performant than ones running on a cloud platform.
Honestly, too many businesses hop on the latest fads. The smart ones see the value in going back to the basics.
Cloud vendors don't have 90% profit margins; the cost of cloud is what it is because operating computers at scale is hard.
Maybe sometime in the future, when I have time, I can go through these long-form types of content, but they say nothing about what they're building on their landing page, and they don't describe it at the level of detail I assume they go into in their podcasts.
Similar to what you're saying, it's like buying the swag for a company that doesn't exist.
Kinda interesting ofc, but how many enterprises actually need something like this? If there were an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) at the "rack scale", we would probably already be doing it and running mainframe hardware for everything.
Believe it or not, going private but not having to give up the niceties of AWS or Azure would be quite appealing.
Which has the side effect of their solutions being bespoke, because acceptable cost looks like "Tell me when I need to stop adding zeros to make this happen."
To go another way you either (1) need to be Amazon-scale already (i.e. "there aren't enough zeros to make inefficiency worth our time") or (2) be willing to say no to huge profits out of ideological purity.
The well-funded client / custom trap is real.
"...the sharpening desire among customers for a true cloud-like on-prem experience (and the neglect those customers felt in the market) made it more in demand than ever."
I think that's a good summary description.
In other matters, I went looking for the Rust connection (given a name like Oxide) and was not disappointed.
Or "I bought a rack full of Oxide and now I have my own puddle of elastic compute on-premises." (If you condense a small cloud, you get a puddle, I presume.)
If hardware is involved it would seem like the latter. In which case they'll be competing with Amazon's on-premises solution and commodity hardware.
What they did with Nitro is develop custom PCIe devices to handle storage and networking, so these devices' virtual functions (SR-IOV) are directly passed through to the VMs and now the hypervisor basically has nothing to do other than switching contexts.
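Nitro's cards themselves are proprietary, but the passthrough pattern is reproducible on commodity KVM hosts via SR-IOV. Here's a minimal sketch using the libvirt Python bindings; the guest name and PCI address are placeholders for whatever your host actually exposes:

    # Hot-attach an SR-IOV virtual function to a running KVM guest,
    # so the VM drives the NIC directly and the hypervisor stays out
    # of the data path. Assumes libvirt-python and an existing,
    # running domain; names and addresses below are placeholders.
    import libvirt

    # A VF previously created via sysfs, e.g.:
    #   echo 4 > /sys/class/net/eth0/device/sriov_numvfs
    VF_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")  # placeholder guest name
    dom.attachDeviceFlags(VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)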
I'm afraid what people mean by that statement and what the startup is going to try to do are completely different things. People mean AWS-like cloud services running on multiple servers they control, with all the fault tolerance and geographic distribution; for the startup, it probably means cloud services running on expensive servers in a fast local network.
Imagine you and your friends excitedly announcing on your personal homepages that you are setting out to build your passion project into a going concern “not driven by power or greed, but by accomplishment and self-fulfillment”  and being flooded in a community of supposed hackers with derision that your post wasn’t a good enough press release.
For some people, the actual human excitement of friends earnestly getting together to build something new primarily because they want to see it in the world is so foreign that they can only snark about it.
Hopefully that kind of response vindicates their values more than it demoralizes.
Until you have a product, there's nothing to talk about. Plenty of teams that seem good end up producing nothing. Announcements like this are a huge red flag to people who have seen this pattern many times over.
Yeah ... if you’re a marketer. People who actually make stuff are allowed to be excited about what they’ve been working on in their garages and get to tell people more about it on their own schedules. They don’t owe you a feature comparison table every time they talk about their project.
In fact, they don’t owe you anything at all.
Considering you literally haven’t been told — let alone allowed to use — what they’ve built, that sounds like a reasonable prior.
That being said, I wonder if the forcing factor behind building their own servers for the big three companies was mostly the inability to get hardware fast enough. Sure, there are tons of other benefits you get after you start building your own servers, but I wonder if they would have been pursued if Dell could land servers on time. In this sense, building your own machines is a much smaller scale to try to supply chain your way around, even if you're Google. That and private companies can be much more agile since they don't have to support existing workloads. Hard drive shortage? Change the spec last minute to not rely on them. This is pretty exciting to see, and there are plenty of third party vendors making money in the space, but they seem to want to revolutionize the space, and I'm curious to see how that happens.
My guess is they want to sell servers with a first class control plane. Something much less clunky than what's currently possible to cobble together with IPMI, PXE, Intel ME, grub, etc. Similar to what AWS, GCP, etc, use internally. And maybe expanding into routers based on these servers, also with a control plane, etc.
I'm extrapolating a lot out of little though. They are very vague.
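For a sense of what "cobbling together with IPMI" means in practice, here's a hedged sketch of fleet power control by shelling out to ipmitool (hostnames and credentials are placeholders); this is roughly the glue a first-class control plane would replace:

    # Fleet power control the old way: loop over BMCs with ipmitool.
    # Hosts and credentials are placeholders; every vendor's BMC has
    # its own quirks, which is exactly the pain described above.
    import subprocess

    BMC_HOSTS = ["bmc-rack1-u01.example.com", "bmc-rack1-u02.example.com"]

    def chassis_power(host: str, action: str) -> str:
        """Run 'chassis power <action>' (status/on/off/cycle) on one BMC."""
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", "admin", "-P", "changeme",
             "chassis", "power", action],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    for host in BMC_HOSTS:
        print(host, "->", chassis_power(host, "status"))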
I wouldn't compare the venture in question at all to 'Color'...
These are all serious people, and the first two have reputations built on strong technical contributions to (even ignoring the wild overuse of k8s for resume-driven-development) complex technical products.
But no, we hear how excited they are and get their fifty points defining what company they want to be. If the company's product is top secret, then why share anything at all?
It's baby steps all the way at this phase.
Cantrill and Frazelle both have created software used by millions.
And yet here they are, building a server hardware company??
> That's like saying Carmack was just trying to cash out his Facebook fame.
That's an absurd analogy. Carmack is responsible for arguably the most important advancements in video game design, specifically for code he personally architected and wrote.
For example - Frazelle is one of 1300 contributors (currently 13th in terms of number of commits) to Docker core. Not to diminish her contribution, but these individuals are contributors, not single-handed creators (which was implied), of "software used by millions".
For the record - I enjoy Frazelle's content and personality. I wish her the best, but the commentary here is near cultish.
I think they have some merit and know a thing or two about computers.
The general criticisms here are warranted and I think are constructive enough that they'd be beneficial to these founders and others seeking to do something similar.
A track record means, at bottom, that they had no disqualifying traits.
Joyent was acquired by Samsung, and it's not one of the bullshit acquisitions that destroys the projects/products — Samsung is now using Manta/Triton (and ZFS on illumos/SmartOS) at their massive, massive scale.
Call that fuckery if you like. The main thing to understand is that it's a global optimization, not something stupid like supporting particular PR campaigns.
There is also something about well-off individuals taking investment money to get a fancy office rather than bootstrapping and keeping all the equity/destiny of the company themselves that smells funny to me. I guess it's "free money", but I have to wonder if the products would be better if it were sweat equity instead.
Based on the description of the company, they are working on hardware.
I'd agree with your statement if this were a pure software company, but anything involving tangible products can require initial capital orders of magnitude more than what 3 engineers are worth.
Read up before you make unfounded claims.