I was once-upon-a-time a Data General field engineer back in the '80s and bumped into The Soul of a New Machine around '87. It's a great read and a nice insight into DG as a company that was still considered a wee bit of a rogue outlier compared to DEC and IBM.
I'm a member of a closed Facebook group of ex-DG employees (I was permitted membership, despite never being an employee, because I was in the broker game back then). They're a really nice bunch of folks, though growing older by the day.
I had the pleasure and luck to work on the older Novas (the 800, 1200 and 3), the Nova 4, and the 16-bit Eclipse range such as the S/130, plus all their associated peripherals (Phoenix and Gemini hard disks, Zebra disk drives the size of two washing machines, etc.). Fun fact: you could upgrade a Nova 4 to an S/140 by "obtaining" the correct microcode PROMs and performing some other minor patching. We did this; it wasn't always considered legal, but every other broker out there was up to the same game, and DG didn't seem to mind because by then their mainline products were the MVs. DG was a very leaky company when it came to getting hold of "stuff"; I don't recall anyone being sued over unlicensed and pirated copies of RDOS or AOS etc. One thing that was expensive from DG but treated almost like a consumable were paddle boards: basically passive PCBs that interface between the inside of the machine and the outside world. We never bought these from DG; they always came from "some guy", and a box of 20 cost less than a single bona fide DG part. DG knew this but never complained.
The diagnostic tools (DTOS and ADES) were tremendous; coupled with a portable fiche reader, they let you diagnose and fix most problems on site. These were good times, and I learned a huge amount about problem solving as a young broth-of-a-boy engineer. I still have a copy of "How to Microprogram Your Eclipse Computer", from which I learned about microcode and that assembler wasn't really the true bare metal of a CPU :) I have other war stories I should write down some time.
TL;DR: Data General (a spin-off of DEC from before my time) with the Eclipse team was battling DEC (a 1950s tech giant that made its name subverting IBM) with the VAX team for "first 32-bit" bragging rights. (To make things interesting, for me at least, my Dad was an expert in Data General technology and I was a teenage mutant DEC nerd.) When I learned VAX MACRO-32, people were drawing comparisons between it and the (even then, older-than-dirt) IBM 360 assembler, and it was totally mind-blowing. Getting rid of the memory limit of the (preceding) PDP-11's separate code and data segments and introducing a 4.3GB virtual address space (the "Virtual Address eXtension") changed everything. Before that, programmers had to rely on complex "overlay" techniques to swap program and data segments in and out of memory at the application level (under RT-11 and RSX-11, the operating systems preceding DEC's 32-bit VAX/VMS, and under RDOS, the operating system preceding DG's 32-bit AOS). That was too hard for non-specialists, so someone wrote an entire operating system (RSTS/E) in BASIC, which was quickly starting to dominate in the run-up to VMS (and probably also inspired Bill Gates and his BASIC interpreter ROMs for the competing microprocessors).
iPhones/Macs and even the Raspberry Pi are all dumping 32-bit now, but cost-effective 32-bit lives on in nRF- and ESP-class microcontrollers that will surely "eat the planet". (Who needs to carry a phone, don AR glasses/earbuds, or sit at a desk/tablet when every planar surface as far as you can see is an I/O device run by $0.10 microcontrollers that network to edge nodes for any memory/compute heavy lifting?)
He discusses "open firmware", which is not Open Firmware. That was a boot ROM system from the 1990s, written in Forth and intended for use with a console interface. That's not what you want today. A good question to ask is: what do you want today at that level, and what do you not want? For example, a security-oriented "cloud" company might want to load the machine, restart the machine, and freeze and dump the machine in an emergency, but not have the ability to examine or alter memory while it's running. Who patches a running production machine any more? Today's server firmware, with an administrative CPU that phones home and listens for commands to do who knows what, tends to have way too much capability for making small changes quietly and listening in on what's going on.
This one statement feels rather bland: "those advances have been denied to the mass market"
What does this mean?
It was not denied; it was just too complex for the mass market. People are happy to pay AWS so they don't have to worry about the machines, and can write JS code from day one.
In the end, the less sophisticated Citrix XenServer we've been using for about 10 years now seems to be more hackable in some ways.
Googling "tioga pass server" brings up https://engineering.fb.com/data-center-engineering/the-end-t... which says nothing and https://www.mitacmct.com/OCPserver_E7278_E7278-S who seem to be selling them.
Tioga Pass appears to be a small dual-socket server. How is it different from a typical dual-socket blade server?
They are complaining about the PC heritage in the server world, but this is actually a huge convenience: it's standardized hardware, so for example it's easy to install any software made for PCs on them, including Linux. The cost of this compatibility is not very high these days (in terms of silicon area).
Blade servers have had centralized power supplies since forever.
Also, you can certainly get servers without CD drives :-) Rack front and back panel area is actually at a premium, so for example many modern servers are just packed with 2.5-inch drives.
I've not yet watched the presentation, and I'm not familiar with this stuff, so apologies if I'm missing something, but what is the difference between buying a server from Oxide and buying e.g. https://www.opencompute.org/products/109/wiwynn-tioga-pass-a... (from one of the vendors on the right)?
OCP vendors cannot produce their products for the mass market unless there is strong demand. Certainly it looks like the market mainstream is not too passionate about building or managing its own machines.
I don't deny that some people, in certain circumstances, would demand offerings different from the market mainstream.
And I totally understand why a statement like "a was denied to b" was used here.
I was merely stating that, for the mass market, there is no serious demand for what's claimed to be denied to them. And I am saying that from a technical perspective rather than a marketing or PR one. (And I am very positive about the necessity of marketing and PR.)
Also, cloud is very costly if you don't use the up-scaling and especially the down-scaling, because your application/infrastructure wasn't really designed for that. And if you buy a new machine for the factory, it usually comes with software (typically MS Windows Server + MS SQL Server + some machine-control software) whose hardware requirements don't fit well with cloud pricing. Such machines tend to run for decades, and the company certainly hasn't thought about being efficient with computing resources on the server.

On-premise hardware isn't that costly if you consider these factors. If the supplier cannot secure the machine properly, you slap it into its own VLAN, write an ACL for the RDP access (because that is how it is), and are done with it. Basically dedicated gigabit speed with very little latency for any communication between the clients and the server. Remember, you are almost lucky if a Windows Update doesn't break the software/software license on the server or the client...
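That VLAN-plus-ACL lockdown could be sketched roughly like this (a hypothetical IOS-style config fragment; the VLAN number, subnets, and addresses are all made up for illustration):

```
! Hypothetical example: isolate the vendor's machine-control server on its
! own VLAN and only allow RDP (TCP 3389) in from the client subnet.
ip access-list extended VENDOR-SERVER-IN
 permit tcp 10.10.20.0 0.0.0.255 host 10.10.30.5 eq 3389
 deny   ip any any log

interface Vlan30
 description Isolated vendor-machine VLAN
 ip address 10.10.30.1 255.255.255.0
 ip access-group VENDOR-SERVER-IN out
```

The ACL is applied outbound on the server's VLAN interface, so only RDP traffic from the client subnet ever reaches the unsecured box; everything else is dropped and logged.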
It sounds like Oxide is trying to break out of that by providing the whole stack.
Many of them are not.
We have serious vendor lock-in now, where a very few companies are gatekeepers to almost any business that runs on the internet.
And their margins are _enormous_ on this business. It ends up costing much, much more to pay them to run our machines for us.
And increasingly, the expertise to do this is being consolidated in these companies, so the talent available to pursue any other way is diminishing as new grads never learn about the magic places their code runs.
The reliability outcomes are nearly the same, despite the deferral to their expertise.
Labor savings because you don't have to learn about provisioning your own machines? Not much. AWS is so sophisticated that you need to develop a nearly equivalent amount of (non-portable) expertise to actually operate it well. Remember, the alternative isn't just racking your own; it's... dedicated hosting! And lots of other options with less lock-in and more standards.
It's sort of frightening how complicit the broader technology industry is in this power consolidation.
Vendors refuse to take part in standardization if they have the leverage.
Remember Amazon's reluctance in joining the CNCF and container groups?
By stating that you are not happy, Amazon is perfectly ready to do whatever it can to please you, as stated in its "customer obsession" (and I assure you that statement is as sincere as any human stating any commitment).
But back to the point: people in the mass market are primarily no longer interested in managing machines, let alone building them themselves.
I find Bryan Cantrill's talks are generally worth watching, even if just for entertainment.
The problem that we're trying to solve is basically laid out on this slide: https://youtu.be/vvZA9n3e5pc?list=PLoROMvodv4rMWw6rRoeSpkise...
The business is "we will be selling servers." You can't buy any yet, but in the future, you'll be able to.
The talk lays out a history of servers, describes the problems with the servers that you can buy from vendors today, and lays out why we think we can build better ones.
The specific complaint I (and the GP) have is: synopsis, synopsis, synopsis. Give me a few-paragraph summary before asking me to invest an hour and a half of my time.