Whenever I see these modern cloud centers with racks and racks of GPU servers, or 64-core custom ARM CPU blade servers with terabytes of RAM... I can't help but wonder how many years it will be until I can just pick one rack up off Ebay for a few hundo and play with it, like I used to do with old 80s and 90s obsolete surplus. The way things are going... probably never.
These things last 10+ years in the cloud so you'd need to be willing to buy something truly obsolete that's already been stripped for spare parts. Also, I imagine you'd have to be willing to cart off an entire rack, which won't be a standard rack but a weird shape, and then you'll need somewhere to plug it in, which won't be a plug as such but a point to which you'd need to wire 600VAC, or whatever that cloud operator was using. Finally, you're going to need some way to connect its weird network to your plain old network.
Oh, and all the management features of the rack won't work because I am sure they would wipe their proprietary software before resale. Basically you're buying raw materials in an inconvenient package.
I simply do not see corporations selling off obsolete equipment the way they used to. It has become cheaper, and less of a burden, to shred EOL hardware and sell it for scrap than to dispose of it in any way that could wind up in a normal person's hands. I've seen this firsthand recently: a very, very well known company had a new plant with a whole warehouse of industrial equipment just old enough to be considered obsolete. Literal high bays full of what were once very advanced robotics and specialized machines, perfectly usable for hobbyist engineers or even small startups, but too much of a burden for this company to try to sell off, since that would take too long and the space was needed yesterday. So a service came in that literally sawed things off the floor with grinders and torches and used excavators to load railcars with anything metal. Electronics and cables were simply cut and piled in separate bins. It all went for scrap value, and in ways that it could never be repurposed.
It seems the hyperscalers are more in favour of shredding their hardware than putting it up for sale on eBay. But I'm not sure this always holds true; I have seen old Google servers on /r/homelab.
Have you seen actual Google servers, or the whiteboxed Dell search appliances they sold for a while for other people to run in their own datacenters?
They have data sanitization requirements that become difficult to manage at scale if they do anything else. Are you SURE there was no customer data stored in a recoverable-by-modern-physics manner on that machine you sold?
Would you stake billions of dollars on it?
It most definitely is not. And once you leave AWS or GCP proper the amount of control plane and other data that's encrypted at rest plummets. The industry is VERY not good at this at scale.
And besides, encrypted customer data is still customer data.
Metadata about encrypted data might divulge sensitive stuff, etc.
I'm wondering the same. Maybe they're implying that things are going to get so bad that Amazon will just never be able to afford to replace them and they'll stick around like an IRS mainframe?
Unfortunately if you got a rack of Amazon's custom ARM servers you'd need access to their software to get it to boot. You need the device tree description of the hardware to get a Linux kernel booting and there's no standard for distributing or discovering those in ARM boards.
It's deeper than that since they probably have hardware root of trust watching over boot. I'd be shocked if you could get it to boot at all without Amazon's signing keys.
Cloud hardware lasts longer than you think. There's a widespread misconception that it gets cycled out constantly, but the truth is, new hardware almost always supplements, rather than replaces, the old stuff. When I was at AWS we occasionally had to code workarounds for very old SKUs.
The cloud is the new mainframe. The difference vs the old mainframe is it's so much more accessible to anyone: the barrier to entry to build one will be high, but the barrier to consume one is very low.
I'm trying to decide whether the lock-in problem gets bigger, but I think that where people follow modern software engineering best practices, they can move if needed.
> The difference vs the old mainframe is it's so much more accessible to anyone.
As someone who works on zSeries mainframes, I am not sure I agree. For developers, this is true, no doubt. But for organizations, in the cloud your data are locked away in ways they are not on the on-prem mainframe.
Vendor lock-in seems to be similar - you use some middleware, you're locked in.
With the mainframe, there is more control over the infrastructure than in the cloud. I see our management (I work for a mainframe utilities vendor) constantly wrestle for control over our customers' environments, control that would be given up easily in the cloud (e.g. telemetry).
What's also interesting (though slowly changing for the worse) is that the mainframe infrastructure (z/OS, for instance) is quite open to custom modification, IMHO more so than the cloud (though it depends on the type: IaaS/PaaS/SaaS).
> Vendor lock-in seems to be similar - you use some middleware, you're locked in.
Is there that much difference between building your infrastructure in DynamoDB vs using IBM DB/2? To me, they seem to create similar levels of "lock in" and create an equal barrier to switching out to a new system... and if you want your data, you're going to dump it, reformat it, and start over.
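To make that concrete, here's a minimal sketch of the same "orders by customer" lookup written against both systems (the table name, key, connection string, and the ibm_db_dbi driver are my assumptions for illustration, not anything from the thread). In both cases the query layer is product-specific, which is exactly where the lock-in lives.

```python
# Hypothetical lookup of a customer's orders, written twice.

# --- DynamoDB: boto3 partition-key query ---
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")
resp = orders.query(KeyConditionExpression=Key("customer_id").eq("c-123"))
dynamo_rows = resp["Items"]          # list of dicts shaped by your item schema

# --- DB2: plain SQL over IBM's DB-API driver ---
import ibm_db_dbi

conn = ibm_db_dbi.connect(
    "DATABASE=shop;HOSTNAME=db2.example.com;PORT=50000;UID=app;PWD=secret;", "", "")
cur = conn.cursor()
cur.execute("SELECT order_id, total FROM orders WHERE customer_id = ?", ("c-123",))
db2_rows = cur.fetchall()            # list of tuples in column order
```

Swapping either backend for the other means rewriting every call site and reshaping the data on the way out, which is the "dump it, reformat it, and start over" cost described above.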
I think the only problem resides in applications that require longevity. Many apps nowadays get rewritten every now and then (especially front ends and middleware).
The problem is different for database and core systems.
I think it still comes down to proper software engineering. If you have good interfaces, abstractions, and automated tests, you can move to new systems.
I've seen teams struggle to move from DB version x to x+1, taking many many months, but it's because they have no idea if it works after they upgrade.
On the flip side you have people like Snowflake who are building a database that runs across multiple clouds. From the outside it appears both portable and optimized for each platform.
Thoughtful software engineering, with the right abstractions and test automation, is a big deal...
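As a rough sketch of what that looks like (all names here are made up for illustration): put the product-specific code behind the one interface the application actually needs, and run the same contract test against every implementation, so "does it still work after the upgrade or migration?" becomes a question the test suite can answer.

```python
from typing import Protocol


class OrderStore(Protocol):
    """The only storage surface application code is allowed to depend on."""

    def add(self, row: dict) -> None: ...
    def orders_for(self, customer_id: str) -> list[dict]: ...


class InMemoryOrderStore:
    """Trivial implementation; also useful as a test double."""

    def __init__(self) -> None:
        self._rows: list[dict] = []

    def add(self, row: dict) -> None:
        self._rows.append(row)

    def orders_for(self, customer_id: str) -> list[dict]:
        return [r for r in self._rows if r["customer_id"] == customer_id]


# A DynamoDbOrderStore or Db2OrderStore would implement the same two methods,
# wrapping the vendor-specific calls sketched earlier in the thread.


def check_order_store_contract(store: OrderStore) -> None:
    """Contract test: run against every backend, and again after every upgrade."""
    store.add({"customer_id": "c-123", "order_id": "o-1", "total": 42})
    store.add({"customer_id": "c-999", "order_id": "o-2", "total": 7})
    assert [r["order_id"] for r in store.orders_for("c-123")] == ["o-1"]


if __name__ == "__main__":
    check_order_store_contract(InMemoryOrderStore())
    print("contract holds")
```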
For those wondering, the definition of HPTS:
“High Performance Transaction Systems (HPTS) is an invitational conference held once every two years at the Asilomar Conference Center near Monterey, California”
I like the concrete numbers in this deck: over 20 million Nitro cards installed, over 12 GW of power capacity. Gives us a chance to compare scale with the other big clouds.
I think it's interesting to see how big these clouds are in absolute terms because it gives us an idea of how far along the cloud is in eating the world. We have global IT equipment energy consumption estimates, and we have scattered data points on cloud energy consumption. Comparing the two, you can gauge how far the overall process has gone.
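As a back-of-envelope illustration of that gauge (every number below except the deck's 12 GW figure is an assumption of mine, not data from the slides or this thread):

```python
# Rough comparison: AWS's stated capacity vs. an assumed global data-center total.

aws_capacity_gw = 12            # from the deck: "over 12 GW of power capacity"
hours_per_year = 24 * 365
utilization = 0.6               # assumed average draw vs. nameplate capacity

aws_twh_per_year = aws_capacity_gw * hours_per_year * utilization / 1000
# ~63 TWh/yr with these assumptions; ~105 TWh/yr if run flat out.

global_dc_twh_per_year = 300    # assumed global data-center consumption estimate

share = aws_twh_per_year / global_dc_twh_per_year
print(f"AWS ~= {aws_twh_per_year:.0f} TWh/yr, about {share:.0%} of an assumed "
      f"{global_dc_twh_per_year} TWh/yr global data-center total")
```

Repeat that with the other providers' disclosed figures and you get a crude sense of how much of the world's IT the cloud has already absorbed.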
Also as an investor I like to have a general idea of Amazon vs. Google in terms of overall size, to combine with their revenue figures, because that helps me understand how much of Google is being sold as GCP and how much is being used by Google itself.
> Where Have I Been? 2012 to 2022 around the world in a small boat. Worked full time at AWS. Only in North America 3 to 4 times/year. Great to be back!
WFH is over, folks.
Joking aside, that's awesome, and I hope some flexibility remains for all. Especially for those with little kids and two working parents.
What does "Hello World" look like in the silicon world these days? (Looking for a complete example containing everything necessary to go from code to tapeout, I know multiple answers are possible).
More of an MVP than a "hello world", but: a 6-story building with girders, concrete, and an elevator, using materials and construction techniques that would scale to a 110+ story building.
What's the next circle going to be? Mainframes become even more specialized & powerful, and then the cloud builds new more specialized silicon to match it?