"The Return of Time-Sharing" - now that's very much real. Amazon AWS is time-sharing. "Pay for what you use". By changing servers from a fixed cost to a variable one, more people have to worry about resource consumption in fine detail. Through most of the personal computing and leased server era, that was not the case.
I have a feeling that a lot of corporations don't care too much about the quality of their employees' cloud queries. Wondering if they get a sort of unlimited subscription or something.
At a certain point they tend to start taking it seriously and cracking down on the bills. Disney had to ask Amazon to find all accounts linked to a Disney corporate credit card in order to track them all down, as I recall.
"We keep inventing new names for time-sharing. It came to be called servers ... Now we call it cloud computing. That is still just time-sharing." - Lester Earnest
It is amusing to note that on minicomputers of the 1970s, what we would now call a "floating point co-processor" was a hardware option. Even with the Intel 8086, a "math" coprocessor was a separate 8087 chip.
On minicomputers you used a different version of the language compiler depending on whether your hardware had this coprocessor or not. Typically you didn't make this choice yourself; the system administration people made sure the correct compilers were installed to generate either hardware-supported or software floating point. That tradition seemed to make its way into PCs. It was a while before you could just assume that everyone had hardware-supported floating point. Oh, and every language's software floating point was annoyingly different, so there was no portable way of writing binary floating point -- you had to convert to/from ASCII.
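To show the convert-to-ASCII trick in modern terms, here's a minimal C sketch. It assumes IEEE 754 doubles (which those old software formats were not, and that incompatibility is exactly why text was the only common ground); printing 17 significant digits is enough to round-trip a double exactly.

    /* Minimal sketch of the "convert to text" approach, assuming IEEE 754
     * doubles. The old software FP formats were all different, so text was
     * the only portable interchange format between implementations. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        double original = 0.1 + 0.2;   /* a value that is inexact in binary */
        char buf[64];

        /* 17 significant digits are enough to round-trip any double exactly. */
        snprintf(buf, sizeof buf, "%.17g", original);
        double recovered = strtod(buf, NULL);

        printf("text form:  %s\n", buf);
        printf("round trip: %s\n",
               memcmp(&original, &recovered, sizeof original) == 0 ? "exact" : "lossy");
        return 0;
    }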
GPUs are like newfangled hardware floating point external to the CPU, now gradually becoming part of the main system again.
Compilers like MS and Turbo C used a clever technique - a program could be compiled with a software floating point library and shipped, and the existence of the FP coprocessor would be detected at runtime and the program would self-modify appropriately.
If there was a coprocessor, the first time any particular FP routine was called, it would overwrite the instruction calling the library with the appropriate FP coprocessor instruction.
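For anyone curious what that looks like, here's a hypothetical C sketch of the detect-once idea. The real runtime libraries overwrote the call-site instruction with the x87 opcode; this version "patches" a function pointer on first use instead, which captures the spirit without the actual self-modifying code, and the detection function is just a stand-in.

    /* Hypothetical sketch of detect-once dispatch (not the actual library code).
     * The real libraries rewrote the call-site instruction; here a function
     * pointer slot is patched on the first call instead. */
    #include <stdio.h>

    static double mul_soft(double a, double b) { return a * b; } /* stand-in: software FP routine */
    static double mul_hard(double a, double b) { return a * b; } /* stand-in: coprocessor path */

    /* Stand-in for detection; the classic 8087 check poked the chip with FNINIT/FNSTSW. */
    static int have_coprocessor(void) { return 1; }

    static double mul_detect(double a, double b);
    static double (*fp_mul)(double, double) = mul_detect;

    /* First call: pick the right routine, patch the dispatch slot, retry.
     * Every later call through fp_mul goes straight to the chosen routine. */
    static double mul_detect(double a, double b) {
        fp_mul = have_coprocessor() ? mul_hard : mul_soft;
        return fp_mul(a, b);
    }

    int main(void) {
        printf("%g\n", fp_mul(6.0, 7.0));
        return 0;
    }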
The Intel '486 had the FP coprocessor integrated, but they split the line into the DX and SX models. The only difference was that the SX had the FP unit physically disabled.
They still sold an 80487, of course. It was a '486 without the FPU fused off, and when you plugged it into the "math co-processor" socket, it disabled the original CPU and took over both duties.
My first computer was an IBM AT-compatible Zenith with an 80286. I bought an Intel AboveBoard memory board to fill out conventional memory to 640K and add extended memory. At the time, there was a promotional mail-in coupon for a free 80287 coprocessor direct from Intel, so I added that for free!
> Metrocop: “She'll get years for that. Off switches are illegal!”
> -Max Headroom, season 1, episode 6, “The Blanks”
As a callow 18-year-old in 1980, I suggested to one of my parents' friends that the computer would be as important an invention as fire. He'd been a hippie 10 years earlier and thought I was being childish, possibly because fire is so vital, more likely because he had a hippie's distrust of the computer's relationship to authority. In retrospect, he was probably right for either reason, though it took me multiple decades to catch on. This article does a nice job of laying out the case for being cautious.
Back in the early days of BYTE magazine, early microcomputers, etc., I seem to recall that Steve Jobs coined the term "Personal Computer". Apple didn't market it to death. But at that time the way these computers were described was as "micro" or "home" computers. Or even "home brew", because before the "holy trinity" (Apple II, TRS-80 and Commodore PET), no two computers' hardware was the same. A real hindrance to mass software distribution.
IBM named its computer the PC, thus co-opting the term and forever being associated with it. But you have to remember back further.
Although the article claims IBM invented the term "Personal Computer" and Steve Jobs also claimed to have invented the personal computer, Alan Kay wrote at Xerox PARC about "A personal computer for children of all ages" in 1972 [1]. Many of his ideas went into the Xerox Alto (1973) personal computer [2]. The Apple I was introduced in 1976 and the IBM PC in 1981.
At one time in the early 70s, DEC was calling the PDP-8 "your own personal computer", stressing that its cost was such that you could dedicate the entire computer to a single individual, so why not buy several. This was as opposed to the PDP-11 or VAX, which were 'department' and 'organization' level computers.
> If you marketed it as a "server farm in a box", you could probably get people to start buying literal mainframes today.
Probably not. The path of computing has been directed by economics: cost and its counterpart, volume. There's no reason why any greenfield project would choose to host on a mainframe.
The administrators are rare and expensive, the hardware is expensive and single source, and the performance is mediocre.
I can kind of understand why core business processes don't get ported off of mainframes: the business risk is too high. I just don't understand why anyone would choose to start a new project on one.
If I were a business owner, that is not a bet I'd make.
1. z-series experts are rare, expensive and typically old. Old does not mean unemployable, but it does mean a limited future career. Who's going to replace the current crop of experts? Internal education is a possibility that could work, but any way you cut the knot, it'll be more difficult than going with x86 and linux.
2. z-series hardware sucks. They are low volume parts whose main value is their backwards compatibility. If you think the hardware performance is acceptable today, will it be so in 10 years? Computer performance improvements have certainly slowed down, but peripheral vendors like nVidia continue to make solid progress. Even if IBM gets around to supporting them, you'll always be a 2nd class citizen compared to the PC platform: drivers, bus availability, etc. will be half baked or late.
3. z-series is off the main stream of computing. Huge numbers of talented software developers are building great open source software for linux (and other OS's) on x86. I don't know if administration on z-series can be done better or less expensively than on x86 today. I feel confident in saying that there will be no contest in 10-15 years. The pace of innovation is enormous, and IBM is not a company that can credibly make a commitment to bring that innovation to the mainframe.
4. Yes, mainframes can run linux and do various sorts of emulation. I don't think that is cost effective. I'm not alone in that assessment: if running mainframes were a cost-effective way to run linux, I'd expect that to be the way big internet businesses run their internal data centers. They don't.
5. You'd have to deal with IBM sales. At least with x86, you have multiple sources for parts, and if you're big enough you can do contract manufacturing.
6. The markups for EC2 (especially the network transit) are really high. There is a continuum from Lambda and App Engine, through EC2, to dedicated hardware rental like OVH, and finally running your own. That is way more options than you have running on a mainframe: certainly way more vendors.
I'm sure 100% of what you say is true, but there is no real reason for any of it to be true. Mainframes have many, many advantages over bullshit like the EC2. The sheer amount of containerized gorp, deployment scaffolding, management infrastructure and nonsense that goes along with it could be abstracted into something like zOS and into a Z-machine. Frankly it should be abstracted into something less ad hoc; EC2 ecosystem is total garbage from any sort of design point of view, and the fact that the world is now victim of their internal operations random walk is a tragedy. Originally it was supposed to be simple: now you need to hire a similar headcount to what you'd need for ops at a data center; arguably a much larger headcount than you'd need with a mainframe. To say nothing of the operational complexity of deploying scalable solutions on EC2 vs Mainframe.
I am not advocating people actually jump in the dying mainframe business; just that it arguably makes more sense than the EC2 from an engineering perspective. If IBM weren't run by risk-avoiding mental midgets, they'd find a way to beat Amazon at this, and the world might actually be better for it.
If you've ever worked at a firm with serious calculation and data loads (aka data science at scale), you know they almost always end up with big hardware in a data center. Because markups on EC2 (and google/azure) are insane.
If the cost of all cloud services is similar, then maybe that’s just what it costs to run a cloud service? That’s not high markup, that’s just cloud being more expensive than on-prem. Or are you suggesting price collusion to maintain artificially high profits?
Continuity seems like a good reason. If you have a team that has expertise in mainframes and need a new service, it is potentially cheaper to build a new mainframe than it is to hire a new team to run your new service.
The problem for IBM is that startups can't afford them.
For example, does Amazon have the CPU load to run many System Zs? Of course they do.
But Amazon probably started with one cheap PC connected to the web somewhere. And then two ... three ... and so on. By the time Amazon had the CPU load and business to need/afford a System Z, it was too late to change.
The wheel of reincarnation is so called because it turns, and it will probably turn again. The new buzzword "edge computing" refers to the use of computers close to the user to do what was once done in "the cloud." Peer to peer is on the rise too, linking "edge" devices directly together. Voila, PC revolution 2.0 but with new terms. Soon we will reinvent the BBS, shareware, etc.
With edge computing, though, the computing is done under the control of whoever controls the mainframe even though it's physically close to the user, so it's quite different from personal computing.
For now, but edge computing diminishes the role of the mainframe. This makes it easier to replace the mainframe. Eventually someone will do that.
Previous PCs came from chips and designs, like the Intel 4004/8008, the 6502, and the Z80, that were originally built to power terminals for mainframes. Today's handheld dumb terminals for mainframes are built with ARM64 chips that are approaching "desktop" performance levels, just like those old dumb-terminal chips started to approach useful performance levels before someone realized you didn't need the mainframe anymore.
... and so the wheel turns.
I think the biggest current barrier to de-cloudification of mobile devices is the heavily and cryptographically locked down OS. That's a software thing, not a hardware thing.
I get the overall sentiment; it is thought provoking. At the same time, one of our clients has an on-prem AS/400 with only half of the CPUs enabled by IBM. The same client moved payloads between two cloud providers and one on-prem option in the last 18 months. Same same, but different.
At an architectural level a number of mainframe elements have returned, such as I/O and microcode AKA writable control store.
One of the defining features of the minicomputer, and later microprocessor, was that the CPU did all the I/O. In mainframes of the 60s and beyond, the IO devices had their own small processing units (to be distinguished from the Central Processing Unit) so that if you wanted to write a group of blocks to a tape drive you could set them up and then send the tape drive controller a block of commands to make it happen and DMA the data.
Meanwhile, early machines (canonically the Alto in this case) bit-banged Ethernet interfaces directly. The early UARTs were cool in that you could actually get them to do something with the bitstream to/from the terminal (i.e. buffer up to 8 bits at a time: wow!).
Half duplex in your terminal came from blocks of terminals attached to a terminal controller (HDX existed in the teletype system before computers, but this channel controller model is why we still have it in our modern terminal drivers).
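To make the channel idea above concrete, here's a hypothetical C sketch of a "channel program": the CPU builds a list of commands plus DMA buffers in memory, hands the controller a pointer to it in one request, and gets on with other work. Field names and layout are purely illustrative, not the real IBM channel command word format.

    /* Hypothetical sketch of a channel program: an array of commands the CPU
     * builds in memory and hands to the I/O controller with a single "start I/O".
     * Layout and names are illustrative, not the actual IBM CCW format. */
    #include <stdint.h>
    #include <stdio.h>

    enum chan_op { CH_WRITE = 1, CH_READ = 2, CH_SEEK = 7 };

    struct chan_cmd {
        uint8_t     op;      /* what the device should do */
        const void *data;    /* buffer to DMA from/to (real channels used physical addresses) */
        uint16_t    length;  /* bytes to transfer */
        uint8_t     chain;   /* nonzero: controller continues with the next command */
    };

    int main(void) {
        static char block_a[4096], block_b[4096];

        /* Two tape blocks queued with one request; the CPU is free to do other
         * work while the channel processor walks this list on its own. */
        struct chan_cmd program[] = {
            { CH_WRITE, block_a, sizeof block_a, 1 },
            { CH_WRITE, block_b, sizeof block_b, 0 },
        };

        printf("channel program: %zu commands, %zu bytes queued\n",
               sizeof program / sizeof program[0],
               sizeof block_a + sizeof block_b);
        return 0;
    }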
I'm wondering whether there is a way to connect to a real mainframe, not through Hercules, just to play with it and maybe do some good programming on it.
If only there were evil people somewhere insidiously committing evil deeds and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?