Before anyone starts a huge debate on everything I said wrong, please understand I know it is not exact. This is just a base-level view of one of the main structural differences in the HW/OS that everything else is derived from.
You can even walk up to the mainframe and pull out the cpu the job is running on and you still get your paycheck.
I mean that on x86 you design so that the workload has parts (app server, DB, web server, etc.) in order to spread the workload around. This is not really required on a mainframe, as you can just run it all in the same instance (you do not have to split it up, nor do you really want to).
This is a very SIMPLE view. I am trying to point out that the HW/OS on a mainframe is designed for a very different thing than how people today think of "cloud" design.
>"I am trying to point out that the HW/OS on a mainframe is design for a very different thing then how people today think of "cloud" design."
Might you or anyone else have any resources you could share on this design paradigm? Thanks.
What is meant by "chuck" here? To throw? If so, what is the thing that needs to be chucked, exactly, in the case of x86 vs mainframe? Thanks.
the usual comparison is a cluster of cheap nodes vs a mainframe
Almost all value of mainframe lies in software compatibility. There's no theoretical or practical advantage they have over clusters of x86 servers.
I have worked on, designed, and built both types of systems. Cloud-style distributed systems are hard and require the software to sort out everything. For the most part the HW/OS on the mainframe just handles it. It is just a different way to build.
For those of us with only exposure to PCs and commodity servers, I'd be interested to know who has encountered one in their day to day, and how that experience differed from the norm.
A system that runs a bank or an insurance company and wants good availability can be built in one of two ways: you either spend money on software that deals with the hardware being unreliable and save money on hardware (pioneered by Google), or you spend money on hardware that promises to be highly reliable and save on software.
No new player believes mainframes (extremely expensive hardware) are cost effective; they all use commodity hardware.
A bank that needs to run binaries from the 70s for which they don't have the source code can keep paying IBM and not investing in reverse engineering the binary and implementing it in Java.
A bank that has a billion lines of Cobol can compile it to run on the JVM on commodity hardware, run it in parallel with the mainframe for a year to validate, and then switch over to the new system and stop overpaying for hardware. But that sounds risky, so they keep paying a million dollars for a system with the same performance as a fifty-thousand-dollar server.
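To make the "run it in parallel for a year to validate" step concrete, the core of it is just a shadow-run comparator: feed both systems the same day's work and diff the outputs before trusting the new one. A minimal sketch in Java (the record shape and field names are made up for illustration):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    /** Sketch of a parallel-run check: run the same day's work through the
     *  legacy (mainframe) path and the new (JVM) path, then diff the resulting
     *  balances before cutting over. Accounts present in only one output should
     *  also be flagged in a real check. */
    public class ParallelRunCheck {

        record Result(String accountId, long balanceCents) {}   // hypothetical output record

        static Map<String, Long> index(List<Result> results) {
            return results.stream()
                    .collect(Collectors.toMap(Result::accountId, Result::balanceCents));
        }

        /** Account ids whose balances differ (or are missing in the candidate run). */
        static List<String> mismatches(List<Result> legacy, List<Result> candidate) {
            Map<String, Long> a = index(legacy);
            Map<String, Long> b = index(candidate);
            return a.keySet().stream()
                    .filter(id -> !a.get(id).equals(b.get(id)))
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Result> legacy = List.of(new Result("A-1", 10_050L), new Result("A-2", 99_999L));
            List<Result> candidate = List.of(new Result("A-1", 10_050L), new Result("A-2", 100_000L));
            System.out.println("Mismatched accounts: " + mismatches(legacy, candidate));
        }
    }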
Seems like the first "save money on software" shouldn't be there.
That approach is, in scientific computing, at least as old as Beowulf (1994, four years before Google was founded), the prototypical system from which we get the term “beowulf cluster”.
Edit: accidentally included a runaway thought process...
I was told they did an assessment a year or two ago to price out what it would take to move the business completely off mainframe and onto an x86 stack. It was in the hundreds of millions of dollars to do so because so much other software has been built to interact with and rely on the mainframe over the years that switching off it would be a multi-year effort across every department in the company. So of course that ROI calculation was pretty damn easy and the mainframe isn't going anywhere.
So just... have the new server expose itself over the TN3270/TN5250 protocols, such that these interoperating systems see what they expect to see? (You could even build the new system as a regular REST API or something at its core, and then build the TN3270/TN5250 exposure as a separate gateway service on top, such that it'd be easy to shut it off later if everything finally moves off it one day.)
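Sketching that shape (and only the shape: TN3270/TN5250 are full block-mode terminal protocols, which this does not implement): a separate gateway process accepts legacy-style connections and translates each request into a call against the new core's REST API. The port, the endpoint, and the line-oriented "protocol" here are all hypothetical placeholders.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    /** Architectural sketch only: a gateway that speaks a legacy line-oriented
     *  protocol to old clients and forwards each request to the new system's
     *  REST core. Shutting the gateway off later leaves the REST core untouched. */
    public class LegacyGateway {
        private static final HttpClient http = HttpClient.newHttpClient();

        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(2323)) {       // legacy clients connect here
                while (true) {
                    Socket client = listener.accept();
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
                 Writer out = new OutputStreamWriter(client.getOutputStream(), StandardCharsets.US_ASCII)) {
                String accountId = in.readLine();                        // stand-in for a 3270 screen field
                if (accountId == null) return;
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("http://new-core.internal/accounts/" + accountId.trim())).GET().build();
                HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
                out.write(resp.body() + "\r\n");                          // render back in the legacy format
                out.flush();
            } catch (Exception e) {
                // Real gateway: log and return a legacy-formatted error screen.
            }
        }
    }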
You don't necessarily need to move the $new project off the legacy hardware, just write it in such a way that you can do so easily later.
I'm eliding many details here, but the principle stands.
- Spend $300,000,000 over 10 years moving to a new system, and hope that by the time you've done that it's not obsolete.
- Spend $1,000,000 a year on a system that still works.
You could run the system for 300 years for what it would cost to replace the system.
In spite of the collective wisdom on HN, there aren't a lot of companies in the world with hundreds of millions of dollars sitting around doing nothing. Even ones that work on mainframes.
- Spend $1.5 million maintaining the current system, but as bits get updated keep in mind the system that you'd like to have in 10-15 years' time.
You're going to have to replace the system within 300 years anyway, so you aren't saving that money. Every feature you add that relies on the old system is literally technical debt, because eventually you'll have to rewrite it for the new system.
I'm not even saying you need to have a new system in mind, just keep in mind that you will be moving to a new system, so code appropriately.
And it will still be able to run all your programs that were written and tested since the late 20th century, by the kind of organic entity we used to call "human".
Slightly more serious retort: I'm not aware of any brands from 300 years ago; I'd be surprised if any of IBM's customers survive that long, let alone enough to keep IBM as a going concern.
Btw when you start folding the space time mesh, terahertz figures just become marketing numbers, what you really want to know is how many parsecs it can do the Kessel run in.
You might be, you just don't realize that they're hundreds of years old.
For example, the insurance market Lloyd's of London is fairly well-known around the globe. It's 333 years old.
If anything, this is a sign that mainframe users have way too much profit for what should probably be commodity software.
Remember that these are large, public businesses. Explaining to shareholders that profits are going to take a noticeable hit for years because of IT investment that isn't strictly necessary is effectively a non-starter.
I wasn't thinking that. I was thinking more like when you add a new feature, fit it into an API that is portable. Or add a translation layer so the feature can be written how you would like the system to be in the future, but it works on your hardware today.
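Concretely, that can be as small as putting the new feature behind a neutral interface with a mainframe-backed implementation today, so call sites never know where the data lives and the backing can be swapped later. A rough sketch (all names hypothetical):

    /** Sketch of the "translation layer" idea: new code talks to a neutral
     *  interface; today it is backed by the mainframe, later it can be backed
     *  by whatever replaces it without touching the callers. */
    public class TranslationLayerSketch {

        interface CustomerDirectory {
            String lookupName(String customerId);
        }

        /** Today's implementation: delegates to the legacy system (stubbed here;
         *  real code might go through MQ, CICS, or even a 3270 screen-scrape). */
        static class MainframeDirectory implements CustomerDirectory {
            @Override public String lookupName(String customerId) {
                return "NAME-FROM-MAINFRAME(" + customerId + ")";
            }
        }

        /** Tomorrow's implementation can be dropped in without changing callers. */
        static class NewCoreDirectory implements CustomerDirectory {
            @Override public String lookupName(String customerId) {
                return "NAME-FROM-NEW-SYSTEM(" + customerId + ")";
            }
        }

        public static void main(String[] args) {
            CustomerDirectory directory = new MainframeDirectory();      // the single place to swap later
            System.out.println(directory.lookupName("C-42"));
        }
    }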
Second, I'd say there's a half-life to best practice. Over a 10-20 year time frame, I'd expect some of what you were doing to become outdated, yes, in the same way some of your knowledge over your career will become outdated, or the phone in your pocket will; you wouldn't use that as an argument against education, or against buying that phone.
Btw, if it's as costly to deal with the interface as with the mainframe directly, that's a win. The interface can move off the mainframe to somewhere cheaper/better.
It doesn't become wrong, it just evolves into another dead-end. The 3270 is an interface, too, but the reason everybody suggests replacing it or augmenting it is precisely because the spartan tooling and mindshare makes it expensive. As XML recedes into history it is likewise becoming more expensive as an interface.
I chose XML vs JSON because I figured it was a transition everybody was somewhat familiar with. And younger programmers have an almost visceral dislike of XML, which I thought might help get the point across--that an interface someone once thought (and probably still thinks) would help ease future interoperability becomes a reason or excuse for future programmers to avoid that integration.
> Btw, if it's as costly to deal with the interface as with the mainframe directly, that's a win.
I think the problem is that you don't really know if it's as costly. The error bars on that sort of risk assessment are huge because our industry sucks at accurately predicting migration costs. And it sucks because complex software systems are intrinsically unique. Commercial solutions that claim to be able to capture and control all those dimensions of complexity tend to be sold by vendors with names like IBM and Oracle. Such vendors also pioneered interfaces like SQL, which is both a soaring achievement in terms of capturing complexity behind a beautiful interface while also falling epically short of what's needed to actually reduce long-term integration costs.
But alas, language has changed and these kids won't get off my lawn.
‘Whatever is fitted in any sort to excite the ideas of pain, and danger, that is to say, whatever is in any sort terrible, or is conversant about terrible objects, or operates in a manner analogous to terror, is a source of the sublime; that is, it is productive of the strongest emotion which the mind is capable of feeling.’
What is a problem is that the last time a lot of code was touched may be 5-10 years ago. That code may have started being written 20-40 years ago. It may well have been maintained by people who think "if it was hard to write, it should be hard to read" or "documentation is for the weak". It definitely will have been written when the cost of a gig of memory and storage was many orders of magnitude higher than today (indeed, last time I priced memory for a mainframe, a Z10, it ran to $10,000 a gig); hence terseness in everything from table and column names through stored data and everything else was prized. Dropping from COBOL into assembler is not uncommon for critical path performance.
Making any changes will be a week of coding and three months of working out the what and why of the code, because the last person who worked on it retired a couple of years ago.
It has worked for the past 30 years and hasn't been touched in the last 20.
It's OK to do most things with a bunch of normal servers, but when you need to handle a very large number of transactions with fast commits, and stuff like eventual consistency is not allowed, it becomes expensive to handle them no matter what.
One of the other big Australian banks (Westpac) has an "exit IBM" project as well, but it isn't complete yet.
There is no information anywhere saying that they moved their critical banking systems and transaction processing.
Unisys always had a smaller mainframe share than IBM, and they stopped making new chips sometime in the 2000s.
For at least ten years the fastest Unisys mainframes have been very big (proprietary) x86 machines running Windows with a Unisys mainframe emulator.
For some time I defined a "serious computer" as something that didn't have ports for keyboard and monitor.
Having said that, the pre-x86 A series were pretty cool. And ran the most user hostile OS ever created, to the point its very name was used for Tron's villain.
Monzo? They're getting bigger now, circa 3M customers. More than First Direct, Starling, Metro Bank.
I personally worked on a project there that involved building new virtualization platforms for not only x86/x86_64 Intel, but also AIX (Power) and Solaris (SPARC).
Who are the vendors that provide a migration path from mainframe to a set of commodity hardware running an emulator, and subsequently provide continual maintenance and support?
Global 100 company gets IT from other global 100 company
Global 100 company gets IT from small mainframe support shop
there's a question of shareholder liability, being able to adequately sue them for M's of $, expectation they will be around in 20 years, etc.
A single computer with 4 sockets of xeons will outperform a mainframe but will have more downtime.
The mainframe has best possible single thread performance and as much cache as possible, redundancy and parts can be replaced while it's online, but not that many cores.
The cost is very high - when I looked, 1 million per year is the baby version with only 1 CPU enabled and no license to run the cryptographic accelerator and limits on software, etc.
Commodity hardware you buy and use for 5 years, so the amount of good hardware you can buy for the price of owning a mainframe for 5 years is a lot.
For the money, you can buy a lot more CPU, ram, network, storage, etc and hire Kyle Kingsbury to audit your distributed database.
At one point, IBM was selling base-level mainframes for $75,000 (see https://arstechnica.com/information-technology/2013/07/ibm-u...)
It is true though that a 'realistic' configuration is likely to cost north of $1 million, and that none of these numbers include the price of the software.
Another is that the software and hardware designed to turn a baby system into anything useful is astonishingly expensive to people not used to dealing with this end of the market. Enjoy finding that your hypervisor is licensed at 5 figures per core, and that your cores are six figures a pop.
Even if the mainframe never goes down, the entire site will go down (eventually the rack power supply, HVAC, fiber, natural disaster, backhoe, etc. will get you even if your CPUs and RAM are redundant and replaced before they fail), and then either your entire business stops or at least processing for that region stops, or your system is resilient to site failure because you built a distributed system anyway.
If you could rewrite your software to be distributed and handle a node/site going down, you could run a single site on 5 servers that together outperform the mainframe (by a lot) and can be serviced on a whole server basis (though of course, expensive x86 servers also have reliability features), or use really cheap hardware without even redundant power supplies, but have enough of them to not care.
The modern solutions are better than the mainframe, and the only reason to use a mainframe is management risk aversion and unwillingness to learn new things.
100 miles will add 5ms (round trip) to your disk flush on commit. So a system like this has the sequential and random IO latencies of a RAID of SSDs but the flush (database commit) times of a 15K RPM spinning rust disk. People lived with mechanical disks, it's ok.
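For a sense of where those numbers come from: light in fibre covers roughly 200 km per millisecond (about 5 microseconds per km), so the raw round trip over 100 miles is under 2 ms, and the rest of a commit's latency is protocol round trips and storage overhead. A quick back-of-the-envelope:

    /** Back-of-the-envelope propagation delay for synchronous replication over
     *  ~100 miles of fibre. Assumes ~5 microseconds per km (refractive index ~1.5);
     *  real commit latency adds protocol round trips and storage overhead on top. */
    public class ReplicationLatency {
        public static void main(String[] args) {
            double km = 100 * 1.609;              // 100 miles in km
            double usPerKm = 5.0;                 // approx. speed of light in fibre
            double oneWayMs = km * usPerKm / 1000.0;
            System.out.printf("one-way propagation: %.2f ms%n", oneWayMs);
            System.out.printf("round trip:          %.2f ms%n", 2 * oneWayMs);
            // ~0.8 ms one way, ~1.6 ms round trip: a commit that must wait for the
            // remote site's acknowledgement ends up in the low single-digit
            // milliseconds once protocol and disk overhead are included.
        }
    }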
Sync disk replication (in one direction) over a fiber line is not an exclusive feature. Having both sides be active, instead of active and hot standby requires some smarts from the software, but modern distributed databases do that, and if you're careful you can get far with batch sync jobs.
For read-only batch computations you can always add some extra redundancy and partition the problem. So, I don't think it is likely that a mainframe would be useful here.
But we have literally thousands of internally developed applications. We can move thousands of apps to the cloud and still have a need to keep thousands on virtual/physical machines. My own apps are stuck on commodity physical hardware for at least the next few years.
The type of applications that have historically been run on mainframes is not really moving to AWS/cloud. Most of what's going to the cloud is what I would consider to be "supporting" applications, not core applications.
My own experience; that of others may differ.
Programmers are taught that database transactions exist so that when you move money from one account to another and crash in the middle, no money is ever created or destroyed. Well, cat picture websites might do that, but banks don't. They reconcile logs at end of day.
> you either spend money on software that deals with the hardware being unreliable...or you spend money on hardware that promises to be highly reliable and save on software.
Had to perform regular backups on the system as part of my internship.
Nowadays kids are all up with WebAssembly and WASI; well, that is just how OS/400 has worked for the last 30+ years.
Originally designed in a mix of PL/S and Assembly, everything else (RPG, Cobol, PL/I, C, C++) is compiled into ILE (Integrated Language Environment), and cross-language calls are relatively easy to do.
ILE applications are AOT compiled either at installation time, or any time some critical hardware has changed or the applications themselves have been updated.
Nowadays there is also Metal C (real native, not ILE), Java via IBM's own JVM (which in early versions converted JVM bytecodes into ILE ones), and the C and C++ compilers are also able to target actual native code besides ILE.
The database-backed filesystem took some time getting used to, coming from Amiga/MS-DOS/Windows 3.x/Xenix experience, and the command line felt more cryptic than those OSes, given the use of special characters as part of the name.
The company where I did my internship was using them for their accounting; everything else was MS-DOS/Windows computers connected via Novell NetWare, zero UNIX flavour in sight.
IBM z and Unisys ClearPath are two other mainframe models that also follow similar bytecode based deployment formats.
So in a sense you can say that on every Android phone, watchOS or Windows PC lives a little mainframe.
I.e. it shares way more with regular server boxes than with mainframes.
The main three benefits of the mainframes (as a non-mainframer):
- crazy amounts of caches
- crazy amounts of pcie-slots (and sufficient internal io and processing power to make it balanced)
- production environment mentality for everyone involved, incl. extremely engineered (and redundant) hardware
I think new customers are likely to run Linux on such a box in the future... The main cost-problem for mainframe customers is software cost (esp. z/OS, cobol etc..).
I have heard stories (from actual mainframers) about the sql-performance and key-value performance (read like mongodb) on these boxes that are eye-watering..
And when it comes to reaching things outside of the mainframe itself, generally anything you can do to make access to the data wider is better... Why use a single 16GB/sec HBA when you can interleave 8 or more FCAL paths to the same disk? Why use FCAL when you can use InfiniBand, etc.? They have a LOT of PCIe lanes available, and enough chips to keep most of them busy most of the time.
TL;DR if it's expensive and offers more paths between things, it's probably an option for a mainframe.
Does that mean great or awful?
I would love to see an article that breaks down clustering v. supercomputers v. mainframes in tangible use-cases. It is all a bit opaque to me where one officially starts and the other one(s) begin. As well as where (and to what degree) the value prop is of one versus the others.
I've heard folks say mainframes are just legacy stuff carried forward - I've heard others say they have specific use-cases that are not well solved by other current technologies (usually a cost of refactoring). That said, are there any current use-cases, where if I were to write code from ground zero, that are best solved on mainframe, hands down? That is my root question.
There are a lot of systems that are simpler (cheaper and less risky) to scale up than re-engineer to be distributed.
For example, at the end of each day a bank has to generate the final balances of the accounts that it has with other banks, then check for irregularities, and then finally set those balances just right (e.g. enough to cover payments made by its customers but without leaving millions parked there). If it doesn't, then it might not be able to use money that is "parked" there by mistake, or it might have a huge position with a risky bank, or, if the balance is too low, other banks might decide not to perform the final credit into the target accounts involved in the customers' payments (which would then generate a lot of complaints + lower the bank's reputation, with long-term consequences), etc. (The opposite happens as well - https://www.investopedia.com/ask/answers/051815/what-differe... )
Nowadays even accounting (coupled with risk management) has become time-critical because of the many regulations - a 1-day delay filing the numbers with regulators and/or central banks can result in huge fines and loss of reputation (if news about the problem becomes public, news services may highlight it, increasing the negative publicity). In our case the detailed/low-level data (e.g. client X bought N shares of some company, client Y withdrew N$ from the ATM, etc.) is all processed by the mainframe, which works fine 99.99999% of the time. Anything that comes afterwards which involves analysis/aggregation/reporting of that data happens in other, non-mainframe apps (they're all very different - from tiny apps to huge ones with distributed databases), and in general any such app has some kind of problem that results in a delay at least once every quarter, due to the SW itself or even the HW. That's usually not a big problem (usually some hours are lost but there is a buffer), but if the central SW running on the mainframe had the same problem there would be a chain reaction across all the other apps that need that data (their own specific problems would then come on top of that, and there would be a hotspot of 100% CPU/RAM/network demand as they all ran at the same time as fast as possible, which could cause further delays, and so on).
COBOL was in effect designed for business use so fixed point arithmetic has a lot of support in the language, libraries, and tooling.
Once you do that, it becomes clear Java is not exactly a competitor.
Which is neither here nor there. COBOL has end to end support for financial/commercial calculations, and a whole lot more.
"When you are a major financial institution processing millions of transactions per second requiring decimal precision, it could actually be cheaper to train engineers in COBOL than pay extra in resources and performance to migrate to a more popular language. After all, popularity shifts over time."
But because it is not built in, any advantage of using it gets lost in the noise.
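For anyone who hasn't hit it: the point of COBOL-style fixed decimal is that binary floating point cannot represent most decimal fractions exactly, so cents drift. On the JVM the stand-in is BigDecimal (z hardware also has decimal arithmetic support in the ISA); a small illustration:

    import java.math.BigDecimal;

    /** Why decimal precision matters for money: summing $0.10 a thousand times
     *  with binary doubles drifts, while decimal arithmetic stays exact. */
    public class DecimalVsBinary {
        public static void main(String[] args) {
            double binary = 0.0;
            for (int i = 0; i < 1000; i++) binary += 0.10;
            System.out.println("double:     " + binary);      // prints something like 99.9999999999986

            BigDecimal decimal = BigDecimal.ZERO;
            BigDecimal dime = new BigDecimal("0.10");
            for (int i = 0; i < 1000; i++) decimal = decimal.add(dime);
            System.out.println("BigDecimal: " + decimal);     // prints exactly 100.00
        }
    }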
That's one of the best typos I've ever seen.
(Although I’m sure the precise problem set could be replicated with commodity hardware too).
Only pretty recently. You would be surprised at the number of crazy smart people that failed to make a commercially viable TPF replacement.
Including ITA, whom Google bought for ~$700M. Lots of top talent, lots of funding. They did build a reservation system, but nobody of note would use it.
Amadeus only got rid of their TPF mainframes in the last year or so. As far as I know, Sabre hasn't finished.
I don't know what progress VISA has made (another heavy TPF user). They are still hiring TPF programmers: https://usa.visa.com/careers/job-details.jobid.7439996782301...
People moving away usually tried to extract the business logic such that the TPF layer was mostly a distributed NoSQL store. That bought a lot of time, but it seems like the skill shortage is now hitting that layer.
You can certainly have a warm second mainframe too, and I think most large banks are run that way, but it wouldn't be very fashionable to add that to your airline now, if it wasn't already there.
It's a lot easier to find good engineers to write and maintain software for Linux on x86-64 than it is to find mainframe engineers, which makes things simpler, arguably.
How long is Linux on x86-64 going to stay a thing? What are you going to do about staging Linux updates on business-critical facilities?
Perhaps not so simple after all.
Which suggests an interesting contrarian career path - train on COBOL + mainframe and you'll never lack well-paid work. You'll also avoid the usual fad tracking.
Citation needed, especially on that implication that x86-64 lacks reliability.
I’ve run reliable services on commodity hardware that literally processed over 5 billion transactions per day.
> How long is Linux on x86-64 going to stay a thing?
If something better comes along, why wouldn’t you want to move to it? But “better” in this context would imply popular support and plentiful access to developers, so you would have years of warning. It wouldn’t come as a surprise. As of now, it has been going for at least a decade, and shows no signs of stopping.
> What are you going to do about staging Linux updates on business-critical facilities?
This is not some huge, unsolved problem. It has been solved many times, in my opinion, so just learn how others do it and follow suit. I’m not here to teach sysadmin “tips and tricks.”
I would be completely clueless on how to stage updates for a mainframe. Turn it off and back on? And I’m betting all of the good training material there is locked behind expensive paywalls.
> Perhaps not so simple after all.
Disagree, especially since you can hire people to solve these problems for a lot less money than you can get just the hardware for a mainframe, let alone hire the extremely rare (read: expensive) personnel needed to maintain and develop applications for mainframe.
If IBM would focus on bringing the entry-level cost of mainframe down, they would probably be able to get more adoption, and more people would be able and motivated to learn their systems.
Basically, the hardware and software on a mainframe does it for you. Versus having to implement redundancy and resilience into your app. You can pull a CPU on a running mainframe, and it keeps chugging. Batch failures, app failures, etc, have a very well defined ecosystem for recovery that's consistent across apps.
As you imply, though, provided you pick the right software, there's not much difference in reliability these days. The variety of software choices is what kills reliability on x86/Linux. Too little established experience because there's too many choices.
Compared to what? A cluster of x86 computers where all the security, cryptography, observability, reliability, availability has to be written by the owner on top of something like Kubernetes? And that can achieve the kind of throughput a mainframe has?
I'd say a mainframe is much simpler than that. It's already done. All you need to do is to sign the check and read the (hundreds of) manuals.
if it's already in place, it's not 'adding' anything, it simply 'is'.
ITA did build the full thing, but only Cape Air used it. The "full thing" being shopping, schedules, inventory, booking, cancel, change, check-in, etc.
The shopping engine itself isn't trivial. ITA has the best shopping engine available.
Really similar to having a big monolith and taking pieces out chunk by chunk, rewritten to run as microservices, until you're left with a new codebase with no sign of the original monolith or any of its code.
Batch is the other aspect that takes mainframes to unbeatable status. When you run a batch job, you basically say "ok all daily OLTP is halted, we are going into a completely new architectural mode". Obtaining an exclusive lock on a table or an entire database for a single series of processes to operate on can yield insane amounts of throughput when you are finalizing the data from the day's operations.
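In SQL terms that mode switch can be as blunt as taking an exclusive table lock for the window and streaming through the day's rows with no per-row contention. A hedged sketch over JDBC (connection string, table, and column names are made up; the LOCK TABLE statement shown is DB2-style and needs the DB2 driver on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    /** Sketch of a batch window: lock the table exclusively, finalize the day's
     *  figures in one pass, commit once (which releases the lock). */
    public class EndOfDayBatch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:db2://host:50000/BANKDB", "user", "pass")) {
                conn.setAutoCommit(false);
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("LOCK TABLE DAILY_TXN IN EXCLUSIVE MODE");   // OLTP halted for the window
                    long totalCents = 0;
                    try (ResultSet rs = stmt.executeQuery("SELECT AMOUNT_CENTS FROM DAILY_TXN")) {
                        while (rs.next()) totalCents += rs.getLong(1);
                    }
                    stmt.executeUpdate("UPDATE EOD_TOTALS SET AMOUNT_CENTS = " + totalCents
                            + " WHERE BUSINESS_DATE = CURRENT DATE");
                    conn.commit();                                            // lock released here
                }
            }
        }
    }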
A few years ago as a more junior developer, I will admit to being strongly "anti-mainframe" based on the principles I held at that time. Why can't we just put it all in the cloud, throw some MongoDB out there and pray it all works? After witnessing actual business cases for mainframe/batch unfold, I quickly started to change my tune. Mainframe is not for every business, but it does seem to be a tool you can reach for when someone says "everything just has to work always or someone dies", and "we have infinite money".
Due to IO offloading to co-processors and a whole range of supporting CPU types, you can set up a system which can handle thousands of transactions per second. Most of the applications that are used on mainframes are databases or message queues.
Recently there has been increased interest in running Linux on mainframes, something which has been possible since the early 2000s; you get the benefits of highly available and secure hardware and the relative ease of management of Linux. Another benefit is that you don't really need to train personnel in more exotic operating systems like z/OS.
Think of the number of transactions entities such as banks and airlines perform each day. This article talks about this in the section "what is a mainframe today":
I have seen a handful of mainframes in use, but I have never seen one used for something that it is good for.
Long story, short: We will really have to watch out for our privacy because those two remaining platforms give The Management(tm) an irresistible temptation to stomp on us.
Evidence of the demise of desktop computing abounds, but I do hope you're merely being alarmist, not prescient.
Great essay. Please write more.
It seems to me if desktop general-purpose computing becomes a distinctly minority need, then the future of hardware design will bend towards that article's view. Large-scale design and manufacture of hardware platforms will be (mostly) exclusively for central servers and specialized devices on the edge. I expect there's an awful lot of legacy desktop design that will disappear.
We're starting to see borderline-draconian privacy and data protection laws such as the GDPR in Europe, and while I question the implementation of that law and how effective it will be, the very fact that a group of politicians covering all of Europe managed to identify a risk and make actual law to try to deal with it is noteworthy in itself.
I suspect what will really make a difference in the near future though is that now social media and "fake news" and other consequences of the centralisation and commoditisation of online communications are messing with elections and our democratic systems. Politicians, even those who might otherwise give big businesses a pass on questionable ethical behaviour, do care about the systems that get them into positions of power, and they care very much about attempts to compromise those systems in ways that might remove them from power. If there's one thing in life that I have found quite reliable under all circumstances, it is the ability of a class of people in power to recognise threats and take steps to protect itself.
I also take some comfort in a few conversations I've had with non-techie friends in recent years, particularly those who are of the younger, digital native generations. It's become very clear to me that while my slightly older generation have sometimes been quite naive about the implications of new technologies, those behind us are much less so. Things like basic steps to stay safer online are taught in schools now. Social media accounts are ephemeral and kids switch to different networks in a way that would make it very difficult for the likes of Facebook to reach critical mass as it has with the older generations. The constant updates and sometimes breakages of software or access to multimedia content are getting old. And again, perhaps most heartening of all, while the younger generations consider these technologies an integral part of their lives and accept to some extent that there are compromises made in order to use them, that doesn't mean they like those compromises, and they will switch away if better options become available.
All of this is probably bad news for the long term prospects of businesses like Facebook and Google (and all the other big data hoarders we don't see because they run their marketplaces discreetly behind the scenes instead of with big public websites on the front). But it's probably good news for those of us hoping the centralised/distributed pendulum for computing is starting to swing back towards the distributed side again, in part driven by privacy, reliability and longevity concerns. The biggest weakness I saw in the article was quite a big jump to the conclusion that embedded and mainframe are the only two natural kinds of computing. I don't see why personal devices -- or rather, running substantial software and doing substantial data processing locally on those devices instead of just using them as essentially thin clients -- shouldn't be on that list as well. The form factors might change, but I don't see it as inevitable that personal computing will revert to being primarily a hobbyist's endeavour. There are some very good reasons it should not.
If the language runtime supports the platform (even bare metal), the AOT/JIT does a good enough job even exploring some vectorization and the large majority of the standard library is available, then it is just an instance of cattle OS.
Which, by the way, is how those hyper-engineered mainframes from IBM (and Unisys) are designed, with their "language environments".
Has this really been the case though? Apps are still very much OS specific. Even backend web apps are OS specific (Linux specific). I just don't see any "mainstream" adoption of high-level languages that does this.
With a large enough codebase you'll accumulate some unix-specific logic where it'll be less painful to just develop under linux rather than trying to keep it running on windows.
Of course those pain points are minor and you could write some fallback code for windows, but that code would only ever be exercised on developer machines.
All data should come from databases, and it also doesn't matter which OSes those are running on.
The web app may be OS specific, but does the OS really matter? Any unixy thing would probably work fine with at most mild effort for almost all backend apps.
Microsoft (Azure) is investing heavily in FPGA based configurable cloud
One could get a lot of mileage in many areas out of a desktop with a good CPU, lots of RAM, a graphics card for general-purpose use, and one or more FPGA's w/ HLS tooling (or just buy the modules). Especially if the basic components were standardized like the PC with minimum requirements on cores, slices, etc on each version of the platform. An app ecosystem could be developed or emerge with capabilities regular desktops couldn't match.
Basically real time capable microcontrollers that share memory with the mainstream CPU that is probably running Linux.
Allows for interesting use cases, like high speed data in or out to drive LED displays, sample signals, generate audio, etc.
Project that uses PRUs for audio: https://bela.io and how it works: https://hackaday.com/2016/04/13/bela-real-time-beaglebone-au...
Driving an LED matrix display: https://trmm.net/Category:LEDscape
Emulating an old Macintosh SE video board: https://trmm.net/Mac-SE_video
Another sort of "minion core" in Allwinner ARM boards: http://linux-sunxi.org/AR100
Tech use cases :- https://www.upmem.com/use-cases/
POWER is also known for being highly SMT, which could lead to them being even more prone to the issues that plagued Intel's implementation of hyperthreading. A single POWER9 core has 4 to 8 threads.
Or maybe POWER9 is completely secure.
My main point was that Epyc has already proven to be much more resistant to these attacks than Intel’s architecture, and it wouldn’t require dealing with porting your applications to run on a niche ISA, with very limited options to buy server hardware, and very few (if any) options to rent cloud instances on POWER9.
Which SMT issues are you referring to here on Intels?
I wanted a future where people had big mainframes in their basement (like a furnace) as the sole computing source for their house and terminals in every room. Instead we got no one using computers and everyone having a dumbed down phone. :(
For those who do need it, you don't need a mainframe sitting in your basement like a furnace when a Raspberry Pi has 500x the processing power of an IBM 4300 from the 1980s for a fraction of the price and size and power consumption.
There is absolutely no need to shit on the capabilities of modern smartphones when "arbitrary computation" devices are so cheap, small, and ubiquitous.
You probably can't access a mainframe's file system directly either. That's a security and reliability feature.
IBM Newsroom - https://newsroom.ibm.com/2019-09-12-IBM-Unveils-z15-With-Ind...
Someone suggested that they used infiniband or something similar to make a single instance span multiple machines, but I don't buy this. There would be performance characteristics that would show this, and it would be documented.
I think all sizes they offer are available as single machines, although the really large ones maybe are somewhat exotic (NUMALink or other interconnects to get a machine with 8 sockets? Not sure if the top Intel platforms do 8 sockets natively)
2) ... they could also time-share one core to as many virtual cores as they like, even more than the real number of simultaneous threads. This seems to make sense for better utilization of part of capacity, but I have no idea if they do it. Probably not for marketing reasons, they don't want to support an idea that their vCPUs are weaker than those of the competitor. Small providers are more likely to do this, I think, much less concerns about PR.
3) Big GAFA-like corps have access to specialized hardware from Intel. The cpus they operate may not be publicly known.
4 cpu, 12 cores each, 2 threads per core.
A large entity purchases 12 fridge-sized mainframes from IBM for over $100 million. Who might do that? Airlines, banks, governments, logistics, and others needing high levels of reliability.
To understand why this clientele would use a Z-series mainframe, first consider what the "z" in the name stands for: "zero," as in zero downtime. Typical compute providers express their availability as "#-nines". For example, 5-nines availability would mean you're down for around five minutes per year, on average. The Z-series mainframes are sold as having zero downtime, period. A remarkable amount of research, development, and engineering effort goes into achieving this level of reliability. Now, these clients usually perform jobs which are not computationally difficult (validating a credit card transaction, for example) but must work, since the economy depends on the availability of these services. The Z-series mainframe shines in processing these loads of many, short jobs.
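The arithmetic behind the "#-nines" shorthand, for reference:

    /** Allowed downtime per year for a given number of nines of availability. */
    public class Nines {
        public static void main(String[] args) {
            double secondsPerYear = 365.25 * 24 * 3600;
            for (int nines = 3; nines <= 6; nines++) {
                double unavailability = Math.pow(10, -nines);
                double downtime = secondsPerYear * unavailability;
                System.out.printf("%d nines: ~%.0f seconds/year (~%.1f minutes)%n",
                        nines, downtime, downtime / 60);
            }
            // 3 nines ~ 8.8 hours, 4 nines ~ 53 minutes, 5 nines ~ 5.3 minutes,
            // 6 nines ~ 32 seconds of downtime per year.
        }
    }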
There's a security angle to mainframes as well. Commodity hardware allows for fast scaling and redundancy. However, commodity hardware also allows for exploits to be shared easily. Once those exploits are discovered, companies need to patch, and there's no guarantee the patch will happen. Now, imagine trying to develop exploits for a system which is not commercially available (governments could still presumably acquire one), is a completely custom computer architecture (Z/Architecture, custom compiler, Z/OS, pretty much every layer below the JVM), and has very few design documents available online. Oh, and consider that, from z14 onwards, any data in the mainframe is encrypted at rest. (Decryption/encryption is handled beneath the ISA; once an instruction is run, the mainframe uses the central key management chip (tamper-resistant, designed to handle natural disasters, etc.) to decrypt the necessary information. The information is processed, then encrypted again before the instruction is completed.) A script-kiddie getting into and exfiltrating data from one of these things is very unlikely. Hacking one of these mainframes would take an intense, coordinated effort.
Another important component is backward-compatibility. Take IBM's two main in-house storage protocols, FICON and FCP (FCP is FICON, minus most support for old systems to get higher throughput). FICON connects mainframes with giant storage arrays from EMC, Teradata, and others. FICON replaced ESCON, which replaced the parallel data communication system from the System/360 era. When a company upgrades their mainframe, knowing that your 20-year-old storage unit can still talk to your new machines relieves stress. Companies WILL pay for this level of backwards compatibility, and there's no reason to hate them for it.
Supporting backwards compatibility has historically not been too much of a problem for IBM. I worked with a person who took a class in IBM Poughkeepsie's now-abandoned Education Building on this hot new programming language called C (this was sometime in the 80's). Multiple people in my department were around for the development of not just the current generation of IBM tech but those before as well. The levels of technical depth they had were immense. I've heard people say, "oh, but that depth is narrow and won't get them jobs outside IBM mainframes." Perhaps, but in my experience, they don't care. They build systems the world depends on, whether the users of those systems realize it or not. I'll also add that in the days of Big Blue, your job was basically secured. Even after the layoffs of the 90's, IBM still needed to retain the old talent. (Imagine a company with lots of employees who've worked there less than 10 years and lots who've worked there more than 30 years. You'd describe IBM's mainframe division well.) Makes me sad to hear that IBM is discriminating against their older employees to push them out.
One commenter asks why IBM doesn't have "micro-mainframes" for smaller companies. For all I know, they could be moving this direction. At the same time, it seems like it wouldn't make much sense for IBM to do this. Why deal in thousands of dollars when you can deal in millions? Why put engineering effort into building computers for non-critical companies when, as long as you keep advancing performance and capabilities, your mainframes will provide you one of the best long-term cash flows possible?
Another commenter said new companies do not consider mainframes because they aren't cost-effective. I think it's for a different reason: new companies come and go. Their services aren't that important to the world, but they're trying to show the world their importance. Because of that, startups whip up an infrastructure concoction which is inefficient, but that's ok because 1) they aren't encountering the issues of scale and 2) their workload and information can run anywhere. They just don't need a mainframe because they don't need that level of reliability.
Happy to answer other relevant questions you might have.
At a minimum, supporting infrastructure (power/networking/internet/etc) will eventually fail even with backups. On top of that, no mainframe is going to work when under water (flooding) or on fire (remember the delta outage?).
> The last time he did shutdown the Mainframe was 15 years ago
And that was a controlled shutdown, not uncontrolled.
If more than fifteen years is your time horizon to validate the zero downtime claim, that's cool. By that point, though, the system will have proven its worth.
Right; unreliable supporting infrastructure is the inherent trickiness with saying the mainframe has zero downtime. The manufacturer isn't being deceptive, though, if, given reliable supporting infrastructure, the system will stay online for as long as is stated. It's just not their problem.
Can't imagine a company would blame the manufacturer for downtime when equipment is shut down in the event of a flood/earthquake to protect it. If a piece of equipment catches fire, that's another story, but I suppose a zero-downtime system makes assumptions that it won't catch fire.
Well, looks like plenty of mainframes are very well exposed to the internet, which.. helps.
Just as well Soldier of Fortran doesn't exist or this would be a silly assertion.
The new "cloud" running on racks and racks of cpus with memory and storage on nodes is really close to how mainframes are.
With Google and other cloud vendors making more specialized hardware to deal with certain processes, we are getting even closer to mainframes. They have a lot of supporting processing power that offloads the CPU.
Mainframes now can also run Linux :)
Isn't a mainframe just a powerful server? At least I thought the term meant (or used to) mission critical and massively powerful. Nowadays I could get a 2S EPYC 2 with 128 cores and 4TB of memory. What makes an IBM server with POWER10 any more reliable than a powerful x86 server? After all, many supercomputers are now running on x86 as well.
Or are we now using the word Mainframe specifically for IBM products? Rather than a category of its own?
Can you physically change CPUs, memory, and internal components without any stop/delay?
Not only is the mainframe very robust, it was made not to stop.
I used to work in a data center with SPARC, Intel, and blade servers, and a mainframe.
The only time I saw the operator scared was when he had to turn off the mainframe because of electrical maintenance.
The last time he did shutdown the Mainframe was 15 years ago.
VMs? The mainframe has had them since the '60s.
IO? The mainframe is a BEAST on IO
Backward Compat? IBM guarantees that your code from the '40s will run today.
The whole research effort on mainframes is about being a powerful beast with high availability and safety.
If you ever get the chance to look into it, it is a part of tech that is still very beautiful.
I hope this is exaggeration, because I don’t think you’ll be running Colossus “code” (from the first programmable digital electronic computer) on your z15 :)
Your point still stands. They’re in a different class from x86 machines.
Mainframes have spectacular capabilities, like running a compute on multiple CPUs in multiple data centers to ensure integrity and survivability built in so that any software can take advantage of it. They have the lowest transaction processing costs of any machine. They detect hardware issues and phone home to order repairs without user intervention. And on and on and on.
Quite frankly, a lot more companies should be using IBM mainframes than trying to build a reliable infrastructure on the cloud.
Today, servers are adding redundancy where it matters most, but they still have a different philosophy where you should think about adding or removing servers, instead of components.
Commodity x86 and mainframes are both computers. But are as different as T and Civic.