Millennials and Mainframes: How to Bridge the Gap (model9.io)
101 points by rbanffy 89 days ago | 156 comments



The mainframe skills shortage is a complete fabrication; it simply doesn't exist. In June of this year a local company laid off 850 people, a company with a large mainframe installation (SS&C, formerly DST Systems). Hundreds were mainframe programmers. This is after a decade of offshoring. There are thousands of former mainframers here. And this is just in Kansas City, Mo.

Anecdotal but representative, I believe. Mainframes are cost centers, and always will be. And the target of cost cutting. Millennials aren't dumb!


Your comment made me curious and I ended up finding a great article on the story behind the layoffs at SS&C.

https://www.thepitchkc.com/news/feature-story/article/210195...


Great read, the kind of long-form writing we don't see enough of nowadays, and not paywalled!


Holy, that was a heartbreaking story of its own kind.


"Being a publicly traded company — as DST has been since 1995 — does not lend itself well to the kind of community-oriented approach DST took under McDonnell, though. When you are a public company, your duty is to your shareholders, and most of those shareholders 1.) do not live in the community in which your business is based and 2.) wish to see consistent, ever-rising profits. Handing out subsidies to the coffee shop next door, or buying up distressed assets and selling them at cost are, in the eyes of sociopathic capitalists, unnecessary acts of philanthropy. It’s the type of thing that can draw the attention of corporate raiders and activist hedge funds, entities that buy up stock in a company and then use their power to demand the short-term maximization of profits. "


That was worth reading. Capitalism's fixation on quarterly reports is ruining successful companies like DST.


Do the terms asset stripping or corporate raider mean anything anymore?

In the '70s and '80s, lots of today's big names made a fortune buying controlling shares of companies that had spent decades reducing their overheads by buying (instead of leasing) their corporate offices, factory machines, etc., until the value of the real assets was greater than the stock value. Then all the assets were sold off and leased back, with short-term sweetheart deals that lasted just long enough for the raider to sell the resulting company to someone else (or, in some cases, simply shut it down).

Wall Street has long since stopped serving anyone but themselves. It's long past due for many of their activities to be made illegal.


Tech service companies have to rely on reputation for renewals and referrals. Focusing on quarterly profits makes managers take short-cuts that burn bridges. It might work in a commodity business where the outputs are standardized or easy to grade, but short-cutting your reputation is shooting yourself in the foot.


That's not capitalism; it's our culture around capitalism. There is nothing inherent to capitalism that stresses short term growth vs. long term growth. For example, much of the push for short term growth comes from the stock market, but the stock market (or public ownership at all) is in no way required for capitalism to work (and IMO it would probably be better without it).


> There is nothing inherent to capitalism that stresses short term growth vs. long term growth

It rewards people who focus on short term growth and punishes people who think long term; that sounds pretty inherent to me.

Just like the human self-interest that allows capitalism to thrive, short-term thinking is another human quirk it exploits.


Sorry, but that is entirely capitalism. You don't get to take core aspects of the system and say they're not really part of the system.


False. Capitalism: an economic and political system in which a country's trade and industry are controlled by private owners for profit, rather than by the state.


It's the government's requirements for filing 10-Ks and 10-Qs that generate this type of behavior. There are plenty of examples out there where capital-raising efforts, and therefore investment decisions, are stymied by markets that are driven by quarterly government reporting.

Some companies have been able to fight it off and really put long-term profitability into place (Apple seems like a reasonable example of this), but it is the exception, not the rule, and many times does not last long.


DST!? I still deal with DST (I haven't heard SS&C mentioned) in the medical insurance industry every day. Nothing but bad things to say about them.


How would you characterize their software and services? Cutting-edge modern stuff with engaged, knowledgeable support?


I'm not sure if your argument supports your conclusion:

1) You mentioned layoffs, but you didn't mention how long it took those people to find new jobs.

2) For those who couldn't find new jobs, how many of them avoided interviews or job offers because they were unwilling to relocate?

3) How transferable are "mainframe" skills to other "mainframe" installations? (As a non-mainframe-programmer, I have no idea how much one's skillset is peculiar to a specific mainframe vendor, mainframe OS version, etc.)

4) I get the impression that long-term mainframe developers actually have two parallel skillsets: (a) technical skills related to mainframe work in general, and (b) skills related to their particular installation's line-of-business and specific applications. For laid-off employees who couldn't find work, is it possible that their (b)-style skillsets were just not marketable to many other employers?


FWIW, a couple of years back I had a gentleman who was a 30-year mainframe programmer in a Rails & Postgres class that I taught. He learned both, was humble enough to take an internship for a Rails job at over 50, and then turned it into full employment.

Anecdotal, but I found it interesting to watch the portability of experience.


IMHO the key is that your student has programming skills mixed with humility. When I'm hiring, that's all I'm really looking for, and I think that's true for most coding jobs, mainframe or not.


Honestly, COBOL is not that different from any other imperative programming language. A bit more verbose perhaps, but if you are a good COBOL programmer you could pick up another language without much trouble.


This answer is pretty telling... Thirty years of mainframe experience and at the end of it, all you're qualified for is an internship.


Honestly, I assumed there was going to be more of an age bias working against him than anything else. He picked things up very quickly.


The conclusion that there are many former mainframe developers who no longer work as mainframe developers in the US might be anecdotal but fits what I've seen in the Chicago market as well.


The way the article describes millennials is from someone with an outsider's view and a poor understanding. The things millennials do are not because millennials have wildly different values; they're because the world changed. Everyone in business learned giving pensions, training people and retaining people doesn't pay off. The post-WW2 bubble of prosperity has let out its air. That environment no longer exists. At this point you've got to have a good reason to commit to working in a dwindling market, for example being ordered to in the military.

The only thing mainframe shops have to learn from Facebook et al is how to do large scale services without mainframes and cobol etc. They've already got the "let's overstate our talent pool problems because we want cheaper workers" part figured out, it seems.


> Everyone in business learned giving pensions, training people and retaining people doesn't pay off.

...in the short term.


Ok, maybe this is a silly question, but I am a millennial.... Question: what is a mainframe and why would I want one? In my mind I think of it as a big, but inelastic, compute resource. If that's true, why wouldn't I want to use something horizontally scalable instead?


Try this:

Ok, maybe this is a silly question, but I am {...}.... Question: what is a house and why would I want one? In my mind I think of it as a big, but inelastic, tent. If that's true, why wouldn't I want to just get multiple tents instead?

A mainframe is the kind of computer you can buy and set up, turn on, and run for two decades (or more, if you like) non-stop. It doesn't need reboots, it most often doesn't even need to be powered down for hardware replacement or upgrades. It's the epitome of reliability.

Most people don't care about reliability. Heck - most people use Windows, so they can't even possibly imagine what reliability and consistency might be like. Therefore, most people can't even imagine wanting reliability because what they have seems "good enough".

People who run services which should never go down, on the other hand, love mainframes.


Except when they do go down it can be a catastrophic business interrupting event. Which is why the cloud model of assume everything is going to break and all your hardware is disposable works much better (IMO).


Mainframes share a lot of similarities with cloud data centres. Redundant hot swappable components (even the CPUs in some models). Virtualised operating systems (VMs were invented for mainframes). These days it wouldn’t be unusual for some mainframes to be running mostly Linux instances.

You could almost think of a mainframe as a cloud in a box, and if one isn't reliable enough, you can always run two or more.


AWS, Google Cloud, and Azure break more often than our mainframes. In this case, the cloud model is a con, essentially the self checkout lane at Walmart. You're paying more for the same ability you had before.

"Well, you want redundancy right? Well you're supposed to be redundant across AZs, and then regions, and then you're going to have to have disparate vendors to mitigate sole vendor risks." And then we're right back to hosting our own mainframes in our datacenters.


Clouds reinvent mainframes. They're still cheaper, run more FOSS, and have more talent available. They're a better form of lockin than mainframes.


> They're still cheaper

Are they, though? Commodity hardware certainly is, but it's not as if cloud providers are charging a small margin on top of that. They're charging a multiple, potentially as large as 10x.

Combined with the parent's proposed need of multi-cloud, that could turn what might otherwise be a few hundred $k of commodity servers into a few $M of cloud costs, which I understand is the OOM the cost of a mainframe.


The problem with IaaS clouds is that they are strictly less reliable than having your own infrastructure.

With IaaS, the typical reliability issue is that a whole location/datacenter/AZ goes down; with your own infrastructure, the typical issue is that the colo facility/datacenter goes down. These would be essentially identical, save for the fact that with IaaS there is a significantly larger probability that the reason for going down is some Byzantine failure of orchestration automation, which in the self-hosted case either isn't there or is under your control.

One fact of running your own infrastructure is that you should plan for hardware failures, but not stress about it too much, because even entry-level enterprise-grade hardware just does not break (and if it does you will get signs that it is going to break well in advance).


It is only strictly less reliable than having your own infrastructure if you assume the same level of organizational competence at running your own infrastructure as the IaaS has at running theirs.

That is certainly possible for an organization to achieve, but it isn't easy and it isn't cheap. It certainly can't be taken for granted.


> if you assume the same level of organizational competence at running your own infrastructure

I'm not sure that the competence required is organizational (e.g. people managing) so much as operational (e.g. best practices) and even technical, at least at sub-FAANMG scale.

> it isn't easy and it isn't cheap. It certainly can't be taken for granted.

I agree that it can't be taken for granted, but I disagree with it not being easy and cheap. Rather, combining the two, I don't believe it's necessarily hard nor necessarily expensive.

It just requires finding someone who still has the competence, is willing to use it, and is willing to train others. Running your own infrastructure isn't actually difficult or complicated, but it's certainly not "sexy" and can be a bit tedious at times. That means it's possible to hire inexpensive, less (overall) experienced staff and have them handle that portion. Unfortunately, the "unsexy" part means finding someone to do the training, as well as the actual work when necessary, can be challenging, even though we're out there.

Even then, that's only necessary at substantial scale. In <1000 server environments, I've never had the hardware-specific [1] part of the work take up more than a quarter of one senior FTE (usually me).

What can get astronomically expensive is outsourcing the wrong things, though that ends up being a form of not actually running your own infrastructure (yourself).

Anecdote: I recently had a phone interview with a startup that moved from "hardware" to the cloud and the main reason cited was the inability to ramp capacity up fast enough (nor predictably fast enough), which seemed odd to me. One example of unpredictability of lead times involved a new server underperforming due to mis-applied thermal compound between the CPU and cooler, which I have never experienced [2]. I didn't ask the rhetorical question, "how could you have picked such a horrible VAR?!" Carefully re-reading the blog post about their transition gave me my "aha" moment: even though it's a company in the SFBA, their datacenter was out of state (maybe not even in a tech hub city, but it didn't specify). They were outsourcing the actual installation, running, and maintenance of their hardware to someone else, far away.

[1] for lack of a better term.. i.e. anything that an IaaS cloud provider would eliminate, including purchasing and vendor negotiations, colo space, network hardware and providers, hardware monitoring, and data destruction

[2] well, OK, that's a lie, since I've experienced it when I've personally done CPU moves/swaps/upgrades in exceptional circumstances, when I was out of practice, but I knew to test my work and caught the problem immediately. I've never had it happen with professionally-assembled systems, presumably because CPU coolers tend to arrive with the thermal compound pre-applied.


Are you replying to the right sub-thread? I was asking about cost.

That said..

> The problem with IaaS clouds is that they are strictly less reliable than having your own infrastructure.

Although I'm a fan of running ones own hardware, I'm not sure I could make this claim. However, since I'd like to, do you have public data to back it up?

> orchestration automation, which in the self-hosted case either isn't there or is under your control.

I'm not sure how that would be different. Although a provider like AWS has portions of the automation toolchain not under your control, it's not obvious they're any more likely to fail (even due to some bizarre interoperability bug) than, say, BMC firmware, which is also not under your (full) control.

> One fact of running your own infrastructure is that you should plan for hardware failures, but not stress about it too much, because even entry-level enterprise-grade hardware just does not break (and if it does you will get signs that it is going to break well in advance).

This is something that I routinely have to point out to cloud proponents when they complain about having to "worry" about hardware failing: modern, commodity server hardware just doesn't fail often enough for it to be a significant consideration. Usually, it's just selection bias in that they remember every "nightmare" scenario from their past where hardware failed (possibly even as long as 20 years ago) but don't account for the overwhelming majority of times when it didn't.

Of course, there are notable exceptions, such as high-density "blade" or half-U servers, which often suffer from thermal design failures, but I argue that those are a departure from commodity, even if they appear identical if one squints.

Most importantly, though, it's not as if an IaaS cloud provider can somehow magically shield you from the consequences of such a failure: your VM will still go down. Sure, they have an arbitrarily large supply of spares to replace it, but you only ever need exactly 1 of those spares, and N+1 redundancy when self-hosted is very easy, if implemented merely as warm spares.


I started with some comparison of cost of big IaaS instances vs. mainframe vs. commodity HW and then got sidetracked on the reliability and ended up deleting the first paragraph :)

That said, I believe that when your workload really necessitates such big systems, if you can use all of the mainframe's capability, and if you have the capability to manage the mainframe (which requires an Ops team with a totally different skillset and, mainly, the willingness to do such a thing), the cost of a mainframe will be comparable to IaaS, with self-hosted commodity HW being somewhat cheaper.

It is anecdotal, but when I used to work in more of an Ops role I cannot remember a single time when server hardware failed in production without external environmental cause (there were flooded servers and servers that were DoA from manufacturer). It is somewhat surprising that this experience even extends to spinning rust harddrives, where the most common causes of failure I've seen were flaky SATA/SAS connectors, followed by simply bad series (eg. Constellation ES.2) and then by extreme overheating.


> if you can use all of the mainframe's capability

> the cost of a mainframe will be comparable to IaaS

I'd be very interested in seeing even a rough cost comparison, since I have no experience with mainframes.

I also have, essentially, no experience with that first "if", which I'd say is a big one. Workloads that are (already) suited to that particular system design may be rare (and obviously getting rarer).

> self-hosted commodity HW being somewhat cheaper.

I'm pretty sure "somewhat" grossly understates it.

However, I realized that one of the problems here is that we're talking about these costs as if they're single numbers, rather than ranges.

For mainframes, it may as well be a single number, because there's only one vendor (for the latest hardware).

For IaaS and self-hosted, the ranges can be very broad, because it's very easy to pay a multiple of the minimum cost with merely a naive implementation. Trivial examples would be not leveraging "reserved instances" on AWS or not getting competitive quotes for self-hosted. In fact, if one removes the "commodity" constraint from self-hosted and allows "enterprise" hardware (especially storage), the top end of the range can easily balloon above the top of the IaaS range.

I've been assuming a comparison of the bottom ends of the ranges, but including the cost of the expert labor for each. What's difficult to know, of course, is how scarce the experts (capable of keeping costs near the bottom of the range) in each category actually are.

> I cannot remember a single time when server hardware failed in production without external environmental cause

I think that's just a matter of too small a sample size.

> servers that were DoA from manufacturer

I wouldn't even count that, since it's not in production yet.

> it is somewhat surprising that this experience even extends to spinning rust harddrives

That's definitely too small a sample size, then. If you're not seeing at least a 1% AFR (realistically closer to 3%), you don't have enough of them or haven't been running them long enough yet.
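
A rough Poisson back-of-the-envelope (fleet size and years are picked here just for illustration) shows how unlikely "zero failures" becomes once the sample is big enough:

    import math

    afr, drives, years = 0.03, 50, 5     # assumed: 50 drives over 5 years at 3% AFR
    expected = afr * drives * years      # ~7.5 expected failures
    p_zero = math.exp(-expected)         # Poisson approximation: P(no failures at all)
    print(round(expected, 2), round(p_zero, 4))   # 7.5, ~0.0006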

> simply bad series (eg. Constellation ES.2)

That's not an external environmental cause, though, and counts the same as any other failure due to (presumably) a manufacturing defect (defined broadly), including RAM bit errors. It's merely something that can be engineered around with best practices.

None of this is to say that any of these inevitable failures are actually frequent or voluminous enough (even on, e.g., 5 year old hardware, which is ancient by most standards) to require outsized worry or effort/cost to mitigate/repair them.


It should be noted, though, that from a physics perspective it makes sense to cram as much computing power as possible (though in the case of mainframes, which unlike supercomputers do relatively simple calculations on high volumes of data, that mostly means I/O throughput) into as small a space as possible.

So having interconnected commodity servers will always be marginally more expensive to run than mainframe boxes the size of a fridge, which have dedicated hardware for interconnecting their internal components (for example on-book CPU caches and shared RAM).


You can get very, very redundant mainframes; IBM Parallel Sysplex is one example, and so is Tandem.


It's catastrophic for your business if either your cloud model or your mainframe breaks; ergo, it's worth choosing the more reliable one, where that happens less frequently.


And everything inside is already virtualized: the memory, the processors, the hardware, the power, everything. It can run multiple operating systems.


As a non-mainframe programmer, I found your anecdote about Windows reliability contrary to the experience I have had over the past few decades, so I am not sure how accurate the rest of your post is.


Here's my favorite: https://support.microsoft.com/en-us/help/2553549/all-the-tcp...

article about it: https://blog.ctm-it.com/it-support/blogs/matt-cannon/2013/49...

edit: If you want to be especially cynical, you can note that this wasn't discovered for a while because everyone running Windows reboots it regularly because of its notorious instability issues (the bug was found in 2013 and dates back to at least Vista, which was last updated in 2009).


> because everyone running Windows reboots it regularly because of its notorious instability issues

I reboot monthly because getting owned by 0 days sucks, and at this point in life, every platform (except for OpenBSD) is having new exploits found against it at a rather fair rate. Same reason my phone gets rebooted once a month.

Aside from that, I have had uptime on Windows boxes in excess of 6 months.

Laptops on the other hand, those are more problematic, but it isn't Windows' fault if a WiFi card hard locks itself, has a dysfunctional watchdog timer on board, and needs to be power cycled to get it working again.

OSes that can hot swap or live patch kernels can brag about uptimes. The rest of us should stay humble. (And AFAIK no modern consumer OS distributions do that.)


> Laptops on the other hand, those are more problematic

Windows laptops.

My MacBook has been up 56 days 14:06 hours.


And my wife's MacBook randomly freaks out when plugged into an external display.

Apple laptops have their fair share of hardware issues. Windows laptops tend to have more due to a shorter product cycle. Apple has longer relationships with their parts vendors and thus they have more time and leverage to force good firmware onto the multitudes of independent chips that make up a modern day computer.


That bug is over five years old. Are you implying there haven't been similar, or worse, bugs in other operating systems since?

Windows used to suck, but it's somewhat unfair to judge the platform of 2018 because Vista of 2006 was bad for you.


To be fair, the overall context of the conversation isn't (all) "other operating systems" but, specifically, mainframe ones. Additionally, the GP referenced "the past few decades", which certainly includes 5 years ago.

Hopefully a mainframe expert can chime in as to whether the major mainframe OSes have included similar or worse bugs.


https://www.cnet.com/news/windows-may-crash-after-49-7-days/

Windows has improved since then, but it's still a very brave person that would have a single business-critical Windows machine without a few scheduled maintenance and reboot periods every year.
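
For what it's worth, the 49.7-day figure lines up exactly with a 32-bit millisecond tick counter wrapping; a counter ticking every 10 ms wraps at roughly ten times that (~497 days), which I believe is the horizon of the newer bug linked upthread:

    SECONDS_PER_DAY = 86400
    print(2**32 / 1000 / SECONDS_PER_DAY)   # ~49.71 days: 32-bit millisecond counter wraps
    print(2**32 / 100 / SECONDS_PER_DAY)    # ~497.1 days: 32-bit 10 ms tick counter wraps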


That bug is from 2002. Do you really think it's a solid reason to not host business-critical things on Windows in 2018?


If mainframe reliability is what it is purported to be, then, for someone making this choice as an alternative to a mainframe, especially if they've been on it for decades already, it's a very solid reason.

It is, of course, only a proxy for the real reason, which is the engineering culture that led to the bug and/or reliability. If there's better evidence that Microsoft has drastically changed its engineering culture in favor of reliability and/or IBM has done the converse [1], it could easily trump this proxy evidence.

[1] for example, if they've included the OS developers in the layoffs they've been in the news for


That bug, no. The latest bugs that Windows no doubt has, yes.


They DO horizontally scale, just not very cheaply. Reliability above all else is their advantage. There is an entire internet between you and a "5 9s" cloud provider that can screw you.

side note: I HATE the trend in the level of abstraction away from hardware. It seems about as short-sighted as GP doctors abstracting away human anatomy. They mostly treat illnesses and prescribe medication; why should they need to know the names of the bones in a human body? There are orthopedic specialists for that!


People are only capable of so much, mentally. As the world is increasingly well understood, it gets increasingly complicated, and individuals require ways to interact with the parts that they don't have time to truly understand.


You want a mainframe in similar situations as when you need EC2's ridiculously large instances. That is, when you have a workload that absolutely must run as a whole on one big single machine. Typically that means an OLTP database which cannot work with some relaxed consistency model (payment settlement is an often cited example of such a thing).
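
A minimal illustration of the kind of invariant that's hard to relax, using sqlite3 purely as a stand-in for any strictly consistent OLTP store (the table and accounts here are made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [("alice", 100), ("bob", 0)])

    def transfer(con, src, dst, amount):
        # The overdraft check and both updates must be one atomic unit;
        # against a stale (eventually consistent) replica the check could
        # pass on old data and let the account go negative.
        with con:  # BEGIN ... COMMIT, rolled back on exception
            (balance,) = con.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            con.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                        (amount, src))
            con.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                        (amount, dst))

    transfer(con, "alice", "bob", 60)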

And for the elasticity part, mainframes are where the whole virtualization and hypervisor story started. Although, or maybe because, purchasing and operating a mainframe is a significant investment in terms of both capex and opex (it is somewhat telling that IBM's specification sheets for mainframes read somewhat like flyers for new cars: about half of the thing is about available financing methods :)).


Given that Intel leads CPU performance, how can mainframes be that much better at single-node performance than x86 servers?


Intel leads in x86 performance, and comparing the performance of a System z CPU to x86 is not exactly a trivial task, as zArchitecture is "uber-CISC" with instructions like "give me the SHA256 of this part of memory" that also typically runs at 4+GHz clocks (and the "CPU book" has a TDP of several kW). But the main part of the performance advantage (be it perceived or real) is incredible memory and IO bandwidth combined with large caches and hardware offload for essentially anything that remotely looks like IO.

On the other hand, you can get a rack-mount x86 server with several TB of RAM for a fraction of the price of a mainframe, which is exactly the hardware for new applications that would otherwise be best served by a mainframe.


> On the other hand, you can get a rack-mount x86 server with several TB of RAM for a fraction of the price of a mainframe, which is exactly the hardware for new applications that would otherwise be best served by a mainframe.

I expect the largest one of these is still a fraction of the size of the largest mainframe, at least for processor power (e.g. 224 cores[0] compared to 1700[1]). Maximum memory, though, is 32TB, which isn't exactly huge compared to 12TB [0] or 24TB [2].

[0] https://www.supermicro.com/products/system/7U/7089/SYS-7089P...

[1] https://en.wikipedia.org/wiki/IBM_zEnterprise_System#z14 (Not saying they're necessarily equivalent in performance, but they might be, as you point out)

[2] https://www.supermicro.com/products/system/7U/7088/SYS-7088B...


From my point of view, for typical modern business workloads that necessitate large machines (i.e. an SQL RDBMS or something similar), the only thing you really care about is memory size. On the other hand, many "legacy" mainframe workloads are significantly more CPU-bound, because the software culture is simply different and brute force is often the way to go.


> From my point of view, for typical modern business workloads that necessitate large machines (i.e. an SQL RDBMS or something similar), the only thing you really care about is memory size.

That's my general impression, as well, but I have little enough (or inadequately broad) direct experience for that impression to be a strong one.

I've also seen, second-hand (i.e. benchmarks, so not quite real-world) that cache and memory latencies (including inter-CPU/NUMA) can significantly affect OLTP performance.

If that weren't true in practice, then the sheer bandwidth one can deliver from enough SSDs over PCIe rivaling a CPU's memory bandwidth would mean main memory size is much less relevant.


Data bandwidth probably


Bandwidth and reliability. It's in general hard to swap out CPUs while running DB transactions on commodity x86 HW.


In theory you can hot-swap CPUs on x86 (with the slightly hilarious fact that the primary use of Linux's CPU hot-swap support is suspend-to-disk on laptops), but hardware that supports it, and on which it really works, is rare to non-existent.


You want a mainframe if you want your system designed by engineers and not web developers.

The IBM mainframe system is called "Z" for zero downtime.

Banks and airlines use systems that haven't needed to be rebooted in decades.

(This is a little oversold, but that's the selling point.)


I think in actuality they go for what is known as "five-nines" reliability. Up 99.999% of the time. Or down about 5 minutes/year.
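
The arithmetic behind those figures, in case anyone wants to see where "about 5 minutes/year" comes from (the 4-nines and 6-nines figures mentioned in the edit below fall out the same way):

    MINUTES_PER_YEAR = 365.25 * 24 * 60    # ~525,960 minutes

    for nines in range(3, 7):
        downtime = 10 ** -nines * MINUTES_PER_YEAR
        print(f"{nines} nines: ~{downtime:.1f} minutes of downtime per year")
    # prints ~526.0, ~52.6, ~5.3 and ~0.5 minutes respectively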

When I worked on "big iron" (not sure if the old HP Superdomes count as mainframes..), the solution proposed for redundancy was to have two and be able to switch from one to the other.

Our development machines had to be rebooted from time to time, as we turned off interrupts for some processes (quasi-realtime) and, during development, some bugs could cause the processes to fail and be uninterruptible. Things stabilized quickly though.

(edit: apparently there are 4 nines, 6 nines....) https://en.wikipedia.org/wiki/High_availability


Bank and airline systems go offline all the freaking time.

Just the other day I told the post office attendant their system was back online, as the cursor started blinking again. A telltale sign of mainframe-style computing.


When you turn off your laptop, does Google go down?


We don't; we are happy with our distributed and networked computing. Most of us can't afford a mainframe; mainframes are owned by big corps, and to play with one you have to go through a big corp. If you work for a big corp, you still have to appease the priests or be one of them to play with one. It goes against the dream of "everyone should have a computer and be free to do with it as much as they wish". The biggest computing power demonstrated in our history has been via distributed systems, not mainframes.


It's a bit like asking why you would use a truck when several vans would do.. and be more "horizontally scalable".

A mainframe is a really big server, and it is actually plenty elastic. You can divide it into smaller logical servers (LPARs).

The only downside is price. But I think for larger installations and certain workloads it will come out similar to a server fleet.


It's a huge, inelastic compute resource. The term "huge" can't be overstated here - however "big" you think that is, you probably aren't thinking big enough. That's the whole sell. If you ever need to scale horizontally, then your mainframe is too "small". It's not unusual to assign 2048 cores for a scheduled job on a mainframe.


It is a relic that you want nothing to do with.

You own one because your company or agency bought one in 1970, and it is cheaper to run it than to redo everything. You may need one for certain new workloads if you’re a defense contractor.

As an employee, you’re a pure operational cost center. You have skills that few employers care about, and vendors/contractors are churning out cheap replacements for you.


This vid has some good photos https://www.youtube.com/watch?v=45X4VP8CGtk (also discussed on HN previously).

You don't want one if you ask me.


A lot of the responses to this question could have been copy/pasted from Erlang marketing copy.

I'm still not seeing what mainframes give you that a well-designed pc cluster couldn't, other than marketing BS and opportunity for grift.

Waitaminute...


Regulations seem to be one reason. The IBM mainframes use their own implementation of floating point math that appears to be legally required for some businesses, as you can see under "Special Uses" here [1].

[1] https://en.m.wikipedia.org/wiki/IBM_hexadecimal_floating_poi...


Is that just required in storage formats (which on other platforms would just mean conversion when accessing files), or do calculations actually have to be done in it (which would need to be simulated; more overhead and potentially harder to certify correctness)?


Was there a reason this was downvoted? I thought this was a factual statement, but I'm not very well versed in the mainframe world so I'd appreciate it if someone could point out what I said that was incorrect


I don't know why the downvotes, since it's an interesting tidbit of info... but looking at the Wikipedia page, it's not that you are legally required to do all of your processing using the IBM hex format in your floating point calculations; it's just that when you submit your data set to the FDA, the file format they specify uses the IBM hex floating point format as the data type for one of the fields. So you can convert at submission time (and, indeed, it looks like the FDA even provides code to help you convert if the data isn't coming from a mainframe with native support for the format).

And the other examples under Special Uses... like the GRIB data, I have toyed with reading weather model data and I just write Go code that reads the files and does conversions as necessary on x86 without having to involve a mainframe. (Although that said, I have an odd fascination with mainframes despite never using them professionally, and maintain a system using the Hercules emulator and the MVS 3.8 operating system, which was the last freely-available version of what became today's z/OS.)
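
The conversion itself is tiny; a minimal sketch of the 32-bit case in Python (assuming the classic System/360 layout: 1 sign bit, a 7-bit excess-64 base-16 exponent, and a 24-bit fraction) looks something like this:

    import struct

    def ibm32_to_float(word: int) -> float:
        """Convert a 32-bit IBM hexadecimal float to a Python float."""
        sign = -1.0 if (word >> 31) & 0x1 else 1.0
        exponent = (word >> 24) & 0x7F        # excess-64, base 16
        fraction = word & 0x00FFFFFF          # treated as 0.F in hex
        return sign * (fraction / float(1 << 24)) * 16.0 ** (exponent - 64)

    # Reading a big-endian word from a byte stream:
    (word,) = struct.unpack(">I", b"\x42\x64\x00\x00")
    assert ibm32_to_float(word) == 100.0      # 0x42640000 encodes 100.0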


Although downvotes don't necessarily mean factual incorrectness (and I didn't downvote), that may have been the case here. It may also have been because the comment could be read as implying undue influence (and thereby indirect subsidy) by government regulation, injecting controversy/politics where there is none.

Regardless, I wasn't able to find in the WP entry adequate support of your assertion. (Oddly, the "[..]" were in the original WP text):

> As IBM is the only remaining provider of hardware (and only in their mainframes) using their non-standard floating-point format, no popular file format requires it; Except the FDA requires the SAS file format and "All floating-point numbers in the file are stored using the IBM mainframe representation. [..] Most platforms use the IEEE representation for floating-point numbers. [..] To assist you in reading and/or writing transport files, we are providing routines to convert from IEEE representation (either big endian or little endian) to transport representation and back again." Code for IBM's format is also available under LGPLv2.1

There's only a regulation for a file format (and a single, isolated one at that), not regulation of using/processing, and format conversion utilities are available, rendering it irrelevant.


The CPUs in the zSeries mainframes actually support 3 floating point formats (in fact there are more like 10 of them if you count all the sizes):

- IBM legacy hexadecimal floating point you mentioned

- IEEE 754 binary floating point

- IEEE 754 decimal floating point (they were one of the first implementations of it)

So perhaps you were downvoted because on the mainframe, you don't have to use the legacy format at all.


> what is a mainframe and why would I want one?

Very roughly speaking, a mainframe is to a CPU what a supercomputer is to a GPU.


If speaking very roughly, I'd go so far as to extend that analogy to the CPUs as well, and, more importantly, the I/O.


Sometimes one might like to hold lots of power in reserve

Sure, my big powerful desktop PC spends 90% of its life surfing the web or doing light stuff, but for that remaining 10%... if I need the power, it rises to the challenge, every time. Plus, it's paid for and I don't need to answer to anybody to make use of it.


> why wouldn't I want to use something horizontally scalable instead?

Besides the (purported) reliability reasons other commenters have brought up, because "vertical" [0] scalability is, often enough, more efficient in terms of money and (human/engineering) time.

This can hold true even if paying a premium for both the hardware and the humans (which, presumably, are more expensive due to their scarcity, as implied by the article).

The promise of "horizontal" scalability is being able to achieve arbitrary performance just by adding more units (e.g. servers), but not at any particular efficiency (slope or shape of the curve). Even that promise is suspect in the face of Amdahl's Law [1] and USL [2]. In reality, the Fallacies [3] matter, so the most general-purpose supercomputers are still bespoke and expensive, even if they're assembled from (a selection of) commodity parts.
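
A quick sketch of what those two laws predict (the parallel fraction and the USL contention/coherency coefficients below are made-up illustration values, not measurements):

    def amdahl(n, p=0.95):
        # Speedup with parallel fraction p; caps at 1 / (1 - p).
        return 1.0 / ((1.0 - p) + p / n)

    def usl(n, alpha=0.05, beta=0.001):
        # Gunther's Universal Scalability Law: contention (alpha) plus
        # coherency (beta) costs; relative capacity peaks, then declines.
        return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

    for n in (1, 8, 32, 128):
        print(n, round(amdahl(n), 1), round(usl(n), 1))
    # Amdahl never exceeds 1/(1-p) = 20x here; USL peaks near n ~ 31 and falls off.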

Even just in the realm of commodity hardware, I/O-heavy loads are going to benefit from keeping everything as close together as possible, on as fat pipes as possible. Distributed databases or any distributed data handling (that doesn't involve heavy processor use, such as compression or transcoding) is what I've personally seen suffer from this issue. 20-60 "inexpensive" ($3k-4k) servers instead of 2-3 "expensive" ($20k-$60k) ones.

Cloud infrastructure further confounds any calculation, since, with the most popular provider, AWS, you'd pay 2x-10x what you would buying and operating your own hardware. That, multiplied by the inefficiency of scaling a distributed system, especially on a network you don't control, may well be higher than the price premium of a mainframe.

OTOH, if "you" are like Google, and are paying commodity (or below) prices and have had operating your own hardware, as cheaply as possible, as part of your strategy all along, including investing in experts/specialists to keep it cheap, then mainframes wouldn't make much sense. Amusingly, it seems popular to emulate only part of the big players' distributed systems, the software, while ignoring the context that made it so useful for those players, the cheapness of the hardware.

[0] I'm using quotes because the terms are a bit imprecise in comparing mainframes, which, even traditionally, involved distributed processing. IIUC, the reason there's a "C" in CPU is from mainframes, which routinely relied on processing units that were not central. Other commenters have asserted that mainframes are horizontally scalable without detailing what that means.

[1] https://en.wikipedia.org/wiki/Amdahl%27s_law

[2] http://www.perfdynamics.com/Manifesto/USLscalability.html

[3] https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...


My recommendations:

- Give lots of young people the opportunity to learn the necessary mainframe skills (ideally by themselves). Lots of today's great programmers learned their programming skills on their own. This is very hard to do for mainframes because of access to them.

- Pay really good salaries and advertise it. Money nearly always has a huge attractive force.


> This is very hard to do for mainframes because of access to them.

Bingo. I work in an AS/400 shop. I simply can't invest myself personally in the work because I know my access to that environment is predicated on working for this particular company.

I can hone my Linux skills and even my Windows Server skills any time. I can bring it home. I can use it myself. I can own it.

I can't do that with the AS/400. Just spinning up a "virtual machine" in the office to play around on is a licensing mess, and there's no "cloud" where I can rent one for a reasonable fee (that I've found). I have no sandbox.

Since my continued access to that environment isn't guaranteed or even convenient, I don't really want to dive in. And if I don't want to dive in then learning about it at all feels like a waste of time. Which then means I end up learning the absolute minimum before moving back to familiar territory.


Pretty much this. I basically steer my clients toward free software, simply because they can hire people with experience in it, since people can learn it easily in their free time.


http://www.timeshare400.com. Starts at $15/month.


Thanks for this. It will actually be really useful for me.


One of the best posts I have read on Hacker News!


- Make sure the jobs that are available are serious roles at real companies and not contract gigs sourced via body-farming consultancies.


This. A long-term job is indispensable for this kind of programming. Since the tooling is very stable and backwards compatible, you can get really good at it. Now if you get laid off after 5 years of doing just that, you better find another mainframe job because your resume won't look as diverse.

OTOH, body-farming consultancies are excellent for networking and finding a full time job.


For finding a job at a crappy company, because those are the ones that hire bodyshops.


YMMV, it worked fine for me.


IBM runs a student competition to try get people into it: https://masterthemainframe.com/


Is there something similar available if you are not a student?


Yes, it's called [The Learning System](https://www.ibm.com/it-infrastructure/z/education/master-the...). I've done it twice, it's fun. You get the same tasks the students get, which is (on the whole) a nice tour around the system.

And nobody keeps you from playing around and writing and compiling a few programs on your own. It's a virtual system which gets reset when they start a new round. And I'd assume they've set up proper resource controls so you can't damage stuff.


Just an FYI for other readers: the auditing link for non-students is down. It doesn't work regardless of how many times you press submit. Just another sign that IBM is behind the times?


Not that I know of, and as a student I found the competition cycle also a bit off-putting, since it discourages just messing around a bit during random down-time that's not aligned to the schedule.


Or maybe, hire people and have them spend some time learning. Pay them while they are learning. Pay them enough to stick around afterwards, and treat them well.

Really. If people don't have the exact skill you need but have aptitude, hire them and pay them to learn. No matter what the missing skill is.


Let me add a shameless plug here:

https://area51.stackexchange.com/proposals/118484/mainframes...

It's a proposed Stack Exchange website dedicated to mainframes. It is in the commitment phase, which means people need to come forward and commit to supporting the site - with questions, answers, moderation, comments - in numbers sufficient to ensure its viability.


It especially needs committers with sufficient reputation on other Stack Exchange sites, which is the only thing holding it back right now.

Maybe one should start a discussion on Meta Stack Exchange arguing that the usual criteria are not a good fit for this one, because many of the committers don't use other Stack Exchange sites that much.


That's true. Let me prepare an argument.


As a mainframer:

IBM should make z/OS available for free experimentation! That's the biggest obstacle and the reason why people do not know it. Even the long-dead Solaris has more hackers on it than z/OS.

Also, millennials will want job security and pensions too, eventually. Just wait until they have kids.


As a millennial I took a pass on mainframe jobs, even though I find mainframes interesting, precisely because of job security.

Companies are not going to change their culture overnight, and they currently view employees as disposable. There are not many mainframe developer jobs compared to, say, web dev, and mainframe development skills seem like they transfer poorly.

I have a better chance at keeping the actual day-to-day of my life stable if I keep my skills in an industry where I have lots of opportunities for jobs. Unless companies employing mainframe developers start offering guaranteed pensions that are funded up front, or a much higher average salary than the alternatives, it's not a gamble I'd be willing to take.

Edit: additionally, the mainframe engineering talent pool seems to skew much older than other development. That wouldn't be a problem if the field were growing. Having a decent chance of always being the new guy or junior, even if you've been there 10+ years, would probably do little for job satisfaction, given how many people automatically defer to seniority. I have no real interest in having my ideas ignored for years.


There are advantages and disadvantages to mainframe specialization. Because it's a stable thing for big companies, there is actually more job security than in other areas of IT. Big companies need to keep these systems running, somehow; they are more open to training people from the outside, and they are willing to pay for expertise. There is an evolutionary balance.

You also say that you would like the field you select to be growing. Selecting a growing field can be more risky, too. Things that are already widely adopted are probably proven to work really well, and therefore they are there to stay for a while.

As for the older people in the industry, I have always enjoyed it. Older people are generally calmer and feel less need to prove themselves than young people. They actually understand (and demand from the company) work-life balance better. And they have more interesting stories to tell. The truth is a lot of smart people worked on the mainframe in the past, and often the most successful stayed in the business.

The fear that innovation gets ignored - well, that is partly unfounded. It's true that the mainframe is supposed to be a super stable platform (it's a philosophical difference), so you only change things in production when needed. But in tooling, you can innovate a lot.

I work for one of the mainframe companies mentioned in the article, but I am not American. In our office, there are plenty of young people and some of them do really good innovation, for example writing some Python or JS tools. Usually, the older people are impressed by that, as long as it brings some practical value.


Is that what is behind the Master the mainframe program? https://www.ibm.com/it-infrastructure/z/education/master-the...


The link is down; it doesn't work.


Even if z/OS were available for free, wouldn't hardware be a huge problem? IBM zSeries machines are pretty unique under the hood.


IBM has an emulator as part of zPDT, and there is also the free Hercules emulator. It's a problem for certain things (like sysplex programming or the HMC), but mostly easy to deal with.


My first professional job was to write RPG for an AS/400 mainframe (or whatever it's called these days), and even I don't know what the word "mainframe" is supposed to suggest to the modern user. As far as I could tell, it was just a single bog-standard server rack (amusingly sitting alone in a room that was far, far too big for it; a reminder of the massive bank of computers that used to do the same job) whose only interface was a green-and-black 80x24 IBM terminal emulator speaking a bespoke telnet dialect, and where nothing on the system was textual so you could only use first-party tools to modify the system (conspicuously lacking any support for source control). And I ain't a greybeard, this was in 2011! And the thing was dog slow; I took a batch job that spent hours running every night, ported it to PHP (forcing it to make database requests over the intranet now), and saw the runtime fall to minutes. I estimate that the company paid around $40,000 annually for the privilege of using it. IBM must have the best salespeople in the entire world.


It can go the other way, as well. We were running some accounting software on a System i (AS/400, iSeries, like you said, whatever IBM calls it this decade), and recently switched to something that runs on commodity hardware.

Another company that I knew from our user group meeting also switched - the batch processing was so slow, they had to go back to the IBM. Their shipping process just wasn't fast enough otherwise. This wasn't a small company, either. If you're familiar with firearms you've probably heard of them.

You can run the 5250 at 27x132, and if you really need screen space you can run Eclipse/Visual Age on your desktop and have a 'real' environment to code on.


> You can run the 5250 at 27x132

This does ring a bell, I think I did discover this eventually ("wide mode", they called it?). And I'm not sure if this is damning with faint praise, but I will admit that the 5250 monospace font kinda grew on me.

Interesting to hear that I could have been using Eclipse for RPG; for the PHP side (which was still running on the mainframe via AIX, so I had at least a few Unix tools at my disposal (not git though... I had to expose the PHP source directory via a fileshare and mount it via NFS on Windows so that I could leverage msysgit (it's as ridiculous and slow as it sounds, running VCS in a Unix emulation layer on Windows performing filesystem operations over the network to a virtual filesystem in a Unix emulation layer on an IBM mainframe (but I digress)))), I got the company to shell out for WebSphere, which I thought being an IBM product might have some RPG support, but alas.


Zend has a PHP product for the IBM i that isn't too bad - and it runs, like you said, on a Unix-ish file layout and everything.


The AS/400 is more of a minicomputer. The line was developed specifically to take on the DECs and Data Generals that were starting to appear in businesses.


Also, it was designed for maintenance and even programming by "non-tech" staff, which is the cause of most of the ways in which the AS/400 is "weird and alien" to Unix/NT people.


The answer given by the article is literally conscription: the authors were assigned to the mainframe unit as part of their national service in Israel, and found that they liked it.


Fine. I want to learn how to maintain/write software for a mainframe.

How do I start? Can I get access to one? Can I build up something I can learn on? What is the best path for doing this? How would I usefully gain enough experience to get into this?

I have Linux, Python, and C, but unsurprisingly can't afford 30 million (or whatever) for a mainframe.

I know about Hercules, but I’d only be able to run Linux. Is that right? Is that useful?

My feeling is that I’d need experience to get a job, but I can’t get experience without a job. Seems like the whole shortage is a lie. If there was a shortage I’d be able to find various organisations falling over themselves to help me into mainframes.


When industry X complains about shortage of Y what they actually want is for someone else to pay for the training of a bunch of Y, so they can hire them for peanuts and dispose of them at will.


Useful links for the idly curious like me. More welcome.

Rent cloud access http://www.timeshare400.com/

Hercules http://www.hercules-390.org/

Debian's s390x port is a Linux that seems to be supported.


I learned Mainframe programming at a third-rate school in Flint, Michigan for a few hundred bucks (baker.edu).


Fear of not keeping up with the Technical Joneses is a huge influence in our industry because everyone sees "old" programmers booted first when a company or the economy sours. Nobody wants to be That Guy (or Gal).

If mainframes are perceived as a shrinking or outmoded technology, obsolescence fear kicks in and people avoid it. It may need some clever marketing to trick people into giving up that fear or impression. BS works in other areas, per mind-share. Call mainframes "high-up-time cloud", Cloud++, cloud.js, Deep Cloud, or something. There's better BS'ers out there than I.


Dunno, I'm a millennial (didn't even know I was part of that group till recently; gotta love being profiled by things I cannot control) and all it took for me to like mainframes (although I don't work with them) was hearing an older developer tell me all about them and their capabilities. I'm a total computer nerd aside from being a programmer, so I love learning about tech old, new and upcoming.


Yeah, all it would take for me to take on a mainframe is "we need someone for a permanent position that you can live on". I'm in this profession because I loved computers as a kid and it's never stopped. You wanna show me something novel? I'll be happy to learn it (assuming there's a livable job in there at the same time). They just have to be willing to teach it.


That's funny; I'm a so-called millennial, and all it took for me to like embedded programming was hearing an older developer tell me all about its limitations.


Local-cloud or onprem-cloud!


Back in 1996, I lived in Columbus, GA. There are a couple of companies down there that used mainframes for their processing: Synovus/TSYS, AFLAC and Blue Cross Blue Shield. Those companies teamed up with Columbus State University to provide a program for people to learn how to program in the technologies they used:

- COBOL, CICS, REXX
- DB2, IMS, VSAM
- JCL

You got paid to go to 6 months of school, and if you already had a Bachelor's degree, you got a second one for the classes you took. You went from 9 to 5 each day, and you were guaranteed a job at one of those companies; if you worked there for at least 4 years, you didn't have to pay back the loan you got to go through the training.

I was part of the second group of people that trained, and it started my professional career as a developer. Once Y2K was done, I moved on from mainframe programming to Java and .NET.


Do we need to bridge the gap? Or, to be brutally direct: Do mainframes deserve to survive in the coming decades?


As a precursor, I don't have anything necessarily against the concept of a mainframe. It's just one big shared computer, not too much different from a kube cluster etc. But if there is one thing that definitely deserves to die in the mainframe world, it is IBM's predatory tactics for vendor lock-in. Mainframes running Linux I can see being around for a good while.

Watch this video for a pretty cool story behind IBM and EBCDIC.

https://www.youtube.com/watch?v=FUIqtevjod4


Another question to ASCII is whether millennials can learn EBCDIC.


To be brutally direct, why do you think mainframes should not survive "in the coming decades"?


Asking out of complete ignorance: What can a mainframe do that other kinds of machines can't? I barely understand what a mainframe is or why you'd use one over other technologies.


What a mainframe can do is have ANY single part fail, keep running at full speed, and inform you of the part that failed so you can get it replaced. It is not unusual for that redundancy to go all the way back to the substations!

Is this more or less useful than a bunch of cheap PCs in a rack? It depends on what you want to do. The bunch-of-PCs model generally just accepts that there will be a few blips when (not if) something goes wrong and ensures nobody will notice. The mainframe model is that even the smallest blip would be noticed, so you can't have any.


Plain and simple, they are the biggest servers available. Built to be highly reliable and to have high IO throughput.

There are also many features in z/OS (the flagship OS) which are not available elsewhere. Things like workload manager or instrumentation facilities.


This question has an easy answer: exactly what they have been doing for 20-50 years. Inertia is a real thing.

Other relevant answers might include "true fault tolerance". More than half of your mainframe can be burned to a crisp, or being chewed on by a dinosaur - doesn't matter, it will keep working. The closest bad analogy I can think of... is that a mainframe is like the NASA space missions of yesteryear. Every component has three or four duplicates, just waiting for a primary component to fail so they can take over. They aren't cost effective, but they are operationally effective.


Another relevant factor: mainframes are probably not riddled with security flaws. If Iran had used a mainframe in their nuclear enrichment facility, they might never have gotten hacked by the NSA and friends.


Channel I/O was something I wanted on all my desktops and servers. It helps mainframes get their high utilization ratio and throughput:

https://en.m.wikipedia.org/wiki/Channel_I/O


How does this map to the current state of commodity computing?

For example, it seems we already achieved that with disks, once embedded controllers attached to DMA-capable HBAs became the norm.

A similar thing seems to have happened with NICs, as well as the ability to offload higher-level protocol processing (another mainframe-like feature).


Channel I/O worked across many devices and with the OS. As far as cost goes, using AWS might not be cheap, but I was thinking of redundant VMs or dedicated servers with cost-effective hosts. That doesn't cost hundreds of thousands to millions a year.


Perhaps I'm missing something. I'm still unsure if you're saying that Channel I/O (or its equivalent) is missing from current x86 server systems. Does virtualization affect this situation?

Initially, you mentioned high utilization ratio and throughput. I think the former is a red herring [1], but I'm curious about the latter. I've certainly witnessed poorer I/O throughput under virtualization, but on bare metal, throughput doesn't seem to be limited (beyond the capabilities of the bus).

[1] e.g. it doesn't matter if CPU is pegged but I/O channels are at 10% if the workload is CPU-bound and CPUs are the expensive part to scale. Or substitute memory for CPU.


I think their selling point is that they run cheaper and more simply than the equivalent power in server clusters, since you don't have to deal with coordination and its overhead.

That being said, I think that advantage has mostly gone away with modern cloud computing and its matching infrastructure.


I don't believe the GP expressed an opinion, but simply asked a question.


Maybe I am prejudiced but I have the feeling that mainframes get sold to shops with a lot of money based on the salesman's skills and, well, I wouldn't be surprised if there was a kickback here and there, to oil the old engine. Not because they are the best tech.

That may be a bigger obstacle to people getting into that area. You can make money in non-corrupt companies where your career won't get stuck.


I bet the kickbacks are like any other enterprise IT sales process: wine and dine, golf, and box seats, except the restaurants and courses are probably much nicer. Decision makers have pretty strict policies restricting gifts over a certain dollar amount, but invite a bunch of employees to a box at a sporting event you've already paid for, and how much that gift is worth depends on how you measure the value.


Cloud is essentially a copy of the mainframe, just a cheaper, more affordable version. The mainframe (i.e. z/OS) is not old; like any other software it is continually upgraded, and the hardware is upgraded along with it. The positive thing about the mainframe is that if you write a program and it serves its purpose, it can run for decades without an issue. A typical mainframe environment is made up of TSO, the console, SDSF, CA-7 (job scheduling), the DB2 database, CICS (the online system), and JCL to run batch jobs. Languages are primarily Enterprise COBOL, plus REXX for developing in-house tools.
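
For anyone who has never seen one, here is a minimal sketch of what a JCL batch job looks like (the job name, account field, and dataset name are made up for illustration, and job classes vary by site); it just runs IBM's stock IEBGENER utility to copy a sequential dataset to the job output:

    //COPYJOB  JOB (ACCT),'SAMPLE',CLASS=A,MSGCLASS=X
    //* Run the stock IEBGENER utility to copy a dataset to SYSOUT.
    //STEP1    EXEC PGM=IEBGENER
    //SYSPRINT DD SYSOUT=*
    //* SYSIN is DUMMY: no control statements, just a straight copy.
    //SYSIN    DD DUMMY
    //SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
    //SYSUT2   DD SYSOUT=*

You'd typically submit something like this from TSO/ISPF or have CA-7 schedule it, then watch it run in SDSF, which is how most of the pieces listed above fit together.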

Why use a mainframe? Less downtime (data sharing and Sysplex environments make it look like there is no downtime at all), and no data theft or malware destroying data.

Why is cloud better? Pay as you go: this works for all startups and small companies building the latest front-end applications. With mainframes you will be paying millions to billions to IBM, so only large corporations can afford it.

My opinion: initially, with all the tech boom around cloud, IBM thought they could wait it out and it would fade away, and they were mostly focusing on Watson. Since that boom didn't stop, they have now started to put in some work. The mainframe is not a JSON world yet. In the last few years the mainframe has been developing rapidly as well: new languages like Java and Python have been introduced, along with z/Linux and z/VM (which lets you run small z/OS VMs, much like compute engines). IBM Cloud (Bluemix) is free for only 30 days; anyone interested can try it out. CA Endevor, the mainframe change-management tool, is getting integrated with Git, with preference mostly given to Bitbucket since Atlassian has many products like Jira under its wing. Now machine learning is coming in as well.

During my college days I wanted to be a web designer, but I got a job in mainframe technology and was given only a month of training. That's enough to start in mainframes. Yes, you are going to see black-and-green screens for the rest of your life if you are a mainframer. Creating a progress bar in REXX is a skill; it looks cool but stalls the screen, so you don't want to do it. And you become aware of things like the entire CPU being shared across various users across your organisation.

If you want to learn a technology that processes massive amounts of data, the mainframe is the place.


> no data theft or malware destroying data.

That would only be true if the mainframes never interfaced with any other systems or with any humans. Since the latter can't be the case, and the former hasn't been the case for a long time, this just doesn't hold.

> Why is cloud better? Pay as you go: this works for all startups and small companies building the latest front-end applications. With mainframes you will be paying millions to billions to IBM, so only large corporations can afford it.

There's an implied false dichotomy here, though, that ignores a decade or two of using commodity hardware, before cloud was viable/popular. It was also pay-as-you-go, just with much larger increments, so it didn't work as well for smaller companies. That's still available and is actually cheaper (and carries less vendor lock-in).

Perhaps ironically, large enough companies routinely pay millions to AWS. In theory, they could switch providers, but not if they bought in to any of the vendor-specific services. As you say, cloud is, in many ways, a copy of mainframes.

> If you want to learn a technology that processes massive amounts of data, the mainframe is the place.

That may be a stretch, since "massive" isn't what it used to be. Something like all the world's financial transactions might have been massive, in computer terms, 20 years ago, but, today, processors have 1000x the transistor densities, spinning disks have 20x+ the I/O, and SSDs are even faster.

I'd argue that the scientific, physical lab, HPC/supercomputing world is the place for that. The quantities of data produced by the instruments at LLNL's NIF or CERN's LHC would easily overwhelm a mainframe, even after initial processing/filtering.


What gap? Just pay more for the position.


Best comment of them all. I agree: there is no skills shortage of mainframe programmers!


I tend to think the mainframe shortages are the result of a culture that views mainframe "operators" and "system programmers" as nearly minimum-wage, unskilled jobs until someone has 40 years of experience.

But what really gets me is that tapeless mainframe backup solutions abound, though they tend to fall into two categories: the ones that require code changes in legacy applications that use tape as hierarchical storage, and the pile of virtual tape solutions (like this one: https://tributary.com/storage-director-2/) which appear as tapes to the mainframe and dump the data to alternate targets (disk, AWS S3, whatever).

I guess there is a third, "transparent" set of methods that basically back up or snapshot individual volumes outside of the applications. Either way, their page could do a better job of contrasting their product with existing backup solutions.


Fidelity Investments has a software engineer training program for fresh college graduates that lasts a few months and is supposed to prepare them to work at Fidelity. The training program is split into different tracks, one of which is a mainframe track [1].

[1] https://www.cs.uri.edu/wordpress/wp-content/uploads/2013/09/...


Working with a mainframe every day, I find the system very fast and reliable. The hardware was very much ahead of its time when released. I agree that the platform is an impressive one.

My issue is with the developed software and database design. Although the software is custom written, many mainframe developers do not understand the concepts of modern programming and do not improve over time. Once they learn a concept (sometimes 30 years in the past), they continue to develop this way, rather than improving as it is outside their comfort zone. They are not open to new concepts or ideas. When you have a team of developers doing this over several decades, you end up with a big ball of mud. https://en.wikipedia.org/wiki/Big_ball_of_mud

Trying to port a big ball of mud to another language is very difficult, which contributes to mainframe developer job security and to monolithic, antiquated systems. Modern developers are very particular about code portability, readability, and modern concepts. Hopefully this is just my experience and not common across other mainframe developers.


Modern mainframes are just cloud compute. These are services and microservices written in proprietary programming languages, using possibly-proprietary CPU and RAM. They're equivalent to a few racks of Windows or Linux servers behind load balancers. Or a small fortune in AWS services.

Cray, IBM, et al sell computing nostalgia and laziness. Their biggest product is the need to not rewrite code in an open system.


I think that's rather a flip attitude.

Say you have a 30-year-old codebase. Odds are there's not completely accurate documentation for everything. Even if you have an accurate spec on paper, it may not encode every gimmick and undefined behavior that external consumers depend on. And if you bollix the switchover up, you're burning tens of thousands of dollars per minute of unavailability.

In that situation, saying "pay $5M for a new mainframe, which buys you another ten years of predictability, 20% more performance, and 20% less power consumption" is a solid alternative to "spend five years and several million dollars in developer effort trying to build something modern that's a drop-in replacement, and then praying it doesn't catch fire or have hideous real-world performance when you go live".


I emailed IBM about 10 years ago to see if they had a certification program for z/OS; they didn't. It appears they do now, which excites me.


I work for a large consulting firm and I do a lot of work helping large government and private companies perform data migrations away from AS/400 mainframe environments.

Many people will say that there is a shortage of skills when it comes to mainframe environments. That might be true; however, the need for such skills is diminishing significantly.


One comment out of that was interesting to me:

>And it’s not all COBOL anymore. Today mainframe programmers use Python, Linux, node.js, Java, blockchain.

node.js on a mainframe? That doesn't strike me as being something that can run uninterrupted for years.


It's not used to run anything critical.



