But as far as training people to handle COBOL, well, that's simple. Look at vocational schools of years past. Why can't you train a random person to do just COBOL? They don't need to do object-oriented programming or UX theory. They just need to maintain and occasionally update some ancient program that's been rock-solid for longer than they've been alive. It doesn't take a genius, it just takes very specific training. And there are a lot of people who would jump at the chance at a high-paying, steady job that they just need to train for a year or so at night school.
Thank you. So much internet armchair analysis falls into this category.
It's also a great proposition for a grad student. They don't have to unlearn much, they're in a perfect position to absorb a new language, and the term 'set for life' can be thrown around liberally.
Wouldn't be surprised if this was already a thing, probably in the form of COBOL-specific Consultation services.
You can. Our local community college ties its classes to what industry in the area wants. It has classes on COBOL, RPG, and so on. That's because there's quite a bit of it out here in big companies.
Do all students just care about learning the hype and creating the next flapping bird game?
Creating a new flapping bird game is exciting for students because it can give them an opportunity to create something from nothing. You aren't going to get that opportunity doing COBOL development. You will most likely spend the rest of your career doing maintenance work rather than any greenfield development. There is nothing wrong with that but not everybody likes the fact they can't create something new.
The driving factor was a suite of new services and features they wanted to implement. We came up with a creative way to run parts of the system as proxies to the COBOL applications backing them, and to decorate the requests and responses with data from the newer systems.
We were done much faster and cheaper; for a project of its scope there were surprisingly few hiccups or post-launch rework projects, and we got an end result they could've spent 10x the money to achieve, all with far lower risk.
I'm assuming we were in a fortunate position that the general flow of data could be handled from the various source streams, but I wonder why this isn't at least an option for moving forward from some of these ancient systems. It seems like a good first step, at least.
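The proxy-and-decorate approach described above can be sketched roughly like this. Everything here is an illustrative assumption (function names, field names, data); the real system would forward over HTTP or MQ to the COBOL-backed application rather than call a stub:

```python
# Hypothetical sketch: a new service proxies requests to the legacy
# COBOL-backed endpoint, then "decorates" the response with data from
# newer systems, without touching the COBOL side at all.

def call_legacy_backend(account_id):
    # Stand-in for an HTTP/MQ call into the COBOL application.
    return {"account_id": account_id, "balance": "1234.56"}

def fetch_new_system_data(account_id):
    # Stand-in for a lookup in the newer services.
    return {"rewards_points": 870, "alerts": []}

def handle_request(account_id):
    legacy = call_legacy_backend(account_id)
    extra = fetch_new_system_data(account_id)
    # Merge the decorations over the legacy payload.
    return {**legacy, **extra}

print(handle_request("A-1001"))
```

The point of the pattern is that the battle-tested core stays untouched while all new behavior lives at the edges.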
Other companies have worked together to essentially make boot camps before SV even thought to do so.
Running mainframes isn't cheap. COBOL programmers aren't cheap. Not being able to improve these systems / innovate with them can cost the business a lot of time/money. And if we're talking COBOL, we're talking at least 25 year old corps.
How long is the ROI on moving over? How expensive does the maintenance of antiquated systems get? It's easy to say that they're rock-solid and never need to be touched, but is that really true? For example, a lot of banks today have a lot of absurd UX, often because of absurd internals, and I know a lot of that usually comes down to "the system is ancient and nobody really touches it anymore, that's just what it is now".
What language would you suggest? Remember that decimal reliability and predictability is important. Plus, let's pick a language likely to have a very long service life because we need to have our investment rewarded.
Frankly, the iSeries machine we have is pretty cheap to run. We bought it 10 years ago for about $15k and have had a maintenance contract that doesn't cost that much (about the same as our Red Hat service contract). We probably will buy another one since they just released a sub $10k one that will more than meet our needs. Never lost any data, easy enough backup that the biz office does it themselves, and IBM is pretty quick about any fixes. Much cheaper than any Linux, BSD, or Windows system we have ever had. Plus, I don't mess with it like the others. It just works.
1) We've tested it multiple times; after all, an unrestored backup is not a true backup.
Maybe someone should be thinking about a COBOL replacement language instead of the plethora of C / C++ replacement languages.
I imagine that within the "a few years of development" is budgeted some time to figure out what tool to use.
It will be a lot more than a few. Things absolutely cannot go wrong or even change in production. One slip-up can result in real money being lost. This whole "move fast and break things" approach doesn't apply to bank accounts.
Or you could not do any of that. You could just accept your situation. Mainframes are essentially a subscription service. For a yearly cost, IBM has offered you guaranteed performance and guaranteed hardware (they fix it immediately if there is a problem). So, all you have to do is pay your subscription and pay some people who like to have a steady, unexciting job and your business will continue to run like clockwork.
I'm obviously not saying banks should just switch today because it's easy and all, but if the ROI on switching is 10, even 20 years, it still makes sense to do if you're a bank.
I can't find the post now, but some folks from the US govt who frequent HN talked at length about the ongoing replacement of mainframe-type services with newer tech. What I gathered from that is that the cost isn't ridiculously low at all when you take into account the impact it has on the organization's ability to get things done.
You are at a known state but you now have to re-create the steps to get to that known state as I'm sure a lot of the domain knowledge around the product is not there anymore. Not only that, you now most likely have other sources consuming the output of the original COBOL application so you have to ensure that the other applications are able to consume the new data and that those don't break either.
What you have to remember is that when the COBOL code was written, it replaced hundreds, maybe thousands of people doing manual data entry and manipulation, maybe even pen-on-paper. That gives you a fantastic ROI. After that's been done, replacing one computer system with a newer one is completely different, a spectacular case of diminishing returns.
I've seen it attempted first-hand and it was a dismal failure. And it was their third or fourth attempt. And it wasn't even a full replacement! Just a simple data lake to dump things into. Google or Facebook or Amazon could do it, but few companies are like them.
Success isn't guaranteed, maybe as low as 50%. When you say "few years" it intimates you don't appreciate the scale of some of these systems. In the end, retiring legacy systems isn't a technical problem; it's a product owner, product management, and business problem. The business owners have to make a concerted, multi-year effort to document their process as implemented, define the replacement, and manage its (re)implementation. This never happens.
I wouldn't say it never happens, but it certainly takes a couple tries and a lot of time and money.
My employer is close to being off our mainframe(s), but it will probably take at least another 10 years to get rid of the last little dependencies. We've already spent about 15 years (with hundreds of developers) getting to the point where all the major online and batch processes have been reengineered and have very small mainframe dependencies. The mainframe development teams are very small and mostly focused on decommissioning work.
But yeah, it's a massive undertaking. In our case, the business was motivated to pay for it because mainframes just couldn't scale for our main use cases.
Very, very unlikely. The company doesn't even have technical writers anymore for internal projects, let alone technical historians. And with the time it's taken and normal staff turnover, I doubt that there's anyone who really could write about the whole effort based on their own experiences.
Basically, it consisted of building a new central datastore for our content (that was pretty mature by 2005, when I started). Then hundreds of smaller projects to migrate this or that process to read from or update the new datastore, or keep it in sync with the old one. There was also a lot of opportunistic piggybacking of new feature work onto migration projects, or migration work into new feature work. However, I only have the view of a regular developer.
I see a lot of people here telling me I don't know what I'm talking about without actually giving an answer as to how long it would take and how much it would cost, which was my question in the first place and I'm being downvoted for asking it. Seriously, WTF HN?
What you can do is estimate the cost of implementation in today's dollars. For example, suppose Bank System X has been under development for 40 years with an average of 10 developers per year. That's 400 person-years. Suppose you find cheap labor at $100k per year; the sunk cost in personnel alone is $40M in today's dollars. So maybe 40 people could do it in 10 years.
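That back-of-envelope estimate, spelled out (all figures are the commenter's hypothetical assumptions, not real data):

```python
# Sunk-cost estimate for a hypothetical "Bank System X".
years_of_development = 40
avg_developers = 10
person_years = years_of_development * avg_developers   # 400 person-years

cost_per_person_year = 100_000                         # "cheap labor", USD
sunk_cost = person_years * cost_per_person_year        # $40M in today's dollars

team_size = 40
rewrite_duration_years = person_years / team_size      # 10 years, if effort scaled linearly

print(person_years, sunk_cost, rewrite_duration_years)
```

Note the big hidden assumption: that effort scales linearly with headcount, which Brooks's law suggests it doesn't.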
The above analysis is not satisfying.
On price for performance for certain workloads, as I understand it, it can be.
> Not being able to improve these systems / innovate with them can cost the business a lot of time/money.
The banks use other technology for peripheral systems that interact with the core functions all the time (look at the drivers behind AMQP sometime). The core functions, OTOH, are about implementing requirements that haven't changed in centuries; until there is a desire to implement a fundamentally new way of doing things (like blockchain, which financial institutions are exploring and probably aren't using COBOL on mainframes for their experiments), there's nothing to innovate on.
In fact, if you look at the existing COBOL banking applications, many of them aren't pure inhouse development, but packaged applications like CSC Hogan. Hogan runs on top of z/OS and CICS and is written in COBOL. Hogan shops will have a lot of their own COBOL they add to the Hogan system to implement customer-specific requirements. So for many banks moving off mainframe/COBOL, it is actually a migration from one packaged app to another, but of course they need to rewrite from scratch all their customisations.
In my personal experience, most banks doing this don't try to do it as a big bang. They identify subsets of functionality where they feel the current mainframe system is most limited – e.g. mortgages, mobile banking, whatever – and start by replacing that function only with the new non-mainframe software, and have it integrate with the pre-existing mainframe app for other functions. Then, the idea is, progressively they will migrate more and more functionality to the new system, and eventually the mainframe/COBOL systems can be retired. A piecemeal approach often involves less risk, costs less (in the short term) and is easier to justify to the business since they can see tangible results much easier.
This is a good excuse for a few years. But after decades?
And talking about banks?
The whole software industry is frankly pathetic. We are talking about some of the most profitable actors in human history, and yet "we don't have money for this" starts to look pathetic.
Google, Apple, MS, IBM, Oracle, banks, the defense industry, etc. have no excuse not to invest in basic infrastructure.
This is the same as a country that neglects investment in its own fundamentals. If you agree it's smart to invest in roads, the electric grid, etc., then why do we think otherwise when we talk about OUR industry?
This isn't a "basic infrastructure" problem, it's a problem where the people writing the software didn't plan for obsolescence and many of the "best practices" we take for granted weren't even in place when this was being written. And it is about tracking real money, so if the system isn't exactly right, they could end up being tied up in court which makes it even tougher.
The risk is too high to leave things as they are today.
BTW, probably some downvoters think I'm advocating moving this to some hipster language like JS or something like that.
I work mostly for enterprise-like customers, have worked on government projects, etc.
So I'm aware of the work involved here. I have also helped move decent-sized code bases, several times, from OLD to NEW.
> ~$1 Billion to convert their old system
If things in AU are like in my country, this cost is inflated... For some projects, Oracle bid our customers 10x more than we did at the time.
But this is beside the point, anyway.
So, how could this be done?
The first problem is the need for solid hardware + OS, and Linux is not there.
Solution? Stay on mainframes. It's basic.
The second problem is OLD COBOL plus zero understanding of the why, how, where, etc. of the full system. This is the big one.
This is like finding an OLD C codebase, written like 3 years ago. Or even better, an OLD B codebase (or any kind of precursor to C, if it existed at the time...).
And pretending the solution is to STAY WITH THE OLD CODEBASE. For DECADES.
IS THIS MADNESS? YES, IT IS!!!!
So, you build a better COBOL. Like CoffeeScript -> JS. You start moving the code from OLD to NEW. In the process, you document and build tests. You end up with a better COBOL (or a transpiler) with a proper toolchain.
This can be seen as the same progress made with another bad language, JS. You start with something terrible, with a huge installed base, yet you start moving towards something else.
In the process, you can do something like asm.js but for COBOL, and voilà, you can transpile other languages.
The main trouble is that customers on the enterprise side of the world have no respect for developers, and fire them after years, or just don't care.
Then, when in trouble, they ask for external contractors who do not have the required knowledge (and maybe not even the skills).
For this to work, it needs to engage the on-site developers, plus maybe contractors or some additional hires for the mechanical work. Do it well, and this is fairly cheap for the benefit.
But this requires you to "do it well". And that is the "hard" part, because the human factor is the big issue here.
You can tell what a person's _real_ priorities are by what they do. Likewise, you can tell what a society's real priorities are by what they do.
Of course. But after decades of the same problem, isn't it time to admit that it's time to change?
In other words, even though everybody complains about something, and even though it's theoretically fixable, the cost+benefit isn't always there. That doesn't stop people from complaining or offering conceptual solutions.
If nobody pursues a particular solution, that's strong evidence about the real implementation and problem costs. Not conclusively, of course. But as a passive observer it's usually the best evidence available.
Really, the answer to the COBOL "problem" is obvious and it's what basically anybody familiar with it suggests: a piecemeal and gradual shift according to cost, benefit, and opportunity.
>> We did a piece the other day about how learning the ancient programming language COBOL could make you bank.
>> Which can only be maintained by a small group of veterans, which grows thinner every day.
>> There are almost no new COBOL programmers available, so as retirees start passing away, so does the maintenance for software written in the ancient programming language.
This is false, because there is in fact tons of COBOL talent around (in the US). The waves of offshoring in the '00s left many thousands of unemployed COBOL developers. I personally know of hundreds in the local market. A prominent financial services and BPO firm offshored mainframe development and laid off many hundreds here. Most got out of software development altogether. This skills shortage simply doesn't exist.
Me, with COBOL listed on LinkedIn... Inflated offers via LinkedIn: zero.
If you were a developer affected by the offshoring in the '00s, you're probably nearing the end of your career (unless you'd just started).
The problem here is the lack of a NEW talent pool.
Is 47 really "near the end of your career"?
I always get a chuckle when some hopeful naif writes like they're going to retire at 50 and be done. Most everyone I know who's tried that "gets bored" and goes back to work or starts consulting after a half year trying to find their feet. Those are also the ones who when asked what happened tell you they've got plenty of good years left and are now planning on retiring at 70.
47 is still building steam in my experience.
As a non-lead, non-management code slinger? It's usually well past it.
While verbose, COBOL enforced structure and discipline, and it is very possible for code to be maintained by someone other than the author.
On the other hand in the 90's and early 2000's many critical pieces of software were written in Perl. While it is quite suited as a scripting or glue language, I've seen elaborate integrated systems written in cryptic, unstructured and undisciplined Perl, often with proprietary extensions.
And no one learns Perl anymore.
There is no way to maintain these projects, and in at least two projects I've seen, what should have been a straightforward change led to a large-scale retire-and-replace project.
I would've much preferred to have done that in RPG.
Every time I've written Perl with another programmer, I learned something new.
Option 4, which is what banks are actually doing to the best of my knowledge, is to train new hires on how to write COBOL.
The problem with these systems is that they are very old, and thus do not benefit from many of the more modern developments in the field nor do many quality developers learn the language.
The benefit with these systems, though, is that they are very old. With that age comes completeness. They're battle hardened, thoroughly tested, and a known quantity within the institutions that leverage them.
History is littered with companies that attempted to replace the core of their business with one big project written with newer technology, only to fail catastrophically.
During my first gig with a bank, I was appalled to see the ancient technology in play at the heart of the bank's systems, and the retired developers coming in part time on schedules they set for outrageous hourly rates to do maintenance tasks on that system. Over time, though, I came to realize that this was the most reliable and cost effective solution the bank had available.
The third option listed is what I see happening, but at a much slower pace than the article suggests:
> Basically, Döderlein suggests making light-weight add-ons in more current programming languages that only rely on COBOL for the core feature of the old systems. However, the key thing is how the connection to the old system is made.
> Gradually banks will be able to address each and every product need that they have with new platforms that will replace the overly complicated COBOL add-ons. This compartmentalizes the banks’ COBOL-problem and makes it cheaper to fix, as it won’t have to be done all at once.
It's a glacial pace, but it's being attempted. The heart of the old systems, though, were still untouched last I had visibility into those inner workings.
Which is cool when everything works and nothing has to change. When something breaks or you need to change something... you wish you had things such as clean code, unit tests or even TDD/BDD, code coverage, continuous integration.
Banks are basically sitting on systems they don't fully control. They're kind of an interesting experiment: how long can you control the dragon, i.e. continue operating via patchwork and outside contractors who are in their 60s? :)
The track record of Big Bang rewrites is not a good one. A few years ago it was either ACM's CACM or IEEE's Computer magazine that profiled several high-profile failures. The financial and time cost overruns were spectacular.
Banking and Insurance are not the only industries using COBOL. If you're in higher ed and your institution uses Banner, you've got COBOL. During every upgrade, our Banner team has to license the latest and greatest COBOL compiler set from (I believe) Micro Focus.
I have to wonder if a split in technology exists between commercial and investment banking.
You want to lose the hardware - there's a solution for that.
You want to create modern web services over - there's a solution for that.
You want to code in a modern IDE - there's a solution for that.
If the systems that use IBM mainframes were using Amazon ECS or Azure... the same people would write articles about the lack of reliability of core services like banking and airlines.
Auka CEO Döderlein, mentioned a mere 5 times, says there are only 3 options: ignore the problem, do a big-bang rewrite, or bolt on new pieces.
Don't panic! Just call Auka. They have your new piece.
There's no mention of the option we would all pick, a gradual migration, of course.
And besides that, banks should do what they want to do. That COBOL software they are running is hardly ever a problem, the problems are usually in the new stuff which has been far less battle tested and is connected to a hostile network.
It's also an incredibly expensive one, especially since it's a recurring situation. No matter how clean and clear you think your code is, or how thoroughly you've commented everything, in a decade or two, someone is probably going to be groaning about it being legacy code.
Not only do they have a hard time finding people to replace their retiring veteran developers, but for smaller companies (like this one) that can't afford to pay ridiculous salaries for a top notch COBOL dev, they have to settle for mediocre aging developers that can write COBOL and are on the job market.
These devs are getting paid good money to work on critical systems and aren't skilled enough to properly maintain them. It would be so much cheaper for these companies to pay better devs to do more recent tech. But it's hard for them to get out of that loop. Makes me rethink where I have my money.
The same thing we're saying today about Object Pascal, Business Objects, Cold Fusion, or DCOM:
nothing at all.
Get your bid in!
Well what about the other couple-dozen genders that seem to have surfaced in recent years? =)
Microservices may still exist as an organizational concept but I'd be stunned if we're still thinking about things like containers in 20 years.
There are probably only two languages which might face a similar problem to COBOL's: C and Java. For now they are both actively maintained and used for new projects, so we are fine for at least the next 50 years.
The kinds of services I'm describing are critical to the normal functioning of government agencies. They're running now, and they'll probably be running for the next 25 years (at which point another contractor will upgrade them to something new and shiny).
- If the power grid goes down, you are facing riots.
- If the systems on your shiny new Boeing go down, you are facing the deaths of dozens of people.
- If your military systems go down, you are facing Russia ;)
If employees of a government agency work less efficiently (is it even possible?) for some period of time (a few days, months?), nothing bad happens. That's why it is possible to change those systems every 25 years. They will be less productive after a software change, and they will face new bugs in the transition period. A lot (but not all!) of the work those agencies do exists only because of some stupid laws (example: in my country you have to have a special permit to cut down a tree on your own land - I know someone who had to start building their house a year later because they couldn't get a permit to get rid of one tree; another one: you cannot inherit some types of parcel if you are not a farmer - you have to either become a farmer [virtually, of course, but it takes a few months] or change the parcel's type [it's not always possible, and it's also a few months of fighting with bureaucrats]).
I really think people miss this very important point. Redundancy is transparent for 99.9999% of the software you run on the old big iron systems. For something like a modern application on AWS you have to know, understand and code for the infrastructure.
If you just want to host your servers on EC2, the SLA is 99.95% uptime. So that's about 4 hours of downtime due to outage a year. Put your servers in a multi-AZ autoscaling group and you're pretty solid. Bonus points for having autoscaling groups in more than one region. If you don't want to use any of the other services AWS provides, you can simply use them as a basic hosting service and get pretty amazing uptime. I've never used an old mainframe system, can you get greater uptime than that? Including hardware failures, power outages, network outages?
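The arithmetic behind that "about 4 hours" figure is just the SLA percentage applied to a year:

```python
# Downtime allowed per year for a given uptime SLA.
# 99.95% uptime (the EC2 figure cited above) leaves ~4.4 hours/year.
HOURS_PER_YEAR = 365 * 24  # ignoring leap years

def downtime_hours(uptime_pct):
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

for sla in (99.95, 99.99, 99.999):
    print(f"{sla}% uptime -> {downtime_hours(sla):.2f} hours/year of downtime")
```

Each extra "nine" cuts the allowed downtime by an order of magnitude, which is why "five nines" mainframe-class claims are a different category from a 99.95% SLA.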
4 hours of downtime a year is just way too much for some systems.
Just have a google for "mainframe uptime".
Yes the software is abstracted from the HW redundancy. You can pull CPUs with running code in systems and things keep working. Really - walk up and pull the CPU out. No impact to running code.
Heck you can run your private cloud on one:
It was inconvenient for some people (quite frankly, I didn't even notice anything - I read about it in my RSS reader a few days later ;)) but nobody died (excluding heart attacks), countries didn't collapse, and wars didn't start because of this.
DISCLAIMER: probably should have mentioned this earlier, I'm an AWS employee :)
I suspect that in 50 years, Haskell will still be the up-and-coming thing.
I mean, it is present there, but most of the time it's part of some glue or automation, it's not part of the core systems.
Those are usually Cobol, C/C++, Java and .NET. Or some proprietary language in case they were crazy enough.
Compare that to C where buffer overflows are common, Rust where the type system is a high art, Java where thread-based concurrency "just works" so you can get in trouble using it, etc.
Every couple of years people decide to redo distributed computing stack.
I'm actually surprised that graduates can still enter the web industry without receiving specialist training - that's how graduates are dealt with if they go down the Cobol route and work for one of the big IT multinationals who maintain these sorts of systems.
I like this quote from https://teuxdeux.com/purpose
> On that note, why the heck does every app need to change all the time? Can’t something on the Web be more or less finished?
However I have no idea what COBOL is like to work with so I am probably being unrealistic.
I don't know how well that worked out, but other than Hypercard all other languages I am familiar with (and they are many) only make sense to those who can program in them. So though Java is often considered to be the modern COBOL, at least in this regard it isn't.
IBM isn't sitting around waiting for COBOL to die out; it's working out hooks between COBOL and Java and introducing new languages like NodeJS.
The problem isn't money, it's risk. No CIO is willing to risk moving to a new system, and that's the problem that needs to be attacked.
Then in theory you could write unit tests in some language to test the real COBOL and the transpiled COBOL.
They also have a proprietary, parallel language I'd like to see go FOSS:
It's that it is 10M lines of complicated, probably poorly written code (in the sense of being hard to understand, with few comments and no test coverage).
Otherwise companies successfully transpile far bigger code bases than 10M lines...
IMO the first thing they should be doing is writing automated tests against the existing COBOL. A transpiler might help this process.
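A common shape for those automated tests is a characterization (golden-master) harness: record what the legacy program produces for a corpus of inputs, then assert the replacement or transpiled code matches exactly. Here `run_legacy` and `run_new` are hypothetical stand-ins; in practice they would shell out to the real COBOL batch job and to the candidate replacement:

```python
# Minimal characterization-test sketch: the new implementation must
# reproduce the legacy output byte for byte on recorded inputs.

def run_legacy(record):
    # Stand-in for invoking the COBOL job and capturing its output line.
    return f"{record['id']},{record['amount']:.2f}"

def run_new(record):
    # Stand-in for the candidate replacement implementation.
    return f"{record['id']},{record['amount']:.2f}"

def characterize(records):
    mismatches = []
    for r in records:
        old, new = run_legacy(r), run_new(r)
        if old != new:
            mismatches.append((r, old, new))
    return mismatches

sample = [{"id": "TXN1", "amount": 10.5}, {"id": "TXN2", "amount": 0.75}]
print(characterize(sample))  # an empty list means the outputs agree
```

The value of the harness is that it documents the system's actual behavior, edge cases included, without anyone needing to understand the COBOL first.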
The other thing is how these systems were architected. They were mostly all green screen apps that built data files that were then fed into extensive (complex, large) nightly batch processes. For example, today when you do a transfer between bank accounts, you may expect the transfer to be immediate. Most often it's shown in a pending status until after the next nightly process. This is why.
However, I do see the appeal of building off of the COBOL foundation using more modern languages. Writing a UI in COBOL on CICS is an absolute nightmare to deal with and maintain (not to mention my firm's egregious install and deployment process that is nothing but red tape and TPS reports). I'm working on a project right now where we're migrating some of our reconciliation screens to the web, where the UI communicates with a Java/Spring Boot server that itself interacts directly with the database using DB2 stored procedures.
In this space, I see a need for moving away from COBOL. But for the batch processing? Not a chance.
It's a quirky language (omit a single '.' somewhere and you're in for a world of hurt), has its neat parts, and in general will come across as limiting once you have been exposed to other, more expressive languages. But that's probably also the reason it is still around; think of it as Java but conceived in the gray past. The original intent was to make a programming language that would allow managers to program. That didn't quite work out.
Have you ever written about how this came to be? I find myself to be similarly broad in interest/experience, but I'd judge myself much more superficial/limited in this regard because of the breadth of it all. I'm curious how you manage it all.
If you want to take it off HN feel free: firstname.lastname@example.org
That said, the guy doesn't even have a Wikipedia page! So how should I know?
I'm not a COBOL programmer, though, and not planning to be.
Are you in the USA?
Edit: Clarification, you can't download a Mainframe VM so you need access to a Mainframe for training.
I tend to think about what you need to learn to actually work in an IBM mainframe environment, which is more than just learning the language.
As an aside, I think part of the reason I like COBOL is because I am an above-average to very good academic writer. I enjoy writing papers and find it hard to meet page minimums because brevity and clarity are important to me. These characteristics (good, clean, to-the-point writing) seem to translate to COBOL more than any other language that I have used.
It was actually pretty straightforward. The key to the project was being able to easily convert the data on the mainframe to standard RDBMS tables (Oracle).
After that it was just a matter of "mapping" the green screens of the system to web pages, i.e. displaying the same data and providing the same actions.
The only time I really needed to grok any COBOL was when there were discrepancies between the old green screens and the new web pages.
After a period of user testing it was basically do the final data dump to Oracle, turn on the new system and turn off the old one. However, there were many other programs running on the mainframe so the hardware didn't go anywhere.
Currently I work for a major insurance company which has multiple data centers with many, many mainframes... Think ~60K square feet worth of mainframes per data center.
Mainframes are pervasive and aren't going anywhere anytime soon.
My dad programmed in COBOL for the Army, and he retired in 1975. That's how old COBOL is. Knowing my dad, he probably still has his manuals, though finding them might prove an archaeological exercise.
I swear there are not enough classes about numbers.
Yes. This is one thing that irks me about these fads in programming, they all solve problems that have long ago been solved and then re-introduce all the bugs that were already discovered, fixed and forgotten about ages ago. Old software is a very nice repository of information about edge cases and real world complexity.
A $0.75 accounting error turned out to be named "Markus Hess". Precision matters. If it works: good. If it works but you don't know why it works: bad. If it works but has a small problem and you don't know why: bad. If you run a $1M budget and you're off by $0.75: bad. And so on. You really don't want to store anything to do with money in floats.
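The float problem is easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, so cents drift. COBOL sidesteps this with fixed-point decimal fields; Python's `decimal` module shown here is the analogous tool:

```python
# Why money and binary floats don't mix.
from decimal import Decimal

total_float = sum(0.10 for _ in range(3))
print(total_float)        # 0.30000000000000004, not 0.3

total_dec = sum(Decimal("0.10") for _ in range(3))
print(total_dec)          # 0.30, exact
```

Three dimes summing to anything other than $0.30 is exactly the class of error an auditor (or a Markus Hess hunt) will eventually surface.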
I think that would probably be a good business.
You can't migrate that way. An Insurance company can do that (contracts usually last 1 year) but Banks can't.
It was more a suggestion of how to move to a new system at all.
What happens if the bank closes before those 50 years?
Also, you'll still have to write wrappers for the old tech (iPhone apps which access the COBOL core) in order to maintain feature parity with the other banks.
You mean "battle tested for 40+ years, totally solid 1970s tech"?
If anything, it's any crap they'd write now that would be MORE fragile.
You're right, of course, but the public is trained to trust the latest and greatest.
If 'battle tested' was the most salient selling point we'd be on Series 60 or QNX phones.
'We're putting new customers on the latest technology and keeping existing ones on our old systems' is not going to be an easy thing to message to the general public.
Actually, there are more Series 40 and Symbian phones sold today than iPhones - 350 million to 200 million (India / rest of Asia, Africa and Latin America are the main buyers). Many people here still miss the old Nokias. Of course, Android is the new Series 60.
But I don't see this ever really coming to consumer attention. They're never directly--or visibly--interacting with old mainframes. From their perspective, they're using the newest tech. The mainframe in the background doesn't exist to them. If they even think about it, it's just magic. The closest you might have gotten to the public talking about legacy systems was probably the healthcare.gov debacle, when a number of news articles mentioned integration issues as being a problem. Those were big issues in themselves, but they weren't the main problem behind the site's development, nor were they ever a major discussion point.
The canonical example when it comes to UK banking is the RBS outage, which meant that customers could not make payments for three weeks. RBS was hit with a £56m fine for that.
Another commenter says there's no incentive for managers or execs to stick their necks out, but £56m is quite an incentive.
One could argue that legacy code makes that more likely because of unseen factors that predate the people actively working on it. It's hard to miss a landmine, after all, if you don't have any clue that it's there. But while a rewrite might eliminate that specific risk, it does so by inviting a lot of new ones to replace it. In some ways, the risks are probably greater, as you need to figure out possibly decades of business logic and then re-implement everything. From a business decision perspective, the existence of legacy code is an insufficient argument on its own. After all, the problems inherent in legacy code will eventually surface yet again in another 10-15 years anyhow. The better solution in most cases would be to identify specific problems and determine what can be done to hedge against them.
My first job was in one of these software houses for banking and finance (B&F) that made both core systems and peripheral software.
A bunch of us programmers tried to make the same sell to one of the bankers we had on staff. (You need bankers to do the job of product managers in this business.) He looked at us like we were morons, and asked a rather pointed question: "OK, say we rewrite the whole software stack in C++ (this was 1998). What NEW banking products can we make in C++ that we CAN'T make in COBOL?" The answer is, of course: none, zip, nada. Last time you changed banks, did you ask whose banking system was being used and what language it was written in? I'm guessing not.
So banks go to the software houses to license a core banking system, and they care pretty much about only two things:
1) Is it solid? (i.e. bug-free. Few things evaporate trust in a bank faster than there being questions about the bank's ability to do basic math right.)
2) Price.
There have been a few cases where a new core banking system has been written from scratch, and most have been failures due to being too buggy at launch. Word gets out among the banks and the system is dead. All will wait until it's proven; no one will be the first one out of the gate. The bug-free requirement is a show stopper from the get-go.
On to price. The development cost of such a system is in the neighborhood of 250-500 million USD. The only way to get that down is to strip down the features to a much simpler system. The other software houses are just going to sell an existing COBOL system, with disabled features to match yours, and underbid you.
Bankers, unlike most CS majors, actually understand how to calculate the cost of investments! Say 500M USD development cost; a future-value calculation will yield that you'll need a return on investment (ROI) of 20-50M USD EVERY YEAR FROM HERE TO INFINITY to make the investment pay off. (That's PROFITS, not revenue.)
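The arithmetic behind that claim is just the perpetuity formula: a stream of C per year, discounted at rate r, is worth C / r today. Turned around, a 500M USD outlay only pays off if it throws off at least 500M x r per year, forever. A quick sketch (the 4% and 10% discount rates are my assumption, picked to reproduce the 20-50M range):

```python
# A perpetuity paying C per year, discounted at rate r, has
# present value C / r. So an investment of PV only breaks even
# if it returns at least PV * r per year, indefinitely.
def required_annual_profit(investment_usd: float, discount_rate: float) -> float:
    return investment_usd * discount_rate

dev_cost = 500e6
for r in (0.04, 0.10):  # illustrative discount rates
    print(f"{r:.0%}: {required_annual_profit(dev_cost, r) / 1e6:.0f}M USD/year")
```

At 4% that's 20M/year and at 10% it's 50M/year, which is where the "20-50M, from here to infinity" range comes from.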
Core banking systems are so feature-stable, and have been so thoroughly debugged over DECADES, that there is very little maintenance to be done. I'd expect that each of these systems has a staff of 10-20 programmers maintaining it (and they spend a lot of time doing other things). How big a staff of maintainers will your new banking system need after launch? Even if it were zero, and you assume you pay each maintenance programmer 250k USD, a staff of 20 still only costs 5M USD.
The article's section "what can the banks do" left out a fourth option. Instead of using CS majors to code COBOL, you use B&F majors and give them a 1-2 year education in COBOL programming. The CS majors need 1-2 years of on-the-job experience to reach a B&F bachelor's level understanding of what a core banking system needs to do anyway. Cost is the same.
One last point: say you do write a new system today, your choice of language will be either Java or C#. As a database there is no viable alternative to SQL. In 30 years no fresh CS major will touch Java or C# with a ten-foot pole, and you'll have to do another rewrite to the tune of 100-200M USD (assuming that programming will require fewer hands by then).
Much cheaper to keep on educating non-CS majors to do COBOL.