I've always wondered why this is the case. Is it because people are way more careful when entering code? The article points out that their IDE is connected to the mainframe directly, which makes me believe an error is much more severe. I'm more confident that the compiler and unit tests can catch my mistakes than I am in my own coding skills (which is why I'm not always afraid of changing tightly coupled systems). And even if worst comes to worst, I'd still be confident that some system will detect that change and roll back my commit before all goes south.
With COBOL (in such old environments), it seems that things must not break at any point, so developers are a lot more careful with what they do, which makes the code do what it is designed for very well. The problem with such an approach is of course upgradability (I would not want to touch undocumented code that has not been updated for 20 years because some government decided to replace an old "protocol").
Or maybe COBOL is considered more reliable (by banks, etc.) because managers just see the fact that "this system has been running for 15 years" while ignoring the problems that come with it.
But I'd like to hear your opinion about it.
I was one of the few new hires with actual education and experience in software development. The majority of new hires were people in their mid-thirties who were looking to change careers from trades, etc. The company was specifically looking for smart people that they could train extensively in house. Almost nothing you learn outside of that environment is applicable inside, and vice versa. The best way I can describe it is as a complete parallel evolution of technology. The technology is similar and yet also completely alien.
Connecting to the mainframe didn't mean you were editing the code live. It's exactly like shelling into a remote Linux machine to compile your code. You didn't work locally because there's nothing that is local.
That said, the language itself was (is?) so limited in scope and ability that stability was nearly guaranteed. You READ and PRINT using tape or disk and printers, but the programmer didn't worry about what devices were available - some system admin pointed these devices at a job, and READ came from that tape and PRINT went to that printer. The setup and management of the jobs that ran your code was a very manual process. No pointers, no memory management ... nothing low-level at all. Just high-level abstraction.
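To give a rough sketch of that device indirection (the file, program, and dataset names here are made up for illustration): the COBOL source names only a logical file, and the JCL for the job binds that name to whatever dataset or device the admin chooses, so the program never changes when the device does.

```cobol
      * In the COBOL source: only a logical DD name is referenced.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE ASSIGN TO INFILE.

      * In the job's JCL: the DD statement binds INFILE to a real
      * dataset (or a tape, or a printer) at run time:
      *   //STEP1   EXEC PGM=MYPROG
      *   //INFILE  DD DSN=PROD.DAILY.INPUT,DISP=SHR
```

Point the INFILE DD at a different dataset and the same compiled program reads from it, which is exactly the "READ came from that tape" behavior described above.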
Then there's hardware reliability. As an example: you can walk inside the mainframe, pull a CPU, and the thing keeps working. Replace that CPU, and the system makes use of it. None of the system's users ever know that it happened.
From what I understand, until Tandem came along and started to seriously eat away at the market in the late 1970s and forced all the other manufacturers to follow, mainframes were not known for reliability.
What I've usually heard is that existing COBOL code, demonstrated through many years of use to work correctly, is more reliable and less costly to maintain than the alternative of replacing working components with something in a modern language whenever new business requirements require a change.
Having been involved in a big-bang replacement of a large, working COBOL system with a from-scratch .NET solution, I'm not unsympathetic to this argument.
I've never heard the argument (though perhaps it's been made somewhere) that COBOL is, for a clean-slate project, more reliable or maintainable.
To the extent that it's true, it's because the people who have built these systems are experienced, have a lot of business knowledge, and know how to reuse and tweak their systems rather than trying to build each new requirement with the latest interesting framework they downloaded five minutes ago.
But it's slow to code in, especially if you go outside COBOL's comfort zone.