Banks scramble to fix old systems as IT 'cowboys' ride into sunset (reuters.com)
397 points by petethomas 247 days ago | 352 comments



Been working on accounting systems in RPG and COBOL since ~1992. I also know C/X86ASM/Pascal/Delphi/VB/Fortran. Never bothered with C++ that much; played with Java a bit, but Oracle irritates my bowels, so I moved away from that.

As mentioned in the article it's good work, but it is also not easy work. You tend to go through cycles of being pushed out, then brought back under extreme emergency at any cost to get stuff working, only for the cycle to repeat. Companies never think of the old guys as the ones to implement the new system - that's a job for the "enterprise experts" - I can't even keep track of how many "rewrites" I've seen fail in my life because of this.

We are the dinosaur club, and it's a club that pays extremely well (high 6 figures a year without working too hard if you are talented and have a good client base and reputation), but like fossil fuels, one day it will all be gone ;)


> Companies never think of the old guys as the ones to implement the new system - that's a job for the "enterprise experts"

Exactly this is why rewrites fail. The challenge of a rewrite is not mapping the core architecture and core use case; it's mapping all the edge cases and covering all the end-user needs. You need people intimately familiar with the old system to make sure the new system does all the weird stuff buried in the corners of the old system's code - the stuff nobody understood but that was there for good reasons. IMHO the best way to approach a rewrite is to build a blended team of experts on the old system and experts on the new technology, and put a manager in charge with excellent people skills who can get them to work together.


What a ton of people forget: you also need to implement bugs and edge cases the same as in the old software.

More often than not, the next developer using the old system worked around it but never documented it as a bug (or wrong format - it's a feature, not a bug!).


Rewrites fail, IME, because the old system was never documented in the first place, and because they are only attempted when a change becomes necessary that management is not confident can be made in the old system - usually because both the technical and business expertise associated with the old system has been lost, and it's being maintained by what amounts to a cargo-cult priesthood - and usually on a firm and fairly short timetable.


And since the number of old guys available to help is constantly growing, it no longer makes sense to do projects without them. A diverse team makes better products.


What about not doing a rewrite? What about instead refactoring, documenting etc the old system.


If you want to move off of COBOL + mainframe, that kind of necessitates a rewrite, doesn't it?


Emulate the mainframe and make a modern abstraction interface over the platform. We investigated this option for some IBM 360 COBOL code for a project I was running. We were a very small team and were market consumers (not implementers) of the original system, which got open-sourced in a panic. We eventually chose not to - but seriously considered it. If I were the owners (the Fed), I would have.


This doesn't help if part of your goal is to have a system not written in COBOL. Emulating the older hardware and OS gives you an even more complex system that's harder to hire people to work on.


It helps for the first stage - getting the platform onto a more sustainable environment. Once the interface exists and has full coverage, parts of the backend can begin migration without a doomed-to-fail 'big bang' rewrite.


Does it really help? What does it cost to build a trustworthy emulator for an ancient system that you don't own the source for? What would it cost to just migrate your legacy applications to a newer mainframe that IBM supports?


I wasn't suggesting writing an emulator from scratch. I was suggesting emulating the mainframe. Here is what we, specifically, were looking at: https://en.wikipedia.org/wiki/Hercules_%28emulator%29


Don't you then need to pay IBM for the OS anyway? Will they license the OS for this use?

Or did your system actually include the OS source?


For 360, it is now in the public domain: https://en.wikipedia.org/wiki/OS/360_and_successors


Interesting. Thanks for the details.


One of the problems with attempting to write an interface is the opaque source/consumer problem.

E.g. I'm working on a system that I could hypothetically abstract (I've got access to it, can poke with enough tests and test data, etc).

However, what I don't have is access to the code of, or test injection into, any of my sources/consumers. Both are expecting all the corner-case quirks to be exactly identical, and may actually have accreted software that depends on a specific quirk - a specific quirk that I have no way of knowing about. Or they may send me something I've never seen and am not expecting because it's a 1-in-1,000,000 corner case, and we don't have any logging of an example that came through production.

I haven't worked on too much of the heavyweight stuff like you have, but I tend to take the perspective that "a 100% compatible rewrite is impossible." 95%+ maybe, but we're going to have to deal with the <= 5% after it goes to production.

Did you ever pursue writing and dropping a new tailored load balancer / router type application on the incoming data stream such that you could divert a specific portion onto new system(s)?
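For what it's worth, the diverting router being asked about here can be sketched in a few lines. This is a hypothetical Python sketch (the key format and percentage are invented for illustration); the idea is to hash a stable record key so a fixed slice of traffic is deterministically diverted to the new system:

```python
import hashlib

def route(record_key: str, percent_to_new: int) -> str:
    """Deterministically divert a fixed percentage of traffic to the
    new system, keyed on e.g. an account number so a given account
    always lands on the same backend and results stay comparable."""
    digest = hashlib.md5(record_key.encode()).digest()
    bucket = digest[0] * 100 // 256  # stable bucket in 0..99
    return "new" if bucket < percent_to_new else "legacy"

# The same key always routes the same way, so you can dial the
# percentage up gradually while comparing outputs between systems.
assert route("ACCT-0001", 10) == route("ACCT-0001", 10)
```

Keying on something stable (account number rather than, say, a random draw per message) matters: it keeps each account's whole history on one backend, so discrepancies can be attributed.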


> we don't have any logging of an example that came through production.

If it's something that's never come through production, it is not already documented anywhere, and it has an occurrence rate of approximately 0.001%, is it really a feature that needs to be replicated?


Sure, because clients will say anything short of 100% success is failure.

(Currently migrating a site of 125k pages of content with oodles of edge-cases)


Yes, but not necessarily a big-bang style rewrite and that's where the difference lies. Big-bang style rewrites have an extremely high failure rate, doing it smart is more work from the start but has a much higher chance of success.


Because COBOL is the PHP of the 60s, and mainframes are slow and expensive.

Also, too many of the talented folks are stuck in a blocking-I/O mindset somehow.

Some are wizards, though, writing assembly and making Raspberry Pi-sized systems blazingly fast. OK, a couple of Raspberry Pis.


Mainframes aren't hotbeds of compute power - they are all about the I/O. Your typical COBOL program reads a record, does some moderate processing, and writes a record out. Over and over. So keeping the input and output channels full so processing wouldn't stall was a key design goal.
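That record-at-a-time batch shape translates almost directly into any language. A toy Python sketch of the read-process-write loop (file names and the pipe-delimited field layout are invented for illustration):

```python
def process_batch(infile: str, outfile: str) -> None:
    """Classic mainframe batch shape: read a record, do some moderate
    processing, write a record out, repeat until end-of-file.
    Throughput lives or dies on keeping both channels busy, not CPU."""
    with open(infile) as src, open(outfile, "w") as dst:
        for line in src:                                    # read a record
            fields = line.rstrip("\n").split("|")
            fields[-1] = f"{float(fields[-1]) * 1.05:.2f}"  # moderate processing
            dst.write("|".join(fields) + "\n")              # write a record
```

The mainframe version differs mainly in that the I/O channels are dedicated hardware, so the "read" and "write" steps overlap with processing instead of stalling it.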


As a counterpoint, many very well funded and talent rich organizations have failed to retire what TPF mainframes do every day for airlines, banks, and credit card companies.

Cobol isn't involved, but those slow mainframes are.


To be fair, rewriting huge mission critical systems is hard, no matter of what kind of system it is and what you are changing to.


Non-blocking I/O isn't really some miracle new-age programming drug. It doesn't change the equation much.


Our mainframe is expensive but it isn't slow. It's not exactly sitting on the same hardware from the 70's.


The amount of performance (particularly CPU) you get per dollar is very low. Mainframes are all about lots of I/O with ridiculously high reliability and availability, but for an absurd amount of money.


Right, but that's our use case and it also happens to run our legacy applications! IBM's support is also very good.

We're not running our modeling engines and that stuff on it. We have HPC for that.


There was never an excuse not to document PHP functionality. In fact, it's quite easy to document now, just the same as any other language. Devs simply argue that the 'system is still changing', and so nothing is documented.


Rewrites also fail because the thing being rewritten is a mess, no specification exists of what it does, and the only regression test suite is deployment into production.


All features and edge cases should have a test. If possible, also have tests for all the bugs ever found in the old system. It helps if the tests are not too coupled to the code and are well documented. A test could be making an entry where debit and credit don't match, or trying to enter an extra decimal, or checking that the system adds two decimals correctly.
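As a rough sketch of what those characterization tests might look like (hypothetical Python; `validate_entry` is a made-up stand-in for the old system's behaviour, not anyone's real code):

```python
from decimal import Decimal, InvalidOperation

def validate_entry(debit: str, credit: str) -> bool:
    """Accept an entry only if debit and credit parse as amounts with
    at most two decimal places and balance exactly."""
    try:
        d, c = Decimal(debit), Decimal(credit)
    except InvalidOperation:
        return False
    if d.as_tuple().exponent < -2 or c.as_tuple().exponent < -2:
        return False  # extra decimal place rejected
    return d == c

# Characterization tests pinned against the old system's behaviour:
assert validate_entry("10.00", "10.00")        # balanced entry passes
assert not validate_entry("10.00", "9.99")     # debit/credit mismatch
assert not validate_entry("10.001", "10.001")  # extra decimal rejected
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")  # exact decimal math
```

Note the use of `Decimal` rather than floats: binary floats can't represent most two-decimal amounts exactly, which is exactly the kind of subtle behaviour change a port needs to guard against.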


> All features and edge cases should have a test.

Yeah, but the interesting problem is what happens next. Let's say a test case reveals that there's a flaw in the next-gen system. You fix it. It later turns out that the same flaw exists in the legacy system. What do you do?

Do you revert the fix, or leave it in place?


Yeah, I had to explain to a client once that our new version of their analytics query produced different results because their original SAS code didn't mind referencing variables before they're declared, and so one of the numbers in the analysis used to always be 0.

They did NOT appreciate hearing that they had been running a bugged query for years...


Revert the fix and add it to your backlog. You've got enough to worry about when porting/rewriting, don't add additional dimensions of complexity and risk by trying to change business logic at the same time. Minimize risk by minimizing change...then when the new system is up on its feet, go back and fix all the mistakes you found.


I upvoted that but I suspect that there will never be an opportunity to "go back and fix all the mistakes you found".

At least there never is for me.


I should count all the grains of rice in my lunch, too.


Maybe I don't give the rewriters enough credit, but in all honesty, I'd be surprised if they bothered to look that deep into the code and tests before it's already too late. Too often, there's just some handwaving and proclaiming you "don't do it like that in today's software".


I worked at a company where they did a rewrite of one of the main systems. My friend said it best when they finally finished and annoyed a lot of their customers: "They fixed the things that were wrong, but they missed implementing all the things it got right."


> Companies never think of the old guys as the ones to implement the new system - that's a job for the "enterprise experts"

So true, even for more recent stuff. It's so absurd I always wanted to make a web comic about this. Companies keep ignoring the advice of their own developers, and then eventually hire some "technology expert" who is going to implement the same tech their existing staff recommended years ago. Except, of course, the expert has no idea about business processes and user needs, so you end up with a long and expensive train wreck that results in something barely better than what you had before.


Our company does the opposite. The tech staff recommend against buying bloated enterprise software. They buy it anyway and tell us to make it work. It takes five times longer than building something simple and custom, and is generally not fun to work with.


We have most platforms from IBM, z (Mainframe), iSeries (AS/400), p (AIX), plus about everything windows.

Moving off core systems (z/i/p) isn't simple, mostly because of the amount of data combined with all the custom applications. Every attempt to move the apps off the z fails to come to fruition because of scope and the fact that it just works. The i is increasing its load to pick up from the z and getting a good dose of webfacing and REST services; these are dead simple to implement and many reuse all the existing code.

Talent-wise, it really hasn't been much of a challenge in Atlanta/Dallas to find people who can support what is needed. The language isn't the biggest roadblock for many mainframe systems; it's the file systems that can trip people up. The i is pure DB2, so anyone versed in SQL can use it, and RPG looks more like Pascal these days than that three-column stuff people normally associate with it.

The one thing many here don't understand is just how many companies are invested in mainframe, i, and large p series systems. Don't scoff at these platforms. With modern tools they can be webfaced just fine, and the advantage of their default coding languages is that they are business-math oriented and simple. With modern features and file systems, it just comes down to what management is comfortable with and whether it serves the company's needs.


> Moving off core systems (z/i/p) isn't simple mostly because of the amount of data combined with all the custom applications.

I wonder if someone could get away with selling a z/i/p emulator, akin to Wine. (Or perhaps more apropos, MAME, since the machine architecture would differ as well.)


The other part of the equation, software aside, is risk -

Anyone using mainframes heavily in the way we are talking about here is probably running something mission critical with lots of $$$ tied into it - and the financial/legal risk of having anything go wrong at all is a huge driver - they want a big company with deep pockets and pages long support agreements backing them, so they can indemnify the other company if there is any problem and shift the blame (and from a personal perspective, cover their own butt). The mainframe using companies also usually have deep pockets, and so even if the mainframe is expensive, it still allows them enough profit margin to support it..

"There is a problem with our IBM mainframe, we have requisitioned a team of 12 people from IBM to investigate and legal is looking into what sort of contractual obligations IBM has if issues aren't resolved" is a much more palatable statement for the fortune-100 CIO to pass along to the CEO than "our IBM emulator seems to be having problems, but the 3 people developing it are on vacation, IT isn't sure if it's an emulator issue or something to do with the new hardware we migrated to run it on 2 years ago, and IBM says they don't support the software when it's run on the emulator. We're calling them again to see if there's some way we can convince them otherwise"


IBM legally destroyed a business for trying to do that on top of Hercules emulator. You can use the emulator, though, with a licensed copy of their mainframe OS. Common case is you already have license for an actual mainframe then use Hercules for development/debugging/whatever.


https://en.wikipedia.org/wiki/Hercules_(emulator)

The problem isn't the availability of an emulator, it's licensing the OS you want to use on the emulator.


Aren't the modern mainframes something like emulators / hypervisors of the old ones already? (IBM still sells you new mainframes.)


IBM would never license you the OS.


I smiled at your last comment. You're clearly a charming individual :-)

Out of interest, what's the solution here? With your experiences in mind, what are the banks doing wrong from a technological perspective?

I get the impression banks should almost start again on the side, building a completely new bank they then (manually?) move customers over to, or just use to take in new customers whilst waiting for the previous ones to die out. I guess if they built a whole new system, wouldn't it then be a "simple" case of doing a money transfer into the new system?


French banks are actually doing that! SG/Société Générale, a brick and mortar bank that costs a hundred dollars a year, built Boursorama, an e-bank that's free and that gives you $200 if you subscribe. Advantage: SG has too many manned agencies and can't justify preemptively mass-firing them yet – they first need the customer base to dwindle. Better IT, usage of Internet techniques at the core, and drastically simplifying the product line is an intended consequence.

Same happens with other French banks. As a personal PoV, banks' manned agencies provide bad service (from 20x delays to plain mistakes), so I won't cry for them.


Boursorama is not a bank. It's a website to allow people to play on the financial markets ("bourse" = exchange). I think Americans would call that a sort of dealing account.

Obviously, a dealing account has to hold funds and handle transfers. That's not any close to a consumer bank though.


Boursorama Banque is a bank, as in, a normal cheque account, IBAN, VISA card, loans, term deposits and no trading account. Real normal bank. The main Boursorama website is another department, it's not really clear on their website.


It is now also a bank. I have a Boursorama bank account and a credit card...


Boursorama also owns a Spanish "digital bank" called Self Bank. I have an account there.


You can't wait for customers to "die out" as their mortgages will go on for decades. You definitely need a customer's old data in the new system. Now you have to maintain and develop two systems - bugs, banking regulation require constant work. Often the clock is ticking for old systems and asking vendors to provide and maintain ancient tools can be expensive or impossible. You can't "manually" move customers over, you need to "migrate" lots and lots of data - both systems will have huge, completely different database schemas.


For an example of what this system switchover looks like: First Bank in the United States just did a whole-system switchover in March, at least for their business customers. They handle my wife's business' banking, which means I do all the paperwork, and it definitely was a pain. Basically it was like signing up for a whole new bank account. They copied over something like only the last 6 months of transactions and some payroll info, but things like automatic bill payments and everything else (even basic online login) had to be set up again.


While mortgages do go on for decades (typically), people will frequently re-mortgage based on new interest rates.

e.g. in the UK it's typical to sign up for 2-10 year deals, with the interest increasing after these offer periods, meaning people will likely switch to another deal.

So there will be a certain amount of organic churn.


Not enough churn, though. I worked on mortgages for a UK high street bank a while ago, and they were still running the mortgage systems for every bank and building society they'd taken over in their decades of growth, plus a few different attempts at building one system to rule them all.


As a developer who recently turned thirty, I think about this a lot. I like working in startups, but it's not going to fly forever. I'm already the old guy in the room.

Would learning 'ancient' technology be a good career move? These systems aren't going anywhere, right? But the people who know how to maintain them are. Which means that maintaining these systems, which already pays well, will pay even better in the future? Or is there something I'm missing here.


You're missing the part where they're being aggressively replaced by Java at every turn and will eventually cease to exist and also these jobs are temporary consulting gigs more popular with retired experts going back to the same bank that fired them years previously.

If you want to be well employed, learn Java and move to the American south, where a nice house with a half-acre backyard sets you back $130k-$200k and they can't find enough qualified IT people. No, it isn't a startup or Google, but the pay compared to the average income of $30k lets you live far better than in San Francisco, as long as you don't mind having fewer things to do in town.


Just hang around - the technologies you learnt when you were 20 will be ancient soon enough.


Yeah, but my impression is they're so ephemeral they might not have a big market share.


Wait you mean Rust and Go aren't going to be around 30 years from now?!


Go probably will be, but it's still a niche. Rust, the jury is still hearing evidence.


No, the stuff you know will be ancient soon enough. I would recommend keeping up with the new way of doing the same ole shit though.


I can't upvote this enough. Your tech stack will be obsolete even though the replacement will be mostly the same and possibly worse. Looking at it differently, imagine being a common lisp diehard starting in the 80's. Every 5 years you look around for something better and mostly anything would be a step back. Your dog died, your kid is in college, you're middle aged, Perl, VB...etc have all come and gone and you're still using lisp lol.


"LISP programmers know the value of everything and the cost of nothing."

- Alan Perlis


I don't think you need to learn an "ancient technology". There is plenty of work in traditional corporations. You just need to find one that has legs so you can ride it out for the next 30 years. A steady job, 5 weeks of vacation after 15 years, benefits. If you're lucky you can become a number and slowly disappear, spending more time doing your own thing. Long lunches, a nap in the car, running some errands, etc.

Eventually everyone runs out of piss and vinegar. It's not so bad.


> Long lunches, nap in the car, run some errands, etc. Eventually everyone runs out of piss and vinegar. It's not so bad.

I hate working with people like that, and would hate myself if I did that.


Life is short - how much of it are you willing to give away? That may not resonate with you now, but wait until you're in your 40's or 50's and 10-20 years have passed.


I like what I do. I like going to work. I like my company. I like coming home to my baby and tickling his feet. It's life. Being a lazy bum not accomplishing what you want to do shortchanges you and your life as well as the people paying you.

My father in law has a saying: "if you don't like what you do, what are you doing?". And I agree with that.


> I like what I do

You are fortunate. But labeling everyone who "doesn't like what they do" as misguided lazy bums is... a bit much.

Most people in this state are slowly recuperating from years of failure while attempting to "do what they like". Usually as a bonus they also have to dig themselves from under a crushing mountain of debt.

As you correctly pointed out - it's life. Though it's rarely as simple as you paint it.


You have a lot to learn.....You'll figure it out eventually.


You should move into management then


If the cost becomes too high, or rather if there are uncertainties, they will just replace them.


I think the question everyone is failing to ask is: what's the interview process like? Do they put you through 4 rounds of whiteboard coding bullshit followed by a coding assessment followed by a pair programming session? If anything, I'd argue the interview process for such a niche role should be the most rigorous, because you can't rely on auto-complete-by-StackOverflow.


Doubtful. I'd bet a lot of the people who can do this work wouldn't be interested in a whiteboard interview. If you're fishing in a small pond for people that can maintain your legacy code, there's going to be a lot less bullshit in the process.

Whiteboard interviews have their place, I'm sure. This isn't one of them.


I'm pretty sure with these kinds of systems, if you can repeat about 10 nouns relevant to working within a mainframe environment like this, you're hired.


I've considered it, seriously even. I could go off, contract for large hourlies, probably get paid a lot more than I do now as a more junior exec with a lot less stress (and when I first moved into management, I had the same thoughts). If you have the skills and reputation, this is an area where you write your own ticket. If I were closer to retirement age instead of in my mid-30's, I think I'd be doing it.


I guess software is so ingrained into companies these days - and banks were particularly early adopters - that the best chance of a rewrite of a company's software is to start a new company.

A new company whose processes, for example, don't have all these interdependent edge cases (yet). The domain will start pushing edge cases onto the new company too, sooner or later, but that knowledge still has a chance to diffuse - and lots of newer companies these days are probably better at avoiding small bus factors.


>> Companies never think of the old guys as the ones to implement the new system - that's a job for the "enterprise experts"

Too often the old guys simply re-create the old system using the old methods in new software, and you end up with all the same problems as before.

In a perfect world, the two would work together.


> high 6 figures a year without working too hard

Wow, like $800-900k?


It's not unheard of. But you must understand that these are people who have pretty decent job security in an often huge codebase that has been acquiring cruft for 30 years, in a language most people don't understand. I would like to be able to claim that the code is good, but some of the people writing it in the first place were not programmers. It is not strictly bad, but some parts are just minefields.

A bit related:

I spent 2 years of my life on a C codebase written by mathematicians who would much rather have written it in APL. I don't know if you have seen C written as APL, but that is 2 years of my life I will never get back. I left for a lesser-paid, more fun job. Upon my leaving, the manager offered me a 60% pay raise (yup, I should have had higher demands, but at least I proved myself :) ).

Unless you enjoy torturing yourself, legacy COBOL programming is not very rewarding, and at least in Sweden, most of it is slowly being moved to other languages.


Best story I've heard is of one of the old programmers commenting his COBOL code in Latin. It was a different time...


"Mea culpa" over and over?


It's common practice in Vatican Bank.


How do you know? Have you seen one?


The ATMs in Vatican City have Latin as their main language, so it's not a huge stretch...

https://commons.wikimedia.org/wiki/File:Vatican_latin_atm.jp...


I feel like that's a play for job security more than anything else.


And different times: these days, with Google Translate, Latin doesn't add much to your job security.


Malus est?


> COBOL programming is not very rewarding, and at least in Sweden, most of it is slowly being moved to other languages.

Which would imply that there's a demand for people who know both COBOL, and whatever's the popular migration target for such systems these days (I assume Java?).


Migration target like Visual COBOL (not a joke). Just look at those smiling hipsters writing COBOL.NET on their laptops.

https://www.microfocus.com/products/visual-cobol/visual-cobo...


I can't wait until I can compile COBOL to WebAssembly.


Nah, go with terminal apps in Visual COBOL distributed as Electron apps.


https://xtermjs.org/ is here for you.


Oh my goodness... why do I keep making jokes about how a ridiculous world would function, and then reality is already halfway there?! It's been happening too often these past two years.


This seemed surprising to me on first read, but it makes sense. An average senior software engineer probably makes ~$120K in the US. They often work on systems that aren't in any way critical and are more or less replaceable. Anything in COBOL these days is by definition crucial and highly expensive infrastructure. The talent pool to work with that tech is small. They are by definition very experienced and probably highly specialized even within that pool. So an upper-six-digit salary actually makes sense.

To be honest, the idea of companies paying through their noses because of decades of short-term thinking makes me smile. Karma in action.


I expect he meant the high range of 100K-200K. It's a colloquialism I hear periodically. I took it to mean $180K-$200K.


You can make $250/hr consulting for mobile app development. I have no doubt an expert in some ancient Cobol-based banking software can set their own price. When your entire business stops when it breaks paying $1000/hr is chump change by comparison.


I think since he wrote 'pays extremely well' it could mean 800k+


I don't think so. TFA references a rewrite that cost $750M over five years. Paying three or four experts $800k/year to keep it running is chump change in comparison.


How can "high 6 figures" mean high 100-200k? Ugh, I hate english.


It can't. This is not reasonable English; it's (presumably) a misspeak by the OP. People make these kinds of mistakes in every language.

It is highly unlikely that the OP is making $800k/yr consulting as a programmer. That's $400/hr sustained for more than a year of full-time work.


$800k/year is possibly unusual, but these are unusual people. I see nothing unlikely about a highly skilled contractor with 30 years of experience, a good network, and the right location pulling down $800k/year.


Do you know anyone who does? "Without working too hard"?

The chance that the OP is billing $800k/yr is vastly smaller than the chance that the OP used a bad figure of speech. The number of technology professionals who can sustain $400/hr billing for a full year is tiny. The number of technology professionals who are bad at communication is huge.

Bayes' Theorem is relevant here.


I know someone who makes half that and who is utterly unremarkable (nice guy, competent - but unremarkable; he's just played his cards very well: not playing politics, but being a consultant, not an FTE, in a lucrative niche, and possibly having a nose for ending up on good projects). So I can believe that a tiny number (which is what we're talking about - they can make this much exactly because there is only a tiny number of them) can make this much.

"Without working too hard" is quite subjective. I suspect they are quite disciplined and work quite hard, but are very rarely in the office after 5, nor working much more than 35-40 hours a week.

As to (bad) communication, $200k/yr is high but unremarkable for SV/NYC careerist professionals. The notion that someone with decades of rare, in-demand experience should somehow be capped at what a (lucky) 35 year old engineering manager makes is just silly.


There is a steep power law in bill rates. It is not twice as hard to get $400/hr as $200/hr, but orders of magnitude harder. And my observation - as someone who bills more than these numbers sometimes - is that the higher the rate, the shorter the gig.

You have to look at this probabilistically. Contract programmers taking home $800k/yr are very rare, but folks misusing "upper six figures" to mean $180k are common. I'm not a gambler, but I would happily place bets on the meaning of the OP. This is classic Bayes' Theorem.

The fact that everyone here is jumping into this thread to justify how it could be possible probably says a lot about the psychology of HNers.


If the systems they build and maintain generate magnitudes larger revenue for their overlord owners, then why not $800k salaries? Time for labor to actually be valued.


And the fact that everyone else is jumping in to categorically deny that it's possible says a lot about the psychology of other HNers.


Nobody is denying the possibility that the OP spoke correctly. The realm of possibility is broad.

We can, however, assume with a fairly high degree of confidence that he misspoke.


This isn't your average developer banging out Rails or Swift apps. This is extremely specialized knowledge of huge and arcane systems that are decades old. Entirely different ballpark in terms of knowledge and skillset.


Not to mention the value to the businesses running these systems. $800k doesn't even qualify as an expense to a lot of these companies relative to what they lose if the system goes down. They probably think they're getting a deal.


It's not unlikely at all for a highly specialized skill set. Speaking from experience.


I feel like the terms quarter- or half-million (or million itself) are what people would use to describe a salary once they've busted the "6-figure" descriptor, where the first figure is implicitly "1".

"half-million" is much more impressive sounding than "mid 6-figure" salary.


I wouldn't call it that, but if I had to for some reason I'd go with percentiles. Within the group of people who make 100-999k, 200k puts you at about the 80th percentile. I think 80th percentile is good enough to call "high".


While I agree, I heard a similar thing on the weather forecast (here in the UK, in degrees Celsius), when the weather presenter said we would be in "mid double figures" tomorrow, clearly meaning around 15 degrees, not 50!


They don't use "mid-teens" in the UK?


Yes they do, but I guess she just mis-spoke (for those in the UK, this is Carol Kirkwood on the BBC)...


Yeah, we do. "Mid double figures" might make sense in the context of use, but my first evaluation of that phrase isn't a value less than 20.


Sounds like it was clearly a mistake, but as you mentioned in a very specific context, with a specific range of likely values (temperature near sea level on Earth) I could see how it could make sense.


Sounds like you should start an "enterprise expert" consultancy that knows what it's doing. Seems like a good opportunity.


Go for it, if you like death marches that will just fail.

The real trick to software consultancy nirvana is to find the big whales paying big money for what are just kiddie apps.


Great idea. Or an expert network, as many are self-employed.


I worked for years on the modernization of a critical banking infrastructure system. The problems are manifold. The biggest problem is overall architectural impedance mismatch - switching from a batch system (most of those old COBOL mainframe apps are batch systems) to a service-oriented architecture. This makes incremental replacement extremely difficult. But a cutover? On a system that moves more money in a day than the GDP of most countries? A failure could put the entire banking world in a tailspin, cause an economic crash.

Next, there's a lot of business logic basically embedded in the COBOL, or even at lower layers. For example, lots of banking files are in EBCDIC, a different character set from ASCII. Except there are lots of different EBCDIC variants, and there's no good way to tell which one you're viewing just by looking at the file. So you have to reverse-engineer COBOL to figure out the "correct" meaning of a given file.

The problems go on and on. When I see people in the startup world rolling their eyes at the "incompetence" of the enterprise world, I take it to mean they've never actually worked on a truly hard problem in their lives.
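The EBCDIC ambiguity above is easy to demonstrate. A minimal sketch using Python's built-in cp037/cp500 codecs (two of many EBCDIC variants): letters and digits decode identically across variants, which is exactly why you can't tell them apart by inspection, while punctuation differs silently.

```python
# Letters/digits are the same in both EBCDIC variants...
data = b"\xC1\xC2\xC3"
assert data.decode("cp037") == data.decode("cp500") == "ABC"

# ...but punctuation code points differ between code pages.
brackets = b"\x4A\x5A"
print(brackets.decode("cp500"))  # "[]" on one variant
print(brackets.decode("cp037"))  # cent sign + "!" on another
```

So a file full of names and amounts can round-trip "correctly" through the wrong code page for years, until a bracket or special character shows up.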


> When I see people in the startup world rolling their eyes at the "incompetence" of the enterprise world, I take it to mean they've never actually worked on a truly hard problem in their lives.

I worked in a related field in the past. I don't claim that the problems are easy - where I tend to roll eyes rather is:

- Bureaucracy: A lot is required by law, I know. But there is a difference between "just following the required bureaucracy in a minimal necessary way if it stands in the path" (the startup way) vs. "taking it seriously in a way that makes the work harder than strictly necessary".

- Hierarchies: Just three words: I hate them.

- Unwillingness (?) to tackle these hard problems: The problems are hard, as you already outlined. But to me this implies that everybody in the company should move heaven and earth so that the people working on these hard problems are able to (e.g. by giving them all the necessary information they know of, such as requirement specifications). If even one thing (or office politics) prevents that, I don't just roll my eyes, I get furious. Given the hardness of the problems, this is not to be considered an obstacle, but targeted sabotage - and should be treated as such.


Part of this comes from an inability in the early years to truly respect engineering as both a craft and as a form.

I know your startup mindset well and I carry with it with me too. I came into a legacy fintech company the same way and pushed for faster decision making processes.

I didn't realize the cause for conservatism until I was given a story about how the company needed to manually call and refund thousands of customers... all because one developer fucked up and double charged people.

When you deal with MONEY and you experience getting burned like that... you realize how mantras like "move fast and break things" only work as convenient mottos for startups that have nothing to lose.


I'm starting to blog about some of the issues around these things. One I want to write about - especially because it triggers controversy when I say it - is the idea that it's more important to not be wrong than it is to be right. So no matter how obviously right a move is, fear that it just might be wrong hampers decisions.


Don't forget rampant offshoring in the early aughts.


Startups have yet to solve the problem of legacy code. At the very center of Internet companies too old to still be called startups is some ancient Perl or PHP written by the founders, back when they were around, and even wrote code. That might seem less archaic than COBOL or faxes but it's the same root problem. "If it ain't broke, don't fix it" didn't account for "well you see, it's not catastrophically broken but it's holding us back"; maintenance programming isn't sexy, like maintaining bridges and highways that have already been built, but just as necessary for continued operation.


Yes and no. Some of the more successful "startups" like Google and Facebook have slowly rewritten the bulk of their systems over time. In the case of Facebook in particular, there's very little if any of the early scary spaghetti PHP left.


Early in my career in the 90's I worked for BZW working on derivative hedging for financial swaps.

Things were pretty hectic and it was a pretty small team. We were using SQL Server at best and Excel spreadsheets as information feeds at worst, trying to calculate Black-Scholes on this stuff.

I clearly remember talking to the Traders and Quants regarding certain calculations we were doing to give them bond price goals for offsetting risk on the trades.

Honestly the traders at least didn't give a crap. I would present them tables; they would look at it, and would say "yeah that looks about right" - that's a direct quote from BZW's lead trader in 1994. I can't imagine things have changed that much.

I didn't stay long in that environment; it was pretty clear to me that despite getting people like Grady Booch coming in to clean up our act our "Customers" didn't really care too much about the mechanics of how things worked, or even worse whether the calculations were correct.

Top and bottom is, while the banking industry may employ "Cowboys" in the back-office for IT services, they also employ Cowboys in the front-office making the trades.

I doubt that has changed for the better that much. See 2008 financial crisis et al.

[Edit] As a side note, for the time I was earning more money than I knew what to do with. My boss at the time felt so ambivalent about his twice-yearly 50k (sterling) bonus that he threw it away at the Casino the day he got it (remember, this is 1994). I left and took a 75% pay cut to go work for Microsoft on projects that were, at least from a CS point of view, a lot more respectable. Having said that, I don't want to come off too harshly; we were at the cutting edge at the time and the technology was very cool and thoroughly enjoyable. But still..


Well, since I have also successfully managed similar such projects for financial institutions & telcos, and I live in startup land, by your last remark I feel moderately qualified to comment.

I'll challenge your view that folks in the startup world don't know enterprise. Maybe some visible fraction are the young and inexperienced hipsters as portrayed on HBO, sure, but most of those I know in CTO+ roles actually have a lot of enterprise under the belt. In my case in a B2B play it's practically mandatory, in order to understand the customer.

I believe enterprises suffer principally from the fear of change, or more bluntly, the fear of screwing up and being held accountable, which leads to the pathological technical debt issues you've described. So the problems I've always faced in enterprise projects are not primarily technological, but instead those of a) finding a full team of people competent enough and fearless enough to perform transplant surgery on the beating heart of a living body corporate and b) collecting sufficient clout to be allowed to perform the operation.

I reckon the best thing you can do, as a project leader in the enterprise world, is leave a legacy of constant and gradual change. Normalize frequent updates through CI/CD. Get business owners used to things like minor feature requests being included in daily deploys. No-one will thank you at the time, but a change in culture is almost certainly the most enduring value you can create.

So yeah, I'll happily roll my eyes at the "incompetence of the enterprise world", because I've dealt with the stupid head-on, and used techniques from startup land to inoculate it permanently.


Furthermore lots of decisions in enterprises look like the following:

An executive is approached by a vendor. The vendor entices the executive, shows them a good time, gives them a really good assurance their service is worth it.

An engineer hates this service because it sucks. Because the decision was made based on how cool it looks, not by technical needs.

Given that, it is hard for an executive to get approached by a vendor who says "okay, this will cost a crazy amount of time and money but we'll make your systems more modern" which sounds like "hey, I'm going to come in and offer to replace a system that has worked for 30 years with something modern and potentially risky. And it'll cost you a lot." The executive doesn't (usually) realize that the true cost is really high and goes up with time. And of course they don't want to lose their cushy job so hell no they won't take it. Also the executive isn't directly working with the engineers so he doesn't truly know if he can trust them.


This reflects a failure of modern corporate structures to empower engineering with the role it ought to have.

Thankfully, this is changing rapidly and the valley is leading the way.


   I take it to mean they've never actually worked on a truly hard problem in their lives.
I certainly have some empathy for this view. On the other hand, the right time to have addressed actually fixing some of these problems is 25 years ago. The second best time is today (to steal/abuse a phrase). Enterprise organizations sometimes punt these things down the road with half-assed solutions. It's cheaper today and tomorrow maybe it will be someone else's problem, right? All the while the overall issue becomes worse.

It sucks sometimes to be at the bottom of a deep hole you dug yourself into without a ladder, but at the end of the day, it's your hole.

And you are right that sometimes it's just a hard problem. But you can always make those worse.


Sometimes, 25 years ago, a solution to the problem wasn't available. I worked on state-of-the-art systems from 25 years ago. We were writing homegrown streaming data protocols over raw sockets, parsed with lex and yacc. We didn't have ssh, we didn't have http, we didn't have xml (much less json). A world-class system from 25 years ago would be scrap today... how many bright young junior programmers today could update a lex/yacc parse stream, or handle socket programming or DOS HIMEM?

Improvement needs to be continuous. The ability to update individual parts of the system with minimal coupling is vital. But even keeping that as the system evolves is a challenge - and designing for it in advance leads to all sorts of unnecessary "just in case" abstractions in the code.

Keeping code alive and running for a generation is a whole different kind of challenge.
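To make the "homegrown streaming data protocols over raw sockets" concrete, here's a minimal sketch of the kind of framing those systems did by hand (the 4-byte big-endian length prefix is invented for illustration, not the actual protocol described above):

```python
import socket
import struct

def recv_exact(sock, n):
    # TCP hands you a byte stream, not messages: loop until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock):
    # Hypothetical framing: 4-byte big-endian length prefix, then payload.
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo over a local socket pair.
a, b = socket.socketpair()
a.sendall(struct.pack(">I", 5) + b"hello")
msg = recv_message(b)
a.close(); b.close()
```

Every one of those details (partial reads, peer-closed-mid-message, byte order) had to be gotten right by hand, per protocol, per system.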


> how many bright young junior programmers today could update a lex/yacc parse stream, or handle socket programming or DOS HIMEM?

Almost all of them – if they're really "bright" anyways. Even given that a lot of the important context is missing, bright programmers can do this stuff.


How many programmers could write functional machine code?

The wrong question is being asked. Modern programming techniques work because they have successfully abstracted this complexity.


That's disingenuous. Anyone can do anything given enough time to learn the skills. What OP is asking is how many of today's programmers have the skills already. The answer is very few.


Indeed. If someone suggested writing a custom stream parser for something as simple as scanned images today, I'd point them right back at the wide array of off-the-shelf, standardized solutions.

Sure, a good programmer can learn this stuff. But they shouldn't have to, not these days. There's far more to programming than any one person could ever learn. Choose your battles.


I agree on the continuous improvement - I wasn't suggesting you do this once and stop thinking about it.

But note, 25 years ago we didn't have the same solutions we might have today, but we had good solutions to lots of common problems. We certainly had solutions to "system is specified by a mixed bag of ASCII and inconsistent EBCDIC files, none alike, all specified 15 years ago", which is at the heart of the problem OP posited. 25 years ago people were saying exactly the same thing about the COBOL banking systems that beat describes. Exactly. The batch processing OP discusses had already been out of vogue for a decade at least. We had good solutions for nearly all of these problems; what we didn't have was quick, cheap solutions.

Just for completeness: we had HTML. We'd had SGML for a decade (which begat HTML and later XML). We had reasonable streaming protocols. We were a lot worse at connecting heterogeneous systems controlled by different entities and making them interoperate, but we were good at networking and building distributed systems at a smaller scale.

Keeping code alive and running for a generation is a very difficult problem, but keeping systems tidy, modular, and evolving is manageable, until you let them go too much.

And people are digging the same holes today. It's not technology that is the cause, it never was. It's cost and short term planning.


As my Director says: "Yes, we're solving problems and making code better, but never forget, this code has run the business for 10 years, so give it credit that it did do its job."


At a certain point, continuing to try to move elephants around gets you nowhere.

Death is a necessary component of change. In fact, renewal could not come without death.

Existing legacy systems bring with them assumptions about how things ought to work, and debt about expectations -- expectations that slow down your ability to change away from existing paradigms.

True innovation requires this breakaway.

So honestly, IMO the best move for a bank that is facing this kind of software nightmare is to maintain existing legacy support for the old system, but do a complete breakaway (NOT REWRITE) that is explicitly NOT dependent on the old contracts of functionality that the old system would have imposed. Make the rules change, acknowledge the new system will break with the existing one, and plan for a data migration wherever possible.

Accepting defeat and moving on is a saner path. Migrating the data will become possible once it's realized that ultimately data is easier to change over than behaviour.

I say this too as someone who is very against rewrites generally. It's a fallacy to believe that old systems can accommodate new.


Looking back on that giant rewrite project, that's how I'd have done it... I'd have built the new system in parallel with the old one, not sharing the data store. The new system would have significant advantages over the old system (ie near-realtime transactions rather than waiting for overnight batch jobs). Get it running, and encourage early-adopter customers to switch over. That will stress-test it and allow it to scale. After a few years, with lots of warning, retire the old system.

That gets away from the "Flip a switch on billions of dollars of transactions a day" terror.


$100 an hour, so $208k a year (assuming no time off), for a guy with 30+ years working on a critical system in an industry where floor traders can make 7-8 figures, and huge bonuses are paid to "analysts". I have plumber buddies that make more than that per hour (and with a crew can make more a year).

That's why a couple years ago when I had access to a zseries, and was hearing about how "desperate" the banks were I took a look. What I discovered are salaries that are some of the lowest in the IT industry and a cynical attitude towards hiring. AKA, banks would rather pay $40k a year for a "system operator" which is generally the ground floor for learning anything about a mainframe than hire someone with a comp-sci degree and teaching them the system while promoting them.

So, no thanks (you can fill in a less polite version), the banks can go to hell.


You hit the proverbial nail on the head here. I quit college to start an IT support company (back at the degree currently, data science is cool!) and later contracted for myself for a while, so I have seen the inside of all kinds of companies, from fortune 500 oil to 2 man law firms.

The number one issues I have seen in the IT industry are lack of incentive (primarily in the form of salary), lack of respect, and lack of a C-level advocate working on IT's behalf in the boardroom. I know sysadmins who single-handedly supported entire 200-250 person, 7-branch companies, but got $40k, never had a budget, and got refused for hires and for simple stuff like "hey, the cabling here is from the 1970s, we need a contractor to come in and recable the offices", being told "no, you do it, at the same time as you support the whole company, and no - you can't buy the TIA spec book."

It's no wonder companies are hemorrhaging good IT talent left and right.

If I could offer a single piece of advice to a company, it would be to create CTO and CIO positions if they don't exist, and get good ones who advocate on behalf of the IT department. I see most of these issues as management failures first, not technical ones, so don't come crying about COBOL to me.


Yeah, that was odd; back in 2009 I was already making $135k as a senior PHP developer/architect, so I'm not sure why six figures is such a big deal. This sort of work needs to be six figures where the first digit is at least 2 if not 3. If we're talking about an agency, it needs to bill $500 an hour and hand over half of that.


You always have to take into account location. $100k/year in NY and Arkansas are completely different beasts.

I've been online for many many years and I STILL do not understand why people miss this simple idea.


I see a lot of that, too. But I also see a different error, of treating cost of living as multiplicative.

Once you are making noticeably more than you are spending, earning 2x and spending 2x means you're keeping 2x.
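With made-up numbers, the point is that savings are a difference, not a ratio, so scaling income and cost of living together scales what you keep:

```python
# Hypothetical figures: doubling both income and spending doubles savings.
low_col  = {"income": 120_000, "spending": 60_000}
high_col = {"income": 240_000, "spending": 120_000}

def kept(budget):
    return budget["income"] - budget["spending"]

assert kept(low_col) == 60_000
assert kept(high_col) == 120_000 == 2 * kept(low_col)
```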


Not sure if the $100/hour figure quoted is a hard and fast cash amount banks strictly adhere to, but ultimately this is business, so: set your price and walk away if it's not met.

I set my price: AUD$850/day, with a TINY bit of wriggle room, and that's final. I know my worth (from a skills perspective and market/economical one.)


You could probably charge higher - this is at the mid range of many client side programmer daily rates in AU and systems skills like this are so much more critical.


Maybe. I'm a DevOp, not a coder. I could charge more.


That strategy works if you have enough demand for your skills, and you don't price yourself out.


If demand for your skills is waning and your daily rate is dropping, it's time to re-skill.


Inc. Super/GST?


No. I don't receive that, the government does. That goes on top.

Edit: misread. GST is NOT included in the price; Super is my problem and comes out of the agreed daily rate.


I found that to be a pretty low number too (for a consultant). It seems like given the value he's bringing, he should be able justify far higher prices.

My own consulting rate is higher than that, and there's way more people with my skill set. One of the big differences is I work with companies that value code as an asset. It sounds like he's working with Banks which view code as a cost of doing business.


I know a guy that bills at least 5000 eur/day (this is before taxes, so personally he doesn't see that much) for fixing Cobol stuff for financial institutions.

And depending on the urgency, his rate goes up. He's retired, works at most 2 or 3 months a year, and lives a pretty normal life in a very small house. His neighbors were pretty stunned when he bought himself a Tesla; his previous car was a Nissan Note...

He told me he was usually called in for projects where they needed specific expertise when writing Java to interface with existing COBOL, or rewriting COBOL parts in Java and adapting other COBOL code to inter-operate with that. Apparently most COBOL stuff now runs in the JVM - which taught me: in 30 years, Java will be the new COBOL.


Some people make over $100 an hour for this work, and some only do this part time. If you're a retired programmer who just spends part of your time doing this, it's not a bad deal.

I'll also throw in the counterpoint that you get taxed heavily on these earnings and you (usually) don't get benefits.


Do you really think they quoted the high end of what they make to the reporter for this story? I'm sure that's the very low end of what people make (think "junior" devs in this particular niche). I'd bet the higher end people easily make 2-4x that amount.


The problem really isn't COBOL. COBOL is not a difficult language. It reads almost like English sentences. Any competent programmer would not be challenged by COBOL. The problem is all the glue: JCL, SNA, CICS, IMS, ISPF, etc.

It's like the difference between Java (the language) and J2EE, except in the context of 1970s computer technology. It's not intuitive and not something you can really work through without a lot of training and experience.


At my last employer, there were a few "mainframers" (we called them). I was curious so I would ask my mainframer coworker how things worked on his end.

I always struggled to understand, because it seemed everything was different ... the terminology, the culture, the ideas. I couldn't use analogy to tie what he was describing back to what I knew.


You got that right. I had to deal with an interesting banking file format and I asked a question on Stack Overflow and got this gem of an answer.

I politely accepted it as the right answer but boy oh boy does it feel like it's from an alien civilization.

http://stackoverflow.com/questions/28640159/what-is-the-diff...


Am I missing something? That doesn't seem all that odd, but perhaps that's because I'm familiar with IP, TCP and UDP and I've written something to read PNG headers before. Interchange file formats often include variable-length blocks, with a part of the header defining the block length. That first part of the record is just a very small header per item, which defines the type of item and the size. Look into the deflate algorithm, or the tar file format, or any number of on-disk storage formats and you'll find they all (well, definitely most) do this, because it's the most efficient way.

Fixed-length records are less common for interchange formats, but you likely use them every day anyway. That's what databases often use, and the reason is that it makes it very efficient to index into the structure and get whole records (know your data is sorted? Binary search is possible and easy then). Sometimes it's just the indices that are stored this way (they essentially have to be), but you can get fairly efficient table access out of some engines without indexing everything, if all the records are fixed and the engine can determine that.

If you generally don't program in a low-level language, this is generally abstracted away by some library that is written in one. People don't usually write PNG and JPEG libraries in pure Python or Ruby (or at least, they don't expect them to be used much in production); they write a shim that wraps libpng or libjpeg.
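PNG is a nice illustration of the length-prefixed, self-describing layout being discussed: an 8-byte signature, then chunks of (length, type, payload, CRC). A sketch of walking the chunks (a real reader must also validate the CRCs; this one skips them):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunks(data):
    # Each chunk: 4-byte big-endian length, 4-byte type, payload, 4-byte CRC.
    assert data[:8] == PNG_SIG, "not a PNG"
    off = 8
    while off < len(data):
        length, ctype = struct.unpack_from(">I4s", data, off)
        yield ctype.decode("ascii"), data[off + 8 : off + 8 + length]
        off += 12 + length

def chunk(ctype, payload):
    # Helper to build a well-formed chunk for the demo below.
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# A structurally valid (if meaningless) PNG skeleton.
demo = PNG_SIG + chunk(b"IHDR", b"\x00" * 13) + chunk(b"IEND", b"")
types = [t for t, _ in png_chunks(demo)]
```

Same pattern, different byte offsets, as the mainframe record formats in question.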


>Am I missing something?

The interesting part is that the fixed and variable record formats are first class things on the mainframe.

So, something like DB2 on a mainframe can use system supplied functionality (VSAM) as their storage engine. As opposed to unix, where higher level databases like MySql, CockroachDB, etc either roll their own (InnoDB) or use some 3rd party offering like RocksDB, LevelDB, etc.

VSAM isn't just one thing either...it supports k/v indexing, or indexing via relative byte address, or indexing via record number, etc.

So, basically, when you talk with mainframe people about interchanging data, they don't tend to consider that you might actually have to write some code to parse what they are sending you. They tend to assume you already have utilities that understand these things. It's not an interchange format... it's the native format for them.

I suppose the answer seemed novel because it's speaking with very specific "official sounding" terminology about something that's usually ad-hoc negotiated by project in the unix world.
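For the curious, the "variable record format" terminology maps onto a simple on-disk shape: each record starts with a 4-byte record descriptor word (RDW) whose first halfword is a big-endian length that includes the RDW itself. A sketch of splitting such records off the mainframe, in Python:

```python
import struct

def split_rdw_records(buf):
    # Variable-format record: 4-byte RDW, where the first two bytes are a
    # big-endian length including the RDW; the last two are reserved
    # (zero for simple, unspanned records).
    records, off = [], 0
    while off < len(buf):
        (length,) = struct.unpack_from(">H", buf, off)
        records.append(buf[off + 4 : off + length])
        off += length
    return records

# Two records: "HELLO" (length 9 incl. RDW) and "HI" (length 6 incl. RDW).
blob = b"\x00\x09\x00\x00HELLO" + b"\x00\x06\x00\x00HI"
recs = split_rdw_records(blob)
```

On the mainframe side none of this code exists, because the access method does it for you, which is exactly the cultural gap described above.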


As to the interchange of data, unless you run into a bunch of lazy Mainframers (they do exist) there's a lot they can do at almost zero "cost" to make it easy for you. It is not often there is a genuine case that anyone outside the Mainframe needs to know about internal Mainframe formats. I rail against "how do I translate packed-decimal fields in language-x" questions. There's no need for them to be seen. Same with LRECL and RECFM. Text-only, explicit signs, explicit decimal points (or scaling factors), then you can have delimited records and no worries. You can even have XML or JSON if happier with that.
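For reference, the packed-decimal conversion being railed against looks like this: COBOL COMP-3 stores two BCD digits per byte, with the final nibble carrying the sign. A minimal decoder sketch (ignoring invalid-nibble handling):

```python
def unpack_comp3(data, scale=0):
    # Two BCD digits per byte; the final nibble is the sign
    # (0xD/0xB = negative; 0xC/0xF = positive/unsigned).
    nibbles = "".join(f"{b:02X}" for b in data)
    sign = -1 if nibbles[-1] in ("D", "B") else 1
    value = sign * int(nibbles[:-1], 10)
    return value / 10 ** scale if scale else value

assert unpack_comp3(b"\x12\x3C") == 123
assert unpack_comp3(b"\x12\x3D") == -123
assert unpack_comp3(b"\x01\x23\x4C", scale=2) == 12.34
```

Note the implied decimal point (`scale`): it isn't in the data at all, only in the copybook, which is the parent commenter's point about sending explicit decimal points instead.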


So I guess it's fair to say the difference is that the mainframes have a standardized format for record creation and consumption in the OS, sort of like DB2 being included in the kernel? That is nice, and it does explain why it might be confusing, even if I think it doesn't necessarily get you much over using a third party library (unless there are other benefits I'm not considering).


It's less of a difference now, and somewhat unrelated to the specific topic, but...

A big historical difference with mainframes and data was the architecture around I/O. They always had separate processors to offload I/O, and I/O was always asynchronous. And things like VSAM were highly tuned to take advantage of that.

That's why mainframes continued to outpace Linux/X86 for some types of workloads...even after X86 performance far outpaced the main processors in a mainframe.

I believe that advantage is completely gone now, but mostly via brute force vs elegance. Commodity hardware is just so fast now.


Correct about the I/O. You can let the space-bar auto-repeat 1919 times, for instance (nearest equivalent to circling the mouse) and the CPU cost is... zero. When, exactly, do you think that the X86 surpassed the Mainframe processors, and in what particular way? Current generation (expect a new one this year) is 5Ghz (actually slower than the previous) and has lots of stuff. A fully-loaded box has a theoretical throughput of 30bn (yes, billion) RESTful transactions per day. And if that isn't enough power, you can hang another 31 boxes onto it and treat them as one.


"When, exactly, do you think that the X86 surpassed the Mainframe processors, and in what particular way?"

Fairly recently. Through things like affordable ssd, enough Moore's law around intel, and better distributed data stores. And better app side knowledge on how to break up a monolith.

I was around for a few failed "rewrite this TPF system" attempts and I saw what broke.

Commodity stuff can replace it now...but only very recently.

Or if you just meant x86-64 vs any other CPU, for the CPU alone? That debate is just done. They poured enough money into that mess that they won, assuming you don't care about power consumption.


Moore's runs into the laws of physics: https://www.forbes.com/sites/gregsatell/2016/02/24/how-ibm-p...

Mainframe DASD is the same as "X86" disks, at least for those using "storage arrays".

Pretty much all the smaller Mainframes are gone, many years ago. I've not heard of any successful replacement of a loaded system which used fewer than three times the initial projection of "X86-power".

Anyway, time will tell. In 10 years' time you'll still think X86 is faster and there'll still be Mainframes.

As to your last line, who is "they"? I'm just interested. Thanks.


They is Intel directly, but collectively everyone buying Intel products. And a bit of AMD.


Sure, I wasn't trying to indicate there was no reason or benefit to mainframes, just to summarize the situation to make sure I understood it correctly. It does make sense to have an integrated library for advanced file access if you have dedicated I/O hardware. That prevents a lot of misconfiguration by libraries that might try, unsuccessfully, to use that system, if they even support it at all (e.g. OpenSSL and crypto hardware such as the dedicated AES hardware in the Via mini-ITX platforms of yesteryear).


If you want DB2 in the operating system, you need an "IBM midrange" or iSeries. Much shorter code-path, much faster. If you want the fastest record access, the operating system is z/TPF for the IBM Mainframe. Not just fixed-length records, but fixed-length records of one size. Effectively, there are no "third-party libraries" (until you get to IBM's Java, or any language someone has ported (Lua is a popular example)).


I just lost a chunk of time checking out the user who supplied the answer:

http://stackoverflow.com/users/1927206/bill-woodger

This one was great:

http://stackoverflow.com/questions/15008999/can-anybody-tell...


Thanks for looking, I hope you didn't feel it lost in a bad way :-)


It was most definitely "lost" in a great way.

A better explanation would have been "accidentally invested".


The IBM EXEC 2 manual (second edition 1982) is an interesting scan/read too.

http://bitsavers.trailing-edge.com/pdf/ibm/370/VM_SP/Release...


Although I was never a mainframe programmer per se, I did quite a bit of interfacing between mini/microcomputers and IBM mainframes, so I got to see under the hood a little. (If I write something stupid, it's either memory issues or ignorance).

I recall seeing how files were allocated on disk (remember that mainframes have many different OSes, like OS/390, and even OSes on top of OSes like VM/CMS, and I don't remember what this was running on).

In this particular case, a file was preallocated in JCL to use N extents starting on a specific cylinder. Fixed size. None of this fancy ext3 or NTFS ;)

JCL (Job Control Language) was a language to control batch jobs, and many have called it the worst language ever designed, although not as bad as brainfk.

On the other hand, I had a chance to interface C++ with CICS (a transaction processing subsystem) using WebSphere MQ, and I must say, I was really impressed with its sophistication. It was a kind of SOA long before the term was invented.

A lot of what I saw in the mainframe world predated things - by decades - that some may think are new(er) concepts, such as clusters (sysplex), front-end processors, hypervisors, HA, and so on.

Those of us who had to fiddle with implied file formats with fixed-length fields and records won't find this stuff quite as alien, but equally as painful to deal with. I recall using some sort of ETL program to get around this. On the plus side, these primitive formats certainly were efficient in terms of processing speed, and a great match for COBOL.

Speaking of COBOL, as part of this project, I had to write a parser in C++ to parse COBOL copybooks (kind of a COBOL data structure definition) and generate C code to read the data.
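For anyone curious what such a copybook-driven reader boils down to: fixed-width slicing plus code-page decoding. A minimal Python sketch — the copybook layout here is hypothetical, and cp037 EBCDIC encoding is assumed:

```python
import struct

# Hypothetical copybook:
#   01 CUSTOMER-REC.
#      05 CUST-ID    PIC 9(6).
#      05 CUST-NAME  PIC X(20).
#      05 BALANCE    PIC 9(7)V99.   (V = implied decimal point)
RECORD = struct.Struct("6s20s9s")  # 35-byte fixed-length record

def read_record(raw: bytes) -> dict:
    cust_id, name, balance = RECORD.unpack(raw)
    return {
        "id": int(cust_id.decode("cp037")),
        "name": name.decode("cp037").rstrip(),
        # PIC 9(7)V99: two digits after the implied decimal point
        "balance": int(balance.decode("cp037")) / 100,
    }

raw = ("000042" + "John Smith".ljust(20) + "000123456").encode("cp037")
print(read_record(raw))
# -> {'id': 42, 'name': 'John Smith', 'balance': 1234.56}
```

Real copybooks add packed decimal (COMP-3), signed overpunch characters and the like, which is exactly why generating the parser beats hand-writing it.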

It is a very different world, but I don't think it's all bad. After all, the technology has been working very well for a long time. Kudos to the COBOL Cowboys. I hope they charge a lot more than $100/hr!


> In this particular case, a file was preallocated in JCL to use N extents starting on a specific cylinder. Fixed size.

Sounds like a z/VSE system (formerly known as VSE/ESA, VSE/SP, DOS/VSE, DOS/VS, DOS/360). In DOS JCL (which is a different syntax to z/OS / OS/390 / MVS / OS/VS2 / OS/360 JCL), you manually allocate files to disk locations using the EXTENT statement. By contrast, in z/OS the operating system decides where on disk to locate your file (or dataset, to use mainframe terminology). (You don't have to manually allocate files any more in z/VSE – you can use VSAM, or store your files in libraries, and in both cases the OS decides on disk locations for you – but, originally, neither VSAM nor libraries existed, so you had to manually assign locations to all the files on disk.) It is very primitive, but remember it was designed in the 1960s to run on machines with only 16KB of memory–plus, humans could design a disk layout to maximise performance, by placing frequently used files on faster areas of the disk. Nowadays, the OS can do a better job of locating files on disks than humans can do, but this capability is kept for backward compatibility.


Thanks for this! I had a number of interactions over the years with the S/3x0 world, and I wasn't always sure what was under the hood. I was aware that there was a bewildering slew of xxAM access methods, but had no chance to look into them.


Fixed-length fields and records weren't all that long ago. Remember ISAM in QuickBASIC and VB for DOS?


JCL is not a language. It is no more than "the language, in the sense of words and symbols, that you use to define resources to a Job Step".


Unix has a few tricks up its sleeve from when it wasn't the top dog. I hesitate to ever recommend perl, but pack [1] and unpack are pretty sweet for this kind of stuff.

I only learned about them when a state sent me files in EBCDIC [2]. As with all things perl, you can convert from that to ASCII as a one-liner. Or, rather, I helped someone much smarter than me do that, 20 years ago.

[1]http://perldoc.perl.org/functions/pack.html

[2]https://en.wikipedia.org/wiki/EBCDIC
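For comparison, the same conversion is a near one-liner in Python as well, since the standard codecs include EBCDIC code pages (cp037 is assumed here; real feeds vary by code page):

```python
# "Hello" in EBCDIC code page 037 (a common US variant -- an assumption;
# the files in question could have used a different code page).
ebcdic = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])

text = ebcdic.decode("cp037")       # EBCDIC bytes -> str
ascii_bytes = text.encode("ascii")  # str -> ASCII bytes

print(text)  # -> Hello
```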


Please don't remind me. I had to convert the expat XML parser to compile on z/OS and work in EBCDIC, and found that round tripping between ASCII and EBCDIC was sometimes impossible because of the existence of not two, but THREE line terminator characters: CR, LF, and NL (0x85).

Not to mention that you cannot test for uppercase or lowercase like the ASCII `ch >= 'A' && ch <= 'Z'` because they are not contiguous in EBCDIC. A good reason to use the C RTL.
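The gaps are easy to see with Python's cp037 codec (one common EBCDIC code page, used here only for illustration):

```python
cp = lambda c: c.encode("cp037")[0]  # character -> EBCDIC byte value

# The EBCDIC alphabet comes in three runs: A-I, J-R, S-Z.
print(hex(cp("I")), hex(cp("J")))  # -> 0xc9 0xd1
# Bytes 0xCA-0xD0 sit inside a naive A..Z range test yet are not
# letters, so `ch >= 'A' && ch <= 'Z'` over-matches in EBCDIC.
```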


This is how you can implement isUpper() on EBCDIC: https://github.com/Perl/perl5/blob/v5.24.0/handy.h#L1153


IIRC, you should be able to do it easier than that: EBCDIC is not contiguous because upper-case versus lower-case is a (shift) bit-flag.


> Not to mention that you cannot test for uppercase or lowercase like the ASCII `ch >= 'A' && ch <= 'Z'` because they are not contiguous in EBCDIC. A good reason to use the C RTL.

Watch your sorting methods, too. I had a guy over here once totally confused about why running his SAS job on the mainframe yielded a different result than the same code running on PC SAS against the same data.
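The collation difference is easy to reproduce: sorting the same keys byte-wise under ASCII and under an EBCDIC code page (cp037 assumed) gives opposite orders:

```python
keys = ["a", "A", "1"]

# ASCII collates digits < uppercase < lowercase
print(sorted(keys, key=lambda s: s.encode("ascii")))  # ['1', 'A', 'a']

# EBCDIC (cp037) collates lowercase < uppercase < digits
print(sorted(keys, key=lambda s: s.encode("cp037")))  # ['a', 'A', '1']
```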


I did the same to enable XML messages to flow over MQ between an RS/6000-based front office FX options system and a back office S/390 system. IIRC there were six (!!) different EBCDIC code pages that could be in play. I had a code generator that would crank out C or Java bindings that could marshal between the expat results and the COBOL data structure. 20 years ago now!


Not sure if you remember but sendmail used to require a certain amount of m4 knowledge and hackery. Emboldened by that and reading the dragon book I was very impressed with myself when I wrote a COBOL parser in a mix of C, lex and YACC that automatically generated the needed 'C' structs and Sybase database layout to load data fed from a System/36. I made the data supplier put his code in the first part of the magtape, read and parsed that and then read the rest of the tape.

These days I consider it more of a "what was I thinking" facepalm-worthy sort of thing but at the time I was very proud of it. The "what was I thinking" part is more about the fact that some poor bastard had to come along after me and support that mess.


Likewise: I regard code generation as a red flag these days. The version skew issues when code generated off slightly different versions of messages are in play can be really nasty. CORBA suffered from that issue big time in the late 90s. And if your generated code uses mutexes in a misguided attempt to be "thread safe", all bets are off...


Oh yeah, I remember the code pages. Joy. I didn't want to bloat my comment, so thanks for mentioning this!


> I hesitate to ever recommend perl ... Or, rather, I helped someone much smarter than me do that, 20 years ago.

No need to hesitate. It's not half bad for a dynamic language if you keep a little discipline. 20 years ago the average Perl programmer was probably akin to the average PHP programmer from 10 years ago. That is, not very experienced, and with code that made that fairly obvious, even if it got the job done. With some of the more modern modules, you get something pretty swizzy[1]. :)

1: https://news.ycombinator.com/item?id=11633961


No kidding. Over 20 years ago, I used unpack to read minicomputer log files. It was vastly faster to FTP them over to a Sun Sparcstation and process them with Perl than to use the native log reader. It was also far more flexible.


I read expecting to see something really alien. But it doesn't strike me as alien so much ... just low-level. You have to know the format of the bytes that were written to disk, which is pretty rare these days outside of systems programming (and mainframes I guess).


You don't have to know, and many working on Mainframes don't. No-one outside the Mainframe should need to know, unless you get the stupid "oh, but they refuse to change the program" for a data transfer (read: "we signed off on it before we knew what we were doing"). There are 256 bit patterns to a byte. It is not rocket surgery.


Wow. That answer took some time and a whole lot of knowledge to write up, so definitely deserved the "accepted" - but if that's just the surface of things I understand why it's like a different world.


It's not weird if you think of computers that handle data byte by byte

Think of building a DB on a C64 and that will probably be more like it

But of course if you still have to worry how your data will be serialized to disk in 2017 that's your problem right there


Byte by byte? Where does that come into it? I don't think there's much Production work on a Commodore 64 these days. You don't have to "worry" about anything. I/O is asynchronous, but your High-level Language doesn't know about that, it will appear synchronous. Meanwhile tons of other stuff is going on, and I can't really think of any of it that would be "byte by byte".


I frequently say that IBM sells WebSphere to ensure that they continue to sell mainframes. Because everyone knows if you're going to replace your mainframe, you need to rewrite it in Java, and what better framework than one provided by IBM! Then when the whole project ends up taking 100x the hardware, IBM can pretend there is some secret sauce in those mainframes.

But, to the linked answer. While the details differ a bit, it is simply a low-level description of how the machine worked (past tense because modern zSeries mainframes have a lot of hidden "virtualization" in order to leverage industry standards). That is why "mainframes" outperform racks of x86 PCs. There really isn't anything magical about the hardware. The real magic is the fact that the software is written by guys who grew up understanding how to process transactions in a couple K bytes of memory, and the machines grew as the transaction load did. The result is code which understands the hardware and is crazy optimized in the critical paths. The fact that frequently the critical paths all fit in a fraction of a modern L1 cache doesn't hurt either.

More specifically to your link. While the details vary a bit, modern PC hardware doesn't conceptually differ that much from mainframes. You could just as well ask the same question of a modern PC... Sure you can open a file and treat it as a stream of records, but unless you make sure to size your records on a multiple of 4k (was 512 until recently, although RAID controllers complicate things, as does flash) you will have read/modify/write cycles rather than simple write cycles. Plus, depending on your access method, the kernel may get involved and bounce buffer everything rather than DMA'ing directly to/from the page storing the data. Yah sure, the track/sector meta data on a modern hard-drive varies a bit from the answer given, but you might find a modern SSD that compresses its read/write operations doing things far closer to what was described. So, given a machine with a couple K of memory. You need something that today we would consider the front-end (html/javascript), back-end business logic, database, OS, drivers and disk firmware all in a single piece of code. What would it look like? Yah, all those layers would collapse, and your database records would look a lot like the disk sectors...

For something even closer, you could consider the options to tar, and modern tape drives which continue to actually support the concept of fixed vs variable block reads/writes and blocking factors, and with recent encryption standards even allow what is effectively per block metadata.

My point is that while a lot of the terminology and things exposed on a daily basis with a mainframe seem strange to someone at first glance, a modern server has just as much (if not far more) strange behaviors buried it in. The difference frequently are the layers of standardized interfaces, protocols, and software stacks layered below what most people consider their "software stack".
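The record-sizing point above is plain block alignment; a sketch of the padding arithmetic (a 4096-byte block assumed, per the modern 4K sectors mentioned):

```python
BLOCK = 4096  # modern physical sector / page size (assumption)

def padded_size(record_len: int) -> int:
    """Round a record length up to the next block boundary, so writes
    land on whole blocks and avoid read-modify-write cycles."""
    return -(-record_len // BLOCK) * BLOCK  # ceiling division

print(padded_size(100), padded_size(4096), padded_size(5000))
# -> 4096 4096 8192
```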


And this is the comment that hit the nail on the head

Of course Java is going to be slow when the "Enterprise Architect" and his minions push for hundreds of classes that barely do one thing right, with inheritance several levels deep, while the "mainframe" people are shuffling data using something that's simpler than a CSV

Also, the virtualization magic is good enough so that the COBOL people keep playing with their 70's technology while ignoring modern-world problems


What "virtualization magic" do you mean?

You mean taking a program which hasn't been recompiled since 1970 and running it (the 1970 executable) on hardware released in 2014 under an OS from 2015? Of course, it will work. There's no virtualization there, just reality. What 70's technology in particular are you thinking of? The latest Enterprise COBOL compiler is just over a year old.


Well, I said it originally, and I was referring to the fact that a lot of the zSeries hardware is actually just software emulation running somewhere that makes a piece of regular hardware look like a zSeries peripheral. Take the disk subsystem for example: the disks are just boring old SCSI disks fronted by a Linux/PPC controller which makes them look like FICON CKD/DASD disks. Sure, sometimes IBM has a little "secret sauce" in place to ease that transition, but it's not at the level of actually writing native 3390 tracks on modern multi-TB disk drives.


That's the right answer, BTW. I'm not that old, but I've written a ton of SAS code against Mainframe assets. Lots of fun.


Maybe re-describing that stuff with ASN.1 terminology would help make it more googlable?


Makes sense, as there's little in common for many base concepts.

You deal mostly in stream files; they probably deal mostly in record-oriented files, where their "kernel" understands that they're record-oriented.

Your terminal emulator sends every keystroke to the host. Theirs doesn't send anything until they specifically make it do so.

You have ASCII or utf8, they have EBCDIC.

Some of it is crossing over though. They used to talk about virtual machines before they were common for us. The concept was confusing at the time.


Indeed - and although COBOL is the topic here, the issues are pretty similar with J2EE. There are a lot of legacy J2EE applications in banking and finance. Let alone the skills and inside knowledge you need to know the various containers and modern(ish) glue (WebSphere is the prime example).

Underlying this is that banks used to see IT/Technology as competitive advantage. A lot of tech came directly out of the banks. IT became a cost-center in the 90s and it was never the same.

The irony is that this tech stack was pretty good and fit-for-purpose for a lot of banking workloads. Plus it kept working... so that just lessened the investment required.


Yeah even though COBOL is called out by name here, the problem these guys are getting paid to solve seems like as much of a "domain knowledge" or "business rules" problem as a language one.


Also just half a century of accumulated practice. Before Y2K I used to hear so many stories of people changing or removing things and then finding out that e.g. some manager in a different division depended on a report which needed these two “unused” variables.


> One COBOL programmer, now in his 60s, said his bank laid him off in mid-2012 as it turned to younger, less expensive employees trained in new languages. [...] "The call back to the bank was something of a personal vindication for me," he said.

I find this being an oddly satisfying part of the story


First employee/architect of Kraken here. (It now does over USD $10M/day of volume, by my estimate.)

Accounting systems of all kinds should be generic and open source. Why do we need banks at all? It's not like this infrastructure really has any special characteristics. Most of it even has downtime and batch jobs. Fact: The bulk of banking is just recording some numbers at a stupendously simplistic resolution, with maybe 64 characters of description and a date.

A few weeks ago I spent a day interviewing 20 different Hong Kong banks about API availability for cross-border RMB transactions. None at all offered it.

We're reaching a point where the financial systems of a mid-level company exceed those of the banks they are forced to utilize.

Modern requirements include things like: 24x7x365 availability, multilingual, arbitrary asset type support (energy, carbon credits, cryptocurrencies, space, time, etc.), multi-asset type accounts, new settlement networks, real time reporting and AML/KYC, all features API-available, new and established customer interaction through non-snailmail/physical means, customers routinely in different countries, multi-user accounts with disparate access levels (eg. accountant/auditor/spouse/kid), multiple legal jurisdictions with clashing regulatory frameworks, settled-means-settled, regulator-forced free market integration for non-core (ie. account-related) financial services such as loans/forex, redundant service provider availability for every function, meaningful SLAs/reputation for service providers, routing and/or provider selection based upon nontraditional metrics such as ethical investment rationales, etc. The same set of requirements goes up and down the supply-chain: people want to reason with their suppliers and customers about stock, settlement status, payment and contracts, they sometimes need backups in case of failure down the chain, and they care about reputation.

Frankly the whole area is such a mess I am expecting an open source core accounting project to take over the sector. Probably it will begin in smaller/developing world banks and move toward the big guys like a meteor.

For some evolving thoughts on the area (from 2012, but literally picked up again in the last 2 days) see http://www.ifex-project.org/our-proposals/ifex


A few weeks ago I spent a day interviewing 20 different Hong Kong banks about API availability for cross-border RMB transactions. None at all offered it.

They don't do it because doing it while complying with all relevant regulations is more complicated than you realise. That's what banks are ultimately in the business of, finding ways to do business across jurisdictions. And it's why most fintech companies fail: their beautiful code runs headlong into the messy, illogical, unpredictable real world of regulatory compliance and guess what, the regulator always wins.


Strongly agree based on my work in the financial sector since 2009. Everything's driven by regulatory compliance now. To the point where RegTech is a thing.


I seriously doubt that, given a signup sheet of legalese, in most jurisdictions any service provided through an appropriately authenticated API is different in legal standing from the same service provided online through internet banking and a slow, manual, error-prone process.


Hey Contingencies, are you talking onshore/offshore China RMB? If so you should know offshore RMB capital flow is strictly controlled by SAFE (State Administration of Foreign Exchange). RMB is NOT freely convertible and is designated by the PBoC as a restricted currency (even though it is in the SDR basket). Look up QFII its one of the few legit ways to move or repatriate foreign capital into/out of China. CNH (offshore RMB) deposits ARE freely convertible and major banks do offer conversions.

I'm not totally sure where you think there is a missing gap, but happy to talk more. Email in my profile.


This sounds good, but tons of open source solutions already exist in this space and none of them look good enough.

I wish to build some of this, but who will pay for it?

--- BTW, http://plaintextaccounting.org has some good ideas about this. I think this is the way to go, but how to make it work with a database instead?


I am looking at three problems in the area right now from an operations research perspective.

(1) Physical logistics for food machines http://8-food.com/ and their supply chain

(2) Liquidity and settlement logistics for cross-border payments http://moneyclip.cc/

(3) Energy trading for emergent renewables-focused densely interconnected next generation power grids http://fiberhood.nl/

The goal is to get the core markup defined to the point where an engine can be applied to a formally specified risk model to generate various goal-optimized decisions for all three domains.

Note that there are also many other domains to which this reasoning would apply such as general logistics, supply chain and generic scheduling mechanisms for resource (eg. power)-constrained embedded systems.


> Why do we need banks at all?

Mostly, regulatory capture.


Accounting isn't about the ledger, it's about the myriad of laws at every level. That takes a well funded organization to track the constant stream of changes.


Cobol expert = $1,000 an hour rate. Great niche, but eventually will get replaced.

Awesome marketing and PR for his company Cobol Cowboys (http://cobolcowboys.com).

> His wife Eileen came up with the name in a reference to "Space Cowboys," a 2000 movie about a group of retired Air Force pilots called in for a trouble-shooting mission in space. The company's slogan? "Not our first rodeo."


They still teach COBOL and RPG at my old tech school. I went through the class, and the tech behind the old mainframes is actually pretty neat: multi-CPU processing, dynamically choosing which resources to use, easily transferring work to other machines, and how everything works together so tightly. A lot of concepts are getting 'rediscovered' instead of people just looking at systems older than me (probably the same feeling the people who learn Lisp go through). They also put us through the normal Java/Visual Basic hazing as well.

Though I ended up in electrical engineering.


A few concepts, such as having redundant CPUs, introduced a lot of technical challenges without clear payoffs. The original inspiration was the comparably high cost of computing equipment. Redundancy techniques, such as those inspired by the blockchain, seem to be much hipper nowadays.


When I peeked into mainframe development in college, it seemed like a lot of those technically heavy features were implemented because some manager said so, and someone else was willing to pay for it, no matter the practicality (or cost).


If I made $1000/hour, I would only need to do that for like 3 or 4 years before I could retire.


You're not billing out 8 hours a day 52 weeks a year - typically - when billing $1,000/hr :)

That's generally why you charge $1,000/hr to begin with.


What would be the fun in that?


We have 50,000 lines of RPG code we need to replace over the next decade, since the last RPG dev at our company is over 65 years old and can retire at any moment.

It's going to be funny when we replace it with Java and our business guys ask why it runs much slower.


I never wrote RPG code, but 50k LOC seems a pretty small project to me...


A 50k LOC program serving a critical business function without a test suite or requirements documentation can be quite a chore to replace, especially if you let it go until you have neither staff experienced in the language or with experience with the background of the specific application.


I've converted such programs from one language to another. It is not necessary to understand how the program works, only how the language works.

What you do is function by function, convert the language by duplicating what it does in the new language. Resist any and all urges to fix it, refactor it, improve it, etc. Just translate.


The problem is if there's some obscure language behaviour that's not documented well, and the code relied on that behaviour.

Unless you're an expert and know the language inside out, it's hard to even know such a problem occurred.


True, but languages tend to be far, far better understood and documented than some random application logic.


One semi-modern language to another is one thing. You could probably transpile the damn thing.

Converting batch-based RPG systems to a whole different paradigm, without a test suite covering the edge cases and with zero documentation, is another.


> critical business function without a test suite or requirements documentation

Even in the world of standard consumer webapps, this is a real challenge.


It might be small, but I doubt it is. I am really curious what kind of test suite an RPG program would have. If it doesn't have a test suite then good luck because you're basically rewriting a decades old 50K line program and its behavior by guessing how it works. Also who knows what kinds of weird data formats or protocols it deals with that no libraries exist for in other languages.


I just took a look at the language. I can imagine it could be faster than Java since it appears to be so low-level but I couldn't find any benchmarks. Really depends on how good the RPG compiler is when it comes to mapping the language to machine code.


There's a local company where I am that still runs an AS/400. They're hiring a junior dev with 1 year experience in both AS/400 and RPG. Would you recommend taking the job? I imagine it's pretty good job security. On the other hand, you have to program in RPG and Basic all day.


It says $100/h, not $1k. The marketing doesn't seem so great from a developer POV if it's true that they can normally get $1k/h.


This probably should not matter to me, but going to that website (even though it's a super basic marketing page) and seeing it without SSL bothers me: someone who does so much bank consulting cares so little about a simple SSL cert on their own site, especially given how easy Let's Encrypt has made it.

Like I said, this probably should not make a difference to me because I won't be hiring any "COBOL Cowboys" anytime soon, but as a customer of many large banks these people have probably worked for, it's a bit irksome for some reason.


Besides the contact form perhaps, what really needs to be SSL? Specific reasons. FYI, the site is powered by Wordpress.


In this age where ISPs are injecting ads and tracking headers? And MitM attacks are used to redirect to malware (see redirects to foxacid)?

Everything needs to be SSL - it provides more than just confidentiality.


"Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old."

Any contract-based engineer would charge that, or more, regardless of the language. Fortune 500 companies are happy to pay $300 / hour to a consulting company for engineering time.


I just assumed that $100/hr sounds like a lot to the journalist in question and therefore "more than $100/hr" was simply their top categorisation of compensation.

Bear in mind that this is something like four times the average in private industry, and fifteen times the minimum wage.

My general rule for stories by journalists - even experienced ones - covering any technical industry is that their use of facts is often extremely loose, and if something seems awry, distrust the story first and question your understanding of the domain second.


My first job out of college I worked for a large corp building a CRUD Java web app for one of their clients who implemented the large corp's backend. They billed the client $250/hr for my time (full time for 18 months). So yeah, anything close to $100/hr is leaving piles on the table.


Yeah, $100 an hour for contract COBOL expertise sounds very low, but maybe I underestimate the number of COBOL people still around.


A lot of COBOL work is cut-and-paste crap, or just updating basic fields. Some shops pay as little as $35/hr for these types of people, although most of the people at that rate are barely qualified to turn the computer on.


To be perfectly fair, turning on a mainframe or minicomputer is a nontrivial operation.


Ha, IPLing seems to hardly be the problem. It's turning it back off... But then again, if you give up and use the big red switch it's not so bad, but god help the next guy who wants to turn it back on... <chuckle>

What truly shocked me the most was "installing" a z/VM / z/OS stack without relying on a machine image. Gosh, I'm still shocked thinking about it years later. I remember "writing" (because copy-pasting assembly with the assistance of someone who knew what the hell they were doing doesn't really qualify) system hooks for things that literally work out of the box in any OS written in the past three or so decades. Stunning.


This. Anyone who has doubts can get a stack of z/OS DVDs and the Hercules S/390 emulator and just try to get it to boot! Not trivial.

Although I don't know if Hercules is still alive and hasn't been shut down by the suits.


Open source Hercules is still alive. Attempts to commercialise it (TurboHercules) ran into legal trouble with IBM, but to my knowledge the open source Hercules project hasn't faced any issues. (Disclaimer: I'm not a lawyer, I don't work for IBM, etc.)

IBM's licensing agreements don't allow you to run current versions of its mainframe OSes under Hercules. IBM will sell you an equivalent technology which you can legally run their OSes under, an x86 mainframe emulator called zPDT, but it is quite expensive (I have heard figures quoted like USD 5000). This is where TurboHercules ran into problems–they wanted to run current IBM OS versions under Hercules, but IBM says that violates their license agreements. TurboHercules tried to get the EU to force IBM into licensing their OSes to run on Hercules, but they didn't succeed.

By contrast, you are legally allowed to run old versions, 1970s vintage–in those days, IBM chose to release its operating systems into the public domain. That has little practical use, so can't be commercialised, but lots of people do that as a hobby, and there is stuff happening in that scene. I recommend this distribution of MVS 3.8J if you want some basic exposure to MVS – http://wotho.ethz.ch/tk4-/ – a lot of the basics, like JCL and TSO, aren't hugely different, although a lot of features you'd expect on a modern z/OS system (e.g. ISPF, Unix, Java, TCP/IP, peer-to-peer SNA) are missing in this circa 1981 system.

It is also perfectly legal to run Linux under Hercules. I never have because it seems somewhat pointless–the differences between z/Linux and x64 Linux are minor–but I can see practical uses – you can port a project/product to z/Linux so your customers can run it on their IBM mainframes without you needing an IBM mainframe yourself.


> it is quite expensive (I have heard figures quoted like USD 5000).

A bit of a correction, I'm afraid. That's the cost of a mid-shelf copy of Visual Studio 2009 or so. It's not expensive at all for corporate purposes at any company I've worked at. Top-shelf Visual Studio was set at $10,000 for a long time (some checking of current costs suggests they've redone their pricing model).

Expensive software for functioning companies, broadly, might be north of $100K. I can't speak to specifics, of course.

It's quite a reorientation of what "expensive" means when you get involved, even a bit, in corporate purchasing and negotiation.


Expensive for whom? Yes, for a large corporation USD 5,000 is easily affordable.

But consider someone like myself. My job isn't focused on mainframes. I very rarely have had anything to do with them at work. Even though my employer could easily afford USD 5,000 to buy me a zPDT if there was a business case for doing so, they won't because there isn't really one–in the last five years, I've only once had to help a customer with mainframe integration issues, and we used a partner company with mainframe specialists to handle that engagement for us. However, I'd still like to learn the technology. I'm not sure I'd ever really want a job in it, but it fascinates me. But I'm not forking out thousands of dollars of my own money just so I can play with z/OS or z/VM.

I think this is a problem with mainframes–even if someone is interested in the technology, it is very hard to learn about it unless your employer uses them (and in a large company, even if your employer does, your own job might still have little or nothing to do with them.) I would have thought IBM would be more keen on spreading knowledge of its own technologies around–it might actually make it easier to sell them to people–but it doesn't seem to be on IBM's radar. HP has the hobbyist program for OpenVMS, IBM could set up a similar program for mainframes (and IBM i too), but that has never happened. (I have heard some folks have tried to talk IBM into it, but they have never got anywhere.)


I totally agree that this is a problem for the mainframe, and I personally have tried to talk to IBM about this (and obviously haven't gotten anywhere). This is a huge missed opportunity for them. I regularly encounter folks that would like to learn the platform, even at a hobbyist level, but it's simply not accessible.

FWIW... I know it's not exactly what you're looking for, but IBM does host a Master the Mainframe contest each year. It provides free access to current z/OS systems for a while (a few months, I think) and a project consisting of a series of challenges that guide you through learning the environment. IIRC... you must be an enrolled student to actually win the prizes, but anyone can sign up and perform the activities. One of the mods on /r/mainframe helps coordinate the contest if you're interested in learning more about it.


I am very glad that Hercules is still around.

Kudos to those intrepid enthusiasts who implemented it and kept it going all these years. It's sad that they weren't able to capitalize on their efforts, but as a F/OSS supporter, I thank them for giving us at least the potential to play with mainframe technology (without running mainframe-compatible hardware).

Fun thought: I wonder what it would take to layer Hercules directly over ESX/i to create a poor person's "mainframe VM"? I mean, besides installing Hercules on a Linux guest on ESX/i.


Any particular reason why?


It's a multi-step process, and every system is different. It's not just a matter of flip the switch and wait for it to come up. If you're looking at something really primitive, you might even literally have to toggle in a boot loader program.

These two YouTube videos show how to initialize an S/390. The top comment on the second video is really good, too.

https://www.youtube.com/watch?v=ytMgyrZm87A

https://www.youtube.com/watch?v=qrKbh5HwF3Q


NOT QUALIFIED


They don't give the context. Maybe that's $100/hr w2 for the end contractor after the "Cobol Cowboys" company takes their cut.

Also, it's Dallas, where cost of living is great. 2300 sq ft houses on decent lots in good school districts for $250k.


(Tangent: The Dallas of >3 years ago had a good CoL. Everything has gone up $100k+ since and it's still going... Those same houses are 400k now. And new homes have tiny lots.)


Maybe that was supposed to be $1000 but the writer thought "no way"?


It must have been $1000 and somehow a zero was lost.

Financial industry? Short-term work? Specialist competence systems? There is no way it's $100.
