As mentioned in the article, it's good work; but it is also not easy work. You tend to go through cycles of being pushed out, then brought back under extreme emergency at any cost to get stuff working, only for the cycle to repeat. Companies never think of the old guys as the ones to implement the new system - that's a job for the "enterprise experts" - I can't even keep track of how many "rewrites" I've seen in my life fail because of this.
We are the dinosaur club, and it's a club that pays extremely well (high 6 figures a year without working too hard if you are talented and have a good client base and reputation). But like fossil fuel, one day it will all be gone ;)
Exactly this is why rewrites fail. The challenge of a rewrite is not in mapping the core architecture and core use case, it's mapping all the edge cases and covering all the end user needs. You need people intimately familiar with the old system to make sure the new system does all the weird stuff which nobody understood but had good reasons that was in the corners of the old system's code. IMHO the best way to approach a rewrite is to make a blended team of experts of the old system and experts of the new technology, and you put a manager in charge with excellent people skills who can get them to work together.
More often than not the next developer using the old system worked around it, but never documented it as a bug (or wrong format - "it's a feature! Not a bug!").
Or did your system actually include the OS source?
E.g. I'm working on a system that I could hypothetically abstract (I've got access to it, can poke with enough tests and test data, etc).
However. What I don't have is access to code or test injection into any of my sources / consumers. Both of which are expecting all the corner case quirks to be exactly identical and may actually have accreted software that depends on a specific quirk. A specific quirk that I have no way of knowing about. Or may send me something I've never seen and am not expecting because it's a 1:1,000,000 corner case, and we don't have any logging of an example that came through production.
I haven't worked on too much of the heavyweight stuff like you have, but I tend to take the perspective that "a 100% compatible rewrite is impossible." 95%+ maybe, but we're going to have to deal with the <= 5% after it goes to production.
Did you ever pursue writing and dropping a new tailored load balancer / router type application on the incoming data stream such that you could divert a specific portion onto new system(s)?
If it's something that's never come through production, it is not already documented anywhere, and it has an occurrence rate of approximately 0.001%, is it really a feature that needs to be replicated?
(Currently migrating a site of 125k pages of content with oodles of edge-cases)
Also, too many of the talented people are somehow stuck in a blocking-I/O mindset.
Some are wizards though, writing assembly and making Raspberry Pi-sized systems blazingly fast. OK, a couple of Raspberry Pis.
Cobol isn't involved, but those slow mainframes are.
We're not running our modeling engines and that stuff on it. We have HPC for that.
Yeah, but the interesting problem is what happens next. Let's say a test case reveals that there's a flaw in the next-gen system. You fix it. It later turns out that the same flaw exists in the legacy system. What do you do?
Do you revert the fix, or leave it in place?
They did NOT appreciate hearing that they had been running a bugged query for years...
At least there never is for me.
So true, even for more recent stuff. It's so absurd I always wanted to make a web comic about this. Companies keep ignoring the advice of their own developers, and then eventually hire some "technology expert" who is going to implement the same tech their existing staff recommended years ago. Except, of course, the expert has no idea about the business processes and user needs, so you end up with a long and expensive train-wreck that results in something barely better than what you had before.
Moving off core systems (z/i/p) isn't simple, mostly because of the amount of data combined with all the custom applications. Every attempt to move the apps off the z fails to come to fruition because of scope and the fact that it just works. The i is increasing its load to pick up from the z and getting a good dose of web-facing and REST services; these are dead simple to implement and many reuse all the existing code.
Talent-wise, it really hasn't been much of a challenge in Atlanta/Dallas to find people who can support what is needed. The language isn't the biggest roadblock for many mainframe systems; it's the file systems that can trip people up. The i is pure DB2, so anyone versed in SQL can use it, and RPG looks more like Pascal these days than the three-column format people normally associate with it.
The one thing many here don't understand is just how many companies are invested in mainframe, i, and large p series systems. Don't scoff at these platforms. With modern tools they can be web-faced just fine, and the advantage of their default coding languages is that they are business-math oriented and simple. With modern features and file systems it just comes down to: what is management comfortable with, and does it serve the company's needs?
I wonder if someone could get away with selling a z/i/p emulator, akin to Wine. (Or perhaps more apropos, MAME, since the machine architecture would differ as well.)
Anyone using mainframes heavily in the way we are talking about here is probably running something mission critical with lots of $$$ tied into it - and the financial/legal risk of having anything go wrong at all is a huge driver - they want a big company with deep pockets and pages-long support agreements backing them, so they can indemnify the other company if there is any problem and shift the blame (and from a personal perspective, cover their own butt). The mainframe-using companies also usually have deep pockets, and so even if the mainframe is expensive, it still allows them enough profit margin to support it.
"There is a problem with our IBM mainframe, we have requisitioned a team of 12 people from IBM to investigate and legal is looking into what sort of contractual obligations IBM has if issues aren't resolved" is a much more palatable statement for the fortune-100 CIO to pass along to the CEO than "our IBM emulator seems to be having problems, but the 3 people developing it are on vacation, IT isn't sure if it's an emulator issue or something to do with the new hardware we migrated to run it on 2 years ago, and IBM says they don't support the software when it's run on the emulator. We're calling them again to see if there's some way we can convince them otherwise"
The problem isn't the availability of an emulator, it's licensing the OS you want to use on the emulator.
Out of interest, what's the solution here? With your experiences in mind, what are the banks doing wrong from a technological perspective?
I get the impression banks should almost start again on the side, building a completely new bank they then (manually?) move customers over to, or just use to take in new customers whilst waiting for the previous ones to die out. I guess if they built a whole new system, wouldn't it then be a "simple" case of doing a money transfer into the new system?
Same happens with other French banks. As a personal PoV, banks' manned agencies provide bad service (from 20x delays to plain mistakes), so I won't cry for them.
Obviously, a dealing account has to hold funds and handle transfers. That's nowhere close to a consumer bank though.
e.g. in the UK it's typical to sign up for 2-10 year deals, with the interest increasing after these offer periods, meaning people will likely switch to another deal.
So there will be a certain amount of organic churn.
Would learning 'ancient' technology be a good career move? These systems aren't going anywhere, right? But the people who know how to maintain them are. Which means that maintaining these systems, which already pays well, will pay even better in the future? Or is there something I'm missing here?
If you want to be well employed, learn Java and move to the American south, where a nice house with a 1/2 acre backyard sets you back $130k - $200k and they can't find enough qualified IT people. No, it isn't a startup or Google, but the pay compared to the average income of $30k lets you live far better than in San Francisco, as long as you don't mind fewer things to do in town.
- Alan Perlis
Eventually everyone runs out of piss and vinegar. It's not so bad.
I hate working with people like that, and would hate myself if I did that.
My father in law has a saying: "if you don't like what you do, what are you doing?". And I agree with that.
You are fortunate. But labeling everyone who "doesn't like what they do" misguided lazy bums is... a bit much.
Most people in this state are slowly recuperating from years of failure while attempting to "do what they like". Usually as a bonus they also have to dig themselves from under a crushing mountain of debt.
As you correctly pointed out - it's life. Though it's rarely as simple as you paint it.
Whiteboard interviews have their place, I'm sure. This isn't one of them.
A new company whose processes, for example, don't have all these interdependent edge cases (yet). The domain will start pushing edge cases on the new company too sooner or later, but that knowledge still has a chance to diffuse - and lots of newer companies these days are probably better at avoiding small bus factors.
Too often the old guys simply re-create the old system using the old methods in new software, and you end up with all the same problems as before.
In a perfect world, the two would work together.
Wow, like $800-900k?
A bit related:
I spent 2 years of my life on a C codebase written by mathematicians who would much rather have written it in APL. I don't know if you have seen C written as APL, but that is 2 years of my life I will never get back. I left for a lesser paid, more fun job. Upon my leaving, the manager offered me a 60% pay raise (yup, I should have had higher demands, but at least I proved myself :) ).
Unless you enjoy torturing yourself, legacy COBOL programming is not very rewarding, and at least in Sweden, most of it is slowly being moved to other languages.
Which would imply that there's a demand for people who know both COBOL, and whatever's the popular migration target for such systems these days (I assume Java?).
To be honest, the idea of companies paying through their noses because of decades of short-term thinking makes me smile. Karma in action.
It is highly unlikely that the OP is making $800k/yr consulting as a programmer. That's $400/hr sustained for more than a year of fulltime work.
The chance that the OP is billing $800k/yr is vastly smaller than the chance that the OP used a bad figure of speech. The number of technology professionals who can sustain $400/hr billing for a full year is tiny. The number of technology professionals who are bad at communication is huge.
Bayes' Theorem is relevant here.
"Without working too hard" is quite subjective. I suspect they are quite disciplined and work quite hard, but are very rarely in the office after 5, nor working much more than 35-40 hours a week.
As to (bad) communication, $200k/yr is high but unremarkable for SV/NYC careerist professionals. The notion that someone with decades of rare, in-demand experience should somehow be capped at what a (lucky) 35 year old engineering manager makes is just silly.
You have to look at this probabilistically. Contract programmers taking home $800k/yr are very rare, but folks misusing "upper six figures" to mean $180k are common. I'm not a gambler, but I would happily place bets on the meaning of the OP. This is classic Bayes' Theorem.
The fact that everyone here is jumping into this thread to justify how it could be possible probably says a lot about the psychology of HNers.
We can, however, assume with a fairly high degree of confidence that he misspoke.
"half-million" is much more impressive sounding than "mid 6-figure" salary.
The real trick to software consultancy nirvana is to find the big whales paying big money for what are just kiddie apps.
Next, there's a lot of business logic basically embedded in the COBOL, or even at lower layers. For example, lots of banking files are in EBCDIC, a different character set from ASCII. Except there are lots of different EBCDIC variants, and there's no good way to tell which one you're viewing just by looking at the file. So you have to reverse-engineer COBOL to figure out the "correct" meaning of a given file.
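The variant problem described above is easy to demonstrate. Here's a minimal sketch using the EBCDIC code pages that ship with Python; cp037, cp500, and cp1140 are just stand-ins for whatever variant a given bank actually uses:

```python
# The same raw bytes decode to different characters depending on which
# EBCDIC variant you assume - and nothing in the file tells you which.
raw = b"\x4a\x5a\x5f"

for codec in ("cp037", "cp500", "cp1140"):
    print(codec, raw.decode(codec))

# cp037 (EBCDIC US/Canada)     -> ¢!¬
# cp500 (EBCDIC International) -> []^
```

Three bytes, two plausible readings, and you only find out which one is "correct" by reading the COBOL that produced the file.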
The problems go on and on. When I see people in the startup world rolling their eyes at the "incompetence" of the enterprise world, I take it to mean they've never actually worked on a truly hard problem in their lives.
I worked in a related field in the past. I don't claim that the problems are easy - where I tend to roll eyes rather is:
- Bureaucracy: A lot is required by law, I know. But there is a difference between "just following the required bureaucracy in a minimal necessary way if it stands in the path" (the startup way) vs. "taking it seriously in a way that makes the work harder than strictly necessary".
- Hierarchies: Just three words: I hate them.
- Unwillingness (?) to tackle these hard problems: The problems are hard, as you already outlined. But this implies to me that everybody in the company should move heaven and earth so that the people who work on these hard problems are able to (e.g. give them all the necessary information (requirement specifications etc.) they know of). If there is even one thing (or office politics) preventing that, I don't just roll my eyes, but get furious. Because of the hardness, this is not to be considered an obstacle, but targeted sabotage - and should be treated as such.
I know your startup mindset well, and I carry it with me too. I came into a legacy fintech company the same way and pushed for faster decision-making processes.
I didn't realize the cause for conservatism until I was given a story about how the company needed to manually call and refund thousands of customers... all because one developer fucked up and double charged people.
When you deal with MONEY and you experience getting burned like that... you realize how mantras like "move fast and break things" only work as convenient mottos for startups who have nothing to lose.
Things were pretty hectic and it was a pretty small team. We were using SQL Server at best and Excel spreadsheets as information feeds at worst, trying to calculate Black-Scholes on this stuff.
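For readers who haven't seen it, the Black-Scholes calculation mentioned above is compact enough to sketch. This is the textbook European call pricer, nothing like the desk's actual models, and the parameter values are made up:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """European call price: S*N(d1) - K*exp(-rT)*N(d2)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Spot 100, strike 100, 5% rate, 20% vol, 1 year -> about 10.45
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))
```

The formula itself fits on a napkin; the hard part in that environment was feeding it clean inputs from SQL Server and Excel.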
I clearly remember talking to the Traders and Quants regarding certain calculations we were doing to give them bond price goals for offsetting risk on the trades.
Honestly the traders at least didn't give a crap. I would present them tables; they would look at it, and would say "yeah that looks about right" - that's a direct quote from BZW's lead trader in 1994. I can't imagine things have changed that much.
I didn't stay long in that environment; it was pretty clear to me that despite getting people like Grady Booch coming in to clean up our act our "Customers" didn't really care too much about the mechanics of how things worked, or even worse whether the calculations were correct.
Top and bottom is, while the banking industry may employ "Cowboys" in the back-office for IT services, they also employ Cowboys in the front-office making the trades.
I doubt that has changed for the better that much. See 2008 financial crisis et al.
[Edit] As a side note, for the time I was earning more money than I knew what to do with. My boss at the time felt so ambivalent about his twice-yearly 50k (sterling) bonus that he threw it away at the Casino the day he got it (remember, this is 1994). I left and took a 75% pay cut to go work for Microsoft on projects that were, at least from a CS point of view, a lot more respectable. Having said that, I don't want to come off too harshly; we were at the cutting edge at the time and the technology was very cool and thoroughly enjoyable. But still..
I'll challenge your view that folks in the startup world don't know enterprise. Maybe some visible fraction are the young and inexperienced hipsters as portrayed on HBO, sure, but most of those I know in CTO+ roles actually have a lot of enterprise under the belt. In my case in a B2B play it's practically mandatory, in order to understand the customer.
I believe enterprises suffer principally from the fear of change, or more bluntly, the fear of screwing up and being held accountable, which leads to the pathological technical debt issues you've described. So the problems I've always faced in enterprise projects are not primarily technological, but instead those of a) finding a full team of people competent enough and fearless enough to perform transplant surgery on the beating heart of a living body corporate and b) collecting sufficient clout to be allowed to perform the operation.
I reckon the best thing you can do, as a project leader in the enterprise world, is leave a legacy of constant and gradual change. Normalize frequent updates through CI/CD. Get business owners used to things like minor feature requests being included in daily deploys. No-one will thank you at the time, but a change in culture is almost certainly the most enduring value you can create.
So yeah, I'll happily roll my eyes at the "incompetence of the enterprise world", because I've dealt with the stupid head-on, and used techniques from startup land to inoculate it permanently.
An executive is approached by a vendor. The vendor entices the executive, shows them a good time, gives them a really good assurance their service is worth it.
An engineer hates this service because it sucks. Because the decision was made based on how cool it looks, not by technical needs.
Given that, it is hard for an executive to get approached by a vendor who says "okay, this will cost a crazy amount of time and money but we'll make your systems more modern" which sounds like "hey, I'm going to come in and offer to replace a system that has worked for 30 years with something modern and potentially risky. And it'll cost you a lot." The executive doesn't (usually) realize that the true cost is really high and goes up with time. And of course they don't want to lose their cushy job so hell no they won't take it. Also the executive isn't directly working with the engineers so he doesn't truly know if he can trust them.
Thankfully, this is changing rapidly and the valley is leading the way.
I take it to mean they've never actually worked on a truly hard problem in their lives.
It sucks sometimes to be at the bottom of a deep hole you dug yourself into without a ladder, but at the end of the day, it's your hole.
And you are right that sometimes it's just a hard problem. But you can always make those worse.
Improvement needs to be continuous. The ability to update individual parts of the system with minimal coupling is vital. But even keeping that as the system evolves is a challenge - and designing for it in advance leads to all sorts of unnecessary "just in case" abstractions in the code.
Keeping code alive and running for a generation is a whole different kind of challenge.
Almost all of them – if they're really "bright" anyways. Even given that a lot of the important context is missing, bright programmers can do this stuff.
The wrong question is being asked. Modern programming techniques work because they have successfully abstracted this complexity.
Sure, a good programmer can learn this stuff. But they shouldn't have to, not these days. There's far more to programming than any one person could ever learn. Choose your battles.
But note, 25 years ago we didn't have the same solutions we might have today, but we had good solutions to lots of common problems. We certainly had solutions to "system is specified by a mixed bag of ASCII and inconsistent EBCDIC files, none alike, all specified 15 years ago", which is at the heart of the problem OP posited. 25 years ago people were saying exactly the same thing about banking systems in COBOL that beat describes. Exactly. The batch processing OP discusses had already been out of vogue for a decade at least. We had good solutions for nearly all of these problems; what we didn't have was quick, cheap solutions.
Just for completeness: we had HTML. We'd had SGML for a decade (which begat HTML and later XML). We had reasonable streaming protocols. We were a lot worse at connecting heterogeneous systems controlled by different entities and interoperating, but we were good at networking and building distributed systems at a smaller scale.
Keeping code alive and running for a generation is a very difficult problem, but keeping systems tidy, modular, and evolving is manageable, until you let them go too much.
And people are digging the same holes today. It's not technology that is the cause, it never was. It's cost and short term planning.
Death is a necessary component of change. In fact, renewal could not come without death.
Existing legacy systems bring with them assumptions about how things ought to work, and debt about expectations -- expectations that slow down your ability to change away from existing paradigms.
True innovation requires this breakaway.
So honestly, IMO the best move for a bank that is facing this kind of software nightmare is to maintain existing legacy support for the old system, but do a complete breakaway (NOT REWRITE) that is explicitly NOT dependent on the old contracts of functionality that the old system would have imposed. Make the rules change, acknowledge the new system will break with the existing one, and plan for a data migration wherever possible.
Accepting defeat and moving on is a saner path. Migrating the data will become possible once it's realized that ultimately data is easier to change over than behaviour.
I say this too as someone who is very against rewrites generally. It's a fallacy to believe that old systems can accommodate the new.
That gets away from the "Flip a switch on billions of dollars of transactions a day" terror.
That's why, a couple years ago when I had access to a zSeries and was hearing about how "desperate" the banks were, I took a look. What I discovered were salaries that are some of the lowest in the IT industry and a cynical attitude towards hiring. AKA, banks would rather pay $40k a year for a "system operator" - which is generally the ground floor for learning anything about a mainframe - than hire someone with a comp-sci degree and teach them the system while promoting them.
So, no thanks (you can fill in a less polite version), the banks can go to hell.
The number one issue I have seen in the IT industry is lack of incentive, primarily in the form of salary, lack of respect, and lack of a C-level working on their behalf at the boardroom level. I know sysadmins who supported entire 200-250 person, 7-branch companies single-handedly, but got 40k, never had a budget, and got refused for hires and simple stuff like "hey, the cabling here is from the 1970s, we need a contractor to come in and recable the offices", being told "no, you do it, at the same time you support the whole company, and no - you can't buy the TIA spec book."
It's no wonder companies are hemorrhaging good IT talent left and right.
If I could offer a single piece of advice to a company, it would be to create CTO and CIO positions if they don't exist, and get a good one who advocates on behalf of the IT department. I see most of these issues as management failures first, not technical ones, so don't come crying to me about COBOL.
I've been online for many many years and I STILL do not understand why people miss this simple idea.
Once you are making noticably more than you are spending, earning 2x and spending 2x means you're keeping 2x.
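Spelled out with made-up numbers, the arithmetic looks like this:

```python
income, spending = 150_000, 100_000   # hypothetical figures
keep = income - spending              # 50_000 kept per year

# Doubling both income and spending doubles what you keep,
# because keep = income - spending scales linearly.
assert 2 * income - 2 * spending == 2 * keep
print(keep, 2 * income - 2 * spending)  # 50000 100000
```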
I set my price: AUD$850/day, with a TINY bit of wriggle room, and that's final. I know my worth (from a skills perspective and market/economical one.)
Edit: misread. GST is NOT included in the price; Super is my problem and comes out of the agreed daily rate.
My own consulting rate is higher than that, and there's way more people with my skill set. One of the big differences is I work with companies that value code as an asset. It sounds like he's working with Banks which view code as a cost of doing business.
And depending on the urgency, his rate goes up. He's retired, and works at most 2 or 3 months/year and lives a pretty normal life and lives in a very small house. His neighbors were pretty stunned when he bought himself a Tesla, his previous car was a Nissan Note...
He told me he was usually called in for projects where they needed specific expertise when writing Java to interface with existing COBOL, or rewriting COBOL parts in Java and adapting other COBOL code to inter-operate with that. Apparently most COBOL stuff now runs on the JVM - which taught me: in 30 years, Java will be the new COBOL.
I'll also throw in the counterpoint that you get taxed heavily on these earnings and you (usually) don't get benefits.
It's like the difference between Java (the language) and J2EE, except in the context of 1970s computer technology. It's not intuitive and not something you can really work through without a lot of training and experience.
I always struggled to understand, because it seemed everything was different ... the terminology, the culture, the ideas. I couldn't use analogy to tie what he was describing back to what I knew.
I politely accepted it as the right answer but boy oh boy does it feel like it's from an alien civilization.
Fixed-length records are less common for interchange formats, but you likely use them every day anyway. That's what databases often use, and the reason is that it makes it very efficient to index into the structure and get whole records (know your data is sorted? Binary search is possible and easy then). Sometimes it's just the indices that are stored this way (they essentially have to be), but you can get fairly efficient table access out of some engines without indexing everything, if all the records are fixed and the engine can determine that.
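A minimal sketch of why fixed-length records make this so efficient (the 8+8 byte layout here is invented for illustration): record i always lives at byte offset i * record_size, so a sorted file can be binary-searched with no index at all.

```python
import struct

# 16 bytes per record: 8-byte key + 8-byte payload, both space-padded.
REC = struct.Struct("8s8s")

def make_file(pairs):
    """Serialize (key, value) pairs as sorted fixed-length records."""
    return b"".join(REC.pack(k.ljust(8).encode(), v.ljust(8).encode())
                    for k, v in sorted(pairs))

def lookup(blob, key):
    """Binary search: record i is always at byte offset i * REC.size."""
    lo, hi = 0, len(blob) // REC.size
    target = key.ljust(8).encode()
    while lo < hi:
        mid = (lo + hi) // 2
        k, v = REC.unpack_from(blob, mid * REC.size)
        if k < target:
            lo = mid + 1
        elif k > target:
            hi = mid
        else:
            return v.rstrip().decode()
    return None

blob = make_file([("carol", "300"), ("alice", "100"), ("bob", "200")])
print(lookup(blob, "bob"))  # -> 200
```

With variable-length records you'd need an index just to find where record i starts; here the offset arithmetic is the index.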
If you don't program in a low-level language, this is generally abstracted away by some library that is written in one. People don't usually write PNG and JPEG libraries in pure Python or Ruby (or at least, they don't expect them to be used much in production); they write a shim that wraps libpng or libjpeg.
The interesting part is that the fixed and variable record formats are first class things on the mainframe.
So, something like DB2 on a mainframe can use system supplied functionality (VSAM) as their storage engine. As opposed to unix, where higher level databases like MySql, CockroachDB, etc either roll their own (InnoDB) or use some 3rd party offering like RocksDB, LevelDB, etc.
VSAM isn't just one thing either...it supports k/v indexing, or indexing via relative byte address, or indexing via record number, etc.
So, basically, when you talk with mainframe people about interchanging data, they don't tend to consider that you might actually have to write some code to parse what they are sending you. They tend to assume you already have utilities that understand these things. It's not an interchange format... it's the native format for them.
I suppose the answer seemed novel because it's speaking with very specific "official sounding" terminology about something that's usually ad-hoc negotiated by project in the unix world.
A big historical difference with mainframes and data was the architecture around I/O. They always had separate processors to offload I/O, and I/O was always asynchronous. And things like VSAM were highly tuned to take advantage of that.
That's why mainframes continued to outpace Linux/X86 for some types of workloads...even after X86 performance far outpaced the main processors in a mainframe.
I believe that advantage is completely gone now, but mostly via brute force vs elegance. Commodity hardware is just so fast now.
Fairly recently. Through things like affordable ssd, enough Moore's law around intel, and better distributed data stores. And better app side knowledge on how to break up a monolith.
I was around for a few failed "rewrite this TPF system" attempts and I saw what broke.
Commodity stuff can replace it now...but only very recently.
Or if you just meant x86-64 vs any other CPU, for the CPU alone? That debate is just done. They poured enough money into that mess that they won, assuming you don't care about power consumption.
Mainframe DASD is the same as "X86" disks, at least for those using "storage arrays".
Pretty much all the smaller Mainframes are gone, many years ago. I've not heard of any successful replacement of a loaded system which used fewer than three times the initial projection of "X86-power".
Anyway, time will tell. In 10 years' time you'll still think X86 is faster and there'll still be Mainframes.
As to your last line, who is "they"? I'm just interested. Thanks.
This one was great:
A better explanation would have been "accidentally invested".
I recall seeing how files were allocated on disk (remember that mainframes have many different OSes, like OS/390, and even OSes on top of OSes like VM/CMS, and I don't remember what this was running on).
In this particular case, a file was preallocated in JCL to use N extents starting on a specific cylinder. Fixed size. None of this fancy ext3 or NTFS ;)
JCL (Job Control Language) was a language to control batch jobs, and many have called it the worst language ever designed, although not as bad as brainfk.
On the other hand, I had a chance to interface C++ with CICS (a transaction processing subsystem) using WebSphere MQ, and I must say, I was really impressed with its sophistication. It was a kind of SOA long before the term was invented.
A lot of what I saw in the mainframe world predated things - by decades - that some may think are new(er) concepts, such as clusters (sysplex), front-end processors, hypervisors, HA, and so on.
Those of us who had to fiddle with implied file formats with fixed-length fields and records won't find this stuff quite as alien, but equally as painful to deal with. I recall using some sort of ETL program to get around this. On the plus side, these primitive formats certainly were efficient in terms of processing speed, and a great match for COBOL.
Speaking of COBOL, as part of this project, I had to write a parser in C++ to parse COBOL copybooks (kind of a COBOL data structure definition) and generate C code to read the data.
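A toy version of that kind of copybook parser, handling only a small invented subset of PIC clauses (elementary items with X(n) and 9(n)V9(m)), might look like this; real copybooks with group levels, REDEFINES, OCCURS, and COMP fields are far hairier:

```python
import re

# Simplified subset: "<level> <name> PIC X(n)." or "PIC 9(n)V9(m)."
PIC_RE = re.compile(
    r"^\s*\d+\s+(?P<name>[\w-]+)\s+PIC\s+"
    r"(?P<type>[X9])\((?P<len>\d+)\)"
    r"(?:V9\((?P<dec>\d+)\))?\s*\.\s*$"
)

def parse_copybook(text):
    """Return (name, byte offset, width, implied decimals) per field."""
    fields, offset = [], 0
    for line in text.splitlines():
        m = PIC_RE.match(line)
        if not m:
            continue  # skip group items, comments, unsupported clauses
        dec = int(m.group("dec") or 0)
        width = int(m.group("len")) + dec  # V is implied, takes no byte
        fields.append((m.group("name"), offset, width, dec))
        offset += width
    return fields

copybook = """\
       05 CUST-NAME   PIC X(20).
       05 BALANCE     PIC 9(7)V9(2).
"""
print(parse_copybook(copybook))
# -> [('CUST-NAME', 0, 20, 0), ('BALANCE', 20, 9, 2)]
```

From a table like that it's straightforward to emit C struct definitions and reader code, which is essentially what the project above did.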
It is a very different world, but I don't think it's all bad. After all, the technology has been working very well for a long time. Kudos to the COBOL Cowboys. I hope they charge a lot more than $100/hr!
Sounds like a z/VSE system (formerly known as VSE/ESA, VSE/SP, DOS/VSE, DOS/VS, DOS/360). In DOS JCL (which is a different syntax to z/OS / OS/390 / MVS / OS/VS2 / OS/360 JCL), you manually allocate files to disk locations using the EXTENT statement. By contrast, in z/OS the operating system decides where on disk to locate your file (or dataset, to use mainframe terminology). (You don't have to manually allocate files any more in z/VSE – you can use VSAM, or store your files in libraries, and in both cases the OS decides on disk locations for you – but, originally, neither VSAM nor libraries existed, so you had to manually assign locations to all the files on disk.) It is very primitive, but remember it was designed in the 1960s to run on machines with only 16KB of memory–plus, humans could design a disk layout to maximise performance, by placing frequently used files on faster areas of the disk. Nowadays, the OS can do a better job of locating files on disks than humans can do, but this capability is kept for backward compatibility.
I only learned about them when a state sent me files in EBCDIC. As with all things Perl, you can convert from that to ASCII as a one-liner. Or, rather, I helped someone much smarter than me do that, 20 years ago.
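The conversion itself is indeed a small job in any scripting language. As an illustration, Python ships EBCDIC codecs out of the box (cp037 is a common US EBCDIC code page; which code page a given file actually uses is something you'd have to confirm):

```python
# "HELLO" encoded in EBCDIC code page 037
ebcdic_bytes = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])
print(ebcdic_bytes.decode("cp037"))   # decode EBCDIC -> str
print("HELLO".encode("cp037").hex())  # and back again
```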
Not to mention that you cannot test for uppercase or lowercase like the ASCII `ch >= 'A' && ch <= 'Z'` because they are not contiguous in EBCDIC. A good reason to use the C RTL.
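A quick Python demonstration of the gap (using cp037 as a representative EBCDIC code page):

```python
def e(ch):
    """EBCDIC (code page 037) code point of a single character."""
    return ch.encode("cp037")[0]

# The letters are not contiguous: 'I' is 0xC9 but 'J' jumps to 0xD1.
print(hex(e("I")), hex(e("J")))

def is_upper_naive(b):
    # The C-style test `ch >= 'A' && ch <= 'Z'` ported naively to EBCDIC:
    return e("A") <= b <= e("Z")  # 0xC1 .. 0xE9

# Non-letters fall inside that range -- '}' is 0xD0 in cp037.
print(is_upper_naive(e("}")))  # True, which is wrong
```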
Watch your sorting methods, too. I had a guy over here once totally confused why running his SAS job on the mainframe yielded a different result than the same code running on PC SAS against the same data.
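The collation difference is easy to demonstrate: in EBCDIC, lowercase sorts before uppercase, and digits sort after letters, which is the reverse of ASCII (Python sketch, again using cp037 as the EBCDIC code page):

```python
names = ["a1", "A1", "1a"]
# ASCII byte order: digits < uppercase < lowercase
print(sorted(names, key=lambda s: s.encode("ascii")))
# EBCDIC (cp037) byte order: lowercase < uppercase < digits
print(sorted(names, key=lambda s: s.encode("cp037")))
```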
These days I consider it more of a "what was I thinking" facepalm-worthy sort of thing but at the time I was very proud of it. The "what was I thinking" part is more about the fact that some poor bastard had to come along after me and support that mess.
No need to hesitate. It's not half bad for a dynamic language if you keep a little discipline. 20 years ago the average Perl programmer was probably akin to the average PHP programmer from 10 years ago. That is, not very experienced, and with code that made that fairly obvious, even if it got the job done. With some of the more modern modules, you get something pretty swizzy. :)
Think of building a DB on a C64 and that will probably be more like it.
But of course, if you still have to worry about how your data will be serialized to disk in 2017, that's your problem right there.
But, to the linked answer. While the details differ a bit, the zSeries is simply a low-level description of how the machine worked (past tense because modern zSeries mainframes have a lot of hidden "virtualization" in order to leverage industry standards). That is why "mainframes" outperform racks of x86 PCs. There really isn't anything magical about the hardware. The real magic is the fact that the software is written by guys who grew up understanding how to process transactions in a couple K bytes of memory, and the machines grew as the transaction load did. The result is code which understands the hardware and is crazy optimized in the critical paths. The fact that frequently the critical paths all fit in a fraction of a modern L1 cache doesn't hurt either.
For something even closer, you could consider the options to tar, and modern tape drives which continue to actually support the concept of fixed vs variable block reads/writes and blocking factors, and with recent encryption standards even allow what is effectively per block metadata.
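The fixed-vs-variable distinction can be sketched in miniature (a loose Python analogy, not actual tape I/O; the 4-byte length prefix is just an illustrative stand-in for a record descriptor word):

```python
import io

BLOCK = 512  # fixed block size, like a tar blocking factor of 1

def write_fixed(stream, records):
    # Fixed blocking: every record is padded to BLOCK bytes; the reader
    # needs no metadata, it just reads BLOCK bytes at a time.
    for r in records:
        stream.write(r.ljust(BLOCK, b"\x00"))

def write_variable(stream, records):
    # Variable blocking: each record carries its own length prefix,
    # loosely like RECFM=V record descriptor words on the mainframe side.
    for r in records:
        stream.write(len(r).to_bytes(4, "big") + r)

buf_f, buf_v = io.BytesIO(), io.BytesIO()
write_fixed(buf_f, [b"hello", b"world"])
write_variable(buf_v, [b"hello", b"world"])
print(len(buf_f.getvalue()), len(buf_v.getvalue()))  # 1024 vs 18
```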
My point is that while a lot of the terminology and things exposed on a daily basis with a mainframe seem strange to someone at first glance, a modern server has just as many (if not far more) strange behaviors buried in it. The difference frequently is the layers of standardized interfaces, protocols, and software stacks layered below what most people consider their "software stack".
Of course Java is going to be slow when the "Enterprise Architect" and his minions push for hundreds of classes that barely do one thing right and sit several inheritance levels deep, while the "mainframe" people are shuffling data using something that's simpler than a CSV.
Also, the virtualization magic is good enough that the COBOL people keep playing with their 70's technology while ignoring modern-world problems.
You mean taking a program which hasn't been recompiled since 1970 and running it (the 1970 executable) on hardware released in 2014 under an OS from 2015? Of course, it will work. There's no virtualization there, just reality. What 70's technology in particular are you thinking of? The latest Enterprise COBOL compiler is just over a year old.
You deal mostly in stream files; they probably deal mostly in record-oriented files, where their "kernel" understands that they're record oriented.
Your terminal emulator sends every keystroke to the host. Theirs doesn't send anything until they specifically make it do so.
You have ASCII or UTF-8; they have EBCDIC.
Some of it is crossing over though. They used to talk about virtual machines before they were common for us. The concept was confusing at the time.
Underlying this is that banks used to see IT/Technology as competitive advantage. A lot of tech came directly out of the banks. IT became a cost-center in the 90s and it was never the same.
The irony is that this tech stack was pretty good and fit-for-purpose for a lot of banking workloads. Plus it kept working... so that just lessened the investment required.
I find this being an oddly satisfying part of the story
Accounting systems of all kinds should be generic and open source. Why do we need banks at all? It's not like this infrastructure really has any special characteristics. Most of it even has downtime and batch jobs. Fact: The bulk of banking is just recording some numbers at a stupendously simplistic resolution, with maybe 64 characters of description and a date.
A few weeks ago I spent a day interviewing 20 different Hong Kong banks about API availability for cross-border RMB transactions. None at all offered it.
We're reaching a point where the financial systems of a mid-level company exceed those of the banks they are forced to utilize.
Modern requirements include things like: 24x7x365 availability, multilingual, arbitrary asset type support (energy, carbon credits, cryptocurrencies, space, time, etc.), multi-asset type accounts, new settlement networks, real time reporting and AML/KYC, all features API-available, new and established customer interaction through non-snailmail/physical means, customers routinely in different countries, multi-user accounts with disparate access levels (eg. accountant/auditor/spouse/kid), multiple legal jurisdictions with clashing regulatory frameworks, settled-means-settled, regulator-forced free market integration for non-core (ie. account-related) financial services such as loans/forex, redundant service provider availability for every function, meaningful SLAs/reputation for service providers, routing and/or provider selection based upon nontraditional metrics such as ethical investment rationales, etc. The same set of requirements goes up and down the supply-chain: people want to reason with their suppliers and customers about stock, settlement status, payment and contracts, they sometimes need backups in case of failure down the chain, and they care about reputation.
Frankly the whole area is such a mess I am expecting an open source core accounting project to take over the sector. Probably it will begin in smaller/developing world banks and move toward the big guys like a meteor.
For some evolving thoughts on the area (from 2012, but literally picked up again in the last 2 days) see http://www.ifex-project.org/our-proposals/ifex
They don't do it because doing it while complying with all relevant regulations is more complicated than you realise. That's what banks are ultimately in the business of, finding ways to do business across jurisdictions. And it's why most fintech companies fail: their beautiful code runs headlong into the messy, illogical, unpredictable real world of regulatory compliance and guess what, the regulator always wins.
I'm not totally sure where you think there is a missing gap, but happy to talk more. Email in my profile.
I wish to build some of this, but how will I pay for it?
BTW, http://plaintextaccounting.org has some good ideas about this. I think this is the way to go, but how to make it work with a database instead?
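One way to picture the database angle is to mirror the journal into SQL and let the database handle queries and reporting. A minimal sketch, assuming a made-up one-line-per-posting format (real plain text accounting tools like ledger/hledger use a richer journal syntax than this):

```python
import sqlite3

# Hypothetical, simplified posting format: date|account|amount|description
JOURNAL = """\
2017-04-01|assets:checking|-42.00|groceries
2017-04-01|expenses:food|42.00|groceries
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE postings (date TEXT, account TEXT, "
           "amount REAL, description TEXT)")
for line in JOURNAL.splitlines():
    date, account, amount, desc = line.split("|")
    db.execute("INSERT INTO postings VALUES (?, ?, ?, ?)",
               (date, account, float(amount), desc))

# Double-entry invariant: a transaction's postings sum to zero.
(total,) = db.execute("SELECT SUM(amount) FROM postings").fetchone()
print(total)  # 0.0
```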
(1) Physical logistics for food machines http://8-food.com/ and their supply chain
(2) Liquidity and settlement logistics for cross-border payments http://moneyclip.cc/
(3) Energy trading for emergent renewables-focused densely interconnected next generation power grids http://fiberhood.nl/
The goal is to get the core markup defined to the point where an engine can be applied to a formally specified risk model to generate various goal-optimized decisions for all three domains.
Note that there are also many other domains to which this reasoning would apply such as general logistics, supply chain and generic scheduling mechanisms for resource (eg. power)-constrained embedded systems.
Mostly, regulatory capture.
Awesome marketing and PR for his company Cobol Cowboys (http://cobolcowboys.com).
> His wife Eileen came up with the name in a reference to "Space Cowboys," a 2000 movie about a group of retired Air Force pilots called in for a trouble-shooting mission in space. The company's slogan? "Not our first rodeo."
Though I ended up in electrical engineering.
That's generally why you charge $1,000/hr to begin with.
It's going to be funny when we replace it with Java and our business guys ask why it runs much slower.
What you do is function by function, convert the language by duplicating what it does in the new language. Resist any and all urges to fix it, refactor it, improve it, etc. Just translate.
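A tiny illustration of why "just translate" matters: COBOL fixed-point arithmetic truncates to the declared picture, and a faithful port has to preserve that rather than "fix" it to rounding. A hypothetical Python sketch (the `PIC 9(3)V99` field and the division are invented examples, not from any real system):

```python
from decimal import Decimal, ROUND_DOWN

def divide_like_cobol(amount, parts):
    # COBOL: COMPUTE SHARE = AMOUNT / PARTS, where SHARE is PIC 9(3)V99.
    # Literal translation keeps the truncation to two decimal places,
    # even though "improving" it to round-half-up is tempting -- and
    # would silently change balances downstream.
    return (Decimal(amount) / Decimal(parts)).quantize(
        Decimal("0.01"), rounding=ROUND_DOWN)

print(divide_like_cobol("100.00", 3))  # 33.33, not 33.34
```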
Unless you're an expert and know the language inside out, it's hard to even know such a problem occurred.
Converting batch-based RPG systems to a whole different paradigm, with no test suite covering the edge cases and zero documentation, is another.
Even in the world of standard consumer webapps, this is a real challenge.
Like I said, this probably shouldn't make a difference to me, because I won't be hiring any "COBOL Cowboys" anytime soon. But as a customer of many large banks these people have probably worked for, it's a bit irksome for some reason.
Everything needs to be SSL - it provides more than just confidentiality (integrity and authentication, too).
Any contract-based engineer would charge that, or more, regardless of the language. Fortune 500 companies are happy to pay $300 / hour to a consulting company for engineering time.
Bear in mind that this is something like four times the average in private industry, and fifteen times the minimum wage.
My general rule for stories by journalists - even experienced ones - covering any technical industry is that their use of facts is often extremely loose, and if something seems awry, distrust the story first and question your understanding of the domain second.
What really truly shocked me the most was "installing" a z/VM/z/OS stack without relying on a machine image. Gosh, I'm still shocked thinking about it years later. I remember "writing" (because copy-pasting assembly with the assistance of someone who knew what the hell they were doing doesn't really qualify) system hooks for things that literally work out of the box in any OS written in the past three or so decades. Stunning...
Although I don't know if Hercules is still alive and hasn't been shut down by the suits.
IBM's licensing agreements don't allow you to run current versions of its mainframe OSes under Hercules. IBM will sell you an equivalent technology which you can legally run their OSes under, an x86 mainframe emulator called zPDT, but it is quite expensive (I have heard figures quoted like USD 5000). This is where TurboHercules ran into problems–they wanted to run current IBM OS versions under Hercules, but IBM says that violates their license agreements. TurboHercules tried to get the EU to force IBM into licensing their OSes to run on Hercules, but they didn't succeed.
By contrast, you are legally allowed to run old versions, 1970s vintage–in those days, IBM chose to release its operating systems into the public domain. That has little practical use, so can't be commercialised, but lots of people do that as a hobby, and there is stuff happening in that scene. I recommend this distribution of MVS 3.8J if you want some basic exposure to MVS – http://wotho.ethz.ch/tk4-/ – a lot of the basics, like JCL and TSO, aren't hugely different, although a lot of features you'd expect on a modern z/OS system (e.g. ISPF, Unix, Java, TCP/IP, peer-to-peer SNA) are missing in this circa 1981 system.
It is also perfectly legal to run Linux under Hercules. I never have because it seems somewhat pointless–the differences between z/Linux and x64 Linux are minor–but I can see practical uses – you can port a project/product to z/Linux so your customers can run it on their IBM mainframes without you needing an IBM mainframe yourself.
A bit of correction, I'm afraid. That's the cost for a mid-shelf copy of Visual Studio 2009 or so. It's not expensive at all for corporate purposes at any company I've worked at. Top-shelf Visual Studio was set at $10,000 for a long time (some checking of current costs suggests they've redone their pricing model).
Expensive software for functioning companies, broadly, might be north of $100K. I can't speak to specifics, of course.
It's quite a reorientation of what "expensive" means when you get involved, even a bit, in corporate purchasing and negotiation.
But consider someone like myself. My job isn't focused on mainframes. I very rarely have had anything to do with them at work. Even though my employer could easily afford USD 5,000 to buy me a zPDT if there was a business case for doing so, they won't because there isn't really one–in the last five years, I've only once had to help a customer with mainframe integration issues, and we used a partner company with mainframe specialists to handle that engagement for us. However, I'd still like to learn the technology. I'm not sure I'd ever really want a job in it, but it fascinates me. But I'm not forking out thousands of dollars of my own money just so I can play with z/OS or z/VM.
I think this is a problem with mainframes–even if someone is interested in the technology, it is very hard to learn about it unless your employer uses them (and in a large company, even if your employer does, your own job might still have little or nothing to do with them.) I would have thought IBM would be more keen on spreading knowledge of its own technologies around–it might actually make it easier to sell them to people–but it doesn't seem to be on IBM's radar. HP has the hobbyist program for OpenVMS, IBM could set up a similar program for mainframes (and IBM i too), but that has never happened. (I have heard some folks have tried to talk IBM into it, but they have never got anywhere.)
FWIW... I know it's not exactly what you're looking for, but IBM does host a Master the Mainframe contest each year. It provides free access to current z/OS systems for a while (few months, I think) and a project consisting of a series of challenges that guide you though learning the environment. IIRC... you must be an enrolled student to actually win the prizes, but anyone can sign up and perform the activities. One of the mods on /r/mainframe helps coordinate the contest if you're interested in learning more about it.
Kudos to those intrepid enthusiasts who implemented it and kept it going all these years. It's sad that they weren't able to capitalize on their efforts, but as a F/OSS supporter, I thank them for giving us at least the potential to play with mainframe technology (without running mainframe-compatible hardware).
Fun thought: I wonder what it would take to layer Hercules directly over ESX/i to create a poor person's "mainframe VM"? I mean, besides installing Hercules on a Linux guest on ESX/i.
These 2 YouTube videos show how to initialize an S/390. The top comment on the second video is really good, too.
Also, it's Dallas, where cost of living is great. 2300 sq ft houses on decent lots in good school districts for $250k.
Financial industry? Short-term work? Specialist-competence systems? There is no way it's $100.