There is a lot to be said for systems that do their jobs despite their age, and which have been validated and refined over the course of decades.
In many ways, I'd rather maintain an old, well tested system than a modern system with widely available exploits, conflicting commercial interests, and rapid obsolescence.
The only reason I do it is job security. I prefer to know a version of something inside and out, all the strengths and weaknesses. I don't want to spend the time to learn something new if the current tech stack suffices - it's a risk. I'd rather spend that time building something. But if I don't upgrade/rewrite every few years, and I need to find a new job, the interviewers will look at it and think I'm a dinosaur - despite the fact that I follow the latest trends through blogs and podcasts.
My company is slowly upgrading all our application servers because what we were running is way past EOL. The ops team bitches and moans because the newer one is more unstable. Go figure.
Yep. In a lot of parts of the industry, old and stable is seen as a negative by many. Maybe that's why people don't consider programming "engineering".
Yeah, well. A lot of those systems do their jobs very well. But now they want to access their data and calculations from iOS and browser apps. Suddenly it becomes unusable or blocks productivity.
Locally: if you distract the person making the calculations for a second, it's easy to move a bead or two.
Insider: easier to manipulate than a computerized system for individual computations. Imagine how easy it is to move a few beads incorrectly when computing taxes on your own income statement, and how much work it would be to detect that.
Too true - both the negative (who wants to program an abacus), and the positive (there is no magic in this program/device).
I feel like developers tend to favor the magic, due to the burden it relieves, without having to consider the negatives due to the way the business of software works.
Not all developers, but I'd say that certainly the majority seem easily entranced by the "latest and greatest" especially in some areas of software; web development is the most prominent example to come to mind.
Web development has the most innovation because it is the least entrenched. It is not hard to write a bare-bones web application that produces output that can be read on browsers everywhere.
Compare that to the difficulty of writing a cross-platform GUI framework. It's not even easy to write bindings for existing GUIs because it's hard to wrap C++ in a way that doesn't have you manually managing memory or discarding other language features and philosophies.
This is why we're seeing the Electron approach to GUI apps grow. It's easier to write bindings for Chromium and feed it HTML.
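To the point above about bare-bones web apps, here's roughly how low the bar is: a minimal sketch in plain Java, using only the JDK's built-in com.sun.net.httpserver (the port and markup are arbitrary, and this is nothing like production code):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class BareBonesWeb {
        public static void main(String[] args) throws Exception {
            // One endpoint, no frameworks: any browser anywhere can read the output.
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            server.createContext("/", exchange -> {
                byte[] body = "<h1>hello, web</h1>".getBytes();
                exchange.getResponseHeaders().set("Content-Type", "text/html");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }

Compare the line count of that to even the skeleton of a cross-platform native GUI, and the innovation gap is no mystery.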
Oh god, this. So much this. I'm dealing with it right now in a hardware migration. Local iron to AWS. There's so much magic that we can't effectively debug errors. The new system is extremely vanilla and throws errors where you'd want them. The old system silently swallows them and creates invalid data. I'm not even talking about php or mysql.
Magic is bad when it comes to replicability. I wish more people would take that into consideration when writing software. Your convenience for today is your headache for tomorrow. Be lazy. It's a good trait. But not too lazy.
In many ways, I'd rather eat a flaming ball of excrement than work with 1960s technology again, especially when the code was written by programmers in the USG. These systems are horrible kludges.
They're like the crumbling ruins of an ancient Roman city beneath a metropolis, beautiful only for their history. It's a nice place to visit. But you wouldn't want to live there.
This happens a lot more commonly than people might think. Not sure if this is still the case, but Florida DHSMV (motor vehicles department) used a Honeywell mainframe as recently as a few years ago, and the word around Disney World Ride & Show was that the parades in Magic Kingdom ran on a DEC PDP-10 as recently as the late 1990s and early 2000s. Both of these may no longer be true, but I haven't checked in a while. Both of them might still be true...
"IMF is located at the Enterprise Computing Center – Martinsburg (ECC-MTB) and resides on the MITS-21 GSS. IMF is written in Assembler Language Code (ALC). There are no direct users of IMF. IMF receives data from an array of systems and then sends data to several systems as well."
MITS-21 GSS == "Modernized Information Technology Services (MITS)-21, IBM Master File General Support System (GSS)"
"The Unisys mainframe at the Martinsburg Computing Center processes all of the centralized Individual Taxpayer Information File workload for the 10 IRS campuses. The Unisys mainframe at the Memphis Computing Center processes all of the centralized Business Taxpayer Information File workload for the 10 campuses.
The Unisys mainframe at the Martinsburg Computing Center is configured to run with 950 MIPS for normal weekday processing. For weekend processing, the Unisys mainframe at the Martinsburg Computing Center borrows MIPS from the development and test environments to increase capacity to 1,200 MIPS for managing the increased workload. The borrowed MIPS are returned to their respective systems on Monday mornings to support weekday processing. The Unisys mainframe at the Memphis Computing Center is configured to run with 675 MIPS for normal weekday processing."
"In October 2010, the IRS upgraded its Martinsburg Computing Center mainframe computers from the IBM z/9-series mainframes to the IBM z/196-series mainframes."
The report even provides nice utilization graphs. Your government at work.
edit: The type of Unisys systems "The IRS Unisys mainframe environment contains two Dorado 280 mainframe computers, with one located at the Martinsburg Computing Center and one at the Memphis Computing Center. Because vendor hardware support will be discontinued as of December 31, 2011, the IRS has decided to upgrade the Dorado 280s to Dorado 780s during Fiscal Year 2011."
Yes, the number of instructions per 'business operation' in a mainframe is typically much, much smaller than on something like an x86. This dates back to IBM's early habit of implementing all instructions in lengthy microcode, resulting in instructions that were relatively slow to execute but did complex things, typically taking an entire high level operation (e.g. sum all these numbers) and making it one instruction. A big part of making this happen is the IO model on mainframes which is much more integrated than x86 professionals are used to. In general files on mainframes are not binary blobs but schema-compliant tables similar to an RDBMS, a fact that is leveraged at a very low level when programming for mainframes.
This leads to a very different kind of thinking from using microcomputers. I highly recommend that anyone in computing learn to use a mainframe or at least minicomputer operating system, which is affordable thanks to services like PUB400 from RZKH which provides free access to an IBM i minicomputer. Yes, these systems are uncommon today outside of certain specific market verticals (e.g. finance), but the exercise will show you a very different way of organizing a computer system that might help you think outside of the *nix-style box.
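To make the "schema-compliant tables" point concrete, here is the closest everyday analogue, fixed-width copybook-style records, sketched in Java. The field names and widths are invented; on a real mainframe the schema lives in the system itself rather than in application code:

    import java.math.BigDecimal;

    public class RecordSketch {
        // Hypothetical fixed-width layout (think COBOL copybook):
        // 9-char account, 20-char name, 11-digit balance with two implied decimals.
        static final int ACCT = 9, NAME = 20, BAL = 11;

        record Taxpayer(String account, String name, BigDecimal balance) {}

        static Taxpayer parse(String line) {
            int p = 0;
            String acct = line.substring(p, p += ACCT).trim();
            String name = line.substring(p, p += NAME).trim();
            BigDecimal bal = new BigDecimal(line.substring(p, p + BAL))
                    .movePointLeft(2); // implied decimal point
            return new Taxpayer(acct, name, bal);
        }

        public static void main(String[] args) {
            //           account  name (20 chars)      balance
            String rec = "123456789JOHN Q PUBLIC       00000123456";
            System.out.println(parse(rec)); // balance = 1234.56
        }
    }

The difference on a real mainframe is that this layout isn't an application convention: the system itself knows it, the way an RDBMS knows its tables.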
> We did indeed find the oldest computer in government, but it’s not really a computer at all; it’s computer software.
What a complete disappointment! I was hoping to read about some crazy legacy hardware they were still using. Software?! Yeah... no, that does not count as the oldest computer in government.
In general, does software's age alone really give any cause for concern? In my book, software that has survived 56 years has proven itself solid. New software is guaranteed to have bugs.
> I’m starting to have a lot of questions about this tax software and the management around it.
This does seem like muck-raking by a non-expert with little to no real context... Is that what Muckrock does? This software was audited, presumably by experts, and people looking directly at it acknowledge it's hard to maintain and still conclude there's no immediate need to replace it. What evidence is there that this conclusion is in dire need of re-examination?
The article mentions that the oldest actual computer in use by the government handles nuclear strike operations, and that it's actually being upgraded sometime next year. I find this...concerning? It says they're upgrading terminals and data storage, which seems pretty harmless, but I feel like in terms of nuke-launching tech, "if it ain't broke, don't fix it" is a pretty good philosophy to live by.
60 Minutes toured a missile silo a few years ago, showing control computers from the '50s and '60s. It looked just like the computers in TV shows from that era. There appeared to be a few jury-rigged gizmos demonstrating how hard it is to repair such equipment.
I've heard stories of some older Russian aircraft still containing vacuum tubes. And I imagine the oldest US craft, even the old space shuttles, aren't that far ahead.
I'm sure plenty of replacement parts are EOL by decades. Heck, the terminal upgrades alone are likely necessitated by the fact that CRTs are no longer manufactured in quantity.
I was running into this doing requisitions for a precision approach radar that was commissioned in the late 60s. By this point, all of the part suppliers had long since moved on to other things or gone out of business, and the usual avenue became "Order the part, and wait for enough air stations to also need the same part. Then, and only then, will the government commission a custom fabrication job to manufacture the parts needed for an obscene amount of money."
Two results happened. First, we ended up paying $15k for parts that were worth $500 because that's what it costs to set up a fabrication job when you only want 50 circuit boards. Second, we did whatever the hell we could to avoid having to spend $15k on a $500 part, and the radar was therefore held together with gummy bears, duct tape, and black magic. It was a clusterfuck, and we were always putting out fires with that thing, sometimes literally.
Horrifyingly, the military has never been able to create a better PAR, which is why it's been around for so damn long.
I got out in 2014, and last I've heard, they just re-extended the EOL for the radar to 2025. They'll probably be sending out lance corporals with binoculars and walkie-talkies to yell "TOO HIGH" and "TOO LOW" by then.
It's not difficult to find those systems in the government and some big companies. Adding logic to source code that was last modified in 1985 and recompiling it 30 years later is magical.
It's all about cost and risk. These are bespoke systems that were written decades ago -- the entire operations of the agencies were organized around the abilities of the system.
In the '60s there was a clear ROI: the money spent zapped an army of clerks. Standing up a new system is high cost, high risk, and brings little measurable benefit.
Sounds like the cheapest way of upgrading hardware for the IRS is to write an emulator that can understand IMF assembler and stick it in a virtualised environment. Given that the hardware it runs on is underpowered by modern standards, a medium-sized server could handle what it does at a fraction of the cost.
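The core of such an emulator is just a fetch-decode-execute loop. A toy sketch in Java with a made-up three-opcode ISA (real System/360-family ALC has hundreds of far hairier instructions), so treat this purely as the shape of the thing:

    public class ToyEmulator {
        // Invented opcodes, purely for illustration.
        static final int LOAD = 0, ADD = 1, HALT = 2;

        public static void main(String[] args) {
            int[] program = {LOAD, 40, ADD, 2, HALT}; // acc = 40; acc += 2; stop
            int acc = 0, pc = 0;
            boolean running = true;
            while (running) {                 // classic fetch-decode-execute loop
                int op = program[pc++];
                if (op == LOAD)      acc = program[pc++];
                else if (op == ADD)  acc += program[pc++];
                else if (op == HALT) running = false;
                else throw new IllegalStateException("bad opcode " + op);
            }
            System.out.println("acc = " + acc); // prints acc = 42
        }
    }

The hard part isn't the loop; it's faithfully reproducing decades of instruction, I/O, and timing quirks that the old code silently depends on.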
> The System z family maintains full backward compatibility. In effect, current systems are the direct, lineal descendants of System/360, announced in 1964, and the System/370 from the 1970s. Many applications written for these systems can still run unmodified on the newest System z over five decades later
To be able to hire cheaper talent. There are a million college graduates who can churn out good-enough Java out there. Good Assembler programmers who are willing to learn the quirks of an obscure platform might be able to find higher-paying jobs.
No, it is the other way around and it surely has a business case.
Instead of throwing out years of investment in the mainframe hardware because juniors cannot grasp old technologies or don't feel like using them, bring them onto the mainframe while keeping the investment in the existing, working stack.
Instead of writing a REST API in RPG, which knows nothing about Web APIs, to front the customer support application, use JEE/Spring to provide a SOAP/REST API.
Just a possible business case, but one that is actually used in production for Java applications on IBM mainframes.
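A minimal sketch of that facade, assuming spring-boot-starter-web on the classpath; the CustomerRecord type and the call down into the existing RPG program are placeholders:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.*;

    @SpringBootApplication
    @RestController
    public class CustomerFacade {
        // Placeholder DTO; in practice it would mirror the RPG program's output.
        record CustomerRecord(String id, String name) {}

        @GetMapping("/customers/{id}")
        public CustomerRecord get(@PathVariable String id) {
            // Here you'd call down into the existing RPG/COBOL program
            // instead of rewriting it.
            return new CustomerRecord(id, "stubbed from the mainframe");
        }

        public static void main(String[] args) {
            SpringApplication.run(CustomerFacade.class, args);
        }
    }

The old program keeps doing the actual work; the JVM layer only translates between HTTP/JSON and whatever the mainframe side already speaks.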
(So this isn't a garbage comment: I imagine somewhere deep in Cheyenne Mountain and elsewhere there are some even older milspec machines that have been running archaic code since mainframes were first invented.)
The DOD one ("Strategic Automated Command and Control System") claims to be 53 years old and run on a Series/1. But Series/1 first came out in 1976, 40 years ago. Anyone understand the discrepancy?
IBM mainframes have virtualized their predecessor hardware for a long time for backwards compatibility with locked in enterprise customers. It might be something 53 years old running on hardware from 1977.
In the short term, there are always a lot more risks in scrapping an old system and spinning up a new one, especially in large, complex organizations. However, in the long term you're really better off with modern systems.
They gave him short shrift, and he was a Berkeley man who was more at home at a Grateful Dead concert than tracking down overseas hackers.
He only discovered the issue because he was trying to teach himself how to program, and decided to chase down a discrepancy on the timeshared server that had its accounts out of balance by about 75 cents.
Probably why the core IRS systems haven't been hacked yet: few hackers know IBM 360, COBOL, and 9-track tapes.
New peripheral systems like the transcript request system have been hacked.
If governments insisted that all hardware come with a spec that can run on a universal virtual machine, then today I could probably run the old machine on my phone. But alas, thanks again to an unvisionary government, we're stuck with old hardware.
IBM i software was compiled to a platform-independent binary and converted into machine-specific code by the operating system. OK, it's a proprietary spec, far from being open, etc., but the tech is there and is even better than a VM (in certain respects).
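A loose sketch of that translate-once idea in Java, not the actual TIMI mechanism, just the shape of it: programs ship in a portable form, and the system generates and caches machine-specific code on first use, so new hardware only means a new translation:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.IntBinaryOperator;

    public class TranslateOnce {
        // The portable program names abstract operations; "translation" to
        // native code happens once per machine and is cached.
        static final Map<String, IntBinaryOperator> cache = new HashMap<>();

        static IntBinaryOperator load(String op) {
            return cache.computeIfAbsent(op, o -> translate(o));
        }

        static IntBinaryOperator translate(String op) {
            // Stand-in for generating machine-specific code from the portable form.
            if (op.equals("ADD")) return (a, b) -> a + b;
            if (op.equals("MUL")) return (a, b) -> a * b;
            throw new IllegalArgumentException("unknown op " + op);
        }

        public static void main(String[] args) {
            System.out.println(load("ADD").applyAsInt(40, 2)); // translated, then cached
            System.out.println(load("MUL").applyAsInt(6, 7));  // 42 again
        }
    }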