Hacker News
America’s Cities Are Running on Software from the ’80s (bloombergquint.com)
228 points by eplanit on Feb 28, 2019 | 335 comments

A few years ago I had to update some FORTRAN code written in the 1980s that did bond valuations and some other financial stuff. It hadn't been touched since it was written and was now running out of space because the number of bonds kept increasing year after year.

I'd never done anything with FORTRAN before, so I spent a couple days reading about it and looking into how it works, fearful that I'd muck everything up trying to fix it.

I open the code and find that it was the most elegantly written software you could wish for, with fantastic comments and structure. I changed two numbers and was done, buying us another 30 years or so of rock-solid service.

A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code. In reality there were people back then who did a good job and there were people who didn't. They just used different tools. I can't wait for the current systems to become 30 years old. Making changes to microservices architectures written in different languages over multiple servers will be a lot of fun.

Necessity is the mother of invention and memory/resource scarcity is the mother of quality code

A lot of junk and spaghetti code gets written (or copied/pasted together) today because there are few hardware-based constraints in much of modern app and system dev

I work with a lot of embedded engineers and have the utmost respect for those that work on safety-critical, legacy systems

I don't believe that hardware constraints have anything to do with code quality. Running your code on a microcontroller will require your algorithms to fit in a smaller RAM footprint and may come with timing requirements, but that does not dictate that you write quality code to do so. You can solve those problems efficiently with thousands of lines of uncommented assembly, but that does not mean it is quality code.

I have worked for a significant time in my career as both an embedded engineer as well as a backend engineer, and in general I find that the backend code is way easier to read, maintain, and extend. Embedded code is seldom properly tested and most of it is written in C where people can abuse a library's contracts or the preprocessor. It is not uncommon to find functions that are over a thousand lines in the embedded world. Compare this to Rails where there are a ton of standards, short and succinct functions, and good support for testing.

I guess it depends on your definition of "quality code". If you mean code that is dependable and will do one thing well on one platform for the rest of time, then embedded code could be considered high quality. I would debate that these items have more to do with the binary than the code though. If you mean code that conveys how a program works well to other programmers, is under test, and follows some standard structure, I would pick modern app development as typically being much higher quality.

I don't know where this got a downvote from because it's correct - it's quite hard to build sensible automatic integration tests for embedded code unless you have the luxury of a full system emulation.

The only thing that resource constraint forces is less code, and especially fewer dependencies, because you run out of space.

The Toyota "unintended acceleration" court case was a flagship example of bad embedded code, that we rarely get to see.

The Toyota computer is like a supercomputer compared to old mainframes. IIRC the Toyota program had 10k global variables updated by spaghetti C code. So you had a lot of space, enough to shoot yourself in the foot with C code. Old mainframe code, in contrast, often operated on records in a batch fashion, with a pretty clear input and output.

Of course, the antipattern in enterprise code comparable to thousands of global variables in C is to have thousands of columns across hundreds, or even thousands, of database tables, all intertwined and used only God knows where.

AFAIR, they were also autogenning code from a Matlab model of the engine, and that's where the 10k-globals figure comes from. Like, yeah, there's technically C code doing that, but come on.

Interesting, I thought the "unintended acceleration" was actually just floor mats creeping up and holding the accelerator pedal. Gonna have to google some stuff now :-)


It's one of those Rashomon situations where I'm not sure we can ever be entirely sure but it seems to have stopped.

Thank you for linking that. The comments in it share some terrifying stories.

That was Toyota's explanation for what was happening. Turned out to be a lie or a hasty conclusion, I don't remember which.

> I don't believe that hardware constraints have anything to do with code quality.

When the punishment for writing code that doesn't work is a long wait, you learn to write small pieces and test individually.

> memory/resource scarcity is the mother of quality code

My experience with legacy code has been the exact opposite of this. I've seen some really well-written and documented legacy code, but almost never in resource- or performance-sensitive areas.

People inevitably seem to accept trade-offs that sacrifice readability and maintainability for efficiency.

Well, I recall writing Fortran 77 for a billing system and we had nicely structured code (our team leader was a genius), but we did do some specific stuff, e.g. reading in data in chunks the size of a disk sector.
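
That sector-chunk pattern still translates directly; here's a minimal sketch in Python (the 512-byte size is an assumption, not from the original system; real code would match the device's actual block size):

```python
SECTOR_SIZE = 512  # assumed block size; match the actual device/sector size

def read_in_chunks(path, chunk_size=SECTOR_SIZE):
    """Yield a file's contents in fixed-size chunks, like the old
    billing code's sector-sized reads."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```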

And it was perfectly normal to do stuff like this.

"resource scarcity is the mother of quality code"

I don't know, your mileage may vary with this one. I have had to make code less readable to make it more efficient more often than I've been forced to find a more elegant way because the clever hack was too slow.

I wish that were true. I've worked on a lot of old FORTRAN code bases (including a couple that started life as punch cards) and it just doesn't bear out. Remember, FORTRAN comes from an era where people were concerned about the overhead of a function call. Programmers also worried about the memory overhead of comments in their editor! Most FORTRAN is a mess of gotos, common blocks, implicit types, and other relics that should be left to the past. This code doesn't age well either: optimizers don't do well with gotos, and the lack of function calls means there are copy-pasted snippets of what my coworkers and I called "fassembly" that were untranslatable. One time I spent days translating one of these functions only to realize it was an FFT. I swapped it out for one from a library that had been optimized for decades; the code was easier to read and 100x faster. I'm not even going to discuss any code base that was unfortunate enough to need any kind of string manipulation. To cap it off, FORTRAN 77 doesn't even allow variable names longer than six characters.
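
For anyone curious what those hand-unrolled FFT functions boil down to: the textbook O(n^2) DFT, sketched in Python below, computes the same values a library FFT (e.g. numpy.fft or FFTW) gets in O(n log n), which is roughly where that 100x comes from. This is an illustration, not the code from the story:

```python
import cmath

def naive_dft(x):
    """Textbook O(n^2) discrete Fourier transform.
    A library FFT computes the same values in O(n log n)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]
```

The DFT of a unit impulse is flat: `naive_dft([1, 0, 0, 0])` is four values all equal to 1 (up to floating-point noise).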

That all said, modern Fortran is an entirely pleasant language. Just stay away from the legacy.

>A lot of junk and spaghetti code gets written (or copied/pasted together) today because there are few hardware-based constraints in much of modern app and system dev

Maybe in the general case, but there are so many exceptions they're almost the rule. Apps like JIRA, and Electron apps generally, constantly have slowdowns from bloat that pretty clearly swamp the hardware and would be avoided with better coding.

Also, I see classic games that are emulated or transformed on "fast" hardware that nevertheless have input lag. I've seen this on the Wii console for SNES games, and on an in-flight emulator that ran Pac-Man. Plus, TVs that upscale make Dance Dance Revolution have too much lag to be playable.

> I can't wait for the current systems to become 30 years old. Making changes to microservices architectures written in different languages over multiple servers will be a lot of fun.

This will be bad even if they are all written in the same language, if they were not also all written in the same time frame.

With ancient FORTRAN systems, generally you need to learn the version of FORTRAN it was written in (FORTRAN 77, Fortran 90, etc) and enough about the problem domain to understand what they were trying to do.

With the kind of systems we are making now, the hapless maintainer will not only have to learn the specific ancient language version and the problem domain, but also whatever now-forgotten framework was in fashion at the time the thing was written.

The stereotype isn't that the code was bad -- it's around the UI. (There are also stereotypes around modern UI being too minimalistic. Obviously these things are not so simple.)

There's also the idea NOT that it was bad when it was written 30 years ago, but that 30 years of patches by people who didn't write the original code mean it's no longer clean code.

Early in my career I worked on legacy i-Series systems. One of the things we were taught was to never write code without putting comments in the header. The comments needed to specify dates, the ticket number for tracking, and the rationale behind the change. Every changed line had to be inside a block with the same ticket number, making it easy to simply Ctrl+F through the code.

Now I work on newer software and I frequently see people making changes without spelling out the rationale, dates, author, etc. It makes it harder to track what has happened and what exactly the code does. When I tell people to comment correctly, they often reply with how much time can be saved by just writing clean code.

Nowadays we keep that information in the VCS. git blame gives you more reliable info than careful comments (assuming that the rationale is recorded in the commit message).

Integrate with a ticketing system, force commits to match a pattern, and you can ensure the rationale is tied to your ticketing system (i.e., commits/merges have to be prefaced with FOO-####)
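
A minimal sketch of that kind of check in Python (the `FOO-1234` ticket-id convention is an assumption; adjust the pattern to your tracker, and in a real setup this would run as a commit-msg hook that rejects non-matching messages):

```python
import re

# Assumed convention: every commit message starts with a ticket id like FOO-1234.
TICKET_RE = re.compile(r"^[A-Z]+-\d+\b")

def has_ticket(message):
    """True if the commit message is prefixed with a ticket id."""
    return bool(TICKET_RE.match(message))
```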

This is generally great (and I do it on projects I'm confident won't be moving any time in their useful life), but if you ever switch VCS/ticket systems, it gets messy quick.

Ever moved from Github Issues to Jira/Trello or vice-versa? How about Bitbucket to Github (as I did for my current company)? That's a whole lot of context potentially being lost, either in tickets or PR discussions.

How much of that lost context would have ever been used anyway is up for debate, of course. In my experience the answer tends to be "rarely, but sometimes".

This is why I discourage Free Software projects from using hosted issue trackers entirely, especially ones that are proprietary (such as GitHub). Mailing lists are the best solution for longevity and meaningful discussion. If you really need to, you can organize things in a file that is in the repository alongside the source code (Org mode is great for this), which has many significant advantages (and downsides, like not being point-click for PMs, but if your Free Software project has PMs, you have bigger problems that software cannot solve).

Often that sometimes is a life saver though...

This happens on github/gitlab too, with pretty much all open source projects. It's definitely a cause for concern eventually, because the context for the problem and the rationale for decisions is now something that you have to maintain apart from just maintaining the codebase. Right now github etc do a good job of managing that for you, but the long term handling of this context definitely has scope to improve.

Obviously this system will be obsolete in a few years too and nobody will know how to find stuff in it :-)

Well, you know, at my last job I worked on a system that was a lot of 4GL under the hood, with a Java application that sat on top calling the legacy 4GL code. The 4GL code had comments in the header with dates and ticket numbers. The dates started in the late 80s and 90s. The ticket system was from four ticket systems ago, so that was useless. The comments had become wrong over 30 years of changes. The rationale for something in 1989 made a lot less sense in 2015. The author was now the boss for the whole office and hadn't touched code in 25 years, so I couldn't go ask him about a function he wrote in 1989.

Entropy. It's not our friend. It's why we can't have nice things. Code bases suffer from bit-rot over time.

I think all of the things you mention have value, but all of that information is located in git (and in my setup, displayed inline). We keep ticketing, author, rationale, and version changes, and you can walk through the changes with the tracked intentionality by stepping back through version control.

I followed a similar protocol, but then I ran into a situation where good discipline kind of went out the window during "the great outsourcing". So when recently updated code had problems, because my info was the last actually properly recorded in the source code (even though I hadn't touched that code in years), someone might come gunning for me to fix the problems with "my" code. Not fun, especially when I might actually take a look at that code and see how badly the outsourced folks had mangled it!

Build the UI for the target user.

Minimalistic UIs are great if you're trying to make it simpler for outsiders.

If you have people who understand how to use the software, you can actually cram the UI full of everything you need in a single place.

Look, this sounds like you are talking about brutalist design for corporate software, and I can dig that.

What I cannot dig is how old software paints the screen. Touch a new dimension and it recomputes the data and reloads the list, or the entire screen, right then! We never did analytics or studied our user base, so the most popular functions are buried 10 levels deep in a menu that reloads every time and loses your scroll spot! Etc.

If you have spent time in corporate software developed in the Win 95 days (cough Oracle CRM cough) you won't see intentionally brutalist design. You will often see MacGyver-level hacks stacked through the roof, and software that loads so slowly you could reload Gmail 3 times before it finishes.

First generations of browser-based UI did refresh/reload the page a lot. But there was nothing else they could do, really. AJAX wasn't a thing, DOM manipulation wasn't possible.

Older UIs written in Visual Basic, Delphi, Powerbuilder, or even for text-mode interfaces did not do this, because the frameworks supported refreshing only the changed data.

Look at how emacs or vi works over a 1200 baud dial-up connection. Surprise, it does, because it only redraws the parts of the screen that are changed. These problems were solved in the 1970s and 1980s.
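
The trick itself is tiny: keep the previous frame around and only transmit the rows that changed. A toy sketch in Python (illustrative only; real terminal libraries like curses also diff within lines and batch cursor movement, and this sketch assumes both frames have the same number of lines):

```python
def changed_lines(old_screen, new_screen):
    """Return (row, text) pairs for lines that differ between two frames,
    so only those rows need to be redrawn over the slow link."""
    return [(row, new) for row, (old, new)
            in enumerate(zip(old_screen, new_screen)) if old != new]
```

Over a 1200 baud link, sending only the one changed row instead of the whole screen is the entire difference between usable and unusable.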

The web browser has been, for most of its existence, a really bad way to deliver user interfaces. The zero-deploy nature of it was very powerful, though, so people suffered through it. Only in the last few years have browser-based interfaces approached the abilities of native clients.

“you can re-load Gmail 3 times before it finishes.”

After the latest changes to Gmail I have high hopes that it will soon catch up in terms of slowness :)

Most UIs these days seem written for beginners. Easy to get in but they become tedious for experienced people.

Oof. Currently developing software for doctors to use and this feels all too real. The concessions we have to make to cater to their relative inability to adapt would make me hate using this UI every day, but it's what they want.

It's not like I don't get it. I do. Just not 100% of the time.

eg Facebook or Amazon or Outlook or Photoshop. The antithesis of minimalism, features and buttons everywhere.

>Making changes to microservices architectures written in different languages over multiple servers will be a lot of fun.

Don't forget the 10,000 packages you'll need to find because they no longer have a feed available! Those will be rewrites.

> A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

The generation from the 70s and 80s will forever be the best technological generation the world will ever see. We had serious constraints that would need elegant solutions. I remember as a kid looking at Z80 manuals to learn how to save a few bytes so that code would run, or the 640K Microsoft limit that forced the code to be very concise, with every bit thoughtfully used.

Now that the sky is the limit we have cookie cutter solutions built on other pieces that "work" which really are just rewritten code in some new language or framework. This is now the new norm and it will never change.

> We had serious constraints that would need elegant solutions.

In the era of abundant hardware resources, I like to think that this sense of craftsmanship hasn't been abandoned, but rather transmuted into the ability to make things as readable, accessible, and maintainable as possible.

The Redis, Phantom (Thrift proxy), Kafka, and PostgreSQL codebases are things I regularly consult as role models of clean, pragmatic code.

The explosion of technologies since the elegance-by-necessity-of-optimization era does not necessarily mean that the capability of engineers has declined.

As a generation, yes. But I got started in 93/94 and learned C. Got into infosec and reversing, and can do machine code reasonably well these days. Plenty of my peers learned assembly on TI calculators and other places. By the numbers there are probably more competent low-level systems engineers now than there were in the 70s and 80s. As a ratio of programming professionals, we are dwarfed by PHP programmers, etc.

> We had serious constraints that would need elegant solutions

I don't think those are comparable to complexities in IT that exist today.

> I don't think those are comparable to complexities in IT that exist today.

I think the constraints and lack of “here you go” resources many starting programmers dealt with, even for toy applications, out of necessity in the 70s and 80s are better preparation for the attitude necessary for dealing with the complexities in IT today than the learning conditions today.

Which isn't to idealize it: it was also a hard onramp that drove off lots of people who would have done well at many real-world problems where they weren't soloing without support, and who might, with experience, have still developed into great solo practitioners, too. But the people I've encountered who came through that 70s/80s start tend to be, IME, on average more willing to slog through and learn hard stuff in new areas than people who came up through later, easier onramps.

Though that may also be in significant part survivorship bias, as the 70s/80s crew will have had to have stuck around the field longer, usually, and it may be people with that flexibility are more likely to stay in technology past the half-life of whatever was current when they got in.

In business the goal of software is to make money. Elegant solutions do not necessarily increase the bottom line. Increasing productivity at the cost of elegance is a net win in most cases.

And yet I've seen plenty of businesses spend huge amounts of time and effort on "enterprise" abominations in an attempt to build elegant, extensible systems. I've seen at least a dozen instances of cron being re-invented by businesses. I've seen businesses throw money down bottomless holes on proprietary databases. There's a well-known tech company that now owns two bloated Electron-based cross-platform text editors; neither ever had the goal of making money, at least directly.

Business is great at wasting money on software, I just wish they'd waste it in a way that benefited people.

Survivor bias. The fortran that is still running is the elegant stuff, the stuff that was written well enough that nobody has needed to touch it for decades. Just as with "classic" cars, there was plenty of junk out there at the time. What is left is the most resilient, not necessarily representative of the whole.

It is entirely possible that one could make the assumption that if software was written 40 years ago and hasn't yet been rewritten, that it's good enough as-is that it survived the test of time.

Or that it is so horrendous as to be untouchable.

May be horrendous, but still does the job.

> A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

Stupid, no. Crappy code, yes. Part of the reason people wrote crappy code then is that programming techniques and best practices have come a long way since. The other part is that most people writing code then were inexperienced coders.

Of course, plenty of crappy code (most?) is written today, mostly because of inexperienced coders. It's not because they're stupid. After all, who would make a 20 year old a general?

Thinking about it as 'crappy code' is the wrong way to think about it. It's average code. Everything is normally distributed, including the quality of code. On average code quality is average. Some of it at the top tail end of the distribution is beautiful and some of it at the bottom end is an unworkable trash fire.

That hasn't changed in the last few decades and it won't change in the next few decades either.

We've already reached the "fun" point for legacy Rails apps.

I have no issues navigating around in an old rails 1.2 app. The problem is usually the special sauce and anti-patterns that have been added to it over the years. This is a problem regardless of language or framework.

Running a Rails 1.2 app sounds... dangerous. Do you have plans to upgrade it? (At this point it's probably easier to start a fresh Rails 5 app, write some tests, and copy/edit stuff over as needed.)

Thankfully, most of those won't survive for 30 years. Anything that lasts that long without constant maintenance is probably a well-designed system that you won't mind working on.

It seems like with the modern pace of software today, things are practically guaranteed a rewrite within 5 years.

We said the same thing back then too.

A lot of developers using [insert this week's hot framework] think that the people who wrote software 30 days ago were just plain stupid and all wrote crappy code because they were using [insert the hot framework of four weeks back].

It's already "fun" trying to maintain systems that are two years old.

>I can’t wait for the current systems to become 30 years old.

Good luck with them lasting 10 years

That has been the fallacy for a long time. I remember discussions in the 90s when I asked people why we didn't store 4 digits for years and was told "By 2000 this will already have been replaced." I know for sure that this particular system was still used in 2010 and probably even now. Things that work keep getting used.

Yep. Software can last a surprisingly long time. I personally wrote a microservice with an intended lifetime of 6 months. 8 years on, it is still soldiering on, surviving multiple attempts to replace it, with almost zero maintenance.

At least there was some justification for space saving in the 60s, when your PDP-8 gave you 4K of core memory. Probably not unreasonable to expect the code to be obsolete by Y2K either.

In the 90s that use case was not valid anymore but the mindset still persisted.

Very true. I was probably guilty myself - everyone was expected to know how structures aligned, and how to pack, so "being efficient" was often just thought part of writing good code. Hopefully there weren't so many 2 digit dates being coded in the 90s though. :)

It was probably still a valid use case for comms, if not within the application. Code was still being built on X.25 and oh-so-slow modem links.

There was a lot of 2-digit year code written in the early 1990s. I wrote some of it myself. The reason was compatibility with older code/data formats, and also, as mentioned, the notion that "this will all be gone before 2000," or that it would be updated along with everything else if it had to be.
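
The common Y2K retrofit for those 2-digit years was a sliding pivot window rather than rewriting the data format; a sketch in Python (the pivot value of 70 is an assumption; each system picked its own cutoff):

```python
PIVOT = 70  # assumed cutoff: 70..99 -> 1900s, 00..69 -> 2000s

def expand_year(yy):
    """Expand a stored 2-digit year into a 4-digit year using a pivot window."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy
```

It keeps the old storage format working, at the cost of just moving the ambiguity out to whenever the window runs out.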

Same as today. If anything, programming was accessible to fewer people back in the day. What looks like poor design decisions today (fixed length integers) was just a reality they had to deal with.

> and all wrote crappy code

Yet we didn't hear about left-pad until, what, 2016?

Maybe some do, but I look at the past coding era as more of a mystical age where wizards and gurus made things happen out of thin air. Seems pretty impressive to me.

They also got a lot more time to think about more focused problems because many aspects of compute complexity simply weren’t possible

> young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

It's understandable. None of us back then had anywhere near the "copying and pasting javascript from stackexchange" skills that "tech savvy" kids have now.

There's also a disposable culture around the resources involved. A mentor of mine always says, "You can have it, but if you break it then you get to keep both pieces." Without value in the broken pieces, no one needs to clean up their own messes anymore, so there's no need to move carefully or fail gracefully. If you trash your dev env, or even your workstation, it's trivial to roll back or redeploy and get back to work. Management is in on it too: rather than hire more careful devs, it's easier to just move the explosions farther away from prod and limit the blast radius with CI/CD.

A lot of that stuff isn't gonna last 30 years.

That's what we thought 30 years ago and, yet, here we are.

I question your use of the word fun

This is totally right.

Old things are typically built to last. Because the cost of redoing them was prohibitively expensive.

If it were shit craftsmanship, it wouldn't still be running.


- Roman architecture

- Browning designed firearms

- Savile row tailoring

- Old ships

- anything your great-grandfather owned

This can be rephrased as "things that lasted a long time were built to last a long time", which is an entirely unsurprising rephrasing of survivor bias.

The fact of the matter is that some things have lasted a long time. Isn't it worth asking _why_ they lasted so long, and conversely, why other things do not? All the survivorship bias in the world doesn't negate the fact that some things are made to a higher quality than others.

Well, the Romans had a custom of charging less rent for many-story buildings, because they were widely recognized to be susceptible to collapse. I don't think they're really high-quality architects.

I had a look at some Roman houses in Ostia Antica, and the really striking thing is how little the techniques of architecture have changed. They could do some impressive architecture from time to time, but on the whole, they built stuff about the same as your average cowboy builder would today, minus a bunch of legal requirements (fire proofing, etc).

Well, old things still around today were usually built to last, which is rather natural. (There are rare exceptions; the Eiffel Tower was meant to stand for only 20 years.)

We do have a bit of a sample bias. We only see the things which have survived. But planned obsolescence was the norm.

Technology advanced too slowly for a producer to assume they'd have a new model to sell in the near future.

Exactly, which is why we should think twice about replacing them, just for the sake of having something newer. Newer isn't always better.

Old wooden ships had a short life BTW

Yeah, old ships aren't as good an example as I thought after thinking about it a bit. I was only thinking of the ~100-year-old dinghy I have that seems nearly indestructible with little maintenance.

That's why well-made, well-preserved old wooden ships survived until now.

A crappily made whatever will break the moment you stop maintaining it.

You've obviously never dealt with a wooden boat, much less one immersed in salt water. Maintaining those things is a never-ending chore of hard work. The best-made wooden boats begin to rot the moment you stop maintaining them.

I bet a large part of that is that there is a good chance someone writing FORTRAN in the '80s would work out their algorithms, how they would code those algorithms, and how they would structure their code on paper, and have pretty much the whole thing worked out before they ever sat down at a terminal to actually enter code.

When actually entering code, their focus can then be on the fairly straightforward translation of their notes specifically into FORTRAN, and on adding good comments for those who deal with the code later. Most of their creative energy at this stage can go into those comments.

Part of this is that it was still common in the '80s to either not have interactive access to the computer the code was for, or for such access to only be via shared terminals in a computing center away from your office. You needed to arrange things so that when you actually got to a terminal, you were efficient.

Since then, we've almost always had access to our own private computers that are powerful enough to run at least test versions of whatever we are working on, even if it is ultimately meant for some big server somewhere else. Now we can sit down and start coding while still designing the program in our heads. And so we do, even if it would sometimes be better to separate design and coding.

I think this also might have something to do with why BASIC was good as a teaching language, as was noted recently in some other HN discussions. BASIC, especially with some of the limits put on it to fit it in some smaller computers, was constraining and painful enough to deal with that people quickly learned that trying to do that while also trying at the same time to figure out the design and algorithms they needed was way too hard. They naturally learned to separate figuring out how to do something from coding it.

In the 1980s, for FORTRAN, coding your program on punch cards was still not that uncommon.

I wonder if they were made to do leetcode algorithms on a chalkboard in FORTRAN 77 back in the day during interviews.

Back then, you likely didn't have anything resembling a PC or a whiteboard (probably available, but less common than today), so it was more likely a blackboard with chalk, or pen or pencil on paper, before working on your punch cards. I started my internship at a Fortune 500 in 2001, and they had just retired their last card reader for the mainframe. They still had plenty of blank/partially used cards that made for great notecards.

I'm honestly terrified that America's nuclear missile silos are going to get "upgraded". Let them keep their 8 inch floppies and text interfaces, for the love of god. It works fine the way it is.

It will be fine they will just download the nuke app off of the app store and sign into Facebook for authentication.

I'm sure it will only need 70 or so NPM modules.

Only 70? That's a stretch. This is 2019, after all, not 2015. Those 70 dependencies are only the first-line dependencies. Their dependencies will bring in another thousand.

Hey, when you need to pad a string, what else are you supposed to do?
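
For the record, the whole of left-pad fits in a call most standard libraries have shipped for decades; a Python sketch:

```python
def left_pad(s, width, fill=" "):
    """Pad s on the left to the given width -- i.e. str.rjust,
    which the standard library has had all along."""
    return s.rjust(width, fill)
```

`left_pad("5", 3, "0")` gives `"005"`; no package registry required.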

Hardware fails over time and gets increasingly less reliable and harder to source. These types of upgrades are mostly about maintaining existing functionality on modern hardware.

In the case of nuclear missiles, this is entirely a good thing. If both sides can just maintain Potemkin nuclear arsenals we'll achieve gradual disarmament through obsolescence.

We might hope that at some point in the early 2100s, somebody will notice that none of the nuclear missiles work anymore, and with any luck, the engineering knowledge to create new ones will have been lost.

Or, preferably, simply seen as a childish use of rockets and nuclear energy while the tech is being used productively elsewhere.


I have not thought of FORTRAN as legacy, because whenever I need to look at very old FORTRAN code I am looking at mathematical structures such as matrices and vectors. At that point I am not thinking of FORTRAN anymore but of another language (linear algebra), which is a few hundred years old but still so crisp in the world of numerical computing.

HOWEVER: I cannot say the same about ABAP or Java that initially began to take shape in an enterprise system 20 years back. Then I have to go through reams of code that is truly legacy: hard to navigate through years of modifications, heavily dependent on local contexts, personal preferences, etc.

I've come across code on IBM System i (AS/400) with copyrights from the late '70s. The code still works, plenty of businesses run on it, and it was written quite nicely.

The core code base dates back to the System/38, which was released in 1979. You might expect to find a whole series of older to newer dates in copyright notices for the AS/400 (which is not its correct name now, and hasn't been for ages) code base, but the copyright laws have changed a few times over recent decades and I don't know what the current requirements are.

Did you leave a comment for the guy who has to fix it in 30 years?

Hopefully there's even odds it will be a woman by then.

Perhaps we go full circle and have predominantly female programmers by then.

A full circle puts us back where we started?

Programming used to be a "woman's job" as secretaries.

yeah, but being a programmer had a different connotation in that sense, just like how you wouldn't call a person who xeroxed a book an "author".

No, a lot of those women legitimately did what we'd call programming.

It was soft-ware after all.

Downvoters don't seem to understand this point, that was the idea in the old days. That hardware was for men.

In the 1840s when programmable computing devices were first manufactured, 100% of programmers were women. She died about 10 years later.

I think one Mr Charles Babbage might disagree with you on that. Also I wouldn't say that any were actually manufactured. The first fully working model of the difference engine was created in the 1990s and as far as I know there never was a working analytical engine.

Looks like some folks missed my point, others got it 100%. I am in favor of more diversity in software builders, I practice what I preach


Bit of a strong reaction to a comment that began with the words "Hopefully"

Love this story. How much did you charge for changing those two numbers?

Hopefully at least a week for all the research.

I'm salaried, but it sure bought me a lot of goodwill.

God bless that coder, who probably received nothing compared to the aggravation they saved.

I hope to god you oversold yourself

> Assessors are prone to make mistakes when using the vintage software because it can’t display all the basic information for a given property on one screen.

Well they're in for a surprise if they think modern user experience designers would fix that.

I used to support a few assessors offices.

The software is pretty simple, and mistakes were almost always process problems associated with complex exemptions and classification.

The other set of problems were related to small market sample size for some corner cases. You can see this at play with Zillow.

Software is used as a boogeyman for leadership and process failure. A clipper system from 1985 is a great solution. The problem with it is that it’s highly likely that the needs of the business have changed over 30 years, and the software has not. That just reflects the lack of capital investment and process improvement typical in municipal government.

It’s easier to say “OMG, Clipper from 1980!” Because that is better than “we are a bunch of idiots”.

In the town that I grew up in, the process prior to 1980s computerization was a 100 year old paper process that was essentially the same as the computerized process — maybe better as the internal controls were better. Some of the properties in that town had deeds dating back to the Dutch colonial period, and in many ways the 1650 process was similar to the 2019 process, except they used percentage of rents and a levy of crops based on market value.

"Software is used as a boogeyman for leadership and process failure."

In my business systems consulting practice, I think I have seen exactly this issue more than almost any other issue I've encountered... across industries and business sizes.

Software gives you that convenient, opaque, ill-understood rug you can sweep any old systemic (in the non-technological sense) problem under. And it's probably so common because it works. The people trying to get to the root of organizational problems more often than not don't have sufficient understanding of what role technology plays in their business to challenge the people blaming the system... so many dollars and so much time get dedicated to solving the wrong problems this way....

To be fair, most, if not all, business systems are terrible, but they aren't usually the core problem by the time a company gets around to bringing someone like me in to "fix/replace" the systems. Makes my job a little harder than it should be, to be honest.

Software is used as a boogeyman for leadership and process failure.

Software is often used as an attempt by one group in an organization to impose control, often by imposing process, on another group. Software and software salespeople are often pawns or MacGuffins in an internal power struggle.

CA Clipper, FoxPro.... those systems were awesome.

My first part time job while back in college (1985) was doing dBase II work. Then dBase III+, Clipper 86, Clipper 87, ...

It sucked not having transactions and referential integrity, but I liked not having to second guess a query planner :-)

Systems didn’t scale past small workgroup LANs very well, though.

As a Foxpro 2.6 developer in my earlier days, I really miss those days.

Geez. This is the truth. Applying A/B-tested mobile design fads from consumer apps to data-driven, for-professional-use software is leading us in a very bad direction.

Replace half of the labeled buttons with inscrutable generic hieroglyphs and hide the other half under a hamburger menu!

<Aliens Guy>User Experience</Aliens Guy>

Very true. Modern web UIs have some good ideas and some bad ideas, but fundamentally they are designed for people browsing stuff and optimized for conversion and engagement. At work, people do work and you need to optimize for efficiency and reducing burnout. A lot of people don't get this and blindly copy designs. Yay, your tool now looks like Amazon! ...which people leisurely use two hours a week, while your tool is being used 9 hours a day, in a crunch.

Since I work on the AS/400 (iSeries), what makes the whole story ludicrous is that for nearly a decade, if not more, it has been relatively simple to web-face your applications. There are vendors who can do it for you, but the tools are pretty straightforward. Hell, even the zSeries and AIX-based solutions have web-facing products that make supporting new interfaces a breeze while keeping decades-old, proven business logic in the background.

However, as you mentioned, a modern user experience isn't the be-all and end-all. We even did a migration from an MS Access database application to a new platform where it was required to be a 100% look-alike. The whole move was done to modernize that application, yet those doing the work were handcuffed on day one.

One issue holding back upgrades in most cities is that their IT departments tend to be a real mess, with one or two SMEs per system/platform that isn't PC-related, and even then it's all maintenance mode anyway because it just works.

However, the real issue is likely that infrastructure maintenance (which includes IT, along with other unsexy items like sewer and road repair) falls by the wayside in favor of ribbon-cutting opportunities for new buildings, roads, and bridges. Then throw in the incredible debt for employee benefits and pensions most cities carry, and it's no wonder so much falls to the wayside.

Did some work back in the early 90s at a place that sold tools to port RPG code from S/36 and AS/400 to Unix and PC type environments.

RPG was not my cup of tea, but the results on the screen looked much like the XBase stuff on PCs I was doing the previous half decade, so I guess people were happy to schlep their working code onto newer, cheaper machines.

They could if they understood their users. A lot of UX design you see today is designed for new users, as opposed to, say, expert users like assessors who use the same software all day, every day. Those are different UX goals. A good UX designer understands the user first.

I've found some great UX designers who could fix this. They complain that they're constantly at war with designers, who think things like text labels are "ugly".

Do you have some examples of modern UIs made for work instead of looks?

Excel, Outlook, FreshBooks, Google Keep (yes, I'm surprised I put a Google product in this list too), Repl.it, Forest Admin, HeidiSQL, Photoshop, Healthcare.gov

The Adobe suite in general always astonishes me for how effective its UX philosophy is. It’s slightly intimidating for novices but still usable, and as you get more proficient over and over again you discover that something was built to help you work faster.

I would vouch for IntelliJ. It's clean, customizable and nothing is hidden away

I wouldn't say it's very effective though. I end up looking for things -way- too often in it, as there is just -way- too much to it. I'm fairly experienced with it, having used it off and on for 3-4 years now (any time I'm using a language that isn't miserable I'll just use Sublime), but I still find myself trying to figure out how to do certain things. And starting out it was terrible; yes, plenty of auto-magic things that make Java suck less, but even just a basic 'find any occurrences of this line of text in the project' required multiple tries to figure out (apparently it's not "Find...", "Find Next/Move to Next Occurrence", nor "Find Usages", but "Find in Path". Which is not at all what I would have expected when it's not a class I'm searching for).

You haven't used many IDEs. This is standard behavior. "find next" or "find usages" is for the current editor and you should never expect it to search the entire project. Searching files recursively is usually called "find in path".

I HADN'T used many IDEs. I -did- say 'starting out'.

Regardless, defending it as standard behavior means you're arguing it has -average- UX. The OP was in favor of it having -good- UX (well, 'clean, customizable and nothing is hidden away', and I was just pointing out that doesn't necessarily mean it's good).

I was trying to point out that it's already logical and sensible, and what you think it should be is nonsense.

You seem to have a lot of trouble with past vs present tenses. What I think it should be is not something I ever said. I listed out what options seemed (past tense) more likely to me, given the list in IntelliJ.

Yeah +1 for the whole suite of IntelliJ IDEs. The UI is predictable and there is a fantastic VIM plugin (which is a hard requirement for me to use an IDE over just a terminal)

Completely changing the interface has its own problems, in my opinion.

People that have years of experience using the older interface are going to have a tough time making the transition.

Training is usually lacking, and a newer UI does not guarantee that mistakes won't be made during the learning curve.

One thing I've often heard is that the old software is oftentimes a DOS/ncurses-style keyboard-driven UI, while the new is a Windows Forms app or web app: often completely incompatible with the existing flows, lacking any keyboard interaction, and, even with optimization and training, not as fast to operate as the old application.

If most UX designers took the time to listen and understand people's needs, then this wouldn't be such an issue. But in my experience, there is a culture of hubris in the UX community.

I doubt most UX designers have ever even used a curses based interface. They are just disgusted by seeing so much monospace text, and are compelled to tear it down and replace it with something they are comfortable with (And usually the only thing they know how to use).

I had to clean up after some UX monkeys a few years ago, who "upgraded" a warehouse inventory system. The warehouse used 30 year old embedded MSDOS based portable Symbol terminals and a SCO UNIX server in the back end. They also used Wyse terminals in the office for the admin interface and data entry. Everything booted in seconds, and the portable terminal/scanners lasted for days, and never had a problem.

They decided to replace it with React Native on Android touch screen terminals (this was a refrigerated food warehouse). The hardware couldn't go even a whole day, and the touchscreens were not at all appropriate. The software was slow and unreliable, and had led to costly mistakes.

Speaking to the designer, he seemed far more concerned about how it looked than about adding back essential features. He kept joking about how backward things had been, and how heroic he was for helping these simpletons see the light. His only justification for several things was that it was "trending" and he didn't want his client to be "left behind".

They also started by gutting essential features, and adding them back on a rolling basis, when they got around to it.

In my opinion, a big part of the problem is the culture of UX. It seems to have amplified the stereotypical arrogance that people sometimes associate with IT people, and combined it with smug fashionistas.

With hindsight, a "UX designer" is probably not the best choice to lead a project building a warehouse inventory system.

Sorry that you worked with a bad UX designer. Do you know if they did any sort of user research at all, or did they jump straight to making it look pretty?

That's the thing, this is what it is almost always like. Anyone who calls themselves a UX designer and is under 25 has just been insufferable.

And they just assume that older technology must be inferior, because it is old. They use obsolete as a synonym for old.

In the span of a year, I had UX consultants tell me that password masking is obsolete, and that there should be one field for passwords, no wait, two fields but masked again. Decisions are not really based on logic, they are based on fashion trends, and intense fear of not being hip.

Maybe in some way this makes sense in the modern app ecosystem, that is possible. But it should be relegated to crummy listicle apps. These types of people should be kept well away from any software that does anything important.

As for the warehouse I mentioned, Android and React Native are not the right technologies. But maybe this firm didn't want to tell the client that they should go somewhere else.

> and is under 25 has just been insufferable.

I think this is a key point. Beyond the fact that their brain is still developing[1], most people under 25 (or with only 1-3 years in tech) simply don't have the breadth and depth of experience necessary to build an appreciation for reliability, much less the ability to identify reliable components or design reliable solutions for various environments.

[1] https://mentalhealthdaily.com/2015/02/18/at-what-age-is-the-...

"Nobody likes you when you're 23" ~Blink-182

Of course. I think the problem though is the culture surrounding UX that empowers unqualified people to make messes.

To be fair, this culture is by no means unique to UX. I've seen unqualified people make messes in many different tech disciplines: coding, operations, project management...

Yes, but I think that UX is a particularly toxic vector for unqualified people. UX people are often good at dazzling upper management. What they do is vague enough that it can be hard to nail down a failure condition. Everyone wants the newest and best.

I've seen CEOs give the reins of their app to some UX person, because they want to impress the hip young UX expert, and they know just enough tech to appear to be a wizard.

Man, it sounds like you've had bad experiences with UX people. I guess that's what happens when UX is the new craze, and everybody and their mom wants to jump into it.

Having thought about it, I think it's more of a Brooklyn startup thing.

I probably say this on a weekly basis but...

Software is art, fashion and politics.

It's so much faster when you're not using the mouse. Move up to 'Transactions', then 'Customers', then 'Payments', then scroll down to which one you want to see.

Or you can do '5, 3, 12, <customer number>'

Try changing the workflow of the 60-year-old lady in accounts payable who's been using that for the last 25 years; she will never bring you brownies again.
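The numeric-shortcut navigation above can be sketched in a few lines. This is a hypothetical Python example with invented menu names, not the actual system being described:

```python
# Sketch of a keyboard-driven menu tree where a practiced user keys in a
# full path ("5, 3, 12, <customer number>") instead of mousing through
# screens. Menu codes and names here are invented for illustration.

MENU = {
    "5": {"name": "Transactions", "sub": {
        "3": {"name": "Customers", "sub": {
            "12": {"name": "Payments"},
        }},
    }},
}

def resolve(keys):
    """Follow a sequence of menu codes; return the breadcrumb of screen names."""
    node, path = {"sub": MENU}, []
    for key in keys:
        node = node["sub"][key]  # raises KeyError on an invalid code
        path.append(node["name"])
    return path

# An expert can enter the whole path at once, no screen redraws needed:
print(" > ".join(resolve(["5", "3", "12"])))  # Transactions > Customers > Payments
```

The point is that the entire path is resolvable from a buffered keystroke sequence, which is what makes typeahead on these systems work.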

Not to mention no fat web client interacting with a server in the cloud is as fast as interacting with the server in the basement of your own building.

I did a lot of work with ruggedized Windows Mobile devices for field service workers. After one update, we noticed that data was being entered incorrectly more often than usual and the office workers had to correct it a lot more often.

We couldn’t find the issue for the life of us. Eventually, we sent someone down to the shop to see what was happening. We realized that the field workers weren’t waiting for the screen to update and were tabbing through using muscle memory. I had inadvertently switched a field and the tab order around.

And then there was the time that a new user was complaining about not being able to see a field but all of the old repairmen didn’t complain. Someone hadn’t actually tested the colors on the screen outside in direct sunlight where everyone was working. The new screen was just like the old one and all of the other repairmen just did it by memory.

This is the kind of question that is too rarely asked - "how is it used in the field?"

I'm sure those technicians went through a few new hires who just could not or would not put up with not being able to see what they were inputting.

Thank you, kind sir, for fixing what actually broke.

And "typeahead", where your fingers can be entering data faster than the computer can respond to it. Folks who are used to this tend to really lose their cool if this feature goes away, and I can't really blame them.


Green screen!!!

How about a green screen with burn-in?

Green screens need to make a comeback.

I appreciate the concept of a green screen, but I'd rather have an 80x24 with multi-color than an 80x24 with mono-color. 8-bit color is perfectly acceptable to me; I don't need a lot of color, just enough to use to differentiate categories of data.
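That kind of category coloring is just SGR escape sequences; a minimal sketch, with invented category labels (the 8-bit form `38;5;n` is the standard 256-color foreground code):

```python
# Sketch: differentiating categories of data on an 80x24 terminal using
# standard 8-bit (256-color) ANSI SGR escape sequences.

def colorize(text, color256):
    """Wrap text in an 8-bit ANSI foreground color, then reset attributes."""
    return f"\033[38;5;{color256}m{text}\033[0m"

# Invented example categories mapped to xterm-256 palette indices.
CATEGORY_COLORS = {"overdue": 196, "paid": 40, "pending": 220}  # red, green, yellow

for status in ("overdue", "paid", "pending"):
    print(colorize(status.upper(), CATEGORY_COLORS[status]))
```

A few colors like this are enough to scan a dense 80x24 screen by category without any graphical chrome.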

> People that have years of experience using the older interface are going to have a tough time making the transition.

And the effort spent making the transition may not be worth the reward.

Agreed. Perhaps this software from the '80s should be replaced by software with interface design sensibilities from the '90s.

Modern UX has ways to compartmentalize information on a single screen for when it's needed/not needed.

This assumes that you're not also trying to make it a mobile app. Many of the more (in)famous public redesigns in recent years went for "open" and "spacey" UX because they erroneously tried to make their main design a mobile-first design, when that wasn't their core business need.

(Often, a second application can be built that satisfies mobile needs. Many uses of reddit or gmail, like myself, are more than happy to use a separate mobile app for consuming data.)

In reality though, most UX designers will just try to transform an interface that people have been using for 30 years into a phone interface, and then smugly dismiss any criticism.

"You'll get used to it"

"It's the latest trend, you have to do it."

> Many of the more (in)famous public redesigns in recent years went for "open" and "spacey" UX because they erroneously tried to make their main design a mobile-first design, when that wasn't their core business need.

The Internet Archive redesign is a great example of this.

I'm not sure that I think software is always bad just because it's old.

I recall that, working as a technician years ago, we used a text-user-interface ticket system from the early '90s. While it wasn't the prettiest system, it did its job, and one could navigate it quickly once they had committed the keystrokes to muscle memory.

We thought about replacing it with more modern software but after evaluating a few of the options found none of them to be particularly compelling. If anything, we appreciated what our “old” system could do even more.

Where I lived before their old train ticket machines had an array of buttons[0] where you keyed in the zone code you were going to, adult/child/senior, hit OK, and stuck your magstripe card in. Once you had ridden a couple of times and knew your home zone code, you could buy a ticket in seconds. Of course, for first time users (especially tourists) they were more confusing.

They then replaced them with Windows touchscreen-based machines, where just paging through the stations to find your destination by name took as long as it took to buy your ticket with the old machine. Add to that all your typical touchscreen issues (calibration drift etc)

I've been to around 30 countries worldwide and have never seen machines as fast to use, for repeat users, as the old ones.

[0] https://images.hdsydsvenskan.se/980x588/KoHYjzCi_hdzfPAF4xY_...

This is a good illustration of how differently a UI can be designed for different groups of users, who have different needs.

Your old train ticket machines were apparently great for locals who were familiar with them, and had memorized information like the zone code, but they were surely horrible for anyone else. By contrast, on my last trip to Germany, the Deutsche Bahn ticket machines weren't the fastest things in the world to use, but for tourists from other countries, they worked pretty well, since they offered dozens of different languages you could use the machine in, and had the ability to look up destinations, which is surely a lot easier than trying to figure out some arcane zone code.

In short, an interface that's highly efficient for an extremely experienced user probably isn't going to work well for someone who's never used the system before, and vice-versa. For a train ticket machine, it seems to me you want something that's easy for newcomers; you can't assume everyone is a regular user of the system. However, a lot of these comments are talking about things like warehouse inventory systems; these are things that only a relatively small number of employees will be using, and they're going to be experts or working towards that, and efficiency is the goal, not newbie-friendliness.

> In short, an interface that's highly efficient for an extremely experienced user probably isn't going to work well for someone who's never used the system before, and vice-versa. For a train ticket machine, it seems to me you want something that's easy for newcomers; you can't assume everyone is a regular user of the system.

It's not an either/or. You need to support both sets of users well.

Sure, if 90% of interactions are going to be by newbie users, design the system for them. But for something like a train ticket system, maybe 50% of users may be newbie visitors, yet only 10% of interactions are by them. It doesn't seem like the right call to cater to newbies if it means making the remaining 90% of non-newbie interactions noticeably worse.

Actually, it might. If half the users can't use your highly-efficient system and then have to go bother your workers to figure out how to buy a ticket, that's probably worse than making things slightly easier for your regular visitors.

> Actually, it might. If half the users can't use your highly-efficient system and then have to go bother your workers to figure out how to buy a ticket,

That's the kind of user-hostile decision-making that annoys and infuriates people. It's basically saying it's better to bother a hundred other people than have one person bother you.

In my comment I said that you need to support both sets of public transit users well, and that it's not either/or. In many cases it's a mistake to make the dumbed-down newbie interface the only interface, because that could make your system noticeably worse for most users.

Personally, in cases like this, I think some kind of touch-screen guide to the efficient expert interface is probably the right balance. Make it easier for newbies to scrape by (because that's the best they'll ever be able to do), but give frequent users a path to a better experience rather than holding them back.

> that's probably worse than making things slightly easier for your regular visitors.

I think we're talking about residents, not visitors, in this case.

No, I agree: ideally your UI will support both classes of users. Unfortunately, it usually seems we can only have one or the other, so I'd argue that for train tickets you want to lean towards supporting newbies more. For warehouse inventory, I'd argue that newbies should be utterly disregarded.

>That's the kind of user-hostile decision-making that annoys and infuriates people. It's basically saying it's better to bother a hundred other people than have one person bother you.

I don't understand this comment. If someone wants to buy a train ticket and the system doesn't even support their language, and they can't understand it, literally the only option they have is to seek help. I don't have to be able to read German to use the Deutsche Bahn train-ticket systems; they have options for English, French, Spanish, and even many other lesser-used languages like Polish, Czech, etc. If your city has a lot of international visitors, you either need to have a system that they can use easier (by supporting their language), or you need to hire a bunch of multi-lingual agents to work full-time at all the stations, or have some kind of translating service available for your employees to call to help these people. Guess which one is cheaper?

> I don't understand this comment. If someone wants to buy a train ticket and the system doesn't even support their language

I think we're talking about different things there. I was focusing more on the language-agnostic interface design (e.g. hand-holding but slow vs fast after a learning curve), but you seem to be focusing more on internationalization and language issues.

Interestingly enough - with your example - the ticket machines of the german bahn are quite efficient if you know what you need, making their interface even better.

For example, there are regional tickets that allow you to take just about all the regional trains for a day with a group of people. You can find this ticket if you poke in your destination (the default prompt), select it, and then it gives you the option to use this or other types of tickets. Or you go ahead and select "all tickets", type NDS, skip options and pay. Takes less than a minute.

You could conceivably do both either name or code lookup. Both options are available to me when I go to my self checkout grocery store and try to submit produce.

This sounds more like a bad UI. A good UI covering all use cases would let you do both, and a UI not able to provide choices like that is a bad one.

I'm constantly and pleasantly surprised by how nice and feature-ful text utilities are. First time I used htop, I was much happier than most of the GUI task managers I had used. It seems like having a simple user interface helps the designers focus more on having it actually do more of the useful things than on how many pixels to put on that margin...

You can see this first hand in almost every single Microsoft program that’s undergone a Win32 to Metro/UWP transition. The new versions can’t do anything.

See also: any mobile anything. Many times a week do I have to switch to the desktop site to get things done since it wouldn't fit into the clean mobile design.

And they do all that nothing so much slower than the old ones. Everyone involved with that project should be ashamed.

So true. It’s hard to believe that their new control panels since Win 8 still lack features of the old ones.

Honestly, I pine for the days of '90s UI. It was so much clearer and more usable, even with significantly less pixel density than we have today.

Working on old software doesn't help your career as much as working on new software, in my experience. The job market is tough as it is, so time is better spent on new technologies. Nothing is ever binary, but I strongly agree that old software is almost certainly a bad situation to be in; not everyone can explore alternatives, though, because of costs and other factors.

That’s the perverse nature of our industry. On the one hand you are told to provide business value but if the most value comes from keeping old tech you are pretty much killing your career if you go that route. Nobody will hire a “dinosaur”.

You might be surprised at the demand for at least some older techs. The demand just won’t be from the “cool” SV companies.

Oh, there's plenty of demand for older techs. The problem is that the pay doesn't match. The demand for people with experience with older tech isn't in the more desirable cities; it's in crappy rust-belt cities, or small towns in the middle of nowhere. And the pay is generally very poor; you'll do much better pay-wise to stick to newer technologies, even if the stuff you're working on is total crap.

That low-pay job in the Rust Belt probably comes with a better cost-of-living than a $150k job in San Francisco. Don't knock the inlands with coastal elitism thanks.

>That low-pay job in the Rust Belt probably comes with a better cost-of-living than a $150k job in San Francisco.

That's putting it mildly. I make over $100k where my three bedroom house cost me $165k. I guess the weather is worse than the bay area, but I have a 10 minutes commute to an affordable house.

Or don't assume that everyone is in the US, either.

I work on crappy legacy code (and some nice shiny new stuff or I'd go insane) for what would be considered in SF a crazy salary (but by UK standards and particulary where I live, decent).

Quality of life matters as well: Mon-Fri 9-5, with overtime being a once- or twice-a-year thing.

There's no way to conclusively settle this debate. It totally depends on your savings ratio (if high, it's better to work in a high-CoL, high-salary place; if low, better to live in a low-CoL, low-salary place), which itself depends on your lifestyle choices.

Good comment. Personally, I have a very high savings ratio and live well below my means (esp. in terms of housing), so it works out much better for me to live in a high-CoL high-salary area, which is currently DC.

It usually seems to me that the people who favor the low-CoL lifestyle are people who value having a giant house. I want more money for things like foreign travel; I don't care about having a giant house.

It comes with a lower cost-of-living, but also a lower quality-of-life, and also, the pay they offer isn't enough to give you enough disposable income after adjusting for the difference in CoL. Amazon isn't going to charge you less just because the housing costs less in your area, and grocery costs are usually the same everywhere.

There is also a lot of anxiety in working with legacy code. You may get laid off and finding another job with the same skillset may be very hard.

This is not a software problem. It's a purchasing problem.

We built better software for cities and ran into this over and over and over again. They're not incentivized to change.

Citizens don't care or know enough to raise hell about city operations. The employees who do the work can't advocate for themselves and/or feel no reason to risk their jobs to make their own lives better (there's no reason to be more efficient.) Leadership cares mostly about press releases—blockchain or AI or whatever buzzword sounds better to them than the incremental changes that will actually contribute to fixing cities.

It's frustrating, bleak, and the inverse of inspiring.

I tried for 2.5 years before losing hope. My co-founder carried on another year before running into the same. The best way to get people what they need is to bribe them—e.g. work through an intermediary who takes a cut and delivers the rest to a political campaign—which we refused to do.

Local governments affect most Americans' day-to-day far more than the national political shitshow and this only becomes more true as populations further concentrate in cities. And yet, they are utterly soul-sucking morasses of internal and external politics, misaligned incentives, and bureaucracy. They drive out the people who can and genuinely want to effect change.

It's a damn shame.

For the public sector it's inherently risky to buy software and even devices. They often lack the capacity to properly outline what they need, to understand what's possible, to know the realistic cost/price they'd need to offer to get good results from tenders, and most importantly they lack the ability to write technical tenders that get them what they need.

The effect of that is that if something is running well, the most rational solution is often to leave it running. Yes, it might not be the most efficient solution, but it might still be more cost-effective than losing 3 months of work to conversion to a new system, on top of all the extra cost, training, downtime, ...

In the current ecosystem, more and more software also seems to be license-based. Do you really want to swap a decently running system with only occasional maintenance costs for an untested system that will change all existing processes and, on top of that, raise your annual cost by quite a bit? Any time you change a system you probably have to change some additional systems too, as suddenly things don't work together anymore...

Cities are often stretched for resources ranging from (good) personnel to financial to physical space to ... It's probably in many cases the rational decision to stick with how it is if the system mostly works.

Can't edit, so just to add that public admin also has to be conservative. Seems like you were running a startup with a great new product. That means they have zero idea whether you'll still be around in a year's time! Really dangerous to commit to a system where you don't know the long term perspective- especially if you're a public admin which means your switching costs are high, timeline to make decisions long, and you have a legal or moral obligation to fulfil your tasks...

To give one other concrete example, I know of a country that chose a great new software solution for information exchange between medical providers. The tool runs well, and they got a long-term commitment and a safe provider. BUT they then found that there's literally no one except the provider itself who can train the users, as the UI, processes, and existing training materials are all copyrighted (or otherwise protected, maybe through contractual clauses) by the provider. That means they found a gigantic additional cost even when they thought they'd done everything right in the choice of software.

Again, risk can be bigger than the reward.

Don't blame you. You're exactly right. I'm sad you stopped, I'm a really big fan. Times are changing though, maybe give it another go in a year or so?

I think this is part of the problem:

"We’re dealing with an irrational public who wants greater and greater service delivery at the same time they want their taxes to be lower,"

There's nothing irrational about wanting government to take advantage of the same technological advancements that businesses use to improve service while cutting costs.

When I order from Amazon, I see real-time inventory, get immediate order confirmation, near real-time tracking as the package is handled (and actual real-time tracking for some orders where I can see where the delivery vehicle is), and delivery to my doorstep is 2 days (or sometimes the same day).

In contrast, when I ordered a dog license from my county, I had to mail (FAX was also accepted) a paper form, and then 6 weeks later I received a metal plate in the mail (just a numeric ID on it, no personalization). The only way I knew they even received my order was that my check cleared the bank 2 weeks after I mailed it (no credit cards allowed).

I don't expect my county animal services department to build an order and fulfillment system that rivals Amazon's, but there are thousands of these offices across the country; they could all cooperate on one system and save the money spent dealing with these paper forms.

I don’t think most cities have the economy of scale to justify much software development. This seems like something that needs to be done at the state level, setting up cities and counties as multi-tenants within a state level application.

Bias: I am working on revamping some systems used within California which feed data in from the county offices to the state office. We are almost done replacing a birth certificate / health survey system from the 80s so that the same database is used state-wide, rather than batch updates between hospitals, counties and the state running separate datastores / app instances.

Then we have to make some less drastic updates to some other similar (purpose) systems later this year...

If the paper form costs them less to process than whatever automated system would provide the quality of service you deem acceptable, why would they switch? Newer and more automated doesn't always save money.

I work for a medium-size company, and our expense reporting system is 100% paper- or Excel-driven. We could go to Concur, but it would probably cost more than our existing system does, and may not pay for itself within the lifetime of the product.

Have you seen www.gov.uk? It's not Amazon, but as a German I truly envy the work they're doing to reduce authority interaction and friction, as well as informing citizens from a central hub.

Sticking with your example, you'd probably search for "dog" and receive actually useful results that range from relevant forms to complex guidelines[1].

The UK's portal has been discussed on HN before and in my opinion it's something other countries should emulate.

[1] https://www.gov.uk/search?q=dog

If only the solution were higher taxes. Local & federal government alike are comically wasteful of taxpayer money. I have no problem with paying higher taxes per se, but the return on our investment right now is abysmal. Something has to be done to make governments more efficient. I don't know what that is.

"There's nothing irrational about wanting government to take advantage of the same technological advancements that businesses use to improve service while cutting costs." - the irrational thing is to expect this in the absence of competition. How about having two or more local governments acting at the same time to enable true competition?

It may not be realistic, but I don't think it's irrational for citizens to expect good customer service from government.

Competition is no guarantee of good customer service -- cellular companies have some of the worst customer service, but almost everyone has 2 - 4 cell companies to choose from.

Well, those are at best an oligopoly. The competition is severely limited by licensed access to the radio bands.

I work on several pieces of business software at my job. One was built for UNIX mainframes, almost entirely in C, with the end-users still capable of running on dumb terminals (or these days a PuTTY window). The "successor" runs on Windows servers with J2EE, a GUI, and all the modern niceties.

The most common complaint I hear from people who've switched from the legacy software to the modern one is that their end-users' productivity is greatly reduced by the point-and-click interface. The older software's TUI had a very steep learning curve, but almost like learning Vim, once they developed a muscle memory for the necessary keystrokes, they could access and update information much more quickly.

I once saw an attempt at modernising a mainframe application that had a single input line where the user would type a sequence of numbers. The new GUI had all the fields organised and labeled.

The user then requested and got a single input box where he would type in all the numbers just the way he did before.

So? I remember a large bookshop in my hometown in which each section of the store had a terminal with DOS-like interface to their internal database. Employees were able to use it blazingly fast, and they still used it a few years ago. Except for security concerns, why should newer software automatically be better?

Old CRUD apps were probably the peak of usability: those hotel booking systems with green-text terminals and hotkeys for everything.

The video store where I grew up had a similar text-based interface to a floppy-disk database for rentals. Amazingly fast, with some override button to force invalid entries when someone else had messed up or something was registered wrong.

Binaries ran straight from a floppy with no fancy OS to interrupt the very simple yet manually tedious things they did. Many apps today are way overcooked for what they do.

It seems that this kind of system would benefit from a more user-friendly interface; however, I don't think all decades-old software necessarily needs to be uprooted and replaced because it's old, so long as it's functional.

BART, I wonder if they've upgraded to more modern system software? They could certainly add more intelligence (system health, wear, etc.) and begin automating more aspects, so it could make sense to upgrade here, but I'm sure there are lots of places where upgrades aren't necessary and the old software is good enough, so long as it's still supported by a vendor.

>> I don’t think all decades old software necessarily needs to be uprooted and replaced because it’s old, so long as it’s functional.

I'd say that's especially true in places like power plants. The old stuff is impossible to hack remotely because it's not on any networks. The rush to put everything on the network has made these types of systems vulnerable and places are only now trying to figure out how to secure them.

Which old software are we talking about? Is it the Clipper Card system that has tens of thousands of business rules that prevent transit agencies from rolling out new passes and pricing strategies? Or are we talking about the software that runs the trains and currently causes a bottleneck through the Transbay Tube?

Not everything needs to be rewritten in React using Google’s Material UI design standards, but my god, our public software really sucks.

Even as the article talked about the tax assessors office... I know someone who bought a condo, but SF can’t collect property tax because their office is so backlogged. So instead they told the owner to make sure they save the money because in a couple of years they’re going to come asking for it.

The Clipper Card system _must_ fall into this category. I recently called because I was certain my card was being fraudulently used--the value would dip $5-10 for a single ride, and there were top-up credit card transactions that didn't correspond even remotely to my credit card statement.

Turns out SF Muni and Clipper Card (and possibly Clipper Card's billing vendor?) are two/three completely independent organizations. Everything is passed between them in batches with up to a several-month delay, with batches often arriving out of order. This makes auditing nearly impossible and is what made me think something sketchy was happening. It's a miracle it works at all.

The software is not the issue with the tube. It's the fixed-block system imposing minimum one-minute headways.

Is it the Clipper Card system that has tens of thousands of business rules that prevent transit agencies from rolling out new passes and pricing strategies

You're putting the cart before the horse there. It's the transit agencies themselves that advocate for those pricing restrictions. BART, for instance, held out on implementing TransLink/Clipper for ages because they wanted their own BART-only cash purse on the card. Clipper is still fairly new and I don't think the fare rules have changed much, if at all, since its inception.

> SF can’t collect property tax because their office is so backlogged

IME they're able to collect property tax, no problem.

Clearly the truth lies somewhere in between both these statements. Something like:

"SF can't collect the full and accurate amount of property tax in a timely manner from properties that were recently reassessed because their local assessor office is backlogged due to use of old and inefficient software."

Which is basically what this whole article is saying.

>add more intelligence (system health, wear, etc.)

Of course, telemetry is the first thing that comes to mind about modern software.

Thought exercise: if you wanted to write code to support a business or nonprofit, and you knew that the software would be used for many years, and probably never see an update, and wouldn’t have anyone to tend network services, how would you build it?

Me, I'd probably go with a static executable with a terminal UI and a flat-file SQLite database that could be copied by mere mortals, and hope that whatever backup regime I set up would survive future migrations to new hardware.
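A minimal sketch of that approach in Python (the table name and fields are invented for illustration): a prompt-loop TUI over a single-file SQLite database, so "backup" is literally copying one file, and sqlite3 ships with Python, leaving no dependencies to rot.

```python
import sqlite3

# The whole persistence layer is one SQLite file: copying the .db
# file *is* the backup strategy.
def open_db(path):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS members ("
        "  id INTEGER PRIMARY KEY,"
        "  name TEXT NOT NULL,"
        "  dues_paid INTEGER DEFAULT 0)"
    )
    return conn

def add_member(conn, name, dues_paid=0):
    conn.execute(
        "INSERT INTO members (name, dues_paid) VALUES (?, ?)",
        (name, dues_paid))
    conn.commit()

def list_members(conn):
    return conn.execute(
        "SELECT id, name, dues_paid FROM members ORDER BY id").fetchall()

# Bare-bones terminal UI: a prompt loop, no curses, no network service
# to tend. `add <name>`, `list`, `quit`.
def repl(conn):
    while True:
        parts = input("> ").strip().split(maxsplit=1)
        if not parts or parts[0] == "quit":
            break
        if parts[0] == "add" and len(parts) == 2:
            add_member(conn, parts[1])
        elif parts[0] == "list":
            for row in list_members(conn):
                print("%4d  %-30s dues:%d" % row)
```

In practice you'd freeze this into a single static executable (PyInstaller, or just rewrite it in a compiled language) so it keeps running with no interpreter to maintain.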

A fascinating idea -- kind of a software analogue to the 10,000-year clock (https://en.wikipedia.org/wiki/Clock_of_the_Long_Now).

A few years ago, I met a guy in my travels who seemed quite carefree, just enjoying frequent international travel with his spouse, funding various art initiatives, etc. Turns out he wrote some utility-related software for a large American city back in the 80s and has a nice recurring revenue stream from it. Other than some now very occasional support requirements, it's essentially just a source of passive income. He said the switching costs would be too great, and there's no need to change something that's working.

America's industry is running on software from the '80s. You can't imagine the horrors on the shop floor. Never mind XP or Windows ME - DOS with green-screen terminals; hand-coded apps booting on home-brew hardware; IBM dinosaurs that should be extinct but lumber on in the back office.

And there's nobody who can help them fix anything. So upgrade, you say? Then they'd have to start from scratch. And they long ago laid off the accounting experts, tax lawyers, and business planners. They have button-pushers and data entry folks. Nobody left who knows where the data comes from, where it goes, or who all uses it.

We recently bought a $4 million steel fabrication system, automated, top of the line, running Windows XP.

Rumor is the company has been trying to re-write the control software to run on Windows 7 for the last 10 years (name rhymes with spaghetti)

We interact with it using little data packets (ASCII, sent from a VB app) of coordinates to tell it what to do.
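I don't know their actual wire format, but the general pattern is easy to sketch. Here's a hypothetical Python version: the `X…,Y…,Z…` framing, host, and port are all made up, purely to show the shape of "ASCII coordinate packets over a socket."

```python
import socket

# Hypothetical ASCII framing for one coordinate command. The real
# machine's protocol is proprietary; this only illustrates the pattern
# of plain-text packets with a CRLF terminator.
def build_packet(x, y, z):
    return ("X%.3f,Y%.3f,Z%.3f\r\n" % (x, y, z)).encode("ascii")

def send_coords(host, port, coords):
    # One short-lived TCP connection per batch of coordinate packets.
    with socket.create_connection((host, port), timeout=5.0) as sock:
        for x, y, z in coords:
            sock.sendall(build_packet(x, y, z))
```

The appeal of an interface like this is that any language on any OS (including a 90s-era VB app) can produce it, which is exactly why it survives.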

Insane amounts of money would be lost if that machine goes offline for a day. We can't just replace it unless it's 100% compatible. Some of our bigger competitors have 10 of these machines.

There's no real benefit to upgrading them, though. We'd still run the whole damn machine behind its own ASA and on its own network. The users don't care. And our programmers wouldn't be happy to change it.

I regularly see stories of software that is at least 20 years old now, running multimillion dollar industrial systems that are expected to last for at least another 20 years before being retired (if even then). There will probably be no changes to that software in the interim, and it's possible that it can't be changed even if you wanted to, for various reasons.

As long as the computer is well-isolated, the biggest ongoing risk here is replacing old and failing PC components when necessary. But the same might also be true of the industrial systems themselves.

You can't imagine the horrors on the shop floor


DOS with green-screen terminals

Not a horror. If it works, why mess with it?

hand-coded apps booting on home-brew hardware

Also not a horror. Bespoke hardware is everywhere, as are hand-coded apps. AI does not yet build software.

IBM dinosaurs

Still not a horror. Perhaps you just don't have a lot of experience with legacy systems.

> Not a horror. If it works, why mess with it?

Can you still get parts for said machine? Do the disk have bit errors? Is the RAM faulty?

Things fail, sometimes in non-obvious ways. It's important to be able to fix your existing system, and often that means having a plan to upgrade to a new one.

Can you still get parts for said machine?

Sure, why not?

Worst case scenario for a DOS machine is that you run it in DOSBOX or a similar VM.

...until you need to replace an old DOS-supported SCSI RAID controller that hasn't been made in 20 years. Or find an ancient Borland compiler to rebuild the messages in Spanish. Off to the thrift store!

My local hydroelectric dam still runs on OS/2. And I don't think it's the modern eComStation 2.1 or anything. I mean old OS/2.

Why should those IBM dinosaurs be extinct? I can still buy the hardware, and my software from 1973 will still run on a current z/OS machine. Why change?

Good. We don't have to worry about node.js, Apache whatsit, and Google go-away v1.0 being our go-to technologies in a critical space. That's progress. Stick a REST/RPC interface on the outputs and control, and attach a wireless dongle to a middlebox: voila, modernized! If you really want to get fancy, attach ELK to the 'stack' and put whatever data you want in it to represent something or another. Mission accomplished!

This is sorta what my university did when they launched online registration. And that was why only 10 students at a time could use the online registration system-- that was the number of virtual terminals the mainframe had. And it was a Java applet :weary:

Things like this obviously attract headlines, but the real problem comes from the millions of other machines, decades newer than the ones in TFA, that are still irredeemably EOL.

I contract for a few companies that supply local governments in the UK. I regularly have to fill out stringent checklists about IT security provisions. PCI-DSS and beyond.

So I always chuckle when we meet them on-site and they're running an XP install with IE8, plastered with toolbars. These are machines I would sooner throw into the abyss than try to rescue. They'll make me jump through hoops but there they are inputting and manipulating citizen data on machines that are almost certainly part of a botnet.

Tablets are also increasingly a problem. As a b2b-service webdev, I still see a lot of first-gen iPad Minis in active service in enterprise. Last patched mid-2016. Oh well.

The next big worm attack is going to be devastating to organisations like this.

> Tablets are also increasingly a problem. As a b2b-service webdev, I still see a lot of first-gen iPad Minis in active service in enterprise. Last patched mid-2016.

I used to work for a large oil company. Their IT guys were pretty adamant about security. Right before I left the company, they purchased another smaller distributor. All their guys had ipad minis and same thing, were last patched in 2016. The IT guys right away were telling the execs, "Yeah, they're all getting Win10 laptops, fully patched and locked down to use only our software. No way we're going to keep using these."

> Minnesota spent about a decade and $100 million to replace its ancient vehicle-licensing and registration software, but the new version arrived with so many glitches in 2017 that Governor Tim Walz has asked for an additional $16 million to fix it.

This raises so many questions.

Most governments are bound by laws and regulations that effectively require waterfall development, as every request has to be signed off before work can begin or money is disbursed. They also try to get the cheapest bidder in many cases.

The entire industry knows this doesn't work, but there's no political will to fix it, because it's easy to attack an opponent for spending money with no defined goal or for weakening the anti-corruption measures that drive a lot of the regulations requiring everything to be defined up front.

I'm familiar with a company that works on government (specifically municipal) projects, and they simply answer "yes" to every question on the RFP. They say it's easier to build software than to negotiate requirements.

It was basically a leadership problem. There was nobody really running the project who was responsible. I suspect the leadership who chose to bring it in house straight up had no clue how to manage such a project.


Fantastic response, thank you.

Project officials did not enforce proper code development practices, and there was insufficient testing of the software.

Sheesh, I wonder how much of the budget was burned on inflated admin fees?

Have you heard of the Phoenix payroll system for the Canadian federal government? The original estimated cost was $380 million, and now they are asking for another $2.2 billion.



the original contract was for $5.7 million, but IBM was eventually paid $185 million.

That's some serious scope creep going on.

It shows the value of getting the relationship with the customer going on a smaller job - it was probably to evaluate the existing system - and then segueing into something bigger.

That's a seriously important insight.

There are no shortage of government IT projects around the world that end up like this. I've known of several major high profile ones where I'm from that were the exact same story. Always decades to deliver, always hundreds of millions over budget. Never works.

I truly do wonder exactly how much software you could produce if you had 2.2 billion dollars to do it.

Governments are getting taken for a ride.

Raise what questions? It seems par for the course for many software projects.

Well, my questions are:

What was the original budget?

Were there cost overruns?

Who got the tender, and why?

Where was the complexity that made it a $100M, decade long project?

I don't see how cost overruns by more than an order of magnitude are par for the course.

That’s 16%, not an order of magnitude.

It's regrettable that the US has no criminal incompetence offence for civil officials.

Would you, a trained software developer, want to work on a project that was overseen by elected officials? Keeping in mind that Minnesota is a very polarized state where the representatives are always looking for a way to ding one another, and most elected officials have about as much grasp of software and IT as they do of brain surgery.

You'd have to pay me a lot to touch that with a 100 foot pole, or I'd have to be a pretty down on my luck subpar developer to touch that. But hey, to the lowest bidder we go!

Insane. That's like 100 people for 10 years (ballpark...)

That's some awful cheap people.

That's a $100K/year average. Consider that they're not all highly paid engineers, Minnesota is maybe a third the cost of the Bay Area, and a lot of it was outsourced overseas where labor is even cheaper.
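The ballpark checks out. Dividing the headline figure by the guessed headcount (both the 100-person team and the 10-year duration are rough assumptions from the thread, not reported facts):

```python
# Sanity-check of the "$100M is roughly 100 people for 10 years" estimate.
total_budget = 100_000_000        # reported ~$100M project cost
people, years = 100, 10           # assumed team size and duration
per_person_year = total_budget / (people * years)
print(per_person_year)            # 100000.0 -> about $100K per person-year
```

A $100K fully loaded cost per person-year is low for engineers but plausible for a blended team with outsourced labor.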
