
There are certainly places in the world, today, where machines from the 60s-70s-80s are running critical processes, and all the original people involved in design and implementation are either dead or retired - so you're stuck with technicians that only have repair manuals, along with spare parts of everything. Bespoke, one-of-a-kind circuits / machines / etc.



It's the problem of keeping things in living memory. The craziest thing about human civilization is that every 75 years or so, we have to complete a 100% knowledge transfer of everything we know to a new set of humans who start out not knowing anything about it.

Obviously it's a little more complex than that, but the importance and value of education and documentation is very much in that realm: drop the next generation in a desert, and we go back to the Stone Age instantly.


Cixin Liu’s SF novel “The Supernova Era,” in which a supernova irradiates the earth in such a way that everyone above the age of 13 will die in a year, takes this as its focus.


Oh man, got stuck halfway through that one for some reason but hadn't thought of it from that angle somehow. I'm inspired to pick it back up... like right now.


Which is why, when we happen to dig up some document from 5000 years ago and eventually translate it, we end up discovering that aside from the technology and religion of the day, the society was pretty much the same.


You're probably thinking of the oldest written customer complaint, from 1750 BC.

https://en.wikipedia.org/wiki/Complaint_tablet_to_Ea-nasir


Everyone wants the new shiny thing but writing things down is how we got to where we are today.


The problem isn't that these machines are from the 1960s. It's that we stopped making machines like that. I'm sure someone somewhere along the line convinced someone that it'd be better to do it all over from scratch, but if these machines lasted 60 years, then they can't be all bad. In Italy, there are textile manufacturers that use looms that were built in the 1920s and still make cloth perfectly well, just like the old times.

We do this all the time with technology. It's because we're addicted to change, and we have no judgment about how much change we need and where.


That scenario isn't caused by an addiction to change. It's just not economically viable to continue manufacturing obsolete equipment for a shrinking customer base.


And what causes that shrinking customer base? Apparently there is some kind of replacement, which is brought about by ... change.

Whether we are addicted to change or not might be another question, though.


Right, just like I want to go back to a car that gets 3 miles per gallon.


I agree that the problem isn't that these machines are from the 60s. They may have even been cutting edge then.

Indeed, the problem is that the user of the machine faced a choice between three possibilities: replacing the machine as expertise in the old tech dwindled; building and maintaining that expertise over the course of decades; or doing neither and letting the equipment lapse into inevitable disrepair. They chose the last.

Change is constant, which is good, because there is no progress without it. The only choice we have as individuals is how, and how well, we adapt to it.


> we're addicted to change, and we have no judgment about how much change we need and where.

^ this


> so you're stuck with technicians that only have repair manuals, along with spare parts of everything

Tell me more about how people hate writing documentation, and then tell me how they can't shut the fuck up about not having it when they need it.

This is a peeve of mine.

What do you expect?


What's so surprising about it? I hate cleaning my apt but also hate when it's dirty. There's no big contradiction here.


Last time I cleaned my apt I managed to delete my GRUB config somehow and had to spend the day learning to manually mount an encrypted ZFS pool. Luckily I have etckeeper, which keeps a git log of all changes to my config files, so I could restore it.


If only I could pay someone to write the documentation for me.


You would still have to explain to someone what it is that should be documented.

A better approach is to learn just enough to do it yourself, but it's not easy to document things well.

I like to think about documentation like this: without it, your work is almost wasted, as nobody can make use of it. That helps a bit :)


Thus the "if only" - I can pay someone else to clean my place pretty well, but I can't really pay someone to take knowledge out of my head.


I do think it's a bit telling that we have a lot of automated software that exists to help us style our code, but there doesn't seem to be nearly as much investment in documenting code.

Simple things like a linter that yells at you if a class has an undocumented method would probably at least be a step in the right direction. People may complain that it would lead to overdocumentation but I'd argue that it's probably better than underdocumentation.

Even something as simple as providing a warning when there appears to be a long code block without documentation would probably be a step in a decent direction. What seems trivial and obvious today likely won't seem so in a matter of weeks - and it generally just gets worse from there.


In Rust you can just add `#![deny(missing_docs)]` to your code and compilation will fail if you have any undocumented items.

Of course that can lead to developers just adding a single line comment with the name of the type ... but it's still a nice feature.
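
A minimal sketch of what that looks like in a library crate (lib.rs); the item names here are just illustrative:

  //! Example crate demonstrating the `missing_docs` lint.
  #![deny(missing_docs)]

  /// Adds two numbers. Documented, so it compiles fine.
  pub fn add(a: i32, b: i32) -> i32 { a + b }

  // Undocumented public item: with the lint enabled, rustc rejects this
  // with an error like "missing documentation for a function".
  pub fn sub(a: i32, b: i32) -> i32 { a - b }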


That's actually super cool; I wish I had more reasons to reach for Rust on a regular basis but this is definitely something I'll keep in mind should I ever get the opportunity to use Rust in any capacity.


Visual Studio nags you if public methods and properties aren't documented. It's not a panacea; in fact, it's easy to work around, but at least they're trying.

IME, the only thing that works is to attack it at the project level: make usable documentation (which needs a review before being signed off) a deliverable, or the project isn't done.


> Tell me more about how people hate writing documentation

The thing is, I don't think this happens in isolation. At least not for me. I hate writing documentation because time is not allotted for it. So I have to rush it, and in the process I feel like I'm doing a poor job, which is demoralizing. I'm gonna dread it next time I have to do it.


I saw this a lot with Y2K. Suddenly all those places that had been ticking along for a couple of decades on their COBOL line-of-business systems with minor maintenance had to make some major changes.

The choice, really, was "hire some ludicrously expensive COBOL devs, or replace the entire system". Replacing the entire system failed every sane business evaluation: high-risk, hugely expensive, no guarantee that the new system would even work. So they hired the ludicrously expensive COBOL devs (ludicrously expensive because they needed to understand 1970s-era COBOL, and such devs were rare in the OO-frenzied 1990s) and patched up the old system.

But as time wears on, those systems fall further and further behind, and the COBOL devs who actually know how to maintain them get more and more expensive (or actually unavailable). The costs and risks of replacing the system are still too high for any given marginal change, but the marginal changes are getting very, very expensive.

And then 2038[0] rolls around, and they'll face the same choice. And the same risks will come up. It'll be interesting to see what they do, and what choices are available. Patching the old system may well not be possible at this point, because there's nobody left who understands it. And migrating the complex business logic to a new system may well not be possible either, because there's no one left who understands the old code.

[0] https://en.wikipedia.org/wiki/Year_2038_problem
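
For the unfamiliar, the failure mode is easy to demonstrate. A minimal Rust sketch, assuming a system that stores Unix time in a signed 32-bit integer (the classic 32-bit time_t):

  fn main() {
      // i32::MAX seconds after the Unix epoch is 2038-01-19 03:14:07 UTC.
      let t: i32 = i32::MAX;
      // One second later the counter wraps around to i32::MIN...
      let next = t.wrapping_add(1);
      // ...which a 32-bit time_t system reads as 1901-12-13 20:45:52 UTC.
      println!("{} -> {}", t, next);
  }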


I don't think them falling behind is necessarily bad. Modern systems, languages, and programming practices are much nicer, but they also have a lot more effort behind them in processors, compilers, frameworks, operating systems, etc., so overall the system is much more complex.

If anything, I think having a separate, archaic, mostly frozen system base for critical infrastructure is a good thing. The current ancient COBOL abandonware situation is probably not the simple, mostly standardized solution one might wish for, but it's a lot closer to that ideal.


I'd agree, if nothing ever changed and it was feasible to run them forever. But it's not. At some point (probably 2038) the whole thing will fall over and a new system will have to replace it.


Fortunately we still have 17 years till then, so surely all these companies have started preparing for this inevitability already, right? (I know the answer.)


> the COBOL devs who actually know how to maintain them get more and more expensive

COBOL doesn't actually pay that well, no matter how badly the business depends on it. The law of supply and demand would suggest that it should, but it still really doesn't.


It did back then... during Y2K I worked with COBOL devs who were enticed out of retirement with $500+ hourly rates.


You're not stuck with it, you just have decided that's cheaper than replacing it.


The CuriousMarc YouTube channel, to an extent, contradicts your statement.


Even if that's true, those are exceptional examples. They're rare. So why should we care?


They are a sign of a deficiency in the tech business model. Maintenance is a bad business, and so the only way to keep things understood and working is to build new things.

This is fine as long as the "in maintenance only" portion of the industry remains small, which will only be true while the industry is growing exponentially. However, as with industrial equipment, there will eventually be a physical limit to how much better computer networks, microchips, web search, cameras, and other products can get.

Once this happens, growth in the core parts of the industry will slow, and more software will be in maintenance. However, software components are both opaque and orders of magnitude more complex than physical parts. As an example, losing practical knowledge of the Linux kernel means losing knowledge of ~27.8 million lines of code.

There may come a time, a number of decades from now, when getting a driver patched is impractical due to the scarcity of knowledge (or potentially even of the ability to build the driver...).


Linux drivers are already a mess, and have been since the beginning. But the problem is due to IP concerns by hardware vendors, not a lack of knowledge about the code base.


Which makes the problem more likely to emerge when the various hardware companies stop developing new drivers due to a lack of significant new hardware evolution.

E.g., why update the NIC driver if the NIC hasn't changed in 20 years? This could easily turn into "how do we even modify and release our company's NIC driver?"


I don't understand what you mean. Most software spends the vast majority of its lifecycle being maintained.


> Maintenance is a bad business

Could you please elaborate why?

I would agree that such business has relatively low ROI. Still, for example, Linux distros and *BSD foundations are to an extent a maintenance business.


Operating budget vs. capital budget, for one.

Building something new affects the bottom line directly in a way that maintenance doesn't.

Blame the tax structure for placing an emphasis on the new.


This is actually good news. It provides a lever for the government to change our entire culture (in a good way). Much easier than convincing every individual that maintenance is required.


Or bad news: the government's unreasonable bias is the reason it's economically optimal to neglect maintenance?


Even that is good news. We have one knob to turn instead of 8 billion, even if today that knob is on a non-optimal setting. Convincing one government is a lot easier than convincing every individual.


Old IBM mainframes in banks and government institutions aren't that rare I think. But I don't have any numbers to back this up.


Not rare at all.

COBOL still runs the world, despite what many on this board think.


COBOL is rare outside the USA and Western Europe, I'm pretty sure.


Central banks all over are running on IBM.


Reliance on deprecated behavior, but without the compiler warning. There may be some hidden risk out there, and we have no idea how much it'll cost to fix or replace.

The example that springs to mind is the Social Security check printers. I think they wound up reading the wire voltages as checks were printing to duplicate the behavior for the Y2K fix. It was urgent, as many people relied on that income.

There are rare events that you can't do much about up front, they're external. A pandemic might be a good example. There are other rare events you can simply avoid, but it's often tempting to just skip the maintenance and let someone else deal with it when it breaks. It's rare right? Not like we're going to get blamed, or even be around to have to deal with it.


Alas, those rare examples do include nuclear power plants.


That is probably less of a concern than most other things because the industry is so highly regulated that everything is formally specified and documented.

Now guess what happens if we forget how to make something intrinsic to modern farming.


Like dirt?

Not being snarky; just want to point out that current soil erosion rates due to modern farming mean the world will run out of dirt by the end of the century.

http://large.stanford.edu/courses/2015/ph240/verso2/
https://world.time.com/2012/12/14/what-if-the-worlds-soil-ru...
https://www.theguardian.com/us-news/2019/may/30/topsoil-farm...



