Never update anything (kronis.dev)
319 points by cesarb on Nov 5, 2021 | 281 comments



"When your company won't be the first to market, because about 20% of your total development capacity needs to spent on keeping up"

In the world of JS and Typescript this ratio looks more like 80%. I swear that node hipsters at my last job spent four out of five of their working days wrangling with dependencies or their transpilers, linters, packagers, bundlers and whatever the hell else needs to happen to actually make a node program run. Meanwhile the geezers that worked on Java services reliably pushed new versions of their code sprint after sprint, no heroics, no drama required. What the fuck happened to those "modern stacks", that babysitting them takes the vast majority of developers' time? It's a nightmare.


Having always been fullstack, even in my current large traditional bank, I see both: codebases in Java 6 with Maven poms that are a paradise to change (and a Java 8 upgrade is often trivial and sufficient to get most of the nice candies you really need), and yarn/npm frontend projects that can't even be rebuilt two weeks later, everyone insanely writing auto-updating version descriptors (^2.0.4, with the ^).
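
For anyone unfamiliar with that caret, a quick illustration (package names made up) of what the two styles of version descriptor mean in a package.json:

  {
    "dependencies": {
      "some-widget-lib": "^2.0.4",
      "some-other-lib": "2.0.4"
    }
  }

The caret entry lets the package manager resolve any 2.x release at install time (2.0.5, 2.1.0, ...), so two installs weeks apart can pull different code unless a committed lockfile freezes the resolution; the bare version pins that one package exactly.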

I cannot understand what idiot thought it'd be fancy to cut everything into mini dependencies that update every day without you knowing, made by amateurish hipsters who transitively depend on each other like their lives depended on it. Yes, you can lock, but every build script I've seen forgot this detail and resolves the dependencies anew!! Why!?!?

Yes, it's a pain to move up a version sometimes in Java, but we decide to do it for a reason, spend a few days tops fighting it if we must, and it's done. You're right that in JS I can't even teach my more backend colleagues how the fuck it works and how I can navigate it. And I'm very against transpilers and linters, which I try to avoid for sanity, but sometimes you inherit a hero's code and you're like omfg, a Vue.js to TypeScript to JS hacky build chain that worked 2 years ago on Bower, and now Bower is gone and yarn won't build it. And you're 2 weeks well into it before you start building and discussing what you actually wanted to change :D

What kills me the most is when someone in management calls the Java backend that is ultra optimized, instant to change and a dream to deploy "legacy", while the yarn soup that was coded by 200 successively burnt-out juniors, does 40k binding function calls on a non-moving DOM (you know, to "hydrate" it with whatever framework du jour was fancy 5 months ago), can't be redeployed without a full budget proposal, and is the thing people do everything to be assigned away from, gets called "modern". Grrr.

I kid you not, we have an entire team in charge of one small frontend management tool for a pretrade backend; they literally spend 99% of their time justifying why they won't change it, and 1% begging me to join for a week to reorder their columns or put in a new button... and THEY're in charge of it, argl. And I cannot for the life of me spend more than 30 minutes teaching them before they snooze: "I don't get it, let's pick our battles and change that one thing with you, and promise it's the last time" ...


The small dependency thing is a sad artifact of the days when most JavaScript libraries were built to run on the browser. In that context, prior to the days of mature tools to do things like tree-shaking to remove unused code, the easiest way to make things smaller was to make JS modules as small as possible so code could opt-in to subsets of a library it wanted to use.

Fast forward 5 or 8 years and we have a very fragmented ecosystem of NPM modules—some built to be used on the web, some not—built on a legacy of tiny modules and deep dependency trees.

Add on to all that the pace at which the JS and browser ecosystem moves and it’s not too surprising that things have ended up the way they are.

Teams don’t need to build projects with deep dependency trees, but it’s pretty hard to avoid them with the NPM registry in the shape it’s in.


> the easiest way to make things smaller was to make JS modules as small as possible so code could opt-in to subsets of a library it wanted to use

Also due to JS having a very small stdlib. Compare the built-in functions to, e.g., Kotlin; it's a different world.


If you can have a Java backend that's more or less totally decoupled from the frontend, great. Java and its tooling are fine, and it's easy to be productive writing in Java. However, my only experience as a frontend dev on a Java backend system has been truly nightmarish, because the backend processed all the frontend assets and rendered them. If anything, this is what helped me burn out, because everyone on my team writing Java was super productive and delivered on time, while I couldn't test my JS, CSS, or HTML changes without restarting Tomcat on my fucking useless 10lb HP laptop that the corp required me to use most of the time, because that's what all the Java developers used to get their Java working.


Yeah, the shit dev machine specs are something we fight SO HARD against in my bank. There aren't many things we can fight, but our 10-core SMT Xeon with 128G of RAM for everyone in the team was a fight worth fighting. Yes, it took 2 years of them putting us on one new VM cloud system after another before they gave up, with us spamming them every day with "intellij freezes", "I can't build in less than 10 minutes and the traders are getting nervous that we spend our time daydreaming while it compiles" and other such trolls making the budget monkeys sweat.

Other than that, there's probably a way for you to propose a change. I've worked on the same nightmare you describe and, well, I was lucky enough, I guess, that I could make enough changes to both stacks that it was possible to get either a light java subset running just the frontend to test, or make the js independent for testing, yes with a yarn soup lol, which is how I learned it in the first place I guess.


I remember working at a financial company 6 years ago. We had to deploy our app into WebLogic, and every time we had to change something we had to recompile it and restart the app server. It took almost 5-10 minutes from start to ready-to-test. I thought that was because of the code, but it turned out all of our notebooks were using shit HDDs. When I used my own PC with an SSD, it flew at less than 3 minutes, and after that I proposed something like hot reload, which made the dev experience a lot nicer (in some cases you do still need to restart).

I don't know about them now, but before I resigned I told them: "Have you ever calculated how much time and money is wasted just restarting our app server for local dev? You should probably be worried about that, because I believe you pay us more to wait for the app server to come online than to develop features."


In retrospect, I hadn't thought of that but it makes perfect sense. I believe I ended up accessing my laptop's local server from my personal mac, so it would have still been the bottleneck. I may have asked a similar question on my way out lol


If you are using IntelliJ, do not use the JBR JDK 11 as the boot JDK for the IDE. Since I switched to Azul Zulu 15, IntelliJ flies.


I'm glad you won the fight you were fighting. I lost most of mine, and this was 5 years ago.

A few things worth elaborating on.

> 10-core SMT Xeon with 128G of RAM for everyone in the team

In our case, most frontend web devs realistically use macOS or some other unix env. Not only did our bunk-ass Windows machines suck spec-wise and in terms of physical dimensions, they also sucked because they were Windows based. This was shortly before WSL, so it was a pain. The DevOps guy did have a crazy machine with tons of RAM and so on.

> I can't build in less than 10 minutes

This was literally the case for me every time I made a change. Sometimes the previous result would be cached and I'd need to do it again. I was constantly fighting a stupid battle, and nobody really understood why things were the way they were or how to change them, including the Java devs that had been working on it for years.

I kind of won the dev machine battle, because the VPN wasn't really sophisticated and I just set up some system on my personal mac, against the wishes of management. When a designer came in, they got a mac, and had to ask me how to connect it to the VPN.

> it was possible to get either a light java subset running just the frontend to test, or make the js independent for testing, yes with a yarn soup lol

Someone had set up a node instance that attempted to replicate the output of the Java system, but without some bits that I can't recall atm. The inconsistencies and trying to maintain a one-for-one replica were just as much of a pain. If I remember correctly, that was in part because a sustainable replica would have required a deliberate time investment and communication whenever any data changed server side. It was a crap-shoot, and I should give myself more credit for lasting as long as I did, but less credit for not jumping ship way earlier.

I ended up literally burning out, getting fired, and didn't find employment again for years. I have it on my resume that I did a few things successfully while I was there, but quite honestly I don't completely remember what those were, and am just guessing at this point.


> I see both codebases in Java 6 and maven poms that are a paradise to change (and a java 8 upgrade is often trivial and sufficient to get most of the nice candies you really need)

Using Java 6 is a major red flag. Even Java 8 should be a red flag nowadays, if it weren't so extremely common. Both versions no longer receive any security updates or bug fixes. If an organisation doesn't care to put a system on a platform with active security updates, that tells me one of three things:

1. The system is unimportant and is basically value-less. I don't want to work on systems like that.

2. The engineers are incompetent and lack even the most rudimentary learning skills (or will) to follow even the most basic updates on the tech they use every day. I don't want to work with people like that.

3. Engineering management is too incompetent to listen to very important messages from their team. The system likely has major issues and a fear of fixing even minor technical debt. I don't want to work in an environment like that.

It's seldom only one of these.


Java 8, if you use Corretto, is supported until 2026. It's totally fine to use and support. And Java being Java, jumping to 11 won't be that bumpy either.


> Java 8, if you use Corretto, is supported until 2026.

Interesting, I was not aware of this. I've not seen people use Corretto either, but it seems OpenJDK has a separate project just to maintain Java 8. So that's indeed still fine to use.

> And java being java and jumping to 11 won't be that bumpy either.

The update from 8 to 9 was rather painful in the beginning, mostly due to libraries not working well with the module system. I'm not sure how it is nowadays, it could have gotten better.

9→10→11→12→13→14→15→16→17 were very painless. I don't foresee a lot of companies getting stuck on those, though it's too early to tell.


Yes, we held off on upgrading from 8 to 9 originally because a lot of our dependencies didn't work with 9's module system. A few months ago we jumped straight from 8 to 11, and it was easy, because all of those dependencies have since fixed their module system problems. (What finally pushed us to move to 11 was that some of our dependencies stopped supporting 8!)


If you can't get reproducible builds the team doesn't know how to use lock files properly. Don't go blaming it on the stack.


What's your workflow with lock files? Do you force them in CI/prod only or also in dev?

I'm tempted to enforce lockfiles at every stage of the product cycle, making upgrades an explicit action, rather than the side-effect/byproduct of another action. Does it make sense? What do you think?


Lockfiles get committed to the repo and used everywhere to avoid version differences. Upgrade dependencies as needed and have everyone reinstall when they change.

I'm amazed anyone wouldn't do this, it's the only sensible course of action.


Some things go without saying, yet go better with saying.


Basically what TheAceOfHearts said. Always use them everywhere, except when explicitly upgrading dependencies. It's also much faster to do an 'npm ci' than an 'npm i'.
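
A minimal sketch of that workflow with stock npm commands (the package name is hypothetical):

  # deliberate upgrade: rewrites package.json and package-lock.json
  npm install some-lib@latest
  git add package.json package-lock.json
  git commit -m "bump some-lib"

  # everyone else, and CI, installs exactly what the lockfile says
  npm ci

'npm ci' also wipes node_modules first and fails if the lockfile and package.json disagree, which is exactly what you want on a build machine.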


> I cannot understand what idiot thought it'd be fancy to cut everything into mini dependencies that update every day without you knowing, made by amateurish hipsters who transitively depend on each other like their lives depended on it.

I'm not sure you've thought things through in your comment, or that you're being fair or reasonable.

No one cuts "everything into mini dependencies". You have dependencies you reuse. That's it. When one of those dependencies gets updated, say to fix a bug or a security issue, your package manager of choice handles it for you without any issue. Still, if you do not want to fix bugs or vulnerabilities in your code, then you are free to pin them all and move on with your life.

At most, the JavaScript/NodeJS crowd needs to have a serious talk about standard libraries and third-party packages, such as whether using packages like rimraf is desirable.

> and yarn/npm frontend projects that can't even be rebuilt two weeks later

You only experience that sort of problem if you're incompetent in how you manage your dependencies.

With npm you can a) pin major/minor/patch versions of specific packages, b) save your package-lock.json file which specifies exactly which dependencies you use. If you free-float any dependency then that's something you chose to do.
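
For option a), a sketch of how to make exact pins the default rather than something you have to remember (standard npm config; the package name is just an example):

  # .npmrc in the project root
  save-exact=true

  # or per install
  npm install some-lib --save-exact

With that in place, new entries land in package.json as "1.2.3" instead of "^1.2.3", and the lockfile from option b) covers the transitive tree.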

With JavaScript/NodeJS + npm you only shoot yourself in the foot that way if you purposely aim at your foot, remove the safety, and press the trigger really hard.

> And I'm very against transpilers and linters, which I try to avoid for sanity (...)

Complaining about linters because they test your sanity is a major red flag regarding your approach to mundane coding tasks. This, and your casual criticism of everyone else's code, leads me to suspect that you really need to do some introspection.

> I kid you not, we have an entire team in charge of one small frontend management tool for a pretrade backend; they literally spend 99% of their time justifying why they won't change it, and 1% begging me to join for a week to reorder their columns or put in a new button...

Based on your comment and on my experience with similar projects, I suspect you're either oblivious and/or leaving out important bits of the story just to continue plowing on with your humblebrag, or you're succumbing to the need to be hyperbolic.

Frontend development has to deal with far more details, constraints and requirements than any backend task. The frontend is what clients, PMs and executives alike look at, and the tiniest changes, like resizing a button, have deep implications on the business side of things. Furthermore, frontend tests are both harder to automate and need to be more extensive.

Thus any change, no matter how small, is an uphill battle.

It's beyond me how someone who is so hard on their entire team ends up showing such weak understanding and insight of the problem domain. I know that on the internet no one knows you're a dog, but let's not get silly here.


> With JavaScript/NodeJS + npm you only shoot yourself in the foot that way if you purposely aim at your foot, remove the safety, and press the trigger really hard.

Eh, no. You'll get shot no matter what, even if you have no gun.

For example, let's imagine you have your version-locked-down codebase and after a month you want to install a new package. This one is only compatible with XYZ V1, while what you have is V2. No worries, npm will just handle that, right? Well, you'll have both versions, but the prototype constructor name changed from V1 to V2, so the versions won't work together, so now you need to upgrade XYZ V1 everywhere anyway. And when XYZ did the V1->V2 upgrade, they also upgraded to GKS V53, so now you need to do that upgrade as well.

Continue ad-infinitum or simply stay away from a broken ecosystem.


This is a far bigger problem in the Java world. Maven will just pick one version sort of at random (it's deterministic, but hard to predict and mostly invisible).

On the JS side there's a way to deal with that if people use modules. Good dependencies don't clobber the global namespace. Multiple versions can exist side by side. It'll just make your build bigger.
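
As a sketch of that side-by-side situation (using the hypothetical XYZ package from the comment above): npm nests conflicting transitive versions automatically, and since npm 6.9 you can even alias a second copy explicitly if your own code needs both:

  npm install xyz@2.0.0
  npm install xyz-v1@npm:xyz@1.0.0

Each consumer then imports the copy it was built against; nothing is clobbered globally, it just costs bundle size.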


Since the number of dependencies is way smaller than in the JS world, it's possible to pin them down manually.

If you absolutely need to have multiple versions of the same dependency, you probably need to use OSGi.
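
A sketch of what pinning one of those transitive versions manually looks like in a pom.xml (coordinates made up):

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>some-lib</artifactId>
        <version>2.3.1</version>
      </dependency>
    </dependencies>
  </dependencyManagement>

Whatever version Maven's "nearest wins" mediation would otherwise have picked, this entry wins for the whole project.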


> (...) after a month you want to install a new package.

What's your point? If you want to update a dependency then you also need to go through all its dependencies. This is not a JavaScript/npm problem; it's a software development problem. Dependencies don't get magically updated, even with semver. You experience the exact same problem with other tech stacks. In fact, I've experienced this problem far more with C++ projects than with JavaScript/TypeScript ones.

Why are we expecting and demanding random updates to work with JavaScript/npm when that belief and expectation was always totally unrealistic with any other stack?


The point is that a moderately complex Java project might have something on the order of 150-200 dependencies, and updating just one of those usually doesn't lead to a big change in the transitive dependencies (a major update is a different thing, e.g. Java 8 -> 11, or from pre-Jakarta JEE to Jakarta), so it's possible for a single person to track what's happened. But with 1500+ dependencies, an order of magnitude more, it's just not possible.


This is why I can’t use anything too complicated. I’m just not smart enough to wrangle complex configurations and dependencies.

So I keep producing Rails apps that get the job done without the bells and whistles and without much polish.

I wish I could hire a config expert for half an hour and have them create the setup I want. Because I will never get my masters in webpack/rollup/postcss/stimulusjs/mystery sauce working config du jour.


Same. I just compile TS modules with UDP, run it all through r.js, uglify, and deploy over SFTP. Roll out server side patches straight onto the servers, when I update a local file. SCP or even just edit the files in a shell if something breaks. I'll be damned if I'm going to spend days tooling deployments for the latest flavor of packaging.

This whole conversation only pertains to people who work in large companies, on salary, who both need to follow standardized procedures and have the time to spend days doing so. I always find it interesting, because it illuminates an entirely different set of priorities and a completely different universe of tools versus what an individual or 2-3-person team needs to focus on to get things running.


Check out meteor.js, it takes care of all the tooling.


No, Meteor does not take care of it all. If someone would help me upgrade to the newest Meteor etc. dependencies, it would be very welcome:

https://github.com/wekan/wekan/issues/3881


As a full stack dev who leans heavily towards the frontend... yes, honestly, some weeks 80% is a realistic number and WOW is it exhausting to fix Webpack or Babel or some random npm package for the nth time. I do feel like I run into issues less often today than I did about five years ago; there are really only a few packages where upgrades are a massive pain. TypeScript makes these upgrades a lot easier.

That being said, I still have way more fun writing TypeScript and React code _when it works_ than I do writing yet another Java API. Might just be personal preference but the amount of fun that I have doing it makes dealing with the tooling hell all the more worth it.


_have way more fun writing..._ - pretty much seems to be the culprit here.


Where I work it's quite the opposite: every microservice is built with JS/TS and we have a big monolith built in Java.

The Java service releases every month or so; the microservices, on average, release every week, sometimes more often.

You can build bad software with any tool, it all depends on how you use it.


Am I doing typescript wrong?

Setting up a new project is a 5-minute task. npm init, then install express, typescript and some useful middleware. Install eslint, then eslint --init and choose the Airbnb style guide, and I'm good to go.
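
Roughly that sequence, as a sketch (exact prompts and flags vary with the eslint version):

  npm init -y
  npm install express
  npm install -D typescript ts-node @types/node @types/express
  npx tsc --init
  npm install -D eslint
  npx eslint --init   # pick the Airbnb style guide when prompted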

And if I take over an existing codebase it just takes npm install then npm run dev <script name> and I can start adding bugs.

With react and vuejs it's even easier with nextjs/nuxtjs.

I never fight dependencies.

Yet I sometimes read stories like yours while typescript/nodejs has only been the smoothest experience I've ever had.


Of course you won't have any problems with an empty hello world. Now develop it into a medium to big real-world project and let it sit for a year or two, then try to update dependencies (because you need new functionality or to fix some vulnerabilities).

It's never easy. For some projects it took me about a week to do this (because of a breaking change in some library that's used everywhere that you are forced to update).

I just spent another two hours fighting with breakage because of a library that shall remain nameless. The author introduced yet another major breaking change in a minor release. I'm thinking of migrating to a hand-rolled solution, it probably makes sense in the long run.

Edit: just for comparison, the backend for this project is written in Java/Spring. I recently updated it from a five year old Spring version (plus a dozen dependencies from the same time frame), and it took less than an hour to fix everything. This is simply unthinkable in the JS world.


I do both Java and Node backend dev and I wrestle way more with Maven and Java dependencies than with Node and npm. The project lead also always wants to try the latest Java stuff (Quarkus when it just came out, the latest Java versions, ...) which has led to many days of obscure bug tracing. So it's really more about developer attitude than about the stack.


At some point, there is an irreducible level of complexity. At this point, adding tools simply results in pushing complexity around like an air bubble trapped under a plastic film.


In .NET I've spent a maximum of 1 hour updating major versions. Usually there's almost nothing to be done.


This is why I don’t understand the approach of golang and rust to keep their APIs so lean and rely on 3rd party dependency management for (IMO) basic enterprise functionality.


The best experience I ever had with js was on an api that didn't use anything special.

No typescript. No import syntax. Nothing extra.

It was magical. The entire build stage was npm install && node app.js

Admittedly, once or twice a quarter, we'd update the dependencies and we'd spend a few hours fighting it, but the rest of the time it just worked.


> I swear that node hipsters at my last job spent four out of five of their working days wrangling with dependencies or their transpilers, linters, packagers, bundlers and whatever the hell else needs to happen to actually make a node program run.

Like they say - a cushy job if you can get it.


I'm still building new frontend UI with Typescript 2.x, Bootstrap 4 alpha, Pixijs 3.x. I don't particularly care about flags in code dependencies, and if they're a problem to upgrade I'll use the old version too. Does it matter? A UI just needs to work, ideally forever, and it's all javascript in the end anyway. It'll work the exact same 10 years from now. Serverside code is a different beast and you can't avoid upgrading Node or PHP, or migrating to MySQL 8. But only very occasionally does this present the opportunity or sufficient reason to upgrade frontend code.


Do your Java geezers work with Maven? Give me NPM (well, yarn, or pnpm) any day of the week.


Nah, Maven is definitely not a modern tool, but it does what it has to. And Java's package situation is in orders of magnitude better health than the critical-vulnerability-each-week JS one. Like, the Maven repo was the only one not affected by that quite big attack last time as well.


I’m not a TS, JS, or node fan. But if you really believe that it takes 80% of a dev’s time, week after week, to wrangle dependencies etc in ANY stack, you are either incredibly naive or really bad at math. And if developers at your company got away with that, I’d be leaving that company ASAP.


I'm no longer with that company and it's possible that those devs were incompetent or always chased the next shiny framework, or possibly both. But regardless, their output on the whole was about 20% that of the devs who used uncool old tech like Java/Spring and .NET.


This is not the "modern stack"; this is the frontend frameworks niche.


I appreciate the contrarian point of view, and I don't want to pick on the author, but this sounds like words of wisdom from someone who doesn't have much real world experience.

I worked on such a project that never upgraded once. And then Spectre/Meltdown hit, and we had the mandate to patch all our systems (and the threat was very real). Welcome to a hellish mess of outdated deps that took weeks to sort out.

But it's not only me. I watched a talk from a Principal Engineer from Amazon's builder tools, and one of their biggest mistakes was not upgrading often enough, which made each upgrade more painful.

So what do we do now working in multiple, large, multi-year projects?

First, we set up tools that tell us when deps are outdated and alert us when vulnerabilities are found.
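
The baseline versions of such tools ship with the package managers themselves; services like Dependabot, Renovate or Snyk then run the same checks continuously. A rough sketch:

  # JS: list stale dependencies, audit for known vulnerabilities
  npm outdated
  npm audit

  # Java: report available dependency updates, scan for known CVEs
  mvn versions:display-dependency-updates
  mvn org.owasp:dependency-check-maven:check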

Second, upgrading deps is part of the regular dev process. When you make a change, take the chance to upgrade the deps. Vulnerabilities are addressed proactively. The vast majority of times upgrades require no code changes.

Third, we regularly upgrade deps of inactive packages.

Lastly, if a dependency upgrade requires significant work, we create a ticket for it and address it as a new feature. In practice this happens very rarely, so it doesn't disrupt the development process much.

As others said, the key is to upgrade often, and then it won't be as painful.


> the key is to upgrade often

* and have a very good test suite


And a good build system


Hello there, i'm the author of the article.

> I appreciate the contrarian point of view, and I don't want to pick on the author, but this sounds like words of wisdom from someone who doesn't have much real world experience.

I see why you'd say this, but what i'm mostly advocating for in the article is that breaking changes often being the only option for updates (which may be forced upon you because you NEED the security updates that are included) is a problem, since no one wants to (or even feasibly can) keep a maintenance branch of their software around forever, for every single version.

A pretty good example of software that doesn't cause many problems in this regard would be something like MySQL 5.7, which was released in 2015 and will be supported until 2023. Personally, 8 or more years seems like a pretty good amount of time to support a software product for, as opposed to forcing you to update to something newer after just a few years if you want security updates and bug fixes, especially if you don't have the proper amount of resources at your disposal to properly migrate over and test everything.

For example, for the past few months at my dayjob i've been:

  - working to migrate about 7 Java services over from Java 8 to Java 11
  - this also necessitated not only the migration of minor framework versions, but also major versions in some cases (Java 8 --> 9 was a generational shift of sorts)
  - furthermore, the decision was made to also abandon Spring and migrate over to Spring Boot, both because it historically "won" and also because some of the services already ran with it, so this should increase consistency across the board
  - the decision to utilize containers also was made, after much deliberation and problems with the environments not being consistent otherwise
  - the decision to also use Ansible was made, because historically changes to the server configuration weren't entirely traceable easily and diverged otherwise
  - the decision to reorganize all of the servers with modern and up to date OS versions was also made, as well as the tools to manage the container clusters, as opposed to having systemd in one environment, sysvinit in another and manually run scripts in yet another environment (about 5 of those environments in total, each with all of the apps, though previously sometimes strewn across multiple servers for historical reasons, e.g. temporary environments that became permanent)
  - tack on a few topology changes, introducing a proper ingress, as opposed to sometimes managing SSL/TLS through Tomcat, but not always
  - if the scope creep doesn't sound bad enough, throw in some load tests that needed to be made due to issues in productions with performance, that had to be addressed in parallel
  - oh and since AngularJS is essentially a dead technology, some of those systems needed to be split up into proper separate front ends and back ends for the eventual migration to something else
  - besides all of that, i've also been working on introducing proper CI for all of that IaC and app builds within Docker containers, as well as any and all package repositories (Maven, npm, Docker) that we may need, both to avoid rate limits and improve performance, as well as cache dependencies in case the main source goes down
It's as if "the business" looked at everything they should have been doing in the past years and decided to put it all on my plate; only very good prioritization can avoid situations like this turning into an utter failure. Is that too much for one person to do successfully? Quite possibly. Have i also been mostly successful in all of the above and have learnt a lot? Most certainly. But does something like that possibly lead to burnout? I'd say that yes, in most cases. It's not healthy.

Now, i agree with you that updating often would have noticeably lessened my pain, yet when your department isn't seen as a profit center or you cannot sell your clients (assuming consulting) on the idea of things like SRE and constant updates, at most people will bump smaller versions every couple of months to tick a checkbox somewhere, because they cannot feasibly do what i'm attempting now.

Not only that, but most of your suggestions simply wouldn't be seen as worth the time and effort, unless you'd prefer to ask for forgiveness rather than permission, which is hard to do when you're also expected to deliver features and fixes:

  - you won't have much automation or automatic scanning of outdated packages with proper alerts in place; i only recently sorted out the package management with proper caching registries myself
  - you won't care about vulnerabilities (unless very high impact or incidence), at least not to the level of being able to react to them proactively (ITSEC teams often viewing the services as a black box, which doesn't tell them much about a whole class of issues)
  - you won't care about upgrading dependencies regularly, because you probably won't want to be the person who breaks something with 0 perceptible benefit to anyone
  - most importantly, you probably won't have an all encompassing test suite that'd do both unit tests, integration tests, performance tests and would also check everything from end to end, to make sure that everything would indeed work in a browser (or if you do, they're probably not updated regularly and don't have good enough coverage to matter)
Furthermore, if you need to do a generational shift, like i had to do with Java 8 --> 11 and how we'll soon have to do with AngularJS to something else, you can't just go to whoever writes your checks and say: "Okay, i'll need the next 3-12 months to work on this migration to do basically a full rewrite," unless they're really on board with your past initiatives and are aware of the need for keeping up with current technologies. Any such initiative, no matter how important, would generate pushback and long discussions; worst of all, you wouldn't even know if any of it is even feasible.

Would you want to be to known as the person who spent 9 months migrating an enterprise monolith, just to fail in the end and deliver absolutely nothing? In most cases that's a hard sell and almost impossible to put a positive spin on it.

For example, one of the systems that i haven't been able to split up and by far the largest one has the following:

  - i can't update from Java 8 to 11 because the version of Spring doesn't support it
  - if i attempt to migrate over to Java 11 alongside newer versions of Spring (Boot), the old web.xml configuration no longer works (see the sketch after this list)
  - some other configuration is randomly ignored and isn't loaded at all, whereas other configuration needs refactoring because certain classes no longer exist
  - speaking of classes, there are now class path conflicts and i need to scan through the pom.xml and figure out which of the hundred dependencies are misbehaving
  - not only that, but about 50 of them are out of date and thus need updates, given that many of them are also incompatible with Java 11
  - worst of those are the classes that just break at runtime, like class loader functionality at app startup
  - after all of that, i discover that many of the configuration values have been changed and need updating
  - some of the servlets also aren't loaded properly and thus i cannot figure out how to get the app responding to requests properly
  - since the back end also bundles JSP/JSF/PrimeFaces, all of those also provide further challenges, as does Java EL
  - there are also scheduled processes within the app that break
  - there are also services that deal with file uploads that break
  - there are also services that deal with reports and PDF export that break
  - there are also services that deal with database migrations that break
  - there are also Maven plugins that break so certain front end resources cannot be built
  - there are other things too, but sadly i don't keep a full list of everything that broke...
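
Regarding the web.xml point flagged above, the general shape of the Spring Boot replacement is programmatic servlet registration; a heavily simplified sketch with hypothetical class names, not code from the actual project:

  import org.springframework.boot.SpringApplication;
  import org.springframework.boot.autoconfigure.SpringBootApplication;
  import org.springframework.boot.web.servlet.ServletRegistrationBean;
  import org.springframework.context.annotation.Bean;

  @SpringBootApplication
  public class LegacyApplication {
      public static void main(String[] args) {
          SpringApplication.run(LegacyApplication.class, args);
      }

      // replaces a <servlet> + <servlet-mapping> pair from the old web.xml
      @Bean
      public ServletRegistrationBean<ReportServlet> reportServlet() {
          return new ServletRegistrationBean<>(new ReportServlet(), "/reports/*");
      }
  }

  // stand-in for one of the app's existing servlets
  class ReportServlet extends javax.servlet.http.HttpServlet {}

Every servlet, filter and listener that web.xml used to declare needs an equivalent bean or annotation, which is a big part of why such migrations balloon.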
I have no illusions about the fact that too much was put on my plate, but surely you can understand why in my eyes it could be pretty nice to have a framework version that's supported for 10-20 years and is so stable that it can be used with little to no changes for the entire expected lifetime of a system?

Either that, or you have to do updates often and keep your individual services small so they're easier to rewrite (maybe divided by functionality within a bounded context, e.g. PDF service, file upload service, migration service), so that 5% of "dead end code" doesn't keep the other 95% from being kept up to date. Essentially, you'd have to constantly invest time into this and not pretend that code doesn't rust.

Knowing how little "the business" can care about these finer points in many industries, it doesn't surprise me that you see numerous neglected projects out there and i don't believe that it'll change - thus, we should slow down, if possible, and consider building solutions for the next decade, not just the next monthly iteration of our CVs.

Maybe that's a bit of a rant, but i felt like i needed to elaborate on my point of view. Now i'll probably go write my own little tool that alerts me when a new article of mine gets posted on HN, so i can provide comments in a timely manner.


Well that does sound like a lot, I hope you can negotiate for a raise at some point.

Long-term, I'm not sure what you mean by building solutions for the next decade; it seems quite hard to design something perfectly in hindsight. And of course you have to compare the cost of it to just slapping something in a VM, putting a firewall over it and calling it a day...


> Well that does sound like a lot, I hope you can negotiate for a raise at some point.

Oh, certainly. Though right now i'm more concerned with making the lives of my colleagues easier and actually shipping software that works in the end.

> Long-term, I'm not sure what you mean by building solutions for the next decade, it seems quite hard to design something perfectly in hindsight.

That is a very fair point, but in my eyes trying to create stable software is a worthy pursuit in many cases regardless!

Perfection is probably never going to be achievable, but the difference between creating something that will break in 3 months versus 3 years is probably pretty impactful (e.g. using bleeding edge/immature frameworks which will necessitate a rewrite, or will have breaking changes). I'd say that both the technologies used and the architecture of the solution can impact this greatly.

What's also useful is thinking about limiting the fallout when something does break, e.g. making the system modular enough that it can be fixed, updated or changed bit by bit, as well as thinking about scaling, at least a little bit. I'm not saying that everything needs to be microservices, but having a tightly coupled codebase can and probably will create problems down the road.

> And of course you have to compare the cost of it to just slapping something in a VM and putting a firewall over it and calling it a day...

That's also a reasonable take. Of course, costs aren't always the only consideration - if my blog or personal site goes down, the impact probably isn't too bad, whereas if a governmental health care system goes down, many people won't be able to receive the services that they need in a timely manner. The latter is probably worth the investment, both monetary and in regards to consideration about all of the stuff mentioned before.

I've actually experienced what it's like to see queues building up in one such institution, with the medical personnel also being frustrated, all because a system component in a data center somewhere had been neglected and had DB connection pooling issues, leading to a complete standstill.

I was called in to fix that external project and somehow managed to do it by ripping out the DB pooling solution and replacing it with another one. That was problematic given that the actual codebase was badly commented, there were no proper tests to speak of in place, and the failure mode was both odd and hard to debug.

In the end, i didn't manage to repair the old pooling solution without swapping it out entirely, because one moment everything was fine, the next threads just got stuck waiting one by one, with no errors or debug messages, regardless of the config, whereas breakpoints didn't lead to anything useful either.

In short, there are systems out there that should be as stable as possible and tolerant of failure, given that not all of them will receive the constant love and care that they deserve.

I'd also like to link this lovely article and talk: http://boringtechnology.club/

Especially the bit towards the bottom about the "known unknowns" and "unknown unknowns" - knowing what you can and can't do with a particular piece of technology and its characteristics is probably a good thing.


> but this sounds like words of wisdom from someone who doesn't have much real world experience

Did you read the part in the first paragraph where the author complains about people shutting down the idea without even giving it any discussion?

I mean you just dismissed the idea entirely and went off on a "have you tried jQuery" tangent.

> one of their biggest mistakes was not to upgrade often enough, which made each upgrade more painful.

The mistake isn't not updating frequently enough. The mistake is that the industry thinks maintaining backwards compatibility is a waste of time.

Updating more frequently doesn't magically make updating less painful, it just spreads the pain out over longer timeframes.

There is some benefit given that you won't be building on top of more and more stuff that may eventually be rug pulled by something that may already be released, but again, this isn't about update frequency, it's about a lack of respect for backward compatibility.

If everyone maintained backwards compat indefinitely we wouldn't have so many damn problems and updating would be nearly painless at whatever frequency you like.

Maintaining backwards compat isn't actually all that hard. There are two types of change: breaking and non-breaking. Most changes can be done without breakage. Want to make a breaking change to that function? Make a new function, put a 2 on the end. Problem solved. How often does leaving that old function around actually cause you problems? Technically it's tech debt, but tbh it's the most benign tech debt you'll ever see.
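
A trivial sketch of that pattern (names invented), here in TypeScript:

  interface Config { strict: boolean; entries: string[]; }

  // The breaking behaviour lives under a new name...
  export function parseConfig2(text: string, opts: { strict: boolean }): Config {
    const entries = text.split("\n").filter(line => line.trim() !== "");
    if (opts.strict && entries.length === 0) {
      throw new Error("empty config");
    }
    return { strict: opts.strict, entries };
  }

  // ...while the old signature keeps working forever for existing callers.
  export function parseConfig(text: string): Config {
    return parseConfig2(text, { strict: false });
  }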

So anyway, my point isn't that we should update frequently or not at all, it shouldn't matter. My point is that semver gives inexperienced devs a pass for lazy development practices that cost everyone instead of eating the cost themselves.

Want to release a major version change? Go for it, but understand it's a fork and not everyone needs the special new features you think are so important.


Corollary: Do hard things more often, per Martin Fowler - https://martinfowler.com/bliki/FrequencyReducesDifficulty.ht...

Upgrade everything all the time and it will never be hard. You'll have full context for breaking changes, and the diff from A to B is always smaller than from A to Q, and less likely to break in strange and confusing ways.


In my experience, upgrade everything all the time only works if you can keep your dependencies to a minimum, which can be harder in JS/TS land, but not impossible. I used to think every line of code you write is a liability, but have come to realize that every dependency is also a liability. So it’s about balancing the two.


I'm in this camp. I ran into a recent situation where I could have used a drop-in security module if the project had updated the framework in the past decade. Instead we rolled our own, which is janky and took longer than it should have.

Debts have a way of compounding. Tech debt is no exception. Tech debt begets tech debt begets tech debt, and it will hang on your velocity like a ball and chain.


Dedicate 20% of developers' time to constant updates of all components for a decade, just in case you might need one drop-in security module ten years from now (if the company even lives to see that day)? Does not sound like a good illustrative argument to me, I'm afraid.


But it doesn't work like that. You might reserve 20% of the time to chores, but you don't have to use all of that 20% every week. Efficiency of maintenance will improve after you get your processes and tooling in order and everyone has had a bit of experience with them, and the leftover time can be used for whatever the developers feel most deserves the extra time.

The reason to do maintenance constantly is practice, which makes the difference between taking weeks or days vs. hours. When the time spent is evenly distributed, it's also less likely that an important task would be blocked by unavoidable maintenance.


This. Version pinning is just piling on the risk, and when a CVE is announced on the no-longer-supported version you’ve been pinned to for way too long then it’ll be a reactive emergency.


Keeping up to date also has issues in the JS ecosystem. There were a couple of recent examples of npm packages that were hijacked and new versions released with embedded malware.


Absolutely agree with this. Find rest while staying in motion.


Everything being on fire all the time makes your firefighting skills very valuable.


There’s something to be said for being on the oldest minor version that still receives patch releases (usually the LTS if there is such a thing). Unfortunately most FOSS libraries don’t have the luxury/resources to support parallel releases. So to get security fixes you need to keep somewhat up to date with other changes.

The worst place to be is having to fix a CVE in a hurry, but first having to upgrade your framework a few major versions including fixing some breaking API changes. I’d rather pay a small tax every month than have to risk those late nights.

Dependabot is great here, you can get updates for free, or at least preview if they are going to pass all your tests.
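
For reference, that whole setup is one small checked-in file; a minimal sketch for an npm project (directory and interval are just examples):

  # .github/dependabot.yml
  version: 2
  updates:
    - package-ecosystem: "npm"
      directory: "/"
      schedule:
        interval: "weekly"

Each bump then arrives as a pull request that your CI runs against, which is the "preview if they are going to pass all your tests" part.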


I guess enterprise Linux subscriptions are such a tax, right?

It pays for Red Hat/SUSE/Canonical to maintain an old stable version CVE-free for you, so you don't have to update so often.


Sure, or if you prefer: the price of Debian Stable is that you only get security updates and bug fixes until the next Stable release.

It's wonderful.


Except for chromium.


Chromium's a hard one for Debian. It is a big piece of software that often suffers from security issues due to its size, and it is hard to package according to Debian's policies. Its upstream ships with forked dependencies included, which goes against Debian's policy, but Debian doesn't have enough manpower to untangle this mess while keeping up with Chromium's upstream.

I wish they would just drop it from their repos and suggest users use flatpaks (or Nix?) or the proprietary variant Chrome (which is what most users likely want anyway, due to sync); it is a big liability to keep an insecure browser around.


and Firefox.


Oh right, yeah, that should also work fine. :)


I lived and suffered this tension between stability and security for years running a tech team. Staying on the upgrade treadmill while delivering actually important product features for the business. Hopping from LTS to LTS is a solid default strategy when you can use it.

So pardon the plug, but finding a happy middle-ground to exactly this problem for Django based projects is what I now work on with https://www.codestasis.com/

Projects that can't upgrade, because of the ensuing cascade of breaking changes and dev time needed, subscribe to CodeStasis to minimally update Django to new non-breaking patch versions.

So you can keep your trusty old version yet also stay patched and secure if you find someone to do the heavy lifting for you at reasonable cost, which I think we deliver.


Timely. I was recently force upgraded to Win11. Today in fact.

Last week I force-downgraded after I got an exception because it is my busy time of year. That exception was not respected and I realized fighting it weekly would be the same time investment as fixing compatibility issues.

"What compatibility issues? Win11 is fine, I have had no problems." My coworkers say.

Well, on day 1, two pretty important pieces of software crashed and exited on first run. And then the screen snipping tool failed to take a snapshot and helpfully suggested I reinstall the OS.

I actually really like the MSFT stack, but I know enough to avoid any totally new release for a while. I actually feel confident enough that I no longer try and skip major releases (like I skipped Vista, 8).


I loved and lived Windows for a decade and felt so cozy and at home in it, but when the first force-upgrade happened, I knew it was time to move along.

Just cannot accept things changing without my consent on my workstation.


Mac and Linux do the same thing. I know people who still prefer the OS X 10.3 and GNOME 2 GUIs, or for instance the way nytimes.com looked back in 2010. That's the problem with making GUIs the thing you love: some goons stop by your home every few years and toss up the furniture. Apple is the gold standard, since it pretty much looks the same as it did in the 80's and each major version tunes subtle things. Microsoft makes big changes to the look, but at the end of the day its substance is the same. Then there's Linux, where each GUI update is a radical break with tradition that continually reinvents its own identity.


> Linux do the same thing. [..] Then there's Linux where each GUI update is a radical break with tradition that continually reinvents its own identity.

If you're using a "mainstream" desktop environment (e.g. GNOME), then yeah, but that's not universally true. I've been using exactly the same desktop environment for something like ~12 years; I love Linux because if you set it up right it's essentially zero maintenance (significantly less than Windows and macOS), and almost nothing changes without your consent. The only forced change during those 12 years that I've noticed was the migration to systemd, and Firefox reskinning their GUI for no good reason every other release.


The forced changes to the init system and desktop GUI are far more radical than anything Microsoft or Apple ever did. The init system change was also due to the GUI people: GNOME told everyone they must adopt systemd or else you can't use GNOME anymore. It made people so unhappy there were forks, protests, and even suicides. If a system administrator woke up from a 15 year coma, they would have no clue how to use any of this stuff and would have to start over from scratch. How can we, as open source developers, present a better alternative to big tech products if we keep dividing and conquering ourselves? To think how unfair that is to the thousands of volunteers who worked hard to create these desktops and systems and treasure troves of Stack Overflow answers, which just get swept away two years later; it's a failure of leadership.


Bah. If a sysadmin woke up from a coma, systemd would be the least of their worries, since it has a comprehensive manual, working backwards compatibility for most standard interfaces, and is in most cases much, much easier to deal with than what was before it.

They might scream in horror at how containers often get (ab)used though.


More like compound tragedy. The world's largest search engine used to run on a single computer. Now the world's smallest app has its own kubernetes cluster.


Come on, let’s stop this bullshit about systemd. It was goddamn voted on multiple times by Debian maintainers, in a system that is markedly more democratic than anything we have in any country, and won with huge margins.

Also, previous incarnations were hard to maintain, had no logging before the mounting of filesystems, had an ill-defined service life cycle, etc. Booting is a hard problem. Having it spread all around the system in a million shitty bash scripts is a ridiculous idea. Make it declarative as much as possible and have it handled by a single core program. And systemd does these things perfectly; my only gripe with it is that it should not have been written in C, but such is everything in Linux land.


It's obvious you feel strongly about this from your language, but there is no need to call someone else's opinion bullshit.

The reality is that systemd's wide adoption has made many people unhappy, for many reasons, some outlined in the Wikipedia article[1]. Systemd is overly complex, to the point of being obfuscated; systemd has many interlocked dependencies; systemd takes control away from the sysadmin and puts it into a fat binary; systemd goes against the Unix philosophy of "do one thing well"; systemd creates a pattern of homogenizing Linux architecture, and so on.

Lucky for us, unlike with Windows and Mac, there is no "One And Only GNU/Linux Distribution", and instead there are many options and alternatives, many of which have not integrated systemd at all, or only ported small parts of it.

Every day I am ever so grateful for the miracle and gift of FOSS. Thank you. Gracias. Spasibo. Dyakuyu. Merci. Danke.

[1]https://en.wikipedia.org/wiki/Systemd#Reception


Wow look at the names on that list. None of them had a choice though since the decision was made unilaterally. They woke up one day and were told to hand over control of their boot, userspace, ssh auth, and dns to this new program with binary logs that speaks nonstandard binary protocols. Open source essentially boils down to free candy from strangers on the Internet, and the thing that's historically made that work is transparency. Without it, you've got a system that requires faith and is fueled by the fumes of trust painstakingly built by those before you. That's why the old guard is unhappy about it.


I really don't understand the systemd hate. It's fine. It works. I never had any problems with it, neither on my desktop, nor on the servers I maintain. People I personally know also don't have any problems with it.

> It made people so unhappy there were forks, protests, and even suicides.

...a suicide over a different init system? Seriously?


You could look it up on Wikipedia, as with most things one seeks to understand...

https://en.wikipedia.org/wiki/Systemd#Reception

Summary: It's overly complex, it has many interlocked dependencies, it takes control away from the sysadmin and puts it into a huge binary, it goes against the Unix philosophy of "do one thing well", it creates a pattern of homogenizing Linux architecture...


> Mac and Linux do the same thing

The earlier responses in the thread were talking about forced upgrades. I'm not sure about MacOS since I don't use it much, but most Linux distros do not forcibly apply upgrades like Windows does; you can continue to use old packages and even reboot the machine indefinitely. Sure, you might not get security fixes or keep unrelated packages up to date beyond a certain point, but that's not at all the same thing as updates being applied without actually being invoked by the user.


> I'm not sure about MacOS since I don't use it much, but most Linux distros do not forcibly apply upgrades like Windows does;

Eh, not very accurate in my opinion. If you want to use the latest software you are very much forced to upgrade Linux because they have no concept of separating the platform from the applications that run on it. Either everything is bleeding edge or nothing is. Or you compile things from source like it is 1979.

There are Windows programs released today that will run just fine on Windows 7. How many Linux programs released today will run on Karmic Koala without recompilation?


Karmic Koala is within the support vector of Actually Portable Executable. Since if you can get it to run on that, it'll most likely run on all the other distros too. Plus these binaries run on Windows 7, Mac, and BSDs too. They even run without an operating system. No need for recompilation. No virtual machine required. They aren't bloated either since the binary footprint starts at 12kb. See https://justine.lol/ape.html and https://github.com/jart/cosmopolitan Best part is you can still use the compiler that comes with your Linux. All it does is reconfigures the compiler so that, rather than producing a binary that can only run on one version of one distro of one operating system, your program magically becomes capable of running on all of them.


I'm not sure who you mean by "they", but there's a lot of difference between distros, and many different ways to put the pieces together (or leave them out)

Compiling from source is the most reliable way to run the latest and greatest version of a particular application that I've found to date, leaving out all the middlemen like package mantainers, whose competence I may or may not trust.

And generally speaking, I'd trust a several-decades-old technique much more than something just released.

As far as Windows software goes, I've found Wine to be a much more reliable platform for old Windows applications than Windows 7 or 10, although there's nothing better than emulated Windows 9x for running programs of that era.

(I make "any browser" websites, so running 1995+ software is something I do on the regular.)


I just looked up what Karmic Koala is, and it is Ubuntu 9.x.

I have not tried to use that recently, but I am quite happy running 14.x today, and it runs everything I need.

I use IntelliJ 13.x for my Git stuff, Geany for most of my text editing, a dozen different browsers at whatever versions they are for my Web things, and whatever versions of mpv, mc, LibreOffice, etc. it came with for what those programs do.

With only 1GB of RAM it's not always as snappy as I'd like, but I use the periods of swapping to meditate.

I don't have to use programs released today, only ones which I need today.


My point is that there's a difference between "you need to update to use certain things" and "your computer literally is forcibly updated even if you don't want it to or aren't ready". I agree that Windows is probably ahead in terms of backwards compatibility, but the first few comments in the thread were talking about machines getting updated without them applying the updates, which is not a thing I'm aware of happening on most Linux distros.


Force is a strong word, since Windows provides ways to opt-out of feature upgrades. It's also probably possible to opt-out of bug fixes too. If you want to put a rosy spin on things, you could think of it as a free system administration service. There's a lot of people out there who are working really hard, for you, to make sure it goes smoothly. It's also common for Linux distros to use the opt-out model these days too.


> Force is a strong word, since Windows provides ways to opt-out of feature upgrades.

I dunno, I've heard multiple reports of people leaving their machine alone and coming back to it having upgraded and rebooted into a new version, sometimes into an unrecoverable blue screen.

How would they have opted out of that?


Went through that with Win7 a few times. Eventually found a way to disable it.

In this case, the W11 upgrade was forced on me by my company via push.


Whenever there's a way to disable it, it's usually discovered after the fact, and then next month a new setting is introduced which must also be disabled.

Fool me once, etc.


People who still prefer GNOME 2 GUIs are using MATE; what is the OS X 10.3 lover to do?


I googled for some screenshots and MATE doesn't look like GNOME 2 to me. See https://www.server-world.info/en/note?os=CentOS_5&p=x&f=1 and https://int3ractive.com/blog/2019/things-to-do-after-install... I found some screenshots of MATE desktops from ten years ago. Those did look like GNOME 2. However, it appears that since then MATE has chosen to embrace a new identity too.


Look instead at MATE's homepage: https://mate-desktop.org/ Or their screenshots. For Ubuntu MATE, select "Traditional" in your link's step 1. You can also look at the final screenshot of https://learnubuntumate.weebly.com/traditional-menu.html

Sure, MATE has evolved to give more options, and distros seem to like defaulting its look to something non-traditional, but the traditional look is still very GNOME 2 and very much alive. I've got an old desktop from 2009 that's run Gentoo the whole time; I never upgraded to GNOME 3, but I did switch to MATE when it came out and haven't had to mess with it since, apart from trying out different icon sets or other small theme changes. It looks basically the same as ever, even compared to my old laptop screenshots from 2007 -- I still have my wobbly windows from Compiz (fka Beryl) too.


TBH, both Ubuntu and Gnome have completely lost my trust as far as making stable and predictable environments.

The Mac-like global menubar which they've grafted on without being able to adopt the applications is an atrocity IMO.

I consider Windows 95 / NT4 / 2000 to be "peak desktop GUI" and use distros which allow me to emulate that look, feel, and behavior. I use a distro until it fails to deliver that experience and then keep trying other distros until I find another that has not yet rotted out.

So far, I have only had to switch distros a handful of times.


Windows 2000 was a thing of beauty. It was Peak Gates. What do you think of SerenityOS? It's written by this guy from Apple who ended up leaving his job so he could do the same thing to Windows 2000 that Steve Jobs did to Mac OS 9. Now that's a dangerous idea.


I'm a huge fan of SerenityOS and Andreas, but I've not had a chance to try it yet.

I used Windows 2000 starting three betas before gold, downloading each build over dialup from AOL warez scene releases, and didn't stop until a while after they stopped patching Pro. I don't recall having any issues. What an amazingly solid OS.

My choice today is Xfce with the Chicago95 script, I can barely tell the difference.

I'd say Mint has the best default tuning for Xfce, with Manjaro I have to add the fewest additional packages on top of the base install, and Fedora is somewhere in between.

Thanks for the memories :)


> [...] Linux do the same thing.

My Arch + AwesomeWM setup I've been running for the last 5+ years would disagree with you. That's kinda one of the reasons I went with this setup: I have total control over all updates, installed software, etc.


Mac does it, which is why I also quit mac.

Some GNU/Linux distributions do it, but not all of them. And I have way more control over it when it does happen.

Apple was pretty solid until around Mountain Lion. Once I saw them messing with "Save As", I jumped.

I've had good experiences with LXDE and Xfce so far on multiple distros.

Anything GNOME is no longer in the running, however.


Looking at Google Trends for GNOME, KDE, LXDE, XFCE from 2004 to present is interesting. https://trends.google.com/trends/explore?date=all&geo=US&q=%... Back then GNOME and KDE stood head and shoulders over all alternatives, whereas over the last twelve months all the Linux desktop choices appear more or less in the same league. Open source sort of behaves the opposite way from markets: instead of a shakeout we get a shake-in.


Interestingly, just removing the US location restriction makes it look quite different, with KDE being far more frequently searched for than the others at the moment. Searches for Linux desktop environments look to have declined a lot in total since 2004, on Google at least. https://trends.google.com/trends/explore?date=all&q=%2Fm%2F0...


macOS does not force updates or upgrades.


>I loved and lived Windows for a decade

I know it's probably a typo, but I can't help but imagine the wonderful OS that Windows 6 never was :)


Not sure where the supposed typo is, I started with 3.11, then used 95, NT4, 98, 2k, and around XP/Vista is where I realized it's time to move on because of the increasing upgrade nags and the UI being changed without my consent...

I still stuck around until Windows 7 because I had just grown so into it, was very comfortable with keyboard controls, etc. Then a computer I knew upgraded itself to Windows 8 without being asked to, and started booting into a bsod...


There was never any free and forced upgrade to Windows 8, I assume you mean Windows 10?


You may be right, perhaps it was Windows 8 to Windows 10. But even in the days of XP, upgrade nags were already in effect, and one keypress or mouse click at the wrong time (when the popup appeared) could send you down that road. Unless you already anticipated it ahead of time (from past experiences) and went through the settings and disabled automatic update checking.

The overall intent and attitude matters more to me than the details, and the general intent of Microsoft (and Apple, and Gnome, and Ubuntu, and many others) seems to be "we know how your desktop should look and operate better than you do."

This is completely the opposite of what I want, which is to have a workstation which is configured to facilitate my work, where nothing changes without my explicit REQUEST.


I hate updates with the fire of a thousand suns. I'm still on Windows 7 and will stay on it for as long as possible. Parts of my PC are from 2010, the case from 1990, the screen from 2007. And you know what? It works.


Might come in useful to others on Win11:

To get the snipping tool working, close the snipping tool, manually set the date to around the start of October. Reopen the snipping tool and it should be working. The date can now be set back.


The mind boggles at imagining the code that could possibly be responsible for this behavior.


It was actually caused by a digital certificate which expired last month

https://blogs.windows.com/windows-insider/2021/11/04/releasi...


I went to Windows.old and copied out all the executables in System32, as well as the locale folder (en-US for me). I pinned snippingtool.exe to the start menu, and then uninstalled the W11 snipping tool.

My snipping tool again works, and exactly how I need it to.


I upgraded to Windows 11 and the volume bar simply does not show when left clicking on the sound icon on the lower right of the task bar.

No amount of things I've tried makes it work, so to change the volume I have to either use keyboard shortcuts or open the volume mixer in the control panel.


Well, volume ain't that bad — I can't open Windows Defender after the update. It just acts like the app was uninstalled and tells me that the .lnk is defective.

Meanwhile Defender itself keeps running and preventing me from installing the latest qBittorrent, because it decided that it's malicious, and to override that I need to open Defender...


Why not just get LTSC? You get zero feature updates (ie. the breaking kind) pushed on you, but you still get security updates for up to 10 years.


LTSC is only in Windows Enterprise, but what any Windows Pro user should use if they want to avoid the bleeding edge is to switch to CBB, which is generally a lot more stable than the consumer releases of Windows (which effectively are public betas during their first few months).

https://social.technet.microsoft.com/wiki/contents/articles/...


Last I checked, you need an enterprise license to get the LTSC version of Windows. Individual consumer users can't just purchase it on their own.


Getting an enterprise license isn't hard, see: https://community.spiceworks.com/topic/2167558-explicit-inst...


Ignoring the licensing issues, LTSC failed for me. Just literally wouldn't boot one day.

It's probably something hardware related, but the Pro version works great so far, even if it ignores update time and sticks my files into a black hole because it thinks it's a virus.

Hopefully the last version I'll use on bare metal as I move away from soldered processors and more centralized garbage.


AWS: “Postgres 9.6 is old. On January 22 we will forcibly update your instances to 12. We hope you noticed this alert. We certainly didn’t email you about this. You’d better get off your ass and test/fix your clients for any potential issues.”


I think they email the account owner, because in our case he forwarded it to the tech team. So if he had chosen not to forward it to us, we could have been in the same situation as you. Just wanted to mention that they actually did send an email about that issue. At least to somebody :D

We did our upgrades a few months back (9.6 -> 13) and luckily in our case it wasn't much of a hassle. Just finding the correct upgrade path with PostGIS took some investigation, but overall the upgrade documentation was good.


whoa, do you have a link?


https://imgur.com/dCxh2Dd (sorry. Looks terrible on phones)

https://forums.aws.amazon.com/ann.jspa?annID=8499

So obviously I was embellishing the language, but the sentiment remains the same. I found this because I checked the RDS admin panel, which I rarely do. I didn't get an email. It was very alarming to discover and makes me anxious about what other forced upgrades I'll miss.

I appreciate the point of this, but I think forcing upgrades is absolutely the wrong way to do it. Scream at me all you want, but don't force my stack to mutate and potentially break services.


Easy to blame AWS, but as the post you linked said, Postgres 9.6 is no longer going to be receiving updates from 11 November.

What do you want AWS to do here? Keep running software that won't get security updates? That seems a bit wild to me.

Communication could have been better, but there is no universe in which a managed database provider should be expected to continue to maintain instances with discontinued versions of software.

Why were you still running 9.6 anyway?

https://www.postgresql.org/support/versioning/


> What do you want AWS to do here? Keep running software that won't get security updates? That seems a bit wild to me.

PostgreSQL is open source, so they could keep patching the old version with security fixes.

Or... they could keep using just the community-supplied free-of-charge version and pocket all the money from not maintaining security patches themselves.


They are providing easier maintenance and monitoring for open source DBs. You can always avoid RDS and install Postgres manually on EC2, if you so desire.

I'm not saying RDS couldn't be better, but I wouldn't expect them to maintain unsupported versions of 3rd party software.


I agree AWS should be contributing back to the open source projects and they are listed as a 'sponsor' (though not a major one) on the Postgres website.

https://www.postgresql.org/about/policies/sponsorship/

But AWS should not have to take responsibility for providing indefinite updates to every version of every managed open source project it operates. The only way I could see this working would be if AWS charged the holdouts the cost of keeping them supported.

However, performing RDS Postgres upgrades is a relatively quick and painless process. If a company doesn't have the capacity to do that every five years, then it shouldn't be running its own infrastructure.
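
For what it's worth, a minimal sketch of that upgrade with the AWS CLI (the identifiers and target version here are made up; for a major version jump you would snapshot first and test against a restored copy):

    # take a manual snapshot you can restore if the upgrade goes sideways
    aws rds create-db-snapshot \
        --db-instance-identifier mydb \
        --db-snapshot-identifier mydb-pre-pg13

    # then kick off the engine upgrade itself
    aws rds modify-db-instance \
        --db-instance-identifier mydb \
        --engine-version 13.4 \
        --allow-major-version-upgrade \
        --apply-immediately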


> The only way I could see this working would be if AWS charged the holdouts the cost of keeping them supported.

That actually sounds like a great idea. They could charge more for use of older versions, so that people could calculate their tradeoffs, and migrate when they decide themselves.


At some point the alternatives are force-updating your DB or shutting it down. One of those at least has a chance of keeping your service online. I agree the lack of communication is pretty bad though.


The lesson here is to use proper hosting instead of AWS or some other fart cloud.


I've got some sympathy for this perspective.

It's frustrating when you update in order to get the latest security updates - and you get forced to do a bunch of pointless busywork because some asshole has made some arbitrary change like deciding that 'which' is deprecated now.


> arbitrary change like deciding that 'which' is deprecated now.

I kind of hate that this is going to live on as some example when the actual event was somebody proposing it, it failing in some builds on testing, and then a vote deciding against it. It was an example of good project governing preventing breakage, but for some reason it's already being remembered as the opposite.


The Technical Committee had to step in and vote, so in that sense the last resort worked.

But a good migration is a quiet migration. When internal Debian discussions reach the user's stderr and cause builds to fail, the system has failed.

There's only two ways to remember this sort of kerfuffle. Not at all, or as a lesson in deprecating things smoothly.


Didn't that only happen in testing? Isn't that the point of a testing release?


I'm still bitter about ifconfig.


When I finally got over that, I got to mourn netstat and learn ss.


Wait until you also have to mourn nslookup/dig and learn resolvectl...


Sad times... "drill" is the replacement on Arch.
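
For anyone keeping a cheat sheet, the rough old-to-new mapping looks something like this (resolvectl assumes systemd-resolved is in use; drill ships with ldns):

    ifconfig             ->  ip addr show / ip link
    route -n             ->  ip route
    netstat -tlnp        ->  ss -tlnp
    nslookup example.com ->  resolvectl query example.com
    dig example.com      ->  drill example.com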


> Docker Desktop doesn't let you decide whether you want updates or not, unless you pay them

This made me uninstall Docker Desktop.


They actually just reverted this change within the last week (of course with accompanying “we love listening to your feedback!!!1!” eyeroll inducing messaging).


What I always say about this kind of thing is not "It's ok now because they un-did it." but "It's still not ok because they tried."

If they are the kind of people who would try something, then they are still the same people and that problem did not go away.

They will try something else again, and may in fact already be failing to work to my advantage right now in ways I just can't see.

Once you know that, I prefer to just live without whatever the awesome thing is; somehow I will survive.


So basically you must be 100% perfect all the time and are never allowed to make any mistakes.

This is one of the attitudes that makes the internet so toxic IMHO.


> This is one of the attitudes that makes the internet so toxic IMHO.

Right up there with removing all nuance from a discussion and attacking a strawman...


On the individual level you can make mistakes. On an organizational level, allowing this kind of mistake means they discussed it and nobody found a problem with it. It's a sign of dysfunction.


It's a sign of Docker desperately trying to find a way to make money and survive.


Not my problem, and not a valid problem in the first place, and not the charge against them.

There are an infinite number of ways to make enough money to survive.

You can sell your work honestly, without artificially withholding work that is already done so you can sell it a million times over, and without getting people to pay by artificially creating, or at least artificially preserving, a pain point and ransoming the salve.

That is not simply doing work and paying for that work.

If a thing is at all useful enough that anyone even wants to use it, then there are a million businesses that would love to pay you for expert installation and training and support of perfectly free software.

Ahh but that doesn't scale. You can sell your time to a few people and live very very well, but you can't sell your time to a billion people.

No one is "trying to survive" in this story. What a strange and incredible thing to even try to say.


It's almost like they probably should have figured out something so important by now. I guess they didn't have a plan B after they didn't get bought out. Sucks to suck.


"So basically you must be 100% perfect"

Yes. That is exactly what I said.


There's a big difference between being mad at this particular incident and demanding 100% perfection.

Dealing with a normal mistake isn't a problem because I can just opt out. They removed that ability here.


The auto-update “feature” was only ever required so they could get everyone onto a version that they could remotely shut down to force subscription revenue. It makes perfect sense that not being forced was a paid “pro” feature before: you were already doing what they wanted. Now everyone is on 4.0 and they can turn it off again.


That’s great to hear, I’m one of the people who complained loudly about that unacceptable behavior. Guess I’ll be reinstalling.


Welcome to Docker, I love you


Most of my experience with Docker Desktop is on a Mac behind a corporate proxy. I swear that with every update they either removed my proxy settings or changed how docker build and the docker runtime inherited them. It was maddening, because inevitably everyone on the team had different versions and therefore different behaviors. It took away the whole point of having a common tooling container.
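
If it helps anyone in the same spot: the proxy settings that docker build and containers inherit can also be kept in ~/.docker/config.json, which at least lives in a dotfile you control and can diff after an update (the values below are placeholders):

    {
      "proxies": {
        "default": {
          "httpProxy": "http://proxy.example.com:3128",
          "httpsProxy": "http://proxy.example.com:3128",
          "noProxy": "localhost,127.0.0.1,.internal.example.com"
        }
      }
    }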


That was so lol I had to post it to our #random and probably should have to r/assholedesign


I upgraded my Ubuntu distribution last week and my old Xerox Phaser laser printer stopped working over the network.

Something like this should never happen. I hate spending my weekends troubleshooting the Samba configuration. Maybe I will connect the printer to a Windows VM.


I noticed my wifi router had an "update firmware" option. Hmm, I said, that sounds cool. Go to the trouble of looking up the manufacturer's support page for my model, download the file, poke it into the update box, click go.

Now all my IoT devices on the 2.4 GHz band fall off the network after ten minutes and have to be manually restarted. I'm sure if I spend the time to look into it I'll find some fascinating difference of opinion regarding a detail of the 802.11b spec between the manufacturer of my very cheap wifi router and the manufacturer of the very very cheap wifi radios in my internet-connected thermometer.

Instead of doing that, I factory reset the router, and spent fifteen minutes restoring various configuration details from memory.


This is why I badly want all updates and rollbacks to be as declarative and simple as Git commands.

"Be on this version. Now."


For workstation use I most definitely don't. Linux 5.13 has several regressions that make it unbootable on my system. Sure, I can btrfs/nix/whatever rollback to 5.12 pretty easily... and then what? Stuck on 5.12 for the rest of my life? Hope that someone fixes it by chance?

What i mean is that I value decent changelogs, ability to diff changes between package versions, etc. much more. When a package regresses on my desktop, my next task is sadly to try to debug it.


?

Well if you wanna debug it, go back to 5.13. I just mean I want Nix or Guix style declarative systems.
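
Something in that spirit, as a rough NixOS sketch (assuming the standard system profile location):

    # list the system generations you can jump between
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

    # "be on the previous version, now"
    sudo nixos-rebuild switch --rollback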


Heaven!


Ubuntu upgrades are often a mess, even with LTS versions. Some issues I've encountered: 1) the system lost its default route after an upgrade, 2) network interface names changed and all connectivity was lost, 3) the system became unbootable (UEFI boot order changed). This was all on a physical machine, and loss of connectivity meant having to go to the console.


Sorry, newest Windows also has printing broken. No luck there.


This sort of thing makes me want to give GNU Guix or NixOS a try. Not sure which yet though. Last time I checked both, they were limited with regard to disk encryption and partitioning and such. I came up with a possible solution for GNU Guix, but I never put it to the test.


I have had much better luck exposing printers over smb from Linux than from Windows. My in-laws were visiting and couldn’t print to our network printers directly from Windows, so I had to add them to CUPS and smb, which was painless.


I literally never update anything unless it is not working. Quite happy here with Firefox 66.x on Ubuntu LXDE 14.x, which is what happened to come with this particular device.

I have an iPad running iOS 8.x, which I'm also happy with, especially when I do testing on the clusterfucks that are later iOS and Firefox releases.

Browsing a handful of reputable text-based websites from behind a NAT, I don't see the problem. (And I feel the same way about HTTP, which is faster, more compatible, and more accessible.)


I really appreciate people like you reminding most everyone that new != better. I live in Japan, which people usually mock for still using fax machines or keeping "the old ways" in many aspects of society.

There are important positive things that people don't realize are lost as we "modernize" society.

For example, these days the TV spies on my usage and sends that data back to the manufacturer, which then sells it to advertisers. Every update is aggressively pushed to me, and after I accept it, I notice more ads (rebranded "you may enjoy" or "now trending") on the home screen.

Another example would be how awesome paper is. It displays information without requiring an energy source. It can be folded and unfolded. That may not impress you until your phone battery dies on a trip, or at the airport/stadium where you need to present your e-ticket.


> I live in Japan which people usually mock for still using fax machines

Greetings from Germany as well...


>For example, these days the TV spies on my usage and sends that data back to the maker company; they will then sell it to advertisers. Every update is aggressively pushed to me, and after i accept it, i notice more ads (rebranded "you may enjoy" or "now trending") on the home screen.

Have you considered getting rid of that TV and TV in general?


This works at small scale (read: a small startup, or your own machine). It begins to fall apart once you have dozens of services, each deployed at some time in the past 5 years, and no clue whether any of them are safe to update, or even how to validate them.


Yeah, anything at scale I build with a bare minimum toolkit, and only use conservative tools in the process.

By conservative, I mean something which would still work today if I had written a script for it 10 or 20 years ago.


How do you make sure these are the reputable websites and not some interceptor, when using a plaintext protocol?


I'm lucky/blessed/fortunate enough to live in a place where it is unlikely and use reputable ISPs.


Jesus, you are technically illiterate if you use that old version of a web browser, I’m sorry.


Do you have any actual arguments to back up your opinion?


Copying my previous answer to a similarly bad idea:

“ Browsers run untrusted code around the clock, which gets JIT-compiled to machine code through a very complex and bug-prone process. Add to that that desktop OSs are quite lacking when it comes to sandboxes, so even with browser sandboxes, the potential for serious damage is quite big. So, staying ahead of bugs is a must.”


It's impossible to stay ahead of the bugs, because they appear ahead of the patches, and there are undoubtedly many unpatched bugs out there.

But if I only visit sites where this is unlikely (and don't allow JavaScript, which you seem to have missed) I am much safer than when browsing willy-nilly with the latest patches.


Browsing the web with JS disabled on an up-to-date browser is still much safer. But you do you.


It's safer if it is an option.


This is Hacker News - someone here is probably reading this on a PDP-11.


The thing is, it's not so old as to lack the advanced features that actually provide an attack surface, yet it's old enough to be a pile of unpatched vulnerabilities -- old enough that one simply should not run it.

I have nothing against running lynx where you don’t even have js support.


Never updating and always updating are just two different ways of sticking your head in the sand


It's more like sticking your head in a river.

No updates: ignorance is bliss. Until you need to breathe -- then you die.

All updates: maybe I can just drink the whole thing...


At one company there were quite old Linux boxes that were never updated. They never caused any problems; the software on them kept chugging along just nicely.


This is only somewhat related, but I wish semantic versioning had settled on four fields instead of three. A transition from 13.1.2 -> 14.0.0 could be a major update that revamps the API, or a tiny incompatible change. Another field at the front would fix this: major.breaking.feature.bugfix. It would help with the "zero-based versioning" problem where projects sit at 0.y.z forever because there's an aversion to frequently bumping the first number.


"Breaking" is undefinable. I remember reading about some user complaining that a small bugfix broke their work setup because it fixed a bug that used to make the CPU go 100% when the spacebar was held. The user would hold it with a weight and get the CPU to make heat that way. The small fix broke his experience.

There are many other examples of this. Breaking compatibility by fixing bugs people rely on for instance.

Any change can break something for a user downstream. It's a very subjective evaluation for the producer to imagine potential impact to their consumers. It's easier if the relationship is rich and exclusive. In open-source where people do it on their free time and have thousands of consumers with widely different interests, it's pretty much impossible to label the release in a way that's conveying the right message to every user.

Semver, the way I see it, is just a way for the producer to subjectively label the amplitude of changes. The consumer should then ideally be familiar with the producer and get a sense of how they work and what they perceive as big. Still, at the end of the day it's a very subjective judgment call system that doesn't offer real guarantees to consumers. If your software needs to be stable, don't upgrade, or spend the time to review the changes in the dependencies. The version numbers are no guarantee.


> "Breaking" is undefinable. I remember reading about some user complaining that a small bugfix broke their work setup because it fixed a bug that used to make the CPU go 100% when the spacebar was held. The user would hold it with a weight and get the CPU to make heat that way. The small fix broke his experience.

You confused XKCD with reality: <https://xkcd.com/1172/>


There was an old story about a multiuser OS where the user would hold down a key to get a bigger CPU slice during compiles… but my Google-fu is failing to find it.

The idea was that the OS gave more cycles to interactive sessions … something like that. It may be apocryphal.


Oh, that's where I read that. Thanks!

I believe the point I made still stands though.


If the software you are using auto-updates and you lose business or esteem of peers -- it's YOUR fault.

Allowing most software companies to update anything on a running, functioning, work-related machine that you use to make $$ is ASKING FOR IT. WHEN it breaks something, that is your fault for being so stupid.

I update software in most cases by installing it on another machine/device and then once it is confirmed to work, switching devices and wiping the former-work-device.

Yes I have more than 2 of everything critical for making $$.

Yes I filter all my inbound and outbound network traffic and default deny, at home and on the road

Software that prevents you from disabling auto-updates is a virus.


> WHEN it breaks something that is your fault for being so stupid.

Sorry, this one raises my hackles. It's exactly such a user-hostile worldview that makes everything suck. It's just more victim-blaming and elitist tongue clicking that helps absolutely no one.

Everyone is stupid when it comes to software. There are hundreds of millions, if not billions, of lines of code, written by tens of thousands of different people, with myriad internal and external complexity, all breaking and falling apart at the same time. It is literally beyond human comprehension all the niggling details that could go wrong.

I whole-fist pushback against this "oh you should know what you are doing with metric asstons of other people's code". Uh, no. That's the attitude of unserious people who want to ship garbage and make it users' problem.


But do we even disagree my friend?


If you're running software maintained by someone else and you don't let them do that, and there's a security or major bug fix and you lose business or esteem of peers -- it's YOUR fault.

Ignoring upstream security fixes on a work-related machine that you use to make $$, is ASKING FOR IT. WHEN it breaks something that is your fault for being so stupid.

Neither of these extremisms are helpful. It's clearly more nuanced than any of this.


Of course it is. Context matters. I was trying to keep with the spirit of the article: 'Here's a fair warning: this article is reductio ad absurdum, therefore you shouldn't take it as gospel.' Usually though, in my experience, if you also control the network, then most security updates can wait to be tested on a non-production machine. Also, it helps to never ever use Windows.


This resonates with me for a LOT of reasons but I take a very different approach. I try to keep just a few dependencies and keep them all up to date. For most updates I can read every line of updated code. I learn a lot, get all of the security patches, and sometimes I realize I don’t need a dependency and I remove it. I’m always trying to take small calculated risks. I have great monitoring and rollbacks are easy.


Having two of everything is actually a pretty decent idea.

Part of the fear of updating though is the time sink.

Even if I attempt to update one Mac laptop to the new version (of which I believe there is a new one just released; it doesn't seem long since I last updated…), knowing that I have a safe backup, I dread the thought of spending hours on something that _should_ be working but is now broken. It can be infuriating. Especially when it's a pattern/way of working you have become so accustomed to.


Having two servers with an unpatched CVE 10/10 vuln will just get both pwned in no time.

Or just one, exposing your data in a ransom attack.

Dependency and update management is hard. Welcome to IT.

From my experience, extreme viewpoints and religions are convenient in the way they have answers to all hard questions in life that are simple, clear and wrong.

If you like simple and correct answers, you're usually better off choosing simple questions instead.


Unpatched? Not necessarily.

Unpatched and unmitigated? Yes.

Taking the time to build “defense in depth” into the architecture has saved my ass on many occasions.


dear raul, did you read the article? 'Here's a fair warning: this article is reductio ad absurdum, therefore you shouldn't take it as gospel. '


On "cloud" servers I usually do a snapshot before the upgrade. That way I can revert to it in a few minutes.
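
As a sketch of that habit with the AWS CLI (the volume ID is a placeholder; other clouds have equivalent snapshot calls):

    # snapshot the root volume right before running the upgrade
    aws ec2 create-snapshot \
        --volume-id vol-0123456789abcdef0 \
        --description "pre-upgrade snapshot"

    # if the upgrade breaks, create a volume from the snapshot and swap it back in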


"Java over Go" seems kind of weird. Go has been around for a decade and it seems like code built on 1.0 should still work, if you're not too crazy about dependencies?


Yes, it's a terrible example, directly contradicted by the graph he gives above of everyone stuck on Java 8. Go, after 10 years, is still on version 1, is likely to remain so, and takes this problem of breaking updates really seriously.


Pretty sure that decision is based on the standard library size and package repository culture, not language updates.


Again, those are reasons to use Go over Java. The Java standard library is a bit of a mess and far too large for human comprehension, and personally I prefer the Go package culture of never breaking HEAD; it has led to very stable builds IME.


IMO the Go libraries are less extensive but more focused than Java's, and the stdlib covers most of the bases for web apps very well.


“Angular over React” is also a bad example, because in my experience Angular upgrades take more work and are needed more often.


I identify heavily with this, but also the inverse -- always update everything (all the time). I don't run Arch anymore, but one thing I did enjoy about it as a daily driver was that I was constantly updating my software. The pain is in the middle, where you update things once every year or two (or when forced) and there are hundreds or thousands of new changes that break your stuff.


I feel the pain, seeing non-trivial updates for Ruby, Elasticsearch and Postgres (thanks AWS) this year.

Can not agree with the React part though.

They keep so many things backward compatible. React 16 is 3, 4 years old now? I've updated a pretty big code base from early 16 to the latest 16 version painlessly - like in a day. Used some of the available codemods for some really old parts of the project. It worked flawlessly. Currently there are console warnings to fix certain things and that is it. No breaking changes, no nothing.

Now, React 17 will be a problem - there are lots of 3rd party UI libs that will not work with it. But I think we are pretty well set to use React 16 for the next couple of years, making it work for 6, 7 years total. I think that is a century in terms of FE and is great.

Also, "...React pulling a sneaky on everyone and introducing hooks, which was a pretty bad move, especially if you needed to migrate to them" - just don't use hooks then - no problem.


6 years being like "a century in terms of FE" is the core objection being made in both the article and the comments. You're right on every point, but I can't help but feel like you're accepting a state of affairs that should be unacceptable.


Sort of off topic I guess, but: if I wanted to learn react nowadays (coming from extensive backend experience but not much in the front end) where should I start?

Speaking from ignorance, it seems like React has changed a fair amount during its lifetime, in terms of good practices, features available, etc. Where can one find resources that are both complete for a beginner and not outdated?


Tbh, I think the tutorial on React’s site itself is pretty good at making an introduction to the framework and the React way of thinking. It’s the one here: https://reactjs.org/tutorial/tutorial.html

As for changes during its lifetime, I would say that it is fairly on par for most web-focused frameworks, both front and backend. You could look at how ASP.NET has changed over the last few versions for example and see it being similar to React’s changes. The largest changes in React recently have been a greater focus on moving from syntactic sugar JS classes to pure functions, and React Hooks that go along with that. There’s definitely a shiny things syndrome you get if you just google for React tutorials, which is a bit annoying, but the official docs are pretty solid, and a few of the core devs are great to follow from a philosophical standpoint (like @dan_abramov).
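
To make the class-to-function shift concrete, here is a tiny counter written both ways (illustrative example code only, not from any particular tutorial):

    import React, { useState } from "react";

    // Older style: a class component with this.state / this.setState
    class Counter extends React.Component<{}, { count: number }> {
      state = { count: 0 };
      render() {
        return (
          <button onClick={() => this.setState({ count: this.state.count + 1 })}>
            Clicked {this.state.count} times
          </button>
        );
      }
    }

    // Newer style: a function component using the useState hook
    function CounterWithHooks() {
      const [count, setCount] = useState(0);
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }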

Hope that helps!


create-react-app is a good way to create the necessary scaffolding.

There’s an O’Reilly book called Learning React (2nd Edition) which is great. It brings you up to speed on the history of the framework as well.


I can second that book, speaking as someone who also hasn't really programmed in JS in a while.


The new React documentation page[0] is still in beta but worth checking out, since the current documentation's code examples still use class/component-based code, which is not so common since React Hooks were introduced.

[0] https://beta.reactjs.org/learn


What are the benefits of updating frontend packages? Are there security vulnerabilities that I should be worrying about if most of my npm dependencies don't get updated?


Android: We update your system until it's too slow to use.

Also, Android: One day we will stop giving you updates, so your apps can't talk to new versions of online services anymore.


The second is not so much a problem with Android as the horrible intersection of OEMs with no reason to support old devices, Linux baking drivers into the kernel, and hardware manufacturers releasing closed source blobs instead of OSS drivers.

Newer versions of Android have done a lot to decouple the device tree from the rest of the OS so you can update without OEM involvement.

Google also moved a lot of functionality into Google Play Services that updates over the air (but that's a negative for some people)


Android, iOS, macOS, Windows: Newest version only works on newest hardware. Old hardware not supported anymore.

So, for old hardware:

- Install Linux to desktop/laptop computers

- Install Ubuntu Touch to smartphone, if available


I am anything but an Apple fan, but the newest iOS and macOS do work on even 5-10 year old hardware.


Android: if you have a hobby app in the play store we’re gonna make you jump through stupid hoops to keep it alive every year or so (I just let mine die)


This is why I like running LTS distributions where versions are fixed and security patches are backported. Unless you install from 3rd party repos chances are very good that updates will be seamless.
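
On Debian/Ubuntu that usually pairs well with unattended-upgrades restricted to the security pocket -- roughly the stock setup below, though the exact origin strings vary by release:

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    # /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        // deliberately no "-updates" or "-backports" here
    };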


I run a very successful SaaS on Angular 1.8. No need for shiny Angular v659, or React, or Svelte, Vue, whatever. Angular 1.x, Python, and Flask is all you need to build a business with 7 figure ARR.


I think I'm on 1.4.something

It works, I'm not all that impressed with Angular 47 or whatever they are up to now, nor React, and, to repeat myself, it works.

When you have tens and hundreds of thousands of lines of code and only a handful of people, it ain't worth it to rewrite everything just to make it work the same and look the same.


> Use maintenance mode software:

> Angular over React

author lost me at this point


> Actually, i don't have enough time to do my day job, learn new technologies [...]
>
> Ergo, i cannot update. Ergo, companies cannot ship features AND handle all of the updates [...]

Your time is limited and you can't get more of it. That isn't the case for a company. There, time == money, because if their employees don't have enough time to do all the work, they can just hire more to do the rest. But because they're only interested in growth and not sustainability, most of that money goes into developing new features, not keeping up with security updates. The vast majority of large companies could very easily hire one engineer and task them exclusively with updating legacy dependencies. And the smaller ones could instead re-task half their dev team for a month each year.


Here, use this opinionated software to set up a React project and be frustrated about the number of dependencies, instead of just setting up a project myself! I just tested a base TypeScript React project, and you only need 14 packages in total (typescript, react, react-dom and what they pull in) to build the project. Wow!
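
Something like this is all I mean -- a bare-bones package.json sketch (names and versions are placeholders, and the @types packages are only there if you want TS to know about React):

    {
      "name": "bare-react-ts",
      "private": true,
      "dependencies": {
        "react": "17.0.2",
        "react-dom": "17.0.2"
      },
      "devDependencies": {
        "typescript": "4.4.4",
        "@types/react": "17.0.34",
        "@types/react-dom": "17.0.11"
      },
      "scripts": {
        "build": "tsc"
      }
    }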


I tend to agree with the premise that, in general, updates aren't something to race out and apply. Security updates, yes. Updates that are required because of API changes, yes. Other ones are often more trouble than they are worth.


Like the author mentions in the backporting section, if you don't update you'll fall behind the maintained versions, and then you don't get security updates and upgrading becomes a huge chunk of work that has to be tackled as a giant step. It depends on your priorities and timeline, but not updating is the most straightforward example of creating technical debt.


Conversely, not updating until you can't not update anymore is often the optimal approach, because when you're eventually forced to update, you deal with potential breakage just once. The amount of hassle around updating does not scale with the amount of updates you missed.


You deal with it just once, but you'll still have to more or less deal with every major break that would have been easily detected if the patch weren't huge.

Sauce: had to deal with that recently. It was not pleasant.

I'd rather deal with 50 points of failure spread over 5 years than 50 points of failure once. Far less stressful, far easier to diagnose.


The counterpoint is, new releases come with new bugs you may now hit, and waiting could mean someone else hits them first.

Besides your points, it also sucks if you hit an old bug that's been fixed yyy versions down the line and you can't get there reasonably

(I tend to agree with many small chunks than a couple huge ones)


Hilarious post, as expected from this camp. But there is some truth to it. E.g. we explicitly run the equivalent of apt update -y && apt upgrade -y on our test builds to see update errors early. Only then do our work Docker images get upgraded.
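
Roughly, a sketch of that idea as a throwaway canary stage (not the actual setup, just the shape of it):

    # Dockerfile.update-canary -- built in CI before the real images get bumped
    FROM debian:bullseye

    # surface any packaging breakage early, on a disposable build
    RUN apt-get update && \
        apt-get upgrade -y && \
        apt-get dist-upgrade -y

    # ...then the test suite runs against this image before the work images follow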

Of course those folks don't have these luxuries. npm upgrades without testing, dependency tracking in XML, closed source deps, no symbol versioning, loose dll hell.

But even with simple glibc binaries I have to maintain both up-to-date and LTS (i.e. outdated) versions because of the recent fstat and time64_t changes. Then you start appreciating Go. And Podman.


Interesting read, personally I disagree with the author on the risk-benefit tradeoff of having lots of updates.

Something worth calling out though - the author appears to have completely unironically updated their own post.


Few interesting areas:

- JS dependencies are a problem because of the sheer number, which is really because of all the authors of the dependencies and how they update, backwards compat or not, etc. If you had 10 big things by 10 authors, they would coordinate better than 1500.

- The Lazarus example is interesting, but "trusted components" also means walled gardens: who gets to choose? Is there competition? How is in vs. out decided? The logical conclusion to much of this ends up with a central community package manager like Maven Central.


What about trying "planned obsolescence"? Hopefully every 5 years for production, and once a year for consumer. With this in mind, it is possible to plan ahead and allocate resources.

There will be a major update on xx.xx.20xx; it will break many things, so plan accordingly. And there won't be any more security updates one month after that.

Meanwhile, security updates are pushed in realtime.


Yeah, there's a reason users have always perceived planned obsolescence as hostile action.


I use a 5 year old Linux distribution and I never update it except for important security patches. Works great. Nothing changes so nothing breaks. I ran into a problem once with trying to run some Go binary that wasn't actually static (pretty common apparently) but got it working in a Docker container.
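
For the record, the usual trick for getting a truly static Go binary (assuming it was cgo, typically pulled in via the net or os/user packages, that made it dynamic) is:

    # disable cgo so the resulting binary has no libc dependency
    CGO_ENABLED=0 go build -o myprog .

    # verify there is nothing left to resolve at runtime
    ldd myprog    # should print "not a dynamic executable"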


As a red teamer/pentester, this is an attacker's dream. This has to be a joke.


Auto-updates are an attacker's dream too.

Also, this disclaimer in the very first sentence of the article:

> "Here's a fair warning: this article is reductio ad absurdum"


> Auto-updates are an attacker's dream too.

Why?


The attacker only needs to compromise the update infrastructure to be able to push malicious updates to everybody who runs the software [0] [1]

[0] https://blog.malwarebytes.com/android/2021/04/pre-installed-...

[1] https://www.businessinsider.com/asus-acknowledges-computers-...


In general, of course. However, the article describes the ridiculousness of today's update mania. And there are actually systems that I don't update very often (my OpenBSD-based firewall) without losing any sleep over it. Unfortunately, such systems are very few in the real world.


Yes, totally the reason why healthcare is more vulnerable to ransomware.


Anyone care to comment on how Deno (https://deno.land/) might address some of the issues shared in this thread?


Is it so wrong to desire a return to handwritten JavaScript and libraries like jQuery?

I don't do much frontend today, and the reason is the frameworks.

I still do it, but I use Blazor to generate WebAssembly.


I couldn't help but notice the author's use of i and i'm (note the lower case 'i'). I think i like it! Oh, and Pascal is cool too!


i's used to be lowercase back in the day. nobody is certain why they changed to being uppercase but there are some theories that its because 'i' on its own could easily look like it had drifted away from another word or that it could have been made by accident. previously 'i' had been 'ic' or 'ich' so it would have been a lot easier to see.

im not sure i agree with the accident theory though, because 'I' looks a lot more deliberate than 'i'


DOS 5.2 ftw


Do what my Tandy 1000 did, and burn DOS 3.3 right to ROM. None of this wimpy flash-but-with-a-read-only-bit, no fancy UV-erasable EPROM...just honest-to-goodness blown-fuse ROM.

Boots immediately too.


> just honest-to-goodness-blown-fuse ROM

Well, blown-fuse ROM is called PROM. Real ROM is made from hard metal VLSI masks or old ladies weaving copper wires.


My Tandy 1000 (TL/2) had an EEPROM setting you could configure for booting from ROM (DOS 3.3) or the hard drive (originally also DOS 3.3, but IIRC DOS 6 ran fine too).


Better yet, if you don't want updated software, then why are you installing it? Just use a versioned Linux distro and be done with it.


Because many times security updates are tied to fancy changes nobody asked for but consumers are expecting.


Worse, they're often tied to changes nobody asked for or wanted, including the customers in particular, and importantly, you can't tell which kind of update you'll get.


Don't package maintainers for Debian and what not often backport changes without breaking things? I agree that the mental complexity of understanding so many components of software and staying relevant is near impossible, and that is specifically why I try to avoid JavaScript development altogether.


Yesterday, a package (redis-server) for Debian Stretch in the non-backports repository was updated, and it now relies on a package not present for Stretch (libjemalloc2).

https://packages.debian.org/stretch/redis-server

So much for not breaking things.


Semantic versioning is a curse. It makes perfect logical sense until you are a few iterations in and quickly run into dependency hell as shown in the diagrams in the article.

My solution to this isn't "never update anything", but rather "never version anything". People can choose to stick to the bits they originally got, which is perfectly fine, or they can switch to the current "live" one. As a developer I'm only ever building and maintaining the latest source.


> As a developer I'm only ever building and maintaining the latest source.

That's valid under semver.

The crazy diagrams show what happens when you want to support lots of old versions.

I have semver stuff at work where I only maintain the latest version. Bugfixes bump the patch version, adding a feature bumps minor, removing a feature bumps major.

I don't make any special effort to bump or avoid bumping anything. It gives me a rough idea of whether a given version has new stuff in it (x.1.0 or x.2.0) or is just a bugfix (x.0.1 or x.1.1)

Dependency hell has nothing to do with how you label your versions and everything to do with how much you want to annoy users by breaking whatever you like vs annoy yourself by taking the time to set stable APIs and hold compatibility backwards and forwards.

Semver vs. Git commit hashes vs. "just one number" vs. web browser versioning vs. "just timestamp" is all just different labels on the same soupcans.


Versioning things helps with bug reports. Oh, this was introduced in 3.144.42? Great, let's look at the changes to that build.


You don't need semantic versioning for that. "Oh, this was introduced in build 22456" works just as well.


Except that using a build number in the way you're describing is just a worse semantic version. You now have no way to indicate if your changes are breaking. Separation of your pipelines also just got a lot more hectic because you could have a situation where you don't know what happened when you're missing "versions" (builds) because it's failing but still incrementing... Using build numbers for versioning doesn't really work in a large ecosystem and often relies on picking the latest of a branch or having to sift through builds when deploying to find the right one.


I think the concept of a breaking change is part of the mistake. If you want to change something, you should introduce the new way and support it side by side with the old for a while, going through a deprecation cycle.

I'm with the GP on this. Pinning to a specific version is a code smell. You should have enough confidence in your regression suite to always use the latest version. And hopefully enough confidence in your vendors that they're not going to break a bunch of stuff.


The concept only makes sense in the context of APIs. I don't know if that's obvious; I've seen people use semantic versioning with software that didn't have public interfaces.

Even if you go through a deprecation cycle, you're still going to eventually have a build N with feature X, and build N+1 without feature X. That's a breaking change.


That's true in the sense that a bullet flying at you and a steamroller running you down both represent mortal threats. But practically, a reasonable deprecation cycle isn't going to be a breaking change, because everyone will have plenty of time to upgrade.


> You now have no way to indicate if your changes are breaking.

You now have no way to indicate that you know your changes are breaking.


Your build number tags the source control commit it was built from in any reasonably sanely implemented CI/CD system.


100% the way to go. Also, people in Python land (myself included) are prone to "emotional" versioning.



I think semver is mostly useful for telling, at a glance, how much of the changelog/migration notes to read

I've noticed (more frequently) npm packages quickly climbing in versions (1-17 in a couple years) but at least it's painfully obvious.


I'm trying to understand this comment. Do you never use package managers?


I'm guessing they're referring to conflicting transitive dependencies. Everything works fine when you use version 1 on date A. A couple of years down the road, all those dependencies have matured at different rates and the top-level packages all specify different versions of the same transitive packages, so you have to try to line them all up.

On the other hand, the longer you wait to update, the worse it gets. Updating more frequently you can suss out packages moving quickly vs slowly vs not at all and address the problem before you're in update paralysis


What an irony: the article about never updating anything has an update.


Twitter agrees, with regards to tweets and their business model


I'm not sure if this is a joke or not.


It is not a joke, just a very frustrated (clickbaity) hyperbole rant.

"My premise is that updates are a massive waste of time, most of the time. Obviously, someone will jump out and say that "Hey, if you don't update, your Windows XP installation will get added to a botnet in less than a week," and they'll also be right at the same time. "

So my understanding is, when compared to an academic, idealistic point of view, the handling of updates is not optimal. Sure thing.

But in reality, you still must patch your WinXP system if you have no better alternative and the author likely agrees. And if you do have a shitty legacy java project that is still needed in production - you still have to patch it, if you have to use it in the wild.


> Here's a fair warning: this article is reductio ad absurdum,

literally the first sentence...


It is not a joke. Upgrading dependencies is a lot of work in many programming languages, many web frameworks, etc.


Windows XP works fine when airgapped.

On a more serious note: it is a rant.


Use NixOS.


Hah... I was bored and decided to just press the update button on all of my dependencies. I spent 2 hours changing things pointlessly and then deleted the branch and read why any of these updates were necessary. React Router - "wow, we have an amazing new v6"... *reads the "why upgrade"*... "it has hooks now", ok, they are already in the version I am using (5.2). "it has some changes to how you specify routes and you will have to rewrite them all. Just wait for the automated rewriter". I couldn't really see any advantage to them at all, but supposedly "something in the future!". Ok, delete that one. Material UI - we changed everything.... here is a 10-page-long how-to-upgrade guide. Also no date picker in core anymore. Ok delete.... delete, delete. I will not upgrade this project again. Lesson learnt.


IIRC there has never been a date picker in MUI core; it was in a separate package called pickers, and the recent major version moved it to their lab package.

React Router, however, feels like the one major React package that constantly breaks things on major versions and requires refactoring to get back to where you were.


> Here's a fair warning: this article is reductio ad absurdum, therefore you shouldn't take it as gospel.

Yeah, i'm pretty sure the author doesn't know what that phrase means.



