Most use cases and people/companies don't have the traffic volume where scaling matters that much, so the performance gains from a more complex "specialized" stack will be negligible (or even negative).
Yes, if you have a large team building for scale in a VC-funded company, go for dedicated stacks, but if you are just building a start-up where moving fast is important and the architecture changes quickly, any DB will do and will not be the bottleneck.
I use Postgres to cache some responses from APIs (max 100 different documents per month) where latency doesn't matter, so using Redis instead would provide no benefit (it would actually make things worse, given Redis's weaker persistence and third-party integration story).
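For the curious, a Postgres cache at that scale is just a table and an upsert. Here is a minimal sketch using node-postgres; the table name, the one-day TTL, and the fetchFromApi callback are made up for illustration:

```typescript
// Sketch of a Postgres-backed API-response cache (assumes the `pg` package
// and a DATABASE_URL env var; schema and TTL are illustrative, not prescriptive).
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function initCache(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS api_cache (
      cache_key  text PRIMARY KEY,
      body       jsonb NOT NULL,
      fetched_at timestamptz NOT NULL DEFAULT now()
    )`);
}

export async function getCached(
  key: string,
  fetchFromApi: () => Promise<unknown>,
): Promise<unknown> {
  // Serve from Postgres if the cached entry is younger than a day.
  const hit = await pool.query(
    `SELECT body FROM api_cache
      WHERE cache_key = $1 AND fetched_at > now() - interval '1 day'`,
    [key],
  );
  if (hit.rows.length > 0) return hit.rows[0].body;

  // Miss: fetch, then upsert. Unlike a default Redis setup, this survives
  // restarts with no extra persistence configuration.
  const body = await fetchFromApi();
  await pool.query(
    `INSERT INTO api_cache (cache_key, body, fetched_at)
     VALUES ($1, $2, now())
     ON CONFLICT (cache_key) DO UPDATE
       SET body = EXCLUDED.body, fetched_at = now()`,
    [key, JSON.stringify(body)],
  );
  return body;
}
```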
> Software doesn’t stay solved. Every solution you write starts to rot the moment it exists.
I don't really agree with this. Yes, it gets outdated quickly and breaks often if you build it in such a way that it relies on many external services.
Stuff like relying on a "number-is-odd" NPM package instead of copy-pasting the code or implementing it yourself. The more dependencies you have, the more likely something will break.
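To make that concrete, the entire code such a package saves you is a few lines you can just own yourself (a trivial sketch):

```typescript
// The whole "dependency", inlined: nothing to install, nothing to rot.
function isOdd(n: number): boolean {
  if (!Number.isInteger(n)) throw new TypeError("expected an integer");
  return Math.abs(n % 2) === 1;
}
```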
If your software works locally, without requiring an internet connection, it will work almost forever.
Now, if you want to keep developing the software and building it out over a long period, the secret is to always keep all dependencies up to date. ExternalLibrary V2 just released? Instead of postponing the migration, update your code and migrate ASAP. The later you do it, the harder the migration will be.
To me the point of the saying is more that the assumptions and environment that the software was written with are almost always going to change. Meaning the business and requirements rather than the technical implementation choices. Software doesn't exist in a vacuum, but rather to solve a certain set of business requirements that have the potential to shift out from under you depending on the industry, legislation, and your leadership. The things that you have negligible control or foresight over, rather than knowing that there's going to be another major framework update next quarter.
There are certainly horizontal slices of every stack that can be written to remain stable regardless of the direction the business takes, but those are rarely the revenue drivers that the business cares about beyond how much they have the potential to cause instability.
I remember when Turbo Pascal 7.0 programs started failing with a Division by 0 error, because the Turbo Pascal runtime runs a calibration loop at startup to calculate how many no-ops it takes to sleep for 1 millisecond.
So your HelloWorld written 10 years ago suddenly stopped working once the CPU you ran it on got too fast.
Yes, if you change hardware, the software can break (a lot less often now than before, though, as breaking hardware changes are rare and manufacturers usually think about backwards compatibility).
I'm not sure if it's more or less often now, but over decades almost everything breaks one way or another.
Even if your code, OS, and hardware had no bugs and were designed perfectly, and you kept the same hardware to run your code forever, there are still layers under the hardware: the reality outside the computer.
You have written a perfectly secure website. Then quantum computers happen.
Countries are created and fall apart. People switch writing systems and currencies. Calendars get updated.
Your code might technically work after 100 years, but with almost 100% probability it won't be useful for anything.
I remember that bug. I also remember that there were generic patchers that could fix any random .exe compiled with Turbo Pascal without even having to recompile it.
That sorta supports the point the article was making though:
> ExternalLibrary V2 just released? Instead of postponing the migration, update your code and migrate ASAP. The later you do it, the harder the migration will be.
Is, to me, almost the same sentence as
> Every solution you write starts to rot the moment it exists
I mentioned updating is only necessary if you plan to keep developing the software.
If you build it once, and the existing functionality is enough (no plans to add extra features ever again), then you can remove all external dependencies and make it self-contained, in which case it will be very unlikely to break in any way.
As for the security aspects of not updating, with the proper setup, firewall rules and data sanitization, it should be as secure 10 years later as any recently developed software.
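As a rough illustration of that "self-contained" approach, data sanitization can be plain hand-written checks with no validation library to go stale (the field names and limits below are hypothetical):

```typescript
// Hand-rolled validation: explicit checks instead of a third-party validator.
// The Order shape, email regex, and quantity limits are made up for illustration.
type Order = { email: string; quantity: number };

function parseOrder(input: unknown): Order {
  if (typeof input !== "object" || input === null) {
    throw new Error("invalid payload");
  }
  const { email, quantity } = input as Record<string, unknown>;
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid email");
  }
  if (
    typeof quantity !== "number" ||
    !Number.isInteger(quantity) ||
    quantity < 1 ||
    quantity > 1000
  ) {
    throw new Error("invalid quantity");
  }
  return { email, quantity };
}
```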
I prefer to have my flow hard-coded, with specific data inputs/outputs between steps, and have the calls go through n8n connections instead of letting the AI call the tools with arbitrary data.
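Roughly what I mean, expressed as plain code rather than n8n nodes (every name here is hypothetical): the sequence of steps and the data passed between them are fixed, and the model only fills in one constrained value.

```typescript
// Hard-coded flow: the model classifies, but never decides which step runs
// next or what data gets passed. All types and functions are hypothetical.
type Ticket = { id: string; text: string };
type Category = "billing" | "bug" | "other";
type Classified = Ticket & { category: Category };

async function classify(
  ticket: Ticket,
  callModel: (prompt: string) => Promise<string>,
): Promise<Classified> {
  const answer = (await callModel(
    `Classify this ticket as billing, bug or other:\n${ticket.text}`,
  )).trim();
  // Constrain the model output to a known set; fall back to "other".
  const category: Category =
    answer === "billing" || answer === "bug" ? answer : "other";
  return { ...ticket, category };
}

async function routeToQueue(category: Category, id: string): Promise<void> {
  // Hypothetical: push the ticket id onto the right queue/endpoint.
}

async function runFlow(
  ticket: Ticket,
  callModel: (prompt: string) => Promise<string>,
): Promise<void> {
  const classified = await classify(ticket, callModel);   // step 1: fixed
  await routeToQueue(classified.category, classified.id); // step 2: fixed, typed input
}
```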
It's not an ad; the original title I submitted it with was different. I submitted it because I discovered it a few weeks ago, and I love the self-hosted version: very polished and well documented.
I think using this without programming knowledge is hard: you'll hit a lot of errors and data-transformation issues that are hard to solve without knowing programming 101 concepts like arrays, objects, and data access.
But, if you do know how to code, this makes it easy to quickly integrate stuff without having to write all the connection boilerplate code.