Over time it becomes harder and more expensive to maintain old technology. Frankly, maintaining old technology is boring and career-destroying, so you need to be paid more and more to do it.
Chasing the marginal benefit of the new shiny is indeed an overhead you should try to avoid, but you also have to dodge the trap of being stuck in a technology when everybody else has moved on.
Anyway back to the Adabas support...
1. Place every new & cool technology into mental quarantine for 3-5 years.
2. If, after that:
a) the tech is still widely used, and doesn't seem to be getting overtaken by something else
b) you're about to start a NEW project where the tech would help
c) you're not in a rush and feel like trying something new
...then go for it.
Learning complex tech that just arrived is a waste of your life, if you want to accomplish things. It's only useful if your aim is to appear knowledgeable and "in the loop" to your developer peers.
You'll be very efficient at accomplishing old use cases, which is just as well, because you'll need it - the market for them is most probably commodified, very competitive, and with low margins. Dominated by big actors with economies of scale. Not a comfortable place to be.
You'll probably get into very few new use cases, because by the time your 3-5 year quarantine passes on their enabling technology, they're already explored, if not conquered, by several competitors. The exception is new use cases without a new enabling technology, but those tend to resemble fads, and you'll have no tech moat, so again it'll be a very competitive, low-margin market.
New techs create value only when they solve some use case more efficiently for someone. Not all new techs do this. God knows that, especially in software engineering, people deliver new techs for fun and name recognition all the time. Managers allow it as a perk because of the tight labor market. But it's a mistake to consider all new techs to be in this latter category.
New techs are also tech moats, giving you negotiating power to capture some of the created value. Without the tech moat, you better have a non-tech one (regulatory, institutional, market structure, economy of scale) because otherwise the value you capture will quickly be pushed to your marginal cost - if that - by competition.
Tech stacks very, very, very rarely enable new use cases. They are the equivalent of fashion statements by young developers who haven't learned what it means to support software for 10-20 years.
I think your argument works fine-ish for backends but it's bananas to suggest that jQuery is the same thing as React or Svelte. I do security for a living and maybe 100% of all jQuery sites have XSS. If I find a React page I can just grep for dangerouslySetInnerHTML and I'm 80% of the way there. (I am exaggerating, but hopefully my point is clear: from both a development perspective and a safety perspective, React is not just a shiny new version of jQuery.)
I have seen a lot of sites get worse as a result of migrating from server-side rendering to client-side rendering. Things like broken back buttons, massive page sizes, and terrible rendering performance on low powered hardware.
An example that comes to mind is Salesforce. They introduced a new "Lightning Experience" that seems to use a newer stack. But it's taken them ages to migrate all of the features over, forcing users to switch back and forth between the old and the new. It can also be a real beast to render on something like a Surface tablet. It must be costing them an enormous amount of money to support both UIs, and I have to wonder if it was really worth it vs maybe applying some new CSS and more incremental improvements.
React being 'declarative' tends to result in more UX stability (e.g. in complex search forms). It makes integrating third-party components smoother too.
1. Frontend was less mature to begin with
2. Frontend has a unique, definitional state management problem in the DOM
3. Actually, we can make real progress sometimes
4. Really, frontend hasn’t made strides, you’re just ignoring $x
5. Several/none of the above?
(I think real progress is possible and disillusionment with some new technologies should not prevent us from trying. But also that the DOM's unique state management problem is so overt that functional approaches almost couldn't help but dominate.)
React is a shiny new jQuery - that's all it is. WebAssembly, Canvas, WebRTC, etc. those are something different. Those enable new use cases.
Thought experiment: why does your argument not apply to, say, C? Why bother doing new language or library design? It’s all int 80h eventually.
By the way, I do think the virtual DOM is either a fad or simply an overstatement. What I mean by overstatement is that batching updates is one of the most normal things developers have been doing, from that perspective there's nothing new here.
From a fad perspective, there is no reason why the regular DOM cannot be fast and schedule batch updates to optimize framerate (and with one less level of indirection). The virtual DOM may actually be a problem in and of itself, because it assumes it knows how to schedule updates better than the actual DOM - even if that is true today, why would it necessarily be true tomorrow?
I'm very hesitant about my assumptions here, and I am confident I'm missing an important point. So if you can clear up my understanding I appreciate it.
jQuery makes XSS more common in several ways, and some of them are really just the influence that jQuery on the frontend has on how the back end works. Some of those ways are pretty subtle, e.g. CSP bypass gadgets in data attributes (which are very commonplace in jQuery libraries). By contrast, React, by building a DOM, has contextual information that jQuery lacks. Go's HTML templating is unique on the server side in that sense, since it too actually understands whether it's in a text-node context, a script context, an attribute context, or an inline-JS (such as onclick) context, and hence the correct way to treat arbitrary input.
Of course, using React doesn't make you immune. I got XSS against every site that used ZeitJS, for example. But the pattern that led to that (generated code dumped into a script tag) is a common pattern for pre-React frameworks.
Is that boring? It certainly has some issues.
Which means you get respect and better job offers.
He mentioned that at a bank he'd been advising, the mandate was the opposite: at most they could be on N-1. They were in the exact same position as we were, except that they were migrating to Windows 2008 and we to Windows 2012. In practice, the N-1 mandate meant we'd upgrade services every other release, except when there was a waiver, which was often, and which explained why, when I left in late 2012, we still had some Windows 2000 boxes.
As a techie, it was always a pain going back to an old box, as you'd try to do something and realise that it was not possible because that feature had only been introduced in later versions. Even worse was when it was possible but extremely convoluted and error-prone.
It's interesting how everybody thinks that it's career suicide to support old stuff when in actual fact most people are hired for a mixture of their knowledge and their capacity to learn. I appreciate that it's lower risk to hire somebody with experience on the exact product, but would you rather have an extremely good engineer with no experience in your stack or a good one with experience in your stack?
As with anything, there are exceptions. My phone needs security updates, and 4k displays make me more productive at work, so there were places I bought the latest-and-greatest. And when I need to develop for a platform, well, I get that platform.
But for personal life? A used iPod touch 4th gen sells for $20 on eBay. XBox 360 can be had for around $40. Exceptionally nice digital cameras from a decade ago -- workhorses professional photographers used -- can be found for around $200.
The way I figure, I just behave as if I were born five years earlier. I use the same technology at age 25 as was available at age 20.
This is a good strategy for anything with a high/quick depreciation curve. My DSLR body is pretty old now, but still works great (a D7100). The tech in bodies changes quickly so even waiting just a short period of time can save significant money. Spend money on lenses instead which hold their value and typically can be used across many bodies.
Cars are similar. My truck is a 2011, and I have no plans to buy a new used one anytime soon.
IMHO it makes sense to buy a used car at about 300 thousand kilometers. At that point it's cheap, it's already had a bunch of expensive parts replaced, and if it's survived this long it has a high chance of going another hundred thousand (given proper service, obviously).
Of course another point of view is that getting a car serviced is stressful, so it's best to buy new. But then it's even less stressful to mostly ride a bike and use a taxi or rental car when needed.
I worked in an HR company and didn't learn much.
Then at my last job I worked under a really smart guy who did everything the right way, and I'm way better now. If I had started at a company like that, I would be much farther ahead now.
However, the real thing to know is how to architect a project properly with tests/dependency injection/layers, not all the newfangled technologies.
I remember looking at Node when it first came out and got mildly excited, but that excitement quickly went away after writing a couple of small apps with it. It just wasn't for me. The same thing happened with Go. I didn't see enough wins to switch over to using either of them for building web apps.
On the other hand, for me Rails and Flask stood the test of time. Nowadays I'm working with Phoenix and I'm past the hype phase and it looks to be another winner. But in all 3 cases (Rails, Flask, Phoenix) I typically don't switch away from them for good. They just become another tool that I know. Of course I will gravitate towards the one I'm the most happy writing code with, but it's not a black / white progression from 1 to the other.
I don't think there's really a definitive answer on when to change. Like 3 weeks ago I picked up a Flask contract and it was a pleasant experience, even though I'm using Phoenix to build a side project currently. You don't always need to throw the old thing out. You can use multiple technologies in harmony. You change when you start dreading writing code in the old thing, which is very user specific. In other words, write more code and you'll quickly find out what you like and dislike.
Yes but when Go first came out, a lot of people jumped on the bandwagon and started proposing they would use Go for web applications too. There's definitely some overlap in building web services with Go and Python so I wouldn't say they solve completely different problems.
Go and Python are also pretty related for command line apps too. You could totally use either one to build a CLI tool.
Distribution of Go CLI apps is much easier as you don't need to have your end users install the 3rd party libraries themselves.
But in practice as a web developer who occasionally writes CLI scripts, I personally didn't think it was worth going all-in with Go for that.
Especially not when for smaller scripts you can have a Python or Bash script in 1 file and it will run on all major platforms without installing 3rd party libraries too. Most major distros of Linux, MacOS and WSL on Windows have both Python and Bash available. For my use cases that's good enough.
For example, just the other day I released a ~200 line self-contained Python script to solve a problem I had, which was figuring out what changed between 2 web framework versions: https://github.com/nickjj/verdiff
If you mean Go can't read the exit status of a command it runs, that's incorrect as well: https://golang.org/pkg/os/exec/#pkg-overview
It's extremely simple and pleasant to use. All it needs is generics and it would be my go-to for most web services.
The thing is you’re right, Go compiling to a self-contained binary is different from a folder of .py scripts.
But both can be deployed into production.
The deployment steps are different but the outcome is the same, so they can be used interchangeably.
Go might be a special case actually, as it was designed to be a "boring" language to reduce the cost of technology choice. But it is completely interchangeable with similar programming languages (like Python) so evaluating the cost of it vs something else is still a very reasonable thing to do.
Python and Go overlap for HTTP web apps and APIs. They both can be, and often are, used for this purpose.
This is not quite right since they both compile to bytecode and execute in a virtual machine
Shell scripting is probably the rare example of something genuinely "interpreted".
Generally, risks go down over time and with broad use. SQL, as a technology, is essentially risk-free, having been around forever and being widely used. COBOL is high risk: while it's been around forever, hardly anyone uses it anymore, at least on modern projects. Moving your COBOL app to Android is fraught with unknown-unknown risk. Something that's been around 2-3 years is generally much lower risk than something which came out last year, and a good 10 years drives risk down further most of the time, but not always. It depends on whether people seem happy with it. Mongo seemed like a really good idea for the first few years, until people figured out (1) it had wonky performance issues, (2) it was really hard to query for some types of queries, and (3) what was the problem with PostgreSQL again (it seems to do JSON pretty well too!)? Things change too. Java was the bedrock, stable, safe choice. It wasn't the fastest to code in, it was a bit clunky, but it was THE safe choice, and enterprises flocked to it. That is, until Sun died, Oracle happened, and litigation+monetization kicked up to try to treat Java as a cash cow.
The flip side is why would you use it? When I was building an app a while back, I chose React although React Native had just come out at that point. It let me build the app once, and run on web, Android, and iOS, instead of 3 times. I figured cost savings of building and maintaining one codebase outweighed the risks. On the other hand, in most cases, the upsides of switching away from Python -- now three decades old -- are usually negligible, so with the exception of a specific need (run-anywhere, above), I almost never pick something different.
And the final piece is code complexity, abstraction, and modularity. I don't feel bad adapting new numerical algorithms. It's usually a few hundred lines of self-contained code. If a better algorithms comes out, I might swap it out anyways. On the other hand, a programming language or framework is a lifetime commitment.
You work through all the risks and upsides, figuring maintenance is 90% of the cost, and you sometimes end up around the rules-of-thumb everyone gave. But not always.
The trick is to learn probability. It gives a good mental framework for estimating expected costs and benefits. You don't usually do this explicitly with equations (what's the probability-Oracle-screws-us times cost-of-Oracle-screwing-us versus the cost of upgrading to Python?), but it gives you a language to think about risks.
Most enterprise vendors have, or will soon have, comparable products for sale. My employers have Pivotal Spring Runtime. You can also get OpenJDK coverage from Red Hat, Amazon, Azul and so on.
Incidentally I resent that I sometimes wind up defending Oracle's decisions. I think it was globally suboptimal but I can understand their reasoning.
By the time they went under, Java 1.2 up to Java 5 were only available under support contracts for production deployment.
Somehow Oracle hate ends up excusing Sun for exactly the same practices.
Same here. No matter what you do, leave others to try the cool new stuff & get burned by it & work to fix it (when/if possible). Stay informed, but don't be an early-adopter. It's sound advice - though it wouldn't be if everyone applied it. Fortunately, there's no risk of that happening.
Play with the cool new thing in your R&D time. Stick with tried and tested in your implementation time. That's the difference between hacking and engineering.
This goes for all technological choices. The cost is not static, but varies over time depending on market forces.
Minis and then Unix systems allowed us to develop systems with newer technology. Wintel systems expanded it further.
My point is that the new technologies came about via need: it allowed more people to utilize computer technology to solve problems. As needs change, we'll continue to see an evolution of technology to meet the needs.
- solve a new problem
- solve an existing problem better
- solve an existing problem cheaper
A new tech MAY allow that, and it MAY be used successfully toward that objective, but even then, that's hardly the core of it in most cases. Not saying it doesn't happen, but there is much more at play.
Then you either end up refactoring the whole setup for years (which usually is expensive, slows down the business development velocity) or rewriting it from scratch (or as a new implementation next to the old one).
If the original implementation had been sanely made, then building new features on top of it (or partial replacements, or microservices, etc.) wouldn't be that big an issue. But usually these beasts are more like godzilla-level monoliths with bad architecture, so it's probably easier to rewrite the whole thing.
If you can't find both a senior and a junior who can use the same tech you need to perform the business task, then you might be too early, or you may have to think about changing.
The difference in your requirements for juniors and seniors probably tells you about your potential rate of change. If you're based on a recent JS framework, those two will be closer together than a finance org running on COBOL.