
There's a longer-term issue that appears to be missing here. At what point do you change? There must be a point, otherwise we'd all still be here writing COBOL/CICS with some whizzy JavaScript interface library.

Over time it becomes harder and more expensive to maintain old technology, because frankly maintaining old technology is pretty boring and career-destroying, so you need to be paid more and more to do it.

The marginal benefit of the new shiny is indeed an overhead you should try to avoid, but you also have to dodge the trap of being stuck in a technology when everybody else has moved on.

Anyway back to the Adabas support...




Recipe for getting things done:

1. Place every new & cool technology into mental quarantine for 3-5 years.

2. If, after that:

a) the tech is still widely used, and doesn't seem to be getting overtaken by something else

b) you're about to start a NEW project where the tech would help

c) you're not in a rush and feel like trying something new

...then go for it.

Learning complex tech that just arrived is a waste of your life, if you want to accomplish things. It's only useful if your aim is to appear knowledgeable and "in the loop" to your developer peers.


This works, but it does force you to ignore new enabling technologies that make new use cases economical, where most innovation and value creation is.

You'll be very efficient at accomplishing old use cases, which is just as well, because you'll need it - the market for them is most probably commodified, very competitive, and with low margins. Dominated by big actors with economies of scale. Not a comfortable place to be.

You'll probably get into very few new use cases, because by the time your 3-5 year quarantine passes on their enabling technology, they're already explored, if not conquered, by several competitors. The exception to this is new use cases without a new enabling technology, but those tend to resemble fads: you'll have no tech moat, so again it'll be a very competitive, low-margin market.

New techs create value only when they solve some use case more efficiently for someone. Not all new techs do this. God knows that especially in software engineering, people deliver new techs for fun and name recognition all the time. Managers allow it as a perk, because of the tight labor market. But it's a mistake to consider all new techs to be in this latter category.

New techs are also tech moats, giving you negotiating power to capture some of the created value. Without the tech moat, you better have a non-tech one (regulatory, institutional, market structure, economy of scale) because otherwise the value you capture will quickly be pushed to your marginal cost - if that - by competition.


You're talking about a tech stack, which is not the same as, for example, a modern web application. You can build a perfectly modern web app with 'old' Java/JEE stack, backed by an unsexy SQL-based database. You don't need Node.js with MongoDB.

Tech stacks very, very rarely enable new use-cases. They are the equivalent of fashion statements by young developers who haven't learned what it means to support software for 10-20 years.


Do you similarly feel like frontend stacks have seen no meaningful innovation?

I think your argument works fine-ish for backends but it's bananas to suggest that jQuery is the same thing as React or Svelte. I do security for a living and maybe 100% of all jQuery sites have XSS. If I find a React page I can just grep for dangerouslySetInnerHTML and I'm 80% of the way there. (I am exaggerating, but hopefully my point is clear: from both a development perspective and a safety perspective, React is not just a shiny new version of jQuery.)


So I do think front end stacks have come a long way over the past decade... but just to give a counter example...

I have seen a lot of sites get worse as a result of migrating from server-side rendering to client-side rendering. Things like broken back buttons, massive page sizes, and terrible rendering performance on low powered hardware.

An example that comes to mind is Salesforce. They introduced a new "Lightning Experience" that seems to use a newer stack. But it's taken them ages to migrate all of the features over, forcing users to switch back and forth between the old and the new. It can also be a real beast to render on something like a Surface tablet. It must be costing them an enormous amount of money to support both UIs, and I have to wonder if it was really worth it vs. applying some new CSS and more incremental improvements.


The procedural nature of jQuery just makes for buggy as hell websites as well. Manual DOM updates etc. etc.

React being 'declarative' tends to end up with more stability in regards to UX (e.g. complex search forms). Makes the integration of third-party components smoother too.


Sure! I’ve written large apps in it and am familiar with the technology. My point is that whatever the reason, frontend clearly has made strides. So, is it:

1. Frontend was less mature to begin with

2. Frontend has a unique, definitional state management problem in the DOM

3. Actually, we can make real progress sometimes

4. Really, frontend hasn’t made strides, you’re just ignoring $x

5. Several/none of the above?

(I think real progress is possible and disillusionment with some new technologies should not prevent us from trying. But also that the DOM's unique state management problem is so overt that functional approaches almost couldn't help but dominate.)


What is a new use-case that React brought in, that couldn't be replicated with plain old JavaScript?

Browser capabilities are game-changers, not a tech stack that runs on top of them. I don't need React or Angular or Node or whatever to make use of them. I can use those capabilities with plain old Java Servlets and JavaScript.

React is a shiny new jQuery - that's all it is. WebAssembly, Canvas, WebRTC, etc. those are something different. Those enable new use cases.


Concepts and abstractions, like the virtual DOM, matter. Just because you could in an abstract sense (of course you could! It’s a JS library) doesn’t mean anyone actually could.

Thought experiment: why does your argument not apply to, say, C? Why bother doing new language or library design? It’s all int 80h eventually.


I'm not saying abstractions are bad. They make writing code easier for developers. It is easier for developers to write a web-app served by Node.js rather than a standalone C program.

I'm taking the perspective of the end-user. From that side, whether the application is written in C or Java or C# or JavaScript makes no difference because the end-user never knows or cares what the underlying language their app is written in anyway. The platforms are game changers; platforms like the PC, like the internet, like the web, like the smartphone, like the browser. Those enable different use-cases. They are the ones that drive broad societal changes.

By the way, I do think the virtual DOM is either a fad or simply an overstatement. What I mean by overstatement is that batching updates is one of the most normal things developers have been doing, from that perspective there's nothing new here.

From a fad perspective, there is no reason why the regular DOM cannot be fast and schedule batch updates to optimize framerate (and with one less level of indirection). The virtual DOM may actually be a problem in and of itself because it makes an assumption that it knows about how to schedule updates better than the actual DOM - even if that is true today, why would it necessarily be true tomorrow?


Doesn't XSS require a backend that can receive and then transmit malicious javascript from a hacker using the site to a victim accessing it? And wouldn't that be the case whether the front end was done with jQuery or React?

I'm very hesitant about my assumptions here, and I am confident I'm missing an important point. So if you can clear up my understanding I appreciate it.


Stored XSS requires some sort of backend, yes, but reflected and DOM-based XSS does not. Furthermore, all XSS is some variant of a contextual confusion where something that wasn’t intended to be interpreted as JS suddenly is.

jQuery makes XSS more common in several ways, and some of them are really just the influence that jQuery on the frontend has on how the back end works. Some of those ways are pretty subtle, e.g. CSP bypass gadgets in data attributes (which are very commonplace in jQuery libraries). By contrast, React, by building a DOM, has contextual information that jQuery lacks. Go's HTML templating is unique on the server side in that sense, since it too actually understands whether it's in a text-node context, a script context, an attribute context, or an inline-JS (such as onclick) context, and hence the correct way to treat arbitrary input.
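To make that concrete, here's a rough sketch of the contextual escaping being described, using Go's standard html/template (my example, purely illustrative): the same untrusted value gets escaped differently depending on the context it lands in.

    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        // The same untrusted value is rendered into three different
        // contexts; html/template escapes it appropriately for each one.
        t := template.Must(template.New("page").Parse(`<p>{{.}}</p>
    <a title="{{.}}">link</a>
    <button onclick="doSomething({{.}})">go</button>`))

        // A classic XSS payload comes out inert in all three contexts.
        if err := t.Execute(os.Stdout, `<script>alert(1)</script>`); err != nil {
            panic(err)
        }
    }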

Of course, it's not that using React makes you immune. I got XSS against every site that used ZeitJS, for example. But the pattern that led to that (generated code dumped into a script tag) is a common pattern for pre-React frameworks.


I'll raise you a Windows service that communicates via Singleton .NET Remoting with a WebForms 2.0 app or a WCF SOAP service. Both talk via remoting to the Windows service, which talks to an SQL Server choked with materialized views.

Is that boring? It certainly has some issues.


> It's only useful if your aim is to appear knowledgeable and "in the loop" to your developer peers.

Which means you get respect and better job offers.


I once spoke with a Microsoft consultant who was advising us on upgrade strategy, as our customer had a mandate to be at least on version N-1; that is, the customer had to be on the latest major version or the version before. At the time we were migrating off Windows 2003, as Windows 2012 was going through internal validation.

He mentioned that at a bank he'd been advising, the mandate was the opposite: at most they could be on N-1. They were in the exact same position as we were, except that they were migrating to Windows 2008 and we to Windows 2012. In practice our N-1 mandate meant that we'd upgrade services every other release, except when there was a waiver, which was often, and which explained why, when I left in late 2012, we still had some Windows 2000 boxes.

As a techie, it was always a pain going back to an old box: you'd try to do something and realise it was not possible because that feature had only been introduced in later versions. Even worse was when it was possible but extremely convoluted and error-prone.

It's interesting how everybody thinks that it's career suicide to support old stuff when in actual fact most people are hired for a mixture of their knowledge and their capacity to learn. I appreciate that it's lower risk to hire somebody with experience on the exact product, but would you rather have an extremely good engineer with no experience in your stack or a good one with experience in your stack?


In my personal life, I do something similar, actually. I have a state-of-the-art digital camera circa 2012, a state-of-the-art camcorder circa 2011, and similar. I'm always around five years behind the tech curve. The cost is often 1/2 to 1/10th, and I'm not sure I lose anything by being five years behind in terms of happiness or much of anything else.

As with anything, there are exceptions. My phone needs security updates, and 4k displays make me more productive at work, so there were places I bought the latest-and-greatest. And when I need to develop for a platform, well, I get that platform.

But for personal life? A used iPod touch 4th gen sells for $20 on eBay. An Xbox 360 can be had for around $40. Exceptionally nice digital cameras from a decade ago -- workhorses professional photographers used -- can be found for around $200.

The way I figure, I just behave as if I were born five years earlier. I use the same technology at age 25 as was available at age 20.


> I have a state-of-the-art digital camera circa 2012

This is a good strategy for anything with a high/quick depreciation curve. My DSLR body is pretty old now, but still works great (a D7100). The tech in bodies changes quickly so even waiting just a short period of time can save significant money. Spend money on lenses instead which hold their value and typically can be used across many bodies.

Cars are similar. My truck is a 2011, and I have no plans to buy a new used one anytime soon.


Agreed on camera bodies and lenses, but in my view a 2011 car is still quite new. I guess this depends on the country, taxation etc.

IMHO it makes sense to buy a used car at about 300 thousand kilometers. At that point it's cheap, it's already had a bunch of expensive parts replaced, and if it's survived this long it has a high chance of going another hundred thousand (given proper service, obviously).

Of course another point of view is that getting a car serviced is stressful, so it's best to buy new. But then it's even less stressful to mostly ride a bike and use a taxi or rental car when needed.


It's certainly a novel approach in this era where everybody seems to want to have the latest and greatest.


Counterpoint: I spent my first 3 years at a major company coding in Java and never actually learned anything there except how to work with people. It was all adding if statements to a gigantic file because no one there knew what they were doing.

I worked in an HR company and didn't learn much.

Then at my last job I worked under a really smart guy who did everything the right way, and I'm way better now. If I had started at a company like that, I would be much farther ahead now.

However, the real thing to know is how to architect a project properly with tests/dependency injection/layers, not all the newfangled technologies.


For me, change happens when I see a real improvement in almost every way possible, which is usually determined by building a few things and letting my brain simmer on the technology as a whole, so I can look at it from a logical and unbiased perspective.

I remember looking at Node when it first came out and got mildly excited, but that excitement quickly went away after writing a couple of small apps with it. It just wasn't for me. The same thing happened with Go. I didn't see enough wins to switch over to using either of them for building web apps.

On the other hand, for me Rails and Flask stood the test of time. Nowadays I'm working with Phoenix and I'm past the hype phase and it looks to be another winner. But in all 3 cases (Rails, Flask, Phoenix) I typically don't switch away from them for good. They just become another tool that I know. Of course I will gravitate towards the one I'm the most happy writing code with, but it's not a black / white progression from 1 to the other.

I don't think there's really a definitive answer on when to change. Like 3 weeks ago I picked up a Flask contract and it was a pleasant experience, even though I'm using Phoenix to build a side project currently. You don't always need to throw the old thing out. You can use multiple technologies in harmony. You change when you start dreading writing code in the old thing, which is very user specific. In other words, write more code and you'll quickly find out what you like and dislike.


Go is a compiled language and Ruby/Python are interpreted scripting languages. There are domains where it's a much more appropriate choice (distributing binaries, performance sensitive code). The type system is also quite nice vs. dynamic typing (in most situations). It's weird to see people comparing Go and Python in this thread as they solve entirely different problems and shouldn't be interchangeable, not due to developer preference but due to fundamental features of the language.


> It's weird to see people comparing Go and Python in this thread as they solve entirely different problems and shouldn't be interchangeable, not due to developer preference but due to fundamental features of the language.

Yes but when Go first came out, a lot of people jumped on the bandwagon and started proposing they would use Go for web applications too. There's definitely some overlap in building web services with Go and Python so I wouldn't say they solve completely different problems.

Go and Python are also pretty related for command line apps too. You could totally use either one to build a CLI tool.


> Go and Python are also pretty related for command line apps too. You could totally use either one to build a CLI tool.

Distribution of Go CLI apps is much easier as you don't need to have your end users install the 3rd party libraries themselves.
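For illustration (my sketch, not the parent's): a pure-Go CLI built only on the standard library compiles with "go build" into a single self-contained binary, so end users download one file and run it.

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // Standard-library flag parsing; no third-party packages needed,
        // so "go build" produces one self-contained executable to ship.
        name := flag.String("name", "world", "who to greet")
        flag.Parse()
        fmt.Printf("hello, %s\n", *name)
    }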


Yeah totally, for using CLI tools I much prefer using a Go binary too because it doesn't involve installing anything.

But in practice as a web developer who occasionally writes CLI scripts, I personally didn't think it was worth going all-in with Go for that.

Especially not when for smaller scripts you can have a Python or Bash script in 1 file[0] and it will run on all major platforms without installing 3rd party libraries too. Most major distros of Linux, MacOS and WSL on Windows have both Python and Bash available. For my use cases that's good enough.

[0]: For example just the other day I released a ~200 line self contained Python script to solve a problem I had which was figuring out what changed between 2 web framework versions: https://github.com/nickjj/verdiff


Given the broad capabilities of the Python first party libraries, you can do a lot of work without 3rd party libraries. It’s not in as much fashion as it was 10+ years ago, but it’s still quite doable.


This is true for Python, too, albeit quite a bit harder due to the relative lack of first-party tooling for generating standalone executables.


As far as I'm aware Go doesn't return the exit status of a process the way Ruby and Python do. Surely this is a big disadvantage for CLI scripts?


Sure it does: https://gobyexample.com/exit

If you mean Go can't read the exit status of a command it runs, that's incorrect as well: https://golang.org/pkg/os/exec/#pkg-overview
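A rough sketch of both directions, assuming a Unix-like system where the "false" command is on the PATH:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Reading the exit status of a command this program runs:
        err := exec.Command("false").Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Println("child exited with status", exitErr.ExitCode())
        }

        // Returning this program's own exit status to the shell:
        os.Exit(3)
    }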


I don't know about that. I do a lot of web development and Go is really very nice as a web server.

It's extremely simple and pleasant to use. All it needs is generics and it would be my go to for most web services.
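To illustrate what "extremely simple" means here, a minimal sketch using only the standard library (the route and port are arbitrary):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // One handler, no framework; net/http handles routing, request
        // parsing, and concurrent serving out of the box.
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "hello, %s\n", r.URL.Query().Get("name"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }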


> fundamental features of the language

The thing is you’re right, Go compiling to a self-contained binary is different from a folder of .py scripts.

But both can be deployed into production.

The deployment steps are different but the outcome is the same, so they can be used interchangeably.


If you use third party Python libraries, the end user is going to have to install them too. Python really isn't a great language to be building consumer distributed command line apps in.

I think a lot of this discussion is focused around custom software or backend software, but for a publicly distributed binary, Go or any other compiled language is much better than Ruby or Python (and especially Javascript).


That isn't quite true re Python, eg https://stackoverflow.com/questions/2933/how-can-i-create-a-...

Go might be a special case actually, as it was designed to be a "boring" language to reduce the cost of technology choice. But it is completely interchangeable with similar programming languages (like Python) so evaluating the cost of it vs something else is still a very reasonable thing to do.


They do overlap. I don't see how someone can be confused about this at all because the overlap is obvious.

Python and golang overlap for http webapps or apis. They both can and often are used for this purpose.


Go has other problem domains it is appropriate for, but I agree in this domain there is some overlap. Go and Python have very different performance characteristics though, so in that sense they're not really comparable.


> Ruby/Python are interpreted scripting languages

This is not quite right since they both compile to bytecode and execute in a virtual machine

Shell scripting is, probably, the rare example of truly "interpreted" code.


Python can be compiled to bytecode, but that's not the default or standard.


It occurs with every execution, if you don’t pre-compile it. That’s what the .pyc files are. It also does it with the “main” file, but it just keeps that in memory instead of writing it to disk.


I'll contradict everyone here: You figure it out on a case-by-case basis.

Generally, risks go down over time and with broad use. SQL, as a technology, is essentially risk-free, having been around forever and being widely used. COBOL is high risk: while it's been around forever, hardly anyone uses it anymore, at least on modern projects. Moving your COBOL app to Android is fraught with unknown-unknown risk. Something that's been around 2-3 years is generally much lower risk than something that came out last year, and a good 10 years drives risk down further most of the time, but not always. It depends on whether people seem happy with it. Mongo seemed like a really good idea for the first few years, until people figured out (1) it had wonky performance issues, (2) it was really hard to express some types of queries, and (3) what was the problem with PostgreSQL again (it seems to do JSON pretty well too!)?

Things change too. Java was the bedrock, stable, safe choice. It wasn't the fastest to code in, it was a bit clunky, but it was THE safe choice, and enterprises flocked to it. That is, until Sun died, Oracle happened, and litigation + monetization kicked in to treat Java as a cash cow.

The flip side is why would you use it? When I was building an app a while back, I chose React although React Native had just come out at that point. It let me build the app once, and run on web, Android, and iOS, instead of 3 times. I figured cost savings of building and maintaining one codebase outweighed the risks. On the other hand, in most cases, the upsides of switching away from Python -- now three decades old -- are usually negligible, so with the exception of a specific need (run-anywhere, above), I almost never pick something different.

And the final piece is code complexity, abstraction, and modularity. I don't feel bad adapting new numerical algorithms. It's usually a few hundred lines of self-contained code. If a better algorithms comes out, I might swap it out anyways. On the other hand, a programming language or framework is a lifetime commitment.

You work through all the risks and upsides, figuring maintenance is 90% of the cost, and you sometimes end up around the rules-of-thumb everyone gave. But not always.

The trick is to learn probability. It gives a good mental framework for estimating expected costs and benefits. You don't usually do this explicitly with equations (what's the probability-Oracle-screws-us times cost-of-Oracle-screwing-us versus the cost of upgrading to Python?), but it gives you a language for thinking about risks.
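If you did want to make that expected-cost comparison explicit, it's just probability times impact; here's a tiny sketch with made-up numbers (all values hypothetical, purely to illustrate the framing):

    package main

    import "fmt"

    func main() {
        // Hypothetical inputs for the expected-cost comparison above.
        pVendorScrewsUs := 0.15       // chance the vendor risk materialises
        costIfItDoes := 2_000_000.0   // cost if it does
        costOfMigrating := 400_000.0  // cost of migrating away now

        expectedCostOfStaying := pVendorScrewsUs * costIfItDoes
        fmt.Printf("expected cost of staying: %.0f vs migrating: %.0f\n",
            expectedCostOfStaying, costOfMigrating)
    }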


Oracle created a brand new market in Java support contracts which didn't exist before, so that they could enter it and make a buck (wherein FUD is a standard sales tactic for them). They probably viewed their position on the OpenJDK as subsidising a public good, which in general is slightly out of character for Oracle.

Most enterprise vendors have, or will soon have, comparable products for sale. My employers have Pivotal Spring Runtime[0]. You can also get OpenJDK coverage from Red Hat[1], Amazon[2], Azul[3] and so on.

Incidentally I resent that I sometimes wind up defending Oracle's decisions. I think it was globally suboptimal but I can understand their reasoning.

[0] https://pivotal.io/pivotal-spring-runtime

[1] https://access.redhat.com/articles/1299013

[2] https://aws.amazon.com/corretto/

[3] https://www.azul.com/products/zulu-enterprise/


Sun also used to sell Java support contracts.

By the time they went under, Java 1.2 up to Java 5 were only available under support contracts for production deployment.

Somehow Oracle hate ends up excusing Sun for exactly the same practices.


You know, it's the same as with financial advice. Good financial advice is good.... except if absolutely everyone applies it, then it becomes a disaster. Fortunately, there's no risk of that happening.

Same here. No matter what you do, leave others to try the cool new stuff & get burned by it & work to fix it (when/if possible). Stay informed, but don't be an early-adopter. It's sound advice - though it wouldn't be if everyone applied it. Fortunately, there's no risk of that happening.


To quote Terry Pratchett: The second rat gets the cheese.


2-3 years after you read it on Hackernews is a good rule of thumb.


You change when the new tech becomes boring. Boring indicates well known, reliable, and efficient.

Play with the cool new thing in your R&D time. Stick with tried and tested in your implementation time. That's the difference between hacking and engineering.


Well you can sit down and do the maths; you mention Cobol, which you can actually map to the cost and availability of developers. The cost of that technological choice just keeps growing and growing. You can compare that to the cost of converting it to e.g. Java (and multiply that cost by 2-5x because it's very hard to make an accurate guess).

This goes for all technological choices. The cost is not static, but varies over time depending on market forces.


If everybody stayed with the boring tech, cobol developers would be abundant and cheap.


In 2016, Software AG announced that Adabas and Natural would be supported through the year 2050 and beyond. I'm not sure MongoDB will be there in 2050.


COBOL/CICS was supplanted mostly because mainframes were supplanted: smaller organizations that wanted to use computer technology couldn't afford a mainframe. Companies could lease time, but I don't remember that being the norm.

Minis and then Unix systems allowed us to develop systems with newer technology. Wintel systems expanded it further.

My point is that the new technologies came about via need: it allowed more people to utilize computer technology to solve problems. As needs change, we'll continue to see an evolution of technology to meet the needs.


Adabas brought back memories. And now, when I see all the hype about NoSQL, I'm reminded of Shirley Bassey's fantastic song 'History Repeating'.


Is NoSQL even still hyped? I thought the hype cycle had moved on to NewSQL at this point.


You never change, at some point a competitor arises that chose different technology and it kills you.


Competitors don't kill you because they chose a new tech. They kill you because they can either:

- solve a new problem

- solve an existing problem better

- solve an existing problem cheaper

A new tech MAY allow that, and it MAY be used successfully toward that objective, but even in that case, that's hardly the core of it for most cases. Not saying it does not happen, but there are much more at play.


In my experience this all boils down to the fact that usually it's not the actual chosen technology stack but the missing craftsmanship when the stuff was made the first time.

Then you either end up refactoring the whole setup for years (which is usually expensive and slows down business development velocity) or rewriting it from scratch (or as a new implementation next to the old one).

If the original implementation had been sanely made, then building new features on top of it (or partial replacements or micro-services etc.) wouldn't be that big an issue. But usually these beasts are more like godzilla-level monoliths with bad architecture, so it's probably easier to rewrite the whole thing.


Probably when there isn't a pipeline of people across the skill spectrum who can understand, maintain and deploy that tech.

If you can't find either a senior or a junior who can both use the same tech you need to perform the business task, then you might be too early or have to think about changing.

The difference in your requirements for juniors and seniors probably tells you about your potential rate of change. If you're based on a recent JS framework, those two will be closer together than a finance org running on COBOL.



