This article does link to the original, so it's not uncredited plagiarism. But it still feels a bit odd. The original has been discussed really extensively on HN in the past, and even quite recently:
That is like expecting that carpenters are going to keep making jigs and saying "measure twice, cut once" in various forms.
If you could read the advice of a carpenter from 2000 years ago, I think he'd also be talking a lot about how humans make errors and need to account for that.
I think this gets at why people keep going back to old, time-tested tools. All new languages, frameworks, and tools seem to converge on the same things with enough time. Carpentry, computers, and programming languages all end up with a router.
They also end up with some kind of cruft, or annoyance. A time-tested tool has time-tested failure modes. Everyone eventually figures out they need to start a nail with taps and remove their hand before swinging. Some people prefer to figure it out on their own, others like to know the path ahead.
Sure. I'm not trying to shut down the discussion, or flag this as a dupe, or anything like that. It's more of a PSA that if anyone finds this subject interesting and wants more, there's a thousand HN comments already written on it.
It seems that this particular link was posted by the same poster 4 times over the past few weeks [0]. Given the pattern of repeat reposts I assumed it's an experiment to see which times are best to post to get more traction.
> HN rules indicate that if a post didn't get enough attention that's alright
Which is why I explicitly stated I assumed it's just an experiment of sorts and I didn't imply anything nefarious.
The optics of posting the same links from the same websites suggest going beyond "I found this interesting" towards something more about getting the clicks.
Sometimes HN is used just to garner audience, and while the letter of the rules may be observed, the spirit certainly isn't [0] (you may need to enable seeing dead threads).
Hey thanks for the feedback! I'm the author of the blog post. The topic has been influential for me, and I just wanted to share my perspective on it, especially in the context of a recent project.
I don't think any other article talks about Choosing Boring Tech in the context of Kubernetes, and building a one-man SaaS.
Could you please clarify what you found odd? I'm just curious.
You didn't want to just share your perspective: you know this subject is popular on HN and you are brute-force publishing your article until it reaches the front page (it's the 4th time you've posted the link) in order to create an audience for the product you are selling.
This is not forbidden as far as I know, but it's not really the kind of behavior HN users like.
Couldn't have phrased it better myself. I'm guessing the original poster is a member of Indie Hackers; getting on the front page of HN is one of their main mission objectives when it comes to marketing.
Personally I'm not here to tell you what to do; you're old enough to know what you have to do.
I like to read original content on HN (and I accept that this is not the case of everyone) so I don't really like when a topic keeps coming and doesn't bring anything new to the discussion.
Anyway, it seems like you are deep into copying the stuff of others, as Panelbear looks a lot like a copy of SimpleAnalytics.
> I don't really like when a topic keeps coming and doesn't bring anything new to the discussion.
It is fine to value novelty.
And also sometimes it is important and useful to return to tried-and-true ideas that others are already familiar with.
Example: If you want people's opinions on something you've realised about Kubernetes, it can be really helpful to put that discussion within a tried-and-true conceptual framework like "Choose Boring Technology".
I mean, it's basically what I wrote above. You used the exact same title as McKinley's article, the same Maslow's pyramid visualization as their presentation, and went through their basic points in the same order.
This meant there was a major feeling of deja vu; it didn't feel like a particularly original take on the subject. Maybe the feeling would not have been there if the framing had been more obvious, i.e. a title like "My take on 'Choose Boring Technology'", the link moved to the opening paragraph, and an explicit statement that you were reframing that original article in terms of your own experience.
(But to repeat, I'm not claiming that anything untoward happened here.)
It's fast enough for almost everything, and you get a choice of Java/Kotlin/Scala/Clojure depending on taste. PostgreSQL can represent almost any storage paradigm you want, from traditional relational schemas, to graph-like structures with recursive CTEs, and even NoSQL JSON document style with jsonb.
Everything is super well supported with consistent release cycles, minimal regressions, and runs on almost any platform. Tooling support is superb of course - IntelliJ makes working with the platform a breeze. You get all of the JVM debugging and profiling capabilities, and PostgreSQL has EXPLAIN ANALYZE, making getting to the bottom of performance problems simple.
If you want to take on fancier paradigms like ES/CQRS/CDC you have best-in-class libraries and tools all built on the JVM, Debezium + Kafka being one example. Need faster OLAP queries than a PostgreSQL replica can provide? Druid is JVM-based and can be integrated with Debezium + Kafka + Avro for streaming ETL. These sorts of tools are ideal because they are already best in class but can also be easily extended with any JVM language.
Want to do big data processing? If you are already on the JVM it's easy to plug in Hadoop/Spark or one of the Apache Beam runners like Google Dataflow.
The list goes on; there are literally hundreds of fields where the JVM beats all other platforms out, as it's the most general-purpose platform available. It has real threads, so it isn't limited the way single-threaded languages are. It has multiple best-in-class GCs (latency- or throughput-tuned as you please). Best JIT by far, with reasonable SIMD optimisations (annotations coming soon for guaranteed SIMD on supported platforms).
The JVM might be old but it's the 2020s' most promising platform. Rust and .NET are up for contention too, but with Project Loom and Valhalla on the horizon I don't see anything beating the JVM in the next few years.
We had a very complex application that would handle hundreds of events per second, doing all sorts of calculations on top of that. It would get stuck frequently in GC cycles due to the huge number of objects processed in memory and developers' unconstrained resource usage.
We switched it to ZGC, simply by changing startup options, and the application was like a brand new thing. Very responsive and zero GC issues.
Rewriting it in C++/Rust would take many months and even more $$$, and no one could guarantee it would work better.
With some variations, this setup describes my entire career as a software engineer so far. Reading Hacker News, for the longest time I had this lingering feeling of knowing nothing, or not the right things.
Now, I know that my experience isn't just the norm for a great number of developers, at least in Europe, but also a reasonable choice. If there is no need to do something fancy, don't do something fancy. Not in a professional business environment. There are zero upsides for the business.
If you use Instagram as your indicator of what other people are doing in almost any domain, there's a good chance you'll end up with an incredibly skewed sense of how to go about life in general. You'll feel like you need to spend hours in the kitchen preparing elaborate food that's optimized for looking good in photographs rather than tasting good. Your kids' memories of the holidays will be dominated by all the time you spent trying to assemble overly fiddly and elaborate homemade decorations and whatnot, rather than the time you spent letting them transform perfectly good cookies into horrible, dribbly, icing-covered messes and have a great time doing it. You'll worry that the only good vacation is an expensive vacation. And so on and so forth.
Which... I shouldn't be so negative. Plenty of people enjoy fancy elaborate things, and love to talk about what they're doing and share it with others. And that's great. But it's really easy to look at the Internet and get the impression that what everyone is snapping photos of for Instagram is an accurate cross-section of what people are typically doing, and develop a serious case of FOMO anxiety.
Yep, if you need to get shit done in time without a need to pad your CV with fancy tech, it's really hard to go wrong with Java or C#.
You'll be done in time and on budget. You'll also be a bit bored, but you can go rock climbing or something during the time you saved by using tried-and-true, boring tech. =)
Heh not the only one. Writing software that never wakes me up at night is what keeps me honing my craft. I take it as a point of pride that my software only receives attention for runtime/library updates or the rare feature request.
The JVM is vastly underestimated. I think the root cause is an image problem, stemming from a combination of bad programming practices in the past (getter/setter bloat, Factories for everything and unnecessary object orientation) and heavy and outdated frameworks such as Spring, JSF and others.
For me, one reason I avoid Java is because it’s what I used in college when I was a much less proficient programmer. I remember it being confusing and a lot of hard work. That association probably has more to do with learning programming than it has to do with Java but it has stuck, in my mind, somewhat.
And what is the alternative in the JVM environment? But I have to say, I'd like something less verbose. Spring's stack traces look ridiculously bloated. Still, other than that, you hardly notice the framework once everything is set up.
These days I'd go with a JAX-RS backend. The frontend is pure HTML+JS (so no server-side templates unless they make sense). No dependency injection, but "hand-wired" dependencies at the top level. I also don't use ORMs anymore. This means a slower start in the beginning, but later in the project you're not caught up in a net of a dozen intertwined frameworks.
There are many alternatives to Spring if you prefer a more DIY approach. Namely Javalin, Micronaut, Ktor (for Kotlin folk) and many more (Quarkus was already mentioned).
Spring however is the way to go if you come from a Rails/Django/ASP.NET background and want the framework that everything works with out of the box.
I agree, and I doubt anyone starts new projects with JSF anymore. But it's still very much on my mind when I think about Java web apps. As I said, it's an image problem, not a technical one. The comment I replied to laid down why the JVM is more viable than ever, and I hypothesized why people don't go for it yet.
Yeah, for sure. I sort of mentioned this with latency- vs throughput-tuned GCs being available, but I guess it's worth pointing out just how insanely good ZGC and Shenandoah are.
Would just like to toss in the CLR for folks who like .NET. You get C#, which is great (but I'm biased), and F# if you prefer a functional paradigm. Both EF and Dapper can pretty easily hook into Postgres. I'm also partial to Azure over AWS, so there's obviously a bias toward C# over something like Java there. There are similarly great ES/CQRS libraries for .NET - I'm not sure about CDC on the Postgres side, but they exist for SQL Server if you're using that.
Just to chip in here. We deploy C# to Kubernetes on Linux, AWS with Postgres as datastore. Everything works great. AWS for .NET API support is good, typically better documented than the Azure libraries.
C# is a hugely underrated ecosystem. I have experience with Java, Python, Go, C/C++, Node and Rust - and C# is my tool of choice.
Unfortunately all programming languages are flawed. It's kind of funny that we still have no programming language that does everything to a satisfying degree; instead each one has some major wart that means a different problem requires a different programming language.
There are not many languages with a lean runtime. The JVM needs a lot of memory even if you have 0 bytes of heap usage. JavaScript applications are often tied to Electron.
There are not many fast languages that do not cause security issues or hard-to-debug segfaults. The obvious choices are Rust, Java (or C#), and maybe Go. I personally set the cutoff point for "fast enough" at JavaScript on V8. Not everything needs to be fast enough for a game engine, but at the same time I don't want to waste CPU or user time on gross inefficiencies.
There are not many modern languages that everyone agrees to use. Javascript is the pinnacle of this because people are forced to use it. The JVM suffers from having too many languages. Kotlin is the only winner but I am still bitter about that one JVM language that died. I don't know why but Rust is one of the most hated languages on HN.
There are not many easy-to-learn languages. C++, Rust, Haskell and Scala all tend to suffer from the ability to write convoluted code bases that nobody understands. People become experts within their own small islands. Java EE also had a phase where everything was done with impenetrable XML files that were getting closer and closer to Turing completeness. Java and Go are both a direct response to C++ being too complex.
All easy to use languages (primarily JVM languages but also many interpreted languages) fail because they are not fast and lean and nobody uses them.
All fast and lean (=low level) languages fail because they are too complicated or suffer from security problems.
The answer? I don't know. I thought GraalVM was going to solve this with its native image mode, but my preferred JVM language doesn't run on it. I'm hoping that Java will be good enough if I'm going with a slightly non-standard setup (immutables). The lack of reflection means that you have to avoid a lot of libraries. The ecosystem advantages disappear if you go down this route.
While this is true, JVM disadvantages are only a problem in very niche domains like embedded or kernel development (and where you are forced to use JS, like the web, of course).
Overall it's definitely in contention for "universal" runtime and Kotlin/Java are definitely close to fully general purpose languages that do all things sufficiently well.
IMO the reason why they win the general purpose category is because they have the least hard limitations.
CLR is in a similar quality class to JVM, C# is a very good language. I think if you are not on the JVM that is where you should be for general purpose programming.
JS/Ruby/Python/etc are all single threaded; this is a hard limitation that I can't look past. It's a needlessly painful constraint when you just need something CPU-intensive to be done quickly. Additionally, the lack of static typing isn't something I am willing to compromise on anymore. I'm older now; I don't have time for the sort of defensive programming required to make these languages produce programs with as few bugs as a statically typed equivalent.
I still use Ruby for scripts but it's not a tool I pick up for anything but the quickest of hacks anymore.
C/C++ are too unsafe to use without dedicating too much brain space to making sure you are managing memory correctly.
They also lack a centralised repository and package management a la Maven.
Functional languages are too polarising, I can use them and appreciate them but rallying a team around them is too hard.
Haskell and its ecosystem taught me a great many things, and it's definitely what most good languages steal from these days.
A bright spot of the JVM/CLR is that you can mix a bit of functional programming in when you need to, with F# and Clojure/Scala staying compatible with the rest of your code.
Rust is getting there but isn't there yet. I think it's the most promising besides JVM/CLR. Really high quality type system, tooling is coming along, great quality ecosystem, async getting worked out. As its borrow checker gets smarter, and if an easy GC library is introduced (at least easier than Rc/Arc/Cell/friends), then it's got great things in its future.
For me it's Kotlin for now. It's the right mixture of modern, easy to learn and built on solid proven tech without any severe limitations.
This really is the exact same conclusion I have come to quite recently. All your points are strongly in line with mine.
I personally love how pragmatic Ruby is, but its billion-dollar null mistake + lack of typing + single-threaded issues are glaring. But I still love the language and especially the community. My problem with switching over to Kotlin + Spring Boot is the lack of the Rails ecosystem's web mindset. The SaaS web-focused mindset doesn't really exist in Kotlin, e.g. no Phoenix LiveView/StimulusReflex competitors. Also, while Spring Boot is supposed to be the Rails competitor, I don't have access to my `rails routes` or `rails new` or `rails generate..`; the Rails guys really focus on happiness.
Having said that with Project Loom coming and other JVM goodies it feels really good to be in the JVM ecosystem. It is very much the 'good enough for 95% cases' ecosystem.
Had a look there, but it's very far from being boring tech with regards to libraries, which is also why Elixir is mostly a no-go at the moment for me too. Love the look of Crystal though: basically Ruby with all the issues I mentioned fixed. Kotlin gets its pass on being boring with the great JVM compatibility, imo.
Python and Node both have highly fragmented ecosystems, low-quality packages and poor tooling; neither is statically typed or capable of multi-threading in a meaningful way, and outside their niches (data science for Python and client-side web for JS) they are worse at everything than the JVM or CLR.
I understand their attraction, they are "simple" and "easy" languages. But they are not boring. If anything they create a ton of distinctly un-boring problems like build chains, packaging, and the framework of the week no longer being supported (or having a new incompatible version).
Engineers may like these for whatever reasons, especially before they have tried the higher-quality tooling provided by real boring tech, but inevitably they lead to projects that either run behind schedule because of technical issues not related to business problems, and/or rot after development is paused and are hard to resuscitate and maintain afterwards.
Also NoSQL doesn't "scale" better than SQL. NoSQL stores can scale better in certain data access patterns but if your data model is inherently relational and you implement it on top of NoSQL all you have done is reinvented a relational store in your application model and likely crippled integrity, scalability and performance in one fell swoop.
And that's where folks go wrong. NoSQL is not boring tech. As soon as you need to scale, it is MORE likely, not less, that you will end up in a weird state in your app, user enrollment flow etc. NoSQL makes scaling MUCH harder in my view. SQL tools give you a common interface many folks can engage with, and many tools.
I really want to talk to people building piles of spaghetti on NoSQL because "it scales". At some point you've just got to start tearing your hair out.
PostgreSQL can do 1.5M queries (read/write) per second on OLTP loads on one box, just to get started. If you really need more, you can push throughput extremely high with replicas. Then application design comes into play.
I'm tired of folks picking "NoSQL" so things can scale. Dealing with all the edge cases as these things scale is a nightmare (plus MongoDB and friends seem to fall over MUCH more often, recovery is miserable with them, etc).
All of the issues with SQL, including strong consistency, data normalization, table JOINs, etc., mean that any RDBMS is inherently going to be limited in its ability to scale compared to a properly architected NoSQL database.
Disclaimer: I work at ScyllaDB. A LOT of our migrations are from people who got started on MongoDB, and then it fell over. Another group come from DynamoDB, and then they see their monthly bill.
There are also people who have moved to Scylla from PostgreSQL because it fell over, or those who blanched at their Oracle bill.
Scalability is not inherent to SQL or NoSQL. It requires both technical features as well as economical offerings. It is a quality of a product made with users and real-world workloads in mind.
A lot of folks underestimate what one box can do. Memory / core counts have gone crazy on just one box. Local storage has also gone crazy. 4TB of memory on a single dual-CPU node, CPUs with 32+ cores each?
And even more people talk about high scale workloads with no clue what they actually look like. :)
I routinely work with 10TB+ PostgreSQL clusters, 10TB+ BigTable clusters and 500TB+ BigQuery projects all in my current day job. I'm in Data Infra btw so this is sort of my bread and butter.
In the past I have worked with 100TB+ Cassandra clusters, 50TB+ MySQL+Vitess and countless other stores like MongoDB, RethinkDB, Voldemort, TokyoCabinet and probably tons I have forgotten.
It's highly unlikely one actually works with and manipulates these volumes of data on a regular basis and doesn't respect SQL stores and the JVM (the literal king of Big Data).
Load balancing across read replicas is usually handled by your connection bouncer, say PgBouncer/Pgpool/etc, though you may also do some more complex L3 and L7 balancing if you get really big.
Sharding is usually a matter of actually splitting the masters. There are many techniques for achieving this. If you want the database to do all the work you will probably want to use something like Citus for PostgreSQL or Vitess for MySQL.
You can also build bespoke topologies using PostgreSQL logical or MySQL binlog replication.
Failing that, you can do application-level sharding if you don't want the database doing anything fancy for you, and manage each shard as an independent database cluster.
By the time you actually need to do this you will be able to afford one of these options. :)
In the meantime you will save a ton of CPU, storage and development time vs a "NoSQL" store as databases like PostgreSQL are inherently more efficient for all but the simplest of KV access patterns.
> By the time you actually need to do this you will be able to afford one of these options. :)
What if I need to do this now? Why would I build a distributed postgres snowflake that takes 10 hours to spin up a new replica, requires that I implement my own sharding, instead of using a datastore that is designed to handle all of these things at scale?
Comes down to your data model. If it's inherently relational, SQL is still the best play. Scale and performance are much more tractable problems than integrity and consistency: one you can measure and be sure of, for the other you need a PhD to fully understand all the edge conditions that need covering.
There are some pretty decent NoSQL stores now for simpler access patterns. As long as you stay away from nonsense like MongoDB and stick to real databases like Cassandra/ScyllaDB/BigTable/etc you will do fine.
These stores are a fraction as flexible as PostgreSQL/MySQL, but they do allow scale-out storage and fast primary-key lookups and scans. Good for when the size of your data is well in excess of 1TB and you don't need anything complex, or strong consistency.
Reality - these folks don't need to "do this now".
Yes, Visa may need this. Guess what: with 1TB+ of transaction data paying 30 cents + 2% PER LINE, you'll be able to afford to do something reliable and scalable.
Folks don't realize that NoSQL is not actually that scalable except in very narrow ways. And you can spin up pretty-good-scale SQL stuff with things like AWS RDS, including backups, replicas, snapshots to go back in time, etc (NoSQL doesn't support a lot of this).
Both SQL and NoSQL will scale fine for 99% of apps (and let's be honest, ~90% of apps don't need any scale).
Your data schema/format should be dictated by the data itself more than some handwavey "we might need to scale" requirement that isn't true the majority of the time.
Be careful. What choose boring technology really means is choose boring technology for anything not related to your competitive advantage. It is the engineering equivalent of MBA advice to differentiate between core and context, and choose boring technology for anything that fits in "context". Maybe your business's competitive advantage isn't related to your technology stack, in which case, all of your technology had better be boring. But sometimes, you really do need to run that brand-new shiny 5-GitHub-stars project in production. If you do, you just better make sure that you're one of the core maintainers, and that the time you sink into it more or less directly translates to business success.
You need someone in charge who can see when you REALLY need to break out the latest javascript library that was released last week vs "I'm bored and want to use something shiny at work".
I had a manager tell me that Blockchain was a better choice for our enterprise transactional data management. Since then, I've refrained from all technical discussion.
> manager who got the blockchain crap implemented got a nice job at a huge company out of it, so it wasn't a complete failure
Not for the manager's personal career, no. But probably a failure for both the original company, which needs to maintain unnecessary complexity, and the new company, which hired a manager who prefers resume/fad-driven development.
>For example the other day I had an issue with my Django app, and a quick search led me to tens of answers to this problem in various forums and websites. It took me at most 10 minutes to get back on track and that was the end of this issue.
>
>I experienced the exact opposite a few years ago with a popular, but not so battle-tested Scala library my team had been using for a while.
This issue can be sidestepped to a large extent with less popular technologies by picking ones which are tightly scoped, well designed and well supported by their maintainers. They'll have fewer issues, the issues that they do have will be easier to diagnose, and the maintainers will probably be more help. Being very tightly scoped and well designed means that A) they won't be flooded with issues and B) there's a narrower range of things that can cause issues, making it easier for them to diagnose and implement fixes/workarounds.
Django is very broad in its outlook. It tries to do a lot. It's also got some embedded design mistakes which make certain problems more common. The size of the community and the wealth of content online makes up for this. Any new web framework that makes half as many mistakes will not get off the ground because it won't have this.
Django is also not just a technology. It's as much a community and a set of agreed-upon standards for building web apps. It's as much about being able to purchase Django Suit or use django-rest-framework as it is about fixing issues with Django itself. The people, rather than the core library itself, are what hangs it all together (& the people in this community are really lovely).
I don't like this sentiment, because it assumes you're running just another hip startup that ships some sort of bullshit web-app with DB, backend, frontend and all that crap that media nowadays calls "tech", catering to that mythical "user" that demands features and releases like oxygen.
There are, though, many cases of _technology_, like serious number-crunching simulation engines, where using "boring" "standard" technology would mean that you would spend 80% of your efforts battling limitations of your stack and 20% developing towards your goal.
From one point of view, Excel is a general purpose, schemaless CRUD environment with a DSL that allows you to encode your custom process and business logic.
Also disturbing amounts of our economy run on top of Excel sheets with macros and VBA.
It's the one tool that is on every desktop in the company and doesn't need a 6 month procurement process. It might be shit, but it's there and it'll get the job done well enough.
The only thing more disturbing than running large segments of our economy on Excel and VBA would be running large segments of our economy on some untested Rust or Haskell library.
If it's "shit" but it "get[s] the job done well enough", is it actually shit just because it's not a Kubernetes cluster?
Most software is CRUD apps, because that's all that most companies need. Your small piece of the market might be different, and that's great and everything, but it's still just your small corner of a larger, mostly-CRUD universe.
I'm not going to throw out a wildly inaccurate number, but based on pure market capitalization, I'm willing to bet the majority of software driving the primary products sold by companies globally is not just CRUD apps.
You may be both right, because you are talking about different things. The important thing is that most software being written is CRUD or at least can be served by pretty simple and standard tools and architecture.
Even more accurately, and more importantly from the perspective of this comment: most software developers will be working on such projects. And this is the gist, because a lot of folks talk about all these exotic/cool/cutting-edge solutions while most people don't need them, or even shouldn't use them, to get their job done.
I think in those scenarios you need to isolate the risky and exciting bits through APIs etc and build the rest of it using boring tech. I know of a company who failed miserably on delivery after they completed their ML risk engine and blew the budget trying to get a piece of shit Angular and nodejs front end for it working.
But boring doesn't bring young developers to your team. Like it or not, there's a fight for relevancy going on, fought by developers wanting to carve out either a niche for creativity or, less favourably, longing for job security, resume padding, or self-promotion. And there sure is an effect of platforms such as Java building up a legacy of fossilized enterprise tooling that turns younger devs away; at a customer of mine, a team even chose Kotlin in a secret move to be presented after the fact. This was for a basic FinTech backend, not Android development, so I can only assume they wanted to pad their resumes. Same thing at a bank where they chose Node.js when, as much as I like it for lightweight web backends, it really lacks basics such as decimal arithmetic, and its always-async nature only spells trouble for integration work, debugging, etc.
If it were only for bare requirements of business CRUD applications, we had it nailed around 1990 with client/server SQL apps already. Ever since, we seem to long for wrapping stuff up for modularization: first OOP, then package management, then "modules" (in Java land since v9/11, and OSGI before that), SOA/microservices, containers/k8s, with their accompanying zoo of tools that really don't make things any better in the slightest. About the only real progress I can see here is that modularization made unit testing mainstream.
I have inspired several young developers simply by showing them just how quickly you can deliver compelling functionality to customers and the positive reaction that generates. For us, speed of delivery is an explicit feature.
Thinking in terms of "fun" tech: it is more or less antithetical to the idea of quickly shipping something that is predictable. The coolest piece of tech we use is probably a toss-up between .NET Core, Blazor and SQLite.
One aspect I see rarely discussed is matching technology to the skill of the team. IMO people should not choose technologies they're unwilling to become experts with, or lack the workforce capable of using. It's a hard fact of life that some are just more capable than others (either through intrinsic factors like "intelligence", or extrinsic ones like family circumstances, quality of education received, etc.). Obviously the extrinsic ones are in our locus of control, so an engineering team that is presently incapable can grow to fill that need.
I don't think most companies w/ a high requirement for quality (say something that affects others' health for example) should use something like AWS unless they have a workforce dedicated to understanding distributed systems (academically) and become an expert in each AWS technology they choose. I have observed so many engineers treating these tools w/ their wishful assumptions of how they ought to work versus how they actually work... There's probably a similar conversation to be had about databases' guarantees, consensus algorithms, amongst others...
I've had similar constraints in my experience: tech stacks being driven more by developers' experience than by what's necessarily the best possible fit.
"Choose boring technology" implies many different things. I don't like this phrasing because it doesn't identify what part of "boring" is important:
* Is boring tech good because older tech with lots of commits is more likely to have identified bugs?
* Is boring tech good because it is easier to find devs with skills in those areas?
* Is boring tech good because it has lots of fleshed out examples to follow/SO support/packages?
* Is boring tech good because it is simpler and has fewer moving parts?
Simply saying "Postgres" or "JVM" doesn't imply boring either. Both of these have lots of advanced features and complexity. Your interactions need to be boring (simple, avoiding advanced features). Moreover, it's very powerful to make your application boring. What if your whole app could only read/write data in one place? What if your whole app could be single-threaded rather than multi-threaded (and still deliver results)? There are good reasons not to do these things, but those cases are rarer than the times we jump to these solutions, which can create difficult problems that make our lives harder later.
I agree with you and would even go a bit further. Actually, the word "boring" is completely misused in my opinion. It is all about choosing the right solution. This has nothing to do with things being boring or fancy. The word boring is clearly defined by dictionaries and is subject to personal emotions. One could say the JVM is totally amazing while another says it is dull and boring. This cannot be the basis for an educated decision.
The points you mentioned are also part of the questions one has to ask for finding a suitable approach.
I think that all those who say "use boring ..." effectively want to say something else but either do not dare to do so or are otherwise unable to express their intentions. Nevertheless redefining the word "boring" is not a solution.
The essay makes it fairly clear to me. There is a constant churn of fads in the tech world. Usually they are beneficial for some specific use cases, but get hyped up beyond all belief until everyone wants to use the new tech without questioning whether they should or not. If there aren't tons of articles about how great a new tech is, then it's probably "boring". Think of it more in terms of fashion than interest.
The difference is that Apple, the most valuable corporation on Earth, has backed it 100%. They do occasionally kill initiatives (Can you say “OpenDoc”? I knew you could!), but not at the 50% rate that Google does, so I knew it wasn’t a big risk.
Also, I really enjoy the language.
It’s a long-term investment, and I don’t expect it to really start paying any dividends for at least another couple of years. I would be fine, sticking with ObjC, which I know will still be around for at least another decade.
But I’m at the point in life, where I want to work with tech that I enjoy.
Are you using Swift on the server, or just for iOS/Mac apps? I like Swift, but it seems the consensus is that it's nowhere near ready for server-side stuff. I'm leaning towards Elixir/Phoenix/LiveView for a project I have in mind, largely because I'm a (very) amateur developer and Elixir's way of doing things clicks with the way I think more than Node, which is the only other language I'm basically competent with. Plus, I guess with BEAM, the Erlang underpinnings, and Ecto's excellent PostgreSQL support, Elixir counts as "boring", well-tested technology.
I wouldn’t dream of using it on the server. It will always be a “second-class citizen” on the server. I use good ol’ PHP for my backend. If backend was my bread-and-butter, I might consider Rust or Python (both newer than 20 years, but also mature and well-supported). Maybe even C, if it was critical.
It is, however, an almost ideal GUI application language, and I write native apps.
JetBrains, as a massive user of Java, has a vested interest in making sure Kotlin is pleasant to use (its first-party nature on Android helps here as well) and interops really, really well with existing Java code (since that is their own use case).
May I ask why? Is it because Kotlin doesn't bring much new to the table besides fixing a few minor annoyances with Java? I have the same feeling, but can't put my finger on why exactly Kotlin seems unnecessary, except that it's a modern language that doesn't have union types.
My problem with swift is that it's moving faster than most people are comfortable with. Documentation is bare on the newer versions, every example on stack overflow is probably out of date, and it's really frustrating when trying to get things done. Oh and did I mention xcode?
Xcode is a bug farm, but that was the case, long before Swift.
Its bugs aren't "showstoppers," though. Really just annoyances. The biggest issue that I have with it, is that it is huge, and takes forever to download and install.
Swift documentation is pretty sparse, but a fairly robust "tribal knowledge" base is developing. StackOverflow is not bad. Most answers I see have at least "Swift 3" variants, which is really the line of demarcation. It usually takes me thirty seconds to tweak for language changes. Xcode is fairly helpful, there.
A lot of the "moving targets" in Swift are the system framework SDKs and APIs. The language is morphing, but mostly in less-frequented corners.
Yeah, Swift is great. Also enjoy using it. There are problems, but it is getting better. For example, now with Swift 5.3, you can have packages with resources. In particular this means you can easily (well, if you stitch together various blog/stackoverflow posts) bundle Metal shaders with your packages now, which is a huge win for the work that I am doing.
This, just like many opinion pieces I encounter, feels like someone who went "I have a revolutionary idea! What if we... took things in moderation?" and then did a huge write-up on that basic concept. Yeah, don't rush to use new tech just because it sounds glitzy and fresh but don't get bogged down in the same old tools either.
Choosing boring technology isn’t enough, one must use said technology in mostly boring ways to achieve the desired results.
One hacker in our organisation (employed as engineer) used Ansible and built the most abhorrent, complicated twisted and broken Rube Goldberg machine I’ve ever seen, it’s cost us about two weeks to get it just semi-reliable.
I’ve found most of the useful solutions and best products sit on the fringes and have boring foundations.
I don't think it's a question of "boring" or "proven" vs "latest" or "newer". It's not even an 80/20 split between "use proven technology" and "explore new tools".
What it all comes down to at the end of the day is "quality".
You need to have an eye for quality, you need to be able to recognize good design decisions when you see them, appreciate their worth and weigh their potential. You need to be able to make up your mind for yourself.
If you can spot quality in a language, framework (assuming you even need one) or database, then "boring" or "proven" become meaningless phrases.
Quality transcends age and hype.
When you can spot quality far off, you can paddle out to the swell, be there as it breaks, and enjoy a long quality ride that's not cut short because of irrelevance. You won't need to see thousands of surfers on a wave to know it's a quality break.
With technology, the earlier you can spot quality, the sooner you can start riding the wave and benefitting from everything that comes with that (long term experience with that technology, deep understanding, the opportunity to shape or introduce key design decisions at a critical stage, learning from new ideas that can compound earlier into your own output, new connections and marketing opportunities). That's how it always works. There's no point catching a wave because everyone else is riding it. By then, you probably need to be paddling out for the next big swell.
To be clear, this is more than "skating to where you think the puck will be". This is looking for quality and having an eye for it, and being sure of it when you find it.
I would say quality is undervalued when it comes to technology. Technology suffers from a plague of bloat, and it's getting worse as high level languages pile on the dependencies and abstractions, all the while justifying the waste with "quality is overrated".
I find it hard to believe that you can ship a high quality product without caring about the quality of the components going into it, or that this won't cost you down the line. It only takes a few "good enoughs" in terms of probability theory to end up with a multitude of poor product tail results.
It's also not enough to get the job done, you need to maintain what you ship. You don't need to choose the absolute best tool, but choosing quality tools pays dividends over time.
Did everyone read the article? It specifically mentions:
- boring means proven, not necessarily older tech
- the ratio is 80 percent proven and 20 percent new tech
IMO it's a reasonable approach for production / commercial apps (maybe not for newer, cutting-edge apps) to maximize productivity and keep gaining experience.
Then the more basic question is: when is a tech considered "boring"? For most apps, even the newer tech should already cover most, if not all, of the use cases.
I'd say pick mostly boring technologies but step out of your comfort zone once in a while. That way you take on some risk but not a lot. Worst case you have to change your plan but you always learn something. Best case your bets pay off and you end up with something that provides you some benefits. And you learn something.
Resource alignment costs:
(Re-)hiring + severance + knowledge lost or transferred to competitors
Refitting and development costs: converting or re-writing code + (re-)training + architectural changes + design changes
Time + impact to other initiatives and maintenance
Don’t stop after costing!
Then listen to the perceived benefits or need again, being careful to communicate and discuss the perceived costs, so that those selling the idea within the company understand, because if they don’t, their good idea will be poor morale fodder later.
This is much quicker and easier than it seems. If you still haven’t convinced the elements within the company to back down on their idea, and you’ve done a good job listening and considering their points, then either those elements are onto something or they’re “on something” (drinking the kool-aid).
I picked up Rails this year after spending a long time dismissing it as an unscalable, out-of-date framework.
It’s the fastest thing out there for shipping features. I’m building a SaaS tool, so that’s all that matters right now. I added PDF export yesterday in about 11 lines of code using Prawn.
It’s great building stuff in Go, but it’s no good sitting there in a year with 0 customers and a system that can handle 10k rps. Or stuck still building that idea and switching SPA framework or RPC library, as I often did to procrastinate launching. By not using the shiny new thing I can focus on the product.
As a long time Rails developer, pushing products and prototypes out the door quicker than peers, it's heart-warming to see messages like yours more often on Hacker News lately.
It's true. I've been burned a few times trying to pick something new to base a non-trivial app on. Almost every time I do this I end up running into bugs that haven't been reported or documented anywhere. Building an app and starting a business is hard enough. Having to pioneer a new tech (when you're not the inventor of the new tech) is a massive burden if it's not a game changer for doing something critical in your app.
At this point I've spoken with 74 different devs who built 74 different apps (big and small) on a tech stack oriented podcast[0] and by far the most common trend they bring up is keeping things simple and choosing boring tech. This round up post[1] goes into more detail on innovation tokens (aka. a system for choosing new tech in a controlled way) and other trend extractions from 50+ episodes.
I avoid adding anything less than 20 years stable to my "stack".
It's proven and stable, it's bound to have a wide install base, and there is 20+ years worth of knowledge on the web for every issue I may happen to run into.
Try that with .NET. They fucked it up at least four times in that 20-year period and there's lots of garbage information around now.
Realistically the best approach for stability is to pick mature technology and that means stable, well designed and slow moving. Things like enterprise Linux distributions, Postgres, Go etc.
Java is mature to the point where legacy is considered a best practice even when it's not (e.g. JavaBeans-style getters and setters). Old and mature may sometimes mean that you will work with the efficiency of developers from 20 years ago.
I personally prefer an approach with controlled risks, doing as much experimentation as possible on a solid foundation. Scala won't fit here as it pretends to be a platform, but this approach allows you to avoid the J2EE quagmire with smaller or better-designed frameworks and libraries.
Java is a mysterious one really. It's probably the most complete stack of all, but you're right about the legacy. I find it difficult to evaluate which technologies are risky with it now because some of it is thoroughly abandoned.
There's nothing specifically wrong with it; just that dogma without understanding why you're doing it is always problematic.
If your coding rules state that /every/ object with any properties needs an associated getter/setter/Builder/Factory, then it quickly leads to a huge amount of code and indirection to do the most trivial of tasks - made worse by the fact that Java doesn't have first-class macros.
In regards to getters and setters specifically, this StackOverflow post is quite good [1]. If those reasons don't apply, then it just ends up complicating the solutions for very little benefit.
Actually there is something specific: while individual field accessors may make sense in some use cases (ActiveRecord or bidirectional UI mapping), the naming convention chosen for them in JavaBeans ("get/setXxx") is too verbose and doesn't make sense at all. Compare:
1. a.getFoo().getBar() vs a.foo().bar() or a.foo.bar
2. a.setFoo(1); a.setBoo("moo"); vs a.foo(1).boo("moo"); or a.onSomeEvent(1, "moo");
JavaBeans are often used in reflective mapping, but nothing prevents you from using a simpler convention the same way.
In my opinion, the three extra characters are worth it for reducing ambiguity.
For example, it's clear to me what setFoo() and getFoo() do, but what about foo()? Does that return the value of foo or does it run a process called foo?
JavaScript has a different but also unambiguous way of doing it, with a.foo used for properties, and a.foo() used for functions.
The contexts in which this question would make sense are very rare, so no, it's not worth it.
Let me also quote Brian Goetz on this topic:
"No discussion involving boilerplate (or any question of Java language evolution, for that matter) can be complete without the subject of field accessors (and properties) coming up. On the one hand, accessors constitute a significant portion of the boilerplate in existing code; on the other hand, the JavaBean-style getter/setter conventions are already badly overused. Mutability may drag with it encapsulation, and encapsulation plus transparency may in turn drag accessors with them, but we should be mindful of the purpose of these accessors; it is not to abstract the representation from the API, but at most to enable rejection of bad values and provide syntactic uniformity of access.
(Without rehashing the properties debate, one fundamental objection to automating JavaBean-style field accessors is that it would take what is at best a questionable -- and certainly overused -- API naming convention and burn it into the language. Unlike the core methods like Object.equals(), field accessors do not have any special treatment in the language, and so names of the form getSize() should not either. Also, while equally tedious, writing (and reading) accessor declarations are not nearly as error-prone as equals().)"
I'm in a .net shop, and I've avoided most of the drama by sticking with .net 4.6/4.7 until now, and just ignoring anything to do with "core".
I'm sticking my toes in the water with the new .net 5 now, hope I've waited long enough.
That wasn't a good move; since 3.0 it's really, really stable. Heck, even between 2.1 and 3.0 barely anything changed. 5.0 is basically just a package upgrade (well, npgsql+efcore have had some trouble, but that took only a few days).
Also, if you did not use WebForms, moving between 4.7 and 2.x was basically just a rename of using statements and implementation classes.
If it's true that there's no real difference in libraries between 4.7 and 2.1, that makes me even more sure I made the right choice. I skipped a load of stress, and didn't even miss any important new knowledge.
There are a lot of changes. Don’t assume that poster is correct. A lot of the core HTTP stack has changed and a whole chunk of third party libraries are abandoned or rewritten entirely for .net core.
Anything async is likely to be a complete pain in the ass.
That may work on the backend, but not at all on the front-end. 20 years ago was the time of Windows 2000 and Mac OS 9. Not much in the way of mobile applications. You could still use web technologies supported by Internet Explorer 5, though.
Absent a link to support your claim, color me skeptical in the extreme that your "squirming" led to something remotely resembling a reasonably interactive, performant, maintainable web application. Perhaps you mean you've published some static HTML documents resembling a 1990's website?
It is a forum website with accounts, tagging, sorting, and moderation.
All these features are accessible to Netscape 2+ and IE3+, and also Opera 3+, Opera 12, Lynx, Links, w3m, NetSurf, Dillo, and many others, with and without JS enabled.
Accounts require cookies for now. Some enhancements require JS.
Mostly built by one dev in (copious) spare time.
My motivation is to preserve the knowledge of building compatible Web apps, to allow anyone to access my site, and to embarrass the big websites which complain if your Chrome is a couple versions behind.
I believe in aiming for 100%, not 95% accessibility, and here are some scenarios I've tested with:
* Older devices like my beloved iPads running iOS 7 and 8, which won't upgrade past that.
* Vision-impaired users with screen readers.
* Library computers stuck with IE and very old Firefox.
* Restricted access connections, browsing (and posting, and voting) via Google Translate.
* Slow connections and slow devices forced to browse without JS.
* Text-mode browsers Lynx and Links which were my only option at the time.
* Device without a keyboard available (I have several on-screen keyboards to choose from)
* Device without my keyboard layout of choice. With JS, I have two translit options at this time: phonetic-Cyrillic and Dvorak. NoJS support to come.
* Retro browsers from a long time ago with their unique beauty and features, e.g. Mosaic, IE3, Netscape, IE6, Opera 3, etc.
Some may consider these to be "edge cases" not worth a bother, at less than 0.1% of the visitors. I guess they might also consider a wheelchair ramp unnecessary, since it only gets used like once a year. I consider all these scenarios to be opportunities.
That's an impressive set of claims. But following your profile link (via bog-standard iOS 14, current Safari) triggered a prompt for credentials, with no hint as to their purpose or usage. That's as broken and inaccessible a UX as I've encountered in a long, long time.
The credentials are listed directly below the URL in my profile.
The site is just a demo for hackers and friends, not for general purpose use.
HTTP Basic auth helps prevent bot crawling, which the site is not yet optimized to cope with, while being by far the most supported auth scheme from mid-90s to today.
ps I'm really not trying to antagonize you; was genuinely curious about your claimed super-accessible website, and kept trying. Done now, have a great day and see you around HN.
There's a big disconnect between many organizations that use libraries, compilers and tools from OS packages on stable Linux distributions, and the HN/SV crowd that uses docker, or pip/npm/go get/cargo even on production.
What about security vulnerabilities? Unless these are airgapped machines or otherwise not interoping with the rest of the world, you still need regular security patches to keep up with the latest ingenuity of hackers. And please don't say that old tech is immune to this, remember shellshock, rowhammer, goto-fail, spectre and many more.
As far as browsers go, when visiting my own trusted sites, I don't worry too much about vulnerabilities.
Most of the vulnerabilities are also JavaScript-driven, which is optional on my sites.
On the server side, of course, I use stuff which is still maintained and sometimes a multi-layer security approach.
There is no shortage of both established/mature and still-maintained tech around. Some examples I appreciate: GNU, POSIX, Linux, Perl, SQLite, PHP, Apache, Tcl.
For all of these examples there are many accessible resources with oodles of knowledge in many different convenient forms.
Newer stuff, where the main resources are Reddit and StackOverflow, and the writing is limited to just a couple of years, is nowhere near as rich or useful.
Many newer sites are also heavy and JavaScript-laden, both of which slow down access and skimming speed, if I can access them at all.
Thank you for your feedback. I'm glad you think it's stupid.
I make some exceptions, and Canvas may be one of them in the future. It's not a hard limit, more like a guideline.
I want to make stuff that lasts at least as long, and that means I should use tech that's already been around that long, to take advantage of the Lindy Effect.
In what way is that a backfire? Python 2 was retired after 14 years. It began to show signs of impending retirement after about 9 years, in the form of a fork into a "new version", parallel to diminishing support of 2.x.
Kubernetes is used by thousands of companies, big and small, has been around for more than 6 years, and is well supported by various cloud providers. There's lots of documentation and forums online with plenty of help.
I'd argue that there's more documentation available than for any home-grown deployment system.
That's why I consider it battle-tested for my purposes.
Also, the post explains that boring does not necessarily mean "old". It's about using what you know best.
k8s is boring at this point. You can purchase it as a managed service from the biggest vendors on the planet and reliably find people with 2-3 years experience working with it.
I have worked with EKS the most when it comes to k8s, and I don't find it as smooth as AWS's other offerings. Check the upgrade guide: https://docs.aws.amazon.com/eks/latest/userguide/update-clus... There is still quite a lot of engineering needed even when using a "managed" service.
To be fair EKS is bottom of the barrel. Things are a lot better in GKE land.
AWS never wanted EKS to be a thing, that is why it's overpriced and not very good.
Unfortunately ECS is crap and Fargate can't save it at this point.
Hopefully they get over that soon and make the industry-standard stuff like EKS and MSK good and actually price-competitive with running it yourself (accounting for an ease-of-management premium, of course).
I've gone more extreme than the 80/20 ratio, thinking instead in innovation points. Fine to have a few risky things, but that should largely be driven by the business, not dev. Having 20% risky deps that you might need to track changelogs on and fix adds up as you keep adding code. We do some wild GPU and data stuff, and time spent on weird js/python/infra/etc lib issues from old risky deps is time not helping our core and our users. Nowadays, we try to remove deps, not add them.
Putting 20% on innovation in the codebase makes more sense when thought of as the % of time spent improving, not as the mix of risky deps. Ex: we invested in switching to more modern CI as useful innovation in the codebase (serverless GPU CI with GitHub integration, PR artifacts/snapshots/reproducibility, config as code, etc.)... but did it intentionally with boring code (gha/docker/packer). This brought in new ideas that took some iterations, but the boring code means it's only had one real incident, and all of it has been justified in the business by the productivity ROI.
I think the main reason people choose old tech is that they know it and don't have the time or the motivation to learn something new, even if the new tech is much better for their business.
And who can blame them? If you've invested years becoming an expert in a technology, it's hard to admit when something better comes along.
The other point that goes along with this is that, given constraints like cost and time, most people can build a more stable solution with tools they know than with tools that are new to them. It takes time, and often working through a lot of mistakes, to come to grips with a new technology. Sometimes it's better to use what you or your team understands, assuming it's still the right fit for the problem. That goes beyond the tech industry as well.
>it's hard to admit that something better comes along.
90% of the time it isn't better; it's just the latest fad. Remember when NoSQL was the cool thing a few years back? Now my company is stuck on a shitty MongoDB backend for what should be a relational database. We have Kubernetes, which causes problems from time to time. We don't actually need to scale for the foreseeable future.
That's why our standard front-end stack is Angular + TypeScript.
Microsoft is doing wonders with TypeScript. Thanks to its type system we have a much higher level of confidence and amazing tooling, so we can focus on writing tests and solving business problems.
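As a minimal sketch of the kind of mistake the compiler catches for us (a made-up example, not from our codebase):

    interface Invoice {
      id: string;
      amountCents: number;
    }

    // Summing is safe: the compiler guarantees amountCents is a number.
    function totalCents(invoices: Invoice[]): number {
      return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
    }

    // totalCents([{ id: "a", amountCents: "100" }]);
    // ^ rejected at compile time: Type 'string' is not assignable to
    //   type 'number'. Plain JavaScript would silently concatenate
    //   and produce the string "0100".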
It's similar with Angular. During the AngularJS-to-Angular transition a few years ago we were very skeptical and thought Angular was a ticking bomb. It turned out to be the opposite! The team learned their lesson, and they now provide automatic updates through their CLI. The framework hasn't changed for the past 4 years, and Google just releases incremental updates with improvements in DX and speed.
They can definitely work on better build times and forms, but this is the first time we've been able to focus on solving customer problems without stressing about the latest trend in creating components.
Agree on TypeScript, but Angular (at least when I last used it, back in the v2-4 days) was very quirky. It felt like you had to know a lot about its internals and change detection mechanisms to make things perform well, and if you needed to opt out of its model and dip into lower-level DOM manipulation code, that was quite a headache. I've found React to be a much more boring technology where everything just works.
There seem to be different aspects of boringness.
Angular has been boring for us in not having to assemble the frontend stack from scratch, keeping up to date, and hiring people who can immediately get up to speed. The least boring part of Angular for us was rxjs and forms. It's awesome having rxjs experts, but for folks getting started it can get messy. We decided to limit its usage, and things have been going pretty well so far. In terms of performance we haven't hit any issues, but I suppose that depends on the app you're building.
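For a sense of why we limited it, here's the kind of rxjs chain that trips up newcomers (a hypothetical search box, sketched for illustration; the selector and endpoint are made up):

    import { fromEvent } from "rxjs";
    import { debounceTime, distinctUntilChanged, map, switchMap } from "rxjs/operators";

    // Each short operator name hides real semantics: timing, equality
    // checks, and switching between inner subscriptions.
    const input = document.querySelector<HTMLInputElement>("#search")!;
    const results$ = fromEvent(input, "input").pipe(
      map(() => input.value),
      debounceTime(300),        // wait for a pause in typing
      distinctUntilChanged(),   // skip repeats of the same query
      // switchMap drops results of the previous query once a new one
      // arrives -- a subtlety that's easy to miss when reading the code
      switchMap(q => fetch(`/search?q=${encodeURIComponent(q)}`).then(r => r.json()))
    );

Four operators, four separate mental models: fine for experts, but a lot to absorb all at once.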
We've had successful React projects as well. There, the boringness came from the intuitive approach to building components. We found it less boring when we had to keep up to date, stay consistent in the technologies used, and keep integrations smooth. Hooks felt very much like rxjs: there's so much semantics behind a short name that you need to dig into the implementation to understand what a given hook does.
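To illustrate the "semantics behind a short name" point, a hypothetical custom hook (my own sketch, not from any of our projects):

    import { useEffect, useState } from "react";

    // Looks tiny, but correct behavior depends on timers, effect
    // cleanup order, and when re-renders happen.
    function useDebouncedValue<T>(value: T, delayMs: number): T {
      const [debounced, setDebounced] = useState(value);
      useEffect(() => {
        const id = setTimeout(() => setDebounced(value), delayMs);
        return () => clearTimeout(id); // cleanup runs before the next effect
      }, [value, delayMs]);
      return debounced;
    }

The call site just reads useDebouncedValue(query, 300), and everything above stays invisible until it misbehaves.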
This is why I always start with Django or Ruby on Rails. It's "boring", battle-tested, has great support and gets you _very very_ far before you need to choose $HOTTECH.
Choosing $HOTTECH too early will destroy you. Building an entire business around $HOTTECH when the majority of people barely have a business model or a defined market is just asking for trouble.
It's also substantially more difficult and expensive to recruit for $HOTTECH, but if you form a solid team on boring tech, and you build something sustainable that actually makes money, you can invest the time into building something with $HOTTECH and create subject matter experts out of your "boring" team.
This is why our company primarily recruits from co-op programs - you can mold them into experts without most of the baggage from greybeards.
It probably depends on many factors. Complexity, for example: I'd always choose Jekyll over Gatsby for a simple static page. Sometimes, though, trying new things is great too. For example, we chose Vue over React 5 years back when Vue was very new, and I am still happy with that decision.
When we started building Enchant [0], we used some tech that we understood well and some tech that was new and shiny.
Over the years, most of the new and shiny choices eventually caused more pain than we had ever anticipated. We ended up either ripping them out or working around the problems they introduced.
But I think "choose boring technology" is not a great guiding light. I've written previously about how to pick the "Right" technology [1], but I'll summarize below:
* It's not about using the right tool for the job. Sometimes, the right tool for the job isn't the right tool for you right now: it may be too expensive, you may not have the team necessary to maintain it, or it may not integrate well with your existing stack.
* It's not about using boring tech either. Boring tech can be antiquated already (or on a rapid path of getting there)
So how do you choose the right tech?
* Sometimes the "right" tech is already there - your existing tools may already be able to do what you need to do "good enough" for the next order of magnitude of scale. This buys you time before you need to take on additional complexity.
* The right tech is proven and widely deployed - and looks like it still has a good trajectory going for it ... i.e. people still care about it!
* The right tech is robust - it's already battle-tested and handles the edge cases and big business needs: security releases, zero-downtime upgrades, high availability and backups, known paths to scale, strong community, etc.
But this isn't a one-time thing. When you start seeing signs that the tech is losing favor to newer, better things, you also need to plan a migration, or you end up stuck in the past (with poorly supported tech that's hard to hire for).
Today, having learned our lessons (the hard way), we evolve our tech stack much more methodically.
I tend to prefer tools that a) have withstood the test of time, or b) I feel comfortable debugging if my business depended on them.
My argument for battle-tested tools is: if they are still used by a significant community, even if no longer "trendy", they probably solve a problem well, there's lots of help online, and the ways in which they can fail are well known.
That said, I do try some lesser known tools if they help me ship the product. I just look at them as a risk factor until I understand them better.
> prefer tools that [...] I feel comfortable debugging it
That's really sound advice.
> prefer tools that [...] withstood the test of time
That I disagree with, sort of. In my eyes, the history of tech, and hence the distribution of tech popularity, is closer to random than optimal. That's probably because choosing a stack is a complex process, subject to many decisions, not all of which pertain to a given technology's technical efficiency.
That said, if a piece of technology is popular, sometimes its large ecosystem will help bootstrap your project faster. But not all large ecosystems, or communities, are created equal. Some ecosystems, in some cases, even though smaller in size, outperform even the largest ones.
As an addendum, choose technology you enjoy. Often that will be something you already know, and you'll be so much more productive because it won't feel like a chore.
I think the main point of the article (and I would also emphasize this) is to choose proven and widely used libraries - not languages.
You want others to have experienced the pitfalls you might run into so you don’t have to wait or contribute to upstream OSS if you encounter an esoteric bug.
Basically, software is business, and business is about maximizing profit (efficiency) and reducing risk (bug-fixing time). There is a reason why large-scale enterprises tend to be behind the curve wrt. trending tech stacks.
Well, I have a problem that if I'm not motivated I tend to procrastinate, a lot. Exciting tech enables some kind of "berserk mode" in me where I work frantically, i.e. like a normal coder. So I tend to choose exciting tech when I can (and I must say that discovering Rust a few years ago gave me a lot of motivation to enjoy programming again).
Arghhh...why do people feel compelled to virtue signal in such stupid and unnecessary ways? This is an article about technology. Want to discuss politics? Fine. Write an article about politics. Don't shoot yourself in the foot with politics. It's a bad idea.
I know people who have absolutely ruined their careers mixing politics with work. One guy had to move to another state because nobody would hire him.
I feel younger folks are particularly naïve about this stuff. I don't care who you think you support today, left, right, middle, Martians. I guarantee you that life will eventually make you regret some of your choices and even change your mind.
Do not take a red-hot poker in the form of words and brand yourself on the internet with something you might just live to regret and will not be able to erase. You never know who's reading your stuff. And don't say, "I don't care". I know people who have lived to regret thinking that way. If you want to write about tech, write about tech and leave all else out of it.
I have lived in countries where changes in the prevailing regime came with serious consequences for those firmly and publicly aligned with the prior regime. People in places like the US (particularly younger folks who just don't know the history and nature of the world) don't understand that something that might seem like favorable virtue signaling in one epoch can turn into a death sentence when the tides change.
Don't get on lists or clans unless you fully understand the potential consequences. You want to be a software developer? Fine. Don't be a Democrat or Republican software developer. Believe me, it will cost you and you will likely never know it is happening to you.
All you have to do is reflect on what you are thinking this very moment as you read this --particularly if you down-voted the comment. Some of you are thinking something like "this guy is an idiot, I would never want to work with him". Well, there you go: QED.
Regarding Donald Rumsfeld: "Fuck this guy", among other things, says "this person is ignorant". Not an insult, just a statement of fact. I seriously doubt they are able to recite the guy's biography. They likely "know" him through a myopic, third-party, unidimensional quasi-opinion of unknown origins. In other words, they know nothing.
No, this isn't to say I love Rumsfeld. I like very few people in politics (let's call it zero right now).
For example, among other things, did you know Rumsfeld was responsible for moving the world to high definition television? In fact, it would not be a stretch to call him "The father of HD television".
Seriously. I was floored when I learned this myself about twenty years ago. He did this outside of government, when he was in private industry. There's a wonderful little book titled "Defining Vision" that is a super interesting read about how the US, and the world, moved to HD TV. It's a great book about the tech, business and political sausage making that got HDTV going. Imagine my surprise when I was reading a book about technology and Rumsfeld surfaces as the key player that made it all happen.
I have learned we should strive not to manufacture ignorant caricatures of people and paint them in a negative single wavelength light after reducing them from a complex three-dimensional being to a single point in space.
This is no different from saying people crossing the border are criminals, that all Democrats are communists and all Republicans are idiots. These are incredibly ignorant views of reality we should all avoid. Do not brand yourself with such views, it's a bad idea.
Criticizing the man who is responsible for the deaths of thousands is not any more ethically dubious than your position of apathy until you "fully understand the consequences". Rumsfeld's advent of HDTV doesn't really offset his core role in the unjust invasion of Iraq.
Please provide detailed proof, including historical analysis and the geopolitical context that demonstrates that (a) he was personally responsible for thousands of deaths, (b) his actions did not save many more lives and, (c) the lack of such action would not have resulted in massive loss of life through other means immediately or years later.
Of course, I don't expect you to answer any of this. This simple challenge is to demonstrate just how ignorant you are, how ignorant I am and how ignorant we all are about such things.
You played precisely into what I described in my prior comment: The reduction of a person or a group from a complex multidimensional entity with context and an entire universe of intertwined realities to a single point in space fully defined by "responsible for the death of thousands".
I am NOT defending Rumsfeld, precisely because I am as ignorant as you are. The only people who possibly had the requisite context sat in a single conference room as these decisions were being made. None of us has any idea. We are all ignorant.
All Mexicans entering the US are criminals and are responsible for the enslavement of tens of thousands of women into sex trade in the US. This is the kind of statement you are making. It is preposterous when it comes to Mexicans and it is equally preposterous when it comes to most high-level politics.
Do not reduce a person or a group of people to a single point devoid of context. The conclusions you reach will be as flawed as that reduction.