Cases where choosing one language gives you a considerable advantage over choosing another are very rare.
On the other hand, look at the tools self-proclaimed to be the best (Lisp, Haskell): how many big, complex software projects have been built with them?
In the end they're just tools. Yes, they make a difference, but it's tiny compared with all the other pieces of the puzzle of how to build good software.
Languages are better or worse suited depending on the application.
How many of these amazing software projects in C++ are web backends?
Most of the Web 2.0 boom was built on higher-level languages/frameworks like Rails/Ruby, Django/Python, and more recently Node/JS.
It's simply not possible for the average software engineering team to build systems as quickly and as reliably in something like C or C++.
Similarly, the revolution in applied machine learning we are currently seeing would not be possible without tools like NumPy or scikit-learn and the frameworks that have been built on these.
On the other hand, 3D game engine code is almost universally written in C or C++.
Sure, choosing between Rails or Django will not matter all that much to your stack in the long run.
And it's just as important to build a good team around your tech.
But the best team in the world is not going to be able to write a highly optimized game engine in Python or quickly prototype new NNs in Ruby.
New languages become popular because they address some need that was not previously met.
Understanding these roles different languages play is an essential skill in software engineering.
Varnish Cache is written in C.
Force of will, network effects, and other political factors dominate most forms of technical merit when it comes to "success" of a project. This includes such meritorious properties as security as well as understandable code. Let's not give up, OK? Doing a good job is always going to be hard, but it doesn't have to hurt this much, and it doesn't have to be as crappy-by-default as it is now.
As we take responsibility for doing better with our tools, we need to take responsibility for doing better on our tools.
Now, while I do believe that some tools are better for some jobs than others, I also think that people are often just exchanging one set of problems for another without realizing it.
I do agree that tools (and process and ....) can make it less hard but that requires real, active participation. And I think that is even harder...
Analogy for clueless manager types: in many cases, software is 99% plumbing. But imagine the cost of changing the plumbing after the house has been built :) Also consider the problem of changing the plumbing while people are actually living in the house.
> C++ was created as a reaction to OOP being so difficult to do in C.
> Java as a reaction to memory management being difficult.
> C# as a reaction to Java not being concise enough.
> Go as a reaction to parallelism/concurrency being difficult.
> Rust as a reaction to what C++ has become.
And it goes on. There are many other languages, too, that you won't hear of, created as a reaction to the status quo not being good enough, but the ones I listed were backed by big enough communities to become well known. There is also the Lisp family and the functional paradigm, which come round in popularity again and again.
All these languages/tools are the problem, because no one is sitting back and proactively realising that none of these languages are good enough. I am not saying this to start an argument; I have happily been a programmer for 15 years. But I recognise the problems, and they are the same problems, and they will repeat over and over again ad infinitum. I actually started writing down problems I've found and possible solutions, and it's already reached over 30 pages.
What it boils down to is that all languages share the same flaw; they specify the what and the how, but never the why. Until we figure out how to encode the why, we will forever be going in circles.
I don't believe encoding the "why" is possible.
The mapping from "why"s to machine code is neither bijective nor injective, nor even predefined. It can't be made well-defined enough to drive a deterministic compiler. If you restrict the set of "why"s to a narrow set of inputs the compiler can understand, you've basically re-implemented specifying the "what"s again.
Building ever higher abstractions is tractable because it's what we've been doing for decades: combine several lower-level "whats" into higher-level "whats". But encoding the "whys" seems to be unsolvable. Either that or I'm not understanding what you're communicating.
Wasn't C a reaction to assembly languages not being portable enough? Weren't assembly languages a reaction to machine code not being easy enough to work with? Wasn't machine code a reaction to punch-card systems being too inflexible? Etc. etc.
I'd say these problems are technically difficult, just not deep. To clarify the difference:
(0) A difficult problem requires a lot of effort to solve. Example: Finding and fixing use-after-free errors in a buggy C program.
(1) A deep problem requires creativity and insight to solve. Example: Inventing and formalizing Rust's borrow checker rules.
Programmers are naturally attracted to deep problems, but not difficult ones. In fact, reducing difficult problems to deep ones is often considered progress, for good reason. After you come up with the right insight, solving a deep problem can be very easy. But solving a difficult problem will always be hard.
Edit: Besides that, I agree, except for one thing: I'd say C# is just MS's reaction to the existence of something awesome that didn't belong to them.
Really, C# is a public service: a correction for something that, for so many reasons, should have been awesome but turned out so-so at best.
(You may say writing a million getters and setters, declaring types twice, stuffing your code in deep directory hierarchies, and dealing with bizarre licensing/version issues feels normal; that's the Java talking.)