Even setting up a benchmark is tricky, because the same binary can have dramatically different performance depending on the environment. At the machine level, caching behavior varies from processor to processor, with large effects on performance. But compilers and runtimes can easily make things even more unpredictable, requiring things like warmup time to even hope to get an idea of what's going on.
I think Go starts out a bit closer to achieving predictable performance; any ahead-of-time compiler has an advantage here. Other than the use of garbage collection, the whole language seems to be designed with predictable performance in mind over ease of use, and you can avoid GC where it matters. Also, they recently replaced segmented stacks with contiguous stacks that are copied when they grow, which both improves performance and makes it more predictable while a goroutine is running.
Chrome/V8 has this problem as well - if you talk to really skilled web developers, they have a lot of performance "tricks" in their heads, and the pitfall is that this knowledge decays much more rapidly than people think, so what was common knowledge in 2008 (or even 2013) is no longer true. One major problem we faced with Google Search, and specifically Instant, was that it was optimized for the performance characteristics of the browsers of 2008; pretty much none of those rules apply to modern Chrome and Safari on mobile networks, and so performance is ridiculously bad on mobile.
Go achieves predictable performance only because it's relatively new. It's true that the team has tried very hard to keep things simple and predictable, but the problem is that the hardware that Go runs on keeps changing as well. As a language it was designed to take advantage of multicore chips that are just coming into use now. What if in 5 years we're all using quantum computers, or memristors, or flash memory, or the memory hierarchy no longer applies? What if everything is peer-to-peer over mobile devices?
This has already happened; what many people don't realize is that we're arguably past the multicore era. Desktop-class CPUs aren't adding more cores as quickly as we had predicted, because people aren't using them (although note that this is different in mobile). What they are adding is wider and wider SIMD units, and more and more SIMD instructions. To get maximum performance on the CPUs of today, as well as those of the future, using SIMD effectively is every bit as important as using multiple cores effectively, if not more so.
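To make the contrast concrete, here's a minimal sketch (Python with NumPy, purely as an illustration, and assuming NumPy is available): the whole-array expression gets dispatched to native kernels that can use wide SIMD units, while the explicit loop handles one scalar at a time no matter how many vector units the CPU has.

```python
# Illustration only: NumPy's elementwise operations run in native, often
# SIMD-vectorized kernels; the pure-Python loop is one scalar op at a time.
import numpy as np

def saxpy_loop(a, x, y):
    # One scalar multiply-add per iteration: no data parallelism.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_vectorized(a, x, y):
    # Single array expression over the whole vector.
    return a * x + y

x = np.arange(8, dtype=np.float64)
y = np.ones(8, dtype=np.float64)
assert saxpy_loop(2.0, x, y) == list(saxpy_vectorized(2.0, x, y))
```

Whether the runtime actually emits SIMD instructions for the vectorized form is, of course, exactly the kind of thing you can't predict from the source alone.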
In my opinion, programming languages have been fairly slow at responding to this.
That may be true, but on modern machines it's pretty much a pipe dream, as even the hardware nowadays (let alone modern OSs) is unpredictable. So by the time you get to the compiler it's a little late to get predictability. I think we're better off forgoing predictability altogether -- except when writing real-time applications (and I'm using "realtime" in its original sense) -- and getting the best performance we can. JITs currently seem to be the best hope for top performance, especially per unit of manual programmer effort expended on optimization.
You are certainly right if one takes "predictable" to mean something close to "deterministic"; on the other hand I think there are numerous almost-deterministic characteristics that can be evaluated without resorting to plain-ol' empiricism (particularly involving likelihood of hits in L1/L2/L3 and likelihood of being able to speculatively execute particular branches of code).
So I would agree that my a priori predictions may be a factor of 2-3x off; OTOH I often succeed in predicting that a particular tight loop will never have to leave L1, or that a low-entropy condition in the loop will be made essentially irrelevant by speculative execution.
With that proviso, I share your belief that JITs seem to be the best hope for top performance (and I am a heavy user of LuaJIT).
BTW, for anyone interested in an overview of the non-determinism underlying modern hardware architectures, I recommend watching this great talk -- A Crash Course in Modern Hardware -- by Cliff Click, one of the world's top JIT experts.
A lot of the HPC work has moved over to GPUs; it is amazing what one can do when you have almost complete control over the memory hierarchy.
GPUs are obviously not the solution for concurrent or irregular workloads (yet), but many are surprised how much mileage one can get out of Python + CUDA for scientific workloads. The only point I was making is that there is a world where the hardware is much more deterministic (even if most of us can't go there).
Now you can still argue that the remaining calls that go deep into the recursion could account for a lot of time. I don't believe this is the case though: each recursive call should terminate a constant fraction of the remaining candidates, so the drop-off should be exponential.
Another interesting observation about this phenomenon is that running the recursion backwards (beginning with the large divisors) might greatly decrease the runtime, as a more significant fraction of the `val`s would be falsified by larger divisors in the first calls of the recursion.
With these deltas, the first few checks are essentially free. A large fraction of the numbers to check will be eliminated early and never reach the expensive idiv tests.
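A quick sketch of that drop-off (hypothetical numbers and divisors, not the actual benchmark code): counting how many candidates survive each successive divisor check shows why most of them never reach the later, expensive tests.

```python
# Each cheap divisor check eliminates a constant fraction of the remaining
# candidates, so survivors fall off roughly exponentially.
def survivors_by_stage(candidates, divisors):
    """Count how many candidates remain after each successive divisor check."""
    alive = list(candidates)
    counts = []
    for d in divisors:
        # Eliminated candidates never reach the later (expensive) checks.
        alive = [n for n in alive if n % d != 0]
        counts.append(len(alive))
    return counts

counts = survivors_by_stage(range(1, 10_001), [2, 3, 5, 7])
# Survivors only ever shrink, and the first divisors do most of the work:
# after just 2, 3, 5, 7, fewer than a quarter of the candidates remain.
assert counts == sorted(counts, reverse=True)
assert counts[-1] < 10_000 * 0.25
```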
Is there a way to detect when these cases happen? If there were, the compiler could choose to ignore the @tailrec directive when it wouldn't give any meaningful speed increase.
In this example, did we learn what is faster, Java or Scala? Nope. But we learned a lot digging for explanations why the results are different, which will hopefully result in having better cohesion between languages and the underlying platform.
Performance is often FUD when it comes to languages. Clojure, Scala, Haskell, and even Common Lisp are plenty fast enough for most purposes. Hell, Python and Ruby are fast enough for most applications. Besides, runtime performance has more to do with how the code is written than anything else. You can write very fast C++, if you spend a lot of time tweaking it and hire experienced and extremely expensive ($300k/year and up) programmers, but typical C++ isn't any faster than well-written code in other languages. (I've seen C++ projects fail for performance reasons related to maintainability issues that arguably wouldn't exist if Haskell were used.) However, invoking performance is a great (if unreliable, given who it brings onto the field) way to scare decision-making business people (toddlers with guns) into taking your side on an issue they know nothing about.
This is a place where it'd be better if programmers were a little more politically savvy. (Bringing The Business into a technical dispute is not politically savvy. It ruins everything, in the long run. Never invite executives, also known as toddlers with guns, to anything. As many a Chinese noble learned about opening The Wall and letting the Mongols in to fight one's battles, it's impossible to get them out after it's done.) Let's say you have a team of 5 programmers who want to write something in Python, which is (for most purposes) fast enough. One of them stands up and says, "oh no, we can't do this in Python because if we end up running this on 100,000 boxes it will be too expensive, so we can only use C++" (premature optimization). If he were more politically savvy, he'd build the thing in Python and then, if the software were to run on 100,000 boxes, rewrite performance-critical pieces in C++, and justify a bonus for himself by pointing to the 20,000 CPUs that were just deprovisioned. And a year later, a month before bonuses are disbursed, he can rewrite another performance-critical component. This is good for him (he actually gets recognized, instead of being that annoying guy who bludgeoned a team into writing C++ and taking 4x as long to deliver an MVP) and for the business (only performance-critical components, with price tags large enough for him to care, get rewritten).
Oh, and if you think that part is constructive and boring, and you came here for a language holy-war, I do think both Java and Scala suck as programming languages, because they both allow me to write stupid programs with performance problems.
I assume this is sarcasm, but I actually worry about the fact that Those of Us Who Care About Languages are too divided over minutiae like Haskell's syntactic whitespace and Clojure's parentheses, and that may be why The Business comes in, mushroom stamps us and says, "I'm sick of your shit, programmers. Everything has to be in Java."
(Actually, if you've watched Orange is the New Black, you know that The Business has taken to calling software engineers "inmate", but that's another discussion.)
I like Haskell and I like Clojure for very different reasons, and they are very different languages, but the day-to-day real-world differences between them are small compared to the very real risk of The Business overhearing our flamewar and saying, "fuck you guys, Java all the way, now lick my SCRUM or it's minus-5 story points for you."
One of the things that worries me about Clojure is that, while I'd argue that it's the best (for a definition of "best" that includes short-term business viability; this enables me to exclude obscure niche languages that may be better on paper, but that are just too numerous for me to know anything about) dynamically-typed language-- a great language in its own right, but sitting on top of the JVM and having access to those libraries-- there's been a mind-share split between Clojure and Scala. And while Scala/Java and Clojure/Java interop aren't bad, Clojure/Scala interop is a mess. On top of this, while Odersky's brilliant, I think Scala has taken in a little too much of the Java culture for its own good. Scala's a fine language to write in, but large Scala codebases are generally things that I'd rather not risk my sanity and career by being anywhere near. If Scala wanted to be "Haskell with Java libraries" it would have been a different and harder fight... but then again, it might not have taken off at all without the "slightly better Java" crowd, so maybe the way things happened was the only possibility.
The mind-share split between Clojure and Scala scares me, because it generates a very real risk that the intellectual energy that I'd like to see benefitting both languages, or at least consistently benefitting one of them, might fall back down the tree onto Java. The real risk, to me, is "Clojure vs. Scala: Divided We Fall". But I don't know exactly what to do about it.
I think Scala is going to win the corporate mind-share. I just accepted an offer at a large corporation that is writing all of its new back end services in Scala.
The lead architect is familiar with Scala and Clojure, and is trying to introduce the latter into the codebase (he successfully introduced Scala). There is a use case for an "Immutable Database" for one of the services, and it would make sense to use Datomic/Clojure for it.
The hesitation to use Clojure seems to come from some developers, and not the management team. I think it stems from the prospects of having to maintain a large codebase that is without first class static analysis.
Note: I think Scala and Clojure both rock.
First things first, Java 8's lambda implementation is shoddy at best compared to the first-class function support in Scala/Clojure. The entire idea of having to explicitly convert a collection to a stream to access map/filter functions is suboptimal. You don't just add lambdas to a language and automatically expect them to lift its collections libraries to the level of Scala's and Clojure's. The collections' lambda operations are the real benefit of using a language with first-class function support.
The other inconvenient truth is that you will still be writing Java code, in all of its boilerplate glory. Still no Scala-like type inference, no Clojure-esque homoiconicity, no ability to reduce everything to a value like in Clojure/Scala, just plain old "it'll work" Java.
Though I had stuff like parboiled2, Slick, scala.meta, dotty, shapeless in mind (which don't support Java at all), these are good examples, too.
Call it political savvy if you want, but anybody that's actually been trained as an engineer (I mean formal engineering, not just computer science) knows that "good enough at low cost" beats "ideal at any cost" ninety-nine times out of a hundred.
Engineers shouldn't need political motivation for considering all aspects of a problem, instead of just the technical ones. It's our job.
While Clojure is my favorite application programming language, and I certainly do care about programming languages, I end up picking Java again and again and not because of stupid Scala/Haskell/Clojure bickering. The fact that Go is gaining momentum among the SV early adopter crowd shows that Java-style languages have a great appeal, probably due to their simplicity and familiarity (I've been writing Java for many years, and dabbling in Go in the past year or so, and I need a magnifying glass to tell the difference between the two languages). Attributing (or faulting, as you do) the choice of Java/C#/Go to "The Business" is both unjustifiably condescending, and quite ignorant of software engineering in the industry.
Why do you pick Java again and again?
And I think a lot of the SV early adopter crowd happily chases after the latest "fad" particularly if it is pushed by a company they like and respect. That same crowd is very anti-Java despite the similarities.
There is interesting work being done in Go at the moment. There is also a lot more interesting work being done in Java but to the early adopter crowd Java is old and boring.
Why? Because it's old and boring. Most of the terribly bad yet seductive ideas - JSPs, JSF, J2EE, RMI, Jini, ORMs, XML config files - got tried a decade ago. As a result, the new stuff that's coming out - Guice/Dagger, Guava, Java 8, Hadoop, Apache Spark, Quasar, Android - is the result of people trying to solve real problems, and tends to work a lot better. And it's still possible to find a library for basically everything. There's just a lot less bullshit in Java these days now that it's no longer the hot new thing. In hype cycle terms, it's reached the plateau of productivity.
One of the GitHub founders once said "Your tech should be boring. Make your product interesting." In my early-adopter experience, I've found that one major problem is that everybody focuses on how cool the technology is and all the neat abstractions they can do with the language, and that means that they aren't focused on how they're going to make users' lives better. (This is what killed Common Lisp, IMHO: it's just so much fun to extend the language and play with the technology that everybody in the community spent their time extending the language, which gave us awesome devtools and pretty shitty products.) The advantage of using boring tech is that you attract developers who are smart in the "What can we do with technology?" way rather than the "What can we learn about technology?" way.
Also, this isn't about 'technology': both languages run on the JVM and are capable of utilizing it equally. Rather, this is about information density of source code; i.e., the signal:noise ratio. And, if you hire experienced programmers then no learning will be necessary and they can do what they want to do 'with technology' much more quickly with more powerful tools in their hands.
You must not have used a modern IDE. IDEs such as IntelliJ can fold a lot of boilerplate. Besides that, modern Java IDEs offer so much functionality for refactoring, code generation, etc., that it's often hard to be more productive in other languages (I have used C++, Python and Haskell for years, but I am more productive in Java after using it for work for the last 1.5 years).
Scala simply doesn't have the same level of support in IDEs as Java does.
I've found it's far better to hunt down bugs using a combination of debuggers, stack traces, divide-and-conquer, log statements, assertions, and unit tests. I've gotten bugfix rates as high as 5 bugs/day in a fairly complex library (an HTML5 parser in C) using these techniques.
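The divide-and-conquer part can even be mechanized. Here's a hedged sketch of an input minimizer in that spirit (a simplified cousin of delta debugging; the failure predicate is a made-up stand-in for "this input still triggers the bug", e.g. crashes the parser):

```python
# Binary-search a failing input down by repeatedly dropping chunks,
# halving the chunk size whenever no chunk can be removed.
def shrink(data, fails):
    """Greedily remove chunks of `data` while the failure still reproduces."""
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            trial = data[:i] + data[i + chunk:]  # try removing one chunk
            if fails(trial):
                data = trial                     # still failing: keep the smaller input
            else:
                i += chunk                       # chunk was needed: move on
        chunk //= 2
    return data

# Hypothetical bug: the code under test chokes whenever "<script" appears.
minimal = shrink("<html><body><script>x</script></body></html>",
                 lambda s: "<script" in s)
assert minimal == "<script"
```

The same trick works on commit ranges (bisection) or on test suites, not just on input bytes.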
And, some languages make writing most of that sort of bug nigh impossible to begin with.
I would certainly consider Kotlin, though. Kotlin reduces the boilerplate but doesn't add much complexity, and doesn't hurt interop in the least. So you pay cheap for the (modest) benefits. This seems like a much better deal for me.
Getting a bit desperate that absolutely no one cares about Kotlin?
B) You left out at least 3 major languages that are hugely represented in large enterprises: Visual Basic (especially in the Access/Excel variety), SQL (in its standard and non-standard formats) and COBOL. Those languages also offered huge advantages that any language trying to gain widespread appeal might want to study. That said, I certainly am not taking any job that is majority VB, SQL, or COBOL coding and none of the best developers I know (selection bias could be an issue) will either.
I don't think it is. Those CTOs/lead developers told me that the "novel language" teams are just as productive as other teams, and, in fact, tend to be less disciplined (and I'm talking top-tier, millions-of-customers, technologically advanced, SV companies here). The only reason they allow those new languages is to attract young developers who get their kicks out of a new PL. If anything, this shows that some developers are immature, or that tech companies aren't doing stuff that's challenging and interesting enough on its own, so developers need to occupy themselves with the novelty of the language rather than with the novelty of the problem.
The deciding factor against it, for me, is that Java/anything interop on the JVM is quite easy, but Scala/anything interop is hard. That makes it relatively easy to create a mixed-language Java/Jython system or Java/Clojure system, or even Java/Jython/Clojure, but very hard to do a Scala/Jython system. And even with its advanced features, I doubt that Scala beats Python for quick & dirty prototyping. I've got a bunch of past experience in multi-language scripting + compiled core systems, and I know the benefits of doing a system at scale like that. The value proposition of Scala (and to some extent Go and Haskell) is that you get one language that is both fast and concise, but in my experience you want to separate that out into scripting and core languages, because the styles of programming themselves are very different.
Also, I wouldn't worry too much about finding Scala programmers-- in my opinion, what you want to find rather is good programmers. And, good programmers can pick up Scala (or any other language) in short order.
I miss none of the "standard" technologies.
Well, first because of the JVM (excellent performance, great dynamic linking, great runtime monitoring/profiling/management), and among the JVM languages Java is simple, fast, well supported with a huge user-base, and very suitable for large-team development (other people's code is, for the most part, easy to understand and maintain). I consider Java's downsides (verbosity -- pretty much on par with Go) minor annoyances at most. I find advanced features offered by other languages (like more elaborate type systems) not worth their cost in complexity and maintainability (and in that respect Clojure is different: it doesn't add complexity, and it tackles really big problems rather than minor annoyances).
I'm not sure what else you expect; Clojure is not statically typed, and so, it's totally uninteresting to someone looking for a statically typed functional language.
I'm vaguely aware of work such as http://typedclojure.org/, but I already have a statically typed by-default non-lisp language.
I disagree and so does Andrei Alexandrescu:
"The going word at Facebook is that 'reasonably written C++ code just runs fast,' which underscores the enormous effort spent at optimizing PHP and Java code. Paradoxically, C++ code is more difficult to write than in other languages, but efficient code is a lot easier [to write in C++ than in other languages]." – Herb Sutter at //build/, quoting Andrei Alexandrescu
Here's the thing about performance. Sometimes you care and sometimes you don't care. When you don't care, you don't care. You can write it in Python and even though it runs 10000 times slower, you still don't care. When you do care, a factor of 10000 for Python vs. C++, or a factor of 3-5 (or more) for JVM languages vs. C++, can mean running 5 million servers instead of 1 million servers, or 10 frames per second vs. 60 frames per second, and the success or failure of your business. This is why pretty much all the big players who care about performance use C++ (from games to web at scale). The maintainability of large C++ projects is pretty much field proven, and C++ is also evolving; while not quite fixing some of the causes for grief (because it maintains backwards compatibility), it offers new ways of doing things that are safer, more maintainable, and just as fast.
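The arithmetic behind that claim is trivial but worth spelling out (the server counts and slowdown factors are the illustrative numbers from above, not measurements):

```python
# A constant-factor slowdown multiplies the fleet (and the bill) directly.
baseline_cpp_servers = 1_000_000
jvm_slowdown = 5                 # upper end of the quoted 3-5x range
python_slowdown = 10_000         # the quoted Python worst case

assert baseline_cpp_servers * jvm_slowdown == 5_000_000
# The frame-rate version of the same math: a 6x slowdown turns 60 fps into 10.
assert 60 / 6 == 10
```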
The only thing you said that I can slightly identify with is that you need good people in order to build things in C++ (and no, they don't cost $300k/year. I wish.) This is not a con, this is a pro. You want that regardless of language and good people will be expensive. Yes, you can get cheap people to write bad code in any language.
EDIT: Another data point. I worked on a huge Python project where performance did turn out to be an issue and it was virtually impossible to find a "critical" part to apply C++ to. It was just slow throughout, it was built over a huge base of meta-programming and Python specific magic. There was simply no single piece you could point to that if written in C++ would make it go significantly faster. My point there is that while it's certainly possible in a well designed system to mix languages while applying fast languages to performance critical portions it's not always possible after the fact. Language choice is an engineering decision and there's no single answer but you have to be very careful with the attitude of just throwing something together in the mistaken hope that it can be fixed later.
EDIT2: I could say more to defend C++'s "honour" but it doesn't need me as a champion... The choice of programming languages is important, though, and we need a way to eliminate some of the FUD. Part of that is through sharing real-world successes and failures. Naturally there is some cognitive dissonance happening: I chose language X, therefore I'm smart, therefore language X is the best, therefore other people who chose language Y don't have a clue. Where this turns from religion to data is when we can share data about a project N years later that is somehow comparable to other projects, so people can try to gather some insight from that data. The nice thing about performance is that it has an objective component to it: if we look at a certain problem we can get some numbers that we can compare. It's quantifiable. Factors such as development time, maintainability etc. are less quantifiable. Developer salaries, while quantifiable, are also hard to compare but are definitely a factor in making language decisions.
The point I was making though was that if you don't care then you don't care. I wrote some Python for a friend who wanted to scrape and process financial information from various web sites. I'd be an idiot to do that in C++. None of us cared how long it took to run (as long as it wasn't weeks). Having access to various Python libraries made this task a breeze and maintainability wasn't much of a concern either.
Andrei wouldn't have made that statement about C++ vs. Java if the difference was in the noise. I think x3-5 is a reasonable rough number to put on it but while I can point to benchmarks I can't point to a comprehensive study that shows "reasonably" written across different domains. You're welcome to take those numbers with a grain of salt and do your own investigation. Another data point there is that C++ is the dominant language in Google Code Jam and TopCoder SRMs where writing fast and correct code quickly is a competitive advantage.
EDIT: I find a lot of people will underestimate the performance advantage of native languages vs. JIT or interpreted environments. The x10000 is something I've seen in a real world system. Another thing to consider is that in a native language you can drop to assembler to optimize performance critical sections. You have 100% absolute control over your hardware. Anyone know how this: https://code.google.com/p/h264j/ compares to the original implementation?
I disagree with this. You always care, to some degree. If I write a one-time, 100-LOC script to process 1GB of data, I don't care if it takes 1 second or 1 hour. But I would care if it took 1 week or 1 month. That's why Java performance is good enough for 99% of the programs I write, Python is also good for many, but if I completely didn't care about performance and wrote sloppy code in Python (or Java) I'd get into the 10000x performance penalty region, and this would be unacceptable in almost all cases.
"I think x3-5 is a reasonable rough number to put on it but while I can point to benchmarks I can't point to a comprehensive study that shows "reasonably" written across different domains" YMMV. The micro-benchmarks in the Great Language Shootout disagree with this. Most of them are within the 2x range, and the one outlier is actually a benchmark of a particular regular expression engine. Ok, we should not believe microbenchmarks, so what about real, optimized applications? Compare the performance of Netty vs nginx. Or Tomcat vs Apache. This is a tie. Or Hypertable vs HBase (yeah, despite huge expectations and marketing, even in Hypertable's own benchmarks, HBase comes out only... 50% - 2x slower). Or Jake2 vs Quake2, with Jake2 again not even 50% worse (actually better in some cases).
"C++ is the dominant language in Google Code Jam"
This only supports what I already wrote - when writing a very small piece of code you have full control over, like for a competition, it is much easier to achieve high-performance code in asm/C/C++ than in Java/C#/Scala etc. In a competition like that, even a 20% overhead is not acceptable, and I'd also use C or C++. But you can't extrapolate that to large-scale programming, where the benefits of using a high-level language matter much more.
"Another thing to consider is that in a native language you can drop to assembler to optimize performance critical sections" I can do that in Java or C# as well.
Another benchmark: https://days2011.scala-lang.org/sites/days2011/files/ws3-1-H...
Keep in mind though that the JVM is a moving target. It keeps getting better (on one hand) and on some platforms it's worse (e.g. Android, though the upcoming new version looks promising). It's possible the gap is smaller now than what I remember seeing in the past.
At any rate, if x2 is a number that feels right for the stuff you're doing I can't argue with that. You need to choose the language that works for you. Maybe you choose Java because of the libraries. Maybe you have more experience writing in Java and you're a lot more productive. Maybe there is better tooling. Maybe it's just more in line with how you think.
A web server does a lot of file I/O and a lot of network I/O. The performance of a web server is more about how efficiently you can juggle those given highly concurrent loads and what mechanisms are used at the native layer to interface to those systems. It's not so much about the "raw" power of the language. In your game engine example a lot of the heavy lifting is done in OpenGL which is native code. I'm also not intimately familiar with the details there. At some point it's also about how much effort went into optimizing things and whether or not something else was traded off.
To contrast that, Google Code Jam tasks are typically algorithmic and they stress the "raw" power aspect of the language. That said it's certainly not representative of real-world product development.
(EDIT: Yeah, you're right, the name JVM should only be used to refer to the specific type of VM that runs specific bytecode and not to other VMs. As Android shows Java does not have to run on the JVM. Thanks for the correction.)
"In 2012, academic benchmarks confirmed the factor of 3 between HotSpot and Dalvik on the same Android board, also noting that Dalvik code was not smaller than Hotspot" (from Wikipedia)
Sometimes, the situation is exactly opposite, though:
Now, I don't disagree that C and C++ are probably generally the "fastest languages" at the current time. For most programs that's completely irrelevant -- what's more relevant is that the most dangerous security issues of the age almost exclusively come from these languages. This is why, e.g. Rust matters, but also why languages without undefined behavior in general are a big deal. Undefined behavior is an abomination which has caused untold damage (even more than the billion-dollar mistake of "null").
 Somewhat of an absurd term, but let's just say that these languages encourage a style which leads to efficient machine code.
 Lack of escaping (particularly SQL) is probably a close second.
EDIT: Btw, if I were really performance-bound, I think I'd actually use a higher-level language to create a domain-centric DSL to describe my solution and then use that to generate a lower-level program (in C/C++, assembler or whatever) which solved my problem. People have been doing that kind of thing for a few years now. See e.g. https://hackage.haskell.org/package/atom
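As a toy illustration of that workflow (a hypothetical mini-DSL, nothing to do with the atom package linked above): describe the computation in the high-level language, and emit specialized low-level code from it, things a hand-written generic function couldn't assume.

```python
# The generator fully unrolls a fixed-size dot product: the "DSL" here is
# just the (name, n) spec, and the output is ordinary C source.
def emit_c_dot_product(name, n):
    """Generate C source for an unrolled n-element dot product."""
    terms = " + ".join(f"a[{i}] * b[{i}]" for i in range(n))
    return (f"double {name}(const double *a, const double *b) {{\n"
            f"    return {terms};\n"
            f"}}\n")

code = emit_c_dot_product("dot4", 4)
assert code.startswith("double dot4(")
assert code.count("*") == 2 + 4   # two pointer declarations plus four multiplies
```

The generated file then goes through the ordinary C compiler, so you get the high-level description and the low-level performance without maintaining the unrolled code by hand.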
I also agree to some extent that languages themselves aren't fast or slow (though to some extent they are). In theory equivalent code in two languages should result in machine code that is just as efficient and it is sometimes a fluke or some implementation choices that make it not so.
I can talk a little bit to the security question. One thing we need to consider is that we're dealing with systems, not strictly with bits written in some language. Various injection attacks are almost universally carried in platforms that are not C/C++ and their root cause is at the interface between systems (e.g. your web back end's interface to your SQL database). Even in systems that run in various isolated sandboxes there is always the possibility of penetrating that sandbox or to have other holes in the interfaces to the rest of the system. Honestly, the undefined behaviour bits in C/C++ isn't something that I've seen matter in practice. When all system code in the world is written in C you're bound to be able to point to some mistake causing security issues but we have no way of comparing that to anything (at least none that I can think of). There's always some way someone can screw things up. That said there's no harm in trying to do better- if we can get safer languages, where it's harder to make mistakes, and have all the benefits of the less safe language it's a win-win. No downside.
For the record, I also like C++. Especially in its modern incarnation C++11. It's a vast improvement over plain C (if you care about generic programming, as I do). I just think we need an even better language to take its place -- hopefully Rust can deliver, in time.
Re: Injection attacks: Most of the "holes" in e.g. VM emulation code also come from C/C++ legacy -- at least that's been my (admittedly anecdotal) experience.
> Honestly, the undefined behaviour bits in C/C++ isn't something that I've seen matter in practice.
This isn't exactly recent, but just for example: http://lwn.net/Articles/342330/
This happens all of the time, especially as optimizers get even better at exploiting the fact that undefined behavior "cannot happen" (per the semantics of the language.)
Ultimately: Yeah, definitely agreed on striving for better and safer languages as long as they deliver what we need even if those are different languages and different needs -- as long as we each get better and safer and faster* code in the end! :)
(median x2 but this is stuff people spent time on optimizing which is not really what we're talking about here)
Java is not just 20% slower, let alone just as fast. But if it is, please submit your Java solutions to the benchmarks game for us to see.
Python is a few hundred times slower as long as you restrict yourself to native types (e.g. strings, integers, dictionaries, lists). Once you start defining your own classes and doing more sophisticated things you get a lot slower. At any rate, my point was not to pick some specific number; my point was that sometimes you just don't care at all about how fast it runs. The x10000 was from a real-world system, but this can vary - a lot.
As for the other things you mentioned, those are actually pretty routine. At least in some places. If you're working on a game, then you're going to be reusing objects and calling native functions as a normal thing :)
Optimizing both Java and C++ programs.
It is a combination of better algorithms and closer-to-the-metal language that gives the 5000x speedup described in the thread.
You can get arbitrary large speedups by just choosing a worse initial algorithm and then improve it.
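A concrete version of that caveat, with call counting (a sketch, not anyone's actual benchmark): the "speedup" below comes entirely from fixing the algorithm and says nothing about the language.

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    """Deliberately bad algorithm: exponentially many recursive calls."""
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same function, linearly many distinct calls thanks to memoization."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(25) == fib_memo(25) == 75025
# Hundreds of thousands of calls vs. a few dozen: an "algorithmic speedup"
# that could easily be misreported as a language win.
assert calls > 100_000
```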
Of course, on numeric problems like this, plain CPython inevitably sucks performance-wise (no JIT, a lot of lookups, no value types).
This is a much better treatment of it:
The issue is, people are never going to agree. Your post implies that Clojure is more worthwhile than Scala. Personally, Scala's type system makes it far more valuable to me than Clojure, so I lean the other way. Trying to get a consensus, even just in the world of JVM-compatible languages, is intractable.
People individually need to choose their languages, and choose them carefully. If you don't have control over the language you work in, consider changing jobs.
Not in my experience. All other things remaining constant, the D Language forums (written in D) are significantly faster for me compared to almost any other forums I visit (including HN, any PhpBB stuff, Stackoverflow, etc.).
I also think that the statement, often made online, that web application performance is limited by network performance and not CPU performance, is absolute BS. My previous statement is an example of that.
Here are just a few of the things that can affect performance between different forum software written in the same language: database schema, database software, database server resources, number of forum users, number of forum posts, average forum posts a day, number of forum viewers that are not users, webserver(s) resources, whether the webserver(s)/database server share resources with each other or with other services, extra features supported by one forum and not the other, the network connection between the webservers and the database server, and the network connection between you and the webserver(s).
Which language doesn't?