
Which Programming Languages Use the Least Electricity? (2018) - Sindisil
https://thenewstack.io/which-programming-languages-use-the-least-electricity
======
kbenson
Interesting results, but I think the farther you go down the list the more the
fact that you're using the Computer Language Benchmark Game affects what
you're seeing, as not all languages get the same attention.

For example, Javascript and Typescript. Theoretically, I would assume those to
be very close, since one compiles to the other. And for memory usage, they are
very close. But for running time, Typescript is an _order of magnitude
slower_. That looks suspiciously like entirely different algorithms were used
in each implementation (with one being obviously superior in terms of
performance), and if that's happening in this specific case, where else is it
also causing problems in the analysis?

~~~
jlarocco
> Theoretically, I would assume those to be very close, since one compiles to
> the other.

I don't think that's a safe assumption. The Javascript results would be a
lower bound (assuming the same algorithm, etc.), but there's no telling what
"extra" Javascript (and thus overhead) might be inserted by the Typescript
compiler.

As far as individual results go, there's no point jumping to conclusions when
the results and code are available online and easy to try for yourself:

[https://benchmarksgame-team.pages.debian.net/benchmarksgame/...](https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/typescript.html)

Typescript is still 2-3x slower than Node in several of the benchmarks, and
the results could have been different in May 2018 when the article was
written.

~~~
markmark
Typescript is a superset of Javascript, you can literally submit the faster JS
programmes as the TS ones.

~~~
TallGuyShort
This may be a dumb question as I know nothing about Typescript, but is it
idiomatic to do that, though? You can, for the most part, use C in C++. But if
C was beating the pants off of C++ in benchmarks it wouldn't tell me anything
useful if someone submitted a C program as the C++ benchmark. If I wanted to
use idiomatic C, I'd use C. I expect a C++ vs. C comparison to compare the
encouraged features of C++ with what's available in C.

~~~
rienbdj
The benchmark is flawed. It should compare the fastest possible C++
implementation and a more idiomatic C++ implementation. The entire purpose of
the language is to allow both to coexist in the same code-base.

I would be very surprised if the fastest C++ and C are actually different.

~~~
igouy
If the benchmark claimed to be perfect then "flawed" would be an important
criticism.

You _can_ see 4 or 5 C++ fannkuch-redux programs "compared"

[https://benchmarksgame-team.pages.debian.net/benchmarksgame/...](https://benchmarksgame-team.pages.debian.net/benchmarksgame/performance/fannkuchredux.html)

------
arduanika
Can't run Rust without powering the HN servers, to host the screeds of its
acolytes.

I jest, (and I'm a Rust admirer myself), but my more serious point is: so many
different kinds of electricity go into a plush tech company with its well paid
developers and our copious brain food that powers us through all our
developing and debugging. If you want to talk about sustainability, ask about
the lifecycle maintenance of a software base. These benchmarks are cute, but
academic, and only tenuously related to any green solutions. Especially if
people in this thread are taking this seriously in terms of "This is exactly
how we should all be thinking about server engineering moving forward, as we
aim to drastically reduce carbon footprint within 11 years", then this feels
like an awful, awful way to measure it.

What language is most conducive to writing algorithms that are smart in terms
of Big O, or designing systems that can be refactored intelligently instead of
throwing boxes at the problem?

~~~
davedx
Scala is a nice sweet spot. Good type system, solid for refactoring, good for
expressing algorithms. You burn tons of CPU compiling but that’s one dev
footprint.

Anyone from Twitter care to comment on this?

------
abalone
Couple points (I've worked on app servers):

1- This is exactly how we should all be thinking about server engineering
moving forward, as we aim to drastically reduce carbon footprint within 11
years. Efficiencies at the language level are one of the biggest bangs for the buck
here. Just by redeploying, you can reduce energy consumption by perhaps double
digits. Imagine how hard that is to do at the hardware or energy farm level.

Think about it: You, brave software engineer, can literally make a significant
contribution to _saving the world_ by adopting a more energy-efficient
language, if you are fortunate enough to deploy something at scale.

2- Rust is killing it in these metrics but developer productivity /
friendliness is important to overall success. Looking at these results I have
a conjecture that is maybe provocative: The top two candidates for long-term
success at supplanting Java on the server in my eyes are

a) Go

b) brace yourself.. Swift

Swift is extremely young on the server, to the point where I'd expect your
natural reaction to be "WTFLOL?! never heard of it". But here's some food for
thought: the Netty team, one of the top performing Java server stacks, has
been recruited by Apple and is chewing through all that stuff and _just_
launched their NIO2 release[1] which I've heard is already very close to
Netty.

Go has an amazing concurrent garbage collector and has really pushed the
envelope with that.[2] Swift is unique in the server world in that it uses
reference counting which sidesteps the whole GC collection problem, which
could translate into very low, very consistent latencies as well as memory
usage. It's still quite early days for Swift, but these are the two languages
I'm watching the closest.

[1] [https://forums.swift.org/c/server](https://forums.swift.org/c/server)

[2] [https://blog.golang.org/ismmkeynote](https://blog.golang.org/ismmkeynote)

~~~
the8472
Go's GC is not really pushing the envelope. They have merely improved from a
STW, non-compacting, non-generational collector with a 25% CPU overhead to a
concurrent, non-compacting collector with reasonable overhead.

JVMs and CLR had those a decade ago. The state of the art are concurrent,
compacting, region-based pauseless or millisecond-pause collectors.

~~~
hu3
Comparing Go's GC to one of the many specialized fine-tuned Java GCs is
pointless, I find.

I'm sure you're also aware that recent Go's GC pauses are sub millisecond for
most use cases:

"We now have an objective of 500 microseconds stop the world pause per GC
cycle." \- 2018 Go team

[https://blog.golang.org/ismmkeynote](https://blog.golang.org/ismmkeynote)

My personal experience with microservices is to expect STW pauses in the 350
microsecond range. The best part is that it requires zero tuning or
developer's attention while still being light on memory usage. Can't say the
same for Java's default GC.

~~~
the8472
I was talking about technology state of the art, not out-of-the-box
experience.

OpenJDK's default collector - parallel or G1GC, depending on version - is not
the best available among JVMs and if your goal is pause times then yes, it
will be worse than Go's. But if you switch to say C4 or ZGC you'll get
comparable pause times and compacting on top and being able to scale to
terabyte heaps.

10 years ago we had Metronome and CMS, which are more comparable to Go's
collector.

And pause-times are not everything. Throughput and fragmentation resistance
matter too. Compacting collectors fare much better on the latter metric. I
don't know how the former is now, but those slides talked about 25% GC
overhead in older versions of Go, that's utterly terrible.

Go is facing one challenge that Java doesn't: internal pointers. But the CLR's
collectors have to deal with those too, so that's not terra incognita either.

~~~
hu3
Of course gains are to be expected when switching from Java's default GC to
something specialized.

> 25% GC overhead in older versions of Go, that's utterly terrible.

25% overhead of what? And compared to what? Just throwing numbers in the air
and saying it's terrible makes no sense.

The only 25%s I could find in the slides were these:
[https://blog.golang.org/ismmkeynote/image6.png](https://blog.golang.org/ismmkeynote/image6.png)

2014 Go: 25% of CPU used by GC

2018 Go: 25% of CPU used _during_ 2x STW GC of < 500 microseconds

So even if a STW GC occurred as frequently as every second (which it doesn't in
my use cases), this would amount to 0.025% of the CPU being used for GC, not 25%.
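
The arithmetic above can be spelled out; this is just a sketch of the
comment's own assumed worst case (one GC cycle per second, two 500-microsecond
STW pauses, 25% of the CPU in use during the pauses):

```python
# Fraction of total CPU spent on STW GC, under the assumptions stated above:
# one GC cycle per second, two stop-the-world pauses of <= 500 us each,
# and 25% of the CPU consumed during those pauses.
pause_seconds = 2 * 500e-6       # two 500 us pauses per GC cycle
cycles_per_second = 1            # assumed worst-case cycle frequency
cpu_during_pause = 0.25          # 25% of CPU used during the pauses

fraction = cycles_per_second * pause_seconds * cpu_during_pause
print(f"{fraction:.3%}")  # 0.025%
```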

~~~
the8472
The 2014 numbers read to me as 25% of all CPU cycles being spent on GC. If you
re-read my previous post, I was talking about that old version. The point was
that Go started its improvements from a fairly bad place, where gains are still
relatively easy to obtain.

> Of course gains are to be expected when switching from Java's default GC to
> something specialized.

So? That's irrelevant for what's state of the art.

~~~
hu3
I re-read our conversation thread to understand our communication mismatch,
and indeed you were talking about Go's GC not being novel, which it isn't. I
apologize for the unproductive conversation; it's all on me.

------
dahart
> A faster language is not always the most energy efficient

My rule of thumb now is that memory access, not compute, is the primary
consumer of energy. That would tend to confirm the above statement while also
supporting the data that scripting languages aren’t super energy efficient,
since they tend to do a lot more dynamic allocation than compiled languages,
broadly speaking, given common programming practices in each language.

This is mainly colored by GPU usage and a paper/presentation some friends
made:
[https://www.researchgate.net/publication/324217073_A_Detaile...](https://www.researchgate.net/publication/324217073_A_Detailed_Study_of_Ray_Tracing_Performance_Render_Time_and_Energy_Cost)

In the GPU case, memory access costs sometimes 10x more than compute, meaning
that minimizing average (not peak) memory traffic is more or less the only
path to significantly reduced energy. (See figs 7 & 10 in the linked paper)
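
A back-of-the-envelope sketch of why that follows (the unit energies below are
illustrative assumptions; only the ~10x memory-to-compute ratio is taken from
the paper):

```python
E_COMPUTE = 1.0   # assumed energy per arithmetic op (arbitrary units)
E_MEMORY = 10.0   # assumed energy per memory access (~10x compute, per the paper)

def kernel_energy(n_compute_ops, n_memory_accesses):
    """Total energy of a kernel as a weighted sum of its operations."""
    return n_compute_ops * E_COMPUTE + n_memory_accesses * E_MEMORY

# Even a kernel that is 90% compute ops by count spends more than half
# its energy on the 10% of operations that touch memory:
total = kernel_energy(90, 10)            # 190.0
memory_share = (10 * E_MEMORY) / total
print(f"{memory_share:.0%}")  # 53%
```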

~~~
mehrdadn
"Costs more" in terms of energy per... what quantity exactly? Instruction
executed? Clock cycle? Datum processed?

~~~
hermitdev
I think practically, for useful analysis in business it would be wattage per
unit of useful work (or datum processed as you suggest). For instance, I work
in finance, so what is the wattage per option price calculation?

~~~
mehrdadn
The reason I asked was that it wasn't clear to me if the parent comment was
measuring the same way or not re: memory accesses. It seemed pretty expected
that a memory access would be more expensive per byte (it takes like hundreds
of clock cycles...) but it wasn't obvious to me if it would be so per clock,
so I wanted to clarify which was meant.

~~~
hermitdev
I absolutely agree that it wasn't clear. I don't think wattage per clock cycle
is a meaningful measure here - after all, all languages will have the same
wattage per clock cycle, if they're executing the same instructions.

What's important is wattage per unit of useful end product, in my example
pricing an option. For others, it might be wattage per web page served or
anything else.
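
"Wattage per unit of useful work" reduces to joules per operation: average
power (watts) times elapsed time (seconds), divided by operations completed. A
minimal sketch with purely hypothetical numbers:

```python
def joules_per_op(avg_power_watts, elapsed_seconds, ops_completed):
    """Energy per unit of useful work: J/op = (W x s) / ops."""
    return (avg_power_watts * elapsed_seconds) / ops_completed

# Hypothetical example: a pricing service drawing 200 W on average
# that computes 1,000,000 option prices in 50 seconds:
print(joules_per_op(200, 50, 1_000_000))  # 0.01 J per option priced
```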

------
stareatgoats
Not sure what to make of this research. It's not like I can switch out JS in
favor of these other languages (many of which are compiled binaries) in my web
apps. Comparing the performance of, for example, web servers written entirely
in the different languages would seem more interesting.

The JS/TS comparison seemed downright fishy. I found no mention in the paper of
how they managed to "run" the tasks in TypeScript, which AFAIK is not
possible, or at least not how one would do it in real life.

~~~
snek
Maybe you _can_ switch it out :)
[https://webassembly.org/](https://webassembly.org/)

~~~
dentemple
It _wouldn't_ be switching out. You'd just be running WASM.

~~~
goatlover
Instead of a binary. You're running whatever the language compiles to. So then
it depends on how efficient the WASM compiler is compared to the binary on
whatever platform, and compared to JS.

------
m463
Reversible computing should be mentioned, as it could be the holy grail of low
energy use.

[https://spectrum.ieee.org/computing/hardware/the-future-of-c...](https://spectrum.ieee.org/computing/hardware/the-future-of-computing-depends-on-making-it-reversible)

~~~
jmartinpetersen
Almost 20 years ago I took courses under a professor who was very into
reversible computation. As I recall it was mainly on the language side,
though.

Glad that it's still moving, sad that it's at an even slower pace than the
"functional/immutable will save the parallelization worries" stuff I
encountered in the same period.

------
speedplane
Historically, virtually every move up the abstraction level sacrificed
performance to gain programming efficiency. Assembly to C, C to C++, C++ to
Java, Java to Python/Javascript. And now there are wars over Javascript vs.
Typescript, and jQuery vs. React/Angular. Every new step up the stack claims
that the higher level of abstraction can either be implemented with only a
minimal penalty, or could actually improve performance due to easier automated
optimization. However, every time, it seems that in practical applications,
performance is sacrificed to make software development faster.

I don't know enough about the performance of Javascript vs. Typescript to form
a strong opinion about it, but if history is any guide, it's likely that it
improves software development efficiency at the expense of performance.

~~~
CapsAdmin
Typescript usually transpiles to JavaScript, just without the type
information, and this is not done in the browser. If you target old browsers
it can maybe add some overhead, like Babel does.

I don't know about javascript's usual JIT compilers, but LuaJIT's JIT compiler
(trace compiler) does a pretty good job of figuring out abstractions.

------
mlthoughts2018
I always feel like there is a big misunderstanding of what it means to “write
a program in some language” in these types of comparisons.

For example, saying that Python uses much more energy is a foolish thing to
say. The “same” algorithm written in pure Python is a very _semantically_
different thing. It involves allocation of flexible objects that obey certain
attribute lookup protocols, operator protocols, dynamic attribute mutation /
creation, iteration protocols, etc. It is presumed that if you wrote in pure
Python, you need this dynamism and ability to introspect at runtime, modify
data structure layout arbitrarily, utilize an automatic garbage collector,
etc. So you’d need a benchmark test that requires all that functionality
before it could possibly make sense to test Python. Otherwise you’re
penalizing Python for a bunch of expensive overhead, which is totally unfair
and foolish because the whole point is that such overhead exists for use cases
where either the costs are negligible or the flexibility that necessitates
that overhead in _any_ language happens to be a desired and important part,
such that to write the same functionality in other languages would first
require building all the protocols, garbage collection, etc. machinery that
essentially defines Python.

If instead you have some benchmark problem that can be solved in e.g. C or
Rust without needing any runtime dynamism or heavy machinery of certain
protocols or garbage collection, then to write it in Python you would just
write it in Cython or as a hand-made C extension module to specifically bypass
the overhead of the interpreter or Python protocols / dynamic lookups / etc.

Basically, it never makes sense to compare Python and C on a benchmark that
doesn’t require reimplementing most of Python in C first. Because to solve
that problem in Python, the ubiquitous, basic, Python 101 way to do it would
be to use Cython or numba or write your own extension module.
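
Cython and numba need a build or JIT step, but the same "push the hot loop out
of the interpreter" idiom can be sketched with a C-backed builtin (timings
vary by machine; only the correctness is asserted here):

```python
import timeit

N = 1_000_000

def pure_python_sum(n):
    # Every iteration pays for interpreter dispatch, the iterator
    # protocol, and boxed-integer arithmetic.
    total = 0
    for i in range(n):
        total += i
    return total

# The builtin sum() runs its loop in C, bypassing per-iteration overhead --
# the same idea that Cython, numba, and hand-written extension modules
# generalize to arbitrary hot loops.
assert pure_python_sum(N) == sum(range(N))

t_pure = timeit.timeit(lambda: pure_python_sum(N), number=3)
t_builtin = timeit.timeit(lambda: sum(range(N)), number=3)
print(f"pure Python: {t_pure:.3f}s, C-backed builtin: {t_builtin:.3f}s")
```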

Python is basically a special C language DSL for dynamic types, garbage
collection, and a set of conventions and protocols.

“Pure Python” is a linguistic trick for saying “a huge ton of tools that make
C-level polymorphic structs easy to use.”

~~~
mempko
Yes, except some people are using Python where they should be using C++, and
power consumption is a good reason not to use Python in these cases.

~~~
mlthoughts2018
No, it’s a good reason to use Cython or one of the many other techniques for
generating pre-compiled C extensions in the Python ecosystem. You may also use
C++ directly or something if you prefer, but there would not be any
performance based reason to do so.

------
Kaveren
Really silly to state how much electricity "languages" use without specifying
implementations. You can specify for unstandardized languages, but languages
don't have performance characteristics in the typical sense, implementations
do, and this can vary greatly between implementations.

~~~
igouy
_fyi_

[https://sites.google.com/view/energy-efficiency-languages/se...](https://sites.google.com/view/energy-efficiency-languages/setup)

------
raz32dust
The code size metric was pretty interesting to me. I would have imagined that
functional programming languages should lead to the most succinct code. But I
was surprised to see Haskell in the middle, and Go at the top there!
Functional languages are right in the middle for all 3 metrics.

~~~
userbinator
Unfortunately there are no APL/array languages there, they are notorious for
being succinct! It's not a stretch to say "lines of code" in most other
languages would be close to "characters of code" in APL. It'd be interesting
to compare it using the other metrics too.

------
forgot-my-pw
PHP's performance is quite impressive nowadays. It's one of the top fastest
interpreted languages.

~~~
jrwr
For how much shit PHP gets, it is a pretty good lang for building things

~~~
ashton314
I'm curious: I've used PHP (not by choice) for the past two years and I
strongly dislike it. (I could wax eloquent on this subject, but I don't want a
flame war. :) What do you see of value in PHP? Is it the language itself? Its
ecosystem or community?

------
vbuwivbiu
I'd be interested to know the added energy cost (in terms of programmers) of
writing the code in the first place and subsequently maintaining it

* edit- thinking about it it's probably negligable

~~~
marsokod
Cost is a good first order proxy for energy used, so this is not negligible if
you want to take into account all the energy used by the developer (including
home activities).

As for any other product, this is a trade-off between fixed and variable cost:
want to do some quick processing on an hourly basis? Just write a quick Python
script with a cron job. Want to compute something billions of times? Use C or
something more modern that compiles. Need even more savings? Build an ASIC
doing the job.

------
toolslive
It would be interesting to match these results to what you get using
EO (efficiency-oriented) languages (MOQA, for example). There's a whole branch
of computer science involved in static average-case analysis for algorithms
and data structures. The results are 'interesting'.

------
oblib
The big surprise for me is where Perl stands in the rankings, below Javascript
in all three tests.

~~~
ashton314
Amen: I'd have to take a look at the actual code in use.

~~~
igouy
You can, the authors of the paper provide the actual code they used on their
website.

------
stewbrew
“On average, compiled languages consumed 120J [joules] to execute the
solutions, while for a virtual machine and interpreted languages this value
was 576J and 2365J, respectively.”

IMHO it's not quite right to evaluate energy efficiency just by looking at the
runtime performance. Compiled (incl. JIT) programs have to be compiled. So for
compiled programs you have upfront costs (e.g. Rust and Haskell much more than
C) that you would have to include in the calculation. Then you have
programs/scripts that run only once (a day, maybe). I wouldn't be surprised if
taking this into account changed the calculation.

------
Mugwort
They should include Julia in the results.

------
wolfgke
No mention of assembly language, which should be even lower on electricity and
RAM usage than C and Pascal.

~~~
userbinator
Yes, I was looking for "carefully optimised Asm" on that list too. Even if
it's not faster it will pretty much always be smaller. (Compilers are
surprisingly easy to beat at size optimisation.)

~~~
makapuf
Well, -Os can be a bit better. However, I doubt program size is very
significant compared to data size in compiled languages, for example.

------
bhauer
Very interesting!

We have wanted to do a similar conversion from the results data in our
TechEmpower Framework Benchmarks [1] to energy efficiency. I've wanted to go
as far as converting watt-hours to some average carbon emission rate for
electricity generation in a given region.

I imagine the carbon emission per request would be amusing, if nothing else.

[1]
[https://www.techempower.com/benchmarks/](https://www.techempower.com/benchmarks/)

------
amelius
Aren't the savings more in the type of application? For example, driving users
to addictive content makes them use their phone more. Also, Bitcoin comes to
mind.

~~~
NowThenGoodBad
To me, the amount of electricity used is a similar issue to RAM. Just because
we have computers with more RAM does NOT mean programs NEED to use that RAM.
So many applications right now that don’t need as much memory as they are
using but, whether it be lazy or poorly planned code or any other excuse, they
guzzle it up like they are the only application running on your device.

I’m all for figuring out what’s the most efficient option while also ensuring
it’s an effective one as well. If you keep a knife in its sheath to keep it
nice, but never take it out to use it, then it’s not serving its purpose as
a tool (ignoring ornamental ones). However, no matter how cool it might seem,
you really don’t need a samurai sword to pare apples.

------
userbinator
_But at the same time, when manipulating strings with regular expression,
three of the five most energy-efficient languages turn out to be interpreted
languages (TypeScript, JavaScript, and PHP), “although they tend to be not
very energy efficient in other scenarios.”_

I believe that's because they're all using a regular expression library which
is probably written in C/C++, maybe even with some pieces in optimised Asm, so
none of the _actual_ RE-work is being done in the interpreted language itself.
If you wrote actual RE processing in the interpreted language and let the
interpreter interpret/JIT it, I bet they would be as (in)efficient as the
"other scenarios" where the "heavy lifting" is running through the
interpreter.
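
Python is a concrete example of this split: the `re` module compiles the
pattern with Python code, but the matching loop runs inside the `_sre` C
extension, so the interpreter never sits in the hot path:

```python
import re

# The pattern is compiled (mostly) in Python, but every call to
# findall/search/match below executes inside the _sre C engine.
pattern = re.compile(r"\b\w+@\w+\.\w+\b")
text = "contact: alice@example.com, bob@test.org"

print(pattern.findall(text))  # ['alice@example.com', 'bob@test.org']
```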

------
techspreader
Hope that the cloud providers don't bill according to energy consumption. I
like Python a lot...

~~~
eastern
They already do, in the sense that you'll need beefier/more machines to do the
same job.

Of course, using Python means you have a lot of low-hanging fruit to pick. We
have a Python service where we moved ONE ~60 line recursive function to Go and
overall CPU consumption dropped to 15-20% of what it used to be.

~~~
monkeyshelli
That sounds pretty damn nice - would you happen to have a good link in mind
for reading more on the subject?

~~~
makapuf
What's also interesting is the Go and Python interaction, since you seem to be
able to call Go from Python and the reverse very efficiently.

~~~
eastern
Sorry but we actually cheated on that part. Redid the page so that the
function that is now in Go is a separate little service that is called via
ajax directly by the users' browsers.

So there's no actual Python-Go interaction in our code.

This was quicker, cleaner and has made the page more responsive for users.

~~~
makapuf
Thanks, this comment is useful. Client-side integration is integration after
all, and I think if it had been simple, you would have done it differently (if
I understand your words correctly - "cheating").

------
alexott
I really wonder why FORTRAN is so bad - I suspect they didn’t try a good
compiler, like Intel’s

~~~
igouy
_fyi_ ifort 17.0.3

[https://sites.google.com/view/energy-efficiency-languages/se...](https://sites.google.com/view/energy-efficiency-languages/setup)

------
jajag
Can anyone extract what the impact of a JIT compiler is on these figures? The
initial performance hit is eventually eroded by the runtime optimizations, so
will a long-running Java program eventually close the energy performance gap
on a C program?

------
mtreis86
Which lisp? Those numbers don't seem right from my experience with SBCL.

~~~
mftrhu
The data for the paper the article talks about is online [1], and it says [2]
that the benchmarks are "implemented in 28 different languages (exactly as
taken from the Computer Language Benchmark Game)".

The LISP evaluated by the Benchmark Game - and, apparently, this paper - is
indeed SBCL [3], which seems about on par with Java.

“Lisp, on average, consumes 2.27x more energy (131.34J) than C, while taking
2.44x more time to execute (4926.99ms), and 1.92x more memory (126.64Mb)
needed when compared to Pascal.”

[1] [https://sites.google.com/view/energy-efficiency-languages/ho...](https://sites.google.com/view/energy-efficiency-languages/home)

[2] [https://github.com/greensoftwarelab/Energy-Languages](https://github.com/greensoftwarelab/Energy-Languages)

[3] [https://benchmarksgame-team.pages.debian.net/benchmarksgame/...](https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/lisp.html)

~~~
4thaccount
I think this is what is so cool about Common Lisp. You can literally get
pretty darn close to C in performance and still be at the highest levels of
development efficiency/prototyping speed. I don't think too many other
languages can say so. Take Python. It has really fast development time, but
very slow performance.

------
voldacar
C and Haskell were about what I expected, but those Pascal results have me
scratching my head, given that it performed about as well as C and used even
less RAM.

~~~
jandrese
Pascal and C are roughly equivalent in my eyes, at least as far as what they
spit out in the end. Both have pretty similar feature sets and development
methodologies.

What surprised me is how much of a hit the OO programs took. I had thought
they would compile down to something reasonably similar to the imperative
languages, but C++ ended up being 50% slower.

------
bibyte
So Rust is a little bit more efficient than C++. That's kind of surprising.
And I knew Lisp was fast but I am still surprised it scored so high.

------
mlinksva
> We then gathered the most efficient (i.e. fastest) version of the source
> code in each of the remaining 10 benchmark problems, for all the 27
> considered programming languages.

I wonder if anyone has attempted to rate how idiomatic or typical CLBG entries
are, and whether choosing the most idiomatic implementation for each language
would have obtained different results?

------
hn_throwaway_99
How does "TypeScript" come out to an energy score of 21.50 when "JavaScript"
has a score of 4.45 - that makes 0 sense.

~~~
snek
maybe they included compilation? it would make sense as that's a real world
use of cpu/memory/energy.

~~~
hn_throwaway_99
But it certainly looks like they didn't include compilation time for all of
the compiled-to-machine-code languages. C has the reference value of '1'.

~~~
jandrese
I think they included compile time if the code is JITed at startup.

------
v_lisivka
I improved Rust's version of binary-trees by about 0.05s (on my CPU):
[https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...](https://salsa.debian.org/benchmarksgame-team/benchmarksgame/issues/113), so Rust's score may rise.

------
danielscrubs
"Of the top five languages in both categories, four of them were compiled.
(The exception? Java.)" and "While Erlang is not an interpreted language." Had
to check Wikipedia to make sure I hadn't gone crazy.

------
osrec
I'm a little surprised at where Erlang features in the list. I've never used
it personally, but have heard so much about it being an amazing, performant
language. I wonder why it's so time/energy/memory inefficient for the
algorithms used in this exercise.

~~~
PinkMilkshake
BEAM has a 'busy wait' feature. Schedulers with no jobs to do will remain
active so they can respond to new jobs faster. You can turn this down so that
the schedulers go to sleep quicker, reducing CPU usage.

From [http://erlang.org/doc/man/erl.html](http://erlang.org/doc/man/erl.html):

    
    
      +sbwt none|very_short|short|medium|long|very_long
    
      Sets scheduler busy wait threshold. Defaults to medium. The threshold
      determines how long schedulers are to busy wait when running out of work
      before going to sleep.

~~~
osrec
Interesting. Have you ever used Erlang in production? How did you find it?

~~~
tannhaeuser
It's pretty obvious if you're running RabbitMQ which ranks highly in the
output of `top` even if no messages are being processed.

------
anonytrary
I did not expect Lua to be that far down the list. I thought Lua was a go-to
choice for scripting embedded software. Also, I'm not surprised that
JavaScript outperforms most of the other scripting languages (according to
this potentially erroneous benchmark).

~~~
fit2rule
I also was disappointed that my favourite language did so poorly in these
tests - I wonder, though, what the stats would have been like had LuaJIT been
factored in. Perhaps something for future evaluation ..

------
Too
Is JIT, interpreter and VM startup time included? Some benchmarks finish in
just a few seconds. With Python for example just launching the process takes
in the order of 100ms, which would be significant time in such short
benchmarks.

------
z3t4
For example, on a server the CPU will likely be set at full thrust even if it
has nothing to do. So I don't think this matters until you need to scale,
where you would need fewer servers and less power with a more optimized program.

------
aheilbut
Are they counting the electricity used to heat the pizzas that feed the
programmers?

~~~
jandrese
This was said half in jest I suspect, but it's not a completely off the wall
concern. For an infrequently used program the energy burned in its development
could easily be a nontrivial amount of the total energy used over its
lifetime. If you can write a 5 line Perl script that gets the job done (and is
only run a couple dozen times ever), then you're probably saving energy over
the 100 line C program despite the massive differences in runtime energy
consumption.
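
That trade-off can be framed as a break-even run count; all numbers in the
sketch below are hypothetical, purely for illustration:

```python
def break_even_runs(extra_dev_energy_joules, per_run_savings_joules):
    """Runs needed before the 'faster' program repays the extra energy
    spent developing it (hypothetical inputs only)."""
    return extra_dev_energy_joules / per_run_savings_joules

# Hypothetical: the C version costs an extra 10 MJ of development energy
# (machines, lights, pizza ovens...) but saves 100 J per run:
print(break_even_runs(10e6, 100))  # 100000.0 runs to break even
```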

~~~
aheilbut
Phew, somebody gets it.

------
pauljurczak
The language which uses the least energy is the language no one writes
programs in.

------
seaish
Question: if these were run on underclocked hardware, would they use less
energy? It seems like you'd need an increasingly large amount of energy to run
something in a smaller amount of time.
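
A rough sketch of why it can go either way: dynamic CPU power scales roughly
with C·V²·f, while runtime for a fixed amount of work scales with 1/f, so
energy per task tracks V² rather than frequency. If underclocking also lets
you lower the voltage, energy drops despite the longer runtime. The constants
below are illustrative, not measured:

```python
def task_energy(freq_ghz, volts, switched_capacitance=1.0, cycles=1e9):
    # Dynamic power ~ C * V^2 * f; runtime for a fixed cycle count ~ cycles / f.
    # Their product cancels frequency, leaving energy ~ C * V^2 * cycles.
    power = switched_capacitance * volts**2 * freq_ghz
    runtime = cycles / (freq_ghz * 1e9)
    return power * runtime

# Illustrative: underclocking from 3 GHz @ 1.2 V to 1.5 GHz @ 0.9 V
e_fast = task_energy(3.0, 1.2)
e_slow = task_energy(1.5, 0.9)
print(e_slow < e_fast)  # True: the lower voltage wins despite half the clock
```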

------
goatlover
No Julia or Crystal, that's disappointing.

------
tomohawk
YMMV. Our experience with Go vs Java showed we could run the same workload on
far fewer machines using Go than Java.

------
apta
> Of the top five languages in both categories, four of them were compiled.
> (The exception? Java.)

Java is also compiled.

------
jayd16
So do we think Java is benefiting from J2ME and Android research? Is it just a
coincidence?

------
tjpnz
How is JavaScript so far up the list? Slack commits murder on my MBP battery
every day.

~~~
sod
Using
[https://github.com/xtermjs/xterm.js](https://github.com/xtermjs/xterm.js) as
an example (the terminal used in vscode), they draw on a canvas to achieve
better rendering in the browser. So I guess JavaScript is not the CPU-hogging
bottleneck, but rather the complexity of HTML/CSS. There are just too many
footguns that slow down rendering. If only there were something like "use
strict" for HTML/CSS.

------
crimsonalucard
Assembly language is the lower bound here. This is a 100% correct fact that
nothing can use less energy than assembly language. No such statement can be
made about other languages.

~~~
geophertz
Not necessarily. Sometimes compilers produce better programs than a
programmer can, even with hand optimization.

~~~
crimsonalucard
Right, but if I write a program that converts code into assembly and does the
optimizations automatically, then I can get something better than or equal to
a compiler.

~~~
dragonwriter
Isn't that exactly a compiler?

~~~
crimsonalucard
Yes. And all of my statements are 100% correct except for this one.

------
xiphias2
The heavily parallelized C in the language shootout is not how most code is
written.

Rust was specifically created to use parallel hardware resources well, and it
excels at it.

------
sys_64738
No COBOL?

~~~
tapland
Given the volume of transactions running on COBOL, it might deserve a place
in a list like this.

~~~
bastawhiz
If you're still using COBOL, power usage is likely very low on your list of
concerns.

~~~
tapland
No? Just about every financial transaction, insurance calculation, and a lot
of telco operations run mainly on COBOL.

Something like 90% of all credit card transactions.

~~~
bastawhiz
You can throw money at electricity use, and COBOL is compiled to machine code
so the power usage is much less of a concern than it would be in an
interpreted language.

Much higher on the list of concerns is finding capable COBOL developers,
ongoing maintenance, and security. Those are all things that you can't just
throw money at.

------
pulketo
very stupid but clickbaiting question... everything wastes everything...

~~~
RobLach
What does “everything wastes everything” mean?

------
bsenftner
Now, seriously: the bogeyman of managing your own memory in C is a myth.
Anything you want from another language can be built up from C, and remain
under your direct source-code control. Far too much propaganda is written
telling programmers perfectly capable of the gains from working in C that
they should be afraid of managing their own memory allocations. As if being
organized is impossible.

~~~
tannhaeuser
Language runtime analysis for server-like programs seems obsessed with
putting everything into a single process and address space. With C/Unix,
traditionally or at least since inetd, the way to go is to start a fresh
process per request, which mostly solves the problem of manual memory
management (e.g. because the OS cleans up the mess you left behind), at the
price of latency. I'd like to see a benchmark comparing these approaches,
since GC is far from free of overhead either.

~~~
the_why_of_y
How would that prevent any of these CWE-415 or CWE-416 vulnerabilities?

[https://www.cvedetails.com/vulnerability-
search.php?f=1&vend...](https://www.cvedetails.com/vulnerability-
search.php?f=1&vendor=&product=&cveid=&msid=&bidno=&cweid=415&cvssscoremin=&cvssscoremax=&psy=&psm=&pey=&pem=&usy=&usm=&uey=&uem=)

[https://www.cvedetails.com/vulnerability-
search.php?f=1&vend...](https://www.cvedetails.com/vulnerability-
search.php?f=1&vendor=&product=&cveid=&msid=&bidno=&cweid=416&cvssscoremin=&cvssscoremax=&psy=&psm=&pey=&pem=&usy=&usm=&uey=&uem=)

Pedantic note: quite a few of these vulnerabilities don't technically
contradict the GP's claim because they're in C++, not C, but I hope there are
enough C ones to disprove the point.

------
codesushi42
This study is a bit silly; I had to check whether it was an April Fools'
joke. The chipset running the instructions is going to have a far higher
impact on energy usage than the language those instructions were compiled
from.

For instance, switching to ASICs or even FPGAs that are optimized to handle
certain types of computations.

Just look at datacenters or mobile devices. Sure, optimizations are made to
runtime environments to improve performance, and increase battery life. But
you are going to see bigger gains through chip architecture. And that is what
vendors focus on.

~~~
codesushi42
To the naysayers: these languages are either compiled, or will be JIT'ed at
runtime. What they will be compiled to will be chip specific binaries-- x86,
ARM, whatever. The energy usage will be markedly different depending on which
chip architecture is chosen, and will no doubt be inconsistent between
different architectures and languages.

Or to put it another way. Why do you think mobile devices almost exclusively
use ARM? Why do datacenter operators invest heavily in R&D for ASICs instead
of engineering new languages or runtimes? Because languages play a secondary
role in energy consumption.

This study is just like countless other meaningless benchmarks seen in
language-war flame posts. Pointless, and it misses the big picture.

~~~
Nvorzula
With regard to datacenter operators, I would argue that a simpler answer to
that observation is that they are investing heavily in the are in which they
can actually effect change. That is, they can swap in-and-out their hardware,
but language of choice is largely up to their customers - the dev. They can
advocate and champion a particular language, but really enforcing it in any
meaningful way is narrow casting their own user base.

~~~
codesushi42
They could offer their own compilers or runtimes. Some big companies do. But
most dollars and time are spent on hardware R&D and upgrades.

Facebook and Google both have in house chip designers these days, specifically
for their data centers.

------
kissgyorgy
Not sure how seriously they mean this, and how much of it is a joke. If it's
not a joke, they clearly don't see the big picture. For example, if you can
write software in a higher-level language in a fraction of the time, sparing
energy somewhere else in the world, it will use less electricity overall.

~~~
dahart
How does less dev time save energy “somewhere else in the world”?

Google & AWS pay money to develop & acquire technologies that save energy
because of their scale. The energy used in dev is negligible compared to the
energy used to run programs at scale. Google, for example, has a PHP compiler
that makes all PHP web pages execute in a fraction of the energy usage of
running the PHP interpreter.

~~~
akhilcacharya
...Google has a PHP compiler? Don't you mean Hack/HHVM by FB?

~~~
dahart
Nope, I don’t mean HHVM. I was thinking of Talaria. I don’t know if they still
use it. [http://enswmu.blogspot.com/2013/03/why-google-acquired-
talar...](http://enswmu.blogspot.com/2013/03/why-google-acquired-talaria-
efficency.html)

But HipHop is certainly another good example that demonstrates that this
matters in practice.

