Energy Efficiency across Programming Languages (2017) [pdf] (uminho.pt)
79 points by g42gregory on Sept 21, 2022 | hide | past | favorite | 89 comments


I see similar postings coming by on LinkedIn now and then. Although there is some value in the benchmark results, the way these results are promoted is such BS.

If your company's business really consists of running algorithms similar to binary search or the n-body problem, you should probably not be checking which language is most energy efficient, but which library or framework suits you best. E.g. you can perfectly well use Python with Pandas or NumPy, even though everybody will agree that Python in itself is terribly inefficient. If you really want to go to the extreme in saving energy, you should go all the way and code your algorithms in assembly. But it's obvious why this is probably not a very wise choice.
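To make the Python + NumPy point concrete, here is a minimal sketch (the function names are illustrative, not from any benchmark) of the same dot product in pure Python versus NumPy, where the hot loop runs in compiled C:

```python
# Sketch: the same numeric workload in pure Python vs NumPy.
# NumPy pushes the hot loop into compiled C, so the "inefficient
# language" mostly just dispatches to fast native code.
import numpy as np

def dot_pure(xs, ys):
    # interpreted loop: one bytecode dispatch per element
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

n = 10_000
xs = list(range(n))
a = np.arange(n, dtype=np.float64)

# Same result; on large inputs the NumPy call is typically orders of
# magnitude faster because the loop never touches the interpreter.
assert dot_pure(xs, xs) == float(np.dot(a, a))
```

Timing the two with `timeit` on your own machine makes the gap obvious, which is exactly why library choice tends to dominate language choice for this kind of workload.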

Everybody who knows a bit about IT will understand this, but the danger is that some project managers will read one of these papers and conclude that the team should switch to C or Rust because it is more sustainable...


I'm seeing this trend as well: "Software needs to be made more energy efficient." Usually this comment comes from disgruntled data center designers who feel they are unfairly being forced to eke out more efficiency when the onus should be on the IT manufacturers and SW developers. This attitude is somewhat justified, as it is possible to run servers at much warmer temps, but the DC industry (namely its customers) continues to insist on SLAs with unreasonably low air temperature set points.

IMO, the first wave of optimizing software for energy consumption has already happened via virtualization. Ideally, I can run the same workload with less metal/watts.

I feel like the next round of optimization will be more problematic. You could re-write software to take advantage of hardware acceleration features that are baked into the silicon to ideally speed things up/reduce power consumption; however, this comes with its own issues. You need to be very familiar with your hardware. Integrating much more tightly means you're married to your metal... kinda like CUDA lock-in in the GPU ecosystem.


> You need to be very familiar with your hardware. Integrating much more tightly means your married to your metal

I don’t know how true it is in the HPC space, but the more optimization work I do the more I realise how horribly inefficient 99% of programs are. Most devs these days just don’t care about making software fast, and when they do they have no idea where to start or what will help.

Last year I spent some time optimising CRDTs (used in collaborative text editing). Automerge (a well known library) took 5 minutes to run a certain benchmark. Yjs (considered a mature, well performing library) took 1 second to run the same test. After spending some time optimizing, I now have code which can do the same work in 4 milliseconds. I'm not using any special hardware; this is just straightforward Rust.

The remarkable part is that automerge isn't written in an uncharacteristically bad way. Sure, it uses immutablejs, but plenty of modern javascript programs do something very similar. Even yjs, despite being the target of some careful optimization work, is still 200x slower than it could be, and is thus wasting 99.5% of its energy budget.

If even yjs can be sped up by 200x, how much do you think most enterprise software could be sped up, if people tried? 500x? 10000x?

I don’t think we need deeper integration with hardware. We just need more software developers who care at all about performance.

Virtualisation wipes out 20%+ of the performance of the machine, depending on what you’re doing. But we pay that cost gladly because it’s marginally easier than sharing a Linux kernel between multiple processes.

I think if our CPUs had capped out 50x slower than they are today, the average user would hardly notice any difference. The only difference would be that software would be more expensive to make, because software engineers would need to actually understand how the computer works to make software perform. Things like the virtual DOM wouldn’t be fast enough to use at scale.

Video games and zoom wouldn’t be quite as pretty. But we’d live.


> If even yjs can be sped up by 200x, how much do you think most enterprise software could be sped up, if people tried? 500x? 10000x?

Like 10x at most? You compare something that is basically one hot loop with something that spends at most 1% of its CPU time in hot loops, the rest being just random code paths and calls to other, often optimized, libs (e.g. for making network connections, DB communication, etc.).


10x? Look, we can’t know for sure. But yjs was already an optimized library and another 200x speed up was available there. Why do you assume any of the network code used to talk to your database is well optimized?

A flat performance profile isn’t a sign everything is fast. It’s usually a sign everything is uniformly sluggish. And yeah, that makes optimization harder. But it’s far from impossible.

Take rendering html for example. I’ve seen rendering (just the rendering part, not database lookups) take over 200ms in nodejs. Generating a string with the equivalent rust compile-time templating engine can spit out the same html in microseconds.

There is an insane amount of performance on the table with modern CPUs. Just about everything short of network round trips and stable diffusion should complete instantly.


It sounds like the latter thing probably has some design issues that lead to a flat performance profile where every single thing takes orders of magnitude more work than necessary?


Yes. Stuff only needs to be fast enough to be barely usable. It's a simple economic principle. Making programs faster need not be hard. When looking at programs I often think how much easier it would have been to organize them right, so they would be both easier to make and faster to run. However, this requires more experienced engineers, who are a scarce resource. So we're back to economics.


> I feel like the next round of optimization will be more problematic. You could re-write software to take advantage of hardware acceleration features that are baked into the silicon to ideally speed things up/reduce power consumption, however this comes with its own issues.

I'm sorry but this is far from the truth.

The absolute majority of software written today is not optimized. There has been a huge shift away from the efficient close-to-the-metal programming that was popular in the 90s and earlier, to what we have today. There are enormous energy and efficiency benefits we could reap today, without "problematic" optimizations. We just need to start optimizing at all.


>The absolute majority of software written today is not optimized.

If it's not problematic, then why is it not being done already? Obviously, it's the economics... which are problematic.


This presupposes a lot of market efficiency that may not exist. Writing faster software often allows you to avoid a lot of architectural complexity.


Is there? Most applications are not running a hot loop over terabytes of data. There are a few where indeed insane speed-ups are possible (e.g. it would be dumb to compete with video codecs in anything other than assembly with plenty of vector instructions), but most programs barely make a CPU sweat; they just idle on IO. Virtual threads, as in Go or Java, would bring much more benefit than rewriting said programs in a lower-level language.


No, assembly most of the time won't be more efficient than langs like Go or Rust, simply because people won't be able to write optimized code on par with what modern languages can achieve, and it would then be harder to actually architect your application correctly. Also add utilizing multiple cores to the mix.

When it comes to other languages I don't think it's that simple either. Yes, these benchmarks are not representative of what most people do at work, but I've seen orders of magnitude of difference between dynamic languages like Ruby or Python vs languages like Rust or Go when handling web traffic. Often web apps spend 50-70% of their request-handling time doing actual CPU work.


Langs like Go? Go is a managed language with a compiler that barely does any optimizations (hence its compilation speed). I honestly get irrationally angry when it’s grouped with Rust,C,C++, etc.


Agreed, go should be next to java, C#, ocaml, julia, kotlin, swift etc. Solid multi-paradigm general purpose languages. Not systems/embedded/performance languages.


Go is fully native unlike VM-based languages. It stands half way between C/C++/Rust and Java/C#.

Also, for performance critical sections, you can bypass the GC by importing "C" and "Unsafe" to get manual memory management and pointer arithmetic.

https://dgraph.io/blog/post/manual-memory-management-golang-...


My experience and the benchmarks concur that Go, despite being native, is consistently noticeably slower than C# and often slower than Java. I think native vs VM isn't as relevant as I once thought it was. Then you have a dynamically typed JIT language like Julia leading the pack in performance, at least among the GC languages; I would never have predicted that. Go is at the slower end of the non-interpreted GC languages when used idiomatically, including all the VM languages. I would sooner say it lies between Java and JavaScript, so between VM and interpreted. It shares that niche with Swift and Haskell.


Did you try with a recent release of the Go compiler? Versions above 1.15 have seen significant performance optimizations, most notably better inlining.


Yeah, Java, C# all can be compiled to native; that's beside the point (Java had gcj 20 years ago for that, not just Graal nowadays). It is just bundling the runtime basically, nothing inherently different.

FFI is available in basically every single language, C# was actually very explicitly made for this use case.

And Java originally was literally made for embedded devices :D it is still running on every single SIM card and bank card, but there are many other solutions targeting embedded, for example microEJ.


I know what the difference between Go and Rust is, but again, it's not only about raw speed. I welcome you to try coding an async application in assembler...


Have a look at the Tiny Go compiler, which does more optimizations: https://tinygo.org/docs/concepts/faq/what-is-tinygo/

As mentioned you can also bypass the GC for performance critical sections.


While I am sympathetic to your concerns about project managers making decisions that should be reserved for the team, these metrics do contain useful info. I am sad that Isaac decided numpy wouldn't be used. It is fairly non-idiomatic not to use it for numerical problems. But if I cared that much I'd fork it. He has done a good job of late meticulously distinguishing simple vs reasonable vs way-too-optimized, non-portable solutions. Python is just an odd case. I would probably have used pytorch for a problem like Mandelbrot, but throwing a GPU in there is way out of scope.


Please, for the love of God, can we ignore this useless paper? It has no new insights. It simply takes the Benchmarks Game and concludes that the ones that are fastest also consume least energy.

Benchmarks Game is a fun exercise to kill some time on, but it doesn't indicate anything about anything. The solutions across different languages don't even have the same algorithmic complexity. How can these possibly be compared?


> but it doesn't indicate anything about anything.

Clearly not true. This is like when people say "IQ tests don't measure anything" but what they mean is "IQ tests don't measure what I think you think it does"

You assume we are all incapable of understanding the nuance of the Benchmark game, maybe some of us are. Not all of us!

>The solutions across different languages don't even have the same algorithmic complexity. How can these possibly be compared?

You can compare anything you want. If you feel a particular language's solution is sub-optimal, you may go fix it rather than complain. There is no perfect way to compare language performance; this way is reasonable. If you know of a better way, set it up and do it.


By all means use the Benchmark game if you’re under the impression it’s meaningful. But let’s not call this attempt to spin it as a measurement of energy consumption novel or worthy of a paper. Of course programs that execute fewer instructions and put the CPU back to sleep will consume less power. That’s obvious.

I’m not that interested in debating whether gamed benchmarks measure anything useful. You’re saying they do. Let’s agree to disagree.


What would you see as a "fair" benchmark?

Here's another, for the server use case:

https://www.techempower.com/benchmarks/#section=data-r21&tes...

I am stunned that a JS tool came out on top.


> I am stunned that a JS tool came out on top

I wanted to know why, and I found this article by the author:

https://just.billywhizz.io/blog/on-javascript-performance-01...

also: https://github.com/just-js/just/issues/5#issuecomment-778673...


I always wanted to take code that hasn't been written specifically for a benchmark and test that. RosettaCode or leetcode might be a good source for that.

leetcode even has many versions of each problem written by different people so you could see e.g. the distribution of expected runtimes for JavaScript programmers Vs Python programmers or whatever.

It wouldn't quite be fair on the language. E.g. I would expect Typescript to give better performance than JavaScript simply because the fact that you're using Typescript means you know what you're doing more than a JavaScript programmer.

Would be cool to see. Unfortunately I couldn't figure out a way to get unbiased sample programs for each problem.
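The idea could be prototyped with a small timing harness; the two functions below are hypothetical stand-ins for solutions collected from different programmers:

```python
# Hypothetical sketch: time several independently written solutions
# to the same problem and look at the distribution of runtimes,
# rather than a single hand-tuned benchmark entry.
import statistics
import timeit

def solution_a(n):           # stand-in: one submitted solution
    return sum(i * i for i in range(n))

def solution_b(n):           # stand-in: another submitted solution
    total = 0
    for i in range(n):
        total += i * i
    return total

solutions = [solution_a, solution_b]
# min over repeats filters out scheduling noise for each solution
timings = [min(timeit.repeat(lambda f=f: f(100_000), number=10, repeat=3))
           for f in solutions]
print("median runtime:", statistics.median(timings))
print("spread:", max(timings) - min(timings))
```

With enough submissions per problem, the median and spread per language would say more about "typical" performance than any single optimized entry.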


For you — 2021 "Ranking programming languages by energy efficiency"

https://haslab.github.io/SAFER/scp21.pdf

"In addition, we further validate our results and rankings against implementations from a chrestomathy program repository, Rosetta Code, by reproducing our methodology and benchmarking system."


Interesting, though that seems to be the same authors so I'm not entirely sure I trust their methodology, given how much they messed it up the first time round. Also the results (page 48) show some obvious red flags, e.g. C and Chapel being suspiciously fast on Ackermann.

Still the overall results mostly match my intuition.


> … how much they messed it up the first time round.

With exploratory data analysis it is important to state how outliers will be identified and treated.

https://www.itl.nist.gov/div898/handbook/prc/section1/prc16....

The data tables published with that 2017 paper show a 15x difference between the measured times of the selected JS and TS fannkuch-redux programs. That should explain the TS and JS average Time difference.

There's an order of magnitude difference between the times of the selected C and C++ programs, for one thing — regex-redux. That should explain the C and C++ average Time difference.

Even without looking for cause, they seem like outliers which could have been excluded.

Without those outliers you'd be telling me that "the overall results mostly match my intuition".


There are no fair benchmarks between programming languages.

> When a measure becomes a target, it ceases to be a good measure.

If you're deciding what language to use for your next project, there are better ways. Ask yourself these questions.

- Can developers be hired/trained to use it?

- Will they be happy using it?

- What will our development velocity be on this language? What will be the operational burden of keeping it running in production?

- Do we need to rely on third party code? Does that code already exist and what is its quality?

Importantly, how will the answers to these questions change over time? Will this language be just as easy to hire for in 5 years time?

Comparing languages by benchmarks is a waste of time.


Were those prioritized? I would have put velocity first, then whether devs like it. Those are coupled. If they like it, it is less important whether you can hire people who already know it, and likewise, if your velocity is high, maintainability becomes less important.

All those are subject to physical constraints though. Much of the world's code is embedded, and thus severely constrained in both compute and power. That limits your choices, but within those choices your criteria are still absolutely the correct ones.


My list was not prioritised and not exhaustive either. As you correctly point out, physical constraints matter. If there are other requirements like real-time, low latency or high throughput, we have to take those into account as well.


Ignoring performance has gotten many companies in a terrible bind down the road. Not all people's projects are performance irrelevant.


This is not obvious at all. A program may very well run faster yet use more energy for a myriad of reasons. If there have been no publications about it, then this is valuable confirmation and a good citation if need be.


That's just plain incorrect:

> Through also measuring the execution time and peak memory usage, we were able to relate both to energy to understand not only how memory usage affects energy consumption, but also how time and energy relate. This allowed us to understand if a faster language is always the most energy efficient. As we saw, this is not always the case.


No doubt you read the graphs right after this statement? Look at the green dots (the ratio between energy consumption and time taken). Nearly all languages have comparable ratios, meaning a lower execution time leads to less energy consumed.

> this is not always the case

Just almost all the time, with a few exceptions. Execution time and energy consumption are very strongly correlated.

Time taken is the single biggest factor in how much power is consumed in modern CPUs. The idea is to complete any task quickly and put the idle cores to sleep to conserve power.


The paper explicitly concludes that there is no deterministic relationship between speed of execution and energy consumption, contrary to what your initial comment implies.

> Execution time behaves differently when compared to energy efficiency. The results for the 3 benchmarks presented in Table 3 (and the remainder shown in the appendix) show several scenarios where a certain language energy consumption rank differs from the execution time rank (as the arrows in the first column indicate). In the fasta benchmark, for example, the Fortran language is second most energy efficient, while dropping 6 positions when it comes to execution time. Moreover, by observing the Ratio values in Figures 1 to 3 (and the remainder in the appendix under Results - C. Energy and Time Graphs), we clearly see a substantial variation between languages. This means that the average power is not constant, which further strengthens the previous point.

There is a strong correlation, yes, which makes sense because more time will likely correlate to more work done by the hardware. However the paper proves that there are exceptions to this rule. That's interesting given that most people, including you, would have flatly assumed that performance is a proxy for energy efficiency.

You could argue that it's not a finding which is going to be relevant to most software engineers, but it might be interesting if you are designing or implementing a programming language for instance.


So there is a very strong correlation between time taken and energy consumption. This is because less work is done (like you pointed out) and CPUs go back to idle quicker (like I did). This part is blindingly obvious.

There are a few exceptions, but no mechanism that explains that from the paper. So statistical noise.

If anyone is designing a language to be energy efficient, they would do what is obvious: do less work by generating more efficient code (like you said) or make it easy to parallelise work so cores can go back to sleep quicker (like I said). They're not going to design around statistical anomalies.


It’s a computer science paper. They don’t have to present an actionable solution for it to provide value.

They have a meaningful new finding, which may provide the missing piece to some solution later on.

If we disregarded all scientific inquiry which did not result in an immediate solution to an existing problem, computers probably would not even have been invented by now.


The “meaningful new finding” is statistical noise that only affects a few data points. You’ve written hundreds of words without addressing that core issue.

Eager to hear your thoughts on this once you’ve organised them.


I've made myself clear enough. I invite you to read the paper again if you still don't understand.

You've been just wrong enough on all points that I could almost believe it's your paper and you want to bait people into pulling out the best bits and quoting them lol.


Cool.


> Please, for the love of God, can we ignore…?

Obviously not.

Seems like the combination of climate emergency and my programming language beats your programming language has a fascination.

At least the OP posted the conference paper rather than a jpg of one table out of context.


Yes, let’s continue to be energy blind as we consume our planet to death.


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

"Eschew flamebait. Avoid generic tangents."

"Don't be snarky."

https://news.ycombinator.com/newsguidelines.html


The code from this study is here: https://github.com/greensoftwarelab/Energy-Languages

From looking at the code snippets, a big issue with this study becomes clear - it doesn't reflect how languages like Python are used in practice.

In practice, the "hot loops" of Python are in c/c++/fortran/cython/numba/... i.e. Python code usually makes use of a vast ecosystem of optimized science/maths/data science libraries. Whereas the study code is mainly using pure Python.

It's an issue with the methodology that the programs are written specifically for this study; creating an artificial situation.


> In practice, the "hot loops" of Python are in c/c++/fortran/cython/numba/

This is a vast overgeneralization. You'll find these optimized loops in optimized packages, done by engineers with enough experience. In practice, there is a lot of slow running code. And that includes "optimized" code that runs on numpy or similar but simply doesn't lend itself very well to being optimized in that way.

Python is convenient for short scripts, but very slow to execute.


That is a really good point about Python being a connector language and internal API implementations done in C/C++/Fortran/etc... It kind of makes me question the entire premise of this paper.


That may be the case for numerical code. However, a big niche for Python (if not the biggest one) is web services. How common is calling C++/Fortran code in a Django/Flask project? Those projects seem to have layers upon layers of pure Python code, with metaprogramming and whatnot.

Additionally, going through the FFI from C++ to Python and back to C++ has a cost. Eliminating this overhead was one of the motivations behind Julia.


Even in web services, the computationally-intensive bits are usually the database queries, which are also not handled by pure Python.


> In practice, the "hot loops" of Python are in c/c++/fortran/cython/numba/

How would this be comparing apples to apples? The paper's main objective is to compare programming languages using their own native facilities. It is like properly translating the Bible or Shakespeare to other languages: some translations will be more literal than others, losing all the nuances, especially if compared to the original text.

If you have Python glue calls to processes written in other languages, then you would be creating noise in the results. That is not the purpose of the paper at all, and it is outside of its scope.


Forgive my pedantry but the ecosystem is not the language. Nor is practice. Ad absurdum, you could include the hardware pythonists usually employ.

If you use python as glue, it should be compared to other languages used as glue.


Is it not fair enough to use ubiquitous libraries in the code for this test? E.g. look at the n bodies code.. someone trying to write efficient Python code would be using NumPy at the very least (and possibly Numba).
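As a hedged sketch (not the benchmarks-game code), the pairwise-force step of an n-body simulation can be vectorized with NumPy so the O(n^2) hot loop runs in C rather than the interpreter:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-12):
    # pos: (n, 3) positions; mass: (n,) masses; G = 1 for simplicity
    diff = pos[None, :, :] - pos[:, None, :]          # (n, n, 3) pairwise offsets
    dist3 = (np.sum(diff**2, axis=-1) + eps) ** 1.5   # softened |r|^3
    np.fill_diagonal(dist3, np.inf)                   # exclude self-interaction
    # acc[i] = sum_j m_j * (pos[j] - pos[i]) / |r_ij|^3
    return np.sum(diff * (mass[None, :, None] / dist3[:, :, None]), axis=1)

# Two unit masses one unit apart attract along x with unit acceleration.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
acc = accelerations(pos, np.array([1.0, 1.0]))
```

Whether this counts as "Python being efficient" or "C being efficient with Python syntax" is exactly the methodological question the study sidesteps.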


NumPy has lots of C calls, and the functionality has to be implemented entirely with the language features. What you are suggesting is outside the scope of the paper.


> It's an issue with the methodology that the programs are written specifically for this study…

Not correct — the programs were taken from the benchmarks game.


It’s crazy how good Java is at energy efficiency. It is criminally underhyped and is sometimes replaced by the newest "best PL that will change everything", even though the rewrite is very unlikely to produce better results beyond the inherent gains of any rewrite.

Also, the often-claimed negative of Java, memory overhead, is relevant here: a GC'd language operates best with a deliberate overhead over the strictly necessary memory. Java's GCs are quite "lazy" in that they will not collect unused objects until it is deemed necessary, which is in line with energy efficiency.


But there is still no nice way to embed structures "flat" in arrays or other structures, right? If there was, I'd consider switching to Java as my main programming language.

I don't understand much about the theory and practice of implementing a GC'ed language, but I figure the problem is "locking" such an embedded object, because it doesn't have an obvious reference. One could take its enclosing object's reference but I suppose that leads to complications. What about functions that need a pointer to an embedded object in order to read or write to it? etc.

I've had only short encounters with Java, but it (OpenJDK) did bite me in 2016, when I had to process millions of entities. I'd often run into low-memory situations and my machine would completely lock up for a minute or so, before the GC would give up and terminate the process. There was no nice way to solve the issue, other than allocating all my entities SOA style. This helped with the GC problem since it reduced my GC object count from #entities to #columns. But it isn't nice to read/write the code like that, nor is it good for the cache splitting up every little integer.

I hear that GCs have improved much recently, removing the performance issues almost entirely. However, I believe the problem remains that a lot of code is harder to write because you can't simply pass a pointer + length to a function or do a memcpy. I figure this is one reason why there is so much abstraction in Java, with proxy objects and virtual methods and the like.

If the "embedding" problem did not exist, there might not even be such a great need for GC rocket science, because there was a lot less need to have millions of objects in the first place.


These are called value types, and there isn't really a problem with them if you start from scratch, but it is very far from trivial to introduce them in a backwards compatible way, plus solve it with generics in mind, etc.

The current plan for Java is to divide objects into 3 buckets, one being the current ones; they are distinct from the others in having identity.

The second group loses its identity, but its members retain their nullability. A great example would be the java.time package with e.g. LocalDate. Here two instances with the same fields will be semantically equal and replaceable, but there is no meaningful zero value as there is with numbers. If they were to auto-initialize to 1970, that would probably cause some serious, hard-to-debug bugs later on. So they retain their nullability and they are tear-free. At initialization time they only become visible in a state that is meaningful.

The last group will be similar to current primitives. They have a designated zero value, and seeing them in a consistent state is not guaranteed (in Java, changing an int value even under race conditions is free from introducing values out of thin air; that is, only values that were actually set from some thread can appear). In the case of a complex number that you are about to change in two steps, another thread is free to observe it in an inconsistent state.

I really like this plan, and this way of thinking of object semantics actually helped me in other PLs as well. Do note that they don’t give any semantics on where the objects get stored, this is up to the VM implementation. In practice what it allows for is copying/flattening/stack-allocation of the second bucket, possibly with some clever encoding of null values a la Rust’s optionals. And the third group will allow for implementing custom numerics with basically no overhead.


If your structures are simple data with no methods, serialize them to BSON and store them as String.

I’m half-joking, unfortunately.


Bitcoin Script[1] is probably the least efficient, then languages like Solidity[2] targeting Ethereum's EVM.

[1] https://en.bitcoin.it/wiki/Script

https://www.pcmag.com/encyclopedia/term/bitcoin-script

[2] https://docs.soliditylang.org/en/v0.8.17/


A colleague looked into improving energy efficiency of software as an academic project. He concluded that it's essentially the same problem as optimisation. Making the program/task finish sooner was by far the biggest contributor to energy saving, so the techniques for reducing energy consumption are those of program optimisation.


Except SIMD code, while faster, may sometimes be less energy-efficient than scalar code. Different SIMD extensions have different energy efficiency characteristics. Also, multithreaded code may finish a task sooner but spend more energy due to the costs of synchronisation, spawning and joining threads, etc.


Nice point. I'll add it to the list of reasons to avoid multi-threading things wherever possible!


That’s the reason for mobile devices’ processor setup. There is a race to sleep with regard to CPUs.


Related:

Energy Efficiency across Programming Languages [pdf] - https://news.ycombinator.com/item?id=24642134 - Sept 2020 (158 comments)

Energy Efficiency Across Programming Languages - https://news.ycombinator.com/item?id=21950341 - Jan 2020 (1 comment)

Energy Efficiency Across Programming Languages (2017) [pdf] - https://news.ycombinator.com/item?id=19618699 - April 2019 (1 comment)

Energy Efficiency Across Programming Languages - https://news.ycombinator.com/item?id=15249289 - Sept 2017 (139 comments)


Interesting, but I'd be more interested in the holistic, total energy cost of a particular language choice. Only where the program is highly compute-bound and is run for many parallel instance-hours would an analysis like this help to determine the best options, e.g. the language used for a widely-used spreadsheet program. In a more 'real life' case, the energy costs of developer-hours also need to be amortised (their dev machines, their coffees & pizzas, the light and heating of their home/office, maybe travel to office, etc.), which is much more conditioned by the productivity of the language for the problem domain. In many cases, the production energy use of a given language is minuscule in that totality, so only where it is a significant component should it be part of language selection criteria.


I think the answer for efficiency in programming is to reduce the total number of servers, data centers and bandwidth required to deliver the solution.

Picking a programming language has virtually no bearing on this. There are certainly some languages that can run faster in some scenarios, but these microbenchmarks have little relevance to any practical reality at scale.

For me, I like to go through hell instead of around it. Don't try to make one server sip power obsessively. Make that one server do as much as possible, since you are already paying a fixed, idle power cost.

I think picking things like SQLite vs SQL Server has a much bigger impact on power consumption, and these are less disruptive choices than swapping programming languages as well.


> Picking a programming language has virtually no bearing on this.

How so? Usage of, e.g. RAM and memory bandwidth shows huge variation depending on which programming languages are chosen - and that tends to be the most relevant bottleneck wrt. aggregating workloads onto a single server, or a limited number thereof.


But things like PL choice are a tool in the toolbox if you are trying to reduce the number of servers you use.

If your PL is extremely memory hungry, or doesn't parallelize efficiently, you're going to bottleneck a lot sooner.


It seems like a legitimate problem that JavaScript, probably one of the languages with the most code executed worldwide, is down toward the bottom.

I honestly wonder how much time and energy capacity could be freed up world wide if we just used tools optimized for hardware and network performance and efficiency.


For most businesses, a good chunk of their compute costs are energy costs, directly or indirectly (eg. via cooling, or via paying for cloud hosting from someone else who pays the energy bill).

Therefore, the real question becomes: Which language will reduce my compute budget most?


Profiling code optimisation in joules is a fun and somewhat unusual task. Surprisingly it has come up both for the high end in a supercomputing project and at the very low end embedded. Having an execution budget in handfuls of joules is just weird, and fun.
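For anyone who wants to try this on Linux, Intel's RAPL counters expose cumulative package energy in microjoules via sysfs. A minimal sketch (assuming a RAPL-capable host and the `intel-rapl:0` domain, both machine-dependent; the `joules` helper handles the counter wrapping):

```python
# Sketch: measuring a code section's energy on Linux via Intel RAPL.
# Assumes /sys/class/powercap/intel-rapl:0 exists; the path and the
# relevant power domain vary by machine.

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_energy_uj(domain=RAPL):
    with open(f"{domain}/energy_uj") as f:
        return int(f.read())

def read_max_energy_uj(domain=RAPL):
    with open(f"{domain}/max_energy_range_uj") as f:
        return int(f.read())

def joules(start_uj, end_uj, max_uj):
    """Energy in joules between two counter readings, allowing one wrap."""
    delta = end_uj - start_uj
    if delta < 0:                  # the cumulative counter wrapped around
        delta += max_uj + 1
    return delta / 1_000_000       # microjoules -> joules

# Usage (on a RAPL-capable host):
#   before = read_energy_uj()
#   run_workload()
#   print(joules(before, read_energy_uj(), read_max_energy_uj()), "J")
```

Note this measures the whole package, so you want an otherwise idle machine for a joule budget to mean anything.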


I've seen some comments on twitter (like this one [0]) that are saying that the complexity of programs in different languages is not the same and that is impacting results. Is someone willing to develop this for me (since I don't really know any other language here other than Python)?

Bonus: here's "Ranking Programming Languages by Energy Efficiency", from 2021 [1]

[0]: https://twitter.com/Czaki_PL/status/1569636020475265025

[1]: https://haslab.github.io/SAFER/scp21.pdf


The linked c code sample sorts the array, then accumulates the unique elements in a single pass. The python code sample makes a separate pass over `unique` for every element of `items`.


The python code also prints the resulting list in the same order. It's a completely different problem to solve. In python I would throw the list in a set and return the size of it / print that.
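For concreteness, here are both shapes side by side: a quadratic membership-scan reconstructed from the description above (not the repo's exact code), and the set-based version suggested in the parent:

```python
def count_unique_quadratic(items):
    # One pass over `unique` per element of `items` -> O(n^2),
    # like the benchmark's Python sample as described above.
    unique = []
    for x in items:
        if x not in unique:        # linear scan on every element
            unique.append(x)
    return len(unique)

def count_unique_set(items):
    # Throw the list in a set and return its size -> O(n) expected.
    return len(set(items))

items = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
assert count_unique_quadratic(items) == count_unique_set(items) == 7
```

Same answer, wildly different work done, which is exactly why cross-language comparisons of non-equivalent programs mislead.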


I really need to pick up Rust


PHP looks horrific for energy use although it would have been 5.x in 2017. Think about all those PHP websites out there spending all day serving requests to bots and brute forcers. That's a lot of energy.


Rasmus Lerdorf promotes PHP 8 as a CO2-reducing update. I agree. You can serve many more visitors with the same server hardware.


Well, those websites just wait on the OS most of the time, not using much energy.

These benchmarks may not be representative for PHP, since its usual mode of operation is not some hot numerical loop running continuously.


But be careful to not conclude that we ought to use more efficient languages to save resources.

Remember Jevons' paradox: efficiency gains tend to increase total consumption, because whatever becomes cheaper to use gets used more, often enough to outweigh the per-use savings.

https://en.wikipedia.org/wiki/Jevons_paradox


Yeah but it's still a net benefit as you provide more value for the same amount of resources


Yep!


Surely just short-circuit this by running your apps on servers powered by sustainability-focused renewable energy sources.


JavaScript and TypeScript yield different results. Backwards-compatibility transpilation could be one reason, but that's a configurable option; otherwise the code should be identical if written for performance.


One of the JavaScript programs at least was concurrent, whereas the TypeScript equivalent was synchronous. No wonder there's a difference...

Haven't looked closely at the other problems, but it's apparent to me that the solutions are not even trying to be similar, so comparing their efficiency is near useless.

the problem in question was the k-nucleotide one, IIRC:

https://github.com/greensoftwarelab/Energy-Languages/blob/13...
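The point generalizes: once one solution is concurrent and the other is synchronous, you're benchmarking program structure, not the language. A toy illustration in Python (hypothetical workload, not the repo's k-nucleotide code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Some CPU-ish task; the specifics don't matter for the point.
    return sum(i * i for i in range(n))

chunks = [200_000] * 4

t0 = time.perf_counter()
serial = [work(n) for n in chunks]
serial_s = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(work, chunks))
parallel_s = time.perf_counter() - t0

# Same answers, potentially very different time/energy profiles --
# and neither number tells you anything about "language X vs language Y".
assert serial == parallel
```

(In CPython the GIL muddies the threaded case further, which is yet another program-structure effect that a naive cross-language table silently absorbs.)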


Maybe the JavaScript wouldn't type check with TypeScript.



Note: this is from 2017.



