With Clasp, we get the best of multiple worlds: a dynamic language (Common Lisp) with automatic memory management and enormous expressive power that can directly use powerful C and C++ libraries. All three of these languages are "long-lived" languages in that code written 10 or 20 years ago still works.
Performance is really important to me and I have written a lot of code over the past four decades. I won't develop meaningful code in any language that falls below Racket in table 4, because those language implementations are too inefficient. I furthermore want to keep using my code over the years and decades, so I won't develop meaningful code in any language where someone else can break my code by changing the standard. My program "leap" was written 27 years ago in C and it is still being used daily by thousands of computational chemists. But it's really hard to improve leap, because the code is brittle, largely due to malloc/free-style memory management (brrr). For a compiled, high-performance, standardized language with proven staying power, Common Lisp is the best choice.
It's not uncommon in the tech industry to build some mad scientist solution - pardon the expression - because we're trying to do something pedestrian but have painted ourselves into a corner, e.g. HipHop VM.
Doing it to help with cutting edge science is genuinely exciting.
I looked at SBCL, but didn't see good ways to use existing numerical libraries. I'll have to look at Clasp again.
If we treat "dynamic" as a spectrum though, Lisp still gets you a lot more dynamism for a relatively small increase in power consumption (and a somewhat larger gap in performance).
jshell> var x=8
x ==> 8
jshell> x * 3
$2 ==> 24
But I haven't used it either as I never worked with a REPL other than SQL clients.
Perhaps your interest in CL is because you can recall using it in its prime. Nostalgia, or just novelty, is certainly a valid reason.
Also, developing and maintaining Python/C++ bindings for complex libraries is very painful and frustrating. I wrote Python bindings for years, using boost::python and, earlier, SWIG; keeping the bindings working and dealing with the different memory management approaches of Python and C++... bleh - it's a nightmare. At the same time, Python changed from version 2 to 3.x, and libraries I depended on, as well as my own Python code, were being broken and becoming outdated in ways I had no control over. It was like trying to build a house out of sand.
I've only been using Common Lisp for the past 6 years - after three decades of writing in other languages including Basic, Pascal, Smalltalk, C, Fortran, Python, PHP, Forth, Prolog... Common Lisp feels great, it feels powerful, and every function I write I know will compile and run in 20 years. Common Lisp has real macros (programs that write programs, written in the same language!), dynamic variables, generic functions, the Common Lisp Object System, conditions and restarts... There are many features that haven't made it into other languages. Common Lisp makes programming interesting again.
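To make the "programs that write programs" point concrete, here's a minimal sketch of my own (the with-timing macro and everything in it are hypothetical, not from any post here): a macro is ordinary Lisp code that runs at compile time and rewrites the code it wraps.

(defmacro with-timing (&body body)
  ;; Expands BODY into code that times itself and prints the result.
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~,3f s~%"
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second))))))

;; Usage: (with-timing (heavy-computation)) prints the elapsed time
;; and still returns the value of the body.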
I dare say that not many in the community are from some bygone time.
The fact is, CL has by far a more sophisticated runtime environment than the vast majority of dynamic languages, with Smalltalk being a standout exception.
Some of the language aesthetics have not aged well - I'm not talking about the parentheses, but rather the hideously long symbol names.
That only works if your application's hotspots are in a few, decoupled parts of your code – good examples are FFTs, data compression, or encryption. It doesn't work if you can't cleanly separate your hotspots from the rest of your logic. E.g., if you write a parser and analyzer for a programming language, what part do you want to offload to C? Even if you could identify a small part that takes up the majority of execution time, it would have a complex interface, and it would take a lot of work implementing and testing that interface.
Yes in general, but you can pick a subset useful enough in practice.
Take a look: https://github.com/Const-me/ComLightInterop
I'm thinking along the lines of using interpreted languages less server side because of efficiency, but also relying on JS less client side and using WASM where it makes sense.
This has stemmed from me learning Go last year and being struck by how much faster it actually is than Node for my use cases (API development and data processing).
Where I am curious to see the total impact is how we can take advantage of economies of scale to save money and increase efficiency. I'm thinking along the lines of scale to zero, event driven architectures.
Google Cloud, for example, claims they operate their data centers with industry leading efficiency while also being powered by renewable energy. At scale, do services like Cloud Run or Cloud Functions actually make a difference?
> data centers contribute 3% of global greenhouse emissions
Not true according to my research - more like _Tech in general_ contributes ~4% of global GHG emissions. Within that, datacenters only represent ~20% (so roughly 0.8% of the global total, not 3%). The rest is 15% for networks, 20% for consumer device consumption, and the remaining 45% for manufacturing of equipment.
> (same amount as the entire airline industry).
Air traffic accounts for ~2% of global GHG emissions, so Tech is actually twice as bad as air traffic there. Other ways to put it: as much as all the trucks on the planet, and 4x the emissions of a country like France.
Source: https://theshiftproject.org/wp-content/uploads/2018/11/Rappo... (FR) (p18, p20)
When performance is an issue in running programs, a common response is: hardware is cheap, just add another energy-guzzling server or use a more powerful computer.
This attitude is embarrassing when you consider that in every other industry there is a push for reduced resource usage and lower energy consumption. The programming field is the exception.
On the other hand, when it's programmers who are on the receiving end of slow, resource-intensive apps, they'll complain loudly. This industry is rife with hypocrisy.
This broad-brush view that the IT industry doesn't care is absurd; they have to pay bills too, and efficiency reduces those bills. They may not be motivated by environmental concerns, but the costs are very much obvious to them, and they do address them with improved efficiency where possible.
You really think the problem is the programmers?
Here's a homework assignment for you: approach the product managers in any given software company and pitch a new development practice, which will improve UX through superior performance, reduce AWS bills, and save the planet, all for the small cost of doubling all product release timelines. Then come back here and report their response.
Just try not to take it personally when they laugh you out of the room.
Companies care very much about resource usage past some number of machines. They don't think in terms of power consumption or environmental footprint, though, but in real dollars (the cost of hardware/cloud), which are highly correlated with it.
Kubernetes is the latest trend in spite of being an overcomplicated mess - precisely because it's an overcomplicated mess that can deploy and pack services more efficiently onto fewer resources.
This is mostly incorrect. Of the top 10 programming languages on GitHub, only Python, Ruby and PHP are commonly used with an interpreter. The rest are all AOT- or JIT-compiled. I also suspect a large fraction of the Python projects are data science / ML projects that heavily use packages like NumPy and TensorFlow, which offload most of the work to highly optimized math libraries.
I also suspect if you were to look into the programming languages used by the companies with the most servers, they would skew more towards languages like Java and C++, or custom things like Facebook's Hack / HHVM.
Some things are just faster to develop in some languages because they have different baseline capabilities. Try parsing and processing a lot of text input in C versus the same task in Perl. Assuming similar competency in both languages (hell, you don't even have to be fully competent in Perl for this, just not a total novice), the Perl solution will come out faster unless you've already spent a lot of time doing specifically text parsing and processing in C (which is not its primary use case for many, if not most, day-to-day C programmers).
Is that the case, though? I used to work at a company where we developed some apps in C++/Qt/QML and others in Electron, and for similar apps the development effort was pretty much the same.
If a programmer costs $200K/year, that salary could support about 300 kW of continuous power use at average US industrial electricity rates (about $0.07/kWh). So if you could spend (say) 20 kW to increase programmer productivity by (say) 10%, you'd be coming out ahead.
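Sanity-checking those numbers (my arithmetic, assuming ~8,760 hours in a year):

$200,000 / ($0.07/kWh x 8,760 h) ≈ 326 kW continuous

and the proposed 20 kW costs about 20 x 8,760 x $0.07 ≈ $12,000/year, comfortably less than the ~$20,000/year value of a 10% productivity gain.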
Often the layers add a lot of overhead, and these days not many people have an understanding of the underlying layers.
So it's no surprise to see that VM-based programs use more energy; they're slower.
"Even if your game is so simple that it does not require the full speed of the (16.78 MHz) machine, please put effort into optimizing it anyway so that it can spend more time sleeping in the low-power PAUSE instruction. Your players will notice the difference in battery drain and they will tell their friends."
So in general, the energy-efficiency penalty is actually worse than the performance loss.
That's not what this data shows. Java is at 1.98x, ahead of Swift (2.79x), Pascal (2.14x) and Fortran (2.52x).
That is disingenuous; I might buy comparing the best in each class, but the results are much the same as the overall ranking.
What Java shows (and a comparison of averages per class wouldn't show) is that there is not necessarily a 4x decrease in efficiency as a result of using a VM. It depends on the implementation. And there are quite a few other VM languages that are doing far better than 4x.
Swift clearly demonstrates that native AOT compilation is no guarantee for efficiency. Swift may well have become faster since this study was run (same goes for other languages), but using reference counting for garbage collection will make it very hard to catch up to the best.
They used the metric reported by a tool that limits average power via a programmable power limiter in hardware, which is an interesting way to do it. Totally valid, but I really wish they provided more detail here. For example, did all workloads run at the limit all the time? Presumably they did. Limit-based throttling is a form of hysteretic control, so the penalty part will be critical. How often and when the limit is hit will be critical too.
With this, Java ranking in the top 5 is quite impressive, considering that JIT optimisations wouldn't have really kicked in. My hypothesis is that if the Java program were allowed to run a few more times before being measured, it would rank higher.
And, along the same lines, couldn't the other compiled and VM-based languages (Common Lisp, Racket) be JIT-optimised?
It's about the first handheld scanner for a large shipping company. The hardware was engineered and nailed down, and a team was contracted to write the software. They got about halfway through and said the box didn't have enough ROM to handle all the features in the spec. The company contracted Forth Inc. to try and salvage the project, and that was possible because they used their proprietary Forth VM and factored the heck out of the code so they could reuse as much as possible, and got it all to fit. (A common Forth trick.)
10 years later, a new device was commissioned and management was smarter! They made sure there was a lot more memory on board, and a new contracted team finished the job. In the field, however, the batteries would not last an entire shift...
Forth Inc. was called in again. They made use of their cooperative tasking system to put the machine to sleep at every reasonable opportunity and wake it on user input.
Maybe it ain't the language that matters as much as the skill and imagination of the designers and coders. Just sayin'
It is usually well accepted that faster execution leads to lower power usage, as long as the CPU is operating in a reasonable thermal envelope.
Nothing new here, except that we can have a better grasp of the different orders of magnitude.
I imagine it's substantial and worth considering.
I don’t think we have a problem with the “how” as much as the “what” or “why”.
ObNitpick: I think porn is a genre rather than a medium.
When you consider the millions of servers in use, that additional language efficiency adds up to a substantial saving in electricity use. You can watch a segment from his presentation where he talks about this - and the calculations he made of potential CO2 savings - here:
The sync client has 10^7 installations or more out there, but it's backed by an engineering team of 10^2 or fewer. Maybe much less.
That scope of impact is fundamental to the economics of software, and it's why software engineers have so much potential to do good (or ill) for the environment.
How is that possible? I have a normal house with a few appliances turned on, plus three fairly powerful computers and some musical gear, and I'm at 530 VA right now according to my home's electricity meter.
Have you ever purchased anything online for home delivery? How did that good get to the delivery center, then your home?
Just because you didn’t pump a gallon of gas into your personally owned tank doesn’t mean it wasn’t burned on your behalf.
Note that it's not necessary for you to personally burn the three gallons of fuel per day. It's an average.
A quick computation gives me, given that my car drinks roughly 7 liters of diesel / 100 km if I'm not careful:
- 560 liters of diesel / year (I drive about 8,000 km a year)
- a liter of diesel is apparently ~10.74 kWh -> ~6014 kWh total
- 6014 / 365 -> ~16 kWh a day? I don't see how this is getting me any closer to 4000 kWh per day.
People who regularly fly would also likely quickly make this number rise.
The point is that the global emissions story boils down to transport fuels, meat, leaky houses, and a long tail of irrelevant things, such as your choice of programming language at small scale.
At large scale the economic incentives alone are enough to encourage huge energy consumers to use a decent language (for example, Google and C++). But the whole information industry taken together is irrelevant to the global emissions story as long as we have an airline industry, cars, and hamburgers.
Using an average of 21.5 mpg (the fleet-wide figure, given an average car age of 12 years) and ~13,500 miles per year, this comes to ~628 gallons of gas per year.
EPA uses 33.7 kWh per gallon, for ~21,160 kWh in a year. Divide by 365 * 24 and you get ~2.4 kW continuous, so it seems plausible.
EU numbers based on  come out to ~1 kW for driving?
That's about 11 L/100 km - I don't know anyone whose car consumes that much.
If we use the ~40 mpg of a 2020-model-year passenger car and a more conservative 11,500 miles per year, it comes out to ~1.1 kW.
This definition excludes the best-selling vehicle.
Also worth noting - the US gallon is smaller than the UK one. And anecdotally, the mileage rating in the US actually tends to represent real-world consumption, whereas the EU test - until very recently - did not.
I'd also be curious to probe "worst case" scenarios. Can you cause Kubernetes to thrash really badly spinning up and killing containers, and how much of an effect does that have on energy consumption?
Also, I would expect a more recent language to have a lot more low-hanging fruit than much older, heavily used languages. The more you optimize, the harder it gets to optimize further.
I'm looking at their table 4, with C:1.0, C++:1.56.
This throws the whole paper into doubt. Comparing crappy code in one language with good code in another reveals little of substance.
As for compile times, that stuff is hard. There are some caching compilers, and build systems like Bazel. A good build configuration can improve compile times.
Now, can we get a comparison of these results vs. LOC?
I feel like almost any assessment of programming languages should have a table weighting the results based on how many lines of code it took to get that result.
In general, functional languages do worse due to the abstraction, VM languages do worse still, and dynamically typed languages are less efficient than statically typed ones. Erlang is all of the above.
F# fits the first 2 (functional lang on a VM) and has a pretty bad energy rating despite being a fast language.
Oh, Robert Virding told me once the plan was to make it essentially an OS. So, it's only a joke in the "Ha ha but seriously" sense.
Lisp also fits all those criteria and is quite efficient, but it developed under different design constraints.
Also... I guess there are situations where using something (e.g. C) could lead to faster code than writing assembly myself, if the compiler is smarter than me (GCC knows a lot more about the hardware than I do).
This makes me doubt their methodology on TypeScript at least, or wonder if they're running a tool like `ts-node`, which compiles and runs at the same time, thus counting compile time in their execution time and energy.
I expect the difference in this case is due either to differences between LLVM and GCC, to differences in the standard library implementations, or to Rust requiring strict aliasing by default.
They must have used some definition that is not explicit in the paper, but you can see in this code sample that the author used various C++ standard data types (std::string, std::array), iterators, classes, and concurrency (std::thread). I'm no judge of C++ style, but perhaps it's "C++ as a C++ developer circa 1997 would have written it".