Save the planet, Program in C, avoid Python, Perl (cnx-software.com)
91 points by gdrift on Nov 18, 2021 | 127 comments



I realize this argument is probably tongue-in-cheek, but if we were to take it at face value, it would be interesting to factor in the energy costs required to feed/house/etc. the programmers for the additional time it would take to implement a given system in C vs Python, and how often the code would need to run to make it balance out.


I realize this comment is probably tongue-in-cheek, but if we take it at face value, I think it’s failing to consider that the software’s energy usage is unbounded. An app, for example, may end up installed on millions of devices, or running 24/7 on a server for years.


I realize this comment is probably tongue-in-cheek, but if we take it at face value, I think it’s failing to consider that the logical conclusion of this argument is to write programs directly in binary.


I realize this comment is probably tongue-in-cheek, but if we take it at face value and hold a Unix shell up to the ear, would we hear the C?


I realize this comment is probably tongue-in-cheek, but if we take it at face value, I think it’s failing to consider that the logical conclusion of this argument is to not write programs in the first place.


Nopetynope, subroutine threaded FORTH is the only hope.


Every application should be solar CMOS ASIC only, deployed on land that's reclaimed from farming, or else you are just a boomer who destroyed my life and our planet.


and this means any mistake in the software is also replicated to those devices, buffer overflows for example, and then the efficiency is lost faster than it was gained.


Trying, and failing, to resist the urge to bring up Zig


You succeeded, right up till the last 3 characters.


do you mean https://ziglang.org? As a non-C developer I was not aware of that. Thanks


I realize this discussion is probably tongue-in-cheek, but if we take it at face value, I think it's failing to recognize that energy supply is not unbounded. An app, no matter how widely installed, cannot consume more energy than is produced.


You are mistaken. It considers that in the part about "how many times the software would need to run to break even".


I write backend servers in C++. From my experience using modern C++ and libraries, the amount of code (line-wise) is about the same as in any other high-level language like Python. The amount of time to write is about the same as well. I know because I've rewritten a large Python server application from scratch in C++. Performance, however, is hundreds of times better for C++.


This. Honestly any large program in Python suffers from massive maintenance issues because Python is dynamically typed.


Use Nim FTW on both fronts!


yep, for instance a good benchmark would be: how long would it take to build a decent and snappy GUI operating system from scratch with a custom html/css/js engine in $LANGUAGE?

To give a data point, in 2021-era C++ it's approximately 3 years, with a very large part of it being done by a single person (https://github.com/SerenityOS/serenity). How long would that take with Python?


> with a custom html/css/js engine

That almost sounds like confusing goals with ways to me. Unless those very specific ways are your goal for some reason, if you were to "build a decent and snappy GUI operating system from scratch", you probably wouldn't end up with those things in it. You'd end up with something more similar to Oberon, Smalltalk, or VPRI's system (was it Frank or something like that?).


They are talking about implementing both an OS with a snappy GUI, and a (new) web browser that runs in said OS.

(These days no OS is going to succeed with end users if it doesn’t have a browser).


Creating a browser engine is a very good test of how fast it is to develop in $LANG, I'd say. I'm not saying to render the OS's UI in it, just to write a program that is able to load and execute a 2021 website once you've written your OS.


> write a program that is able to load and execute a 2021 website

But I thought it was supposed to be "decent and snappy"? This requirement seems to ruin it quite a bit since 80% or so of your snappiness goes away.


it seems that we are entirely talking past each other.

I am talking about implementing a web browser: writing a program that can load www.example.com and display it in at most a few milliseconds. Of course loading a bloated website will be slow, but that's because websites use a Turing-complete language. If my web page has <script>while(true);</script> of course things won't go fast, but that doesn't matter; what matters is that we write an engine that is able to execute that "while(true)" and burn those CPU cycles as fast as possible (because then a "normal" website won't be much slower than the theoretical best it can do).

It's a (semi)well-defined problem: how fast can you do network requests, how fast can you parse the DOM, can you fetch content such as images concurrently, how fast can you execute JS? All of it is benchmarkable. It does not matter that it's going to be slow when loading www.facebook.com; what matters is that it isn't slower than it should be.

That's like writing a generic sort function: you care about the performance of the sort algorithm; of course if people use a sorting predicate which does filesystem access, it won't be fast, but that is also utterly irrelevant to measuring how fast your sort is.
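To make the sort analogy concrete, here's a minimal C sketch (a hypothetical example, not from the article): qsort controls how the comparisons are ordered, but it has no say in what each comparison costs.

  #include <stdio.h>
  #include <stdlib.h>

  /* Cheap comparator: the total cost is dominated by the sort itself. */
  static int cmp_int(const void *a, const void *b) {
      int x = *(const int *)a, y = *(const int *)b;
      return (x > y) - (x < y);
  }

  int main(void) {
      int v[] = {3, 1, 4, 1, 5, 9, 2, 6};
      size_t n = sizeof v / sizeof v[0];
      qsort(v, n, sizeof v[0], cmp_int);
      for (size_t i = 0; i < n; i++)
          printf("%d ", v[i]);
      putchar('\n');
      /* If cmp_int hit the filesystem on every call, the measurement would
         reflect the disk, not qsort; that is the point being made above. */
      return 0;
  }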


Not really. Why would you do everything from scratch?


Because we'd have reference points to compare against at least


It also means that doing personal projects in a language with long compile times would be worse for the environment, and interpreted ones would be better.


I am pretty sure in most cases you build less often than you execute.


Not necessarily true in development, at least when it comes to compute cycles. Oftentimes you run it once, find a bug, fix the bug, run it again. The CPU time spent in the compiler dwarfs the time spent actually executing the program.


Yes, but development is the anomalous circumstance. Once you deploy, it's all execution. That should dwarf development time.


> it would be interesting to factor in energy costs required to feed/house/etc.

Why would that be interesting? They'd exist and consume the same amount of energy regardless of what they worked on...


Not if you needed twice as many engineers, or it took twice as long due to complexity.

But that said, the overhead of the engineers' energy consumption probably amortizes to nothing over the life of a successful project.


> Not if you needed twice as many engineers, or it took twice as long due to complexity.

I really don't understand. They'd just work somewhere else and consume energy, housing and food. There'd be literally no difference.


Let's say the software solves some problem, call it P. We want to know the total energy cost for humanity to solve P. The people who are designing the software are using energy while they work on solving P. We need to include that energy usage in our total.

If two different implementations use the same amount of compute, but one took 10 times longer to design and ship, then it is a less efficient solution, no?

Those people (and their energy usage) could have been working towards solving some other problem or doing something else worthwhile.


Not OP but I think he’s assuming that programmers are on demand like temps, as in once the system is “done,” they are fired. Obviously it doesn’t actually work this way but that might be the logic.


Not just fired, but executed.


That got dark fast


No, they just understood my complaint. This was originally about reducing global energy consumption by switching programming languages. You cannot reduce global energy consumption by firing (or not hiring) programmers, because they still live on the planet and do the exact same thing. You'd literally have to execute the people who would be doing the debugging for the C code to cause such a reduction. This is the reductio ad absurdum that shows that it wasn't a valid point.


> This is the reductio ad absurdum that shows that it wasn't a valid point.

I concede, this is a convincing point.

To flesh out the argument though, it was coming from the perspective of the system consisting of the firm+employees delivering a project rather than the system of the entire earth. I don't think the smaller scope overcomes the overall point, but you could perhaps see where there might be some merit to the distinction. It is just that that distinction doesn't really matter.

Theoretically, if you have fewer employees required to complete a project, then that project might use less energy. And if this holds true across the industry, then those non-employees _could_ be doing something more green. But equally they could be doing something worse. And in practice for software, they would just be working on a different project or shipping more features or whatever, so it doesn't change the steady state, even if a distinct project is delivered with lower energy cost. Even if all projects are more human-energy-efficient.


Think of all the energy needed to produce all the coffee they consume!


It’s not energy/time that’s being compared; that’ll be the same. It’s total energy (energy/time * time). For example, a C++ GUI could take longer to get up and running than, say, Python. And assembly would take even longer.


I honestly hate headlines like this. Jokey, tongue-in-cheek takes that effectively treat environmentalism like every little bit counts. It doesn't.

We are in the midst of a serious crisis, we know the reasonable solutions we should be taking to avoid catastrophe, and we are collectively choosing to ignore them. It should be treated seriously. If we're going to joke about it, it should be dark comedy.


This would make sense if all computers ever did was mathematical micro-benchmarks.

Can you create a web framework in C that has as many features as Rails or Django? And what's the power consumption of that?

Higher level languages do more stuff. It's not like Guido and Matz just woke up one day and were like "fuck your CPU".

Plus, compilation requires a decent amount of power, as shown in those graphs. Does every C/C++ project run perfectly with no bugs the first time?


Big companies like Facebook and Google do write their web tech in C/C++ because performance and power consumption do matter. If something written in Rails is ten times slower than in C, that means ten times more servers using ten times more power. As I'm sure you can imagine, cutting a zero off these companies' server bills is a big deal. You can buy a lot of hours of manual memory management for that kind of money.


I don't know about Google, but that certainly didn't seem to be the case at Facebook. PHP/Hack was the language of choice for front-end stuff, C++ for most of the code in infra services, and Python holding everything together.


Yes, both are replacing some PHP/Python bits with C++ AFTER scaling to billions of users...

Still curious what the power savings are... I'm sure they've done enough profiling to know their performance is increasing. But are they saving energy? How about the compilation cycles to get to where they are? The carbon footprint of the additional employees? Etc...?


> How about the compilation cycles

When the code will be deployed on 10K-1M machines, compilation costs are noise. In fact, FB does all sorts of expensive optimizations to ensure that the deployed executables are as efficient as possible, and the effort is worth it many times over. I'm sure the same is true at Google and elsewhere.


Facebook runs PHP via their own JIT, HHVM (https://en.wikipedia.org/wiki/HHVM). Researching a bit now it appears they've actually migrated to their own subset of PHP called Hack, which is measured in the benchmark, and looks more efficient than even some compiled languages in that benchmark.


Matz did actually say pretty much that.

https://evrone.com/yukihiro-matsumoto-interview


This comes from an academic who doesn’t support any code. Why would I spend the time as a developer to do manual memory management under the guise of efficiency, when instead I can support twice (arbitrary) as many apps by making a computer do it for me? This is a human, whose carbon footprint is terrible compared to a little memory overhead. SMH. Silly articles.


Also, a badly written C program can be more inefficient than a Python one.


Doesn’t matter. The fewer lines of Python are more maintainable in human hours. I am not debating the efficiency of the compiler or program. I am, however, stating that there is a massive oversight on the maintenance of the code, which would lead to a net loss in human overhead.

Some napkin carbon footprint math—a computer on for eight hours daily does ~200 kg of CO2 in a given year. An average human on the planet is doing ~7,000 kg. Which means the average human’s 8-hour workday is ~6.4 kg CO2.
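Spelling that napkin math out (same rough figures as above, nothing measured here):

  #include <stdio.h>

  int main(void) {
      /* Rough figures quoted above, not measurements. */
      double computer_kg_per_year = 200.0;   /* machine on ~8 h/day        */
      double human_kg_per_year    = 7000.0;  /* average human, all sources */

      double computer_per_workday = computer_kg_per_year / 365.0;             /* ~0.55 kg */
      double human_per_workday    = human_kg_per_year / 365.0 * (8.0 / 24.0); /* ~6.4 kg  */

      printf("computer: %.2f kg CO2 per 8-hour day\n", computer_per_workday);
      printf("human:    %.2f kg CO2 per 8-hour day\n", human_per_workday);
      return 0;
  }

Which works out to roughly a 12:1 ratio between the human's and the machine's workday footprint.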

Yeah, let’s pull out the abacus and the slide rule while we’re at it, in the interest of carbon footprints. /s


Slow languages create greenhouse gases; C does not produce more humans (yet).


I understand that premise—read my prior comment again, or do you, like the author, also not understand how code is supported?


I understand perfectly. You are not the first one who thinks it's easier to ignore the climate crisis than to put in the money and effort needed to deal with it.


The human effort saved by using a less compute-efficient but easier-to-work-with programming language could be redirected to attacking climate change from another angle.

We'd have to look at the details to see which one is more effective overall. Although my gut says that "not programming in C" is generally the correct approach for your average piece of software.


If you want to spend the screen time to prematurely optimize in the name of future savings, knock yourself out. I’ve been doing this long enough to understand where efficiency gains exist—it’s not in human-conceived memory micromanagement.


Please no, a lot more energy and carbon will be wasted undoing the damage done by full stack engineers now suddenly finding out about memory management problems.


Not sure if this is satire or some light techie ecofascism


The “donate via crypto” links at the bottom make it even harder to tell


I think there is another alternative: slightly goofy but also interesting research.


I've written about this topic before and I think this is not right. Scroll down to "Environmental Impact".

https://towardsdatascience.com/high-performance-code-pays-di...

"The climate crisis threatens human civilization on a grand scale and electricity is frequently produced from fossil fuel. Electricity is then consumed by the operation of computer code. Google alone used 12.8 Terrawatt-hours in 2019 according to its 2020 environmental report, more than many countries and more than several US states. It seems intuitive that improving code efficiency should reduce environmental impact. Does it?

The answer is not at all clear and must be considered on a system by system basis. The main reason is induced demand.

...it will take a centrally coordinated process at both national and international levels that can allocate energy to different industrial and consumer sectors and manage the energy production mix on behalf of the whole of society. Only then we can solve this problem in a rational way that doesn’t depend on luck.

In fairness, software companies aren’t power companies, it’s a little ridiculous to expect them to build their own generation. What we can more expect from them is to pay taxes, divest from fossil fuel companies if they hold securities, and hold their total energy consumption under a ceiling set by a regulator while the society changes the energy mix as fast as possible."


Besides the already mentioned "donate via crypto" red flag, this argument is NULL and void.

Different languages were made for a reason, and they solve different problems. Even Python can be efficient when your largest bottleneck is network or disk IO.

Nowadays we have Rust, which is slowly catching up to C speed, and sometimes even surpasses it.

C is a beautiful language when used in embedded space, where memory allocations are minimal. Once you start throwing mallocs left and right, you lose that simple elegance.


I have been saving the planet by writing all my code in Pascal

From that paper, only C and Pascal are optimal for energy and memory usage.

And Pascal has memory safe strings with reference counting. C is just too unsafe. 70% of all security issues are caused by C.


Yet another reason to use Pascal. More people need to do their part to save the planet by writing efficient code, that is also safer, more readable, and more maintainable.


Or just use an ad blocker. It's a 0% vs 100% CPU use on some websites.


Oh yes, let's reinvent Erlang/OTP in C just to save CPU cycles (I bet that won't be the case). Let's waste millions of hours of programmers' lives, just to make a point.

C or any "so-called" efficient languages are wonderful for their use cases. Python or Erlang have other use cases. We have all of those languages for a reason.


I almost want to stoke the fire and make a joke about rewriting things in Rust, as the energy expense seems comparable.

Anyway you can take my Erlang from my cold, dead hands.


The sweet spot seems to be Go, Haskell, Java, Lisp, OCaml and maybe JS.


I always wondered if setting up those Bitcoin rigs in that abandoned warehouse was just a giant waste of energy.

Now I have proof it wasn't.

Thanks, C++.


To be more accurate, it needs one variable: butt(s) per feature.

That includes defects.

It's one of many reasons I like new compiled/VM languages like Go, Swift, Nim, F#. You get the productivity of Python, but near the performance of C. User wins, dev wins, and planet wins.


The compilers, runtimes, and by extension requisite operating systems are fatter though.


If you're worried about the environment, avoid researching how microprocessors are made with respect to chemicals used for lithography and how much waste water is generated when making a modern microprocessor.


Many scripting languages run C in their modules, or C++ for that matter.

From the best analysis, it does seem modern C++ could be the optimal performance/energy/speed/safety choice.

And yes, I'm a proud C/C++ programmer.


Came here to say this. The planet is programmed in C - it's just the unimportant parts that aren't.


I think they should offset all of these things against the probability that the IDE you have to use has its own ozone hole.

That stab is mostly aimed in the direction of C# and Java


> As a former software engineer who’s mostly worked with C programming, and to a lesser extent assembler, I know in my heart that those are the two most efficient programming languages since they are so close to the hardware.

one could argue that programming languages that are closer to the mind are more efficient. At least when looking at 2nd- to n-th-order efficiency effects, not just pure runtime performance.


Yeah, maybe taken in isolation they are more efficient, but if you're recompiling every minute in development, that's probably gonna add up.


Then don't recompile every minute? Or use a language that supports incremental compilation down to even the function level.


Does this factor in the wetware upkeep during the development cycle? Development in C generally is slower than in most of the other languages.


I once rewrote a non-trivial program in Ruby that was originally written in C. The performance and scale of the Ruby version were considerably better.

Could it have been even better in C? Probably, but it was a lot easier to conceptualize and properly choose the architectural features that made the difference using Ruby.

These days I would have done the rewrite in golang, and had the best of both worlds.


Oh, are we doing this now? I mean, if we are, let's get rid of JS/TS and move to Rust or something, anything really.


I think an argument could be made for writing code in a systems programming language like C or Rust to save energy, but I'd be much more interested to know what the same choice would be for front-end development.

Could we get something as flexible and expressive and ergonomic as React or Vue with the performance of WASM?


This is a bit deceptive, because we could always go back to writing in assembly language. The point of scripting languages is usually to make creating programs easier, requiring fewer lines and less time to create. There are different aspects of efficiency that are being overlooked.


1) HN shouldn't be dependent on JS

2) There is a gopher proxy for HN: gopher://gopherddit.com

3) Use CLI tools, avoid JS-ridden apps. If so, download the video and use mpv offline. It wastes many times less power.


Perl and Python are scripting languages, their energy savings come from the fact that the programmer is typically the only person who will be running it, so it makes more sense to write something that minimizes screen-on time than it does to micro-optimize an app that hundreds of thousands of people will interact with. It's entirely a matter of scale: if you're working server-side, I'd wager you're saving quite a bit of energy by not using an IDE that constantly relies on static analysis to make sure you're not shooting yourself in the foot.


You are right about Perl and Python being scripting languages, but unfortunately people use them for full applications that use 10 to 100 times the memory and CPU of what is necessary. Now C and C++ are the other side of this coin, where they could be 10 to 100 times more efficient than scripting languages, but it is often difficult to get that right without a lot of work. But there are a number of languages in between where doing the correct thing is easy and CPU and memory efficiency is close to or on par with the best C and C++. So we can have nice things after all!


> You are right about [Perl] and Python being scripting languages but unfortunately people use them for full applications that use 10 to 100 times the memory and CPU of what is necessary.

And honestly, that would be an improvement. The trend seems to be to write more and more applications in JavaScript with Electron [1], which I unrigorously assume uses 10 to 100 times the memory and CPU of an equivalent Python app (given Electron apps seem to most frequently make my fan spin or have some kind of memory leak requiring a force-kill).

[1] Now ranking higher on Google than the subatomic particle: https://www.google.com/search?q=electron (https://archive.ph/J2HYI)


Javascript is faster than Python by a not-insubstantial amount, due to JIT and the sheer number of man hours poured into optimizing the browser Javascript engines.

I have no idea about memory consumption. I assume Python does better than Javascript there due to reference counting.


Well, JS on V8 (Node.js) is usually more CPU-efficient than a Python app due to its good JIT compiler. Perhaps the memory usage is not as good, due to requiring a GC.

I suspect that Electron apps are so easy to create and run somewhat acceptably, that the trend is for the developer to add more bloat than would otherwise be there. So a very efficient runtime leads to developer laziness.


But you have to take these results with a grain of salt. Both the Perl and Python tests are implementing binary trees in vanilla Perl and Python. Both Perl and Python have performant libraries written in C which is what you would use in a situation like this.


> Both the Perl and Python tests are implementing binary trees in vanilla Perl and Python.

Well, that's how you judge a language. You write code in it, as opposed to writing it in some different language not under consideration.

But a much worse problem seems to be the fact that binary-trees is not much of a representative benchmark. It's mostly a benchmark of your memory management. Unless your programs spend 80% of their time managing memory, they'll probably not match the results of binary-trees.
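For context, here is roughly what a binary-trees-style workload looks like in C (a stripped-down sketch, not the official benchmark program): almost everything it does is allocate and free nodes.

  #include <stdlib.h>

  typedef struct node { struct node *l, *r; } node;

  /* Build a complete binary tree of the given depth. */
  static node *build(int depth) {
      node *n = malloc(sizeof *n);
      n->l = depth > 0 ? build(depth - 1) : NULL;
      n->r = depth > 0 ? build(depth - 1) : NULL;
      return n;
  }

  static void destroy(node *n) {
      if (!n) return;
      destroy(n->l);
      destroy(n->r);
      free(n);
  }

  int main(void) {
      /* ~130k nodes per tree; runtime is essentially malloc/free traffic. */
      for (int i = 0; i < 100; i++)
          destroy(build(16));
      return 0;
  }

Most real programs don't spend their time like this, which is why the benchmark is unrepresentative.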


that is _not_ how you judge a _scripting_ language

it is trivial* to add C code to Perl, even when using XS, I guess it is as easy to do the same for Python

if you do heavy computations in pure Perl5 or pure Python you're doing it wrong by definition

*assuming you can write C


> Perl and Python are scripting languages,

With the exception of shell languages (and it's overstated even there), the description “scripting language” is usually wrong and at best deceptively reductive, even for languages where that was a motivating or at least early major use case.

> their energy savings come from the fact that the programmer is typically the only person who will be running it,

Even when languages are used for scripting, that's not true.


Python is not only a scripting language. It powers many many websites including Youtube. It's a full power limitless language. Your statement is simply wrong.


> Python is not only a scripting language.

The Python (programming language) is. Ok, if you want to get pedantic you can talk about the language vs interpreter vs ecosystem? I'm not sure what you are getting at.

> It powers many many websites including Youtube. It's a full power limitless language.

Lots of things "power" Youtube. I'm not sure how that's important to point out.


> It's a full power limitless language

Unless you want to do something crazy like run 2 threads at the same time


Python is not limitless. It has very much reached its limit and has major problems with performance, concurrency/parallelism, static analysis, and formal semantics. This is why many refer to it as a "scripting language". This doesn't necessarily stop people from building billion-dollar businesses with it though; look at PHP.


> It powers many many websites including Youtube.

And this was such a scalability problem that YouTube decided to go through an unimaginable amount of pain and suffering to rewrite it in C++.


I partially agree with your statement. Writing anything in C++ would cause an _unimaginable_ amount of pain and suffering. A better way is to isolate the slow, CPU-heavy stuff and write it in C without suffering.


That's certainly an opinion (that C fundamentally makes things easier to write than C++), but not one held by Google in general.

A major problem with scalability at Google is that the CPU-heavy stuff has often already been isolated, and what is actually needed is a sort of ambient performance rather than just a focus on hot loops.


That is true of Perl, but Python powers web apps viewed by billions of users, including Reddit and Instagram.


And we also burn coal and other lovely stuff for energy.


Python is just as shitty as Perl, it just has different syntax.


This is a very miscalculated take, please avoid.


How is Go so high? It's far higher than even Java or Erlang. Is it because Go uses a lot of CPU to make its compile times so fast?


Binary-trees is just an exceptional case for Go (and an unnatural/unusual program to boot). If you look into the paper (https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sle..., page 261, Figure 2, bottom left chart), you'll see a different benchmark yielding much better results relative to other languages. You'll find overall results on page 263 in Table 4 which aggregates 27 different benchmarks. There you see that time-wise, Go is roughly comparable to Pascal or C#.


I see, thank you.


Runtime footprint and CPU is pretty tiny for Go. It’s shockingly low cost to run it. I am frequently surprised.


That's what I mean. Go has small memory and CPU usage, which is one reason it's good for cloud stuff, but the results indicate the opposite. It's even higher than Haskell, which is strange.


It's a bit like saying the world would be better off if we went back to pre-industrial-revolution practices.


Or if you look at modern tools on the list, you also see golang and rust.


Perl is the worst programming language in terms of energy usage.


It seems unusual to have a post that makes an environmentalist statement about computer energy consumption, but then ends it by requesting donations via cryptocurrencies, which are mined through their enormous consumption of electricity.


It's only unusual if you read things online without a healthy sense of humor.


Ethereum is moving to proof of stake, which will hopefully solve that problem.


Ethereum has been moving to proof of stake since forever...

Ethereum was released in 2015. The first PoS cryptocurrency (Peercoin) was released in 2012.


Yes, but not yet and we're not 100% sure it will actually happen or will retain a lot of users. So if you care about computing efficiency, taking crypto (even ETH) now is a weird choice.


1. It’s already live on the PoS beacon chain, and has been since December 2020.

2. The specs for the transition are ready and it’ll happen next year. Target is before June. First long-lived Merge testnet goes live next month.


Yes, there's a live test. So as mentioned before: we don't know if it will happen for real transactions, if a meaningful number of people will accept it, and when it will happen. (It's likely, but still months away.)


It's not a test chain, it's _the_ PoS chain that will be transitioned to. It's running on real Ether, over $30b worth of it. You're incredibly out of touch.


Sure, that chain is the target for the merge and it's running with real Ether. But it's still a test. Right now if PoS nodes say a transaction exists but PoW say it doesn't... then for all practical purposes it doesn't exist.


Someone recently told me that proof of stake is less energy intensive than proof of work, but it’s still pretty energy intensive.


Even if you assume 100 watts of power per validator, it's a >99% reduction in energy consumption. In reality, the average power consumption per validator is probably closer to 1–20 watts, due to the fact that additional validators barely stress an existing system at all.


Reading up on it some more, it does sound significantly less resource intensive, but it also sounds like migration from PoW to PoS is rather difficult and has some technical disadvantages. Is Bitcoin ever likely to migrate? If not, then it seems kind of moot, but perhaps new PoS players will come along and take over the market?


Just think of what it'd be like if Bitcoin was developed in the AI field and mining was only possible to implement in Python.


Would it be any different? The amount of mining done is related directly to the value of mining and inversely to the cost of hardware; the "efficiency" of mining a block doesn't matter directly. If you made it 50 times harder to mine a block (i.e. you reduced the hash rate by 50x), then you would most likely make each Bitcoin 50x more valuable because of the resulting scarcity. So if you could only implement mining in Python, exactly the same number of CPU/GPU cycles would be spent on mining as before.
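One toy way to see that argument (all numbers made up for illustration, nothing here is a real Bitcoin figure): if miners keep adding hardware until their electricity bill roughly equals the block reward, the energy burned per block depends on the reward's value and the price of electricity, not on how many hashes each joule buys.

  #include <stdio.h>

  int main(void) {
      /* Illustrative values only; not real Bitcoin economics. */
      double reward_usd  = 100000.0;  /* market value of mining one block */
      double usd_per_kwh = 0.05;      /* miners' electricity price        */

      /* At equilibrium miners spend roughly the reward on electricity,
         whether a hash costs a nanojoule (C ASIC) or a joule (the
         hypothetical Python miner); inefficiency just lowers the
         difficulty until the totals match. */
      double kwh_per_block = reward_usd / usd_per_kwh;

      printf("energy spent per block at equilibrium: %.0f kWh\n", kwh_per_block);
      return 0;
  }

Under that (simplified) assumption, hash efficiency cancels out entirely.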


For the moment, using higher-level languages seems a natural go-to. It's comfortable, and energy and hardware are abundant.

But this won't last. Many energy sources and metals are peaking and I can imagine a day when efficiency will count heavily.


This means we'll see more effort in optimizing compilers, but probably not a loss of the higher level languages. They're just too useful when it comes to working on large-scale systems. Do you want to rewrite Linux in hand-optimized assembler? And then rewrite it for every single platform that you may want to run it on?



