I'm not arguing that data centers aren't power-hungry, but I will say I'm a bit concerned that proof-of-work cryptocurrencies are consuming way more power than the coins will ultimately be worth. I know Bitcoin has made millionaires, but I still can't spend it at my grocery store.
48V is interesting. It's a fairly safe voltage for humans to encounter. POTS in the US has used it for a long time, and PoE runs pretty close to it. It's easy to buy batteries for backup. And, as noted in TFA, delivering the same power at 48V takes a quarter of the current of 12V, so it doesn't require components as beefy. I wonder if this sort of effort on Google's part could lead to more consumer-level electrical gear with strange voltages? ISTM a modern home with modern lighting should drive all of that with PoE or something else much like it...
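Rough arithmetic behind the "beefy components" point, as a sketch (the 360 W load is just a made-up example number):

    # Same load delivered at 12 V vs 48 V; the 360 W figure is arbitrary.
    for volts in (12, 48):
        amps = 360 / volts      # I = P / V
        rel_loss = amps ** 2    # relative I^2 * R loss for the same conductor resistance
        print(f"{volts} V -> {amps:.1f} A, relative resistive loss {rel_loss:.0f}")
    # 12 V needs 30 A; 48 V needs 7.5 A, i.e. 1/16th the resistive loss in the same wire.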
I'm not sure the economics of running your own rack servers make much sense for most companies and teams. But for groups like Cloudflare, Netflix, or anyone building edge computing or distributed cache systems, this makes total sense.
If I had to guess, I'd say Oxide will look like some kind of high-voltage, rack-level installation of ARM CPUs for groups or teams that actually still need on-prem or CDN-esque capabilities.
I was excited to read about how to return power to the people from centralized services and data centers that arose to power the modern Web.
Indeed, this article started out talking about centralized data centers, but then veered off to talk about power consumption instead of Power to the People. I guess, as a play on words, “power” can refer to either.
But if you want to find direct solutions to centralized data centers usurping the power of people and their local communities, I wrote an article two years ago with the same name:
I wonder what software stack choice brings to the table here. Are the differences between the power draws of services written in (equivalent scope & quality) Python/JVM/Rust/Node of any measurable significance?
Nice, thanks. I've just glanced through and will have a proper read later. It would, I guess, be pretty complex to extend these kinds of isolated tests to whole real-world installations.
In any language I can write efficient or inefficient code.
Practically, you could build some kind of synthetic benchmark for each language (rough sketch at the end of this comment), but then you would have to look at the whole system holistically.
AMD EPYC vs. IBM Power vs. RISC-V vs. ARM is going to be one set of things to test.
For instance, a server that serves google.com, a server run by Microsoft’s Project Oxford doing image classification as a service, and a server that runs Reddit are going to have so many variables that benchmarking them fairly is probably impossible, or would at least require a dedicated engineer (or several).
So basically, so many systems are so bespoke that it would be really hard.
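To make the first point concrete, here's roughly the kind of synthetic harness I have in mind — the command names are placeholders for the same workload implemented in each language, and wall-clock time is only a crude proxy for energy:

    # Hypothetical harness: time the *same* workload implemented in several
    # languages. The commands below are made up for illustration.
    import subprocess
    import time

    CANDIDATES = {
        "python": ["python3", "bench.py"],
        "node":   ["node", "bench.js"],
        "rust":   ["./bench_rust"],
    }

    def best_time(cmd, runs=5):
        """Best wall-clock time in seconds over several runs."""
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
            best = min(best, time.perf_counter() - start)
        return best

    if __name__ == "__main__":
        for name, cmd in CANDIDATES.items():
            print(f"{name}: {best_time(cmd):.3f}s")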
I mean, you do still have to run the software! Take basically any online service and write implementations in Python and Rust -- I guarantee you there'll be a huge difference in the amount of power used.
Or just run the same thing written in Python and Rust on your laptop, and see which one drains the battery faster.
Generally, energy use is proportional to the number of instructions executed, so faster software is also more energy-efficient. So yeah, empirically, C/C++/Rust is going to be 10x more efficient than interpreted languages.
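If you want actual joules rather than battery anecdotes, on a Linux laptop with an Intel CPU you can read the RAPL counter around a workload. A minimal sketch, assuming /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable on your machine (recent kernels often restrict it to root):

    # Minimal RAPL-based energy measurement sketch (Linux + Intel).
    # The counter is in microjoules and wraps eventually, which this ignores.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    def measure(workload):
        """Return (seconds, joules) drawn by the whole CPU package while workload() runs."""
        e0, t0 = read_uj(), time.perf_counter()
        workload()
        e1, t1 = read_uj(), time.perf_counter()
        return t1 - t0, (e1 - e0) / 1e6

    if __name__ == "__main__":
        # Toy CPU-bound stand-in; swap in the service code you actually care about.
        secs, joules = measure(lambda: sum(i * i for i in range(10_000_000)))
        print(f"{secs:.2f}s, ~{joules:.1f} J (package-level, includes whatever else is running)")

Run the Python and Rust versions of the same workload through measure() and the energy gap should roughly track the runtime gap.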
That is an excellent post, but shouldn't we rethink our usage of data instead?
We know we shouldn't rely that much on private corporations to run public-service infrastructure and to own people's data.
Today there are plenty of tools emerging to host your own cloud, and that is a path we should explore as well; there is so much to be done in this space to create simple-to-use products for everyone.