This general concept is very similar to Apple’s Bonjour-based “Sleep Proxy” service. Basically, if you have an Apple TV or something similar, it will automatically wake up a sleeping device when someone requests its services, SSH or media sharing for example. While a device is sleeping, the proxy will keep advertising its services via Bonjour. Low-power always-on Apple devices are sleep proxies by default, but you can turn any Mac into a sleep proxy by passing a flag to mDNSResponder.
This is a nice idea, I was just wondering how to do that.
Another approach (if it is supported by the adaptor) is to configure WoL so that it does not need a magic packet. Some adaptors can be configured to wake on any "directed packet".
I have a Linux server sitting downstairs that I might play around with. It would be nice if the sleep support featured a timer and integration with the system cron daemon. Then it could wake on a directed packet or when there is a scheduled task.
Interesting! I believe all the new Mac products also work like this. While sleeping a Mac will still stay on WiFi if it’s on a power adapter and the sleeping M1/M2 Macs stay on WiFi even while on battery. They also still respond to some mDNS traffic. When I try to access certain services on a sleeping Mac, like SSH, it’ll immediately wake up. Even port scanning with `nmap -p 22 <ip>` will cause a wake up. That sounds like the wake on “directed packet” thing? In practice I think this mostly obviates the need for a Bonjour Sleep Proxy.
Also, fun tip: You can easily check if a Mac on the network is sleeping or not based on its ping response TTL. While awake it uses a TTL of 64 but while sleeping it uses a TTL of 32. You can use this little signal to roughly keep a log of when the Macs on your network are awake vs sleeping. (Not too much of an actual privacy concern I think)
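If you want to log that, here's a quick sketch (the IP address is a placeholder; it relies on ping printing a ttl= field, as it does on macOS and Linux):

    #!/bin/sh
    # Record a Mac's awake/asleep state once a minute from its ping TTL
    # (64 = awake, 32 = asleep).
    MAC_IP=192.168.1.20   # hypothetical address of the Mac
    while true; do
      ttl=$(ping -c 1 "$MAC_IP" 2>/dev/null | grep -o 'ttl=[0-9]*' | cut -d= -f2)
      case "$ttl" in
        64) state=awake ;;
        32) state=asleep ;;
        *)  state=unreachable ;;
      esac
      echo "$(date) $state" >> mac-sleep.log
      sleep 60
    done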
I kind of made a general-purpose sleep proxy myself a few years ago with Apache and some CGI scripts getting triggered when I visit my /jellyfin or /guacamole
It just listened on those fixed routes, woke the server and proxied the page when finished (sometimes took 2 loads)
But it's basically the same as in the article, except I didn't need to manually WoL anything and my server just went to sleep by itself
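A minimal sketch of such a CGI handler, assuming Apache CGI plus the wakeonlan and nc tools (the MAC, IP and port here are made up):

    #!/bin/sh
    # cgi-bin/jellyfin: wake the backend, wait for its port, then redirect.
    wakeonlan aa:bb:cc:dd:ee:ff >/dev/null 2>&1
    tries=0
    while ! nc -z 192.168.1.50 8096 2>/dev/null && [ $tries -lt 30 ]; do
      sleep 1
      tries=$((tries + 1))
    done
    printf 'Status: 302 Found\r\nLocation: http://192.168.1.50:8096/\r\n\r\n'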
>Why isn't this the default (or at least a configuration option)?
Because 1) you still need a second machine to wait for incoming requests, 2) in a server there's no general way to check if the server is serving requests or not. Plex happened to provide an API that specifically does what you needed, but you had to resort to a bit of a hack for Time Machine. This is unlike a desktop context, where you can check if the user has moved the mouse or pressed a key in the last N minutes. And 3) because the delay of waiting for the server to wake back up is undesirable.
>Seriously, there is soooo much potential to save energy in IT.
I don't think so. For VPS instances it doesn't matter, and for internal servers it's almost certainly more efficient to virtualize a bunch of services onto a single box, if there's multiple servers idle most of the time.
Your VPS will have a shared CPU with hundreds of other customers... so it will probably be using far less power (for you) than your home server.
In Europe at least, most industrial users were paying more for electricity than residential users over the winter (gas shortage). Governments subsidized residential users more than most businesses.
And doing this is not going to largely affect the energy usage of the EU - at least, in comparison to, say, Dell putting Platinum rated PSUs in their workstations and AMD making good server chips that are twice as efficient at the same performance point as Intel's chips.
OK, the implementation wasn’t fantastic cost-wise. However, I still think this could be a big area of improvement. How can we reduce the energy usage of servers?
There’s a lot of work with ARM lately, I’m curious if that could be used in server workloads like this.
The most effective way is to be able to dynamically distribute load across a large array of servers, turning some off at times of lower load. Anyone who runs a large datacenter probably already does this or something equivalent, either to save money or to service more customers with the same amount of hardware. It's more effective to optimize the hardware architecture, but it's worth noting that improvements in efficiency usually don't translate to reductions in power consumption. Instead, it causes induced demand and usage increases to fill available capacity.
> Anyone who runs a large datacenter probably already does this or something equivalent, either to save money or to service more customers with the same amount of hardware.
Possibly, but few cloud providers actually do live migration[0], since it's risky to pause a VM, typically for less than 1 second but up to 5 seconds.
Real world pause times for live migration are mostly imperceptibly small. Like, you might as well have a router glitch for the same whatever milliseconds.
To demonstrate Ceph RBD network block storage back in the day, I ran VNC sessions to two libvirt hosts and live migrated a full VM with a desktop and a bouncing ball animation in a window between the two hosts & VNC sessions. The only thing a human could perceive was that the desktop image in one VNC window goes black, and the animation continues in the other.
I was actually thinking about elastic AWS instances when I wrote that. IIRC they're meant to be spun up and then shut down as a service's demand varies.
VPS providers are already overprovisioned anyway, so they have less idle hardware to begin with.
Isn't that really expensive? Far more expensive than just keeping a cluster running 24x7? I think the much bigger cost here is shelf space and the hardware, and so the retail price isn't impacted much by the cost of power.
For the record I like the concept and hope it becomes a lot cheaper over time.
Smartphones can be connected for days on the internal battery, using well under a watt, and they can be ready to respond instantly, and they have an app container system that makes everything just work.
I really think we need to start making Android servers a thing, at least for self hosting, maybe for everything.
Judging by how WearOS seems to be able to run on 35mA or less, I think there's a good case for just making an open fork of Android with similar optimizations, and putting it on literally everything resembling a smart device. I wouldn't be surprised if some kind of Android for Devices comes out someday, assuming Fuchsia doesn't ruin the party.
Raspberry Pis (and similar devices) already work great as low-power servers.
The big issue is the home server community being overly focused on creating "battle stations" that look cool and run 40 different Docker containers rather than simple low-power solutions.
The Pi doesn't work that great: power consumption is still several watts, which is too much for batteries to be cheap and compact if you want a day or two of backup. Plus, self hosting in general is very much enthusiasts-only.
Android does what Docker does while remaining very low power, because it gives you one standard API level, rather than some minimal syscalls and you have to bring all your own libraries right in the container.
The power saving is not because of Android; it is because of the ARM arch being used under the hood. And ARM servers for power-saving purposes are very much a thing already.
> The power saving is not because of Android; it is because of the ARM arch being used under the hood.
[citation needed]
Based on my understanding, the underlying OS has way more to do with power usage than the instruction set, and AFAIK Android has tons of optimizations specifically geared towards reducing battery consumption (and therefore power usage) on mobile devices.
With that said, it's true that, in practice, the power floor seems to be lower for ARM devices than, say, x86, but I'd wager that this has far more to do with target market than with ARM's instruction set. Low power x86 boards do (did?) exist.
It's both, if you want to achieve the goal of operating for multiple days on the battery.
A modern battery powered device will have a 'sleep mode' where the current consumption is measured in microamps; and an operating mode where the current consumption is many times higher. Battery life relies on two things: Firstly, the sleep and operating modes being as efficient as possible (the CPU design) and second of all getting into the sleep mode for as much time as possible (the OS design).
If only one app can be on screen at a time, and the OS is free to stop anything that isn't on screen if it wants to - that gives the OS a lot of power to achieve long periods of sleep.
(It's actually more complicated than that, with multiple sleep levels, high- and low-power CPU cores, and on smartphones there are issues like powering the screen and radios, and things like hardware decoding of video - but for a server, who needs a screen?)
Not having a screen is another blocker for self hosting some of the time.
It means you have to enable SSH with some file in a directory, then SSH in, make sure you have a strong password, turn SSH off when you're done, maybe copy and paste or SCP a key somewhere.
With a screen you don't depend on the state of anything else to do admin. Your more savvy relatives can install things for you without them needing the special paired remote admin app.
And of course, you can do "Scan to get started" QR setup.
Plus, in a real consumer self-hosting scenario, it would make sense for the server to also be the smart display if you want one, to avoid needing more devices. Keeping the screen would also mean developers don't have to explicitly code for both screenful and screenless setups; sometimes things that shouldn't depend on the screen do unnecessarily.
On Linux stuff works differently if you run as a service vs under the DE, because you're missing some environment variable that points to dbus or whatever, and you need extra work to get around it.
> Firstly, the sleep and operating modes being as efficient as possible (the CPU design)
My point is that this part doesn't really have much to do with the instruction set, though. Theoretically, there's no reason you couldn't design an x86 CPU that is just as efficient as an ARM one (or at least, within spitting distance); it's just not done because the economics wouldn't make sense.
Or a dedicated server device that only uses the battery as a backup. Some of the stuff in a phone is unnecessary for a server. Some might be useful like LTE and GPS, but a full resolution AMOLED isn't, a cheap 3" screen is enough.
Or a timer/smart plug on the outlet that keeps the battery from being 100% charged all the time.
It'd be nice if things had logic to control their own charging -- start charging only when the battery drops below 40%, stop charging when you hit 90%. The hardware can do that already.
(And then you can expect something like normal phone battery lifetime, which is still limited.)
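That charging logic is tiny; here's a sketch for a phone server under Termux, assuming the Termux:API tools and a Shelly-style smart plug (the plug URL and thresholds are made up):

    #!/bin/sh
    # Keep the phone-server battery between ~40% and ~90% via a smart plug.
    PLUG=http://192.168.1.60   # hypothetical Shelly-style plug address
    while true; do
      pct=$(termux-battery-status | grep -o '"percentage": *[0-9]*' | grep -o '[0-9]*$')
      [ "${pct:-50}" -le 40 ] && curl -s "$PLUG/relay/0?turn=on"  >/dev/null
      [ "${pct:-50}" -ge 90 ] && curl -s "$PLUG/relay/0?turn=off" >/dev/null
      sleep 300
    done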
As I see it there are a few missing features, and they need to be solved anyway to make self hosting less painful in general. Right now I don't see many good ways to do it at all unless you like stuff that unpredictably needs maintenance.
Play store needs to be split into provider and interface APIs, so you can add your own repos and still have it work with existing tools.
Remote management needs to be a thing for sure, installing apps is the hard part of that. WearOS does it amazingly, just connected to BT and the host play store does it, but you'd need the split app store to make that work well with open source third party repos.
Automatic updates would be solved by that, just deploy your app on a repo, and update via the play store or your interface app of choice.
Finally, you'd have to solve the biggest problem with ANY self hosting as far as I'm concerned: how to handle certs and domains with zero manual IT work (it has to be truly zero, with no chance it needs maintenance while you're on vacation).
I've been saying this for a while, we need pubkey-based domains like .onion URLs, but without the onion routing, so there's no dependence on a CA or anything that might be hard to do in a fully automated "Scan the QR and connect" context.
Or, more likely, Google gives you a service.yoursubdomain.google when you buy Google One, and routes stuff ngrok style for you, with lots of spying, because P2P hasn't been their thing so far. Perhaps one of the forks could do it though.
Without eliminating domains and certs, I'll continue mostly ignoring self hosting, I don't want any chance it goes wrong while I'm too busy to fix it, since there's no dedicated support staff, just me.
It would also solve the other half of remote admin, apps could just declare a web interface for admin that you access if you have the secret URL.
Then you'd need some systemd alternative to manage processes. You'd have to be able to say "Start this on boot, restart it on failure, and never clear it out of RAM".
On the hardware side, phones are already close to perfect. Keep the screen and NFC and Bluetooth, for QR based connections and the like, keep the battery as a UPS, but add more USB ports, Thread/Matter, and ditch or downgrade the cameras.
The filesystem probably needs to support snapshots, and Android should get a Full Backup and Restore Provider API to make it easier to back up everything, just the home folder, just one app, just the settings, just the data, etc, to your chosen provider, maybe even with hooks for apps to respond to import and export, so that the state of your app is totally portable and needs only a few clicks to restore from auto backup.
Finally, we need a cpanel-like VPS management tool to migrate your stuff to cloud hosting if you want, that can take backup files, and show you what's on the remote device's virtual screen (again, so QR stuff works), or at least what's being displayed via some "Admin screen API" if you don't want to drag the full graphics API into it (but keeping graphics all the time seems like a good plan for standardization, and for kiosk-type stuff).
And then finally, you'd probably want some of the features from a normal-ish Linux userland, or as close as you could get, to make porting existing stuff easier, but even without that, I'd say the porting would be worth it.
But Android is just based on Linux, so in theory you could tweak a regular Linux server on ARM hardware to get good energy efficiency. The downside of Android is its reliance on Java, which adds a lot of overhead compared to bare-metal languages. I strongly believe (but have no sources or figures to back this up) that this is why Apple phones have been faster and more energy efficient, especially in the earlier days (think iPhone 4 and co).
Android doesn't run apps in a Java VM. At build time, the Java bytecode is translated to DEX bytecode, which is designed specifically for Android. Today, at install time, the Android runtime compiles DEX bytecode to native code. You're probably right about this being true in the early days, but not so much anymore.
I think the power usage difference between Android and plain Linux on the same hardware is mainly due to the way Android aggressively puts the CPU into sleep states and uses timers (etc) to wake it up as needed.
I'm no Java enthusiast but this screams for source.
I've always thought it is about how iOS manages threads in a way Android cannot. But all that info is from more than a decade ago, so things may have changed. Can anyone chime in to explain?
To this day iOS feels snappier, perhaps lower-latency input/output or something. I'm saying this as someone who uses and prefers Android, currently on a Samsung Galaxy S10 (flagship phone line).
Android has the NDK now; it can run native code with only a tiny bit of Java to launch it, and ART compiles the Java anyway, as I understand it.
Regular Linux could be tweaked, but I think it would be way easier to start on Android and add all the missing stuff you want for a server, than to start on Linux.
Regular Linux still uses a bit more power, and it doesn't have any kind of container+sandbox that works quite as well, on account of the very tiny userland API that requires containerized things to bring all their own libraries, making them like 800MB a lot of the time; you can't really run more than a few apps in 16GB of cheap slow flash like that.
> Regular Linux could be tweaked, but I think it would be way easier to start on Android and add all the missing stuff you want for a server, than to start on Linux.
Things still break on Linux almost every update. Android has a lot that Linux doesn't, as far as I know Linux has very little that Android doesn't other than easy support for legacy Linux apps.
Termux works great as a server environment, really. The trickiest part is starting the phone without an old battery and actually getting automatic booting after a power outage.
I accidentally discovered earlier today that the Fairphone 3 works without its (easily swappable) battery. So maybe other brands still build them like this.
Since we're talking about an Android phone for self hosting, if the phone is glued/the battery is soldered, after rooting it, the Advanced Charging Controller App* may be an option.
I wrote something remarkably similar a few years ago, for similar reasons. I was pretty baffled that nothing similar already existed. Auto-wake with the RTC timer was what I really wanted.
I have a "NAS", which is really an enormous desktop tower crammed full of hard drives. It auto-wakes once per day, pulls backups from my various servers all over the place, then returns to sleep.
I own both a passively cooled 2016 Mini PC OLOEY Intel Core i7-7500U (as a home media center) and a 2023 Minisforum UM690 (for work and it's amazing btw), and they both use about 7 watts when idling (screen off).
How OP is using 43 watts for an idling media center is just beyond me; he must be using a 1978 Intel 8086, right?
Anyhow, 4 watts is still far better than 7 watts and this is a great achievement, but those 43 watts are really sketchy imho.
I've found that most desktop PCs have idle power usage around 35-50W. That was the main motivation to look for alternative solutions like the ASRock Deskmini, or the Dell/HP/Lenovo TinyMiniMicro form factor machines. Mini PCs tend to have great idle power usage compared to big towers, which is probably down to the selection of the CPU, the components on the motherboard, and the power supply, which usually only has to output 12V/19V. ATX PSUs need to handle 3.3V/5V/12V, which probably plays a role in efficiency at idle.
You're right, using a desktop CPU for a mediacenter is not a good idea.
Laptop CPUs are, however, really well optimised for very low consumption when idling and that's what you should be using for a mediacenter.
Also, IMHO, ASRock Deskmini would not be a good idea for a media center since it's using desktop CPUs. The idle power draw will not be as good as mini pc with a laptop CPU.
If that's the CPU's own power consumption as measured by the CPU, something is broken in your system or you have redefined "idle". When these things hit PC6—all cores off, caches off, iGPU refresh off—they should fall below 1W. This is a typical idle state for a desktop computer that is nominally "on" but where nothing is happening.
> I've found that most desktop PCs have idle power usage around 35-50W.
My desktop server uses 50W at idle. ASRock Z390 motherboard, Pentium G5400, 32GB RAM, no GPU. There are 15 HDDs, but 50W is with all of them spun down. It hits 120W with all disks spun up and the CPU under load. I've come to the conclusion it's the motherboard, because the parts individually shouldn't draw that much power.
Consider yourself lucky - in Germany we're talking 20 EUR/month for a constant 45W load.
On the other hand, if you can, please consider using a more energy efficient alternative (and/or ensure you're getting the electricity from green sources) - the planet will warm up regardless of whether the electricity was cheap or expensive.
45W 24/7 would work out at about £10.50/month in the UK right now (~$12.60).
My total household usage averages around 4.5kWh/day, so adding that 45W 24/7 server would add about 24% to overall load! It would need to be an important little server to justify the increase.
I have a 5 year old home server I'd love to replace as it idles at 40W, while an old laptop that could do the job idles at 12W. What I'd like to do is merge that and my old NAS, so ideally something with a few SATA slots, an M.2 SSD, and low idle power. Any recommendations for this?
If you find something with thunderbolt, you can get an external controller in there via thunderbolt->pcie. But between that power draw, and the draw of the drives, you'll be bumping close enough to 40w that it'd be a wash.
I do this math myself every few months and ultimately, it seems for a large nas/server, 40-50W is about as low as you can idle.
I don't think it has improved much. No one has been benchmarking these metrics for desktop PCs for a decade or so, so they have gotten worse, if anything.
I have not yet found any LGA 1700 motherboard where the total idle power would be less than 25W, for example. I find that the power supply inefficiency by itself, even on standby, is usually already 2W -- larger than some of the alternatives discussed here. I have to go to Intel's soldered solutions to find <10W idle consumption (and even <2W). Forget about last decade's desktop AMDs either; they have very high idle power, with the exception of some APUs that may go down to 10W. They may also have custom solutions that idle at significantly less, though.
There's a German forum where they've been crowdsourcing this info for a few years[0]. According to their spreadsheet[1] there are a few systems with 12th-gen Intel CPUs at <15W system power, and it looks like those are using bigger PSUs, so they might be able to be lowered still with something like a Pico PSU. There are lots of low-power LGA 1151 setups, so as you say, things may be getting worse (or the contributors aren't using new computers yet). I think newer PCIe generations have higher power consumption, but I don't know if that's true when you put a lower-generation device into a PCIe 5 motherboard, for example, or if it affects idle power or just peak. It would be nice if this info were more readily available.
I have found that a 30 to 60 minute sleep time-out provides the perfect balance for infrequently used disks in terms of silence/power savings and longevity.
Interestingly, one of my wireless routers with a USB port had a bug that allowed a disk to sleep and then caused it to wake right afterwards. This made the disk overflow its SMART start/stop counter (so it's >65536, possibly double that number), yet the disk is still working at full performance to this day, and shows no performance degradation either. It's a Seagate notebook hard drive converted to an external drive (Seagate Expansion Portable, or something like that).
I remember having a bad experience once when one of our guys decided "hey, it's a backup server, we can just put some of the greens in it, backups all run in bulk, right?".
I think 2/3 of them died within a year; we preemptively replaced the rest.
I'm running its older cousin - an N36L with 4 drives. The CPU in that one is an Athlon II Neo N36L with a 12W TDP, and 3.5" spinning drives are 7-10W per drive. So 40W is very reasonable.
OP should just run more services on his server, but then it probably won't be 43W at idle, when the box would actually be doing something ...
I have a Synology NAS and it is unable to stop the disks spinning. A shame; everything else about it is nice, but it could be better optimized, as the disks account for most of its energy consumption.
You can set the power options so that it can shut down after a period of inactivity. Mine has 8 HDDs. According to my power meter plug, it uses 70W when all are spinning, 30W when the drives are spun down, and about 3W when "off" (configured to WOL). When I need to use the NAS for a while, I usually bring it up via an SSH app on my phone, which connects to my always-on Raspberry Pi that can send the magic packet. I use the wol package in Arch.
I also have a Synology NAS, a DS413, for 10 years now.
And I've used it like this for those 10 years too. It works great. It only consumes ~4W in deep sleep/system hibernation (the wording depends on which year's text about the above NAS you find; Synology decided to reword the function at some point), and automatically wakes up whenever it is needed.
(Also note this is not only HDD sleep, it is full system sleep.)
I use an old Intel Macbook to selfhost this kind of services. The idle draw is around 2W and you get a free UPS if your battery is still working.
The sleep management on macOS is really well done and you don't even need a magic packet to wake the computer. You can configure `pmset` to wake on modem access (ring).
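For reference, the relevant `pmset` switches (run as root; `womp` is wake-on-magic-packet/network access, `ring` is wake on modem ring):

    sudo pmset -a womp 1   # wake for network/magic-packet access
    sudo pmset -a ring 1   # wake on modem ring
    pmset -g               # inspect the current settings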
Thanks for the idea. Currently I have a Raspberry Pi 3 with an external USB-SATA 1TB disk connected, which is on 24/7 doing stuff like PiHole, Navidrome, etc.
I wonder if putting that all on a fairly new laptop with all the energy savings enabled would actually be more energy efficient.
Even on a fairly old one (my "home server laptop" is 8 years old) it's really close in terms of power usage. And the laptop is much faster than the Raspberry and usually comes with fast SSD storage built in.
Did the same with a 16GB i7 Dell E7420 laptop (2013); it draws 4W when idle, can spin up two cores instantly, and fairly quickly transcodes video with the iGPU or hosts multiple Minecraft worlds. No way a phone or RPi can match these params.
Same here but with an XPS 13 model. Power usage from laptop chips is great for home servers. They're still really fast when needed compared to a Raspberry Pi, and typically come with 256GB or even 512GB of fast built-in SSD storage.
The battery in mine is old, but with the screen off it still lasts several hours as a sort of UPS.
You don't even need magic packets from a dedicated machine for wake-on-LAN on some NICs. For example, on Linux, `ethtool -s enp3s0 wol pug` will wake the system on p (PHY activity), u (unicast activity) and g (magic packet activity). Of course this is entirely dependent on how "noisy" your network is.
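It's worth checking what the NIC actually supports before relying on it; e.g.:

    ethtool enp3s0 | grep -i wake-on
    # Supports Wake-on: pumbg    <- what the hardware can do
    # Wake-on: g                 <- what is currently enabled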
This is something I've been thinking about for some time, as I'm working on re-building my homelab into a more powerful system with more storage. One of the big things I'm trying to find out is if there is a good way of truly shutting off hard drives when not in use, or if the hit to the longevity of the drives would be too much.
I've been shutting off my hard drives but found that anything short of unmounting them caused them to be awoken frequently by random processes with timer-based disk access.
My solution was to move all my online use-cases to a separate SSD and mount the HDDs only when I need them for a manual process. E.g. when I want to watch a movie I have downloaded, I'll mount the disk pool and copy it to the SSD (although my internet is fast enough that just downloading it again is usually faster than the manual process).
I wonder if you could make the 'wake up' a bit more advanced.
Run tcpdump or something similar on the Pi, with a filter set for inbound packets to the media server on ports that Plex or Time Machine uses. If it sees those packets, send the WoL magic packet.
This might not work if your switch does proper port isolation - but you could perhaps add port mirroring or something like that.
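A minimal sketch of that watcher, assuming the Pi can actually see the traffic (the server's IP/MAC, the interface, and Plex's port 32400 are assumptions):

    #!/bin/sh
    # Send a magic packet whenever anything addresses the sleeping
    # server's Plex port.
    tcpdump -l -n -i eth0 'dst host 192.168.1.50 and tcp dst port 32400' 2>/dev/null |
    while read -r _line; do
      wakeonlan aa:bb:cc:dd:ee:ff
      sleep 60   # debounce: at most one wake attempt per minute
    done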
Perhaps using something like keepalived on your server and Pi could be useful here. When the server is off, the IP will fail over to the Pi, which receives the server traffic and knows when to power on the server. Once the request comes in and the Pi powers on the server, the server comes active again and receives the IP while the Pi goes back into standby. That way the Pi doesn't need to be in the read or write path.
I'm not sure how Plex clients would handle it, but putting an nginx server with the same certificates in front, returning HTTP 429 (Too Many Requests) or 500s with a "Please try again in 1 minute" response, along with sending the magic packet, would work great for most HTTP-based things.
I think you could maybe just snoop for ARP - when you see something trying to discover the IP wake the machine up. Supposedly some Intel NICs [0] already support this, though they also wake on multicast which you wouldn't want here.
Reverse proxy means that your pi now has to handle all the traffic for Plex and any other services. It could also make other things difficult like CIFS/whatever else you're hosting.
Depending on what Pi you're using, its networking is over USB, and so is quite performance constrained.
Pi is perfectly capable of running Plex. Maybe run it on there with some cached data on the Pi and then the main content stored on the server. Then when accessing Plex, it will have all of the content showing what is available but in the background will be starting up the main server with the main content.
I guess you could just attach an external HDD to the Pi. That's worked for me before.
You could use DSR (Direct Server Return) so that the pi only handles the initial connection. All the video traffic goes directly from the server to you without passing through the pi.
>Does Time Machine run on a schedule? Add a cron job (whatever the Mac equivalent is) to send a WoL ~1min beforehand?
I believe on x86 the BIOS supports setting a timer for self-wakeup. The OS can tell the BIOS "I'm going to sleep. Pull me out of sleep if the RTC passes this datetime."
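On Linux that alarm is exposed directly via sysfs; e.g. to wake a minute before a 03:00 backup (a sketch; paths and times are assumptions):

    # Clear any stale alarm, arm the RTC for 02:59, then suspend.
    echo 0 > /sys/class/rtc/rtc0/wakealarm
    date +%s -d 'tomorrow 02:59' > /sys/class/rtc/rtc0/wakealarm
    systemctl suspend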
Heh, my homeserver uses about that much (4W) in idle ;) 2 SSDs, 8 GB RAM and a passively cooled J4125, more than enough for what I need (2-person household). It also runs Home Assistant, so I can’t let it go to sleep, anyway.
I have an extra Pi4 with Proxmox Backup Server that could actually sleep most of the time, as it backs up only twice a day. But even with German electricity prices, a Pi4 with a single SSD is not exactly expensive. Still, maybe I'll look into that.
Indeed, modern hardware is getting ridiculously efficient, and if you go the extra mile to disable hardware you don't use, as well as strip your configuration to run only the software you absolutely need, you can really drive it into the ground.
The later hardware revisions (such as 1.5) of the Raspberry Pi 4B, even with 8GB of memory, can idle at ~1.33W on WiFi with everything unnecessary disabled -- 2GB versions are close to 1.1W. The 8GB version on power-hungry Ethernet (at 100Mbps) can actually be under 1.45W, compared to 1.68W for 1Gbps. PoE is nice, but the hat adds a lot of power -- though it's possible with an 8GB to squeak under 2W via PoE at 1Gbps, though only just barely (~1.98W).
- - - - -
For those who go, "but I need real power"... you can have an 8-core Ryzen 7 PRO 5750GE (with GPU cores for transcoding), 64GB of memory, multi-TB NVMe SSD, and two NICs (one 1GbE and one 2.5GbE) plus WiFi and Bluetooth idle at under 9W. No, it's not 4W, but to say it can do nearly anything you can imagine is an understatement. You're at the same CPU performance of an i5-12600, with tons of memory for VMs galore, in a system that has the same idle power as a cable modem.
I believe Apple rates the newest Mac Mini with an M2 Pro, 32GB memory and 2TB SSD at just 7W idle. Again, given its capability, that's pretty wild for 7W.
> The later hardware revisions (such as 1.5) of the Raspberry Pi 4B, even with 8GB of memory, can idle at ~1.33W on WiFi
How is this measured? I spent _years_ doing experiments on the Raspberry Pi 4B and was never able to get it to idle to anything less than 3ish W at the wall, albeit arguably this was not with the "1.5" hardware revision.
One day I bought an ASUS PN40, which has an x86 Intel N4000 CPU.... and on the first day, just after installing openSUSE, without even customizing _anything_, I measured _1.7 W_ power consumption at the wall. This is for a full x86 server, with GbE ethernet, 8GB of RAM and a 1TiB SATA SSD.
That server is still my one and only homeserver. It has my calendar, media files, talks to my Zigbee/BluetoothLE devices, and it does so at a fraction of the power used by the Raspberry Pi that TFA uses for waking up his homeserver.
To ultimately validate this, you can use pretty pricey USB power meters that not only display the amperage and voltage but also record that telemetry, which you can validate further with clamps and multimeters.
There is a ton you can do to drop Pi power consumption into the ground, which largely is turning off all the hardware on the SBC you aren’t using, followed by undervolting (same clocks, just less power to maintain them), and last by using a very lightweight configuration (look at the Diet Pi distribution) to keep CPU and network interface(s) in the lowest possible power state by just running the absolute minimum you need (and lightweight versions at that) to do the job. After that you could further consider underclocking. Disabling cores can get further tiny gains as well.
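Typical tweaks of that kind on a Pi 4, as a sketch (paths and tools vary by OS release):

    # Kill the activity and power LEDs:
    echo 0 > /sys/class/leds/led0/brightness
    echo 0 > /sys/class/leds/led1/brightness
    # Power down the HDMI block (legacy tool on Raspberry Pi OS):
    tvservice -o
    # And in /boot/config.txt, disable radios you don't use:
    #   dtoverlay=disable-wifi
    #   dtoverlay=disable-bt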
Then you could consider purely PXE-booting where you only boot from the SD card to then disable it — relying on keeping everything memory-resident — and having permanent storage be available via network storage presented via ISCSI, or block storage. You’d then only turn on the SD card as part of configuration management changes, to turn it back off after the configuration has finished — or just rebuild the image that’s available via the network then reboot the Pi to start using it. You also then can be picky about what SD card you use, by actually tracking the power consumed accessing particular models — yes, there is a measurable difference.
If I went truly “all in” I’m fairly certain I could have usable 2GB Rev. 1.5 4Bs under 1W of power consumption, with 8GB models under 1.2W, though I’ve been focusing my hobbyist time elsewhere. This isn’t even looking at the Pi Zero 2W, or the Pico, which can be dropped to absolutely absurd low levels of power consumption and do some actual useful things.
I really should do some YouTube videos at some point. I’ve a coworker that’s really starting to ramp up his sub count on his homelab shenanigans, and he says what I’m doing relative to the largest homelabbers blows his mind at times.
There's also the flip side, which is "well since I already have real power..."
I.e., like many HNers, I own a powerful coding/gaming Linux desktop with plenty of HDD space, which gets regularly backed up to cold storage. It does however use a frankly obscene 90W when idling, due to the GPU, fans, and multiple hard drives and peripherals.
Building a separate, power-efficient home server would take a year or two to break even, even compared to running the workstation 24/7. But it's more like 3-5 years when you consider that it's going to run several hours most days anyway, and that it can be safely put to sleep at night. Plus of course it's more effort to manage two servers rather than one, so everything else being equal the single desktop is preferable.
Now if I can use some of the tricks described in this thread to turn it off when I'm not using it, and only temporarily wake it up when I need to stream or upload something, then using the desktop as a home server becomes kind of a no-brainer.
Of course, if you only use a laptop, then a low-wattage home server is a totally sensible complement.
My workstation is a Zen 3 Threadripper Pro that mostly serves as a virtualized Spark cluster, an entire virtualized infrastructure stack, and video editing (plus occasional games). Wife lives in CAD tools on similar hardware with a bit less CPU and less memory. We already have real power.
- - - - -
However, we mostly prefer to have those off when we’re not doing work that requires them (I’m mostly on an iPad), plus there’s stuff that the entire family benefits from with the lab. Considering their performance the workstations are actually very power efficient setups for their capability, but they can noticeably change the temperature of a room in an hour when they’re under load, and both of them combined add about $35-40/mo to our electric bill (we pay ~$0.50/kWh in the Boston area) just for light loads, sometimes quite a bit more.
- - - - -
I will never suggest building something new JUST to reduce power consumption of existing stuff. ROI never makes sense. But when you DO build something new, I DO suggest looking hard at power consumption. There the savings can be quite material. Plus never having to hear any of the equipment EVER (barring the workstations going flat out) is a great thing. Silence is golden. You’ll also learn a lot more about your hardware.
FWIW, everything added together for my homeserver in the parent comment was about €120 ;) With the 0.4€/kWh that I pay here, that’s worth it quite quickly.
I think those usually also want real storage space, NVMe either can’t offer that, or it would make the build so expensive, electricity stops mattering. And in those cases, 4-12 24 TB HDDs will out-consume your other hardware anyway ;)
It depends how one uses the storage, but on my home server I didn't need to access the (external) HDD storage often so I created a systemd service that spun it down after set time of inactivity. The response when the HDD is needed again is not instant, but okay enough for what I needed.
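For what it's worth, `hdparm` alone can do the timed spin-down (the device name and timeout here are assumptions; with -S, a value of 241 means 30 minutes):

    hdparm -S 241 /dev/sdb   # spin down after 30 min of inactivity
    hdparm -y /dev/sdb       # or force an immediate spin-down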
Yep. I have a storage server, also with a 5750GE, 128GB ECC memory, 40Gbps (QSFP+, passive DAC), EIGHT 16TB drives off a modern HBA, two 2TB NVMe SSDs (mirrored) as a cache/ingest tier, a SATA DOM to boot off of, and a BMC (which alone draws 3.2W).
132TB raw storage organized into 16TB and 64TB ZFS volumes, plus the 2TB ingest/cache tier. Can saturate 40Gbps up and down.
43-44W at the lowest idle state, about 55W with just the 16TB volume spinning. Though with all the memory and the cache/ingest tier, it doesn’t spin up often, and often it’s mostly spinning up just the two-disk volume. So most of the time it is completely silent — not even a fan whirr.
And that provides the bulk storage and backups (plus pushing offsite) to the PCs, a six-node Pi cluster, and those 9W virtualization servers by providing iSCSI volumes. Also lets any new device PXE boot, and if a device has no OS and no configuration, the hardware will self-install, then self-configure after the first boot. A lot of work, but very valuable experience that paid off at the day job.
A lot of VERY deliberate hardware choices to drive power consumption that low, and probably two weeks measuring power consumption of everything. Something people fail to realize that 10GbE Ethernet is rather power hungry if you’re using copper, but if you’re using fiber or (ideally) passive DACs, you can eke out lower latency, better performance, and drive power consumption into the ground. It matters when you consider you pay for that power consumption for 10GbE copper at every single RJ-45 port in the chain between the devices. You can pay < 10W for all the devices in a 40Gbps connection (four ports) with passive DAC, but with copper it’s more like 40-42W.
A refurbished Fujitsu FUTRO S740 thin client, it has one SATA m.2 slot, and I replaced the Wi-Fi m.2 with an adapter for an extra NVMe. They are pretty big in Germany as small homeservers, because there are a lot of used ones around (I paid 44€ for the 8 GB version). There’s an overview [0] page for them, and a list of power measurements [1], both in German, though.
Thanks to Valve's focus on improving Linux suspend on AMD hardware, suspend recently started working reliably on my desktop. It now suspends automatically, and I set up a button on my phone to send a wake-on-lan packet. My phone is always VPNed to home, so I can wake it up from anywhere.
For my server, I just focused on choosing very power efficient hardware. I use a passively cooled baytrail SoM.
I'd kinda like all machines to do this all the time...
The OS should go to sleep, and give the network hardware enough info to respond to simple requests (ie. maybe ARP requests, mDNS, and keeping a DHCP lease alive). Anything that comes in addressed to the machine that can't be handled by the network hardware should wake the machine, leaving the packet in a buffer for the machine to handle and then go back to sleep.
The OS the one-laptop-per-child machine used was capable of this - it could sleep, with the screen either on or off, between keystrokes and network packets. Lots of the network stack was moved into the always-on network hardware, so it could for example participate in the mesh network while powered off.
Sadly the whole project folded (for mostly non-tech reasons), taking most of its cool tech with it.
Android can do this - the Linux kernel goes into suspend-to-RAM while the network is still up. If someone sends you a tweet, the cloud messaging service wakes the OS with a TCP packet, which eventually causes the "ding" notification that Musk has said something again...
I think this is what Modern Standby, aka the "S0" power state, is doing. Worse, it is not always properly implemented, and laptops turn on burning hot in backpacks (looking at you, Lenovo Legion).
If you're penny pinching also have a look at your network equipment. As a test I set all switch ports on my 24 port HP from 1Gbps to 100Mbps and the power usage dropped by 10W. About half the ports on the switch are occupied.
But then again, with the current prices it might make more sense just to replace the hardware with a newer more efficient model and it will pay itself back in a year instead of 5.
Not necessarily; it depends on the embodied emissions in the replacement gear. It is also surprisingly hard to get any information on those. I tried to see if there were estimates of the kg of CO2 in a modern CPU chip, but I found nada.
I wonder if that could just be extrapolated from cost to make silicon wafer of required grade. Surely the amount of chemicals and power used would be similar per mm2 of silicon processed, regardless of whether it is big CPU or a bunch of smaller chips.
Sure the machines will be more expensive but they are amortized over decades of production
A Python script that automatically suspends the server when it is not used, and another script to wake the server up again in case there is work to do. To my surprise I could not find any out-of-the-box solution, so I thought it was a worthwhile effort to write about it.
Currently, the server is primarily used for two things:
- Plex Media Streaming (remotely)
- Time Machine Backups (locally)
To monitor Plex activity, we access the local Plex API, and for Time Machine we simply monitor any file access at /mnt using "lsof." If there has been no activity for 15 consecutive minutes, the server goes to sleep. (Nobody streams, pausing a video doesn't count as activity, and no backup is running.) A web server on a Raspberry Pi hosts a website that obtains the current state of the home server via the Home Assistant REST API. If the server is asleep and I'd like to back up or stream something, I can wake it with a simple button press that sends a magic packet using a wakeonlan Perl script.
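Condensed, the idle check amounts to something like this (the Plex token handling and the 15-minute bookkeeping are simplified; treat it as a sketch):

    #!/bin/sh
    # Suspend when nobody is streaming and nothing touches the backup mount.
    sessions=$(curl -s "http://localhost:32400/status/sessions?X-Plex-Token=$PLEX_TOKEN" \
               | grep -o '<Video' | wc -l)
    open_files=$(lsof -w /mnt/* 2>/dev/null | grep -c /mnt/)
    if [ "$sessions" -eq 0 ] && [ "$open_files" -eq 0 ]; then
      systemctl suspend   # the real script only does this after 15 idle minutes
    fi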
I've also heard of a trick of integrating WoL with DNS, where if the server requests a lookup for a local IP, it'd send WoL packets to the destination. You'd probably just need to set the TTL for the server's IP very low so that it doesn't get cached.
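With dnsmasq, a crude version of that is to tail the query log (the hostname and MAC are made up; needs log-queries enabled):

    tail -F /var/log/dnsmasq.log | grep --line-buffered 'query.*mediaserver' |
    while read -r _line; do
      wakeonlan aa:bb:cc:dd:ee:ff
      sleep 60   # avoid a WoL flood while the name keeps getting resolved
    done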
Just trash the 43W-at-idle monstrosity and put Plex on your Raspberry Pi 4. Use an external drive and it will power on and off automatically. I see absolutely no need for the second, much less efficient server.
Germany made the silly decision in 2011 to shut down their existing nuclear reactors and build more natural gas plants to import cheap natural gas from Russia. Then after Russia's second invasion of Ukraine last year, the gas got cut off. Overall, it was a pretty bad idea that most people should have seen coming since Russia first invaded Ukraine in 2014.
We know why we can't have cheap and efficiently delivered power in California: PG&E has no incentive to deliver such a product.
My solution is to run my servers and live life as I please (within some degree of reason), and not look at the power bill too often. In my experience, the monthly charges don't correspond with my actions attempting to reduce or increase consumption in specific categories (e.g. hottubs, servers).
With that said, I do flip off the lights when not in use :p
It depends a lot on the specific area you live in. Down here in the Peninsula, it varies by municipality and 1/4 mile+ zones. The wealthier the area, the more reliable the service. Also, tree limb falls happen.
The jaded side of me attributes this to PG&E executives not wanting to anger the elite powers that be. Atherton service has likely never been affected by the rolling brownouts in summer. Go a mile or three away and it's a very different story.
I'm in BC and BC Hydro has two step billing. $0.0950/kWh for the first 1350kWh/2 months and $0.1408/kWh beyond that [0] - and that's CAD so about $0.071/kWh and $0.11/kWh in USD. I'm guessing you meant $0.2/kWh for Oregon?
I'm definitely less concerned about the cost of running a machine 24/7 here than I was back in the UK.
Here in South Africa I pay around $0.11 per kWh, not that bad all things considered.
The challenge is more that electricity supply is not consistent, which luckily is easily remedied by investing in solar/batteries. It's not yet common for consumers to feed back into the grid.
If you want to push energy efficiency the Raspberry 2 (old ARM v7 version) is the best device at 1W idle / 2W full load, it saturates the SD card. One step up is Raspberry 4 for heavier workloads/networking at 3W idle / 6W full load, and after that 8-core Atom at 15W idle / 25W full load.
As mentioned elsewhere in this thread, 100Mb/s, vs 1Gb/s vs 10Gb/s makes a lot of difference. IMO 1Gb/s is the only reasonable speed for home hosting as to saturate 10Gb/s you would need at least 20x (50Ah) lead-acid batteries of backup and that is almost industrial levels of investment/maintenance.
The final constraint is that disks and RAM uses more energy than you would think, 256GB DDR5 uses 80W! And they wear faster than you would think too at these nm! Be careful with writes!
So you need to plan your structure with the proper redundancy without going too far above the previously mentioned 1Gb/s saturation for your particular applications.
If it sounds unbelievable, that's because it isn't remotely true. DDR5 is 13% more efficient than DDR4 and has 4 times the max die density. 256GB broken up into four 64GB sticks would use about 8 watts.
This person makes insane claims about power all the time. They claim nothing will surpass old Raspberry Pis and Intel Atoms in efficiency, that nothing will surpass 1Gb networking due to power constraints (even though it has all existed for well over a decade), and even claim that certain video games won't be playable in the future due to power constraints.
Have you checked your settings with powertop? Apparently some hardware has broken power management, but if yours doesn't, you might be able to get most of that savings from just making sure all of your power management settings are on (there may be BIOS settings to adjust too).
I've spent the last 5 years trying to figure out how to get Macs to remember which display goes on the left when waking from sleep. No luck yet, maybe it's fixed in Ventura or with M2s?
My usual comment: out of Windows, macOS and Linux, only Linux on my laptop and on the Steam Deck can sleep and wake up reliably without draining the whole battery for me. I know others have different experiences, but I'd just go with "none of the systems handle it reliably" if we're generalising.
Agreed. I find macOS on a fresh installation and new hardware works fine, but over time it seems to fall apart somehow. The last Mac I used (when I worked for $FORMER_EMPLOYER) would sleep fine, but whenever I woke it up several hours later there was a 50% chance of it being completely flat (and the other 50% it would have varying levels of charge). It got to the point that I got into the habit of plugging the MacBook in any time I needed to wake it up after several hours of sleep, to ensure it actually would wake up properly. This was because occasionally it would wake up for a few seconds, and during that time it'd trigger another suspension, which would itself fail and power off partway through, requiring a fresh boot. The battery was supposedly fine and would usually go for several hours with no issues while still powered on. YMMV of course, but I've had better experiences with Linux and Windows on my Lenovo ThinkPads -- if one goes to sleep at a certain level of battery charge, then it usually wakes up only a few percentage points below that after several hours. shrugs
I'm amazed at the regression in this on Linux in recent years. It's a complete coin toss whether sleep will work at all on my laptop, and what state it will be in when I open in again.
Brutally draining the battery while asleep should never be called "supporting sleep".
And even before then Sleep was very hit-or-miss anyway. I've never used Sleep because it's just far too inconsistent and thus unreliable, compared to just shutting the thing down when I don't need it for a while.
Still, TFA is about putting a server on stand-by. A server is not going to run off a battery most of the time, so it's not very relevant if it's drawing 4 W instead of 1 W. Even if it's not running at the absolute minimum power level, by putting it on any stand-by mode it's going to draw an order of magnitude less power than if it's idle.
The majority of Sleep/Suspend/Hibernate related issues I had on Windows were related to it either not sleeping at all, or waking up at the wrong time.
Annoying when it's a laptop that you've just unplugged and stuffed into your backpack, and you're wondering why there's now an oven on your back. Less fatal for a desktop when the goal is to save power.
For linux, sleep/suspend is also pretty reliably good in my experience.
I use an Odroid HC4 (two SATA 3 slots) powered by a Shelly smart plug. My backup script powers the plug with an HTTP call, waits for the server to be online by repeated SSH attempts, unlocks the backup disk by asking me for the LUKS password, and starts the backup. It then SSHs to the server to shut it down, polls the plug over HTTP until the power meter shows less than one watt of consumption for a while, and eventually powers off the plug, via HTTP again. The backup procedure includes a remote encrypted mirror. Power consumed between backups: zero, plus the smart plug.
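Condensed into shell, the sequence looks something like this (the plug address, host name and script paths are assumptions; Shelly Gen1-style HTTP API):

    #!/bin/sh
    PLUG=http://192.168.1.60                                 # hypothetical plug address
    curl -s "$PLUG/relay/0?turn=on" >/dev/null               # power the server on
    until ssh backupbox true 2>/dev/null; do sleep 5; done   # wait for SSH
    ssh -t backupbox cryptsetup open /dev/sda1 backup        # prompts for the LUKS passphrase
    ssh backupbox /usr/local/bin/run-backup                  # includes the remote encrypted mirror
    ssh backupbox poweroff
    # Wait until the plug's meter reads below 1W, then cut the power.
    while p=$(curl -s "$PLUG/meter/0" | grep -o '"power":[0-9]*' | cut -d: -f2); \
          [ "${p:-99}" -ge 1 ]; do
      sleep 10
    done
    curl -s "$PLUG/relay/0?turn=off" >/dev/null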
My full server closet (which is also used to heat my jackets and coats) consumes about 50W of power. The actual two servers being responsible for less than 10 of it. Those are RockPro64 with 2 SATA SSD on each. No fans.
Most of the heat comes from the network stuff, that is the cable modem, UDM router and switch. Replacing the UDM+switch with an 8 port UDM SE might be an interesting idea but I’m not sure since the trashcan UDM also does wireless. Going by the heat the UPC cable modem seems to be responsible for most of the power draw.
You should also electrically correct your VAr. I'm not fresh on my electronics knowledge, but VAr (volt-amperes reactive) does no useful work in your equipment (it's waste), and you are billed by the resultant vector of the sum: watts (work) + VAr (waste) = tariff bill.
About your monitoring command:
lsof -w /mnt/* | grep /mnt/ | wc -l
Have you taken a look at incrontab? It may be more CPU efficient, but I don't know, so you could look into it.
Just wanted to chime in and say that I get excellent results with a laptop homeserver. I haven't tried too hard to look into its exact power consumption, but for the load I use it for it seems to be negligible. I'm not streaming media too frequently on it though. It sits at ~95% idle on average according to powertop.
Anecdotally, the costs from this website (https://ecocostsavings.com/how-many-watts-does-a-laptop-use/) seem to check out based on what I observed. Idk the model # off the top of my head, but it is a lower end hp laptop my friend sold me running debian.
The one-liter mini PCs are incredibly abundant & cheap on the used market. They mostly idle under 10W.
I had some Acer Chromeboxes that were converted to Linux, and those things idled at 5W! I had them filled with 32GB RAM. Amazing little machines. The little i3 and Celeron CPUs were very weak but still held up well. I had almost two years where I used one as my main desktop, literally velcroed to my monitor, but eventually decided the somewhat subpar perf (especially at 4K!) was a bit of a drag.
I bought a Lenovo M900 Tiny a few months back which is quite a few years old (think maybe 2017?) and upgraded it to 24GB RAM, 120 GB SSD and 500GB nvme.
With proxmox and a few vms running on there it idles around 11-12W which isn't raspberry pi levels of efficiency but still pretty decent.
I think the OP's server may have one or more mechanical HDDs in it that are consuming most of the power.
If your power management is working right there’s only a slight difference between suspend and the lowest idle power state. I would look into why your thing draws 43W. Common reasons: ASPM disabled, using wired Ethernet when wifi would do the job, and having too much RAM.
Wow, 49 euro cents per kWh is roughly four times as much as I currently pay in south-east USA! I will not complain about my next utility bill. Blessed be the nuclear plants and considerable quantities of pumped storage near me.
I think just sleeping the HDDs might get about the same result.
I have a couple systems with i3-7100U CPUs, with 2 sticks of RAM and an NVMe SSD they idle at about 1-2 watts booted up into Proxmox.
HDDs suck down a lot of power relative to that, about 5W each just sitting there spun up.
Unfortunately I haven't figured out a good option for sleeping drives that are part of a ZFS pool in proxmox, there's just too much going on with constant drive activity.
My old laptop is more than powerful enough for Plex and backups, and idles at around 3W. Why someone would use a desktop PC as a home server in Europe is beyond me.
Does anyone have tips for a Synology NAS to hibernate reliably?
My fear is that powering the machine on and off completely at intervals would damage the HDDs.
I built something similar to this to turn on / off a windows gaming PC when I first made my switch to 100% full Linux on my main computer: https://github.com/gravypod/SteamStreamScripts
There's a huge amount of savings in energy to be had from automatically sleeping + WOLing PCs.
They could use something like [1] as a network bridge which would send the WoL packet when the machine is sleeping and something is trying to reach it :)
i'd love to try to turn a server on every 30s for like 2s then turn it off again. rtcwake could be used for this. don't spin up your platter drives each time though, or you might wear 'em out fast.
the tricky part is that a consumer would have to catch you at just this time. an mDNS broadcast being a signal - "contact me now" - would work, but what apps are going to have that kind of timing built in?
it'd be interesting to see which kinds of protocols would be ok with servers that just keep disappearing. WebDAV might be fine. SMB/CIFS would probably throw a fit.
or... ideally the NIC could be some kind of SmartNIC that could stay on & buffer some traffic, keep connections open. i wonder if CXL gear, with multi-host capabilities, might enable this sort of thing.
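the naive version of that duty cycle is just a loop around rtcwake (a sketch; note most boxes need several seconds just to resume, so 2s up / 30s down is optimistic):

    #!/bin/sh
    while true; do
      rtcwake -m mem -s 30   # suspend; the RTC wakes us 30 seconds later
      sleep 2                # stay up briefly to answer anything pending
    done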
Is there something that puts my server to sleep at a particular time and then wakes it back up at a particular time? This is what I've been looking for, but I can't find anything.
More details: https://en.wikipedia.org/wiki/Bonjour_Sleep_Proxy