Even for a homelab these are useful features to have; I don’t want to go hooking up my monitor and keyboard every time I need to troubleshoot some boot issue or install a new OS. I can also get used DDR3 ECC memory for a hell of a lot cheaper than DDR4 right now.
Unfortunately the person who wrote this article is in the EU, where the second-hand market has much slimmer pickings. I can buy a Dell R620 for $200-300, two 10-core Xeons for another $300 if it doesn’t come with enough cores, and 128GB of 16GB DDR3 RDIMMs for $200 - total price is under $1000 USD, and that’s excluding any components that come with the server I may opt NOT to replace.
Edit: iDRAC is the Dell LOM, iLO is HP
I have been using the X9SCM-F for the last 9 years, and enjoy being able to modify the BIOS remotely, boot remote ISO images, or troubleshoot any boot problems without physical presence.
Some of the SMC boards listed support i3 processors, but generally, if you want a BMC, you're stuck with enterprise-grade processors.
No BMC license fee. Officially supports ECC UDIMM, Linux and PCIe bifurcation.
All Supermicro boards for EPYC have IPMI.
This EPYC is $599:
Intel® 8th/9th Generation Core i9 / Core i7 / Core i5 / Core i3 / Pentium® / Celeron® series processors, Intel® Xeon® E-2100 and E-2200 processors.
Single socket LGA-1151 (Socket H4) supported; CPU TDP support up to 95W.
You can get out-of-band management with Xeon motherboards, or with Epyc motherboards, but you're likely not going to get a motherboard with out-of-band management with a Ryzen CPU or a Core i.
iLO / iDRAC implement IPMI, but they are not the same thing. iLO and iDRAC provide web interfaces, remote ISO mounting, granular boot control, remote VGA, etc. There are a lot of useful features not available in a "standard" IPMI implementation.
> Systems compliant with IPMI version 2.0 can also communicate via serial over LAN, whereby serial console output can be remotely viewed over the LAN. Systems implementing IPMI 2.0 typically also include KVM over IP, remote virtual media and out-of-band embedded web-server interface functionality, although strictly speaking, these lie outside of the scope of the IPMI interface standard. 
The BMCs that vendors use implement IPMI and also usually provide these features.
If you're just running a couple servers in a home lab, yeah, it's probably not worth it, but for a company with a bunch of servers the additional management features are absolutely worth it, IMO.
Redfish is looking like it's gonna be pretty awesome too.
Dell ties them to the service tag, which is quite easy to set via racadm and you can buy a license off eBay. Older Dell systems required a $30 part you can find on eBay. HP just uses a key they send you on paper.
Hell, many times these decoms still have the license applied to them. I have 2xR210 II’s, a R320, R520 and formerly had a HP ML10 and a Lenovo TD340. The Lenovo was the biggest pain to get VKVM because of the stupid hard to find dongle.
Don’t buy Lenovo servers for a homelab, BTW. Dell lets you override the ramp in fan speed when a non-Dell-branded PCIe card is installed; newer Lenovo servers ramp them up to 100% and you can’t turn it off (I had to leave my TD340 on the initial BIOS revision with security issues because of this).
The argument is they don’t know the thermal requirements of the card because it’s not OEM-branded so they max the fans “to be safe”, but not letting you override it is just bullshit.
Also, same stupid justification: ”this configuration hasn’t passed FCC EMI tests”, when the cards are qualified directly - they are the FCC certification boundary.
I was indeed that lucky. Thank you <bank>.
If you’ve got one or two, then, sure.
You could install the free version of VMware's ESXi and use that as a sort of iLO. In most cases that would be a more pleasant experience.
It would be useful, but why not just run VMWare or Proxmox at that point?
But the same is true of iLO. Worse in that case, because the iLO won't receive security updates for as long and is less well scrutinized in general than common Linux distributions.
Waiting 2 months for Threadripper would probably be more sensible unless you need it RIGHT NOW though.
Here is a 128GB kit for under $600. https://www.newegg.com/corsair-128gb-288-pin-ddr4-sdram/p/N8...
If your workloads require 128GB of RAM then you probably want ECC. You'd be skimping in the wrong place.
big "if" there really, and that makes it an apples-and-oranges comparison IMO.
The motherboard, RAM, SSDs, CPU, case, and power supply all have to be factored in.
This is an apples to oranges comparison.
IMO it always makes more sense to build your own PC vs buy a used server.
It's going to have better ventilation and be much quieter in a home setup.
Much better value can be had than OP's buy - just picked up a Dell R820 (quad 8-core, 3.2GHz v2 Xeons; hyperthreading is overrated, yo) with 768GB of RAM and dual 10G NICs for $1500. Idles around 170W (about $5/mo where I live). The real magic: 8 full (x16 3.0) PCIe slots.
The 3900x is a beast for the workstation though, and I have it running there with nothing but praise.
I just checked eBay and it seems an R820 with 128GB and 4x v2 Xeons goes for $1200-1300.
How did you manage to get the extra 640GB for $200-300?
PS: 6 months ago I upgraded my 2x Xeon 2697v2 Dell Precision T7610 and paid $200 for 128GB DDR3 ECC (8x16GB)
Didn't expect to see another price drop.
I paid $85 per CPU, so while a new Ryzen 9 does make sense for a lot of people with its fast single threaded speed and low power consumption, the old server gear I bought still wins in highly parallel tasks for less money.
Edit for proof of my geekbench score: https://browser.geekbench.com/v4/cpu/13853828
That was before I expanded the RAM, but the score didn’t change with the extra memory.
The other good thing about getting a Dell/HP rack server is you can have 16 drive bays (or more) easily.
Kudos to AMD
RAM (DDR3) and storage capacity (cheap SAS drives) in these systems are really where the savings are... I don't think I'll ever own a "regular" desktop again.
The biggest problem I have with a used server is the inability to incorporate a GPU into the architecture. But I guess that really depends on what you want to do with it; serving a website will absolutely not require a GPU.
Epyc2 is what I'm fantasizing over lately.
FYI the E5-2680 is like $300 on Amazon right now, so that's a better comparison.
I run my old desktop (a i7-6700K) in a rack in my basement, now, with 64GB of RAM, a Mellanox Connect-X for 10G networking, and half a dozen disks, and it idles under 15W. The entire rack, UniFi stuff/POE wifi APs and all, sits around 50W. 96W just for a single machine is A Lot.
I have an 8 core i5. I bought Intel (18 months ago) because I didn't want to deal with any AMD incompatibility especially since I wanted to also run it as a gaming machine using VT-d. If I were doing it again, I'd definitely go with Ryzen.
The fans in my 1U Dell servers are fairly quiet when idling around 3600RPM. They make noise - you wouldn’t want to be sleeping or watching TV in the same room - but with the door to my office closed they can’t be heard in the hallway.
Right now my Noctuas are running at about 1000RPM and keeping the Xeons around 40°C (under load this will increase, with minimal dB).
My CPU is a 7100 (ECC supported!) with a Noctua L9i so I never have problems there either. Power draw is a little high at about 70W with 8 3.5" drives spinning, but most of that is the HDDs (rule of thumb is 5W per drive) and the alternative would be spinning them down, which isn't ideal.
> By default, this server is turned off. When I want to do some work, I turn the server on using wake-on-lan. I do this with all my equipment except for my router/firewall.
If it is only on when it is doing work, idle power doesn't matter _too_ much.
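For anyone wanting to script the wake-up step, the magic packet format is trivial; here is a minimal Python sketch (the MAC address is a placeholder, and it assumes WoL is enabled in the server's NIC/BIOS):

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MAC - substitute the server's actual NIC address.
send_wol("aa:bb:cc:dd:ee:ff")
```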
The actual overhead depends on capacity, load, and miscellaneous other factors, but should be in the neighbourhood of 20% of delivered power.
If the system idles at ~100 watts, the PSU might add another 20-30 to the mix, not 750.
Again: peak draw, when on.
I'm not arguing that this system is particularly efficient, only that you don't want to add 750W for the PSU to the draw.
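To put rough numbers on it, a quick Python sketch - the ~80% conversion efficiency is an assumption (roughly what a mid-range 80 Plus unit does well below its rating), not a measurement of this box:

```python
# Hypothetical idle figures; only the ratio matters for the point being made.
dc_load_w = 100                            # what the components actually draw at idle
psu_efficiency = 0.80                      # assumed conversion efficiency at this load point
wall_draw_w = dc_load_w / psu_efficiency   # 125 W pulled from the outlet
psu_overhead_w = wall_draw_w - dc_load_w   # ~25 W lost as heat in the PSU, not 750 W
print(wall_draw_w, psu_overhead_w)
```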
I've not specced out low-power systems myself. I doubt you could get 40 hyperthreads running way below this, though there are definitely some low-power systems which might have a total budget below 50W. Reddit's HomeLabPorn may have some more useful guidance: https://old.reddit.com/r/HomeLabPorn/
(Not my area of expertise, I've never really stayed current in HW. Though I'm aware power/thermal budget has been a major focus, both mobile and server, for the past decade or so.)
I specifically mention the L3 cache size because, while not being substantial, a large cache can get you 10 to 20% improved performance due to less CPU stalling from cache misses. For comparison, the Xeon in question has 25MB L3, so we'd be looking at 50MB cache, split across two dies (so it doesn't quite work as a whole block of 50MB cache).
greatjack613 also mentioned the fact that an AMD Ryzen 9 3900X matches the multicore performance (and has better single-core perf) for a fraction of the TDP, see https://news.ycombinator.com/item?id=20687932.
All in all that was the general idea: yes, you can get a used server for a good price, but we shouldn't forget the efficiency aspect of it, compared to new hardware.
I bought a used Intel NUC a while back that idles at 15W. It runs 24/7 and does everything I need, from downloads to Nextcloud to Plex, my website, my emails, etc...
I really wonder what you do with this kind of power.
I'm definitely getting cheaper power than a lot of people at $.08/kWh, but it looks like the US average is only about $.12 - are there places where a couple hundred watts is going to be a significant financial burden on the average IT worker?
I expect that once, next summer, we get the house kitted with solar and batteries, I will feel a little more okay about it.
EDIT: Assuming a US household paying between $0.10-$0.20 per kWh.
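If you want to sanity-check your own bill, the arithmetic for a constant load is straightforward; a small Python sketch using the rates mentioned above:

```python
def monthly_cost(watts: float, usd_per_kwh: float, hours: float = 24 * 30) -> float:
    """Cost of running a constant load for roughly a month (720 hours)."""
    return watts / 1000 * hours * usd_per_kwh

# A couple hundred watts, 24/7:
print(monthly_cost(200, 0.08))  # ~$11.5/mo at $0.08/kWh
print(monthly_cost(200, 0.20))  # ~$28.8/mo at $0.20/kWh
```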
I've got a rack that pulls ~600W 24/7, but that's across 4 machines, 3 UPSes, >30 HDDs yielding >20TB across multiple zpools, and >256GB RAM. And I'm actually using most of that capacity, not just idling away.
I do hope to upgrade soon to power-sipping platforms like you mention, but currently I'm still on R710/R810 stuff (Westmere Xeons).
It's really just one big toy for me. I'm using Proxmox for high availability of over 30 VMs (still want to play with containers soon).
EDIT: OPNsense firewall says I uploaded over 15TB in the last month. Fortunately Google Fiber doesn't care!
I also hdparm'd it such that my spinning disks park pretty aggressively.
2.5" HDDs and SSDs use about the same amount of power, but 3.5" HDDs use 2-3x more. So you could potentially save 12 watts if you're using 3.5" drives.
Make sure your PSU is a high-efficiency one, and isn't too low or too high in wattage. From what I've read, maximum efficiency is when the load is in the 40-80% range of the PSU's rating.
Damn...meanwhile my fairly modern Asus laptop idles at 27W.
(Probably the GPU...think I forced it to dedicated only)
Stuff that I was planning to do:
* Managed VM platform (~"EC2")
* Centralized auth (FreeIPA)
* ZFS NAS (also possibly Ceph) + backups
* Container platform
* Your typical web/email stuff
* Monitoring/alerting/log management
* VPN endpoint (and other more advanced networking stuff)
* Probably something more I have already forgotten
I realized that building a private cloud actually takes serious effort and not just putting some Lego pieces together. There are also some circular dependencies in there that make bootstrapping more difficult, especially on one single box.
VMware vSphere Hypervisor, Proxmox, or Microsoft Hyper-V Server 2019 are all free options. Each makes it easy to experiment with the above in parallel.
My current VM Host has
* One VM as a docker host (turtles all the way down) for development tools. Build server, bug tracker, private artifact repo (Sonatype Nexus).
* 3 VMs as a Kubernetes cluster for experimentation
Building a NAS is the only thing that would take significant effort and is a project unto itself.
Unless you have a strong desire to experiment with failure modes (network dropping out, killing iSCSI, online VM fail over), stick to a single box.
Red Hat Cloud Forms -> ManageIQ (to manage your virtual private clouds, hypervisors, etc)
Red Hat OpenStack -> DevStack (the cloud itself)
Red Hat OpenShift -> OKD (container orchestration)
Red Hat Virtualization -> oVirt (for VMs)
Red Hat Ansible Tower -> AWX (to automate everything, including deploying all the previous software listed)
If you plan on doing all of this from one machine, understand that you will need to enable nested virtualization which will require some BIOS/OS configuration to make it work.
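On a Linux/KVM host you can at least check whether nested virtualization is already on before rebooting into the BIOS; a small Python sketch reading the stock kvm_intel/kvm_amd module parameters:

```python
from pathlib import Path

def nested_virt_enabled() -> bool:
    """Check the KVM module parameters that control nested virtualization."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            # Older kernels report "0"/"1", newer ones "N"/"Y".
            return param.read_text().strip() in ("1", "Y")
    return False

print(nested_virt_enabled())
```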
This part, though, is really easy nowadays. FreeNAS is idiot friendly.
At some point, I need to migrate my current FreeBSD/ZFS setup to something newer, and I'll probably use FreeNAS next round simply because it's so much easier to manage. (Yes, I can do it from the command line--but I do it so rarely that I always have to go reload all the ZFS command set into my working memory.)
Why didn't you at least do this?
Docker/Docker-Compose isn't quite Lego... but it's awfully close.
ASRock Rack C602 mobo
2x Xeon E5-2650L v2 (20c/40t total)
8x8GB Samsung DDR3 ECC
generic EATX case with fans
2x CPU heatsinks
3x case fans
XFX fanless modular PSU
if the price and specs alone aren't compelling enough:
it runs idle at ~50W
it has a similar PassMark score of >15000
it has 4x GbE ports
it has 4x PCIE3 slots (!)
I'd never heard of iLO, which other commenters mention as a selling point, but a quick search leads me to believe this is HP's take on IPMI, which this mobo has.
originally built as an HTPC server, I had 2 main criteria for my build: cool and quiet. hence opting for low powered processor versions, 0db PSU and PWM case fans. if you don't require these criteria you can knock ~20% off the budget.
there was so much power being unused that I binned a few other devices (namely crappy ISP-provided router and tv box) and made this build the heart of my home network. it is now my family's router, firewall, adBlocker, movie and tv server, game server, free cloud storage manager (synced to every household device), OS updates cache, music streamer, torrent client/server, VM server, web server, database server, VPN client, proxy server, etc., the list is virtually endless. these are all run simultaneously with ample resources leftover for frequent workstation usage.
I should admit that I thoroughly researched every component's specifications and price, and as such it took me around 3 months of waiting to source them.
I also admit this use case and learning curve is not for everyone, but it was ultimately a rewarding experience for both my brain and wallet.
I admit I was more lazy, I have a separate room for this machine so I don't mind the noise that much. I just configured what I wanted and ordered it. I only did some research on the CPU performance.
1) Setting up a RAID array isn't that difficult and it makes for more reliable storage.
2) Using dual supplies actually lowers the fan noise because the supply running at half power generates less heat than one running at full power. You can plug them both into the same outlet strip :-)
3) These things have a "lifetime" which is the point where things are easily found on the web which support them. And then they become "anchors" without all that support. Very carefully and diligently download and archive all of the necessary software, drivers, manuals, and extra cables for the system so that in another 5 years when it breaks you can reconstruct it successfully.
If I really need data backup, I can use one of the other SSDs or even the single spinning disk I put in as a backup target.
2) That's interesting, I don't think the noise comes from the PSU though, mainly the six case fans.
3) True point. I'm not too worried. The machine is fully supported by Linux (no drivers required) and the latest SPP is applied. I never expect to do hardware changes/upgrades down the road. And in five years, whoever's around then can deal with the shit ;-)
On top of all that it's pretty standard (at least in the VMware world) to store the OS on an SD card. So the OS has to be read into memory and ESX is kinda slow to boot even if installed to a disk.
The reason it takes this long is that it does a bunch of self tests, and then it has to load all the ROMs for the components (NICs, HBAs, etc.) which often trigger messages like "X Loaded, press CTRL+L to configure" which stay on screen for 5-10 seconds each.
If you yield control to some kind of Broadcom controller it'll do all kinds of shit before giving up and handing you back to boot a disk.
Servers are built to be reliable. It is better for them to boot slowly and correctly than quickly with silent memory failures.
...In that context, crazy inefficient boxes make way more sense, especially with the reduced e-waste contribution.
There are people who use GPUs and ASICs mining cryptocurrency as heaters for greenhouses. Spending the energy on crypto mining is basically a way to recoup some of your energy costs. https://cointelegraph.com/news/cryptomatoes-grows-5-acres-of...
Saw a site recently that loans out home GPUs for TensorFlow, Airbnb-style. Nifty idea.
(don't recall the name sorry)
However, being forced to use a proprietary tool (ssacli) and limited drive compatibility do not sound desirable. This seems like an odd limitation - is this a normal thing with these types of projects/machines?
The tool is not mandatory. You can fully configure storage by booting into the storage utility of the RAID controller.
It just saves you a ton of time.
It's a headless system in a tiny enclosure: an i5 processor with up to 16GB of RAM. Power consumption is nice and low. And they feature remote management via serial over IP, remote power cycle, etc.
They can be had for less than $100 on ebay, and for me have been rock solid.
The loved one spent about five minutes yesterday evening just watching the rendering of the water in WoW. Quite the improvement over her HP Envy laptop, l33t as it was for a laptop ...
The PC is a huge black monolith. Easy to open, add cards to, etc - extremely maintainable. Also just about silent, hauntingly so.
But really - everyone after a new box should go to eBay, PCs, search on "workstation". There's lots of ex-corporate beast machines just waiting for a home. Put 20 threads to work on your compilation!
The K is for Kepler :)
Jokes aside, I ran a DL160 server at home for a couple of years until the motherboard started acting up. The fans would all go to 100%, and then back down again, then remain at 100% for a while.
Then there were intermittent crashes.
The only solution was to replace the entire motherboard. At this point I stopped looking and replaced the whole darn thing with an old headless Macbook Pro running Linux.
For my purposes it is fine, and I haven't looked back. The power savings are great too.
Still, even with power cost factored in, I think I'll be better off with my own hardware.
It works great as a Linux workstation, but it's just so hot. My whole room is the warmest room in the house. I actually have to run a window air conditioner in the room to make it comfortable (I work from home, so use the office all day). I imagine it's probably one of the largest power consumers (including the cost to cool with the window AC) in the house.
I look forward to tax season, I am going to replace it with a new AMD 3700x based system next year.
I even managed to get it to boot from a PCIe NVME drive with an internal USB stick running the Clover bootloader (yes, the Hackintosh bootloader) to bootstrap into Ubuntu. It makes for a great VM server.
Also helps that iDRAC 7 is aeons ahead of the horrible iDRAC 6 servers I was using before.
Being able to completely control every aspect of the thing from my Mac via a simple Browser + VNC combo is incredible (even when it is powered off)
However...it was noisy AF and consumed something like a few hundred Watts. Maybe as much as 500. Needless to say, I have not been running it. Does anyone have some tips on where to get cheap power? ;)
Do you live in a location that offers real-time pricing? Where I live, you can opt-in to such a scheme and then monitor an API from the power company and adjust/schedule your power usage to favor times in which electricity is very inexpensive.  Sometimes, you might even get paid to consume electricity:
> Negative Prices: With real-time hourly market prices, it is possible for the price of electricity to be negative for short periods of time. This typically occurs in the middle of the night and under certain circumstances when electricity supply is far greater than demand. In the market, some types of electricity generators cannot or prefer not to reduce electricity output for short periods of time when demand is insufficient, and as a result some generators may provide electricity to the market at prices below zero. Since Hourly Pricing participants pay the market price of electricity, they are actually being paid to use electricity during negative priced hours. Delivery charges still apply.
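The monitoring side is easy to script; here's a rough Python sketch - the endpoint, response field, and threshold are all hypothetical, so substitute whatever your utility actually exposes:

```python
import json
import time
import urllib.request

PRICE_URL = "https://example-utility.test/api/current-price"  # hypothetical endpoint
CHEAP_THRESHOLD = 0.05  # USD per kWh below which we consider power "cheap"

def current_price_usd_per_kwh() -> float:
    with urllib.request.urlopen(PRICE_URL) as resp:
        return float(json.load(resp)["usd_per_kwh"])  # hypothetical response field

while True:
    price = current_price_usd_per_kwh()
    if price <= CHEAP_THRESHOLD:
        print(f"Power is cheap ({price:.3f} $/kWh) - wake the big box / run heavy jobs")
    else:
        print(f"Power is expensive ({price:.3f} $/kWh) - hold off")
    time.sleep(15 * 60)  # real-time tariffs are usually hourly; polling occasionally is enough
```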
With a 16GB NUC I can easily provision 6-7 small VMs without any issues, which is enough for general self-hosting and exam prep. With Docker you can run a simple instance of just about anything and leave it up all the time.
The short answer is yes: servers built to be servers are designed to get the heat out and keep internal temperatures down - so noisy fans, and as much of the heat you generate as possible pushed outside of the box.
Of course, nothing stops you replacing the fans with quieter ones (trading off at least one of cost or airflow) or putting consumer hardware (which has different design goals that you might prefer in the home) in a rack-mount chassis.
The 100W power consumption definitely makes the room warmer. It is approximately like having another person present.
> My very subjective opinion is that at 50 dB the sound level is reasonable for a server like this, but it's definitely not quiet. I would not be able to work, relax or sleep with this server in the same room.
> Although this server is fairly quiet at idle, it does need it's own dedicated room. When the door is closed, you won't hear it at idle, but under sustained load, you will hear high pitched fan noise even with a closed door.
This is a poor tradeoff to me. A low power computer can be left on all the time, and be there for you when you need it for things like openvpn or ssh tunnels etc.
Leaving this on and idle would cost around $20 a month in electricity here in the bay area.
I have other low-power hardware for the OpenVPN / firewall /routing stuff so I don't need this machine turned on.
I got 10 seconds single thread on the DL380p. The scooter is indeed the opposite of this (awesome too btw).
So to have fun: 1 core is ~3 scooters. So this box can do the work of 60 scooters ;) (only counting real cores).
The point on people being happy with slower CPU cores is kind of weird to bring up with a server. Most games don't push CPUs that hard, you usually need a really expensive GPU before you see noticeable benefits in gaming from faster processors.
Having done some core critical work for the last few years (media processing/systems programming), my recent upgrade from a 4th gen i7 to a Zen2 CPU is paying in spades. If I was building a server to do some of the batch processing stuff I'd like, I would definitely invest in a faster, cooler, more power efficient machine. But that's just me. I don't think I could beat that price point though.
It sits powered-off most of the time though because I haven't been able to put it to good use, yet.
For a while it was running my Unifi controller + Pihole ... but you don't need the Unifi controller unless you are actively performing maintenance, and Pihole happily hums along on a Rpi 3 that uses far less power.