
My Home Lab Server with 20 Cores / 40 Threads and 128 GB Memory - louwrentius
https://louwrentius.com/my-home-lab-server-with-20-cores-40-threads-and-128-gb-memory.html
======
greatjack613
To put this in perspective: the new $500 Ryzen 9 3900X matches this $1700 Xeon
server in multi-core Geekbench, and beats it in single-core performance at a
fraction of the power usage.

[https://browser.geekbench.com/processors/2587](https://browser.geekbench.com/processors/2587)

~~~
snuxoll
It also won’t have iLO, etc.

Even for a homelab these are useful features to have, I don’t want to go
hooking up my monitor and keyboard every time I need to troubleshoot some boot
issue or install a new OS. I can also get used DDR3 ECC memory for a hell of a
lot cheaper than DDR4 right now.

Unfortunately the person who wrote this article is in the EU, where the
second-hand market offers much slimmer pickings. I can buy a Dell R620 for
$200-300, two 10-core Xeons for another $300 if it doesn't come with enough
cores, and 128GB of 16GB DDR3 RDIMMs for $200 - total price is under $1000 USD,
and that's excluding any components that come with the server I may opt NOT to
replace.

~~~
mey
Useful iDRAC features like remote display and remote ISO install aren't in the
base Dell iDRAC license. So I still need to hook up a keyboard/monitor if it's
more than a reboot.

Edit: iDRAC is the Dell LOM, iLO is HP

~~~
snuxoll
Sure, but the licenses to unlock them are easy to acquire.

Dell ties them to the service tag, which is quite easy to set via racadm, and
you can buy a license off eBay. Older Dell systems require a $30 part you can
find on eBay. HP just uses a key they send you on paper.

Hell, many times these decoms still have the license applied to them. I have
2x R210 IIs, an R320, an R520, and formerly had an HP ML10 and a Lenovo TD340.
The Lenovo was the biggest pain to get vKVM working on, because of the stupidly
hard-to-find dongle.

Don't buy Lenovo servers for a homelab, BTW. Dell lets you override the ramp
in fan speed when a non-Dell-branded PCIe card is installed; newer Lenovo
servers ramp the fans up to 100% and you can't turn it off (I had to leave my
TD340 on the initial BIOS revision, security issues and all, because of this).

~~~
bubblethink
The 100% fan speed on a non-whitelisted card is such a dick move. Is there a
single legit reason for this? Did someone's building burn down because they
didn't buy the Quadro GPU that Dell sells?

~~~
snuxoll
It's not even limited to GPUs: a quad-port Intel NIC in my TD340 causes it to
max the fan speeds if I'm not running the A01 BIOS.

The argument is that they don't know the thermal requirements of the card
because it's not OEM-branded, so they max the fans "to be safe" - but not
letting you override it is just bullshit.

~~~
sitkack
They do something similar on Chromebooks and 3rd-party mini-PCIe WiFi cards:
if you insert a non-Lenovo-branded card, it will fail to even enter the BIOS
screen, with a Dick Move error.

Also, the same stupid justification - "this configuration hasn't passed FCC
EMI tests" - when the cards are qualified directly, they are the FCC
certification boundary.

~~~
bubblethink
I think Lenovo has stopped doing that recently. However, we now have soldered
wifi cards.

------
eropple
Neat machine, but 96W idle is a _lot_ for a home server, IMO. Maybe you're
somewhere where power's super cheap (and hopefully clean), but a lot of folks
aren't.

I run my old desktop (an i7-6700K) in a rack in my basement now, with 64GB of
RAM, a Mellanox ConnectX for 10G networking, and half a dozen disks, and it
idles under 15W. The entire rack, UniFi gear/PoE wifi APs and all, sits around
50W. 96W just for a single machine is A Lot.

~~~
atonse
In addition to wattage, my other concern for a home server was fan noise. If
it's anywhere near where I am, I really don't want to hear it. So I got these
amazing Noctua fans and simply can't hear the server. (Updating language based
on replies:) a datacenter-style server probably isn't designed for that, so it
wouldn't necessarily work out.

I have an 8-core i5. I bought Intel (18 months ago) because I didn't want to
deal with any AMD incompatibility, especially since I wanted to also run it as
a gaming machine using VT-d. If I were doing it again, I'd definitely go with
Ryzen.

~~~
skunkworker
I'm not sure what you're getting at with home server fans not working out.
I've been using a Noctua on my Xeon home server without any issues, and I
reconfigured the fan sensors and control using IPMI. There's no problem making
a quiet Xeon home server.

~~~
atonse
Sorry, I wasn't clear. I meant something like a rack-mounted server designed
for a datacenter. Would it become whisper-quiet if you just replaced the fans?
Is it that simple?

~~~
snuxoll
Replacing the fans in server systems is tricky: many "quiet" fans are quiet
because they have a lower max RPM, which will freak the hell out of your
management controller. Also, there's only so much optimization you can do to
40mm fans in a 1U chassis.

The fans in my 1U Dell servers are fairly quiet at idle, around 3600 RPM. They
make noise - you wouldn't want to be sleeping or watching TV in the same room -
but with the door to my office closed they can't be heard in the hallway.

~~~
skunkworker
Ah, I understand what they were getting at. If you're using a server
motherboard in a regular 1-4U case, this becomes a different issue entirely.
I'm using a Supermicro X9 with dual Xeons in an upright Enthoo Pro case with
its own fans. You're right about the management controller freaking out,
because the IPMI threshold settings expect the Supermicro 3-fan assembly
instead of different case fans. Fortunately, with some tweaking and ipmitool
you can effectively use fans that have a much larger or smaller range of
acceptable RPMs without your management controller thinking the fans have
dropped below the lower non-recoverable threshold.

Right now my Noctuas are running at about 1000 RPM and keeping the Xeons
around 40C (under load this will increase, with minimal dB).
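For anyone wanting to replicate the ipmitool tweak described above: the relevant command is `ipmitool sensor thresh <sensor> lower <lnr> <lcrit> <lnc>`. A small Python sketch of a wrapper - the sensor names and RPM values are assumptions for illustration; check `ipmitool sensor list` on your own board:

```python
import subprocess

def lower_fan_thresholds(sensors, lnr=0, lcr=100, lnc=200, dry_run=False):
    """Rewrite the lower fan thresholds so slow (quiet) aftermarket fans
    don't trip the 'lower non-recoverable' alarm.  The RPM values here
    are assumptions -- check `ipmitool sensor list` for your board."""
    commands = []
    for name in sensors:
        # ipmitool syntax: sensor thresh <id> lower <lnr> <lcrit> <lnc>
        cmd = ["ipmitool", "sensor", "thresh", name,
               "lower", str(lnr), str(lcr), str(lnc)]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return commands

# Supermicro boards often label fans FAN1..FAN4, FANA -- an assumption here
cmds = lower_fan_thresholds(["FAN1", "FAN2"], dry_run=True)
```

Run without `dry_run=True` (as root, with the ipmi drivers loaded) to actually apply the thresholds.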

~~~
paulmd
My solution was to take some spare Noctua low-noise adapters (basically inline
resistors with fan connectors attached) and just drop the max fan speed. The
fan controller can freak out all it wants; it can never go above about half
RPM, and it still generates a noticeable breeze through the hotswap bays.

My CPU is a 7100 (ECC supported!) with a Noctua L9i so I never have problems
there either. Power draw is a _little_ high at about 70W with 8 3.5" drives
spinning, but most of that is the HDDs (rule of thumb is 5W per drive) and the
alternative would be spinning them down, which isn't ideal.

------
old-gregg
Ugh... that's crazy. Instead, look at the ASRock X470D4U AM4 server
motherboard (with IPMI and ECC) paired with something like a Ryzen 2700, which
should give you near-complete silence, 25W idle power usage, and 8 modern
cores boosting up to 4 GHz, for well under $1K.

Link:
[https://www.asrockrack.com/general/productdetail.asp?Model=X...](https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications)

~~~
whalesalad
Where can you find 128gb of ECC memory for that board for under 1k (w/
processor and board)?

------
zokier
I got bitten by the homelab fever a few years back and got myself a small
server. I had such grand dreams for it that never materialized. Now it sits
unplugged in the corner, depreciating itself :(

Stuff that I was planning to do:

* Managed VM platform (~"EC2")

* Centralized auth (FreeIPA)

* ZFS NAS (also possibly ceph) + backuping

* Container platform

* Your typical web/email stuff

* Monitoring/alerting/log management

* VPN endpoint (and other more advanced networking stuff)

* Probably something more I have already forgotten

I realized that building a private cloud actually takes serious effort, not
just putting some Lego pieces together. There are also some circular
dependencies in there that make bootstrapping more difficult, especially on
one single box.

~~~
mey
To keep it simple, since you already have the hardware, I would suggest
setting up the server as a VM host to allow for experimentation with little
effort.

VMware vSphere Hypervisor, Proxmox, or Microsoft Hyper-V Server 2019 are all
free options. It makes it easy to experiment with the above in parallel.

My current VM Host has

* One VM as a docker host (turtles all the way down) for development tools. Build server, bug tracker, private artifact repo (Sonatype Nexus).

* 3 VMs as a Kubernetes cluster for experimentation

Building a NAS is the only thing that would take significant effort and is a
project unto itself.

Unless you have a strong desire to experiment with failure modes (network
dropping out, killing iSCSI, online VM fail over), stick to a single box.

------
lostlogin
For anyone keen to see and read more, there is a helpful community and great
write ups at
[https://www.reddit.com/r/homelab/](https://www.reddit.com/r/homelab/)

------
867-5309
$1700 seems like an awful lot to spend on this setup, even with 1yr warranty.
I recently bought the following from eBay on a £500 budget:

* Asrock Rack C602 mobo

* 2x Xeon e3-2650Lv2 (20c/40t)

* 8x 8GB Samsung DDR3 ECC

* generic EATX case with fans

* 2x CPU heatsinks

* 3x case fans

* XFX fanless modular PSU

if the price and specs alone aren't compelling enough:

* it runs idle at ~50W

* it has a similar Passmark of >15000

* it has 4x GbE ports

* it has 4x PCIe 3.0 slots (!)

I'd never heard of iLO, which other commenters mention as a selling point, but
a quick search leads me to believe this is HP's take on IPMI, which this mobo
has.

originally built as an HTPC server, I had 2 main criteria for my build: cool
and quiet - hence opting for the low-power processor versions, a 0dB PSU, and
PWM case fans. if you don't require these criteria you can knock ~20% off the
budget.

there was so much power going unused that I binned a few other devices (namely
the crappy ISP-provided router and TV box) and made this build the heart of my
home network. it is now my family's router, firewall, ad blocker, movie and TV
server, game server, free cloud storage manager (synced to every household
device), OS update cache, music streamer, torrent client/server, VM server,
web server, database server, VPN client, proxy server, etc. - the list is
virtually endless. these all run simultaneously, with ample resources left
over for frequent workstation usage.

I should admit that I thoroughly researched every component's specifications
and price, and as such it took me around 3 months of waiting to source them.

I also admit this use case and learning curve is not for everyone, but it was
ultimately a rewarding experience for both my brain and wallet.

~~~
louwrentius
That's the first good example I've seen that really undercuts my price, but it
also has 60% of the single-threaded performance. That's probably still enough -
I can see that it works for you.

I admit I was lazier: I have a separate room for this machine, so I don't mind
the noise that much. I just configured what I wanted and ordered it. I only
did some research on the CPU performance.

~~~
867-5309
yes, I only really notice the single-core performance on OpenVPN streams
which, despite the CPUs' AES-NI and offloading to the mobo's Intel-branded
Ethernet chips, still cap out at ~80Mbps. it's fine for most internet needs
and a few simultaneous video streams, but it really bottlenecks torrenting and
shifting large files around the web on 1GBps FTTH. I toy with the idea of
buying a generic fanless Chinese 4-port 8th-gen i7 15W 'U'-variant box to
handle most networking duties, relegating the behemoth to purely on-demand,
WoL, semi-idle duty, which would cut electricity costs long term; but with
such devices currently priced ~$300, and with Brexit threatening to bloat
that, I am not in any rush. plus it gives me more time to research / discover
/ await / implement a multi-core VPN solution
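On the WoL idea: waking the behemoth on demand only takes a standard magic packet - 6 bytes of 0xFF followed by the NIC's MAC address repeated 16 times, broadcast over UDP. A minimal sketch (the MAC below is a placeholder):

```python
import socket

def make_magic_packet(mac):
    """Build a standard WoL magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(mac_bytes) == 6, "MAC must be 6 bytes"
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the LAN (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# Placeholder MAC -- substitute the server NIC's address
pkt = make_magic_packet("aa:bb:cc:dd:ee:ff")
```

The target machine's BIOS and NIC must have WoL enabled for this to do anything.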

~~~
867-5309
edit/corrections: E5-2650Lv2, not E3-2650Lv2; 1Gbps, not 1GBps (I wish! :)

------
ChuckMcM
For lab setups, "last year's" enterprise hardware can be a good deal. A couple
of notes on the article:

1) Setting up a RAID array isn't that difficult and it makes for more reliable
storage.

2) Using dual supplies actually _lowers_ the fan noise because the supply
running at half power generates less heat than one running at full power. You
can plug them both into the same outlet strip :-)

3) These things have a "lifetime", which is the period during which the things
that support them are easily found on the web. After that, they become
"anchors" without all that support. Very carefully and diligently download and
archive all of the necessary software, drivers, manuals, and extra cables for
the system, so that in another 5 years when it breaks you can reconstruct it
successfully.

~~~
louwrentius
1) That is true. My storage is all SSD, and I'm deploying everything through
Ansible, so I can always redeploy.

If I really need a data backup, I can use one of the other SSDs, or even the
single spinning disk I put in as a backup target.

2) That's interesting. I don't think the noise comes from the PSU, though -
mainly from the six case fans.

3) True point. I'm not too worried. The machine is fully supported by Linux
(no extra drivers required) and the latest SPP is applied. I never expect to
do hardware changes/upgrades down the road. And in five years, whoever's
around then can deal with it ;-)

------
kart23
A boot time of 4 minutes! Is that normal for servers? And if so, why would it
be this long?

~~~
sh-run
It's been a while since I was in a position where I regularly watched servers
boot, but around 2015 I was doing a lot of "boots on the ground" sysadmin and
virtualization work. Servers do a lot of additional testing during boot-up:
temperature sensors, RAID cache batteries, memory, and RAID arrays are all
checked. Some of those checks can be disabled, but you don't really reboot
production servers regularly, so you typically wouldn't want to. The extra 3
minutes of boot time is much easier to deal with than a bad host coming
online.

On top of all that, it's pretty standard (at least in the VMware world) to
store the OS on an SD card. So the OS has to be read into memory, and ESX is
kinda slow to boot even when installed to a disk.

~~~
kart23
makes sense. thanks for the answer!

------
Havoc
I wonder if an old server can be used for heating... like literally exhausting
the air into a duct and piping it into the desired room.

...In that context, crazy-inefficient boxes make way more sense, especially
with the reduced e-waste contribution.

~~~
opportune
Yep, it can - although you would want the server to spend its time doing
something productive rather than just idling or spinning worthless cycles.

There are people who use GPUs and ASICs mining cryptocurrency as heaters for
greenhouses. Spending the energy on crypto mining is basically a way to recoup
some of your energy costs. [https://cointelegraph.com/news/cryptomatoes-
grows-5-acres-of...](https://cointelegraph.com/news/cryptomatoes-
grows-5-acres-of-fruit-from-bitcoin-mining-heat)

~~~
Havoc
>doing something productive rather than just idling or spinning worthless
cycles

Saw a site recently that rents out home GPUs for TensorFlow, Airbnb-style.
Nifty idea.

(don't recall the name sorry)

~~~
dest
Qarnot?

------
otter-in-a-suit
Sometimes I wonder whether going the "homelab" route would have been
easier/cheaper for me. I built my server a couple of months ago, from
scratch.[1]

However, being forced to use a proprietary tool (ssacli) and limited drive
compatibility don't sound desirable. This seems like an odd limitation - is
this normal with these types of projects/machines?

[1] [https://chollinger.com/blog/2019/04/building-a-home-
server/](https://chollinger.com/blog/2019/04/building-a-home-server/)

~~~
louwrentius
Nice blog post!

The tool is not mandatory. You can fully configure storage by booting into the
RAID controller's storage utility.

It just saves you a ton of time.

------
hrangozz
My home lab system of choice is the Lenovo M92p Tiny (or an updated version).

It's a headless system in a tiny enclosure: an i5 processor with up to 16GB of
RAM. Power consumption is nice and low, and they feature remote management via
serial-over-IP, remote power cycling, etc.

They can be had for less than $100 on eBay, and for me they have been rock
solid.

------
muro
I didn't buy a single computer currently at home new - a few-year-old machine
is almost new and indistinguishable in performance. However, I would stay away
from servers - workstations (e.g. the Z440) can be bought with almost the same
hardware for a similar cost, yet are quiet.

~~~
davidgerard
been eyeing up workstations for a gaming rig. Second-hand, 6-12 months'
warranty, a ton of CPU cores, a ton of memory... Nvidia Quadro cards aren't
_directly_ equivalent to the same generation of GTX, but your game will run
_just fine_ ...

~~~
davidgerard
aaand we just ordered a Dell T3610 (from 2014) with four-core Xeon E5-1620v2
and Quadro K4200 4GB (also from 2014) and 32GB RAM. Just under £600. The loved
one is looking forward to her new video production workstation and gaming rig.

~~~
muro
Maybe a newer graphics card would make sense (depending on the games),
otherwise nice find!

~~~
davidgerard
oh yeah, a K4200 is about £100-200 second-hand. About equivalent to a GTX
960/970 for gaming. Hot stuff for 2014!

The loved one spent about five minutes yesterday evening just watching the
rendering of the water in WoW. Quite the improvement over her HP Envy laptop,
l33t as it was for a laptop ...

The PC is a huge black monolith. Easy to open, add cards to, etc - extremely
maintainable. Also just about silent, hauntingly so.

But really - everyone after a new box should go to eBay, PCs, search on
"workstation". There's lots of ex-corporate beast machines just waiting for a
home. Put 20 threads to work on _your_ compilation!

~~~
muro
I think a K4200 is the equivalent of a GTX 760, so a generation earlier than
the 960.

The K is for Kepler :)

------
blackflame7000
I built a very similar computer, and tbh I prefer to use an 8th-gen Core i7 at
4.8GHz, as it is faster and less power-hungry for most everyday applications.
JavaScript, for example, is mostly single-threaded, and you can see a
noticeable difference in web page loading times.

------
sgt
Been there, done that. Today it's obvious that your home server should be in
one place only: the Amazon cloud, running as individual Lambdas and perhaps
some m4.large EC2 instances.

Jokes aside, I ran a DL160 server at home for a couple of years until the
motherboard started acting up. The fans would all go to 100%, and then back
down again, then remain at 100% for a while.

Then there were intermittent crashes.

The only solution was to replace the entire motherboard. At that point I
stopped looking and replaced the whole darn thing with an old headless MacBook
Pro running Linux.

For my purposes it is fine, and I haven't looked back. The power savings are
great too.

~~~
louwrentius
I haven't touched on this in the article, but for $1700 you can get quite a
lot of cloud stuff.

Still, even with power costs factored in, I think I'll be better off with my
own hardware.

~~~
sgt
I also prefer having something local. Then you're also completely immune to
any network issues outside of your house.

------
briffle
I bought a 2013-era workstation last year (a Dell Precision T3600). It had a
6-core Xeon and 32GB of ECC RAM, and was $300 (I had my own hard drives, and
got a decent video card).

It works great as a Linux workstation, but it's just so hot. My office is the
warmest room in the house; I actually have to run a window air conditioner to
make it comfortable (I work from home, so I use the office all day). I imagine
it's probably one of the largest power consumers in the house (including the
cost to cool it with the window AC).

I look forward to tax season, I am going to replace it with a new AMD 3700x
based system next year.

------
jagger27
I picked up a Dell R820 with quad 8-core CPUs (E5-4650L) and 96GB (24x 4GB) of
RAM for USD $700 in March this year. Because of the memory mezzanines, it's
only half full in this configuration. And if I find a decent deal on E5-46xx
V2s at some point, I could get up to 96 threads.

I even managed to get it to boot from a PCIe NVMe drive, with an internal USB
stick running the Clover bootloader (yes, the Hackintosh bootloader) to
bootstrap into Ubuntu. It makes for a great VM server.

Also helps that iDRAC 7 is aeons ahead of the horrible iDRAC 6 servers I was
using before.

~~~
whalesalad
> iDRAC 7

Being able to completely control every aspect of the thing from my Mac via a
simple Browser + VNC combo is incredible (even when it is powered off)

------
parkaboy
When I was looking at Monero mining ages ago, I built a rig using the Dell
PowerEdge R810. It has 4 CPU sockets, in which I have Xeon E7-4860s (for
mining that uses AES) that each have 10 cores/20 threads - 40 cores/80 threads
total. I found cheap deals on both by scouring eBay. The entire setup cost me
< $500.

However... it was noisy AF and consumed something like a few hundred watts -
maybe as much as 500. Needless to say, I have not been running it. Does anyone
have tips on where to get cheap power? ;)

~~~
OrangeMango
> Does anyone have some tips on where to get cheap power?

Do you live in a location that offers real-time pricing? Where I live, you can
opt in to such a scheme, monitor an API from the power company, and
adjust/schedule your power usage to favor times when electricity is very
inexpensive. [1] Sometimes you might even get paid to consume electricity:

> Negative Prices: With real-time hourly market prices, it is possible for the
> price of electricity to be negative for short periods of time. This
> typically occurs in the middle of the night and under certain circumstances
> when electricity supply is far greater than demand. In the market, some
> types of electricity generators cannot or prefer not to reduce electricity
> output for short periods of time when demand is insufficient, and as a
> result some generators may provide electricity to the market at prices below
> zero. Since Hourly Pricing participants pay the market price of electricity,
> they are actually being paid to use electricity during negative priced
> hours. Delivery charges still apply.

[1] [https://hourlypricing.comed.com/live-
prices/?date=20190813](https://hourlypricing.comed.com/live-
prices/?date=20190813)

------
zantana
I'm always wondering what people are doing in these homelabs that they need
this type of hardware. I worked for an MSP, and sometimes people would take
home an old DL380 or something, but it always seemed like a waste.

With a 16GB NUC I can easily provision 6-7 small VMs without any issues, which
is enough for general self-hosting and exam prep. With Docker you can run a
simple instance of just about anything and leave it up all the time.

~~~
louwrentius
Maybe I have to admit my purchase is not entirely rational. But it's fun!

~~~
zantana
Well I can't argue with that.

------
sigmonsays
Where do you store this machine? How is the noise? Also, I wonder about the
heat generated - does it keep the room toastier than the rest of the house?

~~~
OJFord
Many examples on reddit.com/r/homelab - including even more power-hungry
systems (or several of them in a rack or two).

The short answer is yes: servers built to be servers are designed to get the
heat out and keep internal temperatures down - so noisy fans, and all the heat
you generate ends up _outside_ of the box.

Of course, nothing stops you replacing the fans with quieter ones (at the cost
of more expense, less air movement, or both) or putting consumer hardware
(which has different design goals that you might prefer in the home) in a
rack-mount chassis.

------
Havoc
This is the reason I browse /r/homelab on reddit. It doesn't make any sense to
me, but it is somehow cool anyway.

------
dev_dull
> The server reports around 98 Watt of power ... by default, this server is
> turned off.

This is a poor tradeoff to me. A low-power computer can be left on all the
time and be there for you when you need it, for things like OpenVPN or SSH
tunnels, etc.

Leaving this one on and idle would cost around $20 a month in electricity here
in the Bay Area.

~~~
louwrentius
It would probably cost me 2 euros per year per watt, so I think it's about the
same cost.

I have other low-power hardware for the OpenVPN / firewall /routing stuff so I
don't need this machine turned on.
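The 2-euros-per-year-per-watt rule of thumb checks out: one watt drawn continuously is 8.76 kWh per year, so at roughly EUR 0.23/kWh (an assumed typical European rate) that lands at about EUR 2. A quick check:

```python
def annual_cost_eur(watts, eur_per_kwh=0.23):
    """Yearly electricity cost of a constant load.  0.23 EUR/kWh is an
    assumed rate; substitute your own tariff."""
    kwh_per_year = watts * 24 * 365 / 1000  # 8.76 kWh per watt-year
    return kwh_per_year * eur_per_kwh

per_watt = annual_cost_eur(1)    # ~2 EUR per year per watt, as stated
idle_box = annual_cost_eur(98)   # ~197 EUR/year if the 98 W server idled 24/7
```

Which also squares with the parent's Bay Area figure: at US residential rates near $0.28/kWh, the same 98 W works out to roughly $20 a month.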

------
adolph
It would be interesting to compare this approach with an updated Atwood's
Scooter Computer.

[https://blog.codinghorror.com/the-scooter-
computer/](https://blog.codinghorror.com/the-scooter-computer/)

~~~
louwrentius
sysbench --test=cpu --cpu-max-prime=20000 run

I got 10 seconds single-threaded on the DL380p. The scooter is indeed the
opposite of this (awesome too, btw).

So, to have fun: 1 core is ~3 scooters, so this box can do the work of 60
scooters ;) (only counting real cores).

------
wodenokoto
In this context, what is a lab setup? A LAMP stack? A TensorFlow training
server?

~~~
louwrentius
In my case a KVM server to spin up virtual machines in which I can test
anything from playing with Kubernetes to Grafana, Elasticsearch, Ubuntu MAAS,
or whatever I want to toy with.
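Spinning up such a throwaway VM on a KVM host is usually a one-liner with virt-install; a sketch that just assembles the command line (the VM name, ISO path, and sizing are placeholder assumptions):

```python
import shlex

def virt_install_cmd(name, vcpus=2, mem_mib=2048, disk_gib=20,
                     iso="/var/lib/libvirt/iso/lab.iso"):
    """Build a virt-install command line for a throwaway lab VM.
    The ISO path, sizing, and name are placeholders."""
    cmd = (f"virt-install --name {name} --vcpus {vcpus} "
           f"--memory {mem_mib} --disk size={disk_gib} "
           f"--cdrom {iso} --os-variant generic")
    return shlex.split(cmd)

cmd = virt_install_cmd("k8s-node-1")
```

Feeding the result to `subprocess.run` (or just pasting the joined string into a shell) on a libvirt host would create and boot the VM.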

------
timw4mail
So... what's the real advantage of using server-grade hardware for this use
case? Wouldn't a desktop/workstation with consumer parts work just as well,
and give better performance for a similar price?

~~~
louwrentius
Some examples are reported by people in this post. Where I live, I could not
get 20c/40t + 128GB for this price point any other way.

------
holy_city
I think for $1700, with that memory/storage capacity, it's a great deal, but
those benchmarks can be topped by higher-tier consumer CPUs today.

The point about people being happy with slower CPU cores is kind of weird to
bring up with a server. Most games don't push CPUs that hard; you usually need
a really expensive GPU before you see noticeable benefits in gaming from a
faster processor.

Having done some core-critical work for the last few years (media
processing/systems programming), my recent upgrade from a 4th-gen i7 to a Zen
2 CPU is paying off in spades. If I were building a server to do some of the
batch processing stuff I'd like, I would definitely invest in a faster,
cooler, more power-efficient machine. But that's just me. I don't think I
could beat that price point, though.

~~~
whalesalad
Dunno... I have an R720 with similar specs (half the RAM, though) and it was
only $400. I also idle at around ~100W, because I ripped all the 10k SAS
drives out and put SSDs in.

It sits powered off most of the time, though, because I haven't been able to
put it to good use yet.

For a while it was running my UniFi controller + Pi-hole... but you don't need
the UniFi controller unless you are actively performing maintenance, and
Pi-hole happily hums along on an RPi 3 that uses far less power.

------
vorpalhex
This is a neat writeup, thanks! Do you use a proper server rack and if so,
where do you keep it installed? Right now my homelab is in a tower, but that's
obviously not ideal for density.

~~~
louwrentius
Thank you. I use a LackRack setup.
[https://gathering.tweakers.net/forum/list_messages/1656249/l...](https://gathering.tweakers.net/forum/list_messages/1656249/last)

------
aussieguy1234
How good would this box be at mining CPU based cryptocurrencies, like Monero?

------
eanzenberg
But can it play Crysis?

