> But when I showed off my build on /r/homelab, reddit’s homelab subcommunity, they mocked me as a filthy casual because I used consumer parts. The cool kids used enterprise gear.
It should be noted that you can purchase used enterprise servers for peanuts.
I'm looking right now at an ad for a used Dell PowerEdge R730xd SFF 24x that packs two Intel Xeon E5-2680 v3 CPUs and 64GB of RAM, on the market for about $1,200, and it was literally the first search result.
A Dell PowerEdge R720 8x 2U LFF with the same CPU/RAM/HDD combo is on the market for less than $1,000.
A Dell Precision T7610 workstation, with dual Xeon E5-2670s and 64GB of RAM, can be purchased for around $600.
So reading through the article, he really didn't know what he was doing and should probably have asked /r/homelab for assistance before he just tore ass into building this thing.
He says he got mocked for building with consumer gear (he didn't really; the poster, while standoffish, did bring up good points). Then he goes and buys parts piecemeal from eBay and builds again...
I bought a Dell PowerEdge R610 w/ 48 GB of RAM, two quad-core CPUs, and two 146 GB 10K RPM drives for $229 on eBay about 3 or 4 years ago. That system still functions as my development server for testing today, and will likely have enough oomph to do the job for another year or two.
At which point I'll buy a Dell PowerEdge R730 with 192 GB of RAM, dual 8-to-12 core CPUs, and a few SSDs for $500 or so... maybe even cheaper.
I asked a lot of questions of IT friends of mine who work in enterprise before I purchased any enterprise-grade gear, and I also watch a lot of Level1Techs (since Wendell likes to repurpose enterprise stuff for home use as well) to get new ideas for cool projects.
If you go half-assed into any domain without even bothering to ask the journeymen and experts, and then post about it on their own board / subreddit, yeah, you probably will get made fun of.
That's true in the US, but when I looked into buying in the past, once in London and another time in Tel-Aviv (and did a quick look at other places for comparison), it seemed like everything local was being sold at ~50-75% of the original price, which -- factoring in efficiency, CPU advances, and wear -- is more expensive than buying new.
It's probably a cultural thing, but it seems that most enterprises prefer to offload at these cheap prices to resellers, who then like to charge high prices (except, somehow, in the US). Also - both in London and in Tel-Aviv - I've never known any business that bought second-hand servers or workstations. I'm sure these businesses exist, but every IT person I've ever talked to only buys new.
The US second-hand equipment economy makes sense to me; no other place in the world's does.
While those prices seem pretty good for first world countries, many of the people in my country don't make more than $1000 per month, so it'd still be a substantial investment.
There's nothing wrong with consumer hardware for homelabs (except for ECC RAM sometimes not being supported). For example, I use Athlon 200GEs for mine because of the 35W TDP, plus however much RAM I can buy, with some Seagate HDDs for storage.
That's definitely enough to run some virtualization, though nowadays I mostly use Docker Swarm (K3S would also run with few issues on that hardware).
Of course, there is also the option of looking at alternatives to expensive cloud vendors: not everyone needs AWS, GCP, or Azure, which are better suited for businesses. Some alternatives that come to mind are:
Personally, I use an even cheaper host called Time4VPS, which is in Lithuania: https://www.time4vps.com/?affid=5294 (disclaimer: affiliate link), and which I've been using for a few years. Of course, using something like BackupPC https://backuppc.github.io/backuppc/ to make incremental backups with rsync is also useful with any of these services.
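If you'd rather roll that part by hand, the underlying idea is simple. Here is a minimal sketch of rsync hardlink snapshots (this only illustrates the general technique, not how BackupPC itself works internally, and the host and paths are made up):

    # Sketch: rsync snapshot backups where unchanged files are hardlinked
    # against the previous snapshot. Host/paths below are placeholders.
    import datetime
    import os
    import subprocess

    SOURCE = "user@my-vps.example.com:/var/www/"   # hypothetical remote source
    BACKUP_ROOT = "/backups/my-vps"                # hypothetical local target

    dest = os.path.join(BACKUP_ROOT, datetime.date.today().isoformat())
    latest = os.path.join(BACKUP_ROOT, "latest")   # symlink to newest snapshot

    cmd = ["rsync", "-a", "--delete"]
    if os.path.exists(latest):
        # Unchanged files become hardlinks into the previous snapshot, so each
        # daily snapshot only costs the space of what actually changed.
        cmd.append("--link-dest=" + os.path.realpath(latest))
    cmd += [SOURCE, dest]
    subprocess.run(cmd, check=True)

    # Repoint "latest" at the snapshot we just made.
    if os.path.lexists(latest):
        os.remove(latest)
    os.symlink(dest, latest)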
> While those prices seem pretty good for first world countries, many of the people in my country don't make more than $1000 per month, so it'd still be a substantial investment.
That price is just the benchmark for that specific combination of CPUs and RAM.
There are plenty of used servers on the market that are being sold for less than €200.
The used workstations make a lot of sense for low noise and great price. Focus on RAM. Homelabs may not need a lot of CPU as things are generally idle, but admins always find a reason to run more VMs and eat up all the RAM they have. I went with the HP Z820 and 96GB RAM for under $1000.
The best server I have ever purchased or built is the 710 I bought 4 years ago for $400-500. It's been rock solid, unlike every machine I have ever built from scratch, which always had weird issues (most not occurring with any regularity, but annoying enough when they occurred).
Even if one can save a few pennies by building a server like this themselves, I'd always strongly recommend they go with a similarly specced off-lease server.
Yeah, but at what (power) cost? I'm sitting on a J5005-based home-built NAS (ITX system with 6x 3.5" drive slots) and a 6500T uSFF PC as a hypervisor. I don't need the CPU core count of the author, sure, but I'm running at maybe 150W under full load and about 50W idle across both systems.
My whole stack is, on average, less costly to run than a single standalone Xeon v3 CPU. But then power cost is a bigger priority for me than more cores.
The only ones I can find on the classifieds site here in Norway are listed at several hundred dollars. $600+
In general I think it is difficult to come by cheap computers here in Norway.
I remember, back when I was a member of a student union for people interested in computers and programming, former members or friends of current members would sometimes donate decommissioned servers to our student union. That was great. And it also makes me wonder why I don't see any reasonably priced (or even free?) servers on the classifieds site. I suppose in most cases the decommissioned servers are just sent to recycling stations. Or employees get to take them or something.
I also run a home VM server for several things, and you can do very very well with consumer hardware - and you don't need to go super high end.
One of the biggest disadvantages with using older enterprise hardware is that the power draw and noise can be substantially more than using up to date mid-range consumer hardware. There's not anywhere near as much well-priced second-hand hardware available outside of the USA, either.
I refreshed mine recently, and I went for an ASRock DeskMini H310 (not much bigger than NUC sized, but takes a full desktop processor), an i7 9700, 64GB of RAM, 2x 2TB SATA SSDs and a 1TB NVMe SSD. Runs Linux + KVM/qemu, is near silent, and overall works very very well - for a total cost of about £1200 incl VAT.
I'm not convinced it makes me directly more productive - really it's a hobby. But it's very useful for educational purposes - it teaches a lot of skills which are helpful professionally - especially if you work in smaller businesses and not FAANG.
On that note: I highly recommend not using a NUC; they get super noisy as soon as you have any load.
I would otherwise also use consumer hardware if it weren't for IPMI. Particularly if you are running an OS like Windows, which loves rebooting randomly and whose updates often go wrong or take hours: if you don't have a way to see what is happening while the server isn't online, you end up making a lot of trips to the server with a screen. I think the sweet spot is a server MB (and therefore CPU) but everything else consumer.
I do understand this sentiment, but I think it is waaaay overestimated by most people. I run a (mostly) full Windows environment and I've had to use a monitor/etc. twice in the past 12 months: once after moving to a new apartment and standing things up, and once after I replaced the boot drive in my NAS and moved to Server 2019 (ran the install and re-mounted disks with the PC on my desk). I've actually figured out that my old, PoS Sentio SuperBook 'just works' with modern DisplayLink drivers on Linux or Windows, so I have a tiny, battery-powered screen/KB/mouse and USB port I can connect if needed.
I'm also not running any work or professional stuff, just Plex with an array for storage, a Windows domain (2x DCs), Pi-hole in a VM, and Ubuntu Server for UniFi and stuff that works better on Linux. No websites or developer spaces, though my uSFF Hyper-V box has free RAM if needed. The Windows boxes prompt for updates once or twice a month as none of that is managed, but reboots happen at 2am or so midweek and I don't have issues, other than once when I overloaded a J5005 ITX and it took maybe 20 minutes to reboot due to VMs taking forever to shut down. Now those are all on the uSFF box with a 6500T and it reboots in a few minutes even with updates.
The nice thing with using smaller hardware that is also super quiet is that you can just leave it on your desk - in this case the machine is on a secondary input to one of my monitors - a Dell that also has a built in USB switch, so changing the monitor input over also moves my keyboard and mouse over!
I'd go for a server motherboard if the machine was remote though - eg if it was going to sit in a cupboard, basement, etc. Remote media with a decent IPMI is a godsend for that.
I've run a homelab for years. My current setup is: two HP ProLiant tower servers with twin Xeons and 32GB RAM, with about 4TB of storage. They run Windows Hyper-V Server and are always running 10+ VMs, both Linux and Windows. I run AD, Exchange, SQL, web, plus various Debian installs doing specific stuff. One of the servers is dedicated to lab work related to my actual job.
I buy the hardware on eBay as refurbished; something that's ~4 years old can be had for under £500 and is always very capable. When I started out, I self-built the VM server out of consumer parts, but due to the 24/7 usage I had endless problems, so I switched to enterprise kit. They perform faster and have better specs for the price if you buy them secondhand. Most enterprise servers on eBay come with quad-port NICs as standard. If you are lucky you'll get iLO or iDRAC included as well. They are a bit noisy, but they live in my garage, so I don't hear them.
Currently one of my servers has some sort of memory fault causing it to reboot every 6 weeks or so. With consumer kit you'd be dead in the water, but on boot it just disables that DIMM and carries on with slightly less memory. I need to fix it... but time.
In short, I would always recommend an enterprise server over self-build for a homelab. You want it to be reliable and not have to worry about it.
I have found that homelabbers have broader skill sets, all else equal. Those skills probably never get used in a large company, but in a smaller company they sometimes do.
The same can be said about basically any workflow investment.
Of course "tons of productive developers manage without." Most productivity changes aren't make or break. That does not mean with hindsight it would be a bad investment.
If the author had suggested learning a few key bindings or setting up CI, sure.
But a homelab that costs tens to hundreds of hours to set up and maintain, all for "sparing me the headache of dependency conflicts and TCP port collisions"...
I'm not saying I've proved it has a poor payoff. I'm saying I don't believe it has a good payoff, similar to all the other alleged "helps me work better" use cases for homelabs.
Homelabs are fun, and a place to learn/experiment.
Here's what I did back in 2014 because I wanted a fast local machine for cheap, AND the smallest amount of labor.
I actually don't like researching and fiddling with hardware. I did it 20 years ago, but it doesn't feel worth it now.
- Buy a Dell XPS for $700. At the time I got one of the fastest Intel CPUs on the market -- I think it was THE fastest in terms of single-threaded performance. The whole machine is not that much more than the cost of buying the CPU alone. Look at the CPU prices on Newegg or Amazon.
- Buy a bunch more RAM and disk off Newegg. It's significantly cheaper than getting it from Dell.
(Optional: I replaced the video card with one from Amazon, because the fan on the stock XPS was very noisy.)
I'm using that machine now and it's fast and stable. The case isn't ideal for expansion, but it works fine. It will fit a bunch of disk and RAM.
-----
Looking at Dell's site 6 years later, I think the strategy still holds. They go from $600-$800.
So the total cost with a bunch of disk and RAM is like $1000-$1200. And you spend 10 minutes unpacking the XPS box, and maybe 20 minutes swapping out the RAM/disk/video card, and you're done.
I believe that is a very good low effort solution to getting significant computing power. In my case, I wanted to build Android and other Linux distros from scratch, which requires a lot of CPU and I/O speed.
If you start going with custom components the cost adds up pretty fast, and the possibility of incompatibilities goes up nonlinearly. So I think it's a good idea to outsource most of this work to Dell. I have seen people spend WAY too much time being a system integrator.
It's not going to satisfy everyone, but I'd say it's the 90% solution for 40% of the price.
(I now feel tempted to upgrade my 6 year old machine, although I don't really need it :) )
----
Addendum: Wow, the CPU the XPS comes with is SIGNIFICANTLY faster than that of the OP's build:
17475 overall / 2927 single-threaded vs. 15513 / 1801. That is a HUGE difference because of the single-threaded speed (2927 vs. 1801 is roughly 60% faster per core).
Although the Android build almost perfectly parallelizes (e.g. could use 16 or 32 cores), most software doesn't.
This is what I noticed back in 2014: desktop CPUs have better single-threaded performance than server CPUs, and that matters for a lot of applications, including running VMs I imagine.
It looks like the total cost of this build was $1,368.74.
So again I would go with the XPS + upgraded disk/RAM solution in a heartbeat. For $1000-1200 you'll get a significantly better machine.
> desktop CPUs have better single-threaded performance than server CPUs, and that matters for a lot of applications, including running VMs I imagine.
For home lab situations, where almost everything is idle all the time, you're probably right.
But when you're working with lots of active VMs on hosts, the overall CPU throughput (more cores, hopefully at fairly high clock speeds) is important.
Just to re-iterate, the article author made the choice of "more cores" over "higher clock speed" for their situation already, and their build shows that. :)
This is a really good write-up - I enjoyed it. Lots of agreement on reboots, remote admin, where you put your disks (local SSD is just incredibly fast for pretty low cost relative to anything).
iDRAC enterprise licenses are $50 or much less on ebay. TinyPilot looks better though! This has ALWAYS been a pain point for me - I just hate the cable / monitor / keyboard dance.
I've been thinking about a home build recently - 1Gbps to the house with a redundant wireless ISP on the roof has made this more appealing. I'm mostly pure AWS and wanting to try GCP, so I'm torn between doing the GCP experience buildout (more sensible) and the home server (more fun).
I'm always astonished that Proxmox is not the first choice if you don't have to use VMware stuff. I've installed at least 30 Proxmox hosts, some with Ceph, some with ZFS, and others with hardware RAID and XFS, with full KVM or containers... and it's just a perfect product.
Have you ever had to use the Ceph features - recovery of disks, nodes, etc.? If so, how much tinkering did it take and how much did you have to learn in the process?
Author here! Happy to take any questions or feedback about this post.
I'm not an expert on computer hardware or building servers, but I feel like there aren't good resources for people at the intermediate level of choosing components for their home server, so I wrote this in the hopes that it might be useful to others interested in building a home VM server.
How did you go about picking the software? You could have picked xcp-ng, hyper-v, or just run a small Linux machine with QEMU-KVM that you could SSH in or VNC into and change. This would allow SCP or FTP capabilities for your ISOs and images. I've personally never used Proxmox, but I'd like to know how you ended up picking this one over other solutions. Why ditch ESXi?
How do you end up managing and reusing the same VMs? Do you have a template and you clone the template? Or do you have a way of running both a template VM and an application VM, and have the app VM use the template VM for most of its OS-level needs? If you end up having 10 VMs, how do you properly update?
I wanted to go with something that was pretty mainstream. I don't care that much about cutting edge features, and I'm more interested in a mature product that I don't have to tinker much with. My experience with Kimchi made me more cautious about taking a chance on smaller projects.
I keep wanting to like ESXi and not really enjoying it. I had a hard time scripting things with ESXi, and certain things seemed to require performing actions in the GUI. I can't remember at this point where the gap was, but I wasn't ever able to figure out how to clone a template and boot it up via CLI, whereas I found it easy to do that with Proxmox.
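For reference, the kind of thing I mean is cloning a template and booting it in a couple of commands. A rough sketch using Proxmox's qm CLI, wrapped in a tiny Python script purely for illustration (the VM IDs and name are placeholders, not my actual setup):

    # Sketch: clone a Proxmox template VM and power on the clone.
    # Assumes this runs on the Proxmox host; IDs/name are made up.
    import subprocess

    TEMPLATE_ID = 9000   # hypothetical ID of the pre-configured template
    NEW_ID = 120         # hypothetical ID for the new clone
    NAME = "dev-scratch"

    subprocess.run(
        ["qm", "clone", str(TEMPLATE_ID), str(NEW_ID), "--name", NAME, "--full"],
        check=True,
    )
    subprocess.run(["qm", "start", str(NEW_ID)], check=True)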
>How do you end up managing and reusing the same VMs? Do you have a template and you clone the template? Or do you have a way of running both a template VM and an application VM, and have the app VM use the template VM for most of its OS-level needs?
Yeah, I just have one pre-configured Ubuntu 20.04 template VM, and I clone that whenever I need a new VM. Occasionally, I'll boot up the template just to update its packages and tweak anything. I do all my VM management with Ansible, so there's a standard set of scripts that update and configure the machine whenever I clone a new one.
>If you end up having 10 VMs, how do you properly update?
I sometimes use Ansible to update everything, but honestly I get behind on updates. I wouldn't do that with an Internet-facing server, but none of these VMs are accessible outside my local network, so there's not really much attack surface.
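When I do catch up, it's basically a loop over the VMs. Here's a stripped-down stand-in for what the Ansible play does, using plain SSH instead of Ansible (the hostnames are made up, not my real VM names):

    # Sketch: update every VM over SSH. My real setup drives this through
    # Ansible's apt module; this is just the plain-SSH equivalent.
    import subprocess

    VMS = ["vm-web", "vm-db", "vm-scratch"]   # hypothetical hostnames

    for host in VMS:
        print(f"updating {host}...")
        subprocess.run(
            ["ssh", host, "sudo apt-get update && sudo apt-get -y dist-upgrade"],
            check=True,
        )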
if motherboards were sparse why not look on ebay or something?
this motherboard seems like a really bad choice, because it's for a workstation not a server - doesn't have vga out, does have audio output and doesn't have IPMI
meaning you not only paid extra for the wrong MB but also for the GPU and KVM
i'd also go for a better power supply than bronze - at least gold, if that thing runs 24/7. The return on investment is like ~2.5yrs depending on the PSU (at least in my country)
>if motherboards were sparse why not look on ebay or something?
I've never tried that. I thought that newegg had pretty good coverage for motherboards, and they make it easy to filter by CPU type, whereas I don't think eBay does. I just tried searching eBay for "intel 2011-v3 dual" and I did get
>this motherboard seems like a really bad choice, because it's for a workstation not a server - doesn't have vga out, does have audio output and doesn't have IPMI
I don't need audio output, so that doesn't make a difference to me. VGA output would be convenient, but it's also pretty easy to fix with a cheap GPU. IPMI would have been nice, and some readers have pointed out to me that SuperMicro has IPMI on a very similar board, so I should have gotten that one.
>i'd also go for a better power supply than bronze - at least gold, if that thing runs 24/7. The return on investment is like ~2.5yrs depending on the PSU (at least in my country)
Does that assume the server runs under load 24/7? My CPUs are at very low load most of the time.
> >if motherboards were sparse why not look on ebay or something?
> I've never tried that. I thought that newegg had pretty good coverage for motherboards, and they make it easy to filter by CPU type, whereas I don't think eBay does. I just tried searching eBay for "intel 2011-v3 dual" and I did get
> >this motherboard seems like a really bad choice, because it's for a workstation not a server - doesn't have vga out, does have audio output and doesn't have IPMI
> I don't need audio output, so that doesn't make a difference to me. VGA output would be convenient, but it's also pretty easy to fix with a cheap GPU. IPMI would have been nice, and some readers have pointed out to me that SuperMicro has IPMI on a very similar board, so I should have gotten that one.
I agree - you don't need audio output, but you've paid for it. That was my point. You could've saved on all three things by choosing the right motherboard. (edit: the 3 things - integrated audio, dedicated GPU, KVM)
> >i'd also go for a better power supply than bronze - at least gold, if that thing runs 24/7. The return on investment is like ~2.5yrs depending on the PSU (at least in my country)
> Does that assume the server runs under load 24/7? My CPUs are at very low load most of the time.
also, you might wanna check in the BIOS what mode you're running - performance or eco or whatever (honestly not sure if workstation MBs are different in this regard, i'm only dealing with server MBs) - you might be running at a higher TDP than you think anyway
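on the PSU point: the payback estimate is just back-of-the-envelope math. a sketch with assumed numbers (the efficiency figures, draw, electricity price, and price difference are all guesses - plug in your own):

    # Sketch: rough payback for a Gold vs Bronze PSU on a 24/7 box.
    # Every number here is an assumption.
    avg_dc_load_w   = 100    # average DC-side draw of the server
    eff_bronze      = 0.85   # typical 80+ Bronze efficiency at this load
    eff_gold        = 0.90   # typical 80+ Gold efficiency at this load
    price_per_kwh   = 0.30   # EUR; varies a lot by country
    psu_price_delta = 40.0   # EUR extra for the Gold unit

    wall_bronze_w = avg_dc_load_w / eff_bronze
    wall_gold_w   = avg_dc_load_w / eff_gold
    saved_kwh_per_year = (wall_bronze_w - wall_gold_w) * 24 * 365 / 1000
    saved_eur_per_year = saved_kwh_per_year * price_per_kwh

    print(f"~{saved_kwh_per_year:.0f} kWh/year saved, "
          f"payback in ~{psu_price_delta / saved_eur_per_year:.1f} years")

with these numbers it comes out around 2-3 years; with a mostly-idle box the wall-power gap shrinks and the payback stretches accordingly, which is exactly what the question above is getting at.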
An avenue that can be useful to explore - and my personal preference for home (quiet) gear - is professional workstations. They're often server grade gear, but in a tower form factor and made for low noise output. They can be pretty cheap on Ebay.
For example, the Dell Precision 7810. Ebay (US) examples:
Those look pretty good CPU-wise, also having dual E5-2680s. They're the newer v4 revision though (more cores), and they also leave plenty of room in the budget for upgrades (RAM, SSD).
---
If you specifically wanted server gear for some reason though, enterprise (server) gear generally have tower models.
Careful research is needed though, to make sure you're getting something that isn't too noisy. ;)
Those both would leave plenty of room for upgrading the ram to 64GB, and (depending on their purchased options) may also come with remote KVM. Note that I haven't personally mucked around with Dell server gear in years, so no idea of their KVM situation these days. ;)
For that model (PowerEdge T620) specifically, its noise seems potentially workable after it gets past the first part of server boot:
Note that although I'm just showing Dell gear here, HP, Lenovo, etc all have their own lines. eg HP Z840. With a bit of Ebay searching, it's possible to find some really good deals. :)
> One downside for me is that I enjoy the process of picking out all the parts ...
While that's true for me too, there turn out to be tonnes of different workstations around. Different models, generations, etc.
So, the same interest + effort in picking out parts translates pretty cleanly into investigating + figuring out the best model(s) for you, then looking for the best deal on Ebay (etc). They often come with a mix of parts (various GPUs, network cards, etc.) which are interesting to explore and choose between. :)
I did appreciate reading your article. However, I'm not sure about the quality of the feedback given by the reddit crowd regarding your previous home lab setup.
Reasons:
1. Using pre loved (= second hand) rack servers in the living room with redundant PSUs will cause a lot of trouble with every female living in the same household. It is too noisy....
2. Consumer grade hardware is pretty reliable nowadays. I've had no failures for quite some years, despite the fact that I'm operating my machines at the upper limit of their thermal specifications. (I have a GPU cluster and a cluster of RPis.)
3. A Ryzen or even a Threadripper is much more power-efficient than the good old Xeon E5s, so you may have lower operating costs.
Server noise aversion isn't only a female thing. I'm male, and replace my home server fans with blessedly quiet Noctuas, and the 24/7 servers are usually passively-cooled Atoms, or running on repurposed OpenWrt SOHO routers.
BTW, although servers-at-home techies seem to be disproportionately male right now, maybe it doesn't hurt to mention that servers are equal-opportunity. For example, I know at least a few women with servers at home. One even used to have one of the fridge-sized Cray supercomputers at home, just because. Another, when I saw a K8s-at-home post a few days ago, was the first person who came to mind to send it to, since she's the most enthusiastic, besides being a recognized technical expert on some related enterprise architecture. It feels a bit silly to articulate examples, but it seems that people (and kids especially) subconsciously pick up on signals about what we are supposed to do, and those signals can be misleading.
> will cause a lot of trouble with every female living in the same household
You meant "with everyone not having this hobby". You know there are females with this very hobby, don't you? Even the r/homelab and r/datahoarder mods are sick of it.
> 1. Using pre loved (= second hand) rack servers in the living room with redundant PSUs will cause a lot of trouble with every female living in the same household. It is too noisy....
Speak for yourself! I love our two servers and custom-hacked rack! Yeah, it can be noisy, but we just found a way to get it out of the way a bit. Not a problem!
Every once in a while the drinking glasses in the kitchen would start to clink, until I found out they did that when scrubbing on the SAN was running (in a completely different room). It didn't happen right from the start, but the vibration amplified, and after a few minutes the floor/wall vibrated.
I run a smaller homelab setup on consumer-grade hardware, and the cost of replacing failed components is usually less than switching to enterprise components (where I live).
One or many Intel NUCs will get you surprisingly far. Higher-spec models will take a surprising 64GB of RAM and run ESXi nicely. Extra NICs can be added with a USB dongle if wanted (a little gross) and the units fly. The higher-end model has 6 cores and runs ESXi free. Licences for home usage are free for ESXi 7, although braving their website takes nerves of steel. It's awful.
For any VM that needs a GPU, the NUC8 is considerably better than the NUC10, but has fewer cores. With M.2 drives they fly. Bulk storage can be achieved with network storage or USB-attached storage.
This is all a rather dirty solution but I love my setup.
How do you find the NUC for noise? I've found that the CPU fans on them are annoyingly loud, and that they are a particularly irritating shrill frequency - and that they start ramping up at relatively low CPU usage.
I found going slightly bigger (around mini-ITX sized) resulted in vastly less noise, and more machine for the price.
I don't have any issue, and I am noise-sensitive. The real bugbear is the Synology that's next to it. 16TB drives are very, very noisy. They chatter constantly and it drives me mad. This may also cover the fan noise.
The NUCs are in a hallway cupboard, so aren’t next to me when working and this would also lessen the noise.
I have a NUC8i5BEH here that is used as a linux desktop for testing and the fan spins up annoyingly often (and it's quite loud when it does). Maybe the cupboard is just enough to muffle it :)
>cause a lot of trouble with every female living in the same household.
That's his experience, and by the way mine too, but I would have written it a bit differently, like: pizza-box server = big NO-NO for a loved partner and/or cat/dog. But that's not sexism.
2U ones seem to give more options for quiet cooling, but the noise on almost any rack server as it boots or comes under load really is incredible. Having just pulled apart some server power supplies, how they get air through their components is clearly brute force. Power supplies have almost zero space for airflow.
> 1. Using pre loved (= second hand) rack servers in the
> living room with redundant PSUs will cause a lot of
> trouble with every female living in the same household. It
> is too noisy....
A great time to remind your partner that these projects put the bread on the table. Also remind them how much money it is saving.
There is, of course, always the possibility of moving it elsewhere.
> 2. Consumer grade hardware is pretty reliable nowadays.
It really depends on what you're running and how much a crash will affect you. For example, a bit flip in RAM could be catastrophic. Or a hard drive failure could be a serious problem (despite backups), in which case you might want some commercial-grade RAID.
You also get some nice server-specific silicon on dedicated hardware that you might not get otherwise, which could seriously speed up your application under high-load conditions.
> A great time to remind your partner that these projects put the bread on the table. Also remind them how much money it is saving.
My wife does all the ML and graph stuff off of her laptop and an external disk. After developing on her laptop, she just SSHes into her cluster at work and schedules her job on whatever queue she needs.
I, mainly doing web projects, have and keep a small homelab.
Even in the article, they're just doing it for the sake of saving time and not having to wait for the VM to load. Not really a requirement. Especially looking at the benchmarks at the end.
> After developing on her laptop, she just SSHes into her
> cluster at work and schedules her job on whatever queue
> she needs.
Then I would imagine your partner is understanding. I would imagine most situations are ones where one partner is computing-oriented and the non-computing partner might not understand why there is this massive electricity-munching, air-breathing rack in their house.
> That’s just the wrong argument.
Clearly I was responding to the points in the parent comment, not specifically the article.
Not really. I mean, she's sympathetic to the fact that I want them, sure. But who really wants servers in a corner of the living room, cluttering the place and making noise?
It's a hobby maintaining them, but I could easily turn off some VMs and turn on other VMs with some wait time. It would be cool to have them and swap quickly, but I could still do without them.
I tried buying a used 1U server and put it under the sofa so it wouldn't be seen, but the sound was insane. I quickly turned it off since it was just too much when watching TV.
Now we have a house and have space to put them away, but that wasn't true for a long time.
I highly recommend Hetzner's server auctions for people who would use "old enterprise hardware" anyway: https://www.hetzner.com/sb
You won't get the latest hardware, but you don't need to care about noise and get a real public IP in return. After experimenting with enterprise hardware in a one-room apartment, I found that to be a nice alternative.
I recently got a 4C Xeon/64GB memory/2x500GB SSD for 35€/month... it's also a nice alternative to cloud, since the cancellation period is only 30 days.
I used one of their MicroServers (only a little bigger than a NUC) and it's a great little server! Not that noisy, and it packs a punch with 64GB of RAM!
Is there somewhere to discuss what a homelab is for? I too get quite excited about building a home network, server, etc., but unless you need to simulate a complex work environment at home, I can't really see the cost/benefit. In the case of development VMs, just fire up the ones you need when you need them for a particular project/client - you generally only need a few at a time.
A rather large usage for mine is just pirating: hosting Plex as your own personal Netflix, with automated tools around it to pull new content and organize it.
I'm in IT and decided to build a homelab for learning stuff and ended up buying an Intel NUC with a 10th gen i7-10710U CPU. 6 cores / 12 threads. Great little unit! Silent and efficient.
Same NUC; got two of them with 64GB of memory each, a 250GB 970 Evo NVMe, and a TB3 10GBase-T adapter. Absolutely wonderful experience! I run ESXi 7.
I don't get why folks use VMs when they can use containers. Unless you're running different types of OS and CPU, why VMs? LXC/LXD is much lighter weight and faster.