So roll on to now: I'm using a silent-build Ryzen Windows desktop with 64GB of RAM and a couple of mundane SSDs, on which I fire up VMs in VirtualBox as required. At night it gets turned off. I've got a $5 DigitalOcean box that runs all my persistent Linux stuff. If I want to play with networks, it's done in GNS3. Office 365 runs my email, and all my stuff is synced to OneDrive and, occasionally when I get nervous, to a couple of offline SSDs. My network is the Fritzbox my ISP gave me, plugged into the back of the desktop via Ethernet. That's it!
My life is better for this. I hope people grow to realise over time that this is much less of a mental burden, which gives you more head space and a clean context switch away from it all if you need it.
I still have a home lab. It's just virtualized.
So I think your final line is more personal preference than a lesson most people with home labs need to realise.
Bang on here. I am a PM at a startup who doesn't really "set up" servers at work, but I did enjoy researching how to turn my Lenovo SFF PC (used, bought off eBay) into a nice little home server (ESXi running multiple VMs) for things like storing media, AdGuard, VPN, etc. Still a work in progress but loving the experience so far :)
The cost of purchasing (eBay) and running the servers paid for itself in 6 months on my first "IT" salary.
I sold on the whole lot to another guy trying to boost his skills too.
It was fun and served its purpose.
Got to be one of the least offensive hobbies going.
Norco ITX-S8, http://www.norcotek.com/product/itx-s8/
Silverstone GD07 plus 3-bay hot-swap cages for the dual 5.25" bays = 11 drives total with an audio-equipment aesthetic, https://www.silverstonetek.com/product.php?pid=330&area=en
It turns out PCI Express slots are something people want either 3+ of (like a streamer setup: GPU, capture card, high-end network card), exactly 1 of (GPU only), or none at all (business PCs, devs happy with Intel or APU graphics). So really the only tradeoff the largest chunk of the market makes in picking mini-ITX is that I haven't seen any boards with both 4 RAM slots and an M.2 drive (though both features are available separately on many motherboards). And even then the standard gamer setup is 2x8GB, while the number of professionals for whom 64GB (2x32GB) is too little but 128GB is enough is currently a small market as well.
Case manufacturers are not so quick to abandon the size, though, with the result that if you find a case you like for size/features/airflow and it happens to be a micro-ATX case, most of your motherboard options are going to be mini-ITX.
I recommend it if you only require 6 HDDs, and I don't put ITX into ATX cases, haha. I've always had fun building ITX systems.
Did I do anything specific on it which would have required it? Nope.
But boy, did I learn things.
If not us as software engineers / tech people, who should have something like a home lab?
I have set up a k8s cluster at home. Do I need it? Nope, for sure not. Is it nice? Is it running smoothly? Did I learn things? Is it interesting to use and fun? Does it help that it runs 24/7? Yes, yes, yes, yes, and yes.
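(If anyone wants to try the same, a single node is enough to start. A minimal sketch using k3s, a lightweight Kubernetes distribution - assuming a systemd-based Linux host, not my exact setup:)

    # install a single-node k3s cluster; the installer sets up a systemd service
    curl -sfL https://get.k3s.io | sh -
    # verify the node came up
    sudo k3s kubectl get nodes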
When I have my own house, I do plan to centralize everything in it in one rack somewhere nice: optic fiber to every relevant room, VLANs, a backup server/NAS, etc. Something like that alone requires quite a lot of hardware.
And I think it's very nice to have VLAN capabilities to separate that one IoT device from the rest of my network.
So you got rid of the lab instead of the obvious solution?
Especially since my entire industry seems determined to see how far it can push spying on its users, leveraging power over them, etc. (All evidence is: no real limit yet.)
Also, it's just fun to play with gear.
Currently, I want to engineer for reliability, audible noise, power consumption, and price. (Will a RasPi with its SD card and wall wart be reliable enough? What about an Atom box, with Noctua fans, and mirrored 2.5" drives? Do I really want a pfSense box and some external WiFi APs, or can I make OpenWrt on plastic home routers do everything, and isn't Ubiquiti gear pretty but closed? Remember the price doubles if I want to have cold spares. Has the novelty of the big blue Palo Alto Networks box worn off yet? Can I discreetly mount my rack console in my living room IKEA furniture, without a deep 4-post cabinet or the official rails kit?)
Toys get traded on /r/homelabsales, craigslist, and eBay.
Exactly my reason as well. A Raspberry Pi k8s cluster out of some old Pi 3s I had lying around - why not? Hardware generally lasts longer than you expect, and can be repurposed for, at the very least, a fun Sunday morning.
> Do I really want a pfSense box and some external WiFi APs, or can I make OpenWrt on plastic home routers do everything, and isn't Ubiquiti gear pretty but closed?
Just went through this too. The research and rabbit-holing down different hardware paths is so much fun. I ended up with Ubiquiti over pfSense; it is pretty, hot dang.
So I went from Tomato on an old Cisco router to buying a fitlet2, installing Proxmox on it, setting up pfSense as one of the VMs, and then finally connecting a Ubiquiti UniFi AP to the box.
pfSense is very nice and definitely beats Tomato in terms of things you can do with it (although Tomato/OpenWrt is great for 'upgrading' an existing router). Because my network is 100% wireless, the UniFi UI gives a great overview of all connected devices, and I get to set up separate SSIDs with their own VLAN tags which then feed into pfSense's firewall. This means my IoT devices are on their own network and can't interact with the rest of my stuff, and my one external camera has no direct internet access at all.
It's all great fun setting up!
And the more fringe but OCD-pleasing https://www.reddit.com/r/cableporn (SFW)
r/homelab has been a source of great inspiration since 2012.
There's a stark difference in that the things you host and do with your home setup are your own, but the lessons and technologies can be applied more broadly.
If you're setting up a homelab, you might as well use industry best practices for it.
I'm not sure if you've looked at tech like Portainer or Proxmox, but the unit economics are pretty compelling compared to a cloud VM.
My personal AWS resources are on FreeBSD. My home server is on Windows Server. The few places where I need Linux are at least Debian.
Home projects are to scratch a personal itch, try and avoid getting too rusty at the things I don't get to keep up with at work, and/or expand the breadth of my knowledge, not to replicate what I'm doing at work.
Using a homelab for learning isn't somehow mutually exclusive with having your own AWS account for learning.
This is the exact opposite of my thinking.
I want my experiments to happen on a remote server I don't care about...not on my home network that I really don't want to compromise/open up by accidentally misconfiguring something.
VPSes are cheap - I'm paying like 7 bucks for 16GB / 4 vCPUs (admittedly a one-off deal). 7 bucks to have the blast radius of my networking noobness somewhere else seems like a bargain.
I have a reference point for how my software runs on real enterprise hardware. I know what unexpected things my software does when various hardware/VM/network issues arise, and I’ve learned ways around them and how to write better software that takes these edge cases into consideration.
Are you really full stack if you didn’t reflash the firmware on your RAID controller? When I say I’m full stack, I really mean it. :)
That said, I do keep separate VLANs for my “test” and “prod” environments here at home to keep frustrations at a minimum.
A lot of times you cannot simulate the exact scenario you’re preparing for without actually doing it in a real setup. This is the point of a home lab.
I couldn't figure out the established approach for getting access to the infrastructure over a consumer internet connection without exposing the rest of the network. All the resources I've seen so far don't give enough background to seem reliable.
pfSense or OPNsense, which are based on FreeBSD, are also great if you have any remotely modern spare x64 computer lying around with two network ports.
IPv4 is purely outbound NAT; on IPv6 I have several subnets carved off that allow IPsec traffic to certain local hosts for some of my remote office setup, but I've mostly switched to using WireGuard for basically everything: it hits the EdgeRouter and drops me onto a private v4 and real v6 space.
Works great for my phone and laptops from basically anywhere, and tunnels all my traffic back to my homelab and then out to the internet again. I have WireGuard using UDP port 443 on v4/v6, which, now that QUIC is common enough, can tunnel out of every network I've tried, even normally hyper-anal corporate ones.
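(For anyone replicating the port trick: pinning WireGuard to UDP 443 is just its listen port. A server-side sketch; the interface name, key paths, and addresses here are made up for illustration, not my actual config:)

    # create a WireGuard interface and pin it to UDP 443
    wg genkey | tee server.key | wg pubkey > server.pub
    ip link add dev wg0 type wireguard
    ip addr add 10.10.0.1/24 dev wg0
    wg set wg0 listen-port 443 private-key ./server.key
    ip link set wg0 up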
Locally I have a MikroTik switch with 10G fiber between my work machines and gigabit Ethernet to the rest of the house, then a few Ubiquiti UniFi "semi-pro" APs for the house and back yard.
Primary storage host is running FreeBSD, serving iSCSI from a local ZFS RAID with consumer NVMe SSDs as cache on top of generic, easily replaceable SATA drives. I still have this IPv6-accessible with IPsec so I can basically treat it like local storage from all over the world, but I'll probably turn that off now that I'm using WireGuard nearly all the time (IPsec is faster since the tiny EdgeRouter processor isn't having to handle it).
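(For flavour, a pool like that is only a couple of commands in ZFS; a sketch with made-up FreeBSD device names, not my real layout:)

    # mirrored SATA pairs with an NVMe read cache (L2ARC); names illustrative
    zpool create tank mirror /dev/ada0 /dev/ada1 mirror /dev/ada2 /dev/ada3
    zpool add tank cache /dev/nvd0
    zpool status tank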
It's pretty neat being basically "in your home office" from almost anywhere with decent internet.
In my case it's mostly because I tend to run VMs on various older, semi-retired machines with limited or slow local storage that I only turn on when I need them, and VMware's VMFS is cluster-aware, so it really doesn't matter which hypervisor I end up spinning a VM up on.
I haven't dealt with Unraid specifically but there are a lot of caching and network parameters that can wildly affect performance -- VMware for example wants to do synchronous writes on network storage for obvious reasons, and having a safe write-cache and large transfers with enough in-flight commands can make a night and day difference.
If you're primarily just using NFS/SMB as file shares then getting iSCSI working probably isn't going to be a good use of your time versus figuring out why the existing setup behaves that way -- Samba and SMB performance tuning can be a frustrating experience but iSCSI is far more opaque and inscrutable on Windows particularly.
The one downside I've encountered so far is that I often have devices sending traffic from home network -> droplet -> home network, since they "don't know" that they're actually on the same local network, and exclusively communicate through the VPN. My ping to DO's datacenter is low enough that this hasn't really bothered me, though.
Now I run a Linux VM and nftables for my router and it is by far the best system I have tried because it is so simple to manage. It took maybe half a day to learn to set up. As a bonus I am using NixOS as then all of my firewall config, interfaces, VLANs, + any other router-like config lives in a declarative configuration file which I can apply with one command. Before I was using Ansible + Ubuntu but I much prefer NixOS for a router.
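(To give a flavour of the declarative bit, here's a minimal sketch of the kind of NixOS fragment I mean; the interface names, VLAN ID, and subnets are made up for illustration, not my real config:)

    # /etc/nixos/configuration.nix fragment -- illustrative only
    networking = {
      useDHCP = false;
      interfaces.enp1s0.useDHCP = true;                # WAN uplink
      vlans.iot = { id = 42; interface = "enp2s0"; };  # tagged IoT VLAN
      nat = {
        enable = true;
        externalInterface = "enp1s0";
        internalInterfaces = [ "enp2s0" "iot" ];
      };
      nftables.enable = true;   # use nftables instead of iptables
      firewall.enable = true;
    };

Applying the whole router config is then the one command: nixos-rebuild switch.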
If you want a local VPN without too much fuss, Ubiquiti gear is popular, including the EdgeRouter X, for off-the-shelf configuration.
A lot of this stuff is pretty neat these days: set it up once, forget it for as long as it's supported and updated by the vendor.
Sure, the article is full of tech porn, but I haven't read anything so unabashedly sexist in nature since the late 1970s. It was truly cringeworthy. Reminded me of reading Popular Mechanics from the 1950s or a pulp sci-fi novel by Murray Leinster or something.
Yesterday I lost 50GB of options data going back 4 months, plus 2.5TB of currency data going back 20 years, because I never checked that my backups were running.
Please don't let this happen to you.
EDIT: Literally just write a script that backs up flat files or binaries to S3 every N intervals.
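(A minimal sketch of what I mean, assuming the AWS CLI is configured; the bucket name and paths are placeholders:)

    #!/bin/sh
    # sync local data up to S3; re-runs only upload what changed
    aws s3 sync /data/market-data s3://my-backup-bucket/market-data
    # run it on a schedule with a cron entry like:
    #   0 */6 * * * /usr/local/bin/backup-to-s3.sh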
I'm working on this exact problem at the moment: a dashboard to keep track of your backups to check they never fail. https://backupshq.com
* Get notified when backups fail or take too long via email, slack, webhook, etc.
* Run your backup scripts and tools with an open-source agent, which takes care of reporting backup results to the dashboard.
* Alternatively, write your own integration with our API to report backup results and send logs yourself.
I'm hoping to launch privately soon, I'd love to hear what you think. We haven't made any pricing decisions yet, but definitely want to give free credit to homelabbers!
Sorry you lost your data; data loss is terrible.
While doing a simple copy to S3 is better than nothing, everyone wanting to back up their data should follow a 3-2-1 policy:
3 Copies of the data
2 different storage media / devices
1 offsite location.
Backing up only to S3 can still leave you vulnerable. For example, I remember one person who was doing cloud backups; their card was declined, they missed the emails for a while, and so their data was deleted...
I wrote more about it here: https://photostructure.com/faq/how-do-i-safely-store-files/
To me the "2 media" part is not strictly about 2 kinds of media, i.e. HDD and DVD, or HDD and tape.
It is primarily to ensure you are on 2 devices/systems/etc.
For individuals that could be their local system and a home NAS.
One reason this rule was created is that in enterprise most systems are backed by a SAN. Having your prod data AND backup data on the same physical storage device or cluster would violate the rule. Oftentimes admins miss this because the storage layer is clustered and partitioned in a way that gives the appearance of being separate even if physically it is the same device.
Further, cloud storage can count as separate media as well.
So if you have the prod copy of the data + a backup on a local NAS + a copy on Backblaze or another cloud storage provider, you have followed the 3-2-1 rule.
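(That exact layout is two commands in a nightly script; a sketch where the hostname, paths, and rclone remote are all made up:)

    #!/bin/sh
    # copy 2: second device, same site -- push to the home NAS
    rsync -a --delete /data/ nas:/backups/data/
    # copy 3: offsite -- push to a cloud provider via a pre-configured
    # rclone remote (here called "b2:", purely illustrative)
    rclone sync /data b2:my-backup-bucket/data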
(Sorry about your stuff.)
Restore your data once in a while to check that it's really getting backed up.
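(One low-effort way to do that, sketched with made-up paths and bucket: restore a sample file and compare it against the live copy.)

    #!/bin/sh
    # pull one file back out of the backup and verify it matches the original
    aws s3 cp s3://my-backup-bucket/market-data/sample.csv /tmp/sample.csv
    cmp /tmp/sample.csv /data/market-data/sample.csv && echo "restore OK"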
For me personally, avoiding redoing my PC is worth a couple of years of backup subscription plus an HDD docking station plus two 4TB hard drives dedicated to backups.
I think for my next machine I'm going to dump macOS and switch to Windows + WSL, so I may investigate fully automating my setup and config then.
I have a bunch of custom-compiled software with options for my specific needs, and my two main computers (a PC and a laptop) both have a huge history of various workarounds that I don't even remember any longer and would have to research all over again. I also have a bunch of VirtualBox VMs: a Windows VM configured for remote work, and a couple of Linux VMs with configurations for embedded development (I do ARM development). I have one dedicated to Matlab, Simulink, and LabVIEW plus a host of plugins and customizations.
While keeping dotfiles and scripts is easy (I just have a repository on Bitbucket), I would still have to spend a day or two just getting everything in order, and probably much more to rebuild those VMs for embedded development, which are just full of workarounds for various problems.
As a contractor paid by the hour I am aware of my hourly rate, and this helps me put things in perspective. Once you calculate the value of services and tools, what seemed expensive frequently becomes dirt cheap when you factor in time saved or risk averted.
For backup I keep USB docking station and two 4TB HDDs. I try to keep one of them at a different site. I also keep a copy on an online service mostly for convenience but also as another layer of protection.
Edit: thanks for the responses, I didn't know this was even possible.
You can also use something like DuckDNS as a backup: if you are away from home and your IP changes, you can still find your server. You could even theoretically write a script that checks DuckDNS for your IP and updates the record with your domain registrar automatically (assuming they have an API).
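(The DuckDNS half is a single HTTP call; a sketch with a placeholder domain and token - leaving ip= empty makes DuckDNS use the address the request came from:)

    #!/bin/sh
    # refresh the DuckDNS record with this machine's current public IP
    curl -s "https://www.duckdns.org/update?domains=myhomelab&token=YOUR_TOKEN&ip="
    # run from cron every few minutes, e.g.:
    #   */5 * * * * /usr/local/bin/duckdns-update.sh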
I have a dynamic DNS thingamabob going (a script on a raspberry pi that pings my website a few times per day) and have had no problems. Haven't even noticed my home servers becoming unavailable due to IP address changes for a few years, probably because my ISP (Charter a.k.a. Spectrum) doesn't change my address very often. I use NearlyFreeSpeech for hosting the website and they also handle DNS and have an API for it, so it's all very nice and simple.
More reliable than dyndns (since you never know how long DNS servers are actually caching)
Residential ISPs also often provide static DHCP leases that are quasi-static IPs.
Some ISPs also sell a static IP as an add-on to the service.
That was fine when I only had 2 drives, but now my "USB Octopus" is getting silly.
What's the next step in hardware if I want my server to access a bunch of disks?
It was cheap and easy (the chassis was much more expensive than the shelving/rack, but the value's in the hot-swappable bays); I think I'd recommend it.
I say get a beefy Synology, maybe a tape drive, and rent whatever else you need in the cloud. Plus a desktop/laptop as you prefer.
> As a person interested in using Xpenology you should keep in mind that this is not an official Synology release. Although it's based on the official Synology DSM, the same as what's found on actual Synology devices, it's not officially supported by Synology. You could get support from the community for this software, but not from the company itself. Typically once you're up and running things are very stable without any issues, but just know that calling support is not an option when you're not running on an official Synology device.
I've never been unhappy with anything I've bought from them.
If you want a lot of 3.5" internal hot-swap drives, you may need to find a case with 5.25" bays, which are getting harder to find.
I wish I could find an alternative that is as affordable.
I personally prefer the QNAP options, as those are either re-branded or built from chipsets that are widely supported in Linux and BSD.
There are rackable ATX cases like the Rosewill RSV-L4500 which don't have the enterprise-y niceties but are perfectly serviceable and fairly quiet for a dozen disks.
Separating out your storage as an appliance/service (a dedicated NAS with low power usage) is worth running a comparison on.
When I had the full whack server rack it was pulling that a month and 99% of the time it was idle.
The price/watt ratio matters when you want something you can set up, forget, and not notice running in the background.
You can probably do a lot better by building a custom one, but that is going to be a lot more expensive. I would love to hear from anyone with experience in this actually, as I'm kind of starting to get fed up with it.
Aside from aggressively controlling the fans, I find the HP and Dell firmwares to be entirely useless, as they all apparently require a web browser with an ancient version of Java. In one case, Java 6. Yeah... this was a terrible decision even 10 years ago.
Another annoyance is that they will mostly come with some kind of hardware RAID card that doesn't pass through raw disks. Often, a completely worthless one that must be replaced. Even if it's a less terrible one, it will require reflashing if you want to use it in JBOD mode (for ZFS or software RAID).
All that being said, considering that I paid $140 shipped for this HP server 3 years ago, I'm not sure how much I can complain. Setup was a pain in the ass, but once I got it done, it's been working mostly flawlessly since. The only ongoing pain points for me are the lack of USB3, and the fan noise. (I could add a USB3 PCI card, perhaps, but I'm sure that would increase the fan noise even more!)
Newer HPE servers with iLO 5 need any plain browser to access iLO, and for the remote console, give you the choice of HTML5, .Net, or Java. I have a Gen10 Microserver and the HTML5 console works flawlessly in Chrome.
If you need about 8 or fewer HDDs and don't need hot-swap, there are many options, like a normal ATX case.
If you need silence, don't get a server case that only supports 8cm fans!
The needs of each homelab are different, though. Some homelabs don't need to take on the power/noise/energy cost of serious server gear just to get ECC, unless there is an explicit benefit from it.
Making do without ECC can be perfectly possible for some or most portions of a homelab.
If you are planning to host stuff, or even just to protect your own things, you'd better get something like a SonicWall or similar.
At my home I have a SonicWall, a Synology NAS, 2 Dell servers, and power supplies. Not in a single closet, though. To me, server security is far more important than the looks. Spend the most money on the firewall; the rest is just for looks. A patch panel for a home setup sounds excessive because I am not going to run all my cables through my garage. Have individual switches in every room if you want. Most living spaces need only one or two Ethernet cables; the rest of the devices can easily be on WiFi.
A big advantage is that you can do VLANs, so you can separate your homelab from the rest of your devices. You could also do things like block outbound traffic from your home network.
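(As an illustrative sketch of the separation rule with nftables on a Linux router - the VLAN interface name and subnet here are made up:)

    # drop traffic from the lab VLAN into the main LAN while still
    # letting it reach the internet; "vlan10" and the subnet are examples
    nft add table inet filter
    nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'
    nft add rule inet filter forward iifname "vlan10" ip daddr 192.168.1.0/24 drop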
Power consumption of these NUCs is almost negligible, while one Dell R620 with dual CPUs takes 150-200W at idle.
Another trick is to have 5V, 12V, and 48V DC lines from a silent PSU like the Corsair RM series directly feeding some servers so they don't have to use an internal PSU; that leaves only the CPU and GPU as "hot" components (and the NAS, but that's another story and a solved problem).
Lasted about 18 months before my wife vetoed it.
From memory -
Sun Ultra 60 x 2
Sun Ultra 80 with the big foam sound-deadening kit option
Sun Ultra 5 x many (6?)
Sun Ultra 10 x 2
Sun 6-bay SCSI external disk caddy
Cisco 24-port 10/100 managed switch, but I forget the model, something XL (IOS 11 or something truly ancient like that)
2 Cisco routers with PPPoA ADSL adapters
3Com unmanaged kit with PoE
IBM AIX B50 telco 1U server
An OpenBSD host on an old x86 desktop
A FreeBSD node on a beefy (for the time) AMD Opteron host
And probably a bunch of other toys I've forgotten by now.
I've done it every other way - I have a dual-socket Xeon Silver machine that I hand-built, an IT-reclaimed Dell rack server, three NUCs (used to be 4, but one bit the dust after 4.5 years of hard service), and a smattering of other hardware (NASes, 10G and 1G switches, etc.)...
After the absolute pain of the Xeon Silver build - having a brand new motherboard that was DOA and basically being told I was out $600 after multiple rounds of escalation, then having the replacement board fight with my GPUs, I'm not sure if I'm willing to bother doing it again. It would have been worth it to pay the markup on a gently used commercial machine and I would have been up and running faster... Probably would have been cheaper, in fact.
My log machine uses six 16TB SATA drives configured as ZFS mirrors, so effectively 48TB of storage, which I back up to S3.
SAS drives are better but the current setup meets my needs.
The media server is a 2U HP G8 DL380p with the 12LFF drive cage. I run Xpenology (Synology's DSM hacked to run on other hardware). That allows me to run Plex and Docker along with some auto-backup stuff from my other computers (it also serves as a central point for VM images to roll out to my computers depending on which I am using that week). I was incorrect about the RAM; this server is only 48GB since Xpenology can't use it all. The server has dual E5-2440s.
The other server/NVR is a 1U HP G8 DL380e that has a 4LFF drive cage. It came with the 4x2TB setup already and is running Ubuntu Server 20.04. ZoneMinder is a great software package and easily handles my eight 1080p-or-better PoE cameras around my house/shop. I think it was also ~$250 on eBay... with another $50-60 in RAM from eBay (ZoneMinder does a lot of caching in RAM prior to recording on movement detection). This machine sports 192GB of RAM and dual E5-2650s.
That's a lot of spindles.
I’ve had good luck using horizontal Silverstone ATX cases with fans on three sides. That said, I’m sure there are 4U units that offer huge but low rpm fans that would satisfy the bedroom server-lab niche market.
Usually 4U cases have a lot of fan mounts, so rather than a few large-radius fans you have lots of small-radius fans. It used to be that all small fans were loud, but in the past decade or so you've been able to get smaller fans at just about any noise point you want.
Using a Rosewill 4U ATX enclosure (RSV-L4312); replaced the stock case fans with Noctua NF-A12/NF-A8 (120mm / 80mm). It's extremely quiet, and now I have a full custom 4U with hot swappable drives, while still maintaining plenty of airflow.
My only gripe is the quality of Rosewill case isn't what I'm used to with enterprise-grade or even gamer-grade chassis. I really wish other companies would get into the homelab space and start making quality ATX rack-mounted chassis. I think there's a decent sized DIY market for this.
I searched for something like that a few months ago but couldn't find anything that has Schuko sockets on the other side.
They're quite expensive but looked like a good homelab alternative for people lacking space for a proper rack.
Does anyone know of one?
Now I'm looking for a smaller, beefier machine with less storage capacity, for some @home experiments.
Kind of depends which way you wanna go. NAS is probably the term you're thinking of.
I already have a DIY NAS at home, I'm now looking for an extra small machine to run more experimental stuff, playing with k8s, etc.
I don't want to do that on my NAS.
I am SOOO SICK of watching this highly technical and aesthetically inconsiderate hobby turn into an issue of neckbeard Modern Living. A new generation of boutique providers selling storage racks at silly markups.
Pretty soon you're going to have the worst kind of hipsters -- keyboard nerds -- obsessing over the material used in rack standoffs or something. Because you know that aged Persian rubber dampens vibrations 0.03dB better...