Building a Home Lab Beginners Guide (haydenjames.io)
214 points by ashitlerferad 6 months ago | 180 comments



I had a fairly large home lab once. I had a fully topped-out SPARCserver 1000E and a disk enclosure in my bedroom. I also once lived with an E450 on the kitchen table for a month. But they're noisy as hell, inconvenient, expensive to keep running, expensive to feed with power, and they take up a lot of room, and thus are not compatible with family and general sanity over time. They become needy balls and chains.

So roll on to now: I'm using a silent-build Ryzen Windows desktop with 64GB of RAM and a couple of mundane SSDs, on which I fire up VMs in VirtualBox as required. At night it gets turned off. I've got a $5 DigitalOcean box that runs all my persistent Linux stuff. If I want to play with networks, it's done with GNS3. Office 365 runs my email, and all my stuff is synced with OneDrive and a couple of offline SSDs occasionally when I get nervous. My network is the FritzBox my ISP gave me plugged into the back of the desktop via Ethernet. That's it!

My life is better for this. I hope people grow to realise that this is much less of a mental burden over time, which gives you more head space and a clean context switch away if you need it.


Your conclusion is wrong; you just found out home labs aren't a hobby for you. People have way more expensive and time-consuming hobbies.


My conclusion is that my home lab was a learning exercise that comes to an end and that having physical hardware was actually detrimental to it.

I still have a home lab. It's just virtualized.


Whatever works for you, just don't generalize it.


Maybe a bit too direct, but I 100% agree with this comment.


Well, I guess if you are serious about a home lab you need both your normal network and a separate homelab. I like tinkering with my homelab, but I need to not depend on it for my work or internet access.


What if your actual Internet access depends on it? Providing un-GFWed Internet access for all my devices is so much easier with the networking gear that happens to be part of my homelab setup.


For a lot of people, a home lab isn't just to provide themselves a service, they actively enjoy the setup and management.

So I think your final line is more personal preference than a lesson most people with home labs need to realise.


>For a lot of people, a home lab isn't just to provide themselves a service, they actively enjoy the setup and management.

Bang on here. I am a PM at a startup who doesn't really "set up" servers at work, but I did enjoy researching how to turn my Lenovo SFF PC (used, bought off eBay) into a nice little home server (ESXi running multiple VMs) for things like storing media, AdGuard, VPN, etc. Still a work in progress but loving the experience so far :)


Yeah, home labbing is about the journey but eventually the journey ends once you've learned how to use some of the technology you wanted.


My home lab helped boost my career enormously. A network of 5 old HP servers running Windows 2000, hooked up via Cisco hardware, introduced me to AD, Exchange, MS SQL, etc. (100% pirated, I will admit.)

The cost of purchasing (eBay) and running the servers paid for itself in 6 months on my first "IT" salary.

I sold on the whole lot to another guy trying to boost his skills too.

It was fun and served its purpose.


Never heard a single story about setting up a home lab that isn’t wholesome!

Got to be one of the least offensive hobbies going.


I have a mini-ITX case with eight 5400 RPM drives and some basic Xeon (E12xx, forgetting the exact model), and I'm shocked at how well modern power-saving techniques work. When it's just sitting there being a file server it consumes less than 100 watts, and even when I'm pulling from it at over a gigabit it goes up to maybe 110. It also has 120 mm fans, but I barely notice when they kick on. The loudest part of the whole thing is the heads on the HDDs kicking back and forth!


Mini-ITX case recommendation that holds 8 3.5" drives please?


InWin MS08, https://www.ipc.in-win.com/soho-smb-iw-ms08

Norco ITX-S8, http://www.norcotek.com/product/itx-s8/

Silverstone GD07 plus 3-bay hotswap cages for dual-bay 5.25 = 11 drives total with audio device aesthetic, https://www.silverstonetek.com/product.php?pid=330&area=en


I’m very happy with my U-NAS enclosure (NSC-400; 4 hotswap bays), and they do have 8-bay enclosures.

https://www.u-nas.com/xcart/cart.php?target=product&product_...


My recommendation would be the Fractal Design Node 804.


People actually put a mini-ITX in a micro-ATX case?!?!?! faints


microATX motherboards are dying out - with the ability to omit drive bays entirely and just mount SSDs to the back of a motherboard tray, full ATX cases can get reasonably small now, so there's less demand.

It turns out PCI Express slots are something people want either 3+ of (like a streamer setup of GPU, capture card, and high-end network card), 1 of (GPU only), or none (business PC, devs happy with Intel or APU graphics). So really the only tradeoff the largest chunk of the market makes in picking mini-ITX is that I haven't seen any boards with 4 RAM slots and an M.2 drive (though both features are available separately on many motherboards). And even then the standard gamer setup is 2x8GB, while the number of professionals for whom 64GB (2x32) is too little but 128GB is enough is currently a small market as well.

Case manufacturers are not so quick to abandon the size though, with the result that if you get a case that you like for size/features/airflow and it happens to be a microATX case, most of your motherboard options are going to be mini-ITX.


I built my partner a PC in the Node 304 (not the 804). I made a mistake on the number, sorry. The 304 can only fit six 3.5" HDDs.

I recommend it if you only require 6 HDDs, and I don't put ITX into ATX cases, haha. I've always had fun building ITX systems.


I had a Debian root server for years.

Did I do anything specifically relevant on it which would have required it? Nope.

But boy, did I learn things.

If not us as software engineers / tech people, who should have something like a home lab?

I have set up a k8s cluster at home. Do I need it? Nope, for sure not. Is it nice? Is it running smoothly? Did I learn things? Is it interesting to use and fun? Does it help that it runs 24/7? Yes, yes, yes, yes, and yes.

When I have my own house, I do plan to centralize everything in one rack somewhere nice: fiber optics to every relevant room, VLANs, a backup server/NAS, etc. That alone requires quite a lot of hardware.

And I think it's very nice to have VLAN capabilities to separate that one IoT device from the rest of my network.


> expensive to feed ... take up a lot of room ... family ... become needy balls and chains.

So you got rid of the lab instead of the obvious solution?


One of the reasons I still run some servers and play with networking gear at home, even when SaaSes, Linode, and EC2 are so affordable, is as an exercise in being somewhat more self-sufficient, rather than depending on "other people's computers" for everything.

Especially since my entire industry seems determined to see how far it can push spying on its users, leveraging power over them, etc. (All evidence is: no real limit yet.)

Also, it's just fun to play with gear.

Currently, I want to engineer for reliability, audible noise, power consumption, and price. (Will a RasPi with its SD card and wall wart be reliable enough? What about an Atom box, with Noctua fans, and mirrored 2.5" drives? Do I really want a pfSense box and some external WiFi APs, or can I make OpenWrt on plastic home routers do everything, and isn't Ubiquiti gear pretty but closed? Remember the price doubles if I want to have cold spares. Has the novelty of the big blue Palo Alto Networks box worn off yet? Can I discreetly mount my rack console in my living room IKEA furniture, without a deep 4-post cabinet or the official rails kit?)

Toys get traded on /r/homelabsales, craigslist, and eBay.


> Also, it's just fun to play with gear.

Exactly my reason as well. A Raspberry Pi k8s cluster of some old Pi 3s I had lying around - why not? Hardware seems to generally last longer than you expect, and can be repurposed for, at the very least, a fun Sunday morning.

> Do I really want a pfSense box and some external WiFi APs, or can I make OpenWrt on plastic home routers do everything, and isn't Ubiquiti gear pretty but closed?

Just went through this too. The research and rabbit-holing down different hardware paths is so much fun. I ended up with Ubiquiti over pfSense; it is pretty, hot dang.


> Do I really want a pfSense box and some external WiFi APs, or can I make OpenWrt on plastic home routers do everything, and isn't Ubiquiti gear pretty but closed?

So I went from Tomato on an old Cisco router to buying a fitlet2, installing Proxmox on it, setting up pfSense as one of the VMs, and then finally connecting a Ubiquiti UniFi AP to the box.

pfSense is very nice and definitely beats Tomato in terms of things you can do with it (although Tomato/OpenWrt is great for 'upgrading' an existing router). Because my network is 100% wireless, the UniFi UI has a great overview of all connected devices, and I get to set up separate SSIDs with their own VLAN tags which then feed into pfSense's firewall. This means my IoT devices are on their own network and can't interact with the rest of my stuff, and my one external camera has no direct internet access at all.

It's all great fun setting up!


Don't forget the amazing content on:

https://www.reddit.com/r/homelab

https://www.reddit.com/r/HomeLabPorn (SFW)

And the more fringe but OCD-pleasing https://www.reddit.com/r/cableporn (SFW)

r/homelab has been a source of great inspiration since 2012.


https://www.reddit.com/r/selfhosted/ is an adjacent interesting subreddit


r/homeserver for those looking at single system or less intensive setups.


Mostly skimmed over here, but a key point is that you should be trying to build a cheap version of what you'd use at work. The closer you are to your work environment, the more you can applicably "try things" at home, and bring them to work with you. And since most of us are pretty busy, it's nice to not have to learn a whole new skill set just for maintaining our home networks.


I don't get why we'd pay for and maintain the test environment the employer should have. Should I bring my own printer and laptop to the office too?


I have done what the parent describes, but also agree with you. I have enjoyed doing things "the right way" at home when I've seen something heinous at work, but....why? I try to spend personal time on personal things these days. I've been playing with IPFS lately :)


I think this misunderstands the point of the parent comment. They're not saying "replicate your work environment so that you can test your things at home", they're saying "replicate your work environment so you can learn best practices at home and apply those learnings when you're at work".

There's a stark difference in that the things you host and do with your home setup are your own, but the learnings and technologies can be applied more broadly.


And it works vice versa! I don't have to learn to support my home environment... I can actually do research at work for my job and also conveniently apply that to home!


Yes, you should subsidize your employer by investing in your own specialized training and your own specialized equipment. With all the cutbacks the government isn't doing that for them any more so it's up to you to deliver solid value to the shareholders and quarter-over-quarter growth.


Either you're exceptionally dense or you're purposefully ignoring the point.

If you're setting up a homelab, you might as well use industry best practices for it.


With more companies doing work from home, and MS offering Autopilot, I have seen a few companies start to explore BYOD for things like laptops and printers... so I don't think this is very far off.


It's less about the employer and more about some people choosing to take an interest in expanding their skills in this way.


This is a terrible use of money for expanding your skills. Rent the machine time for peanuts, learning even more relevant skills simultaneously.


It's rarely peanuts by the time you architect a set of machines to work together.

I'm not sure if you've looked at techs like Portainer or Proxmox but the unit economics are pretty compelling compared to a cloud vm.


It would make sense to build a version of what you're not going to use at this workplace, but what you'd like to use in a hypothetical next one.


Yeah, if I’m directly testing things for work I’m, at the very least, going to use work resources if not work time for that. We're all Ubuntu on AWS at work.

My personal AWS resources are on FreeBSD. My home server is on Windows Server. The few places where I need Linux are at least Debian.

Home projects are to scratch a personal itch, try and avoid getting too rusty at the things I don't get to keep up with at work, and/or expand the breadth of my knowledge, not to replicate what I'm doing at work.


Sadly, my last employer used so much internal tooling and IP that trying to do that would have involved transferring internal packages from the company network to personal infrastructure, and it would probably have gotten me fired, as that would look pretty indistinguishable from an attempt at contract-violating IP theft.


I love homelabbing, but the ongoing problem with homelabbing as skills training is that more jobs use cloud services rather than on-premises hardware.


If given an option between a job maintaining cloud services and a job maintaining on-premises technologies, I'd take the latter every time. You spend less time fighting support ticket queues and more time doing things.


So, why not run open-source cloud software (OpenStack, CloudStack, etc.) in your homelab?


That's OK for learning general things and the software but jobs tend to use AWS.


Sure, so, you'll want to use some AWS (much of which can be within free tier), some of your homelab with CloudStack, etc., and some Terraform (etc.), and learn heterogeneous hybrid cloud.

Using a homelab for learning isn't somehow mutually exclusive with having your own AWS account for learning.


>Think of a home lab as a place where you can fail in the privacy of your own home.

This is the exact opposite of my thinking.

I want my experiments to happen on a remote server I don't care about...not on my home network that I really don't want to compromise/open up by accidentally misconfiguring something.

VPSes are cheap - I'm paying like 7 bucks for a 16GB / 4-vcore box (admittedly a one-off deal). 7 bucks to have the blast radius of my networking noobness somewhere else seems like a bargain.


For me, part of homelabbing is learning the entire stack from physical layer up. I learn a lot of stuff running on my own physical hardware that I would have never learned when using someone else’s hardware with many nines of uptime.

I have a reference point for how my software runs on real enterprise hardware. I know what unexpected things my software does when various hardware/VM/network issues arise, and I’ve learned ways around them and how to write better software that takes these edge cases into consideration.

Are you really full stack if you didn’t reflash the firmware on your RAID controller? When I say I’m full stack, I really mean it. :)

That said, I do keep separate VLANs for my “test” and “prod” environments here at home to keep frustrations at a minimum.


The goal is to have a staging area where you can encounter failures before they happen in the real environment.

A lot of times you cannot simulate the exact scenario you’re preparing for without actually doing it in a real setup. This is the point of a home lab.


There's been a lot of failure over here. I'd break my home server almost daily while tinkering. Now that it's been rock solid for about a year I kind of wish it would break and give me a problem to solve.


Hahaha I know that feeling!


Any recommendations on home networking?

I couldn't figure out what the established approach would be for having access to the infrastructure over a consumer internet connection without exposing the rest of the network. None of the resources I've seen so far give enough background to seem reliable.


Seconding the Ubiquiti EdgeRouter if you want a mostly GUI-guided config. The better models are solid enough for gigabit internet, and the built-in firewall and tunneling options are good enough for most purposes -- recent releases also let you install WireGuard, which is getting to be the preferred low-effort VPN solution for most platforms if you have a limited set of remote clients.

pfSense or OPNsense, which are based on FreeBSD, are also great if you have any remotely modern spare x64 computer lying around with two network ports.

IPv4 is purely outbound NAT; for IPv6 I have several subnets carved off that allow IPsec traffic to certain local hosts for some of my remote office setup, but I've mostly switched to using WireGuard from basically everything, which hits the EdgeRouter and drops me onto a private v4 and real v6 space.

Works great for my phone and laptops from basically anywhere and tunnels all my traffic back to my homelab and then out to the internet again. I have WireGuard using UDP port 443 on v4/v6, which, now that QUIC is common enough, can tunnel out of every network I've tried, even normally hyper-anal corporate ones.

Locally I have a MikroTik switch with 10G fiber between my work machines and gigabit Ethernet to the rest of the house, then a few Ubiquiti UniFi "semi-pro" APs for the house and back yard.

The primary storage host is running FreeBSD, serving iSCSI from a local ZFS RAID with consumer NVMe SSDs as cache on top of generic and easily replaceable SATA drives. I still have this IPv6-accessible with IPsec so I can basically treat it like local storage from all over the world, but I'll probably turn that off now that I'm using WireGuard nearly all the time (IPsec is faster since the tiny EdgeRouter processor isn't having to handle it).

It's pretty neat being basically "in your home office" from almost anywhere with decent internet.


Great write-up and nice setup! I've been running Unraid on a fast box for my local storage host and using NFS and SMB with dismal performance. I'm looking at 10GbE and building up my cache pool, but as it stands it takes 40+ seconds for my laptop to mount and browse even a small share. I'm intrigued by ZFS + iSCSI - do you think it would give me some improvements over SMB?


Probably highly dependent on what you're doing with it and whether or not your SMB implementation supports the direct RDMA extensions.

In my case it's mostly because I tend to run VMs on various older semi-retired machines with limited or slow local storage that I only turn on when I need them, and VMware's VMFS is cluster-aware, so it really doesn't matter which hypervisor is the one I end up spinning it up on.

I haven't dealt with Unraid specifically but there are a lot of caching and network parameters that can wildly affect performance -- VMware for example wants to do synchronous writes on network storage for obvious reasons, and having a safe write-cache and large transfers with enough in-flight commands can make a night and day difference.

If you're primarily just using NFS/SMB as file shares then getting iSCSI working probably isn't going to be a good use of your time versus figuring out why the existing setup behaves that way -- Samba and SMB performance tuning can be a frustrating experience but iSCSI is far more opaque and inscrutable on Windows particularly.


A USG and a PoE switch are on my short list to increase the functionality of my single UniFi AP AC Lite, which covers my apartment very well. I'd like the WireGuard option, and to sort out how to redirect my laptop back to my home network for simple management. At the moment I just connect to a VM using Chrome Remote Desktop and remote out to systems from there.


I can't replace/reconfigure the router I'm behind right now, so I set up a Wireguard VPN. A $5/mo DigitalOcean droplet is the "hub", which then has point-to-point connections through Wireguard to each of my devices. They all get their own IP addresses through the network—I used the 10.101.101.0/24 subnet for memorability—and since I manually allocate IP addresses, I can actually remember which one is which! Then they can talk to each other through the VPN, and if anything wants to be accessible from the public internet, Nginx on the DO droplet reverse proxies to it. My favorite advantage is that my phone/laptop always have access to my devices at home, even when roaming. Plus, in theory, I could relocate any device to another network with no downtime except the time it takes to physically move it and plug it back in—I haven't tested this yet.
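For anyone wanting to copy this topology, here's a rough, hypothetical sketch of the hub side of such a setup. All names, addresses, and keys are placeholders (generate real keys with wg genkey / wg pubkey); the point is just that the hub's wg0.conf is an [Interface] plus one [Peer] stanza per device, each pinned to its manually allocated /32:

    #!/usr/bin/env python3
    """Hypothetical sketch: render a WireGuard wg0.conf for the VPS "hub" of a
    hub-and-spoke setup like the one described above, from a manually allocated
    address table in 10.101.101.0/24. Names and keys are placeholders."""

    # Manual IP allocation: device name -> (VPN address, public key placeholder)
    PEERS = {
        "laptop": ("10.101.101.2", "LAPTOP_PUBLIC_KEY"),
        "phone": ("10.101.101.3", "PHONE_PUBLIC_KEY"),
        "home-server": ("10.101.101.10", "SERVER_PUBLIC_KEY"),
    }

    HUB_ADDRESS = "10.101.101.1/24"
    LISTEN_PORT = 51820


    def hub_config() -> str:
        lines = [
            "[Interface]",
            f"Address = {HUB_ADDRESS}",
            f"ListenPort = {LISTEN_PORT}",
            "PrivateKey = HUB_PRIVATE_KEY",
        ]
        for name, (addr, pubkey) in PEERS.items():
            lines += [
                "",
                f"# {name}",
                "[Peer]",
                f"PublicKey = {pubkey}",
                # /32: each device may only claim its own allocated address
                f"AllowedIPs = {addr}/32",
            ]
        return "\n".join(lines) + "\n"


    if __name__ == "__main__":
        print(hub_config())

Each device's own config then has a single [Peer] (the hub) with Endpoint set to the droplet's public address, AllowedIPs covering 10.101.101.0/24, and a PersistentKeepalive so roaming clients stay reachable.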

The one downside I've encountered so far is that I often have devices sending traffic from home network -> droplet -> home network, since they "don't know" that they're actually on the same local network, and exclusively communicate through the VPN. My ping to DO's datacenter is low enough that this hasn't really bothered me, though.


Not as popular as WireGuard, but Tinc VPN is a mesh VPN where each client is also a node. I've got it running on my actual router (pfSense), so my phone will attempt to connect to the router node when I'm on my home network, so any connections stay in my home network. And then for everything external I can have it go through a VPS (or multiple) like you.


I started with pfSense but very quickly got frustrated with the web UI although it was a massive step up from the ISP supplied router.

Now I run a Linux VM with nftables for my router, and it is by far the best system I have tried because it is so simple to manage. It took maybe half a day to learn to set up. As a bonus I am using NixOS, so all of my firewall config, interfaces, VLANs, and any other router-like config lives in a declarative configuration file which I can apply with one command. Before, I was using Ansible + Ubuntu, but I much prefer NixOS for a router.


Is your config file publicly available? I'm experimenting with NixOS on a server and a ThinkPad. I was thinking about using/learning OPNsense.


A traditional router can be enough to start with depending on what you're after.

If you want a local VPN without too much fuss, Ubiquiti gear is popular, including the EdgeRouter X gear for off the shelf configuration.

A lot of this stuff is pretty neat these days, set it up once, forget it as long as it's supported and updated from the vendor.


Maybe a bit off-topic but what do people actually use for monitoring/intrusion detection on small-scale to hobby projects that are internet-connected? I feel like the "established" monitoring frameworks require too much maintenance work and care to really be effective but the operating systems generally used for this kinda work have a way too large surface area to just leave running on their own. (Hello Unikernels?)


I am using a SonicWall NSA 2600. Works well, has 300 Mbit SSL filtering and very robust rules and NAT configuration. Most small/medium-size companies can use the model I am using. Has a very nice web interface too. Comes with SSL VPN and Global VPN.


What do you pay for licensing the services?


The author of the article can afford all this stuff because he keeps a 'little woman' locked in his basement ironing his shorts.

Sure, the article is full of techporn, but I haven't read anything so unabashedly sexist in nature since the late 1970s. It was truly cringeworthy. It reminded me of reading Popular Mechanics from the 1950s or a pulp sci-fi novel by Murray Leinster or something.


Source? The only time the author mentions his wife in the article is that she's working full-time and would dislike some server placements.


PLEASE BACKUP YOUR STUFF NO MATTER WHAT YOU DO!

I lost 50GB of options data going back 4 months yesterday + 2.5TB of currency data going back 20 years because I never checked that my backups were running.

Please don't let this happen to you.

EDIT: Literally just write a script that backs up flat files or binaries to S3 every N intervals.
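For anyone who wants a starting point, a minimal sketch of that kind of script, assuming boto3 is installed and AWS credentials are already configured; the bucket name and source directory are placeholders:

    #!/usr/bin/env python3
    """Minimal sketch of the "copy flat files to S3 on a schedule" idea.
    Bucket and source directory are placeholders; assumes boto3 and AWS
    credentials (env vars or ~/.aws/credentials). Run it from cron."""

    import pathlib

    import boto3

    BUCKET = "my-homelab-backups"       # placeholder
    SOURCE = pathlib.Path("/srv/data")  # directory to back up

    s3 = boto3.client("s3")


    def backup() -> None:
        for path in SOURCE.rglob("*"):
            if path.is_file():
                # Object key mirrors the path relative to SOURCE
                key = str(path.relative_to(SOURCE))
                s3.upload_file(str(path), BUCKET, key)
                print(f"uploaded {path} -> s3://{BUCKET}/{key}")


    if __name__ == "__main__":
        backup()

Turning on bucket versioning (or at least lifecycle rules) helps too, so one bad run can't silently overwrite the only good copy.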


That's terrible. There's nothing worse than losing data.

I'm working on this exact problem at the moment: a dashboard to keep track of your backups to check they never fail. https://backupshq.com

* Get notified when backups fail or take too long via email, slack, webhook, etc.

* Run your backup scripts and tools with an open source agent, which takes care of reporting backup results to the dashboard.

* Alternatively, write your own integration with our API to report backup results and send logs yourself.

I'm hoping to launch privately soon, I'd love to hear what you think. We haven't made any pricing decisions yet, but definitely want to give free credit to homelabbers!


>>Literally just write a script that backs up flat files or binaries to S3 every N intervals.

Sorry you lost your data; data loss is terrible.

While doing a simple copy to S3 is better than nothing, everyone wanting to back up their data should follow a 3-2-1 policy [1][2][3]:

3 copies of the data

2 different storage media / devices

1 offsite location

Backing up only to S3 can still leave you vulnerable. For example, I remember one person who was doing cloud backups; their card was declined, they missed the emails for a while, and their data was deleted...

[1] https://medium.com/@nakivo/the-3-2-1-backup-rule-an-efficien...

[2] https://www.backblaze.com/blog/the-3-2-1-backup-strategy/

[3] https://www.veeam.com/blog/how-to-follow-the-3-2-1-backup-ru...


The "2 different media" heuristic became irrelevant as optical media capacity stagnated, tape drives didn't really drop in price, and spinning rust dropped below $20/TB.

I wrote more about it here: https://photostructure.com/faq/how-do-i-safely-store-files/


Your write-up is pretty good.

To me the "2 media" rule is not strictly about 2 media types, i.e. HDD and DVD, or HDD and tape.

It is primarily there to ensure you are on 2 devices/systems/etc.

For individuals that could be their local system and a home NAS.

One reason this was created is that in the enterprise most systems are backed by a SAN. Having your prod data AND backup data on the same physical storage device or cluster would violate the rule. Oftentimes admins miss this because the storage layer is clustered and partitioned in a way that can give the appearance of being separate even if physically it is the same device.

Further, cloud storage can count as separate media as well.

So if you have the prod copy of the data + a backup on a local NAS + a copy on Backblaze or another cloud storage provider, you have followed the 3-2-1 rule.


Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put asterisks around it and it will get italicized.

https://news.ycombinator.com/newsguidelines.html

(Sorry about your stuff.)


What's the markup format? I've tried the common MD shortcuts but they never work; maybe I'm just a trog.



Backups--very boring. Restore--very exciting.

Restore your data once in a while to check that it's really getting backed up.
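If the backup is a simple file copy like the hypothetical S3 script above, a spot-check restore can be as small as pulling a few random objects back down and comparing checksums (bucket and paths are placeholders again):

    #!/usr/bin/env python3
    """Sketch of a periodic restore test for the hypothetical S3 backup above:
    pull a few random objects back down and compare checksums against the local
    originals. Only meaningful for files that haven't changed since the last
    backup run; bucket/paths are placeholders."""

    import hashlib
    import pathlib
    import random
    import tempfile

    import boto3

    BUCKET = "my-homelab-backups"       # placeholder, matches the backup sketch
    SOURCE = pathlib.Path("/srv/data")
    SAMPLE = 5                          # how many files to spot-check

    s3 = boto3.client("s3")


    def sha256(path: pathlib.Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()


    def spot_check() -> None:
        local_files = [p for p in SOURCE.rglob("*") if p.is_file()]
        for path in random.sample(local_files, min(SAMPLE, len(local_files))):
            key = str(path.relative_to(SOURCE))
            with tempfile.TemporaryDirectory() as tmpdir:
                dest = pathlib.Path(tmpdir) / "restored"
                s3.download_file(BUCKET, key, str(dest))
                status = "OK" if sha256(dest) == sha256(path) else "MISMATCH"
            print(f"{status}  {key}")


    if __name__ == "__main__":
        spot_check()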


For me: commit and push to GitHub more frequently. Don't wait until the PR is perfect before committing.


So you don't value your PC and are completely OK with losing the configuration and having to redo it from scratch?

For me personally, redoing my PC is worth a couple of years of backup subscription plus an HDD docking station plus two 4TB hard drives dedicated to backups.


Same as sibling: my setup isn't super custom. I back up my dotfiles and that's about it. It takes about 1-2 hours to set up a fresh machine.

I think for my next machine I'm going to dump macOS and switch to Windows + WSL, so I may investigate fully automating my setup and config then.


Not OP here but it takes me about an hour to rebuild my machine from scratch.


I don't understand why "pushing to GitHub more often" implies "do not value PC and are OK losing configuration"?


I think OP was referring to things like vim configs, shell customizations, nginx configs, etc. There are lots of computer customizations I can think of that aren't always thought about in the context of storing in a git repo. Also, you could have large files such as market data, movies, or a music collection that aren't a good fit for a git repo.


I've noticed that the usage of computers has changed drastically over the years. When I started last millennium, it was normal for people to have a large number of installed tools with a large amount of customization.

I have a bunch of custom-compiled software with options for my specific needs, and my two main computers (a PC and a laptop) both have a huge history of various workarounds that I don't even remember any longer and would have to research all over again. I also have a bunch of VirtualBox VMs: a Windows VM configured for remote work, a couple of Linux VMs with configurations for embedded development (I do ARM development), and one dedicated to MATLAB, Simulink and LabVIEW with a host of plugins and customizations.

While keeping dotfiles and scripts is easy (I just have a repository on Bitbucket), I would still have to spend a day or two just getting everything in order, and probably much more to rebuild the VMs for embedded development, which are just full of workarounds for various problems.

As a contractor paid by the hour I am aware of my hourly rate, and this helps me put things in perspective. Once you calculate the value of services and tools, what seemed expensive frequently becomes dirt cheap when you factor in time saved or risk averted.

For backup I keep a USB docking station and two 4TB HDDs. I try to keep one of them at a different site. I also keep a copy on an online service, mostly for convenience but also as another layer of protection.


I don't value the config of my PC. I do next to no customisation of my setup.


I recommend setting up "restic" to S3 or one of its many clones: https://restic.net/
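A rough sketch of what a scheduled restic-to-S3 run can look like when driven from a small script. The repository URL and backup path are placeholders; it assumes restic is installed, AWS credentials and RESTIC_PASSWORD are in the environment, and the repo has been initialised once with restic init:

    #!/usr/bin/env python3
    """Rough sketch of a scheduled restic run against an S3 repository.
    Repository URL and backup path are placeholders; AWS credentials and
    RESTIC_PASSWORD are expected in the environment, and `restic init`
    must have been run once beforehand."""

    import os
    import subprocess

    ENV = {
        **os.environ,
        "RESTIC_REPOSITORY": "s3:s3.amazonaws.com/my-homelab-backups",  # placeholder bucket
    }


    def restic(*args: str) -> None:
        subprocess.run(["restic", *args], env=ENV, check=True)


    if __name__ == "__main__":
        restic("backup", "/srv/data")             # deduplicated, incremental snapshot
        restic("forget", "--keep-daily", "7",     # retention policy
               "--keep-weekly", "4", "--prune")
        restic("check")                           # verify repository integrity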


It's only a backup if you have more than 1 copy of it. Storing it in one cloud doesn't count. Losing data sucks, but seems to be a rite of passage.


I think Backblaze will come out cheaper than writing to S3, and it saves you from maintaining the cron job for the script.


I am working on using Syncthing. Not a cloud backup solution though.


For people who are really into system administration.


Is that a bad thing? You seem to be implying it's a bad thing.


The work has come down by many orders of magnitude. It's possible to have DigitalOcean/cloud-type experiences locally... pretty reasonably.


How can you do that? Don't you need a proper static IP from the ISP unless you have some sort of a mega business connection?

Edit: thanks for the responses, I didn't know this was even possible.


In my experience, your ISP will only give you a new IP address if your modem disconnects for a while. You can easily use the same IP address for over a year.

You can also use something like DuckDNS as a backup, so that if you are away from home and your IP changes, you can still find your server. You could even theoretically write a script that checks DuckDNS for your IP and updates the record with your domain registrar automatically (assuming they have an API).
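A related minimal sketch, assuming the requests library, a public "what is my IP" service, and DuckDNS's documented update endpoint (domain and token are placeholders); run it from cron every few minutes:

    #!/usr/bin/env python3
    """Minimal dynamic-DNS updater sketch: look up the current public IP and
    push it to DuckDNS. Domain and token are placeholders; assumes the
    `requests` library and DuckDNS's documented update endpoint."""

    import requests

    DOMAIN = "my-homelab"           # placeholder: my-homelab.duckdns.org
    TOKEN = "duckdns-token-here"    # placeholder


    def current_ip() -> str:
        # Any "what is my IP" service works; ipify returns the address as plain text.
        return requests.get("https://api.ipify.org", timeout=10).text.strip()


    def update_dns(ip: str) -> bool:
        resp = requests.get(
            "https://www.duckdns.org/update",
            params={"domains": DOMAIN, "token": TOKEN, "ip": ip},
            timeout=10,
        )
        return resp.text.strip() == "OK"  # DuckDNS replies "OK" or "KO"


    if __name__ == "__main__":
        ip = current_ip()
        print(("updated to " if update_dns(ip) else "update failed for ") + ip)

The same shape works against most registrar APIs; only the update call changes.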


There are also prebuilt Docker images that will update DuckDNS for you if you are using Docker in your lab.


If you mean being able to access your server from elsewhere...

I have a dynamic DNS thingamabob going (a script on a raspberry pi that pings my website a few times per day) and have had no problems. Haven't even noticed my home servers becoming unavailable due to IP address changes for a few years, probably because my ISP (Charter a.k.a. Spectrum) doesn't change my address very often. I use NearlyFreeSpeech for hosting the website and they also handle DNS and have an API for it, so it's all very nice and simple.


You can (and probably should, for security reasons) have your entry point on e.g. a cloud box/VPS/etc. If you have a changing IP where your servers are, you can just connect via VPN, and then it's pretty seamless even as your IP changes.

More reliable than dyndns (since you never know how long DNS servers are actually caching)


You can get a static IP through my business provider (Cox) for free. It comes with the business subscription. Getting a business subscription is as easy as signing up for it. I work from home, and the guaranteed speed and higher line priority are worth the ~$20/month premium. Also, no data cap!


Many domain registrars give you an API for DNS updates. Every minute, resolve your IP and, if it changes, update it. The service won't be available during propagation, but home IPs don't change too often.


Dynamic DNS has existed for the better part of 20+ years.

Residential ISPs also often provide static DHCP leases that are quasi static IPs.

Some ISPs also sell an addon for a static IP to the service.


The admin is admittedly the worst part about it, but since virtual machines and Ansible were invented it's at least a tractable problem.


I had been wondering if retrocomputing were the informatics equivalent of model railroading, but homelabbing might be an even better fit.


What I'm most fascinated by is that this is a solo blog that has sponsors and plenty of affiliate links, and has over 4 million readers. Very curious what kind of revenue it's pulling in. Can't seem to find a search function to see if the topic was already covered.


I'd be interested to know if it comes close to covering the cost of the setup. It looks beautiful, but I bet it comes at a price.


I currently have a home server with a bunch of USB hard drives hanging off it, which I pool using ZFS.

That was fine when I only had 2 drives, but now my "USB Octopus" is getting silly.

What's the next step in hardware if I want my server to access a bunch of disks?


I bought, for about £150-200 iirc, a 4U chassis with 16x 3.5" bays for this reason, reckoning it will always be enough (I won't fill them before the optimal £/TB disk size increases and I can go through upgrading them). It sits in strips of U-spaced punched aluminium, sideways on some quick and dirty shelving I built. That's not crucial except that you probably don't want it on the floor, and if you're going to rack mount it (as opposed to just sitting it on a shelf) the (width) sizing is relatively crucial.

It was cheap and easy (the chassis was much more expensive than the shelving/rack, but the value's in the hot-swappable bays); I think I'd recommend it.


Synology is really high quality stuff and will pay for itself in reduced power usage. Any ancient rackmount gear you buy will guzzle electricity


Also in reduced maintenance and suffering. They are designed to do the things we want.

I say get a beefy Synology, maybe a tape drive, and rent whatever else you need in the cloud. Plus a desktop/laptop as you prefer.


Their hardware is great, but pricey. A used server can be had for a lot cheaper, and you can get the same functionality with Xpenology. Not sure the electricity savings outweigh the upfront hardware costs.


> Is Xpenology Stable?

> As a person interested using Xpenology you should keep in mind, that this is not an official Synology release. Although it’s based on the official Synology DSM which is the same as whats found on the actual Synology devices, it’s not officially supported by Synology. You could get support from the community for this software, but not from the company itself. Typically once you’re up and running things are very stable without any issues but just know that calling support is not an option when you’re not running on an official Synology device.


I'm sure everyone has different experiences, but it's been super stable for me. The biggest issues come with the software upgrades and certain configurations may have issues, however the Xpenology forums have massive support and everyone lists how their upgrade experience went across multiple platforms (VM and bare metal). It was worth it for me.


A Synology or QNAP NAS is a good idea in that it will be an appliance-type experience, which is a big step up from having to build and maintain it yourself, which isn't for everyone.


Find whatever fits your needs here: https://www.icydock.com/index.php

I've never been unhappy with anything I've bought from them.

If you want a lot of 3.5" internal hot-swap drives, you may need to find a case with 5.25" bays, which are getting harder to find.


Dunno how many is a bunch, but I have an HP MicroServer that's quiet and cheap. They're intended for small offices, and can be picked up refurbished for $250. I have ~12TB (4x3TB) in mine, in two mirrored vdevs for 6TB usable. Less than $125 for all the (used) HGST drives too.


"Shuck" all of the drives into a Rosewill 4U chassis, if you dig around you can find them cheap open box on Amazon and sometimes even eBay. They're durable, pretty cheap and come with 8 3.5" bays.


Unfortunately those Rosewill 4U cases with 15x 3.5" drive bays have been out of stock since COVID started.

I wish I could find an alternative that is as affordable.


There are a number of PCIe SAS HBAs, or SAS RAID cards that you can run in HBA mode, which will handle SAS and/or SATA drives with an adapter. Getting 16 drives off a single HBA is totally doable, and you don't need the more expensive battery-backed-cache models since ZFS is doing the pool work.

I personally prefer the QNAP options, as those are either re-branded or integrated from chipsets that are widely supported in Linux and BSD.

There are rackable ATX cases like the Rosewill RSV-L4500 which don't have the enterprise-y niceties but are perfectly serviceable and fairly quiet for a dozen disks.


Buy a used server from Ebay that has lots of hot-swappable drive bays. Many can be had for $100-200 and work fine.


Your electricity usage footprint might outpace the savings that appear to be the case up front.

Separating your storage as an appliance/service (dedicated NAS with low power usage) is worth running a comparison on.


So much this. I think my energy bill for my single PC, as mentioned elsewhere in this thread, works out to about $70 (equivalent) a year.

When I had the full-whack server rack it was pulling that per month, and 99% of the time it was idle.


This might depend on where you live...my server pulls 180W average with 11 hard drives (10 reg, 1 SSD). That works out to ~$150/yr at my electricity rate of $0.0816/kwh.


Energy isn’t going to get cheaper and neither is the environmental impact of consuming lots of it.


Running a less efficient server or making a brand new server that consumes raw materials. Same can be said for a 10yr old Honda Civic versus a brand new Tesla. Either way is wasteful...just depends on your viewpoint.


Yeah which is why I do neither.


Once you add up multiple devices it can get out of hand.

The price/watt ratio is important if you want something you can set, forget, and not notice running in the background.


I did this also. An HP G8 DL380p with 12LFF spots was had for <$250 with plenty of RAM. Add Xpenology and it was like having a Synology NAS for about 1/5 the price. Currently I'm running 8x8TB in RAID6...something that would have required a ~$1200 machine from Synology (at least when I was looking to upgrade).


I would like to do this, but am worried about the noise. I've heard servers that sound like jet engines. Is there anything I can look for to avoid that?


I would love to know, actually. Mine are all annoyingly loud, especially the HP. The worst part is that there is no operator control over the fan speed what so fucking ever, and it automatically goes up much more than necessary just because a PCI card has been added or some such (even an HP-branded PCI card).

You can probably do a lot better by building a custom one, but that is going to be a lot more expensive. I would love to hear from anyone with experience in this actually, as I'm kind of starting to get fed up with it.

Aside from aggressively controlling the fans, I find the HP and Dell firmwares to be entirely useless, as they all apparently require a web browser with an ancient version of Java. In one case, Java 6. Yeah... this was a terrible decision even 10 years ago.

Another annoyance is that they will mostly come with some kind of hardware RAID card that doesn't pass through raw disks. Often, a completely worthless one that must be replaced. Even if it's a less terrible one, it will require reflashing if you want to use it in JBOD mode (for ZFS or software RAID).

All that being said, considering that I paid $140 shipped for this HP server 3 years ago, I'm not sure how much I can complain. Setup was a pain in the ass, but once I got it done, it's been working mostly flawlessly since. The only ongoing pain points for me are the lack of USB3, and the fan noise. (I could add a USB3 PCI card, perhaps, but I'm sure that would increase the fan noise even more!)


> Aside from aggressively controlling the fans, I find the HP and Dell firmwares to be entirely useless, as they all apparently require a web browser with an ancient version of Java. In one case, Java 6. Yeah... this was a terrible decision even 10 years ago.

Newer HPE servers with iLO 5 just need a plain browser to access iLO, and for the remote console they give you the choice of HTML5, .NET, or Java. I have a Gen10 MicroServer and the HTML5 console works flawlessly in Chrome.


I guess that's a pretty significant improvement, but I still don't understand why a server console needs a web browser at all? Why not just fire up a VNC server and call it a day?


Do some research. The G8 DL380p I bought sounded fine until I stopped using the built-in RAID and opted for software RAID, due to the software I wanted to use. Not using that RAID caused the fans to run at 100% all the time. I opted to hijack the fan voltage (which was harder than it should be because of the proprietary fan connectors/sensors) and manually crank down the speed to an acceptable temperature/noise ratio. More than I needed to do if I had just done some research first.


If you need 12-15 HDDs, Rosewill's 4U case (I have the RSV-L4500) is good for a bedroom because it supports 12cm fans. Fractal Design's Define 7 XL was recently released and also looks good if you prefer normal ATX case dimensions rather than rackmount.

If you need just about 8 or fewer HDDs and don't need hot-swap, there are many options, like a normal ATX case.

If you need silence, don't get a server case that supports 8cm fans only!


Avoid real servers. Intel NUCs and tiny PCs from Lenovo, HP, Dell, etc. are perfectly serviceable for this purpose.


AFAIK, none of those support ECC RAM, making them eminently unsuitable for serious storage.


ECC ram is definitely a consideration. I've appreciated having it when it was around.

The needs of each homelab can be different though. Some homelabs don't need to take on the power/noise/energy cost of serious hardware just to get ECC, unless there is an explicit benefit from it.

Making do without ECC can be perfectly possible for some or most portions of a homelab.


Most of the Synology units are not using ECC. Should we avoid them?


As a hardware engineer, I got all excited that it would talk about budget-friendly oscilloscopes, DMMs, and power supplies. Still a fun read.


Router =/= Firewall.

If you are planning to host stuff, or even just to protect your own things, you'd better get something like a SonicWall or similar.

At my home I have a SonicWall, a Synology NAS, 2 Dell servers, and power supplies -- not in a single closet, though. To me, server security is far more important than the looks. Spend the most money on the firewall; the rest is just for looks. A patch panel for a home setup sounds excessive because I am not going to run all my cables through my garage. Have individual switches in every room if you want. Most living spaces need only one or two Ethernet cables; the rest of the devices can easily be on WiFi.


Maybe a noob question, but why do you need a dedicated firewall appliance? What is it doing for you that just closing all ports and then selectively reopening them as needed (my current solution) won't do? What kinds of threats is it protecting your network from?


Also mostly a noob, but my understanding is that most of the value comes from having a subscription to the firewall vendor's threat database, and using the box's deep packet inspection to verify gnarly stuff isn't coming over port 80 or other common protocols. You want it in an appliance so that you can be assured of isolation from the rest of your network. You can run firewall software in a VM, but you would still want that VM's host physically isolated or with dedicated NICs for the VM.


Well, I don't think you need a real firewall, but a serious router is nice for the home network.

A big advantage is that you can do VLANs, so you can separate your homelab from the rest of your devices. You could also do things like block outbound traffic from your home network.


I'm tempted to buy used enterprise equipment, but in the end I ended up buying small Intel NUC machines for my homelab.

The power consumption of these NUCs is almost negligible, while one Dell R620 with dual CPUs takes 150-200W at idle.


I looked at NUCs and settled on a used ThinkPad. Integrated keyboard, monitor, mouse and battery, and potentially even more judicious with power consumption, but oomphy enough to run image recognition on my photo collection while hosting lots of Docker containers. I back up to a local NAS that I only switch on during the backup window, and back up remotely to Backblaze B2.


I bought used NUCs too. New ones cost way too much :D


Same for me. I can fit two Intel NUCs under my monitor stand - one a headless server, another the primary input on the monitor - and can accomplish a decent amount with 16 GB RAM (on an older generation NUC) and two internal drives.


Kinda curious what the noise level is with those fans. I know there are no proper rack-mount servers there that need the fans running full blast, but I imagine 4x 120mm fans are still loud.


120mm + closed-loop watercooling makes almost no noise. 140mm makes no noise (compared to background), but doesn't fit in the standard "U" format. 3U (ATX) can fit a full 120mm, but not a closed loop. 4U (ATX) takes a lot of space, but the airflow and WC flow are easy to get right. I used to be a self-employed embedded systems (Linux) engineer and had to compile whole OS images all day long on air-gapped, on-premise systems, so I ran my own "semi-pro home lab" for a few years.

Another trick is to have 5V, 12V and 48V DC lines from a silent PSU like the Corsair RM series directly feeding some servers so they don't have to use an internal PSU; that leaves only the CPU and GPU as "hot" components (and the NAS, but that's another story and a solved problem).

https://imgur.com/a/cGYxIPW


Yeah that was my first thought. Noise is really what killed my home lab in the 00s. It was over 3kW when all turned on and was audible from any room in the house.

Lasted about 18 months before my wife vetoed it.

From memory -

    Sun Ultra 60 x 2
    Sun Ultra 80 with the big foam sound-deadening kit option
    Sun Ultra 5 x many (6?)
    Sun Ultra 10 x 2
    Sun 6-bay SCSI external disk caddy

    Cisco 24-port 10/100 managed switch, but I forget the model, something XL (IOS 11 or something truly ancient like that)
    2 Cisco routers with PPPoA ADSL adapters
    3Com unmanaged kit with PoE
    Terminal server

    IBM AIX B50 telco 1U server
Several Linux x86 hosts in a “Beowulf” cluster - a single machine image spread over commodity nodes. Seriously cool; even today I'm in awe of the concept (but not the practicalities).

An OpenBSD host on an old x86 desktop

A FreeBSD node on a beefy (for the time) AMD Opteron host

And probably a bunch of other toys I've forgotten by now.


That's a serious amount of kit, and there's no way 20+ machines isn't going to emit a constant roar. At some point, if you're going to have that kind of setup, it needs to be in a dedicated workshop that's either free-standing, or somehow otherwise sound-isolated from the main house. I know I'd never get away with running that much equipment all at once!


18 months, lol. His wife must be a saint.


She probably complained earlier, but he couldn't hear her over all the noise!


Any rackmount server will be a nuisance. I ran rack servers at home for years but recently replaced my servers with ATX form factor machines, which are super quiet, ventilate much better and still stack nicely. There are plenty of ATX Xeon motherboards that support ECC ram.


I won't be the first to admit that the rackmount hardware is a nuisance, but at the same time, it's relatively inexpensive since there's no practical reason to buy new hardware for a homelab and the IT renewal companies are absolutely overflowing with reclaimed hardware on the cheap. Also it's often easier to find hardware configurations that require no tinkering at all, verses having to slog through a PC build and hope that nothing goes wrong. My favorite hardware though are workstations and tower servers, since they're basically the best of both worlds - commercial machines with a decent amount of replacement parts with tool-less (or even hot) swapping and people with experience servicing them while still being fairly quiet at heavy load.

I've done it every other way - I have a dual-chip Xeon Silver machine that I hand-built, an IT-reclaimed Dell rack server, three NUCs (used to be 4 but one bit the dust after 4.5 years of hard service), and a smattering of other hardware (NASes, 10G and 1G switches, etc.)...

After the absolute pain of the Xeon Silver build - having a brand new motherboard that was DOA and basically being told I was out $600 after multiple rounds of escalation, then having the replacement board fight with my GPUs, I'm not sure if I'm willing to bother doing it again. It would have been worth it to pay the markup on a gently used commercial machine and I would have been up and running faster... Probably would have been cheaper, in fact.


Yeah everything you mentioned is spot on. I wish I had a garage to operate all my equipment, but have to work within the constraints of my small apartment.


Depends on your storage needs. More than 6-8 hard drives and getting an ATX form factor machine is near impossible, while many brands/generations of rack mount servers fit the bill for cheap.


Fair point.

My log machine uses six 16TB SATA drives configured as ZFS mirrors, so effectively 48TB of storage, which I back up to S3.

SAS drives are better but the current setup meets my needs.


SAS are better, but for my NAS/Media server needs I use SATA. I have 8x8TB, 2x4TB and 1x500GB SSD...all within a 2U rackmount server that set me back less than $250 that includes things like backup power supplies and 192GB of RAM. I bought another 1U rackmount to run my personal website and NVR (Zoneminder).


Out of all the setups I've read about here, I would love to hear more about yours and the costs involved, please?


I haven't tallied all the costs...nor do I want to.

The media server is a 2U HP G8 DL380p with the 12 LFF drive cage. I run Xpenology (Synology hacked to run on other hardware). That allows me to run Plex and Docker along with some auto-backup stuff from my other computers (it also serves as a central point for VM images to roll out to my computers depending on which I am using that week). I was incorrect about the RAM; this server is only 48GB since Xpenology can't use it all. The server has dual E5-2440s.

The other server/NVR is a 1U HP G8 DL380e that has a 4 LFF drive cage. It came with the 4x2TB setup already and is running Ubuntu Server 20.04. Zoneminder is a great software package and easily handles my 8 1080p-or-better PoE cameras around my house/shop. I think it was also ~$250 on eBay... with another $50-60 in RAM from eBay (Zoneminder does a lot of caching in RAM prior to recording movement detection). This machine sports 192GB of RAM and dual E5-2650s.


Now it's possible thanks to Fractal Design's Define 7 XL.


So for just under the price of a used 4-5 year old server you can get that case... not really practical for someone on a budget.


Holy crap: "18 HDD/SSDs plus five SSDs"

That's a lot of spindles.


A 4U rackmount server is an identical form factor to a 19"-tall tower.


That’s right, though 4U cases generally only ventilate from the front and back, which seems to contribute to their noise level. I figure that only a small minority of the 4U market runs their servers in apartment kitchens.

I’ve had good luck using horizontal Silverstone ATX cases with fans on three sides. That said, I’m sure there are 4U units that offer huge but low rpm fans that would satisfy the bedroom server-lab niche market.


I guess I've never owned a tower that ventilated on the top or bottom. I don't like top fans because gravity means things get in them, and bottom fans don't work on carpet.

Usually 4U cases have a lot of fan mounts so rather than a few large radius fans you have lots of small radius fans. It used to be that all small fans were loud but in the past decade or so you've been able to get smaller fans at just about any noise point you want.


Off-the-shelf rack servers are the worst with noise level, so I personally don't recommend that for a homelab. With my latest upgrade I went custom for this reason. You can build a quiet ATX-based NAS with fairly cheap consumer/gamer-grade components these days.

Using a Rosewill 4U ATX enclosure (RSV-L4312); replaced the stock case fans with Noctua NF-A12/NF-A8 (120mm / 80mm). It's extremely quiet, and now I have a full custom 4U with hot swappable drives, while still maintaining plenty of airflow.

My only gripe is that the quality of the Rosewill case isn't what I'm used to with enterprise-grade or even gamer-grade chassis. I really wish other companies would get into the homelab space and start making quality ATX rack-mounted chassis. I think there's a decent-sized DIY market for this.


I had a Rosewill and hated it so much I replaced it with an iStarUSA case. I have the 3U because I wanted to fit it, plus a power strip, in a 4U rack, but I would recommend the 4U version to anyone who has the space, because iStar uses a removable motherboard tray that makes some "3U ready" parts not actually fit in the case. 4U is the same height as a standard tower is wide, so there should be zero issues with that in their 4U cases.


Does anybody know if the kinds of power strips mentioned in the article are available for the European market (https://www.amazon.com/gp/product/B00KFZ98YO/)?

I was searching for something like that a few months ago but couldn't find anything that has Schuko sockets on the other side.


You can search for PDUs (Power distribution units) in North America to get you pointed in the right direction.

https://www.amazon.com/s?k=pdu


I was expecting a guide to genetically modifying yeast in the kitchen or something.


I'm trying to remember the name of those really small, nearly portable homelab servers.

They're quite expensive but looked like a good homelab alternative for people lacking the space for a proper rack.

Does someone know it?


I am very happy with my ProLiant MicroServer Gen10 Plus. Holds 4x 3.5" drives, is absolutely tiny, good at virtualisation, quiet-ish, upgradeable.


Very nice! I configured my NAS a few weeks ago and went for a Fractal Design Core 500.

Now I'm looking for a smaller, beefier machine with less storage capacity, for some @home experiments.


Maybe Antsle? https://antsle.com/


You might be thinking of Intel NUCs, or tiny PCs like the Lenovo M920q, HP EliteDesk, etc.


NUC? NAS?

Kind of depends which way you wanna go. NAS is probably the term you're thinking of.


Nope, it was indeed NUC.

I already have a DIY NAS at home, I'm now looking for an extra small machine to run more experimental stuff, playing with k8s, etc.

I don't want to do that on my NAS.


I think it was Intel NUC.


Funny. I have a full 42U rack at home, but because I am moving to another city I had to dismantle everything to get it packed. All the critical VMs and services are now running on an old NUC and a low-spec HP laptop. It made me rethink whether I really need all the hardware I have or should downsize.


One note... the modem recommended is only DOCSIS 3.0, not 3.1, so it will not support gigabit networking.


Great, now the time-honored nerd practice of running shit out of your closet/garage has been picked up by hucksters as "homelab"ing. If it's anything like the damage that YouTube did to computer building, next thing you know this practice is going to suddenly turn into aesthetics and bragging rights.

I am SOOO SICK of watching this highly technical and aesthetically inconsiderate hobby turn into an issue of neckbeard Modern Living. A new generation of boutique providers selling storage racks at silly markups.

Pretty soon you're going to have the worst kind of hipsters -- keyboard nerds -- obsessing over the material used in rack standoffs or something. Because you know that aged Persian rubber dampens vibrations +.03db better...

Yay.....



