There are lots of enormously better solutions for that. Personally I've dumped all my UniFi gateways (including some of the "next generation" ones I had for testing) and moved to OPNsense running on decent SuperMicro 1U edge systems (excellent value, quiet, regularly available dirt cheap on eBay and the like). But there are many great options there including VyOS, OpenWRT, or even just running straight OpenBSD if that's what you like. Router/gateway is one area where I think devoting real metal, preferably with higher-reliability hardware and extra remote management options (like IPMI), is worth it. It's an important linchpin for a typical network, and while virtualizing it is possible and definitely desirable at large scale it's not where I want to add more moving parts without a team behind it. Everything else can go tits up while I mess with it or there are odd interactions, and as long as the gateway/routing is still solid odds are I can recover without much disruption or having to physically be there.
This is depressing to read because I worked at Ubiquiti before things got bad. The company culture took a huge dive when the CEO began removing executives and managers who were getting things done. The company was always weird, but when I started they were good about finding strong engineers who cared about networking and letting us build good products.
Everything changed when they tried to reorganize the company around the UDM and move development to China. The UDM reorg was so awful that employees and executives were leaving in droves. For months we couldn't even tell who was supposed to be in charge of UDM or UniFi because so many people left at once. I think we had 4 different UniFi lead developers get hired and then quit a few months later after the early employees all left.
Sad situation. Ubiquiti was a cool company that paid well and let engineers do good work when I joined. I still wonder why it had to fall apart so fast. The company was never perfect but it was sad to watch the CEO drive away all of the good parts of the company.
What a shame. Thanks though for your own small part in doing something pretty special. If nothing else Edge/airMax/airFiber/UniFi opened a lot of our eyes to what SMB networking could be vs abominations like old Cisco.
I think everyone should be worried when a company founder/CEO of a tech firm starts doing silly stuff like going out and buying a professional basketball team. That's never a good sign.
Don’t have any personal experience with them, however.
At the moment I'm using Unifi for my router, but I'd consider switching to something as secure as Mikrotik but with support for mDNS across VLANs.
They have some equally complicated-to-set-up provisioning software, like Ubiquiti's (when you have just one device it's overkill), but you can actually configure everything on the device directly from the web interface, and no cloud is required (I spent an hour trying to set it up before I realised this...).
It is classed AC1750, so the same as a UAP-AC-PRO which is around €150 here. I paid €95, so quite a bit cheaper. They also have some WiFi 6 (AX) devices, but I don't have any compatible clients so didn't bother with those.
I haven't moved everything on WiFi or L2 switching yet and am still evaluating that, poking at different solutions from a variety of places across price ranges (from MikroTik to Peplink or Ruckus). Part of the true pain of all this is that there isn't any clear obvious successor to all the things Ubiquiti aimed for or else a lot more of us would have probably bailed completely already vs drawing things out. Not merely the price point, which I'd actually be fine paying somewhat more for given how important it is, but zero cloud dependency, the fairly pleasant (though getting worse) single pane of glass for managing things, and decent physical design that doesn't look like giant mutant space bugs all matter.
However, even in WiFi it has started to get flakier, and while the older WiFi 5 devices, which benefited from what was once a lot of great engineering talent, have held up, my experience with the newer WiFi 6 stuff has been mediocre. And the controller has continued to go downhill as well, actively removing useful functionality and information density as they keep messing with the UI every single version in the most classic bikeshedding fashion :(.
WiFi 6E/WiFi 7 and multigig will I think be the major decision point, so another 1-3 years. 6 GHz spectrum will be genuinely very useful in a variety of settings, so we're going to be looking at a point where it'll be desirable to replace a lot of hardware anyway. Once there is that kind of commitment that's going to represent not wanting to change again for another 5-7 years, well, might as well reevaluate. And I hate it but I just don't see Ubiquiti getting better.
I will say that, with a touch of irony, Ubiquiti's rot spiral has actually reinforced the value of Ubiquiti's approach IMO. Because such things can happen to any company, and at least with UniFi/UNMS the self-host control option has worked out as exactly the last-resort backstop we all always said it could be. There isn't any required cloud tie. There aren't any forced firmware updates. The system can be quite well isolated: with routing/gateway gone, the rest can go onto a dedicated management VLAN with zero ingress/egress. That leaves a lot more runway even without updates. I'm sorry it was needed but it makes me more determined than ever to avoid remote dependencies. So I'll thank Ubiquiti for that I guess :\.
As a counterpoint, mine works beautifully. UDM Pro, multiple access points, access point roaming, 2.4 and 5 GHz networks, and all the goodness. Everything works flawlessly and it's been quite reliable.
They seem to really enjoy spending money moving my cheese on the web site admin interface, and a few unexpected features seem to have vanished, but overall... there's nothing as good or centrally integrated. Everything else is a collection of point solutions, which is more than many (including myself) have the time for...
I agree with what you have done here and I will echo the sentiment to run away screaming from Ubiquiti/UniFi.
It's easy to be confused by all of the professional ISP gear that Ubiquiti used to produce and think that the UniFi products (like the "Dream Machine") are professional networking equipment. Try blocking ubnt domains at your network edge and losing the ability to log into UniFi ... you'll see how "professional" it is right away.
I will also say that it is ironic, and sad, that you mention Supermicro as an alternative. Their recent moves make it quite clear that they are trying to move in the same direction ... which is to say, making wonderfully built products that fulfill useful (but boring) use-cases is never enough.
Supermicro circa 2010 was making people rich but not billionaire rich ... and that is a problem they are working hard to fix.
>It's easy to be confused by all of the professional ISP gear that Ubiquiti used to produce and think that the UniFi products (like the "Dream Machine") are professional networking equipment. Try blocking ubnt domains at your network edge and losing the ability to log into UniFi ... you'll see how "professional" it is right away.
The 'dream machine'/UniFi OS stuff is real trash and maybe where the rot truly showed to have completely taken over, though there were very bad signs earlier like when they trashed their self-hosted video solution and reversed earlier promises to restrict the follow up to their proprietary hardware (CK G2). Worth clarifying though that it's still perfectly possible to have a full self-hosted controller with local accounts, multiple sites and zero greater internet dependencies (route management VLANs through wireguard back to the L3 controller). The UDMs are in no way necessary. Their problems are more fundamental than that :).
>I will also say that it is ironic, and sad, that you mention Supermicro as an alternative. Their recent moves make it quite clear that they are trying to move in the same direction ...
I'm genuinely curious what you mean by that? At your scale of course you have far more insight into businesses of that size, but I honestly haven't seen any signs of that on the SM side nor is it quite clear to me how they would even go about that? They just make systems don't they? Even their IPMI is based off the AST2k series BMC, and they've been extremely reasonable about what they offer there in stark contrast to players like HPE. Their systems also aren't packed with proprietary junk that punishes you the instant you try to do anything else, again in stark contrast to HPE. They're just computers. Of course I'm only in at the lower end of their spectrum of offerings!
>making wonderfully built products that fulfill useful (but boring) use-cases is never enough. Supermicro circa 2010 was making people rich but not billionaire rich ...
One of the particularly frustrating/outright confusing things about Ubiquiti FWIW is that this isn't actually true. Ubiquiti did in fact turn Pera into a billionaire just by making wonderful products that fulfilled important use cases, and there were clear and loudly suggested natural revenue opportunities that they never even bothered with. Looking at all of their shitty moves and self-destruction, it generally doesn't have any sort of revenue tie at all! And indeed on the contrary, they've repeatedly skipped out on (accepted, even) features that would flat out encourage more hardware sales and revenue (like L2 replicants to L3 control). They've actively spent major money and development time on things that people hate yet are completely free. It's not like all those ever-worse new Controller versions are paid upgrades. It's not like they've added subscriptions and microtransactions everywhere. Even their cloud stuff again doesn't actually have any revenue story attached. They made a big deal about offering one of the most classic super-high-margin boosting things of all, "Premium Business Support Contracts", and then... just kind of abandoned it and let it die out, pissing off a ton of professionals and businesses in the process even though it could clearly print money.
It's not a matter of there being some cold, money-grubbing business logic to their moves that, however evil, one can understand the point of. There simply isn't any logic at all, beyond maybe trying to outsource so hard and create such a toxic environment that they literally just don't have the internal capability to execute on much of anything anymore. Outside of a few remaining decent people still plugging away a bit on bug fixes, most of the activity seems directionless: meetings and throwing stuff at the wall with no strategy or follow-up, and an ever-widening set of products that sometimes get dropped before they are even out of "early access".
So yeah. If anything the Ubiquiti of old would be doing fantastically better right now just because demand for networking and everything related has accelerated so much. It's just so stupid.
For managed switches, look at the Aruba Instant On 1930 series. I've just ordered two of these so I don't have any first-hand experience, but the feedback online is generally positive. Do note, these switches can only be managed over HTTPS, but the interface seems clean. From my research the cheaper TP-Link and Netgear switches don't have an isolated management interface, meaning it can be accessed from all VLANs. This was a deal breaker for me. I also considered the HP OfficeConnect 1820 series switches, but they've been out for a while and I worried their EOL may be coming up shortly.
For access points, look at the Aruba Instant On AP22. The biggest downfall is that the access point uses a cloud controller, requiring an internet connection to manage the device. There is no local management. This is the exact same hardware as the Aruba AP-505, which runs for ~$400. Given that Aruba makes solid wireless hardware, the advanced features compared to other units in this price range, and the lower price point, I'm willing to give up local management control. After all, I don't modify my AP that often. My Ubiquiti access point has crashed multiple times; the most recent crash appeared to be a memory leak. Maybe I'm just unlucky, but this aligns with numerous complaints about firmware quality.
For Netgear you want a T eg GS724T(P) in the model which implies smart switch (web managed). They do have command line managed ones too ie "managed switch".
For home use, decent L1 and L2 is enough - you don't want to do L3 switching in general, unless your house is at the top of the Mall. So, the GS724TPv2 and GS110TPv3 get you a 24 or 8 port PoE+ switch with L1 and L2 covered in a web interface with VLANs etc. The newer interface with the PVID section that shows tags, ports and LAGs at a glance is one of the best, regardless of price or status. For the money those switches are quite hard to beat.
I use a lot of these in my personal networking setup but I am going to start moving to a raspberry pi CM4 built into this dual-ethernet board:
It's a smaller footprint, more CPU horsepower, etc., and I like mounting little devices onto DIN rails more than I like trying to rackmount the 7" or 9" APUs ...
I have one single site that is doing much heavier 10G+ routing and usage that I also wanted to mess with more intensive SDN and security on. For that, last year I ended up picking the much beefier and much more expensive EPYC Embedded based 5019D-FTN4 and putting a Mellanox card in it. It's also extremely quiet and has been really impressive, but that's stupid overkill right now. Also, EPYC Embedded is currently still based off of Gen1; there was no Zen3 update due to not having low TDP given the way the chiplets were upgraded vs the IO chip. I expect Zen4 next year will see an upgraded Embedded platform that will essentially be a 3-gen leap forward, so at this point it's not the best time anyway.
There is no perfect solution IMO. Though probably nothing that'd throw off the typical HNer, OPNsense does have its warts, rough edges and missing bits (no WebAuthn, so no security keys for login, for example). It's based off of a FreeBSD variant (soon to be directly off FreeBSD) with all that comes with that, for better or worse. Like, OPNsense does have a user-space plugin option for WireGuard (along with ZeroTier and so on), but WG has not yet made it to the FreeBSD kernel, which in some situations could be an issue (countered by raw CPU in my case). But it's powerful, well maintained, overall fairly user friendly, has pretty solid documentation and getting-started guides, and a nice community with a good mix of developers and some companies behind it. The company Deciso for example does offer a paid business edition and paid support options if desired. It does have DNS blacklist options a la PiHole, stats/telemetry/IDS/IPS via built-in and 3rd-party offerings like Sensei, etc. There are plugins for Let's Encrypt, FreeRADIUS and other handy functionality. Someone who is very familiar with Linux might find VyOS more worth looking at, but with my background I found OPNsense reasonably pleasant to get into.
The decision tree here also depends on how much network functionality you want to build into your gateway/routing system vs how much to stick on a separate server elsewhere (maybe virtualized, or as part of a NAS). Gateways can be very minimal or can handle damn near everything on the network. There are straightforward tradeoffs there in terms of failure modes and complexity.
Strongly disagree, even if you disregard price point. Unless you are limiting your criticism to the USG/UDM and routing. Which I think you are, but just want to be sure.
For the wifi and switching they are very very hard to beat. For normies and techies alike. I only dislike that the APs are underpowered so you need to carpet bomb your house if you have a large footprint. For dense living it's great though. Even for large area (blocked by walls) it's ok if you don't mind filling in the gaps with mesh. By the time you buy 1 AP and 1 mesh you are at $400 though.
The gateway products are absolutely abysmal, even before considering the buggy new hardware.
It comes with nice addon packages for stuff like WireGuard, all kinds of tunnels/VPNs, adblockers, runs containers and a ton more. I even run it on a VPS as a container with exclusive access to the "physical" NIC. The parent OS isn't directly accessible at all. Makes firewalling a breeze. The only open ports are for the Tor relay and WireGuard, through which I connect to the webui/ssh and do everything else.
Of course, my router also runs OpenWrt...
You can do a lot with these commodity ARM CPUs, 64-128MB of ram and a few tens of megabytes of flash storage.
How does it stack up against VyOS (which just recently got VRF-lite support)?
You SSH into it. Then do whatever you need to. There's nothing in the UI that can't be done via CLI as far as I know. Some plugins might not be 100% CLI compliant but at least the Base UI (luci) is completely transparent to the CLI via uci.
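For anyone who hasn't seen it, the uci workflow looks something like this sketch (the `lan` interface name is just OpenWrt's default; the address is an arbitrary example):

```shell
# Inspect the current network config
uci show network

# Stage a change, then persist and apply it
uci set network.lan.ipaddr='192.168.10.1'
uci commit network
/etc/init.d/network reload
```

Until you `uci commit`, changes live only in a staging area, which makes it easy to back out of mistakes before anything touches the running config.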
I haven't done a ton of research in this area, but I'd certainly like to use OpenWRT or OPNSense on my home routers/firewalls.
Side note: I've been trying to figure out a decent way to get rid of Android on my Galaxy S9+, but it appears to be locked.
At the end of the day, I just want to be in control of my bandwidth, my data, and know what's going on. Big companies are making this very complicated with all the tracking.
I recently re-enabled my pi-hole on a virtual machine, and it never ceases to amaze me what is talking to the internet without my permission. After digging into DoH a bit, I'm about to the point where I think I need to put in an outbound proxy, deny all outbound access except via the proxy, and iterate again and again.
I just don't want to have $200 a month in power bills to support my home network to save bandwidth and know what is traversing the net.
The R7800 is well supported by OpenWrt, has the hardware features I need, some room to grow, and it's affordable used. I paid about $90 for my first one, and about $70 for my backup unit.
For OpenWrt for smaller purposes, for which an R7800 is both overkill and physically bulky, I understand there are a bunch of neat options now. I just keep some old WNDR3700 and WNDR3800 units on hand, which used to be my main routers, and actually still could be. (Sometimes they might be a simple WiFi bridge or print server. Other times, they might be an experimental LAN that needs different properties than I have set up for my main router, and with which I don't want to complicate my main router setup.)
That being said, there is no hardcore prosumer hardware out there for this purpose. The moment you go beyond home-user router hardware like the WRT3200ACM you are into either Cisco business stuff or custom server builds. Potentially a Raspberry Pi 4 with a PCIe ethernet card is the closest thing to prosumer hardware out there, and there's a lot of hacking involved to get that running to the same degree as an OpenWrt router.
But I doubt most consumers really care. It's complicated.
If you don't like the fork, at least with the Turris Omnia it looks like you can put on vanilla OpenWRT, but as always check the OpenWRT table of hardware for details before buying.
I think the PSU of the Turris Omnia is rated at 40W max, but I don't know what sort of real-world power draw you'd get with your specific use-case. I guess it depends on whether you use WiFi, the SFP port etc.
I think there's a special configuration command that might fix some of the above issues, but I've been using the web interface (which actually does support committing and, to some extent, validation).
The way OpenWRT handles routing and firewall rules is idiosyncratic, and they apply their own terminology for some things. They have their own distro-specific packages for things like DHCP (odhcp(c)d) and the firewall (fw3).
For very simple networks, it's very smooth to get to where you want. Add on dual-stack v4/v6, vlans, multiple firewall zones, routing policies etc and things start becoming very unpredictable.
Oh, and that adblock package? Turns out a single invalid line in a blocklist will completely break DNS (at least on the version I was running from last year).
Not to mention that (AFAIK) there's no good way to keep up to date with security patches and bugfixes while keeping the system stable.
After all the countless hours I poured into OpenWRT configuration, I finally realized that it's so much less pain and confusion with vanilla Debian with systemd-networkd (which BTW natively supports setting up Wireguard interfaces now) and firewalld+nftables, everything configured via ansible playbooks.
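For anyone curious, the networkd side of that is just two small unit files. Roughly like this sketch, where the interface name, addresses, endpoint, and key path are all placeholders for whatever your setup uses:

```
# /etc/systemd/network/90-wg0.netdev
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0.key
ListenPort=51820

[WireGuardPeer]
PublicKey=<peer-public-key-here>
AllowedIPs=10.10.0.0/24
Endpoint=vpn.example.org:51820

# /etc/systemd/network/90-wg0.network
[Match]
Name=wg0

[Network]
Address=10.10.0.1/24
```

After a `networkctl reload` (or restarting systemd-networkd) the tunnel comes up with the rest of your interfaces; no separate VPN service to manage.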
For someone diving into this today, it's a lot easier and more future-proof with nftables than iptables - and OpenWRT will be married to iptables for the foreseeable future.
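As a taste of why nftables is nicer to start with, a minimal stateful router ruleset reads almost like the sentence you'd use to describe it. A sketch, with the interface names being assumptions:

```
# /etc/nftables.conf - minimal stateful router with masquerading
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    # allow replies to established connections
    ct state established,related accept
    # allow LAN-to-WAN traffic
    iifname "lan0" oifname "wan0" accept
  }
}
table ip nat {
  chain postrouting {
    type nat hook postrouting priority 100; policy accept;
    oifname "wan0" masquerade
  }
}
```

Loaded with `nft -f /etc/nftables.conf`, the filtering and NAT rules live together in one readable file instead of being scattered across iptables chains.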
It's great that it works for you, but if, like I did, you have some imposter syndrome over not perfectly understanding Linux networking and are happy that OpenWRT takes care of those confusing iptables rules and routing policies and what-not - you may just discover that learning how it actually works takes less work than abusing OpenWRT into doing what you want.
Sure, you have to give up the WebUI and some of the custom add-ons.
I am sure BSD or Rocky Linux are fine choices as well; Debian just happens to be what I mostly use for servers otherwise.
I don't want to hate too much on OpenWRT as it's great for novices with trivial needs and there are many devices where it or dd-wrt are the only readily available options. But if you run Linux anyway and have an x86/amd64/arm device you're going to use as a main router, I'd recommend choosing a "normal" distro and setting things up from scratch.
You can buy them in the UK on eBay for 15-20 pounds already with OpenWrt installed (you can do it yourself, but it involves a bit of soldering) - I have two in case the main one fails - it talks to most if not all ISPs.
I love OpenWrt now; it does take a bit of getting used to if you haven't used it before.
I mainly use it to lock my wifi down between hours for the kids, whilst keeping another wifi SSID open.
For security all my NAS's are wired and locked down to key wired computers - I keep meaning to create a Nextcloud gateway on Docker.
A Palo Alto VM gets you pretty much most of the sweet PA features without the cost, and a better approach than outdated strategies like VLAN-as-access-control or zone firewalling: it permits allow/deny by protocol, and overall better privilege tiering by network area.
Does Palo Alto have some kind of no-cost offering in their VM line?
This isn't free, but $50-200 is a lot less than $2-4k.
My setup could be even better if either Fios or Orbi provided a half-decent router, but just using PiHole as my DNS server has been awesome. When my Pi (SD card) crashed and I had to revert to my old router setup, it was shocking how slow everything was. I'd become so used to pages loading in a blink, waiting a second or two while ads loaded seemed to take forever.
If you have any unused SFF computer gathering dust, give PiHole a try (it runs on any of the major Linux distros). Initial setup takes under an hour. You can customize the heck out of it with blocklists; I'm blocking so much my wife and kids can't even install new iPhone apps. Amazon devices, Rokus, Google - nothing gets to call home unless I allow it. I have it forcing all Google/Bing searches to use their clean filter; worrying my kids might google "porn" is a thing of the past. There are easy-to-find tutorials out there, just be sure to filter Google Images as well.
Tomato has long been my chosen router firmware - much easier to use than OpenWrt, but not quite as feature-laden and runs on a more select set of hardware (mostly ASUS routers). But it has recently gotten a development boost from a new maintainer, and the ad-blocking seems just as good as PiHole.
I suppose I can unplug my PiHole now that there’s no traffic going to it...
A few things ...
First, you can make pihole-like DNS ad-filtering available to everyone you know by using nextdns.io as your DNS and (basically) moving your pihole into the cloud. It's a tremendous product and I wish I had thought of it.
Second, aren't all of these things (pihole / nextdns) already obsolete? Browsers (like Firefox) are enabling DoH by default, and devices in your home as well as apps on your devices are going to migrate to DoH as well.
Unless there is a solution I am missing I fear that we had a brief golden age where properly configured ad-blocking, via DNS, was a simple and useful solution but now that's falling apart ...
If you are using a filtered DNS, there is a canary domain (use-application-dns.net) that you block to tell Firefox not to activate DoH (unless the user explicitly enabled it). It's already included in Pi-hole, and some hosts lists include it (despite Firefox prioritising the hosts list over DoH).
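If you run plain dnsmasq rather than Pi-hole, the equivalent is a one-liner - a sketch, assuming a standard dnsmasq setup:

```
# /etc/dnsmasq.conf - answer the canary domain locally so it returns NXDOMAIN,
# which tells Firefox to keep DoH disabled by default
local=/use-application-dns.net/
```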
Plain-text DNS is redirectable (technically a hijack, but whatever).
Ironically, I think that IoT devices will be the ones with hard-to-shut-off DoH/DoT. Even worse, they have the incentive to develop a proprietary protocol for ads, so the next step up would be IP blocklists. Or, I dunno, they could just hold your device hostage if you don't allow internet connectivity.
It’s also fairly easy to keep a VPN to home running to avoid nosy corporate wifi and enjoy my own vicious filters.
There was a surprising amount of traffic that bypassed local DNS.
It’s a strange feeling when you are actively fighting your own devices.
That's because they're not your devices. As soon as certificate pinning for DNS-over-HTTPS becomes commonplace in consumer electronics filtering traffic by way of MiTM'ing name resolution will be completely game-over. You are an evil nation state actor trying to subvert the privacy of your people, per the DNS-over-HTTPS threat model.
I assume manufacturers are just going to stick 5G chipsets into things to get around user control anyway.
I seriously hate this future.
I wish a lot of consumers wouldn't but they don't know any better (and probably don't care).
Apparently if Android gets an IPv6 address it expects an IPv6 DNS address as well, or it will fall back to Google's own. I had to configure the PiHole with a ULA address and make the router serve that.
For example, no DHCPv6 support: https://issuetracker.google.com/issues/36949085?pli=1 (note the date)
They're so religious about it they wrote an entire RFC and made it a BCP so they could justify their silliness: https://datatracker.ietf.org/doc/html/rfc7934
In addition to the services mentioned in the article, one that I recently added to my home network is a Stratum 1 NTP server that gets its time via a Pi GPIO GPS module. Whether or not this has any real positive privacy or security implications, I do like the idea that I have significantly reduced outbound traffic to/from port 123 from the devices in the home network. Interestingly most devices are happy with the NTP server provided via DHCP, but notably Apple and Sonos products always want to go to time.apple.com and Sonos' own alternative (and I haven't yet had the heart to set up split DNS to try and redirect them to the local NTP anyway...). edit: typos
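For anyone wanting to try this, the chrony side is only a few lines once gpsd is feeding the GPS data into shared memory. A sketch - note the offset value is very much per-setup and needs tuning, and the subnet is whatever your LAN uses:

```
# /etc/chrony/chrony.conf (fragment)
# NMEA time via gpsd's SHM segment - coarse, so mark it noselect
refclock SHM 0 refid NMEA offset 0.200 precision 1e-1 noselect
# PPS pulse from the GPS module on the Pi's GPIO - the accurate source
refclock PPS /dev/pps0 refid PPS lock NMEA precision 1e-7
# Serve time to the rest of the LAN
allow 192.168.1.0/24
```

The `lock NMEA` pairing is what lets chrony number the bare PPS pulses using the coarse NMEA time, which is where most of the offset-tuning effort goes.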
And then there is of course the pure hack value of playing with the setup I don't get to/have to deal with / manage otherwise.
The following things are easy:
- Tweaking a container version in the composefile to upgrade or downgrade
- Entirely swapping out the underlying Linux distro without touching a line of code in existing composefiles
- Isolating all incidental data generated by the application from the user-generated data (for backup purposes)
- Infrastructure as code (so you can easily migrate between servers, and version your setup)
- Quick iteration on service set-up. It's possible to remove services entirely too, so experimentation between different options is very easy.
A sustainable self-hosted setup is one that is quick to maintain and upgrade. If you don't do both, security issues and incompatibilities will eventually be a problem.
- Using docker means I can easily specify the version I want to run.
- Docker images are, for some reason, often simpler to configure than OS packages, for the same result.
- It's also really easy to tell what you need to back up, and to be sure that you got all of it. Plus it's very easy to make sure everything—data and config—for every service lives in a single branch of the filesystem tree, for further convenience.
- For similar reasons, entirely erasing a daemon is very easy.
- Using docker means I don't have to care which version(s) of the software I need is provided by my distro, or go track down extra repos, or whatever. This makes it easy to run a boring-but-stable distro to minimize maintenance, but still have the latest versions of the things you're actually running, and upgrading one of them will never mess up anything else.
- There's a lot more server software available through docker-hub than most (all?) distro official package sets—and, again, that works the same no matter which distro you're on, so everything about it, including the knowledge, is portable.
- All that, using the exact same docker-compose files or simple shell scripts, works the same on any distro, plus macOS. Migrating to a new server can, trivially, be made as simple as rsyncing your entire docker-stuff tree and then doing one "docker-compose up -d" or running a single very boring and simple shell script one time.
Really, the only down-side is you can't use it on FreeBSD right now.
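To make that concrete, here's roughly what one service in such a setup looks like - a hypothetical Pi-hole compose file, with the host paths and ports being my own arbitrary choices:

```yaml
# docker-compose.yml - one self-contained service; everything it needs
# to survive a migration lives under ./pihole/
version: "3"
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/udp"
      - "53:53/tcp"
      - "8080:80/tcp"   # web admin on the host's port 8080
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
      - ./pihole/etc-dnsmasq.d:/etc/dnsmasq.d
```

Since all state is bind-mounted beside the compose file, "back up the server" reduces to "copy this directory tree", and moving hosts is rsync plus one `docker-compose up -d`.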
As someone who's run Linux servers (public-facing, even, in the early days) at home since, oh, 2000 or 2001 (plus managing them professionally) I can say that Docker's great for home servers. Less fiddly trivia to worry about. What distro's on my home server? I'm not even sure. I think it's Debian? Dunno. Hardly matters. I've written zero "config as code" stuff for my home set-up, yet could have a new server up with identical services in maybe 3-5 minutes longer than it takes to install the OS, and all but maybe a minute of that would be hands-off, just watching progress bars complete. With effort I could get that down even lower, but I got that much for free.
Yeah, I think that's got something to do with it. At the very least, you practically have to document right up front where all the config affecting the container lives, both files and env vars. It's also tempting to put the most commonly-used config items in env vars, if they weren't already. Consequently, dockerized Samba, for example, is the easiest config of that daemon that I've ever performed, for any of my never-unusual-or-complex use cases, including with GUI tools, over a couple decades of using it.
I like the way you think, and was thinking about some of the same. Could you share what solution you went with in terms of hardware? I am interested in using better than commodity time that could serve both NTP and precision time as well.
The benefits of better, local time are fairly significant if you are doing anything with time-sensitive computing (PKI, performance measurement, and audio are interests of mine).
As a word of caution - it looks like most guides that I came across are outdated, or indeed never produced a reliably (or at all) working NTPd setup. The instructions that led me down the 'right' path are linked here. I am still in the process of tuning the GPS offset in my setup. But all in all, it's been a great learning experience into the complexities of a protocol I've mostly taken for granted for a very long time.
I have FTTH (France) and the first thing I did was to replace the ISP-provided router. They are really poor quality. I bought a Ubiquiti ER-4 and, in retrospect, I should have bought a mini fanless PC and run Debian on it.
Why Debian and not OPNsense or something like that? Because I have control over the very limited services I will use on the router, namely routing, firewalling and dnsmasq for DHCP and DNS. It is difficult to keep up with the security / functionality otherwise. The worst is a closed, archaic system such as the one on the Ubiquiti ER-4. A step up is something more dynamic such as OPNsense or similar. If you have the will to learn a bit of Linux, Debian is the cleanest solution. Again - this is really for a very small set of services you will almost never touch.
Behind the ER-4 there is a switch that powers a few end-user PCs, a PoE port for a Ubiquiti UniFi WiFi access point, and a server.
The UniFi AP is fantastic. This is such an extreme upgrade vs the ISP-provided one that I cannot imagine going back. Really invest in a higher-range consumer AP. In my apartment I used to get, going from room to room (laid out more or less linearly), 4, then 2, then 1, then 0-1 "WiFi bars" (a rough measure of reception quality). I now get, at the same emission power, 4 4 4 3.
Then the server. There are two basic parts on it: the "services" part and the "home automation" part.
The services are things (web sites, MQTT, ...) that are run exclusively from Docker. I tested recovering the whole system from nothing but the backups (of the configuration, i.e. docker-compose.yaml, and the non-volatile volumes), and it took me one hour to go from "I have no documentation, not even an ISO" to "the lights work again". Docker is a life saver here.
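For readers who haven't tried this pattern, the trick that makes a one-hour recovery possible is keeping all state in bind mounts next to the compose file. A minimal illustrative shape (the service name and paths are made up, not the author's actual stack):

```yaml
# docker-compose.yaml (sketch): everything stateful lives under
# /srv/volumes, so backing up this file plus that directory is a
# complete description of the host.
services:
  mosquitto:                      # e.g. the MQTT broker mentioned above
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"
    volumes:
      - /srv/volumes/mosquitto:/mosquitto/data
```

Restoring is then just copying the backup back into place and running `docker compose up -d` on a fresh Docker install.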
Backups with Borg.
The second part on the server is home automation: a Docker container for Home Assistant and a USB dongle for Zigbee. I then have Zigbee and Wi-Fi devices all over the apartment to get rid of fixed wall switches. Plus automation.
What would I have done differently? A mini PC instead of the ER-4, and two electrical wires in the wall (phase and neutral).
What will I do in the future? A redundant setup for home automation (this is not simple at all).
I would love feedback from those who are more experienced or who have been through this - things they would do differently, or do again.
My current goals are proper IPv6 support, in case the day comes when my ISP will no longer give me an IPv4 address. The setup is currently close to perfect, but my DDNS service (Google Domains) isn't letting me make dynamic AAAA records for some reason. I'm in talks with support right now. I also have not successfully made ip6tables entries for rerouting IPv6 DNS traffic to my local DNS servers. When I have made entries, it totally breaks IPv6.
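For what it's worth, one common reason DNS redirect rules "totally break IPv6" is a DNAT loop: the local resolver's own upstream queries get redirected back to itself. A hedged sketch of rules that exempt the resolver first (the address is a placeholder, and this assumes the rules run on the router):

```
# Redirect LAN clients' IPv6 DNS to a local resolver at fd00::53,
# but let the resolver's own queries through untouched.
ip6tables -t nat -A PREROUTING -s fd00::53 -p udp --dport 53 -j RETURN
ip6tables -t nat -A PREROUTING -s fd00::53 -p tcp --dport 53 -j RETURN
ip6tables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination [fd00::53]:53
ip6tables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination [fd00::53]:53
```

If the resolver sits on the same segment as the clients, you may additionally need a hairpin masquerade rule so replies flow back through the router rather than directly to the client.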
I too would prefer an x86 box for a router, but OTOH purpose-built routers are much more energy efficient. Heat is bad for reliability and more power means less run time on a UPS.
My next home network purchases will be an all-flash TrueNAS box that is backed up to Backblaze. I want to get off privacy-devoid cloud services.
Doesn't compute!
Even more so as the OP chose Cloudflare! US jurisdiction + commercial company?
Well, if you're going to go to the trouble of building and running your own VMs or docker images then the obvious recommendation is to run your own recursive resolver (Unbound or Knot Resolver).
The added bonus is that CDN content will operate as expected for your IP range instead of you talking to the CDN server nearest to whatever datacentre you spoke to the third-party DNS BGP anycast on. This may or may not be a big deal depending on your geographic location and internet connection. I am aware some third-party DNS services have options to try to fix the CDN problem, but YMMV as to whether this works in practice.
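For the curious, a recursive Unbound setup for a LAN is genuinely small. A minimal sketch (interface address and range are placeholders for your own network):

```
# unbound.conf (sketch): resolve from the root servers directly,
# so no third-party resolver ever sees the query stream.
server:
    interface: 192.168.1.53
    access-control: 192.168.1.0/24 allow
    access-control: 0.0.0.0/0 refuse
    hide-identity: yes
    hide-version: yes
    # no forward-zone block = full recursion from the roots
```

Adding a `forward-zone` later (e.g. to a filtered upstream) is a three-line change, so starting fully recursive costs nothing.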
Or, wait until ODoH goes mainstream: https://datatracker.ietf.org/doc/html/draft-pauly-dprive-obl...
I run my own resolving nameserver (unbound) on a server in a datacenter (but could be DO or EC2 or whatever).
The upstream for my nameserver is the DNS I set up for myself at nextdns.io which is ad-filtered like a pi-hole.
So I use my own DNS server and distribute that address to whomever or whatever needs it - and I control the traffic and usage and monitoring of it - but I also get ad-filtering without having to run a pi-hole.
I don't want to run my own infrastructure. But the commercial providers will scan your data, "unperson" everything they, the government, or the copyright holders don't like, and charge me a fortune for the privilege or withhold features.
If you write a playbook to install some software on your home network, you have some YAML that took longer to write than it would have taken to do the task manually and isn't likely to be useful to anybody else. And without maintenance, that playbook will eventually decay and stop working. The scarf will continue to keep you warm forever.
This is the part that has killed this as an activity for me. I really enjoyed the experience of setting up playbooks and so on for my home server (deluge, jellyfin, zoneminder, etc), but in the end I didn't have the time to do much more than make the occasional config tweaks and restart it all live, and once that started happening, the scripts were no longer of value, and it was clear there was no point.
That said, I know some Nix acolytes swear by it for this kind of thing, in part because it really forces you to do things using its declarative wrappers and intentionally doesn't provide workarounds the way containers do ("just shell in and change whatever, lol").
Many other things - Nextcloud, caddy, proxmox to name a few, are long gone now because they demanded hours of attention at unpredictable times, and I just can't be jeffed.
I still love tinkering with bits of tech on the home network, but I'm extremely wary of coming to rely on any of it until it's proven itself to need minimal ongoing babysitting.
But also, why do you care so much about what other people do with their time? Live and let live, my friend. Nobody is forcing you to write ansible playbooks, build your own homelab, or comment on posts you view as a waste of everybody's time.
Even if you go by the philosophy that something has to be useful to be valuable (not something I personally believe), toy projects have their place. Plus, they’re fun.
'If you use ansible to configure some stuff on your home network, you have some self-hosted stuff at the end of it. You can use it yourself or let guests log in too.
If you use needles and threads, you have some tools that took longer to go to the shop and buy and use to make something than it would have taken to go to the shop and just buy the thing. And without regular use, you'll lose the needles and the yarn will tangle and discolour. The self-hosted services will continue to run.'
This is debatable. Some of those manual tasks you'd simply forget about, and you'd end up with a partially reproduced environment later, which is okay if you only miss configuration you don't actually care about. It is much more annoying when you expect everything to work but realize that your workflows are randomly blocked because you forgot to manually install and configure some piece of software when reinstalling the OS, or perhaps you have to set up your IDEs completely anew because the configuration directories weren't preserved anywhere and are now simply lost. One of the benefits of using something like Ansible is avoiding situations like this, or perhaps using something like NixOS (even though there are usability concerns presently).
In the more common case of using Ansible to manage servers (perhaps a homelab) as opposed to just handling personal devices, the implications of this would be far more sinister - I've seen Java applications fail to work after migrating to a new OS release because fonts weren't installed, because someone forgot to document that step as necessary. And even if instructions are given (say, when setting up some package anew, or reading your own documentation about your setup), there's no guarantee that you'll remember to follow all of them to the letter, or maybe you'll simply overlook a failure in one of the steps. That becomes even more likely as the size of your homelab and the count of your personal devices increase.
Clearly the impact of things like that happening is far lower in a homelab setting than it would be in a professional environment, but that also means that you'll essentially be pigeonholed into using some sort of automation solution at your workplace (hopefully) to avoid situations like that, so also using it for your personal stuff is just the next logical progression. No one likes dealing with failures that aren't immediately apparent and could have been avoided entirely. I don't know about you, but Ansible's standard modules provide really good reusability, so I've definitely borrowed samples of how to do something from my personal setup for my work projects and vice versa (not verbatim, but syntax, how to use parameters, and how to get things done in general).
> And without maintenance, that playbook will eventually decay and stop working. The scarf will continue to keep you warm forever.
This doesn't feel entirely accurate - everything, from your software to your scarf, will eventually decay. It's mostly only a matter of time, though you can mitigate this by using more stable OS distributions like Ubuntu LTS (until very recently, I would have also recommended CentOS) or by using better materials for your scarf. Oh, and choose the stable versions of boring software, and perhaps cut the most rapidly changing technologies out of your stack entirely, until development there slows down.
For me, that currently means:
- using Debian because it's stable and boring enough for my needs (both servers and desktop/laptop with XFCE)
- using Ansible for servers, treating personal devices as disposable otherwise (no attempt to preserve configuration, too much effort)
- using automated incremental backup software for the data, just in case
- manually provisioning any VMs/VPSes that I require, but having most of the configuration be similarly automated
- using Docker containers within those VMs/VPSes with Docker Swarm liberally, to separate software runtime environments from their data output and their configuration input
- using Docker Swarm to make managing all of that simpler and partially automate it, alongside something like Portainer for making that process more user friendly
- using Caddy to never have to deal with certificates manually, even though I manage DNS manually
- not updating software I don't expose publicly and don't need the newest versions of (GIMP, Blender, LibreOffice, some private containers)
- using automated security upgrades within everything else, but also using the latest stable versions of server software, never bleeding edge
But many of these things aren't actually a major effort. It's some YAML for Ansible to configure a Raspberry Pi for security and running Docker, and then running a few existing Docker images. It might even be less total time to just stick it all in Ansible and run it than to run each command by hand.
And there are actually some really nice benefits to running something like pi-hole on your home network and forcing all DNS through it. You can get ad and malware blocking on devices that don't typically let you easily. And you can just set it up in one place rather than on every single device.
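Forcing all DNS through the Pi-hole usually comes down to one NAT rule on the router. A hedged nftables sketch, assuming the Pi-hole sits at 192.168.1.2 (adjust addresses for your network):

```
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat; policy accept;
    # rewrite any hardcoded resolver (8.8.8.8 and friends) to the
    # Pi-hole, skipping the Pi-hole's own upstream queries
    ip saddr != 192.168.1.2 udp dport 53 dnat to 192.168.1.2
    ip saddr != 192.168.1.2 tcp dport 53 dnat to 192.168.1.2
  }
}
```

Note this only catches plain port-53 DNS; DoH rides on port 443 and is its own arms race.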
For you, sure. For many others, it's just about doing something you like doing just because you like doing it. DJ'ing is my hobby, has been for 15+ years. I regularly spin sets a couple of times a week for at least 2, often up to 4 hours at one time, just for myself. I love doing it and feel so content and happy inside when I'm done. I feel even better during it, when everything clicks and I hit a really nice groove.
I have no need to share it with the world, have other people listen to it, or use my hobby to get gigs and make money with it. My sets are ephemeral, my own happy place, and always will be. I'm sure this is the case for many other people and their own hobbies.
Why can't people just enjoy doing a hobby simply because they like doing it? Why does there often need to be pressure from somewhere to do something more with a hobby?
Tapping your foot can't be considered a hobby right? It produces nothing tangible and involves no creativity.
The same can be said about a lot of other hobbies, regardless of what kind of process it might or might not be. Just doing it and being in it - reading and getting lost in a book, watching a movie and getting lost in it, going on a hike and getting lost in the woods (metaphorically, not actually lost, lol), writing code and getting lost in it, etc..
Ansible playbooks can be the same way! The point of hobbies for many people is just because they like to do it, nothing more, nothing less. There may also be other reasons, such as the creativity for me, but there are also times when I'm not feeling super creative and my set will just be an un-mixed playlist of music that I just want to get lost in in that moment... if that makes sense.
Some people might just like losing themselves in an Ansible playbook, or think they're fun for fun's sake or whatever reason. :)
Also, there is no requirement that a hobby has to produce something tangible or be creative. Watching television, drinking, people watching, and meditating could all be considered hobbies. Anything you do somewhat regularly to relax or for enjoyment, that isn't your job, is a hobby.
It doesn’t have to be creative nor worthwhile. Just something that you enjoy. Perhaps yours is trolling nerd forums about other peoples choice of hobbies? :P
Let me introduce you to the world of tap-dancing my friend.
This toxic "grind until you die" mindset needs to end.
Hobbies *do not* need to produce something to "show for it". The thing you are "producing" is happiness for yourself.
None of this tooling is "heavy duty" by any means. If you just apply a little thought, you'll see how this saves time and effort.
How dare someone experience pleasure! Especially from a hobby.
Like the experience and newfound knowledge that Ben gained from this endeavor?
How I re-OVER-ENGINEERED my home network for privacy and security
My Ansible playbooks are barely a step up from shell scripts, my Docker images are basically always whatever each project supplies themselves (and don't run on top of Kubernetes), and my infrastructure consists of a grand total of one Raspberry Pi and an 8-port switch.
These things don't have to be overcomplicated if one doesn't want to overcomplicate them. Docker and Ansible lead to real time savings, both because they document the exact software setup and because they make replicating it very easy when switching servers (as inevitably happens in a hodge-podge home lab).
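To make "barely a step up from shell scripts" concrete, a homelab playbook can be as small as this hypothetical example (the host group, package names, and the Pi-hole container are illustrative, not anyone's actual setup):

```yaml
- hosts: homelab
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Ensure Docker is present
      ansible.builtin.apt:
        name: docker.io
        state: present

    - name: Run Pi-hole from the official image
      community.docker.docker_container:
        name: pihole
        image: pihole/pihole:latest
        restart_policy: unless-stopped
        network_mode: host
```

Because every task is idempotent, re-running it after a rebuild converges the Pi back to the same state, which is the whole time-saving argument.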
Every once in a while a good clean "toss the whole thing out and start over" is in order. For me it's typically that an Ubuntu upgrade has done something odd. Then I spend a few hours digging into it and fixing it. But it would be easier/quicker to just scrap the whole thing and rebuild it, or at least toss it in a VM beforehand so I know what will go sideways. A bit of Ansible would let me do that quickly. The other four I can just leave alone. Starting from a known state (broken or fixed) is always handy. Basically it may save me several afternoons of work, where I would rather be programming instead of digging out some weird error that some upgraded bit of software started throwing because a leftover config file is getting in the way.
It astonishes me how people can enjoy this kind of tedious work as a hobby.
It's just mental exercise.
But it looks like a lot (all?) of the work here was just to get pi-hole functioning well - and that now needs him to deploy pi-hole, cloudflared, caddy plus all the surrounding tools of Docker, Docker Compose, Ansible etc.
You're spot on that if he'd applied some thought beyond simply "fixing pihole" he might have seen there are more modern and elegant solutions for network-wide ad blocking.
e.g. AdGuard Home is an open-source single binary, precompiled for most architectures and OSes. It's self-updating, with an HTTPS multi-user GUI, and supports DoH, DoT, and DNSCrypt out of the box. It has a single YAML config file to back up/sync, and supports adblock lists in regex as well as hosts-file format.
Downloading and running that one little binary gives him the endgame he wants a lot more elegantly.
It's like those "there's no place like 127.0.0.1" shirts, which work despite the fact that we don't pronounce the localhost address as "home". It's a way of saying "hey, look, I'm just like you!" without being overt about it.
Microsoft is a clever company, and they recognize that they have to do a lot to offset their corporate stink that they brought to the GitHub and npm brands.
Also Kubernetes is easy when using the right spin-up tool, like Portainer or Rancher. It's also the only way to run Docker images across multiple machines today. For a while I ran Swarm but that's dead, then I just used docker compose but started running into compatibility issues. With kube you also get to leverage all the existing Helm charts, which saves a lot of time.
It's still a bit overkill but a lot of fun, and I learned networking and infra things doing it on a homelab. I also use it to try new things.
There are also political reasons to do so. I suspect the internet will get less free and it's nice having your own private space.
The next stop down this fun slippery slope is kubernetes :)
That said, it also is worth mentioning NextDNS as an option. It basically is “pihole as a service” and is reasonably affordable. I ended up going that route to achieve similar goals as to the author and have been happy with it.
There's ControlD.com as well (by Windscribe), which is super neat and offers more than just DNS.
Disclosure: I co-develop a FOSS NextDNS alternative.
I've had to turn most of it off.
PiHole means every single website issue for any user on the WiFi ends up on my plate immediately. WiFi network segments mean Minecraft doesn't "just work" for the kids and their friends. IoT networks mean AirPlay doesn't find devices, SSDP doesn't work, etc.
Having a functional home network turns out to be more important than having a secure home network.
1) UniFi devices just recently had a massive security vulnerability. It's also not a good idea to let third-party servers access your home network directly via your router.
2) Instead of Cloudflare, use something like Unbound. I have mine set up to fetch directly from the root servers. Why send your DNS queries upstream?
3) Caddy is the one decent thing suggested, however I am skeptical of the benefits of having HTTPS internally in your LAN. If an attacker is in your LAN, then it is already game over.
I’d suggest blocking port 53 at the firewall too. I was surprised how much stuff doesn’t go through the cloudflared tunnel. You think it’s all going through the Pihole but there is so much rogue stuff
Proceeds to send all his DNS queries to Cloudflare
I'd love suggestions for better solutions though, I'm sure there's something I haven't considered.
1. It denies my ISP the chance to look at my DNS requests.
2. It doesn't involve yet another third party which may or may not try to monetize my dns requests.
3. It mixes my dns related traffic with that of a datacenter, which doesn't provide a compelling source of data for advertisers IMO.
DoH throws a wrench in the works. Still considering my options on how I want to address it.
A use case would be to host internet facing applications at home and still be safely consumed locally.
Alternative solution is to firewall traffic from the DMZ to the rest of the LAN but allow LAN to DMZ.
The better way is to separate networks by VRF, ensuring packets take the right path and go through your DNAT rules without possible shortcuts (firewall separation works too, but then you're at the mercy of not fat-fingering any rules).
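For anyone unfamiliar with VRFs on Linux, the separation described above looks roughly like this iproute2 sketch (interface name and gateway are placeholders):

```
# Give the DMZ-facing NIC its own routing table; DMZ traffic can then
# only reach the LAN via routes and DNAT rules you add explicitly.
ip link add vrf-dmz type vrf table 100
ip link set vrf-dmz up
ip link set eth1 master vrf-dmz
ip route add default via 203.0.113.1 vrf vrf-dmz
```

Because the DMZ's routing table simply has no route to the LAN unless you add one, there is no rule ordering to fat-finger.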
- hardened Linux (Kicksecure/Whonix)
- openrc (no systemd network vulnerabilities)
- NAT + IPv6
- stealth/hidden master DNS server.
- Shorewall firewall
- Wireless, 4 SSIDs
- Wireguard to remote VPS carrying DNS queries
DMZ covers all IoT, Alexa, SmartTV, game console, cable TV; both by wireless and CAT5e.
Household network covers laptops, desktops, cloud servers (Proxmox), a Kerberos ticket server, a file server (NFSv4), media DVR, and an ownCloud server.
And my white lab is totally airgapped and occasionally waterfalled firewall as needed.
I have a subnet that offers encrypted MAC.
Now doing transparent interception squid.
And I bookmarked this comment so I can reply to you later
These days I've taken it a step further (back). PiHole got replaced by AdGuard Home, which in turn got replaced by NextDNS. PiHole/AdGuard Home only work on your own network, while NextDNS works everywhere. The price they're asking ($20/year) is less than the cost of the Raspberry Pi and the power required to run it, and it requires no maintenance on my end.
Everything that previously ran in Docker at home has been replaced with cloud-equivalent services, encrypted at the source by either rclone with encryption or Cryptomator. Again, the cost of the cloud services is cheaper (about half) than what I paid in power consumption just to run the server. Then add hardware costs on top of that.
As for my home network, since everything is now "out there", I've retired about half my network infrastructure (yay, less stuff that can break), and the internet connection is now the lowest common denominator. I used some of the money I saved on power consumption to upgrade my internet connection to 500/500 instead.
All that remains is a single Mac Mini that is powered 24/7 for the purpose of pulling our iCloud data back home so that I can make backups. I would really love it if Time Machine or 3rd-party backup tools could back up that cloud data directly from the client, downloading it only once, and otherwise leave it in the cloud.
I still have to move over my torrent client though. Perhaps I'll write some Ansible playbooks to make this easier to manage.
But I'd love to be able to separate IoT devices across multiple networks, so I could have one for smart switches, one for TV/media players, etc.
Of course, a lot of consumer-grade hardware won't do 802.1x so you end up stuck with needing a bunch of SSIDs (and wasting air-time on beacons).
Yes, that's why I want more SSID's. My internal network does use 802.1x, but as you said, few devices outside of laptops support it.
Not many kernels can separate traffic within a single SSID (even if you use VLANs, tcpdump on a malicious IoT device can still view the traffic).
Better to have four to seven SSIDs, each mapped to a subnet. Make one subnet/SSID for encrypted MAC with laptops.
Cable TV, smart TVs, power-line LAN adapters, smart lightbulbs, and webcams should go on separate SSIDs/subnets.
You will not have to trust the potentially outdated WiFi firmware, which is quite likely vulnerable to all the latest holes in WiFi security.
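On Linux-based APs, this per-SSID-per-subnet layout is a multi-BSS hostapd config. A trimmed, hypothetical sketch with two SSIDs bridged into separate VLAN bridges (names and passphrases are placeholders):

```
interface=wlan0
driver=nl80211
hw_mode=g
channel=6
ssid=home-trusted
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme-trusted
bridge=br-lan

# second BSS on the same radio, for IoT devices
bss=wlan0_1
ssid=home-iot
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme-iot
bridge=br-iot
```

Each `bss=` section gets its own virtual interface, so the subnet separation is then done in the bridges and firewall, not in the radio.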
It's worth noting though that more than 3 SSIDs brings with it radio overhead which can degrade performance of your wireless network.
I've also thought about just setting up multiple nodes, one set for secure devices, one for insecure devices.
Cisco "only" supports 16 SSIDs, which is probably more than I'd even need.
I used similar techniques, except I preferred not to use any external DNS server¹, so I hosted a recursive DNS server (bind9) on a small fanless server (apu2d4).
The DNS server can also block ads, similar to how pihole does it.
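The ad-blocking mechanism is simple enough to show in a few lines. Below is a toy Python illustration of hosts-format blocklist matching (real deployments do this inside the resolver itself, e.g. with RPZ zones in bind9; the function names here are made up for the example):

```python
def parse_hosts_blocklist(text: str) -> set[str]:
    """Extract blocked hostnames from hosts-file formatted lines."""
    blocked = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        parts = line.split()
        # typical blocklist format: "0.0.0.0 ads.example.com"
        if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
            blocked.update(parts[1:])
    return blocked

def is_blocked(name: str, blocked: set[str]) -> bool:
    """A name is blocked if it or any parent domain is listed."""
    labels = name.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in blocked for i in range(len(labels)))
```

A blocked name then gets answered with NXDOMAIN or 0.0.0.0 instead of being resolved, which is all PiHole-style blocking amounts to.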
1. Why is everyone trusting cloudflare to centralize all their DNS queries? Even Firefox is migrating all their browsers to use them.
It would be good if there were more players in this space. A nonprofit might help here too, like Mozilla. Maybe they can run a public DNS server.
How do you create a rule to isolate a printer? It still needs to receive requests from the network, but I don't want it probing other devices...?
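One hedged way to do it with nftables on the router (the printer address is an assumption, and this only works if the printer sits on its own VLAN/subnet so its traffic actually crosses the router): allow the LAN to open connections to the printer and let replies flow back over those connections, but drop anything the printer initiates itself.

```
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy accept;
    ct state established,related accept   # replies to LAN-initiated flows
    ip daddr 192.168.1.40 accept          # LAN -> printer: allowed
    ip saddr 192.168.1.40 drop            # printer-initiated: dropped
  }
}
```

The connection-tracking rule is what lets the printer answer print jobs and status queries without being able to probe anything on its own.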
More, spent money on a bunch of gear and installed PiHole!
I did the same though LOL
However, in the spirit of his comments about Outsourcing to The Experts, I switched from PiHole to AdGuard Home to NextDNS.
Very happy with NDNS, just wish there was a kill switch!
I'm up around 50 devices on my home network, so the standard home router from the ISP, or even the fancy 'gaming routers' that look like alien spiders don't really cut it!
I get terrible internet where I am in Australia, so I have to be creative:
• 3x Unifi UAPs
• 1x USG Pro
• 1x USW 16 POE
• 2x 4G modems
Connected to the USG Pro and load balanced!
You can't LB with a UDM!
• 1x M1 Mac mini with 12TB RAID as a TMS and Content Cache!
Serves around ~500GB a month to my network!
udm-utilities basically allows you to run all kinds of stuff on it through podman.
Went from trying to teach it to mimic my ISPs router to get internet working, to banging my head against IPv6 prefix delegation, to now running adguard, homebridge and a bunch of other things across different VLANs on it.
All in all, I am happy with the result, and happier that I got dragged out of my programming bubble to learn proper networking with IPv6.
1. Lock down DNS at all costs (and he means all costs), going so far as to use PiHole and then redirecting DoH/DNS to Cloudflare to scan for more malicious sites.
2. Create virtual subnets at home for untrusted IoT devices
I've started hardwiring devices that I have some trust in (eg laptops, desktops, etc) on to their own VLAN. I really want to setup a second wifi network, maybe a third for Rokus, IoT and other things, and force them through a proxy. Or they just don't get access to the Internet at all.
It seems silly that these companies think they can use my bandwidth, whether or not it's metered, to do as they wish.
Of course it might be that he already had a personal domain available from some other project, that he could use.
But actually buying a domain just so your Raspberry Pi can use HTTPS seems like the ultimate over-engineering.
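For context, what a real domain buys you here: with the ACME DNS challenge you can get publicly trusted certificates for hosts that are never reachable from the internet. A hypothetical Caddyfile sketch (domain, token variable, and backend address are placeholders; the `dns cloudflare` directive requires Caddy built with the caddy-dns/cloudflare plugin):

```
pihole.home.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 192.168.1.2:80
}
```

The no-domain alternative is Caddy's built-in CA (`tls internal`), at the cost of installing its root certificate on every client.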
I was running a Pi4 before but felt it was WAY overrated for this purpose; you can buy a used micro computer for $150 which comes with an SSD and 8GB of RAM and is multiple times faster than a Pi4.
Is there a system equivalent to Ansible that uses a decent config file format?
That such efforts are necessary (and, as the commentary shows, an active area of interest, hacking, and development by so many very smart people) is an absolute condemnation of the dystopian state of the surveillance capitalism that so many of us here have helped build.
It's not malicious actors that are the sole or even primary vector being blocked; it's the now-systemic misbehavior of so many of our own products and services.
E.g., set aside celebrating and being inspired by the intellectual and technical details of a smart, industrious individual's attempt to insulate themselves, to a reasonable degree, from the abuses of surveillance capital, and consider the forest-level view: while few were paying much attention, our society in general, and the industry most HN readers contribute to in particular (in some cases, very, very directly), has become profoundly, all but irredeemably broken, wrong, amoral, and bad; pick a word.
Articles of this type, and fandom and advocacy of this or that open-source solution to a tiny piece of the surveillance-capitalism nightmare, are in a small way part of the problem; they normalize the experience well sketched here: https://den.dev/blog/user-hostile-software/
But that article only approaches some sides of the problem.
The political and philosophical ones, which are implicit but not blown out, are deep, real, and corrosive.
The subtext, which is not so subterranean, is that readers here who are cashing a paycheck by accelerating total information awareness, dark patterns, and the abuse of users, in exchange for calculatedly seductive or addictive services whose fine print everyone knows no one ever reads, are culpable; the damage is real, and the costs are real. And they should do better.