How I re-over-engineered my home network for privacy and security (balter.com)
345 points by chmaynard 48 days ago | 198 comments



I've gotten into networking and very much enjoy seeing how others approach the problem space at this scale! It's gotten more important not just with the explosion of online threats, smart home devices, etc., but also because there is more and more value to be extracted from a high-bandwidth WAN link via your own private services. That said I definitely strongly recommend against using anything UniFi based for routing/gateway tasks at this point (author mentions their AIO custom hardware UDM thing). They have significant issues now in WiFi and even switching, but at least remain kind of workable there. But the company has been an absolutely depressing development dumpster fire for years now and the trajectory still appears aimed straight down. And for whatever reason routing/gateway in particular is an area that only ever briefly got much oxygen there. So for a greenfield situation I'd be wary.

There are lots of enormously better solutions for that. Personally I've dumped all my UniFi gateways (including some of the "next generation" ones I had for testing) and moved to OPNsense running on decent SuperMicro 1U edge systems (excellent value, quiet, regularly available dirt cheap on eBay and the like). But there are many great options there including VyOS, OpenWRT, or even just running straight OpenBSD if that's what you like. Router/gateway is one area where I think devoting real metal, preferably with higher-reliability hardware and extra remote management options (like IPMI), is worth it. It's an important linchpin for a typical network, and while virtualizing it is possible and definitely desirable at large scale, it's not where I want to add more moving parts without a team behind it. Everything else can go tits up while I mess with it or there are odd interactions, and as long as the gateway/routing is still solid, odds are I can recover without much disruption or having to physically be there.


> That said I definitely strongly recommend against using anything UniFi based for routing/gateway tasks at this point (author mentions their AIO custom hardware UDM thing). They have significant issues now in WiFi and even switching, but at least remain kind of workable there. But the company has been an absolutely depressing development dumpster fire for years now and the trajectory still appears aimed straight down.

This is depressing to read because I worked at Ubiquiti before things got bad. The company culture took a huge dive when the CEO began removing executives and managers that were getting things done. The company was always weird, but when I started they were good about finding strong engineers who cared about networking and letting us build good products.

Everything changed when they tried to reorganize the company around the UDM and move development to China. The UDM reorg was so awful that employees and executives were leaving in droves. For months we couldn't even tell who was supposed to be in charge of UDM or UniFi because so many people left at once. I think we had 4 different UniFi lead developers get hired and then quit a few months later after the early employees all left.

Sad situation. Ubiquiti was a cool company that paid well and let engineers do good work when I joined. I still wonder why it had to fall apart so fast. The company was never perfect but it was sad to watch the CEO drive away all of the good parts of the company.


Thank you for sharing, and agreed it's a really sad situation. What an absolute total waste on a variety of fronts, from the engineering to the community. I'm only on the outside looking in as a small (<$100k) customer, though I've seen other ex-UBNT engineers comment over the last few years too. From the operator side it's been really painful watching so much promise and such a great community wither, not merely in a slow fashion but with actively hostile measures like nuking the forums and what minimal bug/feature tracking we had in favor of the horrible new-web thing they've got now. So much lost just there. And literally just nothing to be done. Last I checked Pera owned a supermajority of the stock and had a tiny compliant board, so there isn't even any hope of shareholder intervention. Pressure from the financial side is a really lagging indicator in networking because customer movement is an irreversible, tipping-point kind of function.

What a shame. Thanks though for your own small part in doing something pretty special. If nothing else Edge/airMax/airFiber/UniFi opened a lot of our eyes to what SMB networking could be vs abominations like old Cisco.


> Last I checked Pera owned a super majority of the stock

I think everyone should be worried when a company founder/CEO of a tech firm starts doing silly stuff like going out and buying a professional basketball team. That's never a good sign.


Is there a company that these people are going to? What Ubiquiti/Unifi alternatives do you like?


I’ve heard good things about MikroTik. Not as consumer-friendly as Ubiquiti UniFi, but solid engineering from a company that seems to really understand networking at the 1/2.5/10/25/40 Gig Ethernet scale for anything from SOHO to small/medium-size businesses.

Don’t have any personal experience with them, however.


MikroTik is solid and I used one of their routers for a few years. However, there's no support for mDNS across VLANs. You can obviously roll your own solution, but who has time for that? I prefer to have my IoT devices on a separate VLAN. Without support from the router, you're SoL if you want your devices to be discoverable by HomeKit and other services.
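
For reference, the usual roll-your-own approach is an mDNS reflector (e.g. Avahi) on a box with an interface in each VLAN. A minimal sketch, with made-up interface names; you also need to allow UDP 5353 between the VLANs:

    # /etc/avahi/avahi-daemon.conf
    [server]
    allow-interfaces=eth0.10,eth0.20

    [reflector]
    enable-reflector=yes

    # then:
    systemctl restart avahi-daemon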

At the moment I'm using Unifi for my router, but I'd consider switching to something as secure as MikroTik but with support for mDNS across VLANs.


Ruckus for Wifi. More expensive, but way better.


TP-Link Omada



What did you move to for wifi? Gateway/firewall/edge is easy to replace, but I haven't seen much that could compete with UBNT at their price point.


I replaced an old Unifi AP with a TP-Link EAP245 v3 and it's been running fine for 3 months.

They have equally complicated-to-set-up provisioning software, like Ubiquiti's (overkill when you have just one device); however, you can actually configure everything on the device directly from the web interface, and no cloud is required (I spent an hour trying to set it up before I realised this...).

It's classed as AC1750, so the same as a UAP-AC-PRO, which is around €150 here. I paid €95, so quite a bit cheaper. They also have some WiFi 6 (AX) devices, but I don't have any compatible clients so didn't bother with those.

https://www.tp-link.com/us/business-networking/omada-sdn-acc...


>What did you move to for wifi?

I haven't moved off UniFi for WiFi or L2 switching yet and am still evaluating that, poking at different solutions from a variety of places across price ranges (from MikroTik to Peplink or Ruckus). Part of the true pain of all this is that there isn't any clear, obvious successor to all the things Ubiquiti aimed for, or else a lot more of us would probably have bailed completely already instead of drawing things out. It's not merely the price point, which I'd actually be fine paying somewhat more for given how important it is: zero cloud dependency, the fairly pleasant (though getting worse) single pane of glass for managing things, and decent physical design that doesn't look like giant mutant space bugs all matter too.

However, even in WiFi it has started to get flakier, and while the older WiFi 5 devices, which benefited from what was once a lot of great engineering talent, have held up, my experience with the newer WiFi 6 stuff has been mediocre. And the controller has continued to go downhill as well, actively removing useful functionality and information density as they keep messing with the UI every single version in the most classic bikeshedding fashion :(.

WiFi 6E/WiFi 7 and multigig will, I think, be the major decision point, so another 1-3 years. 6 GHz spectrum will be genuinely very useful in a variety of settings, so we're going to be looking at a point where it'll be desirable to replace a lot of hardware anyway. Once there's that kind of commitment, representing not wanting to change again for another 5-7 years, well, might as well reevaluate. And I hate it but I just don't see Ubiquiti getting better.

I will say that, with a touch of irony, Ubiquiti's rot spiral has actually reinforced the value of Ubiquiti's approach IMO. Because such things can happen to any company, and at least with UniFi/UNMS the self-hosted control option has worked out as exactly the last-resort backstop we all always said it could be. There isn't any required cloud tie. There aren't any forced firmware updates. The system can be quite well isolated; with routing/gateway gone, the rest can go onto a dedicated management VLAN with zero ingress/egress. That leaves a lot more runway even without updates. I'm sorry it was needed but it makes me more determined than ever to avoid remote dependencies. So I'll thank Ubiquiti for that I guess :\.


Ruckus for me. I replaced a single AP too, and it's been way better.


> I definitely strongly recommend against using anything UniFi based

As a counterpoint, mine works beautifully. UDM Pro, multiple access points, access point roaming, 2.4 and 5 GHz networks, and all the goodness. Everything works flawlessly and it's been quite reliable.

They seem to really enjoy spending money moving my cheese on the web site admin interface, and a few unexpected features seem to have vanished, but overall... there's nothing as good or centrally integrated. Everything else is a collection of point solutions, which is more than many (including myself) have the time for...


" ... moved to OPNsense running on decent SuperMicro 1U edge systems ..."

I agree with what you have done here and I will echo the sentiment to run away screaming from Ubiquiti/UniFi.

It's easy to be confused by all of the professional ISP gear that Ubiquiti used to produce and think that the UniFi products (like the "Dream Machine") are professional networking equipment. Try blocking ubnt domains at your network edge and losing the ability to log into UniFi ... you'll see how "professional" it is right away.

I will also say that it is ironic, and sad, that you mention Supermicro as an alternative. Their recent moves make it quite clear that they are trying to move in the same direction ... which is to say, making wonderfully built products that fulfill useful (but boring) use-cases is never enough.

Supermicro circa 2010 was making people rich but not billionaire rich ... and that is a problem they are working hard to fix.


Not sure if you're still here or will see this, but:

>It's easy to be confused by all of the professional ISP gear that Ubiquiti used to produce and think that the UniFi products (like the "Dream Machine") are professional networking equipment. Try blocking ubnt domains at your network edge and losing the ability to log into UniFi ... you'll see how "professional" it is right away.

The 'dream machine'/UniFi OS stuff is real trash and maybe where the rot truly showed to have completely taken over, though there were very bad signs earlier, like when they trashed their self-hosted video solution and reversed earlier promises by restricting the follow-up to their proprietary hardware (CK G2). Worth clarifying though that it's still perfectly possible to have a fully self-hosted controller with local accounts, multiple sites and zero greater-internet dependencies (route management VLANs through WireGuard back to the L3 controller). The UDMs are in no way necessary. Their problems are more fundamental than that :).
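
For the curious, that's nothing exotic, just an ordinary WireGuard tunnel carrying the management subnet. Something of this shape on each remote site's gateway (addresses, keys and the endpoint are placeholders):

    # /etc/wireguard/mgmt0.conf on a site gateway
    [Interface]
    Address = 10.255.0.2/24
    PrivateKey = <site-private-key>

    [Peer]
    PublicKey = <controller-public-key>
    Endpoint = controller.example.net:51820
    AllowedIPs = 10.255.0.0/24    # management subnet only
    PersistentKeepalive = 25

    # bring it up with: wg-quick up mgmt0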

>I will also say that it is ironic, and sad, that you mention Supermicro as an alternative. Their recent moves make it quite clear that they are trying to move in the same direction ...

I'm genuinely curious what you mean by that? At your scale of course you have far more insight into businesses of that size, but I honestly haven't seen any signs of that on the SM side, nor is it quite clear to me how they would even go about it. They just make systems, don't they? Even their IPMI is based off the AST2k series BMC, and they've been extremely reasonable about what they offer there, in stark contrast to players like HPE. Their systems also aren't packed with proprietary junk that punishes you the instant you try to do anything else, again in stark contrast to HPE. They're just computers. Of course I'm only in at the lower end of their spectrum of offerings!

>making wonderfully built products that fulfill useful (but boring) use-cases is never enough. Supermicro circa 2010 was making people rich but not billionaire rich ...

One of the particularly frustrating/outright confusing things about Ubiquiti FWIW is that this isn't actually true. Ubiquiti did in fact turn Pera into a billionaire just by making wonderful products that fulfilled important use cases, and there were clear and loudly suggested natural revenue opportunities that they never even bothered with. Looking at all of their shitty moves and self-destruction, it generally doesn't have any sort of revenue tie! And indeed on the contrary, they've repeatedly skipped out on (even accepted) features that would flat-out encourage more hardware sales and revenue (like L2 replicants to L3 control). They've actively spent major money and development time on things that people hate yet are completely free. It's not like all those ever-worse new Controller versions are actually paid. It's not like they've added subscriptions and microtransactions everywhere. Even their cloud stuff, again, doesn't actually have any revenue story attached. They made a big deal about offering one of the most classic super-high-margin boosting things of all, "Premium Business Support Contracts", and then... just kind of abandoned it and let it die out, pissing off a ton of professionals and businesses in the process, even though it could clearly print money.

It's not a matter of there being some cold, money-grubbing business logic to their moves that, however evil, one can understand the point of. There simply isn't any logic at all, beyond maybe having outsourced so hard and created such a toxic environment that they literally just don't have the internal capability to execute on much of anything anymore. Outside of a few remaining decent people still plugging away a bit on bug fixes, most of the activity seems directionless: meetings and throwing stuff at the wall with no strategy or follow-up, and an ever-widening set of products that sometimes get dropped before they are even out of "early access".

So yeah. If anything the Ubiquiti of old would be doing fantastically better right now just because demand for networking and everything related has accelerated so much. It's just so stupid.


I'm about to do a big revamp of my home network and would love any specific gear recommendations to research - are these the supermicro setups you're talking about? https://www.supermicro.com/en/products/embedded/fanless-and-...


For router hardware look at the PC Engines APU2. Sadly they appear to be out of stock until 2022, but they are great low-power devices. I ran OpenWrt on mine in the past and am currently running Fedora. It can do about 1 Gbps and costs around $200 built.

For managed switches, look at the Aruba Instant On 1930 series. I've just ordered two of these so I don't have any first-hand experience, but the feedback online is generally positive. Do note, these switches can only be managed over HTTPS, but the interface seems clean. From my research the cheaper TP-Link and Netgear don't have an isolated management interface, meaning it can be accessed from all VLANs. This was a deal breaker for me. I also considered the HP OfficeConnect 1820 series switches, but they've been out for a while and I worried their EOL may be coming up shortly.

For access points, look at the Aruba Instant On AP22. The biggest downfall is that the access point uses a cloud controller, requiring an internet connection to manage the device. There is no local management. This is the exact same hardware as the Aruba AP-505, which runs for ~$400. Given that Aruba makes solid wireless hardware, the advanced features compared to other units in this price range, and the lower price point, I'm willing to give up local management control. After all, I don't modify my AP that often. My Ubiquiti access point has crashed multiple times; the most recent crash appeared to be a memory leak. Maybe I'm just unlucky, but this aligns with numerous complaints about firmware quality.


"From my research the cheaper Tplink and Netgear don't have an isolated management interface, meaning it can be accessed from all vlans."

For Netgear you want a T in the model, e.g. GS724T(P), which implies a smart switch (web managed). They do have command-line managed ones too, i.e. "managed switch".

For home use, decent L1 and L2 is enough - you don't want to do L3 switching in general, unless your house is at the top of the Mall. So, the GS724TPv2 and GS110TPv3 get you a 24- or 8-port PoE+ switch with L1 and L2 covered in a web interface with VLANs etc. The newer interface, with the PVID section that shows tags and ports and LAGs at a glance, is one of the best, regardless of price or status. For the money those switches are quite hard to beat.


"For router hardware look at the PC Engines APU2"

I use a lot of these in my personal networking setup but I am going to start moving to a Raspberry Pi CM4 built into this dual-ethernet board:

https://www.seeedstudio.com/Dual-GbE-Carrier-Board-with-4GB-...

It's a smaller footprint, more CPU horsepower, etc., and I like mounting little devices onto DIN rails more than I like trying to rackmount the 7" or 9" APUs ...


I've used one of those in a satellite location and liked it OK, but what I actually ended up getting generally were full 1U replacements for the USG-4/UXG from SuperMicro's 5018D series. I kept an eye on eBay and found a bunch of new 5018D-FN8Ts listed for ~$650 a pop. They've got dedicated IPMI (obviously this needs to be secured), and unlike HP or Dell they include virtual iKVM for free, and a full license is only $30. Pricing on the others can easily add +$200 for what should be included functionality, even if they've got a spiffier coat of paint, which is significant on a low-end server device.

The 5018Ds are not fanless, but once running they drop down to a low speed that I can't hear over anything else I've got (like a PoE switch, all of which except the tiniest models have fans). HP's ProLiants in contrast are jet engines all the time, even when they're pulling <50W. I'm not running a totally silent environment and preferred to make a rack closet well away from any living areas, but I don't want loud noise, and the SMs have been fine. Of course, one can just replace the fans with 40mm Noctuas or the like too if desired.

They've got Xeon processors that have a bit more oomph than the Atoms or Celerons, which also means ECC memory, since again I really want to be able to rely on gateways to a reasonable degree (this has already identified one bad memory module out of the dozen I had around). I install OPNsense on ZFS on a cheap, small, decent NVMe drive (a PNY CS1030 250GB is around $35), which still means fast boot and no concerns about all the logging or the like I could desire.

I have one single site that is doing much heavier 10G+ routing and usage, where I also wanted to mess with more intensive SDN and security. For that, last year I ended up picking the much beefier and much more expensive EPYC Embedded based 5019D-FTN4 and putting a Mellanox card in it. It's also extremely quiet and has been really impressive, but that's stupid overkill right now. Also, EPYC Embedded is currently still based off of Gen 1; there was no Zen 3 update due to not having a low enough TDP given the way the chiplets were upgraded vs the IO chip. I expect Zen 4 next year will see an upgraded Embedded platform that will essentially be a 3-generation leap forward, so at this point it's not the best time anyway.

There is no perfect solution IMO. Though probably nothing that'd throw off the typical HNer, OPNsense does have its warts, rough edges and missing bits (no WebAuthn, so no security keys for login, for example). It's based off of a FreeBSD variant (soon to be directly off FreeBSD) with all that comes with that, for better or worse. Like, OPNsense does have a user-space plugin option for WireGuard (along with ZeroTier and so on), but WG has not yet made it to the FreeBSD kernel, which in some situations could be an issue (countered by raw CPU in my case). But it's powerful, well maintained, overall fairly user-friendly, has pretty solid documentation and getting-started guides, and a nice community with a good mix of developers and some companies behind it. The company Deciso for example does offer a paid business edition and paid support options if desired. It does have DNS blacklist options a la PiHole, stats/telemetry/IDS/IPS via built-in and 3rd-party offerings like Sensei, etc. There are plugins for Let's Encrypt, FreeRADIUS and other handy functionality. Someone who is very familiar with Linux might find VyOS more worth looking at, but with my background I found OPNsense reasonably pleasant to get into.

The decision tree here also depends on how much network functionality you want to have in your gateway/routing system vs how much to stick on a separate server elsewhere (maybe virtualized or as part of a NAS). Gateways can be very minimal or can handle damn near everything on the network. There are straightforward tradeoffs there in terms of failure modes and complexity.


> enormously better solutions

Strongly disagree, even if you disregard price point. Unless you are limiting your criticism to the USG/UDM and routing. Which I think you are, but just want to be sure.

For the wifi and switching they are very very hard to beat. For normies and techies alike. I only dislike that the APs are underpowered so you need to carpet bomb your house if you have a large footprint. For dense living it's great though. Even for large area (blocked by walls) it's ok if you don't mind filling in the gaps with mesh. By the time you buy 1 AP and 1 mesh you are at $400 though.

The gateway products are absolutely abysmal, even before considering the buggy new hardware.


For anyone who wants to go a similar route I can highly recommend OpenWrt as the OS. Not only does it run on cheap routers, but also on big x86 machines and even in VMs and/or containers. For example, I run OpenWrt on my desktop in an LXC container to manage networking (WiFi, a bridging firewall for bridging VMs into the network, the general firewall, etc.) through the nice web UI. It gets direct access to the WiFi adapter, and the host gets access through a bridge connected to the container via a VETH interface.
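
The container side of that is only a few lines of LXC config. Roughly this shape (bridge and device names are examples, not my exact setup):

    # veth pair into a host bridge for the LAN side
    lxc.net.0.type = veth
    lxc.net.0.link = br-lan
    lxc.net.0.flags = up
    # hand the physical WiFi adapter straight to the container
    lxc.net.1.type = phys
    lxc.net.1.link = wlan0
    lxc.net.1.flags = up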

It comes with nice addon packages for stuff like WireGuard, all kinds of tunnels/VPNs, adblockers; it runs containers and a ton more. I even run it on a VPS as a container with exclusive access to the "physical" NIC. The parent OS isn't directly accessible at all. Makes firewalling a breeze. The only open ports are for the Tor relay and WireGuard, through which I connect to the web UI/SSH and do everything else.

Of course, my router also runs OpenWrt...


I always found it baffling how a community-maintained project like OpenWrt can not only keep routers updated for many years (unlike the manufacturer that gives you an update once in a blue moon for the first 2 years) but also make these cheap routers so useful and so stable you just forget the thing is there in the first place. I can't remember the last time I had to reboot a router running OpenWrt because it started behaving erratically.

You can do a lot with these commodity ARM CPUs, 64-128MB of ram and a few tens of megabytes of flash storage.


I literally have my Comcast router on a 24-hour timer and it shuts down for an hour each night from 3-4am. Communism-style brownouts of internet overnight.


How is the OpenWrt CLI? Say I want to deploy it with Ansible.

How does it stack up against VyOS (which just recently got VRF-lite support)?


> How is the OpenWrt CLI

You SSH into it, then do whatever you need to. There's nothing in the UI that can't be done via the CLI as far as I know. Some plugins might not be 100% manageable from the CLI, but at least the base UI (LuCI) is completely transparent to the CLI via uci.
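
A quick illustration (the address and values are examples):

    ssh root@192.168.1.1
    uci show network.lan                       # inspect current config
    uci set network.lan.ipaddr='192.168.1.254' # stage a change
    uci commit network                         # write it to /etc/config/
    /etc/init.d/network reload                 # apply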


Do you have any recommended hardware for OpenWRT? I've been wanting to put in a low powered router/firewall on my home network that isn't controlled by a big vendor.

I haven't done a ton of research in this area, but I'd certainly like to use OpenWRT or OPNSense on my home routers/firewalls.

Side note: I've been trying to figure out a decent way to get rid of Android on my Galaxy S9+, but it appears to be locked.

At the end of the day, I just want to be in control of my bandwidth, my data, and know what's going on. Big companies are making this very complicated with all the tracking.

I recently re-enabled my pi-hole on a virtual machine, and it never ceases to amaze me what is talking to the internet without my permission. After digging into DoH a bit, I'm about to the point where I think I need to put in an outbound proxy, deny all outbound access except via the proxy, and iterate again and again.

I just don't want to end up with $200 a month in power bills to support a home network just to save bandwidth and know what is traversing the net.


For techie home use of OpenWrt, I'm currently using Netgear R7800.

The R7800 is well supported by OpenWrt, has the hardware features I need, some room to grow, and it's affordable used. I paid about $90 for my first one, and about $70 for my backup unit.

For OpenWrt for smaller purposes, for which an R7800 is both overkill and physically bulky, I understand there are a bunch of neat options now. I just keep some old WNDR3700 and WNDR3800 units on hand, which used to be my main routers, and actually still could be. (Sometimes they might be a simple WiFi bridge or print server. Other times, they might be an experimental LAN that needs different properties than I have set up for my main router, and with which I don't want to complicate my main router setup.)


Thanks. I'm going to look into this further.


The WRT3200ACM is what I use. It's one of the most powerful consumer devices with WiFi supported by OpenWrt. It's powerful enough for a router with a little firewalling, VPN (WireGuard only if you want speed), DoH and some ad filtering, but that's pushing its limits in my experience. If you want more power https://openbsdrouterguide.net/ is your friend.

That being said, there is no hardcore prosumer hardware out there for this purpose. The moment you go beyond home-user router hardware like the WRT3200ACM you are into either Cisco business stuff or custom server builds. Potentially a Raspberry Pi 4 with a PCIe ethernet card is the closest thing to prosumer hardware out there, and there's a lot of hacking involved to get that running to the same degree as an OpenWrt router.


This is interesting. I'm going to research it further. I really appreciate the feedback. I'm really starting to hate DoH - and I may just not put any IoT thing on my network that uses it. Maybe that's the way to go.

But I doubt most consumers really care. It's complicated.


I don't know if they specifically fit your bill, but the Turris devices are worth checking out [1]. They come out of the box with TurrisOS, which is an OpenWRT fork with some extra features (e.g. automatic updates, config snapshots) and some changes (e.g. knot resolver for dns). Turris are a bit opinionated about using DNSSEC, and I think historically it was a bit tricky to configure a custom DNS resolver, but it looks like that's now possible through their new UI [2]. By the way, they offer 3 UIs: Foris, reForis and OpenWRT's LuCI, and of course ssh is also available.

If you don't like the fork, at least with the Turris Omnia it looks like you can put on vanilla OpenWRT [3], but as always check the OpenWRT table of hardware for details before buying.

I think the PSU of the Turris Omnia is rated at 40W max, but I don't know what sort of real-world power draw you'd get with your specific use-case. I guess it depends on whether you use WiFi, the SFP port etc.

[1] https://www.turris.com/en

[2] https://docs.turris.cz/basics/reforis/dns/reforis-dns/

[3] https://openwrt.org/toh/turris/turris_omnia


OpenWRT doesn't pack Python out of the box, so you need to install it, and have enough space for it, if you want to use Ansible. It uses a custom stack for configuration (like VyOS), so builtin Ansible tasks won't always be so helpful. Configuration is not stored in a single place but in several files, and it's easy to lock yourself out while testing changes: there's no commit-timeout, no committing of changes, and no rollback. It's just editing random files and restarting services.

I think there's a special configuration command that might fix some of the above issues, but I've been using the web interface (which actually does support committing and, to some extent, validation).
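
One footnote on the Python point: you can bootstrap it from Ansible itself via the raw module, which needs no Python on the target. A sketch; verify the package name for your OpenWrt release:

    - hosts: routers
      gather_facts: no
      tasks:
        - name: install python so regular modules work
          raw: opkg update && opkg install python3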


You know, I used to do this and give the same advice as well but after a lot of time spent on it, I am not anymore.

The way OpenWRT handles routing and firewall rules is particular and they apply their own terminology for some things. They have their own distro-specific packages for things like DHCP (odhcp(c)d) and firewall (fw3).

For very simple networks, it's very smooth to get to where you want. Add on dual-stack v4/v6, vlans, multiple firewall zones, routing policies etc and things start becoming very unpredictable.

Oh, and that adblock package? Turns out a single invalid line in a blocklist will completely break DNS (at least on the version I was running from last year).

Not to mention that (AFAIK) there's no good way to keep up to date with security patches and bugfixes while keeping the system stable.

After all the countless hours I poured into OpenWRT configuration, I finally realized that it's so much less pain and confusion with vanilla Debian with systemd-networkd (which BTW natively supports setting up Wireguard interfaces now) and firewalld+nftables, everything configured via ansible playbooks.
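
For anyone curious, a WireGuard interface under systemd-networkd is just two small files. A minimal sketch with placeholder keys and addresses:

    # /etc/systemd/network/wg0.netdev
    [NetDev]
    Name=wg0
    Kind=wireguard

    [WireGuard]
    PrivateKeyFile=/etc/systemd/network/wg0.key
    ListenPort=51820

    [WireGuardPeer]
    PublicKey=<peer-public-key>
    AllowedIPs=10.0.0.0/24
    Endpoint=vpn.example.net:51820

    # /etc/systemd/network/wg0.network
    [Match]
    Name=wg0

    [Network]
    Address=10.0.0.2/24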

For someone diving into this today, it's a lot easier and more future-proof with nftables than iptables - and OpenWRT will be married to iptables for the foreseeable future.

It's great that it works for you, but if you, like I did, have some imposter syndrome over not perfectly understanding Linux networking, and are happy that OpenWRT takes care of those confusing iptables rules and routing policies and what-not - you may just discover that learning how it actually works takes less work than abusing OpenWRT into doing what you want.

Sure, you have to give up the WebUI and some of the custom add-ons.

I am sure BSD or Rocky Linux are fine choices as well; Debian just happens to be what I mostly use for servers otherwise.

I don't want to hate too much on OpenWRT as it's great for novices with trivial needs and there are many devices where it or dd-wrt are the only readily available options. But if you run Linux anyway and have an x86/amd64/arm device you're going to use as a main router, I'd recommend choosing a "normal" distro and setting things up from scratch.


For home I can recommend OpenWrt running on a BT Home Hub 5A.

You can buy them in the UK on eBay for 15-20 pounds with OpenWrt already installed (you can do it yourself, but it includes a bit of soldering). I have two in case the main one fails. It talks to most if not all ISPs.

I love OpenWrt now; it does take a bit of getting used to if you haven't used it before.

I mainly use it to lock the kids' WiFi down between certain hours, whilst keeping another WiFi SSID open.

For security, all my NASs are wired and locked down to key wired computers. I keep meaning to create a Nextcloud gateway on Docker.


I thought about which OS for some of the same things and I realized that I would rather go with a lab version of a full enterprise firewall.

A Palo Alto VM gets you pretty much most of the sweet PA features without the cost, and it's a better approach than outdated strategies like VLANs as access control or zone firewalling: it permits allow/deny by protocol and overall better privilege tiering by network area.


I’m curious about PA firewalls. The product descriptions claim “Machine Learning” based routing/firewalls. What does that even mean? I’m a bit skeptical about AI being used in a firewall. Can someone help me understand why I should consider this instead of running pfsense on a Netgate appliance?


> A Palo Alto VM ... without the cost ...

Does Palo Alto have some kind of no-cost offering in their VM line?


> A Palo Alto VM ... without the cost ...

This isn't free, but $50-200 is a lot less than $2-4k.


Yikes! I didn't realize PA had anything available for less than a 4 digit price point. I'll check it out. Thanks!


I have dd-wrt on a TP-Link router and it works great, and has been for about 3 years now. No issues. I just have to check the site about once a month to see if there's an update. I wish that were automated, or that there were a notice in the web GUI when a new one is available. Very configurable and stable.


Really glad to see this write-up. I love my current PiHole setup, but replicating it for family/friends (especially with my blocking so many sites they might want) hasn't seemed doable. I would be having to help them constantly with blocked sites, etc.

My setup could be even better if either Fios or Orbi provided a half-decent router, but just using PiHole as my DNS server has been awesome. When my Pi (SD card) crashed and I had to revert to my old router setup, it was shocking how slow everything was. I'd become so used to pages loading in a blink, waiting a second or two while ads loaded seemed to take forever.

If you have any unused SFF computer gathering dust, give PiHole a try (it runs on any of the major Linux distros). Initial setup takes under an hour. You can customize the hell out of it with blocklists; I'm blocking so much my wife and kids can't even install new iPhone apps. Amazon devices, Rokus, Google, nothing gets to call home unless I allow it. I have it forcing all Google/Bing searches to use their clean filter; worrying my kids might google "porn" is a thing of the past. There are easy-to-find tutorials out there, just be sure to filter Google Images as well.
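
For anyone wondering, the forced clean filter is just pinning the search domains to the vendors' SafeSearch addresses in DNS. With Pi-hole's underlying dnsmasq that can be a drop-in file like this; double-check the current IPs of forcesafesearch.google.com and strict.bing.com before relying on it:

    # /etc/dnsmasq.d/05-safesearch.conf
    address=/www.google.com/216.239.38.120    # forcesafesearch.google.com
    address=/www.bing.com/204.79.197.220      # strict.bing.com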


FWIW, a plug here for the FreshTomato router firmware. I just retired my PiHole because FT does the same job seemingly just as well.

Tomato has long been my chosen router firmware - much easier to use than OpenWrt, but not quite as feature-laden, and it runs on a more select set of hardware (mostly ASUS routers). But it has recently gotten a development boost from a new maintainer, and the ad-blocking seems just as good as PiHole's.

I suppose I can unplug my PiHole now that there’s no traffic going to it...


"Really glad to see this write-up. I love my current PiHole setup, but replicating it for family/friends (especially with my blocking so many sites they might want) hasn't seemed doable."

A few things ...

First, you can make pihole-like DNS ad-filtering available to everyone you know by using nextdns.io as your DNS and (basically) moving your pihole into the cloud. It's a tremendous product and I wish I had thought of it.

Second, aren't all of these things (pihole / nextdns) already obsolete? Browsers (like Firefox) are enabling DoH by default, and devices in your home as well as apps on your devices are going to migrate to DoH as well.

Unless there is a solution I am missing, I fear that we had a brief golden age where properly configured ad-blocking via DNS was a simple and useful solution, but now that's falling apart ...


> Browsers (like Firefox) are enabling DoH by default

If you are using a filtered DNS, there is a canary domain (use-application-dns.net) that you add to your blocklist to tell Firefox not to activate DoH (unless the user explicitly enabled it). It's already included in Pi-Hole, and some hosts lists include it (despite Firefox prioritising the hosts list before DoH).

Plain-text DNS is redirectable (technically a hijack, but whatever).

Ironically, I think IoT devices will be the ones with hard-to-shut-off DoH/DoT. Even worse, they have the incentive to develop a proprietary protocol for ads, so the next step up would be IP blocklists. Or, I dunno, they'll just hold your device hostage if you don't allow internet connectivity.
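
If your resolver is unbound rather than Pi-Hole, by the way, the canary domain is a one-liner. A sketch:

    # answer NXDOMAIN for Firefox's DoH canary domain
    server:
        local-zone: "use-application-dns.net" always_nxdomain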


That sounds overbearing and your mobile data bills will skyrocket as nobody wants to use your annoying home setup.


I’ve had more than one person ask how to replicate mine after the standard internet noise was removed while on my wifi.

It’s also fairly easy to keep a VPN to home running, to avoid nosy corporate WiFi and enjoy my own vicious filters.


Did you consider using NextDNS?


Does that block Samsung's ads on their smart TVs?


The best way to block samsung is to not spend any money with them. The next best way is to not connect your (non-samsung) smart TV to a network.


Yes. I use Nextdns (which is a PiHole in the cloud) with a recent Samsung TV, and it can block ads while all smart TV services work just fine.


I’ve had good success against Google and Samsung (and maybe others) by setting a firewall rule whereby port 53 traffic can’t leave if it doesn’t come from my Pihole. I send it back to the Pihole instead.
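
The rule itself is a simple NAT redirect on the router. Something like this with iptables, where 192.168.1.53 stands in for the Pihole and br-lan for the LAN interface:

    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.53 \
      -p udp --dport 53 -j DNAT --to-destination 192.168.1.53
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.53 \
      -p tcp --dport 53 -j DNAT --to-destination 192.168.1.53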

There was a surprising amount of traffic that bypassed local DNS.

It’s a strange feeling when you are actively fighting your own devices.


> It’s a strange feeling when you are actively fighting your own devices.

That's because they're not your devices. As soon as certificate pinning for DNS-over-HTTPS becomes commonplace in consumer electronics, filtering traffic by way of MITMing name resolution will be completely game over. You are an evil nation-state actor trying to subvert the privacy of your people, per the DNS-over-HTTPS threat model.

I assume manufacturers are just going to stick 5G chipsets into things to get around user control anyway.


> I assume manufacturers are just going to stick 5G chipsets into things to get around user control anyway.

I seriously hate this future.


At which point, why do you buy the devices?


Before long, the only choice will be non-backdoored devices at quad-digit prices. I, for one, do not appreciate going back to $5000 workstations just because I want to get some stuff done.


I don't.

I wish a lot of consumers wouldn't but they don't know any better (and probably don't care).


I think they do care, otherwise Apple wouldn't make privacy such a big selling point.


I noticed an example of this when I set up split-horizon DNS for my local Home Assistant server.

Apparently if Android gets an IPv6 address it expects an IPv6 DNS server as well, or it will fall back to Google's own. I had to configure the PiHole with a ULA address and make the router serve that.
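
For reference, since Android only learns IPv6 DNS servers from RDNSS options in router advertisements (it doesn't do DHCPv6), the radvd side looks roughly like this; the interface and ULA are examples:

    # /etc/radvd.conf
    interface br-lan
    {
        AdvSendAdvert on;
        RDNSS fd12:3456:789a::53 { };
    };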


Android's IPv6 support is the most actively user hostile thing I've ever seen. It's like if you took Apple's typical "our way or the highway" attitude and 10x'd it.

For example, no DHCPv6 support: https://issuetracker.google.com/issues/36949085?pli=1 (note the date)

They're so religious about it they wrote an entire RFC and made it a BCP so they could justify their silliness: https://datatracker.ietf.org/doc/html/rfc7934


You might want to block port 443 to 8.8.8.8 & co as well to be on the safe side.
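
e.g. on an iptables router, something along these lines (extend the address list to whichever public resolvers you care about):

    iptables -A FORWARD -p tcp --dport 443 -d 8.8.8.8 -j REJECT
    iptables -A FORWARD -p tcp --dport 443 -d 8.8.4.4 -j REJECT
    iptables -A FORWARD -p tcp --dport 443 -d 1.1.1.1 -j REJECT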


Yes, DNS over HTTPS was designed specifically for this scenario. I wouldn't be surprised if trying to control DNS starts effectively bricking devices.


It was designed to get around DNS shenanigans by ISPs, etc.


DHCP DNS is just a suggestion.


I never thought I would use Docker for 'production' on my home network, but I recently completed a transition from three separate x64 servers to a single Raspberry Pi 4, [mostly] provisioned using Docker. Energy-wise I went from maybe 90W to 2.9W. And it's all driven by a single old-skool shell script that deploys and sets up everything on a freshly installed Raspbian OS.

In addition to the services mentioned in the article, one that I recently added to my home network is a Stratum-1 NTP server that gets its time via a Pi GPIO GPS module. Whether or not this has any real positive privacy or security implications, I do like the idea that I have significantly reduced outbound traffic to/from port 123 from the devices on the home network. Interestingly, most devices are happy with the NTP server provided via DHCP, but notably Apple and Sonos products always want to go to time.apple.com and Sonos' alternative (and I haven't yet had the heart to set up split DNS to try and redirect them to the local NTP anyway...). edit: typos
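
(For what it's worth, advertising the local server is one line if your DHCP server happens to be dnsmasq; the address is an example:)

    # hand out the local stratum-1 as the NTP server
    dhcp-option=option:ntp-server,192.168.1.123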


All the services you describe can simply be installed on the Pi without Docker with a single apt/yum/pkg command. No need for Docker there.


True. But I do like the fact that this way I have an [almost] stock Raspbian OS, and all the crud that gets installed by pihole, minidlnad, cloudflared, shinobi (especially by this!!) and so on is one 'docker image rm' away from removal.

And then there is of course the pure hack value of playing with a setup I don't otherwise get to (or have to) deal with and manage.


Containing crud for easy removal was one reason why I chose Docker for my infrastructure too. I'm fearless about trying out new containers, because the worst that can happen still leaves everything outside of that one application intact.


Once you know it, Docker is so much easier operationally.

The following things are easy:

- Tweaking a container version in the composefile to upgrade or downgrade

- Entirely swapping out the underlying Linux distro without touching a line of code in existing composefiles

- Isolating all incidental data generated by the application from the user-generated data (for backup purposes)

- Infrastructure as code (so you can easily migrate between servers, and version your setup)

- Quick iteration on service set-up. It's possible to remove services entirely too, so experimentation between different options is very easy.

A sustainable self-hosted setup is one that is quick to maintain and upgrade. If you don't do both, security issues and incompatibilities will eventually be a problem.
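
As a concrete illustration of several of those points, here's the shape of a pinned service with its state isolated under one directory; the image tag and paths are illustrative:

    # docker-compose.yml
    services:
      pihole:
        image: pihole/pihole:2024.07.0   # pinned; bump deliberately
        restart: unless-stopped
        network_mode: host
        volumes:
          - ./data/etc-pihole:/etc/pihole
          - ./data/etc-dnsmasq.d:/etc/dnsmasq.d

    # docker compose up -d; back up ./data plus this file and you have everything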


- Docker's built-in service management works plenty well enough for a lazy home setup, which saves you having to care about how your server's distro manages services.

- Using docker means I can easily specify the version I want to run.

- Docker images are, for some reason, often simpler to configure than OS packages, for the same result.

- It's also really easy to tell what you need to back up, and to be sure that you got all of it. Plus it's very easy to make sure everything—data and config—for every service lives in a single branch of the filesystem tree, for further convenience.

- For similar reasons, entirely erasing a daemon is very easy.

- Using docker means I don't have to care which version(s) of the software I need is provided by my distro, or go track down extra repos, or whatever. This makes it easy to run a boring-but-stable distro to minimize maintenance, but still have the latest versions of the things you're actually running, and upgrading one of them will never mess up anything else.

- There's a lot more server software available through docker-hub than most (all?) distro official package sets—and, again, that works the same no matter which distro you're on, so everything about it, including the knowledge, is portable.

- All that, using the exact same docker-compose files or simple shell scripts, works the same on any distro, plus macOS. Migrating to a new server can, trivially, be made as simple as rsyncing your entire docker-stuff tree and then doing one "docker-compose up -d" or running a single very boring and simple shell script one time.

Really, the only down-side is you can't use it on FreeBSD right now.

As someone who's run Linux servers (public-facing, even, in the early days) at home since, oh, 2000 or 2001 (plus managing them professionally) I can say that Docker's great for home servers. Less fiddly trivia to worry about. What distro's on my home server? I'm not even sure. I think it's Debian? Dunno. Hardly matters. I've written zero "config as code" stuff for my home set-up, yet could have a new server up with identical services in maybe 3-5 minutes longer than it takes to install the OS, and all but maybe a minute of that would be hands-off, just watching progress bars complete. With effort I could get that down even lower, but I got that much for free.


I agree with what you said, since this is exactly my experience as a home hoster. To add to what you said, I suspect that Docker containers are much easier to configure than OS packages because they are all to some extent Twelve Factor compliant, which means that configuration and operations are simplified.


> To add to what you said, I suspect that Docker containers are much easier to configure than OS packages because they are all to some extent Twelve Factor compliant, which means that configuration and operations are simplified.

Yeah, I think that's got something to do with it. At the very least, you practically have to document right up front where all the config affecting the container lives, both files and env vars. It's also tempting to put the most commonly-used config items in env vars, if they weren't already. Consequently, dockerized Samba, for example, is the easiest config of that daemon that I've ever performed, for any of my never-unusual-or-complex use cases, including with GUI tools, over a couple decades of using it.


> ...one that I recently added to the home network is a Stratum-1 NTP server that gets its time via a Pi GPIO GPS module. Whether this has any real positive privacy or security implications, I do like the idea that I have significantly reduced outbound traffic to/from port 123 from the devices in the home network

I like the way you think, and I was thinking about some of the same. Could you share what solution you went with in terms of hardware? I am interested in using better-than-commodity time that could serve both NTP and precision time as well.

The benefits of better, local time are fairly significant if you are doing anything with time-sensitive computing (PKI, performance measurement, and audio are interests of mine).


I used this board from Uputronics[0], with the optionally included active GPS antenna. Mostly because they were readily available, and there were a few guides online for setting up ntpd on a Pi using the Uputronics board.

As a word of caution: it looks like most guides that I came across are outdated, or indeed have never produced a reliably (or at all) working ntpd setup. The instructions that led me down the 'right' path are linked here[1]. I am still in the process of tuning the GPS offset in my setup. But all in all, it's been a great learning experience in the complexities of a protocol I've mostly taken for granted for a very long time.

[0] https://store.uputronics.com/index.php?route=product/product...

[1] https://www.philrandal.co.uk/blog/archives/2019/04/entry_213...


If Apple and Sonos rely on DNS to contact their NTP servers, it should be relatively easy to spoof.


If someone is interested, some comments on my home setup (about 4-5 years of continuous tinkering)

I have FTTH (France) and the first thing I did was replace the ISP-provided router. They are really poor quality. I bought a Ubiquiti ER-4 and, in retrospect, I should have bought a mini fanless PC and run Debian on it.

Why Debian and not OPNsense or something like that? Because I have control over the very limited set of services I will use on the router, namely routing, firewalling and dnsmasq for DHCP and DNS. It is difficult to keep up with security / functionality otherwise. The worst is a closed, archaic system such as the one on the Ubiquiti ER-4. A step up is something more dynamic such as OPNsense or similar. If you have the will to learn a bit of Linux, Debian is the cleanest solution. Again - this is really for a very small set of services you will almost never touch.

Behind the ER-4 there is a switch that powers a few end-user PCs, a PoE port for a Ubiquiti UniFi WiFi access point, and a server.

The UniFi AP is fantastic. It is such an extreme upgrade vs the ISP-provided one that I cannot imagine going back. Really invest in a higher-range consumer AP. In my apartment I used to have, along the rooms (more or less linearly), 4, then 2, then 1, and 0-1 "WiFi bars" (roughly the quality of reception). I now have, for the same emission power, 4 4 4 3.

Then the server. There are two basic parts on it: the "services" part and the "home automation" part.

The services are things (web sites, MQTT, ...) that are run exclusively from Docker. I tested the recovery of the whole system from only the backups (of the configuration (docker-compose.yaml) and the non-volatile volumes), and it took me 1 hour to go from "I do not have any documentation, nor an ISO" to "the lights work again". Docker is a life saver here.

Backups with Borg.
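
(The whole backup side is a few commands; a sketch, with an example repo path:)

    borg init --encryption=repokey /backup/home-server
    borg create --stats /backup/home-server::'{hostname}-{now}' \
        /srv/compose /srv/volumes
    borg prune --keep-daily 7 --keep-weekly 4 /backup/home-server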

The second part on the server is home automation: a Docker container for Home Assistant and a USB dongle for Zigbee. I then have Zigbee and WiFi devices all over the apartment to get rid of fixed wall switches. Plus automation.

What would I have done differently? A mini PC instead of the ER-4, and two extra electrical wires in the wall (phase and neutral).

What will I do in the future? A redundant setup for home automation (this is not simple at all).

I would love to get some feedback from those who are more experienced or went through this - things they would do differently, or again.


I have a pretty similar hardware setup but I went for a used Ruckus R610 WAP. With a PoE injector and shipping I spent around 200 USD. I'm really happy with it.

My current goal is proper IPv6 support, in case the day comes when my ISP will no longer give me an IPv4 address. The setup is currently close to perfect, but my DDNS service (Google Domains) isn't letting me make dynamic AAAA records for some reason. I'm in talks with support right now. I also have not successfully made ip6tables entries for rerouting IPv6 DNS traffic to my local DNS servers; when I have made entries, it totally breaks IPv6.

I too would prefer an x86 box for a router, but OTOH purpose-built routers are much more energy efficient. Heat is bad for reliability and more power means less run time on a UPS.

My next home network purchase will be an all-flash TrueNAS box that is backed up to Backblaze. I want to get off privacy-devoid cloud services.


I would hardly consider blindly trusting a third party like cloudflare to be over-engineering.


"I over-engineered my network for privacy" .... by trusting a third-party for DNS resolution.

Doesn't compute !

Even more so as the OP chose Cloudflare ! US Jursisdiction + Commercial Company ?


Have you got a recommendation? I see them as a good choice compared to my other options, but if there is better out there…


> Have you got a recommendation?

Well, if you're going to go to the trouble of building and running your own VMs or docker images then the obvious recommendation is to run your own recursive resolver (Unbound or Knot Resolver).
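
To give a sense of scale, a minimal recursive unbound for a home LAN is only a few lines (addresses are examples). With no forward-zone it resolves from the roots itself:

    # /etc/unbound/unbound.conf.d/lan.conf
    server:
        interface: 192.168.1.53
        access-control: 192.168.0.0/16 allow
        prefetch: yes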

The added bonus is that CDN content will operate as expected for your IP range instead of you talking to the CDN server nearest to whatever datacentre you spoke to the third-party DNS BGP anycast on. This may or may not be a big deal depending on your geographic location and internet connection. I am aware some third-party DNS services have options to try to fix the CDN problem, but YMMV as to whether this works in practice.

Otherwise, if I was forced to name a third party, I would likely name Quad9, especially after their recent relocation to Swiss jurisdiction[1]. IIRC (it's been a while since I read it) their privacy policy is also better than Cloudflare's (e.g. no weird "we'll retain some stuff for 25 hours" clause).

[1] https://www.quad9.net/news/blog/quad9-public-domain-name-ser...



Run your own DNS server such as Unbound [1].

[1] https://nlnetlabs.nl/projects/unbound/about/


"Have you got a recommendation?"

I run my own resolving nameserver (unbound) on a server in a datacenter (but could be DO or EC2 or whatever).

The upstream for my nameserver is the DNS I set up for myself at nextdns.io which is ad-filtered like a pi-hole.

So I use my own DNS server and distribute that address to whomever or whatever needs it - and I control the traffic and usage and monitoring of it - but I also get ad-filtering without having to run a pi-hole.

Recommended.


Setting up a local DNS server is super easy with unbound + pihole: https://docs.pi-hole.net/guides/dns/unbound/.


Use Tor, or at the very least, use Tor for DNS, otherwise it's pointless.


This! unbound (a local DNS server) is painfully easy to set up and is one of those programs that just works. Performance has been solid too, and it cooperates well with Pi-hole. There are privacy-conscious DNS servers that aren't based in the US too, if you insist on not self-hosting.


From a physical security perspective, I cannot recommend and emphasize enough having a fiber optic media converter between your network interface (cable modem, satellite, etc.) and your expensive and extensive network. The purpose is to isolate your expensive hardware from lightning-induced destruction. The cable modem and one of the media converters serve as "sacrificial anodes" in this setup. It ends up being much cheaper to replace just a cable modem and a single media converter instead of an entire network of devices.


After losing a cable modem and the connected NIC port to lightning in an urban apartment, I installed a lightning suppressor / surge arrester with a replaceable gas discharge tube on both the coax cable to the cable modem and the ethernet cable to the modem. Since then, I've replaced the coax-facing tube/fuse once due to a surge, but the equipment has been fine. Spare fuses were back-ordered at Mouser, but arrived eventually.


That is a great point, but what about power? What do you recommend there?


Surge protector (high quality, insured) or UPS.


It's bizarre to me - the idea of running so many things on a home network that you need to start writing Ansible playbooks, building Docker images, spinning up Kubernetes clusters, putting a rack together. I think if you really applied some thought you could get rid of all of it and save yourself the time and effort. But I guess tech people do have a tendency to find nails for their hammer.


It's a hobby. Why do people build their own furniture or sew their own clothes, or knit a scarf? Same thing here. We do it to learn and have a sense of mastery. Doing things the "at scale" way on a scale where we personally can get that little dopamine hit from a successful project.


You are missing one more reason why: there just isn't anything fitting our needs on the market. I just want to keep ownership of my data, but cloud-based everything basically makes this impossible unless you run your own cloud.

I don't want to run my own infrastructure. But the commercial providers will scan your data, "unperson" everything they, the government, or the copyright holders don't like, and charge me a fortune for the privilege or withhold features.


If you knit a scarf, you have a scarf at the end of it. You can keep it or gift it to someone.

If you write a playbook to install some software on your home network, you have some YAML that took longer to write than it would have taken to do the task manually and isn't likely to be useful to anybody else. And without maintenance, that playbook will eventually decay and stop working. The scarf will continue to keep you warm forever.


"And without maintenance, that playbook will eventually decay and stop working."

This is the part that has killed this as an activity for me. I really enjoyed the experience of setting up playbooks and so on for my home server (deluge, jellyfin, zoneminder, etc), but in the end I didn't have the time to do much more than make the occasional config tweaks and restart it all live, and once that started happening, the scripts were no longer of value, and it was clear there was no point.

That said, I know some Nix acolytes swear by it for this kind of thing, in part because it really forces you to do things using its declarative wrappers and intentionally doesn't provide workarounds the way containers do ("just shell in and change whatever, lol").


Same here - the survivors on my home network are the ones with minimal ongoing BS. pfSense updates itself and chugs along like a champ. Plex does similar and never needs any kind of complex migration between versions. The Aruba wifi gear I picked up for a song (because no bugger knows how to configure it) is a million firmware versions behind current and performs just brilliantly.

Many other things - Nextcloud, caddy, proxmox to name a few, are long gone now because they demanded hours of attention at unpredictable times, and I just can't be jeffed.

I still love tinkering with bits of tech on the home network, but I'm extremely wary of coming to rely on any of it until it's proven itself to need minimal ongoing babysitting.


When I go for a hike or play a game of soccer or audit a university class, I have nothing concrete to show for it afterwards except the personal development that occurred. This is no different.


You think a scarf will keep you warm? Wait til you see my basement full of dual-socket Ivy Bridge servers humming along. No scarf necessary.


You're making an apples to oranges comparison. If you knit a scarf and give it away, all you're left with is the knitting needles and the knowledge of how to make another scarf. Same thing with ansible - you're left with ansible and the knowledge of how to write a playbook in future.

But also, why do you care so much about what other people do with their time? Live and let live, my friend. Nobody is forcing you to write ansible playbooks, build your own homelab, or comment on posts you view as a waste of everybody's time.


I can’t count the number of times I’ve done something in a “completely useless” personal project which taught me a skill or tool which was useful at work.

Even if you go by the philosophy that something has to be useful to be valuable (not something I personally believe), toy projects have their place. Plus, they’re fun.


Why are you comparing the scarf (output) to ansible (tool) though?

'If you use ansible to configure some stuff on your home network, you have some self-hosted stuff at the end of it. You can use it yourself or let guests log in too.

If you use needles and threads, you have some tools that took longer to go to the shop and buy and use to make something than it would have taken to go to the shop and just buy the thing. And without regular use, you'll lose the needles and the yarn will tangle and discolour. The self-hosted services will continue to run.'


You also have the experience of setting up a network in a "professional way" that you can use on the job, etc. Also, maybe OP just enjoys the process / finds it fun?


What seems illogical to me is the use of docker containers "so I don't have to worry about configuration and can leave that to someone with more domain knowledge" + "build my own playbook". Seems like you ought to go all one way or the other.


A playbook is documentation. The worst thing that can happen is setting up a useful service that you don't know how to, well, service. This is how 20+ year legacy servers start their life.


> If you write a playbook to install some software on your home network, you have some YAML that took longer to write than it would have taken to do the task manually and isn't likely to be useful to anybody else.

This is debatable. Some of those manual tasks you'd simply forget about, and you'd end up with a partially reproduced environment later - which is okay if what's missing is configuration you don't particularly care about. It's much more annoying when you expect everything to work but find your workflows randomly blocked because you forgot to manually install and configure some piece of software when reinstalling the OS, or you have to set up your IDEs completely anew because the configuration directories weren't preserved anywhere and are now simply lost. One of the benefits of using something like Ansible is avoiding situations like this - or perhaps something like NixOS (even though there are usability concerns presently).

In the more common case of using Ansible to manage servers (perhaps a homelab) as opposed to just handling personal devices, the implications are far more serious - I've seen Java applications fail after migrating to a new OS release because fonts weren't installed, because someone forgot to document that step as necessary. And even when instructions are given (say, when setting up some package anew, or reading your own documentation about your setup), there's no guarantee you'll remember to follow all of them to the letter, or maybe you'll simply glance over a failure in one of the steps. That becomes even more likely as the size of your homelab and the count of your personal devices increase.

Clearly the impact of things like that is far lower in a homelab setting than in a professional environment, but that also means you'll essentially be pigeonholed into using some sort of automation solution at your workplace (hopefully) to avoid such situations, so also using it for your personal stuff is just the next logical progression. No one likes dealing with failures that aren't immediately apparent and could have been avoided entirely. I don't know about you, but Ansible's standard modules provide really good reusability, so I've definitely borrowed samples of how to do something from my personal setup for my work projects and vice versa (not verbatim, but the syntax, how to use parameters, and how to get things done in general).

> And without maintenance, that playbook will eventually decay and stop working. The scarf will continue to keep you warm forever.

This doesn't feel entirely accurate - everything, from your software to your scarf, will eventually decay. It's mostly only a matter of time, though you can mitigate it by using more stable OS distributions like Ubuntu LTS (until very recently, I would have also recommended CentOS) or by using better materials for your scarf. Oh, and choose the stable versions of boring software, and perhaps cut the most rapidly changing technologies out of your stack entirely until development there slows down.

For me, that currently means:

  - using Debian because it's stable and boring enough for my needs (both servers and desktop/laptop with XFCE)
  - using Ansible for servers, treating personal devices as disposable otherwise (no attempt to preserve configuration, too much effort)
  - using automated incremental backup software for the data, just in case
  - manually provisioning any VMs/VPSes that I require, but having most of the configuration be similarly automated
  - using Docker containers within those VMs/VPSes liberally, to separate software runtime environments from their data output and their configuration input
  - using Docker Swarm to make managing all of that simpler and partially automate it, alongside something like Portainer for making that process more user friendly
  - using Caddy to never have to deal with certificates manually, even though I manage DNS manually
  - not updating software I don't expose publicly and don't need the newest versions of (GIMP, Blender, LibreOffice, some private containers)
  - using automated security upgrades within everything else, but also using the latest stable versions of server software, never bleeding edge
A lot of it is about finding what works for your particular circumstances, seeing which parts cause the most pain, and then automating those.
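To make that concrete, a trimmed-down sketch of the kind of playbook I mean (the host group, package list, and paths are placeholders for illustration):

  ---
  - hosts: homelab
    become: true
    tasks:
      - name: Install the base packages that are easy to forget (fonts included)
        ansible.builtin.apt:
          name:
            - docker.io
            - fonts-dejavu
            - unattended-upgrades
          state: present

      - name: Ship the compose file that defines the actual services
        ansible.builtin.copy:
          src: files/docker-compose.yml
          dest: /opt/stack/docker-compose.yml
          mode: "0644"

Nothing clever, but rerunning it against a fresh install gets you back to a known state without relying on memory.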


the scarf will decay too :)


I enjoy over-engineering my home network too. It's a learning experience. It's more about the process than the end results.

But many of these things aren't actually a major effort. It's some YAML for Ansible to configure a Raspberry Pi for security and running Docker, then running a few existing Docker images. It might even take less total time to stick it all in Ansible and run it than to run each command by hand.

And there are actually some really nice benefits to running something like Pi-hole on your home network and forcing all DNS through it. You can get ad and malware blocking on devices that don't typically let you do that easily. And you can set it up in one place rather than on every single device.
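For scale, the whole thing can be one short docker-compose file. A sketch assuming the stock pihole/pihole image (the timezone and password are obviously placeholders to change):

  version: "3"
  services:
    pihole:
      image: pihole/pihole:latest
      restart: unless-stopped
      ports:
        - "53:53/tcp"
        - "53:53/udp"
        - "80:80/tcp"              # admin web UI
      environment:
        TZ: "Europe/London"
        WEBPASSWORD: "change-me"   # admin UI password
      volumes:
        - ./etc-pihole:/etc/pihole # persist settings across image upgrades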


It's just a hobby. It gives you some extra nice things you wouldn't have that easily (for one, ad blocking at the DNS level with Pi-hole), but in the end it's just a hobby.


I have nothing against hobbies but it's even better if your hobby has something to show for it. I think the time saved by just manually plonking the software on a cloud VPS would allow the author to do something more challenging and interesting, instead of well... pleasuring oneself with heavy duty deployment tools.


>I have nothing against hobbies but it's even better if your hobby has something to show for it.

For you, sure. For many others, it's just about doing something you like doing just because you like doing it. DJ'ing is my hobby, has been for 15+ years. I regularly spin sets a couple of times a week for at least 2, often up to 4 hours at one time, just for myself. I love doing it and feel so content and happy inside when I'm done. I feel even better during it, when everything clicks and I hit a really nice groove.

I have no need to share it with the world, have other people listen to it, or use my hobby to get gigs and make money with it. My sets are ephemeral, my own happy place, and always will be. I'm sure this is the case for many other people and their own hobbies.

Why can't people just enjoy doing a hobby simply because they like doing it? Why does there often need to be pressure from somewhere to do something more with a hobby?


I want to understand where you're coming from. But I do think there's a false equivalence here. DJing is a creative process with no defined goal. There are only so many ways an Ansible playbook can realistically be written to install some software; it's the opposite of a creative process.

Tapping your foot can't be considered a hobby right? It produces nothing tangible and involves no creativity.


It is a creative process, for sure, but that's just one reason I do it. Another reason is just getting lost in it, forgetting about everything else and just "being in it".

The same can be said about a lot of other hobbies, regardless of what kind of process it might or might not be. Just doing it and being in it - reading and getting lost in a book, watching a movie and getting lost in it, going on a hike and getting lost in the woods (metaphorically, not actually lost, lol), writing code and getting lost in it, etc..

Ansible playbooks can be the same way! The point of hobbies for many people is just because they like to do it, nothing more, nothing less. There may also be other reasons, such as the creativity for me, but there are also times when I'm not feeling super creative and my set will just be an un-mixed playlist of music that I just want to get lost in in that moment... if that makes sense.

Some people might just like losing themselves in an Ansible playbook, or think they're fun for fun's sake or whatever reason. :)


Writing ansible playbooks can be very creative, and so can tapping your foot.

Also, there is no requirement that a hobby has to produce something tangible or be creative. Watching television, drinking, people watching, and meditating could all be considered hobbies. Anything you do somewhat regularly to relax or for enjoyment, that isn't your job, is a hobby.


Sometimes a hobby is just something you find pleasantly distracting. Sudokus aren’t creative yet are thoroughly enjoyed by some.

It doesn’t have to be creative or worthwhile. Just something that you enjoy. Perhaps yours is trolling nerd forums about other people's choice of hobbies? :P


> Tapping your foot can't be considered a hobby right? It produces nothing tangible and involves no creativity.

Let me introduce you to the world of tap-dancing my friend.


> I have nothing against hobbies but it's even better if your hobby has something to show for it.

This toxic "grind until you die" mindset needs to end.

Hobbies *do not* need to produce something to "show for it". The thing you are "producing" is happiness for yourself.


This is pretty condescending, and it also misses the major point made in TFA. Writing a quick Ansible playbook / docker-compose file means you can quickly stand up your services on any system: run it locally on a Raspberry Pi, move it to a VPS, just run one command.

None of this tooling is "heavy duty" by any means. If you just apply a little thought, you'll see how this saves time and effort.


> instead of well... pleasuring oneself with heavy duty deployment tools.

How dare someone experience pleasure! Especially from a hobby.

/s


> it's even better if your hobby has something to show for it.

Like the experience and newfound knowledge that Ben gained from this endeavor?


Reading a book doesn’t give you anything tangible to show for it. You read for the experience. Likewise for gaming, solving a crossword, or watching TV. Different people relax in different ways.


Heavy duty deployment tooling is a hobby of its own. :-)


The hint is in the title of the article:

  How I re-OVER-ENGINEERED my home network for privacy and security
This is what us "tech people" do.


I think your list is a "murder, arson and jaywalking"[0] kind of list. Ansible playbooks and Docker images are very, very simple compared to building Kubernetes or hardware racks (both of which can be very, very hard).

My Ansible playbooks are barely a step up from shell scripts, my Docker images are basically always whatever each project supplies themselves (and don't run on top of Kubernetes), and my infrastructure consists of a grand total of one Raspberry Pi and an 8-port switch.

These things don't have to be overcomplicated if one doesn't want to overcomplicate them. Docker and Ansible lead to real time savings, both because they document the exact software setup and because they make replicating it very easy when switching servers (as inevitably happens in a hodge-podge home lab).

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/ArsonMurderAndJa...


I have about 6 computers total (4 personal, 2 servers). Yet I am toying with the idea of throwing ansible into the mix on this.

Every once in a while a good clean 'toss the whole thing out and start over' is in order. For me it's typically that an Ubuntu upgrade has done something odd. Then I spend a few hours digging into it and fixing it. But it would be easier/quicker to just scrap the whole thing and rebuild it, or at least toss it in a VM beforehand so I know what will go sideways. A bit of Ansible would let me do that quickly. The other 4 I can just leave alone. Starting at a known state (broken or fixed) is always handy. Basically it may save me several afternoons of work, where I would rather be programming instead of digging out some weird error an upgraded bit of software started throwing because some leftover config file is getting in the way.


While many people are happy to write software as a hobby, having to fiddle with ansible, docker, kubernetes or other devops stuff is the kind of thing that would make people quit their jobs.

It astonishes me how people can enjoy this kind of tedious work as a hobby.


Kubernetes is overblown for home set-ups. Any tooling I use, including Docker and Ansible, is because they make the unenviable task of keeping the home infrastructure running less bad than doing these things by hand.


Why not. Here's the thing: tired of it? Just stop and do something else. Can't do that at work.

It's just mental exercise.


I mean if it floats his boat then why not.

But it looks like a lot (all?) of the work here was just to get pi-hole functioning well - and that now needs him to deploy pi-hole, cloudflared, caddy plus all the surrounding tools of Docker, Docker Compose, Ansible etc.

You're spot on that if he'd applied some thought beyond simply 'fixing pihole', he might have seen there are more modern and elegant solutions for configuring network-wide ad blocking.

e.g. AdGuard Home is an open-source single binary, precompiled for most architectures and OSes. Self-updating, with an HTTPS multi-user GUI; supports DoH, DoT, and DNSCrypt out of the box. Has a single yml config file to back up/sync; supports ad-block lists in regex as well as hosts-file format.

Downloading and running that one little binary gives him the endgame he wants a lot more elegantly.
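For a flavour of that single yml file, the relevant bits of AdGuardHome.yaml look roughly like this (the file is generated on first run, so treat this as a sketch rather than gospel; the upstream and list URL are just examples):

  dns:
    bind_hosts:
      - 0.0.0.0
    port: 53
    upstream_dns:
      - https://dns.cloudflare.com/dns-query   # DoH upstream, no extra tooling
  filters:
    - enabled: true
      url: https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt
      name: AdGuard DNS filter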


It's the hacker mindset -- and this is hackernews after all. Isn't this kind of like walking into a wood shop and saying, "you know you guys can just buy this stuff at Ikea, right?".


Worse: this is like walking into a wood shop and claiming that you can just buy this at the IKEA, when the maker of this piece of furniture knows for a fact that IKEA doesn't carry anything remotely similar and will not do so for a long time.


This is a brand marketing ad for GitHub.

It's like those "there's no place like 127.0.0.1" shirts, which work despite the fact that we don't pronounce the localhost address as "home". It's a way of saying "hey, look, I'm just like you!" without being overt about it.

Microsoft is a clever company, and they recognize that they have to do a lot to offset their corporate stink that they brought to the GitHub and npm brands.


Eh, people who don't know tf/ansible say these things because they don't know the tools. When you know the tool it's no big deal, and it's preferred for repeatability and because it's self-documenting. People often think it's just good for complex things, but infra-as-code solves a lot of problems. I use tf for my homelab because that's what I'm familiar with, and a homelab will inevitably get tinkered with or fall out of state. Same reason I use tf for everything infra as a dev consultancy of 1: I'm not going to touch this thing for a while, and when I do I want it to be repeatable and documented so I can move forward instead of back.

Also, kubernetes is easy when using the right spin-up tool, like portainer or rancher. It's also the only way to run docker images across multiple machines today: for a while I ran swarm, but that's dead, and then I just used docker compose but started running into compatibility issues. With kube you also get to leverage all the existing helm charts, which saves a lot of time.

It's still a bit overkill but a lot of fun, and I learned networking and infra things doing it on a homelab. I also use it to try new things.

There are also political reasons to do so. I suspect the internet will get less free and it's nice having your own private space.


I agree with everything except for Kubernetes. My use of Ansible for example is purely so I don't forget what tweaks I made to the system to make things run.


I've been running a very similar basic setup (docker-compose + ansible + caddy + a bunch of other things, most importantly nextcloud) on a VPS for my personal server needs for about 2 years, so I'm feeling very validated right now.


This is a great write-up, not just for the ostensible goal (more privacy and security with maintainable infrastructure) but also because it lays out some reusable primitives for other “home lab” type projects.

The next stop down this fun slippery slope is kubernetes :)

That said, it also is worth mentioning NextDNS as an option. It basically is “pihole as a service” and is reasonably affordable. I ended up going that route to achieve similar goals as to the author and have been happy with it.


> That said, it also is worth mentioning NextDNS as an option.

There's ControlD.com as well (by Windscribe), which is super neat and which I really like; it offers more than just DNS.

Disclosure: I co-develop a FOSS NextDNS alternative.


I asked the author of the OP about NextDNS on Twitter. His reply:

https://twitter.com/benbalter/status/1433123882366611466


Doesn't NextDNS have the same problem as VPN where you are trusting all your browsing data to one entity?


I've done a similar setup at home with a full Unifi setup, PiHole, DNS over TLS, Let's Encrypt certs for internal devices, multiple wifi networks and other "It's COVID and I'm bored" goodness.

I've had to turn most of it off.

PiHole means every single web site issue for any user on the WiFi ends up on my plate immediately. Wifi network segments mean Minecraft doesn't "just work" for the kids and their friends. IoT networks mean AirPlay doesn't find devices, SSDP doesn't work, etc.

Having a functional home network turns out to be more important than having a secure home network.


This post makes no sense, at all.

1) UniFi devices just recently had a massive security vulnerability. It's also not a good idea to let third-party servers access your home network directly via your router

2) Instead of Cloudflare, use something like Unbound. I have mine set up to fetch directly from the root servers. Why send your DNS queries upstream?

3) Caddy is the one decent thing suggested, however I am skeptical of the benefits of having HTTPS internally in your LAN. If an attacker is in your LAN, then it is already Game Over


Fairly similar to what I did, except I used Proxmox and LXC, not docker-compose. Since Proxmox is addressable via the CLI, you can run Ansible against it to deploy stuff. There is also a module for it, but I skipped that.

I’d suggest blocking port 53 at the firewall too. I was surprised how much stuff doesn't go through the cloudflared tunnel. You think it's all going through the Pi-hole, but there is so much rogue DNS traffic.
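Since we're in Ansible-land anyway, here's a hedged sketch of the redirect I mean, using the stock iptables module (the interface name and Pi-hole address are assumptions for a typical setup):

  - name: Force stray DNS queries back through the Pi-hole
    ansible.builtin.iptables:
      table: nat
      chain: PREROUTING
      in_interface: br-lan             # assumed LAN-side interface
      protocol: "{{ item }}"
      destination_port: "53"
      source: "! 192.168.1.2"          # don't loop the Pi-hole's own lookups
      jump: DNAT
      to_destination: "192.168.1.2:53" # assumed Pi-hole address
    loop:
      - udp
      - tcp

Devices that hard-code 8.8.8.8 still end up answering to the Pi-hole that way; DoH is the one that slips past, as noted elsewhere in the thread.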


"How I re–over-engineered my home network for privacy and security"

Proceeds to send all his DNS queries to Cloudflare


Still better than sending them to his ISP (or keeping them visible to his ISP no matter the destination). Sure it's not ideal but really very few things are when it comes to home network DNS requests.

I'd love suggestions for better solutions though, I'm sure there's something I haven't considered.


I intercept port 53 traffic at my router, tunnel it over to my VPS which runs an unbound instance. Overall I think this setup is better because:

1. It denies my ISP the chance to look at my DNS requests.

2. It doesn't involve yet another third party which may or may not try to monetize my dns requests.

3. It mixes my dns related traffic with that of a datacenter, which doesn't provide a compelling source of data for advertisers IMO.

DoH throws a wrench in the works. Still considering my options on how I want to address it.


I am a hypocrite as I do the exact same thing, Pi-Hole + Cloudflare. It's just that your title raised my hopes for a better solution.

Sorry :)


Some of the author's concerns with Pi-hole can probably be fixed by replacing it with AdGuard Home, because of the HTTPS support. That being said, it's kind of cool to see someone higher up the corporate ladder posting these kinds of blog posts!


Was hoping to see some ideas on how to create a DMZ that's still accessible locally.

A use case would be hosting internet-facing applications at home that can still be safely consumed locally.


I mean, the simple solution is to just access your DMZ via its public IP address.

Alternative solution is to firewall traffic from the DMZ to the rest of the LAN but allow LAN to DMZ.


For TCP this is simple: only allow new connections in one direction.

The better way is to separate the networks by VRF, ensuring packets take the right path and go through your DNAT rules without possible shortcuts (firewall separation works too, but then you're at the mercy of not fat-fingering any rules).
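As a sketch of the stateful version (interface names are assumptions; expressed as Ansible iptables tasks to match the rest of the thread, but the plain iptables equivalents are obvious):

  - name: Allow only return traffic from the DMZ back into the LAN
    ansible.builtin.iptables:
      chain: FORWARD
      in_interface: dmz0
      out_interface: lan0
      ctstate: [ESTABLISHED, RELATED]
      jump: ACCEPT

  - name: Drop anything the DMZ initiates towards the LAN
    ansible.builtin.iptables:
      chain: FORWARD
      in_interface: dmz0
      out_interface: lan0
      jump: DROP

LAN-to-DMZ traffic is then allowed by a separate rule (or the default policy), so connections can only ever be opened from the trusted side.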


wow. Just dropped in a new home gateway (Devuan 3/Debian 11)

- hardened Linux (Kicksecure/Whonix)

- openrc (no systemd network vulnerabilities)

- DMZ

- NAT + IPv6

- stealth/hidden master DNS server.

- Shorewall firewall

- Wireless, 4 SSIDs

- Wireguard to remote VPS carrying DNS queries

DMZ covers all IoT, Alexa, SmartTV, game console, cable TV; both by wireless and CAT5e.

Household network covers laptop, desktop, cloud servers (Proxmox), Kerberos ticket server, file server (NFSv4), media DVR, and ownCloud server.

And my white lab is totally airgapped and occasionally waterfalled firewall as needed.

I have a subnet that offers encrypted MAC.


Do you have the instructions somewhere to follow?


I’m writing them up as I go along. Need to break them up into short blogs.

Now doing transparent interception with Squid.

And I bookmarked this comment so I can reply to you later


Appreciated, thanks


Sounds like my old networking setup, except I had a special "IoT-like" VLAN for my kids and their friends. No reason for malware to propagate outside the kids' network :) Besides that there was a DMZ and a storage VLAN, and because I'm cheap I opted for a router-on-a-stick kind of setup, with all storage mounted across the firewall as kerberized NFSv4. Ideally I would probably have set up an L3 switch with ACLs, but I had the processing power / bandwidth on the router.

These days I've taken it a step further (back). PiHole got replaced by AdGuard Home, which in turn got replaced by NextDNS. PiHole/AdGuard Home only works on your own network; NextDNS works everywhere. The price they're asking ($20/year) is less than the cost of the Raspberry Pi and the power required to run it, and it requires no maintenance on my end.

Everything that previously ran in Docker at home has been replaced with cloud-equivalent services, encrypted at the source by either rclone's crypt backend or Cryptomator. Again, the cost of the cloud services is about half of what I paid in power consumption just to run the server, before even adding hardware costs on top.

As for my home network, since everything is now "out there", I've retired about half my network infrastructure (yay, less stuff that can break), and the internet connection is now the lowest common denominator. I used some of the money saved on power consumption to upgrade my internet connection to 500/500 instead.

All that remains is a single Mac Mini that is powered 24/7 for the purpose of pulling our iCloud data back home so that I can make backups. I would really love it if Time Machine or 3rd-party backup tools could back up that cloud data directly from the client, downloading it only once and otherwise leaving it in the cloud.


This is similar to my setup, which I host locally. I use an $80 laptop that I bought on Craigslist, and each service I need runs in a rootless container under a single user dedicated to that purpose. So far I have a dedicated Valheim server, AdGuard, and Plex running under this scheme.

I still have to move over my torrent client though. Perhaps I'll write some Ansible playbooks to make this easier to manage.


How do you know CIA agents didn’t sell you an implanted laptop on Craigslist?


Because I am a CIA agent.


How likely is that?


Does anyone make an AP that allows an unlimited number (or at least a lot) of SSIDs tied to separate VLANs? My UniFi APs only support 4, so I have my private network (for my own laptops, etc.), a guest network, a camera network, and an IoT network.

But I'd love to be able to separate IoT devices across multiple networks, so I could have one for smart switches, one for TVs/media players, etc.


You don't want "unlimited" SSIDs-- you want a RADIUS-assigned VLAN and a single SSID. The credentials used to associate determine which VLAN the traffic dumps into. Your UniFi gear will do it.

Of course, a lot of consumer-grade hardware won't do 802.1x so you end up stuck with needing a bunch of SSIDs (and wasting air-time on beacons).
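For the curious, with FreeRADIUS the per-credential VLAN assignment is just a few reply attributes; a made-up entry from the users file (the username, password, and VLAN ID are all placeholders):

  # assign this credential's traffic to VLAN 30
  iot-lightbulbs  Cleartext-Password := "s3cret-psk"
          Tunnel-Type = VLAN,
          Tunnel-Medium-Type = IEEE-802,
          Tunnel-Private-Group-Id = 30

The AP just has to be pointed at the RADIUS server and told to honour dynamic VLANs; the client never sees anything but the one SSID.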


> Of course, a lot of consumer-grade hardware won't do 802.1x so you end up stuck with needing a bunch of SSIDs (and wasting air-time on beacons)

Yes, that's why I want more SSIDs. My internal network does use 802.1x, but as you said, few devices outside of laptops support it.


You probably shouldn’t deploy a one-SSID model for your home network.

Not many kernels can separate traffic within a single SSID (even if you did use VLANs, tcpdump on a malicious IoT device can still view the traffic).

Better to have four to seven SSIDs, each mapped to a subnet. Make one subnet/SSID for encrypted MAC with laptops.

Cable TV, smart TVs, power-line LAN adapters, smart lightbulbs, and webcams should go on separate SSIDs/subnets.


Or just one SSID, where you directly put all the Androids, iPhones, IoT devices, and similar garbage in addition to your trusted devices, with the trusted devices connected via an overlay WireGuard network on top in a full-mesh configuration.

That way you don't have to trust the potentially outdated wifi firmware, which is quite likely vulnerable to all the latest holes in wifi security.
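A sketch of one node's wg0.conf in that kind of mesh (keys and addresses are placeholders; every trusted device carries a [Peer] block for each of the others):

  [Interface]
  PrivateKey = <this device's private key>
  Address = 10.10.0.2/24
  ListenPort = 51820

  [Peer]
  # repeat one block like this per trusted device in the mesh
  PublicKey = <peer's public key>
  AllowedIPs = 10.10.0.3/32
  Endpoint = 192.168.1.40:51820
  PersistentKeepalive = 25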


Wait so does enterprise wifi just send all VLAN traffic to every device despite a login being assigned to a single VLAN?


All enterprise gear should allow you to have multiple SSIDs on different or dynamic VLANs. You won't have that issue on Meraki, Ruckus, Cisco or Aruba gear.

It's worth noting, though, that more than 3 SSIDs bring radio overhead with them, which can degrade the performance of your wireless network.


I'm fine with losing wifi performance if I can improve my home network security. When I'm sitting at my desk my laptop is hardwired, and my only 4K TV is hardwired, so I don't have any great bandwidth needs. Even with 4 SSIDs broadcast I'm maxing out my 500mbit internet connection, so even if I lost half that to overhead I'd be okay with it.

I've also thought about just setting up multiple nodes, one set for secure devices, one for insecure devices.

Cisco "only" supports 16 SSID's, which is probably more than I'd even need.


The Aruba Instant On AP22 might do the trick. It uses the same hardware as the Aruba AP-505, which supports 16 BSSIDs; I couldn't find a documented limit for the AP22. Note: the Instant On series uses a cloud controller and can't be managed locally, but it is a fraction of the price, $180 (Instant On AP22) vs $460 (AP-505).


Yeah this is what I'd like. I don't think I'd ever feel comfortable putting these shady devices on the same LAN as my PCs.


Mikrotik. I have 5 'virtual' APs broadcasting wifi links for 5 different vlans / subnets.


I've just finished a network overhaul as well.

I used similar techniques, except I preferred not to use any external DNS server¹, so I hosted a recursive DNS server (bind9) on a small fanless server (apu2d4).

The DNS server can also block ads, similar to how pihole does it.
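For anyone wanting to replicate that part: the usual mechanism for blocking in bind9 is a response-policy zone (RPZ), sketched below with the zone declaration and other boilerplate omitted:

  // named.conf: enable a local policy zone
  options {
    response-policy { zone "rpz.home"; };
  };

The zone file then rewrites blocked names to NXDOMAIN (domains here are examples):

  ; db.rpz.home (SOA/NS records omitted)
  ads.example.com        CNAME .
  *.tracker.example.net  CNAME .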

1. Why is everyone trusting cloudflare to centralize all their DNS queries? Even Firefox is migrating all their browsers to use them.


Personally I’m using Cloudflare because there aren't a lot of options. The only other major one most tech enthusiasts know about is 8.8.8.8, and that's owned by Google.

It would be good if there were more players in this space. A nonprofit might help here too, like Mozilla. Maybe they could run a public DNS server.


I think you might be kind of missing the point here—if you run your own recursive resolver, e.g. unbound, you’re not relying on any public DNS server.


quad9 is run by a foundation. Most of its donations come from corporate sponsors, but it's definitely a better option.


UniFi now offers L2 isolation on WLANs, so I don't think he needs a VLAN for the IoT wifi.

How do you create a rule to isolate a printer? It still needs to receive requests from the network, but I don't want it probing other devices...?


Any recommendations for a dedicated OpenWRT box for the home? Links welcome. A few 2.5GbE ports would be nice.


Nice article, but not really over engineered.

More, spent money on a bunch of gear and installed PiHole!

I did the same though LOL

However, in the spirit of his comments about Outsourcing to The Experts, I switched from PiHole to AdGuard Home to NextDNS.

Very happy with NDNS, just wish there was a kill switch!

I'm up around 50 devices on my home network, so the standard home router from the ISP, or even the fancy 'gaming routers' that look like alien spiders, doesn't really cut it!

I get terrible internet where I am in Australia, so I have to be creative:

---------- My Setup: ----------

• 3x UniFi UAPs

• 1x USG Pro

• 1x USW 16 POE

• 2x 4G modems, connected to the USG Pro and load balanced! You can't LB with a UDM!

• 1x M1 Mac mini with 12TB RAID as a TMS and Content Cache! Serves around ~500GB a month to my network!


I went down a similar hole with a UniFi Dream Machine.

udm-utilities [0] basically allows you to run all kinds of stuff on it through podman.

Went from trying to teach it to mimic my ISP's router to get internet working, to banging my head against IPv6 prefix delegation, to now running AdGuard, Homebridge, and a bunch of other things across different VLANs on it.

All in all, I am happy with the result, and happier that I got dragged out of my programming bubble to learn proper networking with IPv6.

[0]: https://github.com/boostchicken/udm-utilities


His research boils down to two things:

1. Lock down DNS at all costs (and he means all costs), going so far as to use PiHole and then redirecting DoH/DNS to Cloudflare to scan for more malicious sites.

2. Create virtual subnets at home for untrusted IoT devices


This. I'm really concerned that these IoT devices will begin hard-coding their own setup and using DoH to get around any DNS restrictions.

I've started hardwiring devices that I have some trust in (e.g. laptops, desktops, etc.) onto their own VLAN. I really want to set up a second wifi network, maybe a third for Rokus, IoT, and other things, and force them through a proxy. Or they just don't get access to the Internet at all.

It seems silly that these companies think they can use my bandwidth, whether or not it's metered, to do as they wish.


On the whole Caddy/TLS bit: did I miss something, or wouldn't he also have to purchase a domain for the admin interface? (And then set up Pi-hole such that the domain resolves to the Pi-hole server itself inside the LAN.)

Of course it might be that he already had a personal domain available from some other project, that he could use.

But actually buying a domain just so your Raspberry Pi can use HTTPS seems like the ultimate over-engineering.


You're correct, but you can see he already has a personal domain hosting his blog: ben.balter.com. He probably just pointed another subdomain at the pihole e.g. pihole.balter.com. That's what I did with my piholes and other internal web apps on my home network.
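For reference, the Caddyfile for that ends up tiny. A sketch assuming the hostname above and the Pi-hole admin UI on the same box (swap tls internal for a DNS-challenge plugin if you want a publicly trusted cert on a LAN-only host):

  pihole.balter.com {
    # proxy to the Pi-hole admin interface
    reverse_proxy 127.0.0.1:80
    # certificate from Caddy's own local CA
    tls internal
  }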


I recently installed adguard home on my network because some googling suggested that was better than pi-hole. Is there a reason you picked pi-hole?


PiHole is what first introduced me to the DNS sinkholing concept and was more mature when I was first researching options. AdGuard home has come a long way since then and I’m planning on giving it a closer look when I’m looking for my next project.


This doesn't strike me as particularly secure, private, or overengineered considering his position and level of access.


overengineered for sure. Secure and private - no.


Running something similar with a UDM, on an HP ProDesk G2 which I bought for $50: Ubuntu Server with a Home Assistant VM and Portainer. Uses about 7W, CPU temp 32C.

I was running a Pi 4 before but felt it was WAY overrated for this purpose; you can buy a used micro computer for $150 which comes with an SSD and 8GB RAM and is multiple times faster than a Pi 4.


Ansible looks useful, but YAML is a hellscape of a file format.

Is there a system equivalent to Ansible that uses a decent config file format?


What would you consider to be a decent config file format?


TOML seems OK.


Terraform?


I am wondering whether I'd prefer NixOS to Docker + Ansible + yada-yada for a similar setup. I've never used either; this is just a guess after reading many articles.


A drawing would help!


De rigueur despairing comment, which will be downvoted into obscurity:

That such efforts are necessary, and as the commentary shows, an active area of interest and hacking and development by so many very smart people,

is an absolute condemnation of the dystopian state of the surveillance capitalism that so many of us here have helped build.

It's not malicious actors that are the sole or even primary vector being blocked. It's the now-systemic misbehavior of so many of our own products and services.


I'm not sure what you're even trying to say here.


I'm saying that it's well worth taking a step back from the tree-level view,

e.g. celebrating and being inspired by the intellectual and technical details of a smart, industrious individual's attempt to insulate themselves, to a reasonable degree, from the abuses of surveillance capital,

and consider the forest-level view,

that while few were paying much attention,

our society in general, and the industry most HN readers contribute to in specific–in some cases, very very directly–is now profoundly, all but irredeemably broken, wrong, amoral, and bad, pick a word.

Articles of this type and fandom and advocacy of this or that open source solution to a tiny piece of the surveillance capitalism nightmare,

are in a small way part of the problem; they normalize the experience well sketched here: https://den.dev/blog/user-hostile-software/

But that article only approaches some sides of the problem.

The political and philosophical ones, which are implicit but not blown out, are deep, real, and corrosive.

The subtext, which is not so subterranean, is that readers here who are cashing a paycheck accelerating total information awareness, dark patterns, and the abuse of users in exchange for calculatedly seductive or addictive services whose fine print we all know no one ever reads,

are culpable, and that damage is real, and the costs real.

And they should do better.



