What I'm seeing is not really a "self-hosted website", but a Cloudflare-hosted cache of a self-hosted website.
Edit: I wanted to get an idea of the "real" performance, so I went to a non-existent page so the cache would miss. I waited 930ms for a 301, then another 590ms for the 404 page.
Not sure where you are located, but the Pingdom test for the 404 does not match your claim: https://tools.pingdom.com/#5ca8d73c1e400000
And of course 404 pages will be slower, because Cloudflare still has to receive the request first and then make a round trip to the origin server before responding.
The site is just a hobby page and the only public-facing thing that I self-host. Even my main blog, which is hosted in North America, uses a full-page-cache CDN courtesy of BunnyCDN.
I just ran that Pingdom test again: 938ms for the 301 and 3861ms for the 404 on a cache miss. Now that it's in the cache, everything loads in under 247ms.
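(If you want to check this yourself without Pingdom, here is a rough curl sketch; the URL is a placeholder. It prints the CF-Cache-Status header Cloudflare adds, plus the total request time.)

```
# Show Cloudflare's cache verdict (HIT/MISS/EXPIRED) and total time.
# https://example.com/some-page is a placeholder URL.
curl -o /dev/null -sS -D - \
  -w 'total: %{time_total}s\n' \
  https://example.com/some-page | grep -iE 'cf-cache-status|total:'
```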
- Netgear Cable modem
- Edgerouter ER4
- 8-port PoE EdgeSwitch, which powers the nanoHD AP in my hallway and acts as the backbone of the network.
- Raspberry Pi 3 as a dedicated Pi-hole DNS server
- TP-Link 16-port switch that serves all the 'lab' gear.
- 2x Dell R720s. One is off; the other is a VMware ESXi host.
- ESXi is pretty light at the moment, running the UniFi controller and a dedicated VM that just runs Elasticsearch for a side project. For a while I was heavily using it to simulate an AD deployment for a small business.
- Synology DS918 serves as a network drive and a network Time Machine backup destination. It backs itself up to Backblaze nightly.
- Everything is on a simple CyberPower 1500VA UPS. I can run everything for about 80 minutes without power, but if I shut down the R720 the runway gets much longer.
I got my start around 2003-2004 by trying to run my own mini ISP out of my home during high school. I was doing everything over a residential DSL connection that only got 1.5Mbit down and a fraction of that for upload.
I still remember asking my bud to help me 'test my mail server', which was a qmail installation done via the Qmail Rocks guide. The machine was a Pentium 3 with 128MB of RAM. My bud Anand (founder of Gyrosco.pe) sent a handful of test emails to it and brought it to its knees! Absolutely hilarious incident at the time.
I wish that I had more photos of that time in my life.
There is also a lot that goes on outside of your browser. It blocks tracking logic embedded inside other things, too. For example, my TV cannot phone home to Samsung anymore to tell it what I am doing.
You can get really wild with firewall rules to truly prevent any DNS traffic from escaping your physical network.
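As a hedged sketch (assuming a Linux-based router, a Pi-hole at 192.168.1.2, and eth0 as the LAN interface, all of which are made-up values), the usual trick is a NAT redirect so that even devices with hardcoded resolvers end up at the Pi-hole:

```
# Redirect any LAN DNS query (UDP and TCP port 53) that is not already
# headed to the Pi-hole (192.168.1.2, a placeholder) back to the Pi-hole.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 \
  ! -d 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 \
  ! -d 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53
```

Note that this only catches plain port-53 DNS; DNS-over-HTTPS looks like ordinary HTTPS traffic and will still slip past.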
1) Run once, use everywhere. (Including devices without a browser, like an Android/Apple/'smart' TV.)
2) Configure it alongside the rest of your networking, instead of clicking buttons to manually sync within browsers of the same type.
Specifically, iPhones cannot do in-browser content or domain blocking if the browser is not Safari.
This might be the best machine I've ever owned. It's rock solid, was a great introduction to Docker, and has run a load of small tasks without issue or intervention for a year+ so far.
Anyone know offhand if the 1618+ is getting a refresh this year? I think that Synology has a tick-tock upgrade cadence, with consumer product lines getting upgrades in year x and enterprise products in year x+1.
Is there a good guide to start researching that isn't a top-ten list with just affiliate links?
Definitely have a look through the relevant reddit subs; by and large they are positive and helpful, e.g. /r/datahoarder, /r/synology, /r/homelab. It's good to scan them pre-purchase too; for example, it's been interesting to watch the Western Digital drive issues play out across those subs.
There are some off-the-shelf solutions that are well regarded but expensive (e.g. the Synology DS918 or DS1019). There are also a lot of pretty nice systems you'll be able to hack together yourself. FreeNAS is excellent in this regard, but it gets into personal favorites very fast.
My home test Xen hypervisor is an older Dell mid-tower workstation with a dual-socket/8-core Xeon, chosen because it's reasonably quiet.
For Docker, I think it's just the ease of use. Imagine a standard web service you want to host, consisting of Apache, some PHP, a MySQL database, and maybe a Redis cache as well. Now imagine how the software's author would write setup instructions: 'Grab this and that version of these things, install them, then edit these 10 config files and make sure these lines are in there.'
With Docker it's literally 'Here, copy that docker-compose.yml to your server and run `docker-compose up -d`'.
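A minimal sketch of what such a docker-compose.yml might look like (image tags, ports, and the password are illustrative placeholders, not from any particular project):

```
# Hypothetical stack: Apache+PHP, MySQL, and Redis in one file.
version: "3.8"
services:
  web:
    image: php:8.2-apache          # PHP with Apache baked in
    ports:
      - "8080:80"                  # host port 8080 -> container port 80
    volumes:
      - ./src:/var/www/html        # your PHP code
    depends_on:
      - db
      - cache
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example # placeholder; use a real secret
      MYSQL_DATABASE: app
    volumes:
      - dbdata:/var/lib/mysql      # persist the database across restarts
  cache:
    image: redis:7
volumes:
  dbdata:
```

One `docker-compose up -d` later, all three services are running and wired together, with no host-level config files touched.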
Back in the day you would walk into rooms filled with racks full of phones for SMS / airtime sales "optimisation".
I tried to google some pictures just now, but it turns out I'm behind the times. They've gotten rid of the phone part completely!
Now you just buy the SIM cards and let modems do the rest.
This is really intriguing.
I'm currently running an SFF machine as a server, and have thought about adding some Pi devices to play around with a local Kubernetes cluster.
It actually did work for a while. The problem that you run into is that carriers rate limit the number of texts per minute that a particular phone number can send (this is a problem with Twilio too, but in that case it's much easier to spin up a new number).
The problem with texting rates isn't technical, it's that carriers essentially have a cartel and can set prices however they want, and have tools to detect and prevent such "abuse" of their cheap consumer plans for commercial usage.
"and have tools to detect and prevent such abuse" <- This was my assumption too. I tested all large carriers. Verizon, however doesn't seem to have these limits in place. With a verizon sim I can push out about 200 texts per hour per device. If you get rejected in my tests depends on the receiving carrier.
"it's that carriers essentially have a cartel and can set prices however they want" <- yes. Carrier to carrier is where things are interesting. Sprint gets quickly blocked by ATT for example. But, wow, a UNLIMITED TEXT plan is pretty cheap...
Yeah, the more I think about this, the more I think it counts as supporting the community.
At some point, I'm hoping to dive a bit deeper into the software side, which was also super fun.
I have a Raspberry Pi 2B+ running several services: LDAP, Syncthing, Gitea, a CUPS server, MiniDLNA, a torrent server, NFS, and nginx for PHP stuff like phpLDAPadmin, phpMyAdmin, Nextcloud, etc.
I realized that I don't need a public IP. I set up ZeroTier on all my machines and pointed a subdomain that I own at the ZeroTier address. OK, other people can't visit it, but I rarely need them to.
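A rough sketch of that setup (the network ID and addresses below are placeholders, not the real ones):

```
# Install ZeroTier and join every machine to the same network.
curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join 1234567890abcdef   # placeholder network ID

# After authorizing the node in ZeroTier Central it gets a stable
# address (say 10.147.17.42). Point an A record at that address:
#   home.example.com.  A  10.147.17.42
# Only machines joined to the ZeroTier network can actually reach it.
```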
It's perfect, but Nextcloud is slow :( Maybe an upgrade to an RPi 4 will make it faster.
IPv6 on public WiFi or corporate networks isn't a given, and where I live, neither is cell coverage, so I wouldn't want IPv6-only for important services.
It makes it difficult to even use IPv6 internally, because the LAN addresses track the delegated prefix, so they keep changing.
I haven't figured it out yet.
The term has kind of morphed, and people fold their home "production" environments into it too.
Homelab [hom-læb](n): a laboratory of (usually slightly outdated) awesome in the domicile
https://github.com/quaintdev/pinotes
Or setting up a really cheap server (e.g. I like RamNode) as a central VPN point.
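One hedged way to do the central-VPN-point part, assuming WireGuard (the subnet, port, and file names are arbitrary choices, not anything RamNode-specific):

```
# On the cheap VPS: generate keys and a minimal hub config.
umask 077
wg genkey | tee server.key | wg pubkey > server.pub

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24          # VPN subnet, arbitrary choice
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# One block per home machine that should reach the hub.
PublicKey = <peer public key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0
```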
While it's not as pretty, the biggest win for me was to move all of the VM disks onto a Synology that I'm mounting via iSCSI. It tends to work really well.
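For anyone curious, on a Linux-based host that part usually looks something like this with open-iscsi (the Synology's IP and the target IQN below are made up):

```
# Discover the iSCSI targets the Synology exposes (IP is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (IQN is illustrative).
iscsiadm -m node -T iqn.2000-01.com.synology:nas.vmstore \
  -p 192.168.1.50 --login

# The LUN then appears as a local block device (e.g. /dev/sdb) that
# the hypervisor can use for VM disks.
```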
I found this trick to make Vimeo videos "seamless", just like GIFs back in the day. It's surprisingly not distracting, yet super vivid :)