Hacker News
Building an ARM64 home server the hard way (jforberg.se)
258 points by jforberg on Feb 19, 2023 | 118 comments



For the Pine family of SBCs I highly recommend installing Tow-Boot - https://tow-boot.org/ - on the SPI flash memory to allow yourself much better boot options, including booting directly from NVMe so you don't need to keep the MicroSD card plugged-in.


Alternatively one could go for Barebox; it's neat and standardized like Tow-Boot, but it's not phone-focused and works on many more boards.


Does this have a mechanism for automatic redundant OS upgrades? I just built a Yocto-based distribution for a board based on rk3399, but the currently integrated U-Boot is not in the best state. This could be a great alternative if it really is a bit easier to integrate/build upon.


It's just a bootloader, with a few tricks up its sleeve. All it does is let you boot from any storage medium you want (most notably NVMe) instead of being restricted to the hardcoded sequence of the boot ROM.


I understand that part. What I'm talking about specifically is part of a bootloader, see for example this page in the documentation of SWUpdate: https://sbabic.github.io/swupdate/bootloader_interface.html

By adding communication between the OS and the bootloader it's possible to implement redundant updates for whole partitions (specifically A/B-updates with a boot counter). U-Boot supports this (depending on the state of the vendor-provided fork better or worse), and Tow-Boot seems to be based on U-Boot.
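For context, a minimal sketch of how U-Boot's boot counter ties into an A/B scheme. The variable names (bootcount, bootlimit, altbootcmd, upgrade_available) are the stock ones from U-Boot's CONFIG_BOOTCOUNT_LIMIT support; the slot commands are hypothetical placeholders:

```sh
# U-Boot environment sketch (assumes CONFIG_BOOTCOUNT_LIMIT is enabled;
# boot_a/boot_b are made-up commands standing in for the two slots)
setenv bootlimit 3              # after 3 failed boots, run altbootcmd
setenv bootcmd 'run boot_a'     # normal path: boot slot A
setenv altbootcmd 'run boot_b'  # fallback path: boot slot B
setenv upgrade_available 1      # tells U-Boot to actually count boots
saveenv

# On the OS side, a successful boot (or SWUpdate) resets the counter:
#   fw_setenv bootcount 0
```

Whether an opinionated build like Tow-Boot has this compiled in is exactly the question, since it all hinges on the bootcount config options and an environment the OS can write to.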


One problem with opinionated builds of U-Boot is that you'll have more work figuring out what's enabled in its config. Configure and build your own if you want this kind of control.


The Tow-Boot devs go out of their way to say they are offering a boring PC boot loader experience, so I wouldn't expect any advanced features other than booting from devices.


PC boot loaders have been able to fall back to previous configurations on boot failure ("automatic redundant OS upgrades") for a long time[1], so that's not a valid excuse.

https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/

https://www.gnu.org/software/grub/manual/grub/html_node/fall...

https://www.gnu.org/software/grub/manual/legacy/Booting-fall...

[1]: Minimum about 23 years, from personal use in creating a product with A/B-root partitions.


I don't know if it retained SWUpdate functionality, or if it's drop-in compatible with U-Boot's.


It is a bootloader bundled with the ARM64 firmware. That means you don't need to add the firmware to each OS that you might want to use, so it is easier to switch between them.


Yes, I considered that and agree that it would have been nicer! I didn't pursue it for this project because my jury-rigged SD boot was working fine and I wanted to move on to other parts of the system.


SD cards are pretty unreliable so it might be good to take it out of the loop at some point. Most field failures I’ve seen for products I develop have been related to SD cards dying during power loss or falling out due to vibration.


Does the U-Boot in the SPI-NOR not support booting from NVMe? It might also be possible to patch that in from mainline if it exists there. You can also often provide a "boot script" on the VFAT partition that overrides the boot config in non-volatile memory. This was something Freescale did with the i.MX6 that became a relatively standard thing for vendor-supplied U-Boot.
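For reference, the override usually works like this: U-Boot's distro boot logic looks for a compiled boot.scr on the first FAT partition and runs it in place of the built-in commands. A sketch, where the device numbers, load addresses and root device are assumptions for a RockPro64-like board:

```sh
# boot.cmd -- compiled into boot.scr with:
#   mkimage -A arm64 -O linux -T script -C none -d boot.cmd boot.scr
setenv bootargs root=/dev/nvme0n1p1 rw
load mmc 0:1 ${kernel_addr_r} Image
load mmc 0:1 ${fdt_addr_r} rk3399-rockpro64.dtb
booti ${kernel_addr_r} - ${fdt_addr_r}
```

The kernel and initramfs still load from a medium the boot ROM can reach (SD/eMMC here), but the root filesystem can live anywhere the kernel supports, NVMe included.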


The default setup does not support it for any Rockchip SoC up to and including the RK3399, which is why everyone goes for tow-boot on SPI for their Pine devices. The SoCs have a hardcoded boot sequence which includes SPI, SD, USB and eMMC, but not PCIe. It is however available since the RK3588 - e.g. the Radxa Rock5B boots from NVMe right off the bat.


The hard way? Copy bootloader from somewhere, partition, extract readymade rootfs, setup bootloader, reboot. Sounds more like the Arch way. :)

The only ARM specific thing here is probably the need to use a DTB.

This just shows that manual Linux installation on random ARM board is not more complex than on x86_64. Perhaps even simpler, since you're just extracting a pre-made rootfs instead of using a package manager during installation.


Right? With that article title you figure the author had written his own bootloader in Typescript then transpiled to Rust, ultimately cross compiling to his Arm64 target from a homebrewed x86 CPU fabricated in his garage.

For real though, what the author did is much harder than downloading and booting an official OS image from Pine. The article also documents all the successful steps and skips any missteps or debugging, making the process look very simple (not a criticism, I thought it was an excellent read). Maybe those missteps took place during previous projects, but suffice to say that you don’t make booting a non-standard image look easy without expending significant effort, at some time.


Writing your own bootloader is an option if you want to play hard level, sure. I did follow that path, at one time https://xnux.eu/p-boot/ for another Pine64 device. ;)

Anyway, I didn't say the things you're criticizing my reply for.


Experts sometimes forget being beginners.


"Hard" is relative, sure. But a regular Arch Linux installation is harder than this in a sense, and the author is an Arch Linux user on multiple devices already, so not a beginner.

If I go through the article and list the arm64 SBC specific pieces, it's just copying the bootloader from the Manjaro image for the RockPro64 and maybe different names of files in /dev for some block devices.

Even configuring U-Boot is done the same way you do it on regular Arch, if you use extlinux on x86_64, which many do/did.
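The extlinux config really is almost the same on both architectures; the FDT line is the ARM-specific part. A sketch (the DTB name matches the RockPro64, the other paths are assumptions about the install):

```sh
# /boot/extlinux/extlinux.conf -- read by U-Boot's distro boot support
DEFAULT arch
TIMEOUT 30

LABEL arch
    LINUX /Image
    INITRD /initramfs-linux.img
    FDT /dtbs/rockchip/rk3399-rockpro64.dtb
    APPEND root=/dev/nvme0n1p1 rw
```

On x86_64 you would simply drop the FDT line (and use vmlinuz instead of the uncompressed Image).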


DTB = Device Tree Blob?

EDIT: Probably yes. I see this term appears in the article.


I was hoping for something more akin to what the word "building" usually implies - something a bit more physical. If nothing else, the author made a box for it ;)

The RockPro64 is a good board with lots of expandability. I run NetBSD/aarch64eb on one to build all of NetBSD's pkgsrc packages (26,000). It performs well with an m.2 NVMe, and has been rock solid.

Of course, for anyone using the RockPro64, if you plan to do lots of processor intensive work like compiling, you'll either need a very large heat sink (no Flirc cases for these, unfortunately) or you'll need active cooling. Without good cooling, it'll throttle.

https://klos.com/~john/rockpro64.jpeg


Yes, I was hoping to learn about inexpensive aarch64 server hardware solutions (ready made).


I was thinking maybe they found a salvage sale for old oracle or aws aarch64 servers. I'd love one of those. A second best would be a jetson agx probably.


This seems to be low-power entry-level stuff. I'm curious, is there anything more serious - but less serious than some proper rack server hardware?

Currently I'm running a home server on an EPYC 3251 mini-ITX board, which I use to route 1GbE WAN and 10GbE LAN, serve as a NAS, and run a bunch of services, all without it breaking a sweat, and with plenty of headroom should I want to run more stuff there. It sits on my desk in a small-ish cubic Supermicro chassis and barely makes any noise beyond the normal HDD screeching. And it's an entry-level server-oriented board so I have proper LOM without having to throw in an IPKVM.

I would fancy an ARMv8 machine - just for the fun of it (and possibly better performance per watt) - but I don't think I can get anything comparable from RPi-level hardware. The next "step" I see when searching for ARM servers are those fan-screaming behemoths you put in a rack in a proper server room, which is something I dread for a homelab, as I don't have a dedicated room for it. I've had the pleasure of WfH involving setting up some PowerEdges in my living room; it was fun but extremely noisy. So I wonder, where is the middle ground?


At this point, if you want a quiet, high-performance ARM system for home, Apple is worth a look, even if that means your "server" storage is plugged into Thunderbolt (and keeping in mind the Apple premium for RAM and internal storage).


I'm interested too and have pretty much come to the same conclusion as you. There are workstations like this: https://store.avantek.co.uk/ampere-emag-64bit-arm-workstatio...

It's a middle ground between a Raspberry Pi and the 128-core machines they sell to cloud providers. For the money, you can probably get more work done with an amd64 workstation, unless you're paying someone to generate your electricity by riding a bicycle or something. (Cooling and power matter to cloud-scale datacenters, but not really for one computer in a room that you use to generate your income.)


Goodness gracious, five grand, only 3 drive bays, and that's without drives? My whole AMD machine (motherboard, RAM, PSU, NVMe SSD, case - basically everything except for the HDDs for the bulk storage) cost me slightly less than $2k, and I've surely overprovisioned.


I set up Proxmox on one of my Pi 4s (using an external SSD) and am quite happy with it. Runs four different LXC containers (one of which is a public-facing ActivityPub server for testing) and gives me zero headaches, so am currently looking for a beefier alternative that has a proper M.2 slot and at least 16GB of RAM...

I do wish that alternative boards had better OS support (especially the Rockchip ones, which tend to have weird kernel builds, etc),


The beefier alternative you're looking for is possibly the Radxa Rock5B, which has proper M.2 slot and comes in a 16GB version. Hardware support for it isn't entirely mainlined yet, but a lot of development is happening week by week. Debian runs well on it.


Or any of the Rockchip 3588 boards. Better CPU, better PCIe, some have dual 2.5GbE, 4, 8 or 16GB RAM, etc. I got the NanoPi R6S.

Seems silly to save $25 on the slower CPU and half the RAM.


The Rock5B is an RK3588 SBC, with up to 16GB RAM. Might not be the most "pimped" one, though.


I moved from a Pi setup to a second-hand Lenovo M900 Tiny PC, with 24GB of RAM and an NVMe drive, and it works great. The power efficiency is obviously not as good as the Pi but it's a reasonable trade-off.


Tiny PCs are a good option, too - especially ones based on mobile processors. I just built up a barebone Asus PN50 (Ryzen 7 4700U equipped) and I'm quite impressed at what this little box can do. The 4700U is kind of weird - it has SMT disabled, so it kind of slots between the usual Ryzen 5s and 7s. No matter though - builds on it are quite fast. But it's still x86...


It’s a bit of a sledgehammer approach, but a NUC with Proxmox is pretty excellent. You can even use 10GbE via Thunderbolt (or the PCIe slot on the larger NUCs).


This is the way. An x86 NUC/Mini PC/Thin client smokes the ARM SBC market.


Dunno, got some numbers to back that up?

So far I'm pretty impressed with my RK3588 board, 8GB RAM, and dual 2.5GbE.

It's a bit more than half as fast as my quad-core/8-thread Xeon desktop, but that's about where I'd expect one of the cheaper x86 NUC/Mini PC/Thin clients to land.


Are there any boxes without a graphics card or at least with a primitive one? So that they would be more economical in case you only need them for the role of a server?


None of them have a dedicated GPU, just a low-powered one built into the CPU. Using a CPU without an integrated GPU would only shave a few dollars off the cost, so I don't think that is a common choice.


The weird Extreme range (not really NUCs in my view) has a PCIe slot, seemingly for a large graphics card. They'll take an SFP card though, and that makes for an excellent, though probably the most expensive, mini server one could buy.


Pretty much all of the small boxes just use the iGPU.


And that iGPU is great for a Plex server for transcoding, especially if it has QuickSync for hardware acceleration.


All the small Nucs just use an iGPU. Boxes like the Nuc 8 have a bit of a cult following as the iGPU is surprisingly powerful. It’s probably beaten by newer models now. The newer ones have lots of cores, and when fully loaded with memory make a handy little box for VMs.


> has a proper M.2 slot and at least 16GB of RAM...

Orange Pi 5 16GB RK3588S (8-core 64-bit, 2.4GHz), PCIe module external WiFi+BT, SSD, Gigabit Ethernet single board computer, runs Android/Debian OS (M.2 PCIe 2.0!)

https://www.aliexpress.com/store/group/OPI-5/1553371_4000000...

( via https://news.ycombinator.com/item?id=33739176 )

Review:

"""

"Orange Pi 5 Review – Powerful, No WiFi" https://jamesachambers.com/orange-pi-5-review/

Pros:

- 4 GB and 8 GB RAM variants cost under $100

- M.2 slot supports high speed NVMe storage

- RAM options from 4 GB all the way up to 32 GB available

Cons

- No WiFi or Bluetooth included (requires either adapter for the M.2 slot or a USB adapter to get WiFi/Bluetooth capabilities)

- No eMMC option

- PCIe speeds are limited to 500MB/s (PCIe 2.0, benchmarks show closer to 250MB/s write or PCIe 1.0 performance) — this is slower than SATA3

"""


Is there an official Proxmox build for the Raspberry Pi or are you using a third party?


Most likely Pimox


Yep, Pimox.


Somewhat tangential but is Arch really suitable for servers? Most Arch users I know still prefer Debian for servers. Yet I know at least one company that uses them for servers, which surprised me. I know the Arch breaking meme is overblown but for a server I'd still want something with less moving parts.


That big release upgrades provide more hassle than benefit was also observed by Google, hence they switched to rolling release. The reason Debian breaks more at release changes, though, is probably more due to them patching and modifying software, which they sometimes have to change/drop with a new software release. Or you have a hard time deploying a newer software version on top of the old binaries. Arch follows upstream very closely, which maybe increases how often things could/have to be reconfigured, but that still mostly means running vimdiff against config.conf and config.conf.pac{new,save}. Sure, Debian is more reliable if you don't really want to change deployed systems, but if your company strategy is to keep up with upstream, Arch may work better than its reputation suggests.

And if you need stability at some point, you could just set the Arch Linux Archive at a specific date as the package mirror on your cache server.
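The pacnew/pacsave dance is easy to script, by the way; pacman-contrib ships `pacdiff` for exactly this. A tiny sketch of the discovery half (the helper name and directory argument are made up):

```shell
# List config files with pending .pacnew/.pacsave counterparts.
# A minimal sketch of what pacman-contrib's `pacdiff` automates;
# list_pacfiles and its directory argument are made-up names.
list_pacfiles() {
    find "${1:-/etc}" \( -name '*.pacnew' -o -name '*.pacsave' \) 2>/dev/null
}

# Typical follow-up, merging interactively:
#   vimdiff /etc/pacman.conf /etc/pacman.conf.pacnew
```

After merging, you delete the .pacnew file so the next upgrade starts clean.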


If this was going to be used for some "big and serious" application, maybe different choices would have been made. Hopefully it was clear from the post that my goals here were the exact opposite!

In my own anecdotal experience of running a hobby server on Arch for several years, I haven't experienced anything to make me think the distro is unsuitable for server work.


I've used Debian for almost twenty years and Arch for over half that; despite being comfortable with Arch, I would not sleep well at night if anything mission-critical depended on it. Install a rig with Debian on it and touch nothing, and it will last longer than you.


No, it will not last longer than you. If you want security updates, you have to do major distro upgrades when support for the previous Debian release is over. And those are somewhat tricky.

You'll just have your upgrade dance clustered into a single event once every N years, instead of spread out randomly based on major releases of the software your server depends on.


> Somewhat tangential but is Arch really suitable for servers?

I think we all use the software we choose to use in order to use the software we choose to use. When we build a cluster, it's often done in order to build a cluster.


Depending on what you use it for, you'll have to babysit your servers more and you'll not be able to do it on your own timeline.

E.g. if a major PostgreSQL update comes, you'll have to upgrade your DB cluster very soon. If a major update to some program requires configuration changes, or if a scripting language has deprecations that you've ignored for years, etc., you'll run into trouble, too.

I've been running a few Arch Linux servers for ~5 years and it's been quite pleasant. Being able to use the latest features in various programs or scripting languages is a very nice benefit.


For pine stuff I had more issues with hardware than software. So in the end you’d still have to babysit a bit even if using fedora hat enterprise serious edition.


Run whatever on the SD card at its most basic. Run LXC and mount storage on /var/lib/lxc.

Well… at least that's what I wish I could say.

Truth is I'm using Manjaro (Arch-based) on a similar board, and then one day after an upgrade they just decided to migrate from eth0 to the current naming scheme based on the NIC driver. Had to plug in a monitor and keyboard to fix the situation. Home stuff, so it's all good in my case.


Arch is totally fine for a home server.


> The total cost comes to around €350

A few weeks ago I bought a used Intel NUC7 with a 7th-gen Core i5… for 150€.

It came with a 120gb ssd, 4gb ram and a power brick.

I still don’t see the value in these SBCs used as home servers.


NUC is a single board computer :)

If you think the price is high, I would point out that the SSD I used cost €200 new when I purchased it back in mid 2022. A used 120 GB SSD by contrast can be had for maybe €10 which alone would explain the difference in cost.

Now if 120 GB is enough for your application, that's a good value so more power to you.


> NUC is a single board computer :)

My laptop is also a single board computer, technically, so what?


The Lenovo "Tiny" machines are also cheap on eBay, even ones with a Ryzen 5 and 8GB RAM for ~$140 USD.

Though there are reasons to specifically want an ARM64 machine for builds, etc.


I just want something small with ECC RAM


Not cheap, but does support ECC:

https://morefine.com/products/morefine-s500-mini-pc

The FAQ question reads a bit funny:

"Q19:Support ECC RAM?

YES. S500+ Support ECC RAM,

But compatibility requirements are higher, and cannot be used casually."

Also, these AsRock products:

https://www.asrockind.com/en-gb/4X4%20BOX-V1000M

https://www.asrockind.com/en-gb/4X4%20BOX-R1000M


Thank you for the links, that morefine one seems really nice, would need to get 2.5Gb networking gear...

I currently have a single 16 core/64 GB ECC Ryzen tower with ZFS that I'd like to break into a ZFS ARM ECC NFS server with a separate small form factor Ryzen or ARM compute cluster.


Honeycomb? https://www.solid-run.com/arm-servers-networking-platforms/h... I think they also have a lite version supporting ECC…


It's a nice write-up, but as much as I love whole-drive filesystems, in this case I would have used a partition table and a fixed partition for the swap space. Not only is it (slightly) more efficient, it's also simpler than using a btrfs subvolume and remembering to +C the swapfile.
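The +C step referred to here is the one that trips people up: btrfs swapfiles must not be copy-on-write, and chattr +C only takes effect on an empty file. A sketch of the procedure (requires root and a btrfs filesystem; the /swap path and 4G size are assumptions):

```shell
# Sketch: create a btrfs-safe swapfile. Order matters -- the file must
# be empty when CoW is disabled, hence truncate before chattr.
make_btrfs_swapfile() {
    truncate -s 0 /swap/swapfile   # create the file empty
    chattr +C /swap/swapfile       # disable CoW while it is still empty
    fallocate -l 4G /swap/swapfile # allocate the actual space
    chmod 600 /swap/swapfile
    mkswap /swap/swapfile
    swapon /swap/swapfile
}
```

A dedicated swap partition skips all of this, which is the parent's point.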

I think the general approach is very good and could probably be used for the VisionFive 2 RISC-V SBC as well.


Thank you for sharing these notes.

I've been impatient for ARM64 hardware to become easy to use for home servers. I've got an RPi 4 (at retail prices, no less!) and it is quite good but I want more options. The previous stories I've read of RockPro64 have been much worse, it was nice to read this went relatively easily.


Don't know how you missed that but Pine64 has official EU store: https://pine64eu.com/


I had to check, but ALL products in that store are "out of stock". It does not look like a useful store at the moment.


Sure, I just wanted to mention that it exists, not to make any claims about its usefulness. Yes, they run out of stock on a regular basis, but they usually have re-stocking dates, so you know what to expect. Their main store is often out of stock on many items as well. That's just how they operate.


Maybe the store was recently opened? This project is from last year; I just finished the write-up now.


Unfortunately it doesn't have ECC, and uses a buggy file system, BTRFS.


BTRFS is buggy?


Yes, especially for external drives. See: https://lore.kernel.org/all/20200326013007.GS15123@merlins.o... ... and many other problems.


I was literally recommended to use BTRFS because it has been battle tested and proven. I should probably have done more research but given its maturity (we're in 2023 after all), I feel a bit let down.


BTRFS is far better designed than other recent Linux filesystems. It's just that Linux always was and always will be very, very buggy. If you want fewer bugs, use the oldest possible FS that is still usable. That's probably ext2.

Most problems with btrfs come down to it failing to mount in many situations with no automated fsck to fix it, requiring manual intervention; in fact, running fsck on btrfs is considered a very bad thing. This makes it OK for desktops, but not at all suitable for headless servers and other unsupervised machines.

Speaking of bad design, F2FS for example, a filesystem designed for flash drives, keeps both primary and backup superblocks in the same flash erase block. If that block gets corrupted the entire fs is lost.


That's interesting. I'd always thought ext3 would at least be safer than ext2, seeing that ext2 isn't a journaling filesystem. What happens if you abruptly shut it down, e.g., a thousand times?


I won't use anything journaled on SSDs.

What happens? I don't know... fsck would run automatically on reboot, fix the errors and recover part of the last file written into /lost+found/.

But why would a server abruptly shut down 1000 times?? I already said btrfs is good enough for desktops and other interactive/supervised systems. And data loss on servers is recovered from backups, not journals!


Restoring from backups can be tedious and time consuming. For example, if you have a database and PITR system, you may have to replay WALs until you get the point in time where the server failed.

XFS is known to be extremely robust for servers - and it's journaled. Most servers nowadays are SSD.


You could do this cheaper (and with more RAM) using a Raspberry Pi 4 with a USB NVMe SSD; it's got gigabit Ethernet and is arm64. Sure, you have two fewer cores than this solution, but it's more likely to be supported over time, and once you get the SD card out of the mix the I/O is solid. I've been surprised by how much the SD card throughput was limiting the experience.

I run arch Linux arm on mine and it’s a fantastic little device. I wonder if these boards are way faster or just more of the same. I guess the pcie expansion makes this more extensible.


> "Looking at the available offerings [...]"

The slight problem with the deservedly often-recommended RP4 is that for most people it's so hard to come by it effectively doesn't exist.


> cheaper

No, you can't, unless you know of some source of retail priced Raspberry Pi 4s.

In some basic tests (compiling, ffmpeg), the Pi 4 and the Rock Pro 64 are within a small percentage of difference in performance.


Indeed, but you get the much nicer/faster/better PCIe and faster CPUs in the RK3588 for another $25, with the option to get double the RAM of the top RPi 4.


RK3399 is completely FOSS, from bootloader, to firmware, to Linux drivers and mainlined. It will be supported as long as someone wants to run software on it.


Raspberry Pi, despite not being fully FOSS, has a much better community supporting it. I’d bet it will last longer than the pine boards. Watched Jeff Geerling’s videos on it and he had trouble getting the pine board to work, whereas the Raspberry Pi worked first try. The only images for it were some images a guy made and put on the pine wiki.


That's the great thing about mainline support, you ignore random trashy images and use generic distro ones.


Any SSD will do really since it'll be bottlenecked by 5GBit/s USB.


I have this same board running FreeBSD as a NAS. It has been a pretty great experience overall. My main gripe has been that I would really like to be able to run an NVMe drive for a cache at the same time as the SATA ports, and I haven't found any cost-effective PCIe 2.0 x4 switches that I could use for that purpose. There is an x1 switch for use with the RPi 4, but it is a shame to lose all the bandwidth.

I'm looking forward to someone making a NAS board on the new RK3588 since that has enough connectivity for everything I want.


The issue with FreeBSD on the RockPro64 and similar boards is that the scheduler doesn't support big.LITTLE configurations, so the OS doesn't distinguish between the two A72 cores and the weaker A53s.

https://wiki.freebsd.org/arm/RockChip#Known_issues


It’s true, but performance has honestly been great. It’s still quite responsive over ssh under load.


I've been hosting some websites on a RockPro64 board running armbian for 2 years. I'm quite happy with it.

I recommend using an SSD and not an SD card though.


I had issues with SD cards that may be related to durability. A lot of it was mitigated by moving the /var/log folder to a tmpfs (if you don't care much for the logs, or are using something to ship them to another machine, you really don't mind them not being written to durable storage).
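The tmpfs move is a one-liner; a sketch (the 64m size cap is an arbitrary assumption, and logs are lost on every reboot):

```sh
# /etc/fstab -- keep log writes off the SD card entirely
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
```

If you run systemd-journald, setting Storage=volatile in journald.conf achieves much the same thing for the journal.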


What's with arm sbc users not setting up any disk encryption? I have yet to see a guide / tutorial / experience reports that do not totally skip the subject.



I had set up mine with TinySSH because last time I checked Dropbear only supported RSA keys.

My point was not that I was looking for a guide, but I have the feeling that the subject is totally ignored by most of the ARM SBC community.

I treat the risk of being a victim of burglary as a when, not an if. I totally don't want anyone to have easy access to my data. If an SBC is used for server purposes and not for embedded/domotic use, it will likely contain data and/or secrets.
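For what it's worth, the usual recipe on Debian-family systems is LUKS on the root partition plus a small SSH server in the initramfs (dropbear-initramfs, or TinySSH as above) so a headless box can be unlocked remotely. A sketch, where the device name is an assumption:

```sh
# /etc/crypttab -- map the encrypted root partition at boot
cryptroot  /dev/mmcblk0p2  none  luks

# With dropbear-initramfs (or tinyssh-initramfs) installed, boot pauses
# in the initramfs waiting for the passphrase; from another machine:
#   ssh root@server cryptroot-unlock
```

The boot partition with the kernel, DTB, and initramfs stays unencrypted, which is why the subject gets hand-waved in most SBC guides: the bootloader chain can't unlock LUKS itself.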


How is btrfs nowadays? I have PTSD from it and stick to ZFS now.


Still as shitty as it ever was. I experienced some data loss recently on a btrfs fedora install.


Synology and Fedora use btrfs by default. I've been using it for years and have had zero issues.


I tried the really hard way with a RK3288 (Asus TinkerBoard).

After about a month I had a barely working U-Boot built from unpatched official sources.

After two months I still didn't have a bootable kernel built from unpatched official sources.

- with power regulator drivers, the board powers itself off while booting

- without power regulator drivers, it boots the kernel, but there's no power to usb, ethernet and wifi.

What I learned: To stay away from Rockchip.


Do it the easy way with Oracle Cloud's free tier and get Arm Ampere A1 CPUs, 24GB RAM, and 10TB egress, with the hard part being creating an account, which requires a credit card.

https://www.oracle.com/cloud/free/


This sounds awesome but the idea of giving Oracle my credit card is terrifying.


That U-Boot is pretty old, but not as old as the one on my Armada 8040 boards... they can't even boot a modern kernel properly without compiling U-Boot and ATF and upgrading the firmware.


It's possible to just build the up-to-date one. Support for rockpro64-rk3399 is mainlined in U-Boot. Same for TF-A.


Got myself a Pi, a plastic box, a memory card, a big USB key, wrote my own SMTP server in super-lean no-libc C (C89 with a benign bit of C99/C11), and put Devuan GNU/Linux on it (NOT Debian with its toxic trashy bloat and kludge of systemd).

I did the same thing with a minimal HTTP server to serve static content, and maybe dynamic content in the future: a noscript/basic (X)HTML HTTP server for maps (which uses OpenStreetMap tiles), which provides proper map display in links2, with a font not too big, and with harmless HTML tables.

Configured the "server" to restart everything if something is detected missing (you know, cron with sh scripts and certainly not bash scripts).

It has been running for years. I have never had to modify the code of my SMTP server yet (and I run IPv4 and native IPv6, provided by default to millions of clients by my ISP; I think that has been the case for more than a decade, though I may be wrong about that one). I am kind of surprised it was not already pwned by some trashy hackers.

The main issue: Spamhaus block lists. They are hostile to all self-hosted people, and they don't provide an IRC server or a non-blocked email address to request removal from their lists (which are unfortunately used by too many open-source-related companies/projects, which is a mistake). Basically, they force people to use one of the Google/Apple super-heavy JavaScripted web engines (no better than the default security checks from Cloudflare). Yes, those people are seriously worse than spam itself; I hope they will fix that (they are a shady Swiss-Andorran company...).

Did you know you cannot send an email to Red Hat (IBM now) people using an IPv6 SMTP server? Yeah...

And it is coming: I'll move everything to a similar RISC-V mini-computer because I am aware of the super-toxic IP tied to the ARM64 ISA (same for x86_64). That will be the first step; the second step will be to hand-compile (= assembly programming with near zero SDK) all of them and forget this too-complex C syntax and those horribly massive and complex compilers, which are not stable in the long run (thanks, ISO, GCC extensions and C++). And with all that, I would not be surprised to port a minimal IPv6 stack to 64-bit RISC-V assembly... and maybe more.


> The main issue: spamhaus block lists, they are hostile to all self-hosted people

Allow me to correct that for you.

There is nothing wrong with spamhaus. They provide one of the best anti-spam options amongst all the commercial providers.

Spamhaus have many lists, I suspect the one you are referring to is the PBL, in their words "DNSBL database of end-user IP address ranges which should not be delivering unauthenticated SMTP email to any Internet mail server except those provided for specifically by an ISP for that customer's use.".

We are in 2023, I think it is beyond any sort of doubt by now that a significant proportion of spam and phishing mails originates from home internet connections because people can't be bothered to keep their computers up to date and virus free, so they become part of a botnet.

So the fact of the matter is that even if Spamhaus PBL did not exist, someone else (or the MX operators themselves) would very soon fill their place by blocking the very same ranges.

Added to which, most home ISPs don't even provide reverse DNS ... so again, even if Spamhaus PBL did not exist, you would likely STILL find yourself being blocked by other measures that most sensible sysadmins implement on their servers.

Hell, many home ISPs just block outbound port 25 these days anyway !


Wrong, sysadmins should use greylisting together with such block lists.
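Greylisting alongside a DNSBL is a small config change on most MTAs. A Postfix sketch, assuming the postgrey policy daemon is running on its default port 10023 (the restriction order shown is one reasonable choice, not the only one):

```sh
# /etc/postfix/main.cf -- consult the DNSBL, then temp-fail
# first-contact senders via postgrey instead of rejecting outright
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_rbl_client zen.spamhaus.org,
    check_policy_service inet:127.0.0.1:10023
```

Legitimate servers retry after the greylist's temporary failure; most botnet spam never does.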

Spamhaus provides a way to be removed from this list, but does not provide an IRC server, only a horrible JavaScript-only web chat; they should fix that. Of course, they provide an email address to request removal from their block list... which is itself behind their block lists.

Since Spamhaus is "shadily" hidden in Andorra and Switzerland, my lawyer cannot do much, but I guess I should go after the sysadmins in the EU/US using those block lists without greylisting. I haven't needed to yet, since most of the time there is either somebody with an SMTP server not using block lists (not even greylisting) or even an IRC server.

From a technical point of view, and specific to my ISP in my country (I did not check the other ISPs), putting all domestic ranges of my ISP in their block list is textbook abusive... Spamhaus is doing a really, really bad job. But I'll keep that for court if I need to; I may go to EU regulatory orgs directly, though, well, only if I am pissed off enough (and that's very hard).


You're trying to argue sense with someone who thinks they can sue someone for greylisting, and who is screeching about insecurities in GUI browsers and being "forced" to use an Apple or Google browser:

> "If you want to make spamhaus remove your IP from their block list, you must engage in a chat working only with google/apple javascript browsers (I am a noscript/basic (x)html user)."

Amazing that I've been on the internet for several decades and never once had my shit jacked (due to a modern GUI browser or otherwise.) The way people like grandparent commenter make it sound, the split second you use a modern browser, you'll be pwned...

Edit: https://news.ycombinator.com/item?id=34700126

> webkit/blink/geeko are financed (and steered) by the same people: blackrock/vanguard = apple = alphabet(=google) = microsoft = starbucks = etc.

loooooooooooooooool


> and never once had my shit jacked

Not that you know of, anyway.


Your efforts are commendable, but you're not correct about Spamhaus and being forced to use Google / Apple.

For starters, nobody is ever forced to use a web browser with email. I'm OK with the fact that pine will parse some of the HTML so I don't see all the silly tags in most email, but it will never follow a link, at least.

If your IPv4 and/or IPv6 is on a Spamhaus list and you can't get it / them removed, likely because you're in a pool of residential IPs, and likely in part because you can't control the PTR, then you can always smarthost through any reasonable provider.

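For anyone wanting to try the smarthost route: in Postfix this is a few lines of main.cf. A minimal sketch, assuming your provider offers authenticated submission on port 587; the hostname and credential file path are placeholders:

```
# /etc/postfix/main.cf -- relay all outbound mail through the provider
# instead of delivering directly from the residential IP on port 25
relayhost = [smtp.example-provider.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

Mail then leaves from the provider's well-reputed IPs, sidestepping the PBL and PTR problems entirely.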
I've been self-hosting email for a quarter of a century, and I'd never blame anyone else if I tried to send email from a residential pool of IPs and it didn't work.

Not sure what this has to do with setting up a nice little ARM server, besides your observation that the ARM architecture is licensed, but here we are :)


If you want to make Spamhaus remove your IP from their block list, you must engage in a chat that only works with Google/Apple JavaScript browsers (I am a noscript/basic (X)HTML user). Where is the IRC server? They provide an email address for block list removal... which itself rejects mail from SMTP servers on their block lists (without even greylisting).

Those guys are bad, really bad. Hope they grow up and improve.

Yeah, once I have finished or am further along with other projects, I'll get rid of those pesky arm64 chips with their toxic IP (that said, it is the same for x86_64). I'll reuse my C code as a stepping stone to make the jump. One more step towards real digital freedom.


I wrote what I wrote in an unclear way:

"you're not correct about Spamhaus and being forced to use Google / Apple"

What I meant is that you weren't necessarily correct about Spamhaus, nor correct about being forced to use Google / Apple (which I thought was a reference to the fact that 98% of the world use Google's browsers and Gmail and/or Apple's Safari and/or Mail).

I see now you were referring to using a mainstream browser to communicate with Spamhaus. Yes, that's uncool. And yes, I wholly agree that the email address to request unblocking should not be filtered like it is.

Sometimes we worry so much about the symptom that we forget about the problem. Perhaps it'd be worthwhile to just ask someone else to forward an email requesting removal to Spamhaus' removal address.


Come on, you know that the whole point is to be independent from gatekeepers, and walking towards real digital freedom.

Spamhaus is doing a really bad job. They just need to grow up and improve.


Of course, but giving up and just accepting that you're on their blocklists does more harm to you in the long run, in my opinion, than just asking someone to forward an email. If that's what you want, then of course that's entirely up to you. But considering the complete lack of action network admins take when you report abuse and illegal activity, you can hardly blame people for taking the easy way out and just blocking all the low-hanging fruit.


Check this out - https://www.kickstarter.com/projects/uptimelab/compute-blade - I am waiting for this to use with my RPi Compute Module 4.


Get a 16GB Rock 5B board.


I probably would have if it had been available at the time. This project was actually done last spring/summer.


This article was a joy to read, thank you.


Any recommendations for building a mesh network? Is it worth worrying about?


Hi, I'm looking into building a mesh network. Is there a way to do it with these boards?



