cantagi's comments

act is brilliant - it really helps you iterate on GitHub or Gitea Actions locally.


I have a problem with the GitHub Actions documentation. There is a lot of it, but it feels as though it was written from a "product" perspective, to explain how to use the product.

None of it usefully explains how GHA works from the ground up, in a way that would help me solve problems I encounter.


act is great. I use it to iterate on actions locally (I self-host Gitea Actions, which uses act under the hood, so it's essentially identical to GitHub Actions).


schmoyoho are incredible at making these songs, and so quickly after each event. I'm amazed they usually only get tens of thousands of views, with the odd exception over 1M.


UK resident here. The original version gave me the push I needed to get an RPi 2B+, subscribe to a VPN, and use it as a WiFi AP that routes all the traffic from my house through the VPN.

Can you trust a VPN provider that says it doesn't log? Not entirely, but more than an ISP that might be legally required to log at any moment without you ever finding out.

Also, I will never start a tech company in the UK, because I will never put myself in a position where I am forced to add backdoors to a product.


Do you exit the vpn in the UK, or somewhere else?


I have 3 RPi 4s running my entire home network.

One is a VPN router and a WiFi AP, and it also runs Uptime Kuma. I need this one to be reliable and rarely touch it except to improve its reliability. It runs:

- OpenVPN
- hostapd
- Uptime Kuma (in Docker)
- A microservice invoked from Uptime Kuma that monitors connectivity to my ISP's router (in Docker) - sketched below
- nginx (not in Docker), reverse proxying to Uptime Kuma
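Conceptually, that connectivity-check microservice is just something like this sketch (the router IP and listen port are placeholders, not my actual config) - Uptime Kuma polls it over HTTP and treats anything other than a 2xx as down:

    # Minimal sketch of a connectivity-check endpoint for Uptime Kuma to poll.
    # The router IP and listen port are placeholders, not my real setup.
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROUTER_IP = "192.168.1.1"  # placeholder for the ISP router's LAN address

    class ConnectivityHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # One ping with a 2-second timeout; exit code 0 means the router answered.
            ok = subprocess.run(
                ["ping", "-c", "1", "-W", "2", ROUTER_IP],
                stdout=subprocess.DEVNULL,
            ).returncode == 0
            self.send_response(200 if ok else 503)
            self.end_headers()
            self.wfile.write(b"up" if ok else b"down")

    HTTPServer(("0.0.0.0", 8080), ConnectivityHandler).serve_forever()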

The second acts as a NAS and has a RAID array consisting of disks plugged into a powered USB hub. It runs OpenMediaVault and as many network sharing services as I can set up. I also want maximum reliability/availability from this Pi, so I rarely touch it. All the storage for all my services is hosted here, or backed up to here in the case of databases that need faster local storage.

The third RPi runs all the rest of my services. All the web apps are dockerized. Those that need a DB also have their DB hosted here. Those that need file storage use Kerberized NFS from my NAS. This RPi is also another WiFi AP. It keeps running out of RAM and crashing, and I plan to scale it out when RPis become cheaper or I can repair some old laptops. It runs:

- Postgres
- hostapd
- nginx
- Nextcloud
- Keycloak
- Heimdall
- Jellyfin
- n8n
- Firefly III
- Grist
- A persistent reverse SSH tunnel to a small VM in the cloud to make some services public
- A microservice needed for one of my hobbies
- A monitoring service for my backups

All of these Pis are provisioned via Ansible.


Sounds neat. Are you doing anything to mitigate the possibility of SD card corruption with the Raspberry Pis?

I used to use a single RPi as a media server, and it was great, but I stopped using it after suffering SD card corruption.


You can boot a Pi4 (and some older Pi boards) from more reliable storage attached to a USB port these days (e.g. SSD).

https://www.raspberrypi.com/documentation/computers/raspberr...

You can also network boot:

https://www.raspberrypi.com/documentation/computers/raspberr...


TBH I haven't ever had a problem with SD card corruption so far. If I did, it wouldn't really matter, since all the important data is on the RAID array, and the OS can be reprovisioned if needed.

Performance did prove to be an issue for SD cards, though, when attempting to host Nextcloud and Postgres. I did what teh_klev is talking about and selected the fastest USB stick I could find, a Samsung FIT Plus 128 GB Type-A 300 MB/s USB 3.1 Flash Drive (MUF-128AB), and this gave me a huge speedup.

Unfortunately Jellyfin is not really fast enough on an RPi, and I have no solution for that.


I have been thinking of setting up a Pi 4 as a WiFi AP. Can you comment on the hardware performance? I am worried that the range or throughput might be poor, and am thinking I might need to use an Intel AX200 or similar.


I've done this before and it works in a pinch, but I didn't think it was reliable enough to use on a permanent basis. I added a USB WiFi interface, and that helped with the signal quite a bit. Setting up the AP and networking isn't trivial (but is certainly doable if you're familiar with Linux networking).

My use case was connecting my family's devices to an Airbnb's network, using the RPi as a bridge to the host WiFi. This way I could keep a common SSID/password and didn't have to reconfigure all of my kids' devices. It kinda worked.

However, it wasn't very reliable and had poor range and performance. The RPi was meh with one client attached, but bleh with more than one. I ended up quickly replacing it with a cheap dedicated AP that I flashed with OpenWrt. Much easier, and cheaper device-wise too.


The range is indeed poor, and it depends a lot on your house/flat. I'd definitely recommend using another machine with a proper WiFi card like that instead.

In terms of throughput, sitting right next to the AP I just got 65 Mbit/s.


To me it's been obvious ever since the term SaaS was coined that it would be worse for users. Not only is it more expensive, you don't get control over your data or how you use the product. The idea of cloud computing is similar - you have to pay more for someone else's computer. Granted, SaaS and cloud computing make sense if you're an organization, and can have advantages in terms of scale, reliability, etc.

But also, when business interests get involved in producing software, it often causes problems: ads, worse interop, performance treated as unimportant, marketing emails, DRM, the software not working after the company is acquihired or fails. However, producing software takes time, which costs money. So commercially produced software can only exist at the intersection of there being a business model and the software being useful. The condition is that the usefulness must be enough to be worth paying for, and the result is what we have now.

Imagine rewinding to 1990 with unlimited borrowed VC funds, hiring every person employed in tech full time until 2023, and building a massive suite of useful software for individuals, companies, and governments, with a few different alternatives for each use case (like we have now), except they communicate via a series of well-defined and public APIs. The entire software stack would be developed in this way, for maximum usability, performance, interoperability, features, etc. After reaching the set of features we now have in late 2022, we pause the thought experiment, note the date, and split the cost between the users. Ignoring the various practical issues with this experiment, I bet it would be possible to get to where we are sooner and far cheaper per user.

Long story short, I don't think the goal of making money as a business is very well aligned with the goal of producing really good quality, long-lasting software, even if the users are willing to pay, and this is a real problem. For personal use, I won't tolerate ads, DRM, etc., so I now self-host.


Yes, people writing unmaintainable code in Jupyter notebooks is a problem.

Personally, I start every notebook with

    %load_ext autoreload
    %autoreload 2
and then develop production-quality code in .py files.
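
The workflow then looks roughly like this sketch (the module and function names are just an example):

    # mylib.py - ordinary, importable, testable code kept in a plain .py file
    # (the module and function names here are just an example)
    def normalise(xs):
        """Scale a list of numbers so that they sum to 1."""
        total = sum(xs)
        return [x / total for x in xs]

    # notebook cell - with %autoreload 2 active, edits to mylib.py are picked up
    # the next time a cell runs, without restarting the kernel
    import mylib
    mylib.normalise([1, 2, 7])  # -> [0.1, 0.2, 0.7]

The point is that the notebook stays a thin driver for experiments, and anything worth keeping lives in the module.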


I didn't realize anyone didn't do this. Totally essential, great point!


Well that has improved my life - thanks!


> Why bother with docker for a home server other than for the fun of it?

I do this. Over time you forget how each service was configured, or simply don't care. Adding more and more stuff to a home server increases the complexity and the attack surface more than linearly in the number of services.

I run nearly all my home services in Docker, and I have a cookie-cutter approach for generating SSL certs and nginx config for SSL termination (not dockerized). Provisioning is automated through Ansible, so my machines can be cattle, not pets, as far as is possible on 3 Raspberry Pis.
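
The cookie-cutter part is essentially templating - roughly this sketch, which stamps out one nginx SSL-termination server block per service (the domain, ports, and cert paths are placeholders, and issuing the certs themselves is a separate step):

    # Sketch of the cookie-cutter idea: render one nginx SSL-termination server
    # block per dockerized service. The domain, ports, and cert paths are
    # placeholders, not my actual config; cert issuance happens elsewhere.
    NGINX_TEMPLATE = """
    server {{
        listen 443 ssl;
        server_name {name}.example.home;
        ssl_certificate     /etc/ssl/private/{name}.crt;
        ssl_certificate_key /etc/ssl/private/{name}.key;
        location / {{
            proxy_pass http://127.0.0.1:{port};
        }}
    }}
    """

    SERVICES = {"nextcloud": 8081, "jellyfin": 8096, "grist": 8484}  # example ports

    for name, port in SERVICES.items():
        with open(f"/etc/nginx/conf.d/{name}.conf", "w") as f:
            f.write(NGINX_TEMPLATE.format(name=name, port=port))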


Same, but I haven't started with anything like Ansible yet - I'm only beginning to learn it at work.

Running all my services in Docker keeps it all clean because I'm a very messy person when it comes to Linux. Change, change, change, it works, forget about it, it breaks, find something I did years ago tripping me up now, change, change, change, it works, forget about it.

With Docker every service is contained and neatly separated. I can rip something out and replace it like stacking a new network switch in the rack. Delete the container, delete the image(s), and delete the volume if I want to start over with something completely fresh.

I can move everything to a new server by moving the bulk hard drives over, restoring Docker volumes from backup, and cloning docker-compose configs from git. I haven't tried any distributed volume storage yet.


> Haven't tried any distributed volume storage yet.

Having tried Gluster, Ceph/Rook, and Longhorn, I strongly recommend Longhorn. Gluster is kinda clunky to set up but works, albeit with very little built-in observability. Ceph on its own also works but has some fairly intense hardware requirements. Ceph with Rook is a nightmare. Longhorn works great (as long as you use ext4 for the underlying filesystem), has good observability, is easy to install and uninstall, and has lower hardware requirements.

Its main drawback is that it only supports replication, not erasure coding, which, to be fair, is a large contributor to its ease of use and lower hardware requirements.


Longhorn has no authentication at the moment, so any workload running in your cluster can send API requests to delete any volume. I think they are working on it, but it might not be the best solution unless you deploy a security policy to prevent network access to the API pod.


I also found it very weird, but here's my intuition.

There are 2^k spheres (512 for k = 9) stuck to each other across the k dimensions (pretend k = 9). The line from the center to the point where the inner sphere touches one of the outer spheres has to shortcut diagonally through all k dimensions to get from the center to that sphere.

This distance has been massively inflated by the number of dimensions. But the distance to the edge of the box hasn't been inflated - it's just a constant, so the inner sphere breaks out.
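
A quick numerical check of that intuition, assuming the usual setup of the puzzle (2^k unit spheres centered at (±1, ..., ±1) inside the box [-2, 2]^k, with the inner sphere at the origin touching all of them):

    # Assumes the usual version of the puzzle: 2^k unit spheres centered at
    # (+/-1, ..., +/-1) inside the box [-2, 2]^k, inner sphere at the origin.
    from math import sqrt

    for k in (2, 3, 9, 10, 100):
        centre_to_outer_centre = sqrt(k)           # length of the diagonal shortcut
        inner_radius = centre_to_outer_centre - 1  # minus the outer sphere's unit radius
        box_half_width = 2.0                       # distance from the origin to each face
        status = "pokes out of the box" if inner_radius > box_half_width else "fits inside the box"
        print(f"k = {k:3d}: inner radius = {inner_radius:6.3f}  ({status})")

The diagonal distance sqrt(k) keeps growing with the dimension while the box half-width stays at 2, so the inner sphere reaches the walls at k = 9 and pokes out beyond that.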

