I just use a single k3s install on a single bare-metal server from Hetzner or OVH. Works like a charm: very clean deployments, much more stable than docker-compose, and about 1/10 the cost of AWS or similar.
Doing the same, grabbed a reasonably cheap Ryzen (zen2) server with 64GB ECC and 4x NVMe SSDs (2x 512G + 2x 1024G).
Runs pretty much this stack:
"Infrastructure":
- NixOS with ZFS-on-Linux, with the NVMes set up as 2 mirrors
- k3s (k8s 1.31)
- openebs-zfs provisioner (2 storage classes, one normal and one optimized for Postgres; sketched after this list)
- cnpg (CloudNativePG) operator for handling databases (a sample Cluster is sketched below, after the deployed stuff)
- k3s' built-in traefik for ingress
- tailscale operator for remote access to cluster control plane and traefik dashboard
- ExternalDNS controller to automate DNS
- cert-manager to handle Let's Encrypt
- Grafana Cloud stack for monitoring (metrics, logs, tracing)
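To give an idea, the two openebs-zfs storage classes are just small StorageClass objects; a minimal sketch (the pool/dataset name and the exact recordsize/compression tunings here are illustrative, not my literal config):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: zfs-general
    provisioner: zfs.csi.openebs.io
    parameters:
      poolname: "rpool/k8s"   # illustrative pool/dataset name
      fstype: "zfs"
      compression: "lz4"
      recordsize: "128k"
    ---
    # Postgres-oriented class: smaller recordsize to better match the 8k page size
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: zfs-postgres
    provisioner: zfs.csi.openebs.io
    parameters:
      poolname: "rpool/k8s"
      fstype: "zfs"
      compression: "lz4"
      recordsize: "8k"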
Deployed stuff:
- Essentially 4 tenants right now
- 2x Keycloak + Postgres (2 diff. tenants)
- 2x headscale instances with postgres (2 diff. tenants, connected to keycloak for SSO)
- 1 Gitea with Postgres and memcached (for 1 tenant)
- 3 postfix instances providing simple email forwarding to sendgrid (3 diff. tenants)
- 2x dashy as homepage behind SSO for end users (2 tenants)
- 1x Zitadel with Postgres (1 tenant, going to migrate keycloaks to it as shared service)
- Youtrack server (1 tenant)
- Nextcloud with postgres and redis (1 tenant)
- tailscale-based proxy to bridge gitea and some machines that have issues getting through broken networks
Plus a few random things that are musings on future deployments for now.
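Most of the per-tenant Postgres instances above are just tiny cnpg Cluster objects, roughly this shape (name, namespace and size are placeholders):

    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: gitea-db        # placeholder name
      namespace: tenant-a   # placeholder namespace
    spec:
      instances: 1          # single node, so no replicas
      storage:
        size: 10Gi
        storageClass: zfs-postgres   # the postgres-tuned class mentioned above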
The server is barely loaded and I can easily clone services around (in fact, a lot of the services above are instantiated from jsonnet templates).
Deploying some stuff was more annoying than doing it by hand from a shell (Nextcloud specifically), but now I have a replicable setup, for example if I decide to move from host to host.
The biggest downtime ever was dealing with poorly documented systemd-boot behaviour that caused the server to revert to an older configuration and not apply newer ones.
I've done this but on EC2. What would you like to know? Installing K3s on a single node is trivial and at that point you have a fully functional K8s cluster and API.
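The whole install is the official one-line script from get.k3s.io, and anything non-default can go into /etc/rancher/k3s/config.yaml, whose keys mirror the CLI flags. A minimal sketch with example values:

    # /etc/rancher/k3s/config.yaml
    write-kubeconfig-mode: "0644"
    tls-san:
      - "k3s.example.com"   # example extra SAN for the API server cert
    disable:
      - servicelb           # example: drop bundled components you replace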
I have an infrastructure layer, applied to all clusters, that includes things like cert-manager, an ingress controller and associated secrets. This is all cluster-independent stuff. Then there is some cluster-dependent stuff like storage controllers etc. I use Flux to keep all of this under version control and automatically reconciled.
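Concretely, the Flux side boils down to a GitRepository plus a Kustomization pointing into it; a rough sketch (the repo URL and path are placeholders):

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: infra
      namespace: flux-system
    spec:
      interval: 10m
      url: https://github.com/example/cluster-config   # placeholder repo
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: infra
      namespace: flux-system
    spec:
      interval: 10m
      prune: true
      sourceRef:
        kind: GitRepository
        name: infra
      path: ./infrastructure   # the cluster-independent layer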
From there you just deploy your app with standard manifests or however you want to do it (helm, kubectl, flux, whatever).
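For example, exposing an app is usually nothing more than a plain Ingress with a cert-manager annotation; the hostname, issuer name and ingress class below are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt   # assumes a ClusterIssuer with this name
    spec:
      ingressClassName: traefik   # or whatever ingress controller the cluster runs
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
      tls:
        - hosts: ["myapp.example.com"]
          secretName: myapp-tls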
It all works wonderfully. The one downside is all the various controllers do eat up a fair amount of CPU cycles and memory. But it's not too bad.
Are these rates for both commercial (Part 121/135) and Part 91 private pilots? I would imagine the cost is quite easy to swallow if you're a captain at a big airline, but for someone flying occasionally, a per-month rate might be expensive. How about per-distance (or per-time) pricing, if that's possible?
Typically, flights do not deviate from their planned routes for non-severe turbulence levels. It's safer and simpler to have passengers remain seated with the seatbelt sign on. Our model considers weather inputs alongside real-time data, acknowledging that weather is continuously changing and somewhat unpredictable. Therefore, we don't perceive this issue as significant.
On HN, and especially regarding "infrastructure" services that you might depend on indefinitely for something, "what's the business model" isn't a question about how the service will be enshittified. Rather, it's a question of what incentive there is to keep a free service like this running. If there isn't one, then nobody should use the service, because it'll very likely get shut down as soon as it gets popular enough for its hosting costs to become nontrivial.
What's your alternative? Nobody wants a corporate solution, but they do want free services that fly under the radar. Of course not all of them last forever, but what better choice is there?
It is crazy to expect someone to use a service that has a 99% chance of getting shut down in a year, when the domain name needs to be renewed. If there's no plan, there's no reason to use this for anything, ever.
I hear what you're virtue signaling, but people DO have a right to wonder what will happen if a link redirect service runs out of money and kills their links.
Thank you. We are actively working to onboard as many airlines as possible, and partnering with Delta would be highly beneficial. The more data we have, the safer and more efficient flights become.
Regarding the integration with Jeppesen, we would greatly appreciate hearing more about any issues you encounter and receiving your feedback.
I’m not sure I have any suggestions on how to fix it, but the Skypath integration with the moving map in Jepp FD Pro is way too cluttered. There’s already quite a bit of information in Jepp and the bright colors from Skypath really wash out all the other information. I haven’t seen many pilots use it more than once.
Some of the most severe injuries occur among crew members because they must rush to secure passengers in their seats, often being the last to fasten their own seat belts. Additionally, when turbulence reaches a certain intensity, the aircraft must undergo costly structural testing on the ground, which disrupts the airline's schedule significantly.
It depends on the provider that the airline has partnered with, I suppose. We don't manage the satellite connectivity, so I can't provide specifics on that. However, in areas with fewer ground stations, disruptions are more likely. Our app and servers are designed to function effectively on unreliable internet connections: the app downloads prediction data for several hours ahead and uploads turbulence data once the connection stabilizes.
On my last flight, the disconnection over the Arctic Circle lasted about 5 hours. Is the data still relevant then? Does turbulence happen in cycles, and is it predictable that way?
Five hours is a bit long; you can choose in the app how far ahead you would like to see. We typically recommend looking two hours ahead. This way, for the first two hours of an outage you'll have data that may be partially outdated but is still better than no information at all.
Valid observation: turbulence patterns differ from a pilot tapping the device, so taps can be recognized and disregarded. While completely eliminating false positives isn't always feasible, various techniques let us significantly minimize such occurrences and mitigate their impact.