How many requests do you actually expect users to make? Especially if you're serving a B2B market; not everything is centered around addiction/"engagement". My 8 year old PC can do over 10k page requests/second for a reddit or myspace clone (without even getting into caching). A modern high end gaming PC should be around 10x more capable (in terms of both CPU and storage IOPS). The first thing that would force an upgrade to "unusual" hardware for a PC would likely be the NIC. Networking is one place where typical consumer gear is stuck in 2005.
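Back of the envelope (the user and request counts below are purely illustrative assumptions):

    10,000 req/s sustained            ≈ 864,000,000 requests/day
    1,000 B2B users × 1,000 req/day   = 1,000,000 requests/day ≈ 12 req/s average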
Webapps might make it hard to tell, but a modern computer (or even an old computer like mine) is mindbogglingly fast.
That's precisely what happened. Knowing the public key of an address is commonplace (as long as the address has made at least one tx), and it doesn't compromise the security of its private key.
Single-node k3s + ArgoCD/Flux is what I would go with if I had to build the infrastructure of a small startup by myself.
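Roughly what the GitOps side looks like, as a minimal sketch (the repo URL, path and app name are placeholders, and it assumes Argo CD is already installed in the argocd namespace):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app                  # placeholder name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/infra   # placeholder repo
        targetRevision: main
        path: k8s/my-app
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

From there, every change to the repo gets reconciled into the cluster automatically, which is most of the value even on a single node.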
Unfortunately it's HN, so people are more likely to do everything in bash scripts and say a big "fuck you" to all new hires who would have to learn their custom-made mess.
This is exactly the setup I’ve been considering. It feels like the best of both worlds: you learn the standard tooling and can easily upgrade to a full-blown distributed k8s cluster later, but you keep the flexibility and low cost of a single VM.
Also leaning towards putting it behind a Cloudflare tunnel and having managed Postgres for both k3s and application state.
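A sketch of what that could look like (the DSN, tunnel name and hostname are placeholders; k3s supports an external Postgres datastore via --datastore-endpoint, and cloudflared handles the tunnel):

    # k3s backed by a managed Postgres instead of the embedded SQLite/etcd
    curl -sfL https://get.k3s.io | sh -s - server \
      --datastore-endpoint="postgres://user:pass@db.example.com:5432/k3s"   # placeholder DSN

    # expose the app through a Cloudflare Tunnel instead of opening inbound ports
    cloudflared tunnel create my-tunnel              # placeholder tunnel name
    cloudflared tunnel route dns my-tunnel app.example.com
    cloudflared tunnel run my-tunnel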
Have been running k3sup-provisioned nodes on Hetzner for services, and even a Stackgres-managed Postgres cluster on another node (yes, it backs up to the cloud). And it's been great. Incredibly low cost, and I do not have to think about running out of compute or memory for anything I need for a tiny startup.
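For anyone curious, provisioning with k3sup is roughly this (IPs, user and key path are placeholders):

    # install k3s on a fresh Hetzner VM over SSH
    k3sup install --ip 203.0.113.10 --user root --ssh-key ~/.ssh/id_ed25519

    # later, join another node to the same cluster if needed
    k3sup join --ip 203.0.113.11 --server-ip 203.0.113.10 --user root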
Most companies try to operate at a profit and to actually grow that profit over time. That said, concluding that Annapurna failed only because of that requires some impressive mental gymnastics.
If you don't need high availability you can even deploy to a single-node k3s cluster. It's still miles better than having to set up systemd services, an Apache/NGINX proxy, etc. etc. (see the sketch below).
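In other words, instead of a systemd unit plus an Apache/NGINX vhost, the whole thing is a few manifests. A rough sketch (names, image and hostname are placeholders; k3s ships Traefik as its default ingress controller):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 1
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
            - name: web
              image: ghcr.io/example/web:latest   # placeholder image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector: { app: web }
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: app.example.com                   # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port: { number: 80 }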
Yep, and you can get far with k3s's "fake" load balancer (ServiceLB). Then when you need a more "real" cluster, basically all the concepts are the same; you just move to a new cluster.
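For example, a plain Service of type LoadBalancer "just works" on a single node because ServiceLB (Klipper) binds the node's own IP/ports for it, and the same manifest later works unchanged against a real cloud load balancer (name, selector and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb                # placeholder
    spec:
      type: LoadBalancer          # served by k3s ServiceLB on the node itself
      selector:
        app: web
      ports:
        - port: 443
          targetPort: 8443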
I've been in your shoes for quite a long time. By now I've accepted that a lot of folks on HN and similar forums simply don't know or care about the issues that Kubernetes resolves, or that someone else in their company takes care of those for them.