The new piku docs are pretty but, as a potential new user very interested in trying piku, the new docs are completely useless to me. I gave up on piku because the docs essentially assume I already know everything I need to know to run and use piku. Your doc fixes that, but I never found your doc even after spending quite a bit of time trying to figure out how and whether I could use piku. I never would have known it existed without your comment here.
At a minimum, your doc should be prominently linked to from both the piku repo and the piku docs (or more prominently linked, if it's already linked somewhere), if not pulled completely into the docs home page.
That said, if you're interested in a suggestion: take a look at an end-to-end coolio tutorial that shows how to go from a new bare metal server to a publicly accessible custom domain name with an SSL cert, and add those extra steps to your doc. Yes, they have nothing directly to do with piku, but they have everything to do with what a potential new user actually wants to do, and the potential new user doesn't know how to do those steps yet even though you do.
Your doc is already hundreds of times more useful than the main piku docs page. Extending it to cover an example of how to get to a publicly accessible custom domain with an SSL cert would make it hundreds of times more useful than it is now. Yes, I know, there are a ton of ways to get from where your doc ends to a publicly available custom domain with an SSL cert. Pick one. It doesn't matter which. The person who cares which approach you use already knows how to do the approach they prefer. You're adding these steps for the person who doesn't know how to do any of the approaches and just wants to get their site hosted on a $5 droplet or whatever.
Again, your page is a huge help, this suggestion is just about making your page a huger help.
For reference, here's a sample coolio end-to-end example showing how they go from bare metal to publicly accessible custom domain with SSL:
The goal of all this isn't about making it possible to do things, it's about massively increasing the number of people who adopt piku by making it easier for more people to do so.
Acknowledged. The tutorial is linked someplace deeper in the docs, but I am adding a direct link to it in the docs home page. Should be up in a little while.
I like your suggestions. I haven't looked at this tutorial in a while, but I have an occasion to do so coming up, so I'll keep your feedback in mind for a revision.
Basically, it was the first PaaS to improve the developer experience of working with server infrastructure. It had git integration and let you easily scale your apps from a CLI.
Thanks for the explanation; the official repo doesn't make it clear enough for me.
So, did I understand correctly that piku installs both an agent on the remote machine and a commit hook on the local machine? Why didn't they minimize the overhead by just making the remote machine a git remote and doing all the work there when you push a specific branch to that remote?
You’re confusing things: there is only the remote; the local machine doesn’t need anything. We do have a simple CLI you can run locally, but all it does is `ssh remote <command>` to scale workers up/down, change settings, etc.
piku installs an agent on the remote machine (piku.py) which itself also provides the support for making that machine a git remote.
There is no commit hook on the local machine. On the local machine, you simply have a shim named "piku" which is essentially running "ssh remote /path/to/piku.py $@" to control the remote machine.
This looks good, and Dokku has been very solid for me, but removing the Docker dependency means that now I'm beholden to my OS's choices. For apps that might run for years without maintenance, that's not ideal, as you'll quickly find you need a specific version of the OS for them.
You can use docker with it - I have a couple of things with "docker run" statements in the procfile, but of course it’s not designed for that.
Most of the deployments I got wind of are on extremely stable distros - typically LTS versions where you will not need to upgrade your runtime every six months (and my website has been running in it for at least two Ubuntu LTS releases…)
But you can trivially use pyenv/nvenv/etc. by just setting environment variables. My home automation system now needs two different Node versions, and I have one set per app.
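For example, each app's ENV file can pin its own runtime. This is just a sketch from memory; treat the exact variable name as an assumption and check the piku docs:

    # ENV for home-automation-app-a
    NODE_VERSION=18

    # ENV for home-automation-app-b
    NODE_VERSION=20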
That depends on your tech stack. I have Perl CGI and Java apps that have been running unchanged for two decades. And the only thing I ever had to change on Debian over that time was adding HTTPS (Let's Encrypt) and SPF/DMARC for email.
My point is that OS upgrades don’t have to break tech stacks, and don’t tend to with runtimes that care a lot about backwards compatibility like Perl and Java. I did regularly upgrade Debian across those two decades.
IMO that quality should be the default, and I would choose my OS and tech stacks accordingly.
The runtimes are part of the Linux distribution and get upgraded along with it (and receive continuous security updates along with it), while maintaining backwards compatibility for the application code (Perl scripts or Java bytecode). Tools like needrestart will notify when a process needs to be restarted to take advantage of the update.
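For example (standard needrestart usage, nothing piku-specific):

    # list services running against outdated libraries, without restarting anything
    sudo needrestart -r l

    # or let it restart them automatically
    sudo needrestart -r a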
Well, I don't know about you, but my dependencies have often been built against a static library from a different version of the OS, so they wouldn't work on mine.
I agree. But 2008 is old enough that exploits may be lost in time.
I recently pentested a client and had great problems connecting to an old service still using SSL (I think it was 2.0). Every modern tool straight up refused to connect, there was no way to override that, the oldest static curl binaries I could find were still too new, and I couldn't easily compile curl from source because the dependencies had also changed in the meantime. Finally I found an ancient docker image that worked.
The service was ironically so old that no modern vulnerability scanner or programming language would be able to connect!
This made me seriously ponder the fleeting nature of the modern world: SSL support was everywhere 15 years ago, and now I, as an expert, had great problems using it. What chance do we have in 100 years?
Question - how can dependency hell be solved when using such a tool?
It seems so elegant and I love the "it just works" attitude. I do understand that docker can't be used everywhere due to its technical (and mental) overhead, but I love it because it lets me isolate everything and freeze it in time, so running a container 5 years from now "just works".
In my humble workflow, I'm using lazydocker to manage the containers, a GitLab workflow (action?) for deployment on push, and a small VPS to build and push the containers to the GitLab registry and to run them, all on the same VPS. It's a little bit overkill - I could use a combination of a Dockerfile and a compose.yml with docker compose build. Also, I haven't figured out scaling yet. Good thing I don't need it! Otherwise I would swap docker for k8s and lazydocker for k9s.
(I'm open to suggestions. I just got into devops, and I love it!)
Personally I use the same approach as piku, but instead rebuild my NixOS config on push. My projects use nix flakes, so I get something that I know will run both on my server and on my local machine with the full development environment. No containers needed technically, but I use systemd-nspawn to run the software in its own sandboxed namespace.
My entire server is then managed declaratively, so if I want to add a new project, it’s like 3-5 lines of Nginx config and push, that’s all. Something goes wrong? Just revert the commit.
As far as the NixOS config is concerned, there is nothing crazy in it. It is just a regular nix config with nginx (https://nixos.wiki/wiki/Nginx). You can see there that adding nginx is just four lines of code, ~eight with SSL. Use it to proxy to your applications, which are set up as systemd services (https://wiki.nixos.org/wiki/Systemd/User_Services).
Edit: btw I'm a big fan of asciinema! ty for making it. :)
Nix would actually be fantastic for this, but I've never been able to get it to work (including with Devbox and a few other such solutions). I might try again, thank you.
Not related to “git push” deployments, but absolutely related to the PaaS experience, the team I’m working on is previewing Cloud Native Buildpacks (CNB) which is an open spec in the CNCF for Buildpacks that target OCI.
What this means is that you can now generate a docker image locally using similar build tooling to Heroku’s “git push” logic that detects language support and does the right thing TM. Here’s a tutorial for building a Rails app with the buildpack I maintain https://www.schneems.com/2024/05/01/build-a-ruby-on-rails-ap...
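For the curious, the local flow looks roughly like this (a sketch; the image and app names are made up, and heroku/builder:22 is one of the published CNB builders):

    # build an OCI image from the app source using Cloud Native Buildpacks
    pack build my-rails-app --builder heroku/builder:22

    # then run it like any other container
    docker run --rm -p 3000:3000 my-rails-app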
Would love some feedback, if you try it. Please consider posting about the experience in the linked discussion (good, bad, indifferent, or whatever; I just want more feedback to improve the experience).
First time I read about piku. I have no idea why, but the feeling of `git push` to initiate a deployment like piku does always felt magical to me. There's nothing simpler than that.
It works like magic, but it's also extremely simple to DIY if you wanna learn.
If you set up a server, you can create a git repo on it with `git init` (it needs a working tree, so not `--bare`) and add the setting `git config receive.denyCurrentBranch updateInstead`.
After that you can use git hooks (more specifically, the push-to-checkout hook) to receive the pushed code, compile it, and launch. The hook is just a simple shell script; the most basic version could be a variant of `compile && install && systemctl restart service`.
From there you can clone the repo locally, and pushing your changes will trigger the hook you've set up.
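A minimal sketch of such a hook (the build and restart commands, and the service name, are placeholders):

    #!/bin/sh
    # <repo>/.git/hooks/push-to-checkout on the server (make it executable)
    set -e

    # When this hook exists, git leaves updating the working tree to it:
    git read-tree -u -m HEAD "$1"

    # Push hooks run inside .git, so step up into the working tree before building:
    cd ..
    make build && systemctl restart myapp.service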
Maybe I'm missing something obvious, but how does sequencer use git to do deploys, if it's similar to Heroku/dokku/piku? Seems like you're dealing with kubernetes templates and kubectl rather than `git push` to deploy, which would put the project in a completely different space.
I wish I had known about this project ~18 months ago. I was specifically looking for a way to have a Heroku-like dev experience deploying to my Raspberry Pi, and this looks like it's trying to be exactly that.
Exactly. There's a visibility problem. I've just set up a new VPS with CapRover and never found any mention of piku in the hour I spent checking for comparisons between the "Heroku-style self-hosted PaaS" options dokku, CapRover, coolify, and dokploy.
Maintainer and co-author here. If you like simple, minimalist deployment tools, check out https://github.com/rcarmo/ground-init for a very much down to earth take on cloud-init…
Does anyone know how it handles zero-downtime deployments (if at all)? Like, if your Python service is running on one machine on port 8080 behind nginx, how does piku switch to a fresh instance running on the same port?
Currently it will only kill running processes after it finishes deploying the new git push. Socket and session handling will vary depending on your code and whether you use uwsgi or run your own HTTP daemon.
One thing it already does (optionally) is kill off unused instances and go idle, lazily starting them again when a new connection comes in.
That gives me a couple of ideas...But picking a shorter name than "piku" is going to be hard... Maybe I can whip up a proof of concept and call it "syd".
I like Epinio, which does the same but on top of Kubernetes. It is backed by SUSE and lightweight compared to Knative (which is the basis of GCP Cloud Run, for example), but being Kubernetes-based it still requires more resources than dokku or piku. I still prefer k8s due to the vast ecosystem of mature solutions. And I can still run everything on a single box, it just needs to be a bit bigger. The new Hetzner CX42 with 8 vCPUs, 16 GB of RAM, and 160 GB of disk space for €16.40 a month (€0.0273 per hour) is sufficient, and with the Kube-Hetzner project I can set up a Kubernetes cluster with auto-updating MicroOS in 5 minutes.
> I like Epinio which does the same but on top of kubernetes
So basically not at all the same? :D
The point of piku seems to be: Heroku experience without requiring docker, and with a really simple architecture, and that it works on ARM.
Kubernetes works on ARM, I'll give you that. But AFAIK, Kubernetes requires you to use some sort of containers (Docker or otherwise), and its architecture is anything but simple (for obvious reasons).
Besides that, I don't see how Epinio enables the "git push" workflow; the quick start tutorial seems to tell you to run "epinio push manifest.yaml" or similar to deploy the application, so it doesn't fit the "Heroku-like experience" either.
So really, the only things they have in common is that they handle deployments?
You are right that it needs docker, but so does Dokku, and I see this more as an implementation detail (even a plus in my book, for the flexibility). Epinio admittedly does not support git push, but for me „epinio push --name myapp“ feels similar enough. In the end, I can just push my Django or Next.js or Rails or Node.js code to the server.
I should have said „I like Epinio, as well“ because I also like Piku, especially for its minimalistic approach and readable code, but when it comes to actually using it for deployments, I prefer Kubernetes.
To be honest, Python made it stupendously simpler than anything else because it has a great standard library. The only dependency (click) is rock solid and made it a lot simpler to handle commands independently, but we could probably do without it and just use the built-in argparse, at the expense of a few more lines of code I didn’t want to maintain.
Also, Python is everywhere, on every OS and Linux system, so it was a natural choice. I also wanted it to be easily hackable and extensible, and few languages would make it simpler to understand or extend.
That’s pretty funny. You may want to look a little further afield to discover that the machines with Python are far from “all the machines” out there. Particularly production servers, which, if run responsibly, are hardened with every extraneous bit of software removed.
Not sure who's to "blame", but I was super surprised a few days ago when I installed Kubuntu 24.04 (minimal) and Python was missing. It was fine, though, as I strictly use Python via pipx and miniconda only, but still surprising.
It's actually worth taking your joke seriously to compare and contrast:
- piku deploys via git rather than scp/sftp, but authenticates via ssh like those tools
- piku supports a number of runtimes, including Python, Ruby, Node, Go, and Clojure. The runtimes are implemented quite simply, and you can add your own fairly easily; see examples here in the code: https://github.com/piku/piku/blob/8777cc093a062c67d3bead9a5d...
- For each runtime, a mechanism is used to install and isolate package dependencies (requirements.txt in Python, Gemfile in Ruby, package.json in Node, etc.)
- a Procfile and ENV file are used to declare your application entrypoints and envvars, akin to Heroku / 12 Factor App ideas (see the sketch after this list)
- a CLI (ssh shim on dev client machine) is provided for checking status and logs from the client (as well as stop/start/restart)
- since all applications are managed via uwsgi on the remote, there is also support for worker/sidecar processes and cronjob-style scheduled tasks
- HTTPS via Let's Encrypt (acme.sh) is handled automagically for web apps
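Here is the sketch referenced above, showing what the Procfile and ENV file look like in practice. Entry and setting names are from memory, so double-check them against the piku docs:

    # Procfile
    web: python app.py
    worker: python queue_consumer.py
    cron: */5 * * * * python scheduled_task.py

    # ENV
    NGINX_SERVER_NAME=example.com
    SOME_API_KEY=changeme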
I describe more about how piku works in this tutorial:
You're right that PHP apps have a simple deployment story, and in a way piku brings something akin to this level of simplicity to other web programming runtimes.
That is brilliant. Something complex, but not complicated. A project distilled down to its UNIX essence: hackable, lean, and magic.
That said, I want to give this a go, but I don't immediately see how I can migrate my overengineered 8-10 container spaghetti of a docker-compose file to a state where I can use piku instead of a manual 'git pull && docker compose up' on the remote.
I use dokku for my side gigs and it works great. The one performance issue I’ve experienced was that building my app's container dramatically increased the load on a $20 VM. So I migrated to a container registry: GitHub Actions builds and pushes the containers to the registry, and then the container is deployed directly on the dokku host. Does piku support that flow?
Another question is subdomain support: having a catch-all virtual host that will respond to anything.domain.tld, with wildcard Let's Encrypt SSL via the DNS challenge.
Those two problems make me think that my side gig has grown up enough to switch to ArgoCD/K8s, although there are many other problems that come with it (from my experience at the day job). For now I just do the certificate rotation manually, which is not ideal but works with a couple of make targets.
15 years ago it was common to deploy web applications as live SVN repositories, with a hidden path that executed 'svn update' on a manual HTTP request.
Not quite 'push to deploy', but that was the way apps were developed back in the day, and for some reason I still prefer that approach. Commit, test, and at some point manually nominate and deploy a stable version.
Yes, when we didn’t want a build machine, we’d just build in production. Isolating production with no unauthorized binaries (like Alpine) was still a long way off…
I just added a magic URL to my app that GitHub calls whenever a commit is pushed; the server does a `git pull`, which in turn causes pm2 to reload the app. So anything I commit shows up in production in seconds. Great for smaller projects.
GitHub will call the webhook after a push to main and a successful test suite run. Snare runs a shell script on my server to git pull, build, deploy, and call a cronitor.io hook for monitoring deploy success.
I've been pretty happy with how relatively simple it is and how well it works.
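For anyone curious, such a script can be as simple as this (paths, commands, and the monitoring URL are placeholders):

    #!/bin/sh
    # run by snare on the server when the GitHub webhook fires
    set -e
    cd /srv/myapp
    git pull --ff-only
    make build
    systemctl restart myapp.service
    # tell cronitor the deploy finished, so silent failures get noticed
    curl -fsS https://cronitor.link/p/EXAMPLE-KEY/deploy-myapp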
Can it be a good replacement for Capistrano (for deploying rails applications)?
Love the focus on being lightweight
Recently I wanted to create a super basic website, and discovered it’s actually pretty hard to create something simple
And then, even if you manage to create something actually simple, you usually end up having to manage some not so simple deployment process together with hopefully some sort of version control
Ended up settling on putting plain HTML/CSS/JS files in a git repo, then configuring auto-deploy to GitHub Pages on merge to master (via Actions).
Came to ask this same question. I have a post-receive hook on my server that instamagically deploys whenever I push to it. It is simple and awesome and is basically just a builtin git feature.
1. Fewer dependencies (only Podman and a registry are needed)
2. Rock solid rootless systemd service management
3. Easy integration with systemd-proxyd
4. Easy management of dependencies between containers (with health checks)
5. Rollbacks
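For reference, a Quadlet app is basically one small unit file per container. A minimal sketch (the image, port, and file name are made up; key names are from memory, so verify against the Quadlet docs):

    # ~/.config/containers/systemd/myapp.container (rootless user unit)
    [Unit]
    Description=My app

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

    # then: systemctl --user daemon-reload && systemctl --user start myapp.service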
Sounds interesting! Is there any support for multi-node systems? Let's say I want to have an ingress Caddy proxy on one node, which reverse proxies to several backed APIs on other nodes - can this be done simply with Podman Quadlet?
Also, what is the localdev UX like? With Docker Swarm I can easily run very similar setups in dev, test and prod, and it works with multi-node setups, has support for secrets etc. But the lack of work going into Docker Swarm becomes more concerning as the years pass by.
Also, had no idea systemd-proxy was a thing - is there anything systemd doesn't have its tentacles into? :)
If your VPS is wired to another one using a VPC or any other internal network, it'll just work. Just point Caddy at the internal IPs of your other servers.
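E.g. a Caddyfile on the ingress node can be as small as this (hostnames and addresses are made up):

    api.example.com {
        reverse_proxy 10.0.0.11:8080 10.0.0.12:8080
    }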
It's not designed to work in local envs. When I wanted to debug the infra, I used to run it in Vagrant, though.
I think a more common use case than doing deploys by pushing to a different remote is to send git repo webhooks on PR merges to main to an API that has a deploy key and can fetch the repo itself.
This afaik is missing from most PaaS tools (CapRover excluded, but it has been illegally relicensed to non-open-source). Perhaps watchtower or something could replace the functionality?
Actually, this is how I deploy my static websites: piku in lazy mode handles GitHub hooks, pulls the source and renders them out to cloud storage, then kills all workers and idles again.
The piku micro-app that does the deployment is just a 10-line Bottle app that validates the GitHub hook and does a git pull with a private SSH key, so yes.
It’s just a 10-line script; I’ll see if I can sanitize it and add it to the docs (one of the samples already does something similar, so you can peek at the repos to get ideas).
You scared me for a moment, as I've just set up a new VPS with CapRover and migrated all my projects from Heroku. It doesn't look too shady to me; there's a one-liner to disable analytics, and that seems enough for me.
You still have to agree to the terms and conditions of use of the nonfree application which can of course change at any time without notice. It’s a time bomb.
I’m thinking of forking it and adding all the dumb and easy table-stakes features (2FA, etc.) that he is trying to gate as subscriptionware.
If you want to contribute any of that to piku, we’ll welcome it. Might take a bit to review and merge, but we’re always looking for non-breaking improvements
It’s not even open core. The solo maintainer simply relicensed the entire repo to a nonfree license without consent of the copyright holders to all the external contributions.
Don't forget CapRover. I'm just trying it on a new VPS and it just works as expected. I would have tried piku first if I knew about it, because it's even more minimal.
Eventually, we'll need something more secure than effectively `sudo curl INSTALLER | sh` as a way to install stuff. I can see why package managers aren't always the answer, but still.
Actually, we had manual install steps as the only way to go for a while. You'd be surprised at how many people asked for a one-liner... I'm going to add a note to the docs about that, since I've seen a couple of comments here from people who were put off by it and didn't even read the rest.
In defense of their argument (not their choice of words), the final script is doing a `git clone`, so if they're going to ask the user to copy-paste anyway, their copy-paste code could be something like:
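    # (rough sketch; the exact repo and entry point are assumptions, verify against the piku docs)
    git clone https://github.com/piku/piku-bootstrap.git
    cd piku-bootstrap
    # inspect / verify what was just cloned before executing anything
    ./piku-bootstrap first-run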
(EDIT: Feel free to clean it up a bit, adding other variables to make it a bit more readable, etc.)
From the user's POV it's the same thing: copy from somewhere, paste into terminal, press enter. But now you're trusting two fewer places:
- piku.github.io (the deployed version)
- github.com/piku/piku.github.io (the source code that supposedly generates the deployed version)
I'm not saying to do exactly these steps, and they are not perfect anyway (you're still trusting a `git clone` before even doing any kind of verification), but at least you're not YOLOing[1] without first doing some minimum verification of integrity before something is executed, so you know at least the first installation step has not been tampered with[2][3].
The git repos don't have commit signatures, and the verification doesn't have to use specifically that kind of signature (it could be a minisign public key hosted somewhere), but you get the point.
The updates are also done via `git pull`. Meaning: if the commits were signed, and the installation copy-pasta (or script) included a step that added the author's public key to the allowed signers file (TOFU-style), e.g. with a `curl` to the author's GitHub keys URL, then `git pull --verify-signatures` could check new commits against the already-known key and automatically warn about any change, so the user could decide whether to trust the new key or not. Stuff like that.
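Something along these lines (a sketch; it assumes the maintainer signs commits with an SSH key, and MAINTAINER and the paths are made-up placeholders):

    # TOFU: record the maintainer's public SSH key as an allowed signer
    curl -s https://github.com/MAINTAINER.keys | sed 's/^/maintainer /' > ~/.piku_allowed_signers
    git config --global gpg.ssh.allowedSignersFile ~/.piku_allowed_signers

    # later updates abort if the incoming tip commit isn't signed by an allowed key
    git -C ~/piku pull --verify-signatures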
[1]: Nowadays, saying "install with `curl | sh`, but you can also do install these other ways", is equivalent to saying "verify integrity with `md5sum`, but you can also check this other file if you want to see other hashes"; technically correct, but if the installation instructions already contain questionable defaults, it makes me question what other questionable defaults are in the actual code.
[2]: "Has not been tampered with" as in "the last modification of `README.md` and the most recent commit are signed with the same key". Not perfect, but it's not YOLO.
[3]: Emphasis on "first installation step". Obviously anything that happens after that simple sanity check would need more scrutiny in case the author tries to run unsigned/unverified code in one of the next steps.
It's a convenience and risk assessment matter, IMO. Looking at the repo I see stars in the 1000s, issues are well addressed, and of course the source is available. Lots of people have likely gone through the code already and no flags were raised, so it's pretty low risk to use that one-liner. I'll definitely use it, as have others, since I just want to get set up and move on with minimal effort.
That is just one of the deployment methods. You definitely didn’t read beyond the first few lines on the page…
——-
There are 3 main ways to install piku on a server:
Use piku-bootstrap to do it if your server is already provisioned (that is what the TL;DR command does)
Use cloud-init to do it automatically at VPS build time (see the cloud-init repository, which has examples for most common cloud providers)
Manually: Follow the guide below or one of the platform-specific guides.
———
(I do all my deployments using cloud-init, where you can even inline piku.py into the manifest if you want to, and we have separate repos to demonstrate that)
A "Service" is when someone or something does something for you, usually in return for some fee.
A "Platform," in the context of IT and software (and especially the internet), is some IT infrastructure, generally a server and the software installed on it, that you can host something on, such as an app or a website.
A "Platform as a Service" is when someone else sets up your platform for you so that you don't have to do it yourself—you get access to the platform and can use it for your own stuff, but don't have to configure or maintain most of it.
I think the main reason is it's sensible to pass the source code through a process that organises and optimises it for release to a specific environment. My first assumption seeing a git repo used in this way would be that someone was cutting corners and probably doing bad things like committing secrets to the repo, things like that.
If the person setting it up is aware of the potential pitfalls and has a good explanation for the process (particularly if there is no build step involved and secrets are managed appropriately), then it can be fine.
It goes further than that; those were just examples. The principle of least knowledge and the principle of least privilege guide deployment toward a process that does not include the source code on a production server. But like I said, there are ways for it to be a reasonable approach if properly justified.
https://github.com/piku/webapp-tutorial?tab=readme-ov-file#b...
It explains how piku works under the hood, as well as showing a minimalistic Python web app example from a user standpoint.