Piku: Allows git push deployments to your own servers (github.com/piku)
693 points by tosh 10 months ago | 158 comments



I love piku. I wrote a webapp tutorial for piku which got turned into a repo as part of the official GitHub piku org. You can find that here:

https://github.com/piku/webapp-tutorial?tab=readme-ov-file#b...

It explains how piku works under the hood, as well as showing a minimalistic Python web app example from a user standpoint.


The new piku docs are pretty, but as a potential new user very interested in trying piku, I found them completely useless. I gave up on piku because the docs essentially assume I already know everything I need to run and use it. Your doc fixes that, but I never found it even after spending quite a bit of time trying to figure out how and whether I could use piku. I never would have known it existed without your comment here.

At a minimum, your doc should be prominently linked from both the piku repo and the piku docs (or more prominently, if it's already linked somewhere), if not pulled completely into the docs home page.

That said, if you're interested in a suggestion: take a look at an end-to-end Coolify tutorial that shows how to go from a new bare-metal server to a publicly accessible custom domain name with an SSL cert, and add the extra steps to your doc. Yes, they have nothing directly to do with piku, but they have everything to do with what a potential new user actually wants to accomplish, and that user doesn't yet know how to do those steps even though you do.

Your doc is already hundreds of times more useful than the main piku docs page. Extending it to cover an example of how to get to a publicly accessible custom domain with an SSL cert would make it hundreds of times more useful than it is now. Yes, I know there are a ton of ways to get from where your doc ends to a publicly available custom domain with an SSL cert. Pick one. It doesn't matter which. The person who cares which approach you use already knows how to do the approach they prefer. You're adding these steps for the person who doesn't know how to do any of the approaches and just wants their site hosted on a $5 droplet or whatever.

Again, your page is a huge help; this suggestion is just about making it a huger help.

For reference, here's a sample Coolify end-to-end example showing how to go from bare metal to a publicly accessible custom domain with SSL:

https://billyle.dev/posts/self-hosting-your-website-with-coo...

The goal of all this isn't making it possible to do things; it's massively increasing the number of people who adopt piku by making it easier for more people to do so.


Acknowledged. The tutorial is linked someplace deeper in the docs, but I am adding a direct link to it in the docs home page. Should be up in a little while.


I like your suggestions. I haven't looked at this tutorial in a while, but I have an occasion to do so coming up, so I'll keep your feedback in mind for a revision.


"What is a Heroku-style deploy?"

Thanks for that. I have no idea what Heroku is or does.


Sure thing! It's a bit of cloud-computing history, covered here:

https://leerob.io/blog/heroku


Basically, it was the first PaaS to improve the developer experience of working with server infrastructure. It had git integration and let you easily scale your apps from a CLI.


Short version:

Git push deployment where it detects your stack and automatically builds, then deploys with zero downtime.
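As a sketch, the piku flavor of that workflow looks like this (server and app names are illustrative):

    # one-time: add your server as a git remote
    git remote add piku piku@yourserver.example.com:myapp
    # from then on, every deploy is just a push; the server detects the
    # stack (requirements.txt, package.json, ...) and builds it
    git push piku main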


Thanks for the explanation; the official repo doesn't make it clear enough for me.

So, did I understand correctly that piku installs both an agent on the remote machine and a commit hook on the local machine? Why didn't they minimize the overhead by just making the remote machine a git remote and doing all the work there when you push a specific branch to it?


You’re confusing things: there is only the remote; the local machine doesn’t need anything. We do have a simple CLI you can run locally, but all it does is `ssh remote <command>` to scale workers up/down, change settings, etc.


Thanks for clarifying!


piku installs an agent on the remote machine (piku.py), which also provides the support for making that machine a git remote.

There is no commit hook on the local machine. On the local machine, you simply have a shim named "piku" which essentially runs "ssh remote /path/to/piku.py $@" to control the remote machine.
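A minimal version of that shim might look like this (host and path hypothetical; the real shim handles a few more details):

    #!/bin/sh
    # forward all arguments to the piku agent on the server
    exec ssh piku@yourserver.example.com ~/piku.py "$@"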


Thanks for clarifying!


This is now linked from the docs home page.


This looks good, and Dokku has been very solid for me, but removing the Docker dependency means that now I'm beholden to my OS's choices. For apps that might run for years without maintenance, that's not ideal, as you'll quickly find you need a specific version of the OS for them.


A different niche than Piku but I will give Dokku another vote.

I've upgraded my dokku install across 3-4 Ubuntu LTS releases so far, and it's been problem-free for my use case of hosting little side projects on a VPS.


Have you tried cloud native buildpacks? I posted a link to a tutorial on the top level.


Sometimes docker is overkill and I'm so glad something exists that doesn't require it.


You can use docker with it: I have a couple of things with "docker run" statements in the Procfile, though of course it’s not designed for that.
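For example, a Procfile entry along these lines works, even though it bypasses piku's own runtime handling (image name hypothetical):

    worker: docker run --rm example/myapp:latest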

Most of the deployments I got wind of are on extremely stable distros - typically LTS versions where you will not need to upgrade your runtime every six months (and my website has been running in it for at least two Ubuntu LTS releases…)

But you can trivially use pyenv/nvenv/etc. by just setting environment variables. My home automation system now needs two different Node versions, and I have one set per app.


Oh yes, I definitely use LTS distros, but my longest-running apps are from 2008, so even LTS won't cover that.


That depends on your tech stack. I have Perl CGI and Java apps that have been running unchanged for two decades. And the only thing I ever had to change on Debian over that time was adding HTTPS (Let's Encrypt) and SPF/DMARC for email.


Yeah, but my point is that you have to upgrade your OS. If you never change anything, obviously you don't need to worry.


My point is that OS upgrades don’t have to break tech stacks, and don’t tend to with runtimes that care a lot about backwards compatibility like Perl and Java. I did regularly upgrade Debian across those two decades.

IMO that quality should be the default, and I would choose my OS and tech stacks accordingly.


Don't they link against static libraries? How do they do that?


The runtimes are part of the Linux distribution and get upgraded along with it (and receive continuous security updates along with it), while maintaining backwards compatibility for the application code (Perl scripts or Java bytecode). Tools like needrestart will notify when a process needs to be restarted to take advantage of the update.
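For example, on Debian/Ubuntu:

    # report which services are still running pre-upgrade binaries or
    # libraries, without restarting anything ("l" = list-only mode)
    sudo needrestart -r l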


Ah, all your dependencies are in the language you're using? Some of mine use dependencies that are written in compiled languages.


Not necessarily, but they are part of the Linux distribution.


Well, I don't know about you, but my dependencies have often been built against a static library from a different version of the OS, so they wouldn't work on mine.


OS updates are important sometimes. Security and all...


At -some- point you actually need to update things. If you're using a 2008 docker container you have all manner of bugs and security issues.


I agree. But 2008 is old enough that exploits may be lost to time. I recently pentested a client and had great trouble connecting to an old service still using SSL (I think it was 2.0). Every modern tool straight up refused to connect with no way to override that; the oldest static curl binaries I could find were still too new; and I couldn't easily compile curl from source because the dependencies had also changed in the meantime. Finally I found an ancient docker image that worked.

The service was ironically so old that no modern vulnerability scanner or programming language would be able to connect!

This made me seriously ponder the fleeting nature of the modern world: SSL support was everywhere 15 years ago, and now I, as an expert, had great trouble using it. What chance do we have in 100 years?


But at least the attack vectors are limited


Yes, limited to those that work 100%!


Question - how can dependency hell be solved when using such a tool?

It seems so elegant and I love the "it just works" attitude. I understand that docker can't be used everywhere due to its technical (and mental) overhead, but I love it because it lets me isolate everything and freeze it in time, so running a container 5 years from now "just works".

In my humble workflow, I'm using lazydocker to manage the containers, a GitLab workflow (action?) for deployment on push, and a small VPS to build and push the containers to the GitLab registry and to run them, on the same VPS. It's a little bit overkill: I could use a combination of a Dockerfile and a compose.yml with docker compose build. Also, I haven't figured out scaling yet. Good thing I don't need it! Otherwise I would swap docker for k8s and lazydocker for k9s.

(I'm open to suggestions. I just got into devops, and I love it!)


Personally I use the same approach as piku, but instead rebuild my NixOS config on push. My projects use nix flakes, so I get something that I know will run both on my server and on my local machine with the full development environment. No containers needed, technically, but I use systemd-nspawn to run the software in its own sandboxed namespace.

My entire server is then managed declaratively, so if I want to add a new project, it’s like 3-5 lines of Nginx config and a push, that’s all. Something goes wrong? Just revert the commit.


This sounds super interesting! Do you have an example of such a config somewhere, that you can share?


I did a write-up of setting up nixos with git deploys here: https://mccd.space/posts/git-to-deploy/.

As far as the NixOS config is concerned, there is nothing crazy in it. It is just a regular nix config with nginx (https://nixos.wiki/wiki/Nginx). You can see there that adding nginx is just four lines of code, ~eight with SSL. Use it to proxy to your applications, which are set up as systemd services (https://wiki.nixos.org/wiki/Systemd/User_Services).

Edit: btw I'm a big fan of asciinema! ty for making it. :)


Question, could one use piku for that? (Would it be able to rebuild nixos on each commit?)


I use docker compose + traefik. It's nicer than dokku for me because there are fewer magical abstractions.


I use nix via jetify devbox. Maybe something like that could help here.


Nix would actually be fantastic for this, but I've never been able to get it to work (including with Devbox and a few other such solutions). I might try again, thank you.


Not related to “git push” deployments, but absolutely related to the PaaS experience, the team I’m working on is previewing Cloud Native Buildpacks (CNB) which is an open spec in the CNCF for Buildpacks that target OCI.

What this means is that you can now generate a docker image locally using build tooling similar to Heroku’s “git push” logic, which detects language support and does the right thing™. Here’s a tutorial for building a Rails app with the buildpack I maintain: https://www.schneems.com/2024/05/01/build-a-ruby-on-rails-ap...

Would love some feedback, if you try it. Please consider posting about the experience in the linked discussion (good, bad, indifferent, or whatever I just want more feedback to improve the experience).


First time I read about piku. I have no idea why, but the feeling of `git push` to initiate a deployment like piku does always felt magical to me. There's nothing simpler than that.

This is timely for me as well as I just open sourced (yesterday!) a project that is in the same space, but for Kubernetes (https://github.com/pier-oliviert/sequencer).

All of this to say, congrats! It looks great.


It works like magic, but it's also extremely simple to DIY if you wanna learn.

If you set up a server, you can create a git repo by just doing `git init` (non-bare, so there's a worktree for the hook to update), then add the setting `git config receive.denyCurrentBranch updateInstead`.

After that you can use git hooks (more specifically the push-to-checkout hook) to receive uploads, compile, and launch. The hook is just a simple shell script; the most basic version could be a variant of `compile && install && systemctl restart service`.
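A minimal sketch of the whole server-side setup (repo path, build step, and service name are all hypothetical):

    # on the server: a non-bare repo whose worktree is updated on push
    git init /srv/myapp && cd /srv/myapp
    git config receive.denyCurrentBranch updateInstead

    # the push-to-checkout hook replaces the default worktree update,
    # so it must sync the checkout itself before building
    cat > .git/hooks/push-to-checkout <<'EOF'
    #!/bin/sh
    set -e
    git read-tree -u -m HEAD "$1"    # update worktree to the pushed commit
    make build                       # hypothetical build step
    sudo systemctl restart myapp     # hypothetical service
    EOF
    chmod +x .git/hooks/push-to-checkout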

From there you'll be able to clone the repo locally, and pushing your changes will trigger the hook you've set up:

    git clone root@yourserver.com:/path/to/git/folder


You just described Piku, except that it’s a Python script that also sets up nginx and a process supervisor for your code :)


Yeah I love the simplicity of Piku, being able to actually understand what is happening behind the scenes is a great quality. :)


I've been doing almost exactly this. Have set up Ansible to automate it.

Why would I want to use Piku? Would it give me some benefits I currently don't have?


I guess the benefit of piku is ease of use for developers who don't know a lot about system administration/infrastructure.

Spinning up a server and installing a repo on it is easy. Depends on your use case and on what you know/have.

I prefer ansible or jenkins+scp-build-to-server+run-deploy.script

I added it to my tools list in case I need something quick and working for a small team, or to recommend when there's no Ansible/sysadmin knowledge available.

(I haven't looked into piku, but I guess you'll hit its limitations once you have more complex deployment schemes, privilege/access management, ...)


Maybe I'm missing something obvious, but how does sequencer use git to do deploys, if it's similar to Heroku/dokku/piku? It seems like you're dealing with Kubernetes templates and kubectl rather than `git push` to deploy, which would put the project in a completely different space.


Very happy to see this here - check out our freshly revamped docs at https://piku.github.io/


The new docs look great!


Is this the successor to Dokku? I didn't know you had a second project.


Nope, just took inspiration from it because I couldn’t run Docker on some of my targets.


Great to see the updated docs.


The initial commit was eight years ago??

I wish I had known about this project ~18 months ago. I was specifically looking for a way to have a Heroku-like dev experience deploying to my Raspberry Pi, and this looks like it's trying to be exactly that.


Exactly. There's a visibility problem. I've just set up a new VPS with CapRover and never found any mention of piku in the hour I spent checking comparisons of "Heroku-style self-hosted PaaS" options: dokku, CapRover, Coolify, and Dokploy.


We’ve been using it for a long time, yes, but doing marketing for a 1500-LOC Python script felt a little overblown :)

Still, Chris did a public presentation on it near the beginning (video’s in the docs) and other folk did similar things, so…


Maintainer and co-author here. If you like simple, minimalist deployment tools, check out https://github.com/rcarmo/ground-init for a very much down to earth take on cloud-init…


Your README doesn't really answer the question: why not cloud-init?


You can’t use cloud-init on already installed systems.


"Can't" is a bit strong; "shouldn't", I guess?

https://cloudinit.readthedocs.io/en/latest/howto/rerun_cloud...


Does anyone know how it handles zero-downtime deployments (if at all)? Like, if your Python service is running on one machine on port 8080 behind nginx, how does piku switch to a fresh instance running on the same port?


Currently it will only kill running processes after it finishes deploying the new git push. Socket and session handling will vary depending on your code and whether you use uwsgi or run your own HTTP daemon.

One thing it already does (optionally) is kill off unused instances and idle them, lazily starting them again when a new connection comes in.


Slightly off-topic, but you can do zero downtime deployments using systemd and socket activation.
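A rough sketch (unit and binary names hypothetical): systemd holds the listening socket, so while the service restarts, new connections simply queue instead of being refused.

    cat > /etc/systemd/system/myapp.socket <<'EOF'
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target
    EOF
    cat > /etc/systemd/system/myapp.service <<'EOF'
    [Service]
    # the app must accept the inherited socket (sd_listen_fds/$LISTEN_FDS)
    ExecStart=/usr/local/bin/myapp
    EOF
    systemctl daemon-reload && systemctl enable --now myapp.socket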


That gives me a couple of ideas... But picking a shorter name than "piku" is going to be hard... Maybe I can whip up a proof of concept and call it "syd".


I like Epinio, which does the same but on top of Kubernetes. It is backed by SUSE and lightweight compared to Knative (which is the basis of GCP Cloud Run, for example), but being Kubernetes-based it still requires more resources than Dokku or Piku. I still prefer k8s due to the vast ecosystem of mature solutions, and I can still run everything on a single box; it just needs to be a bit bigger. The new Hetzner CX42 with 8 vCPUs, 16 GB of RAM, and 160 GB of disk space for € 16.40 a month (€ 0.0273 per hour) is sufficient, and with the Kube-Hetzner project I can set up a Kubernetes cluster with auto-updating MicroOS in 5 minutes.

https://github.com/epinio/epinio/

https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...


> I like Epinio which does the same but on top of kubernetes

So basically not at all the same? :D

The point of piku seems to be: Heroku experience without requiring docker, and with a really simple architecture, and that it works on ARM.

Kubernetes works on ARM, I give you that. But AFAIK, Kubernetes requires you to use some sort of containers (Docker or otherwise) and its architecture is anything but simple (for obvious reasons).

Besides that, I don't see how Epinio enables the "git push" workflow; the quick-start tutorial seems to tell you to run "epinio push manifest.yaml" or similar to deploy the application, so it doesn't fit the "Heroku-like experience" either.

So really, the only things they have in common is that they handle deployments?


You are right that it needs Docker, but so does Dokku, and I see this more as an implementation detail (even a plus in my book, for the flexibility). Epinio admittedly does not support git push, but for me "epinio push --name myapp" feels similar enough. In the end, I can just push my Django or Next.js or Rails or Node.js code to the server.

I should have said "I like Epinio as well", because I also like Piku, especially for its minimalistic approach and readable code; but when it comes to actually using it for deployments, I prefer Kubernetes.



With PHP, 1-line (no new tools):

  sftp user@host remoteFile localFile
Joking aside, I’m a bit surprised such a tool would be developed in Python, given its dependencies and runtime (which are not easy on the user).


To be honest, Python made it stupendously simpler than anything else because it has a great standard library. The only dependency (click) is rock solid and made it a lot simpler to handle commands independently; we could probably do without it and just use the built-in argparse, but at the expense of a few more lines of code I didn’t want to maintain.

Also, Python is everywhere, on every OS and Linux system, so it was a natural choice. I also wanted it to be easily hackable and extensible, and few languages would make it simpler to understand or extend.


That’s pretty funny. You may want to look a little farther afield to discover that the machines with Python are far from “all the machines” out there. Particularly production servers which, if they are run responsibly, are hardened with every extraneous bit of software removed.


I developed security software in Python that ran on 100k+ production nodes covering dozens of operating systems. They all had Python.


Counter-anecdote: none of my Linux PCs have python.


Debian comes prepackaged with Python. If there are distros that are good enough for a server almost out of the box, surely Debian stable is one.


Not sure who's to "blame", but I was super surprised a few days ago when I installed Kubuntu 24.04 (minimal) and Python was missing. It was fine, though, as I strictly use Python via pipx and miniconda, but still surprising.


Counter-counter-anecdote: my toaster has python.


I am sorry for your toaster.


I used to run Python 2 on OpenVMS in production. Python can have a pretty wide footprint if one looks around.


It's actually worth taking your joke seriously to compare and contrast:

- piku deploys via git rather than scp/sftp, but authenticates via ssh like those tools

- piku supports a number of runtimes, including Python, Ruby, Node, Go, and Clojure. The runtimes are implemented rather simply; you can add your own rather easily. See examples here in the code: https://github.com/piku/piku/blob/8777cc093a062c67d3bead9a5d...

- For each runtime, a mechanism is used to install and isolate package dependencies (requirements.txt in Python, Gemfile in Ruby, package.json in Node, etc.)

- a Procfile and ENV file are used to declare your application entrypoints and envvars, akin to Heroku / 12 Factor App ideas (see the sketch after this list)

- a CLI (ssh shim on dev client machine) is provided for checking status and logs from the client (as well as stop/start/restart)

- since all applications are managed via uwsgi on the remote, there is also support for worker/sidecar processes and cronjob-style scheduled tasks

- HTTPS via Let's Encrypt (acme.sh) is handled automagically for web apps
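To make the Procfile/ENV item above concrete, a hypothetical app might ship a Procfile like:

    web: gunicorn app:app
    worker: python worker.py

and an ENV file alongside it (keys are illustrative; piku's docs list the supported settings):

    NGINX_SERVER_NAME=myapp.example.com
    DEBUG=false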

I describe more about how piku works in this tutorial:

https://github.com/piku/webapp-tutorial?tab=readme-ov-file#b...

You're right that PHP apps have a simple deployment story, and in a way piku brings something akin to this level of simplicity to other web programming runtimes.


You still need to install nginx and php-fpm and configure certs, so PHP is not that easy, unfortunately.


That's two lines in Caddy : ) (I do get your point of course )


That is brilliant. Something complex, but not complicated. A project distilled down to its UNIX essence: hackable, lean, and magic.

That said, I want to give this a go but don't immediately see how I can migrate my overengineered 8-10 container spaghetti of a docker-compose file to a state where I can use piku instead of a manual 'git pull && docker compose up' on the remote.


That kind of situation was what drove me to go simpler :)


Yes it's me, not you ;)

Currently hyping myself up to drastically simplify everything, which will be a joy unto itself.


Can't you use git hooks to automate the manual steps?


I use dokku for my side gigs and it works great. The one performance issue I experienced was that building my app's container dramatically increased the load on a $20 VM. So I migrated to a container registry: GitHub Actions builds and pushes the containers to the registry, and the container is then deployed directly on the dokku host. Does piku support that flow?

Another question is subdomain support: having a catch-all virtual host that responds to anything.domain.tld, with wildcard Let's Encrypt SSL via a DNS challenge.

Those two problems make me think that my side gig has grown up enough to switch to ArgoCD/K8s, although many other problems come with that (from my experience at the day job). For now I just rotate the certificates manually, which is not ideal but works with a couple of make targets.


> Those two problems make me think that my side gig has grown up enough to switch to ArgoCD/K8s

If you have to think about it, it isn't worth it. You'll know when it is time to refactor your infra.


15 years ago it was common to deploy web applications as live SVN repositories, with a hidden path that executed 'svn update' on a manual HTTP request.

Not quite 'push deploy', but that was the way apps were developed back in the day, and for some reason I still prefer that approach: commit, test, and at some point manually nominate and deploy a stable version.


Yes, when we didn’t want a build machine, we’d just build in production. Isolating production with no unauthorized binaries (like Alpine does) was a long way off…


You can do that with git push by having a separate "stable" branch and linking deployment to it.


I just added a magic URL to my app that GitHub calls whenever a commit is pushed; the server does `git pull`, which in turn causes pm2 to reload the app. So committing anything shows up in production in seconds. Great for smaller projects.


I have a similar setup, using snare to handle the webhook endpoint: https://github.com/softdevteam/snare

GitHub will call the webhook after a push to main and a successful test suite run. Snare runs a shell script on my server to git pull, build, deploy, and call a cronitor.io hook for monitoring deploy success.
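The script itself is nothing fancy; a sketch (paths, build/deploy steps, and the monitoring URL are hypothetical):

    #!/bin/sh
    set -e
    cd /srv/mysite
    git pull --ff-only
    make build     # hypothetical build step
    make deploy    # hypothetical deploy step
    # ping the deploy monitor so a silent failure gets noticed
    curl -fsS https://cronitor.link/p/EXAMPLE/deploy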

I've been pretty happy with how relatively simple it is and how well it works.


How did you set this up? Seems simple yet effective.


Can it be a good replacement for Capistrano (for deploying rails applications)?

Love the focus on being lightweight

Recently I wanted to create a super basic website, and discovered it’s actually pretty hard to create something simple

And then, even if you manage to create something actually simple, you usually end up having to manage a not-so-simple deployment process, together with (hopefully) some sort of version control.

Ended up settling for putting plain html/css/js files in a git repo, then configuring auto deploy to GitHub Pages on merge to master (via Actions)


Also an option, if it's just for you and there aren't too many updates: upload the new files over FTP as a manual step.


Does GitHub Pages support FTP? Or are you talking about some other potential hosting options?

Yes, FTP is pretty easy for static sites. However, given that I want version control, it's nice to have automated deploys happen after a git push.


Use Podman Quadlet; I use it as a replacement.


This, but on a per-user basis, would be great.

But uwsgi performance overhead is a concern, although the last time I did anything with uwsgi was probably over a decade ago.

And last time I checked, Go required importing the uwsgi package; maybe not anymore? Or is uwsgi only used for Python here?

Also, I wonder how to define nginx routes, aka locations?


"pikku" means tiny or little in Finnish. Is that where the name came from?


I don't know but my first association was "pico-dokku"


My guess has been they both originate from heroku; docker heroku to dokku, pico heroku to piku


Cute; in the sibling language Estonian it means “big” or “tall”.



Interesting project! What are the advantages of this over pushing to a normal ssh server with a server-side git hook?


Came to ask this same question. I have a post-receive hook on my server that instamagically deploys whenever I push to it. It is simple and awesome and is basically just a builtin git feature.


Isn't it better to create a local Docker registry and then use Podman Quadlet with auto-pulled images to run apps?


Better in what way?


1. Fewer dependencies (only Podman and a registry are needed)

2. Rock-solid rootless systemd service management

3. Easy integration with systemd-socket-proxyd

4. Easy management of dependencies between containers (with healthchecks)

5. Rollbacks
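For reference, a minimal rootless Quadlet unit is a small INI file (image name hypothetical):

    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/myapp.container <<'EOF'
    [Container]
    Image=registry.example.com/myapp:latest
    PublishPort=8080:8080
    AutoUpdate=registry

    [Install]
    WantedBy=default.target
    EOF
    # Quadlet generates myapp.service from the file above
    systemctl --user daemon-reload && systemctl --user start myapp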


Sounds interesting! Is there any support for multi-node systems? Let's say I want an ingress Caddy proxy on one node that reverse-proxies to several backend APIs on other nodes; can this be done simply with Podman Quadlet?

Also, what is the local-dev UX like? With Docker Swarm I can easily run very similar setups in dev, test, and prod; it works with multi-node setups, has support for secrets, etc. But the lack of work going into Docker Swarm becomes more concerning as the years pass by.

Also, I had no idea systemd-socket-proxyd was a thing; is there anything systemd doesn't have its tentacles in? :)


If your VPS is wired to another one using a VPC or any other internal network, it'll just work. Just point Caddy at the internal IPs of your other servers.

It's not designed to work in local envs. When I wanted to debug the infra, I used to run it on Vagrant, though.


Does this all fit in 256MB of server RAM?


You can’t do that on tiny systems very easily.


Is there support for secrets?


You have to bring your own. I have some trivial deployments that fetch secrets from Azure Key Vault using either release hooks or app startup code.


Thanks


But what if you like 'big pass'?

Looking at you, Sir Mix-a-Lot:

https://youtu.be/wKfMZOR8eWI?t=31


Has anybody used this for Ruby on Rails?


Yes. Not any of the maintainers, though.


I think a more common use case than deploying by pushing to a separate remote is to send git repo webhooks (on PR merges to main) to an API that has a deploy key and can fetch the repo itself.

This afaik is missing from most PaaS tools (CapRover excluded, but it has been illegally relicensed to non-open-source). Perhaps watchtower or something could replace the functionality?


Actually, this is how I deploy my static websites: piku in lazy mode handles the GitHub hooks, pulls the source, renders the sites out to cloud storage, then kills all workers and idles again.


Does it support deploy keys, or are your website source repos public?


The piku micro-app that does the deployment is just a 10-line Bottle app that validates the GitHub hook and does a git pull with a private SSH key, so yes.


Are there docs for this setup?


It’s just a 10-line script; I’ll see if I can sanitize it and add it to the docs (one of the samples already does something similar, you can peek at the repos to get ideas).


That script sounds super useful!


Didn't know...

"CapRover has built in anonymous usage analytics starting v1.11"

https://github.com/caprover/caprover/blob/master/TERMS_AND_C...

https://github.com/caprover/caprover/issues/1852

I was looking at CapRover to see if it has a REST API.

Looks shady.


You scared me for a moment, as I've just set up a new VPS with CapRover and migrated all my projects from Heroku. It doesn't look too shady to me: there's a one-liner to disable analytics, which seems enough for me.


You still have to agree to the terms and conditions of the nonfree application, which can of course change at any time without notice. It’s a time bomb.

I’m thinking of forking it and adding all the dumb-and-easy table-stakes features (2FA etc.) that he is trying to gate as subscriptionware.


If you want to contribute any of that to piku, we’ll welcome it. It might take a bit to review and merge, but we’re always looking for non-breaking improvements.


It’s not even open core: the solo maintainer simply relicensed the entire repo to a nonfree license without the consent of the copyright holders of all the external contributions.


These self-hosted open source paas alternatives are really cool.

Off the top of my head I know of

Coolify, Dokku, Kamal,

and now piku


Don't forget CapRover. I'm just trying it on a new VPS and it works as expected. I would have tried piku first if I had known about it, because it's even more minimal.


Nice work. But why isn't Docker supported as a runtime? Or is it?


The FAQ explains it: https://piku.github.io/FAQ.html

You can use docker run commands, but that’s not the main goal.


I just "git push" using nixos-rebuild


Is go support planned?


It works with Godeps. Module support was always a bit in flux when we added that, but it should be an easy first contribution…


Eventually, we'll need something more secure than effectively `sudo curl INSTALLER | sh` as a way to install stuff. I can see why package managers aren't always the answer, but still.

piku itself is neat and I like it.


Actually, manual install steps were the only way to go for a while. You'd be surprised at how many people asked for a one-liner... I'm going to add a note to the docs about that, since I've seen a couple of comments here from people who were put off by it and didn't even read the rest.

I actually only install piku via cloud-init, but there are plenty more options: https://piku.github.io/install/index.html


    watch -n 1 git pull


Cool project, but I’ll stick with Dokku, which is a wonder for managing single server deploys via Docker/Git.


[flagged]


> set -e

> echo "Downloading piku-bootstrap here."

> curl -s https://raw.githubusercontent.com/piku/piku-bootstrap/master... > piku-bootstrap

> chmod 755 piku-bootstrap

> echo "Now you can install Piku on `hostname` like this:"

> echo "./piku-bootstrap install"

Did you look at the script?


In defense of their argument (not their choice of words): the final script does a `git clone`, so if they're going to ask the user to copy-paste anyway, the copy-paste code could be something like:

    # note: git verify-commit reports on stderr, hence the 2>&1 before grep
    PUBLIC_KEY_FINGERPRINT='SHA256:KnownPublicKeyFingerprintHere'
    git clone 'https://github.com/piku/piku-bootstrap' 'piku-bootstrap' && \
    cd 'piku-bootstrap' && \
    (git verify-commit $(git log -1 --pretty='format:%H' -- README.md) 2>&1 | grep 'Good "git" signature with ED25519 key '"${PUBLIC_KEY_FINGERPRINT}") && \
    (git verify-commit 'HEAD' 2>&1 | grep 'Good "git" signature with ED25519 key '"${PUBLIC_KEY_FINGERPRINT}") && \
    ./whatever-command-from-this-repo
(EDIT: Feel free to clean it up a bit, adding other variables to make it a bit more readable, etc.)

From the user's POV it's the same thing: copy from somewhere, paste into terminal, press enter. But now you're trusting two fewer places:

- piku.github.io (the deployed version)

- github.com/piku/piku.github.io (the source code that supposedly generates the deployed version)

I'm not saying to do exactly these steps, and they are not perfect anyway (you're still trusting a `git clone` before even doing any kind of verification), but at least you're not YOLOing[1] without first doing some minimum verification of integrity before something is executed, so you know at least the first installation step has not been tampered with[2][3].

The git repos don't have commit signatures, and the verification doesn't have to use specifically those kind of signatures (could be a minisign public key hosted somewhere), but you get the point.

The updates are also a `git pull`, meaning, if the commits were signed, and the installation copy-pasta (or script) included a step that added the author's public key to the allowed signers file (TOFU-style), e.g. with a `curl` to the author's GitHub's keys URL, then `git pull --verify-signatures` could check new commits against the already-known key, and automatically warn about any changes so the user could decide if trust the new key or not. Stuff like that.
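For SSH-signed commits, that TOFU flow could look something like this (author handle and principal are hypothetical):

    # one-time: record the author's public key as an allowed signer
    mkdir -p ~/.config/git
    curl -fsS https://github.com/AUTHOR.keys \
      | sed 's/^/author@example.com /' >> ~/.config/git/allowed_signers
    git config gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
    # afterwards, updates fail loudly if commits aren't signed by a known key
    git pull --verify-signatures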

[1]: Nowadays, saying "install with `curl | sh`, but you can also install it these other ways" is equivalent to saying "verify integrity with `md5sum`, but you can also check this other file if you want to see other hashes"; technically correct, but if the installation instructions already contain questionable defaults, it makes me question what other questionable defaults are in the actual code.

[2]: "Has not been tampered with" as in "the last modification of `README.md` and the most recent commit are signed with the same key". Not perfect, but it's not YOLO.

[3]: Emphasis on "first installation step". Obviously anything that happens after that simple sanity check would need more scrutiny in case the author tries to run unsigned/unverified code in one of the next steps.


The script is irrelevant.

It's the culture it attracts/creates when you start with this.

And while I didn't look at the script for the reason above, I did look at the site and found no high-level or architecture design links.

Good luck basing even your dev infrastructure on things like this.


It's a convenience and risk assessment matter, IMO. Looking at the repo I see stars in the 1000s, issues are well addressed, and of course the source is available. Lots of people have likely gone through the code already and no flags were raised, so it's pretty low-risk to use that one-liner. I'll definitely use it, as have others, as I just want to get set up and move on with minimal effort.


That is just one of the deployment methods. You definitely didn’t read beyond the first few lines on the page…

——-

There are 3 main ways to install piku on a server:

Use piku-bootstrap to do it if your server is already provisioned (that is what the TL;DR command does)

Use cloud-init to do it automatically at VPS build time (see the cloud-init repository, which has examples for most common cloud providers)

Manually: Follow the guide below or one of the platform-specific guides.

———

(I do all my deployments using cloud-init, where you can even inline piku.py into the manifest if you want to, and we have separate repos to demonstrate that.)


>You definitely didn’t read beyond the first few lines on the page…

I always do a mental `exit 1` upon encountering `curl | sh`. It's just a good practice.


JavaScripter?


What is a PaaS?


Platform as a Service.

Which leaves me with the same number of questions.

So, what is Platform as a Service?


A "Service" is when someone or something does something for you, usually in return for some fee.

A "Platform," in the context of IT and software (and especially the internet), is some IT infrastructure, generally a server and the software installed on it, that you can host something on, such as an app or a website.

A "Platform as a Service" is when someone else sets up your platform for you so that you don't have to do it yourself—you get access to the platform and can use it for your own stuff, but don't have to configure or maintain most of it.


Repeat after me: git, is, not, a, deployment, tool


Why? Not?


I think the main reason is that it's sensible to pass the source code through a process that organises and optimises it for release to a specific environment. My first assumption on seeing a git repo used this way would be that someone was cutting corners and probably doing bad things like committing secrets to the repo.

If the person setting it up is aware of the potential pitfalls and has a good explanation for the process - particularly if there is no build step involved and secrets are managed appropriately, then it can be fine.


You can do all that with a pre-commit or post-commit hook.


It goes further than that; those were just examples. The principle of least knowledge and the principle of least privilege steer deployment toward a process that does not put the source code on a production server. But like I said, there are ways for it to be a reasonable approach if properly justified.


Then you’re checking generated build artifacts into source control (which this git-push-based method relies on to work).



