Interestingly similar to, and interestingly different from, a project I wrote a while back called Hookah. Mine is specifically targeted at GitHub, though.
I actually might not have written mine if this had been around. I intend to play with this as we want to start handling webhooks from other services and I'm not really interested in growing the scope of Hookah.
I also wrote one a year ago called harpoon, after searching for one not involving nodejs or apache & PHP. It has a tunneling feature using localtunnel.me. It targets only GitHub.
Mine's called Hooknook. It takes care of updating the GitHub repository for you on push and then invokes a command inside the repository. https://github.com/sampsyo/hooknook
We'd played with a PHP script running behind nginx, but I wanted to be able to easily and reliably route requests, with ZERO config, and I wanted to write scripts in bash. The majority of our scripts are written in bash utilizing jq.
Now we can define where things route simply by where they go in the hierarchy and it's wonderfully simple to use.
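That hierarchy-based, zero-config routing idea can be sketched in a few lines. This is a hypothetical illustration, not Hookah's actual code; the script root and `.sh` convention are assumptions:

```python
import os.path

SCRIPT_ROOT = "/srv/hooks"  # hypothetical root of the script hierarchy

def resolve_script(url_path):
    """Map a request path like /deploy/frontend to a script in the
    hierarchy, e.g. /srv/hooks/deploy/frontend.sh, refusing any path
    that escapes the root."""
    rel = os.path.normpath(url_path.lstrip("/"))
    if rel.startswith(".."):
        return None  # path traversal attempt
    return os.path.join(SCRIPT_ROOT, rel + ".sh")
```

The appeal is that adding a new endpoint is just dropping a script into the right directory; no config file has to change.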
It was a fairly simple undertaking in Go, and we've been very happy with it overall.
When I started working on this, I couldn't find a simple tool that I could drop in and automatically redeploy my latest development branch on my code repo.
Configuration for this, especially the hook rule definitions, would be better in some DSL rather than JSON. Logic structured as JSON is very hard to read.
Edit: if you aren't married to configuration in a file, you could even do hosted JS or Lua functions kinda like a serverless environment
I agree; the biggest pain point I've identified in my tool is the actual configuration. It's okay for simple definitions, but as the number of endpoints grows, it becomes messy. I started work on a CLI tool that would be responsible for managing the hooks.json file, but I haven't had enough time to devote to it. (My wife, my daughter and my day job currently occupy most of my time...)
The only part that made me comment was the rule definitions. I will gladly write key/value-type config in JSON or YAML or whatever. But writing the rules logic in it would be a big pain, because it is not easy to see what deeply nested `and` and `or` statements mean.
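To make the readability gap concrete, here is a sketch of what nested rules-as-data look like, plus the tiny tree-walker that gives them meaning. The rule shape here is hypothetical, in the spirit of JSON hook-rule configs, not any tool's actual schema:

```python
# A hypothetical nested rule: deploy only on pushes to master by alice or bob.
rule = {
    "and": [
        {"match": {"parameter": "ref", "value": "refs/heads/master"}},
        {"or": [
            {"match": {"parameter": "sender", "value": "alice"}},
            {"match": {"parameter": "sender", "value": "bob"}},
        ]},
    ]
}

def evaluate(rule, payload):
    """Walk the rule tree; the config author is writing this AST by hand."""
    if "and" in rule:
        return all(evaluate(r, payload) for r in rule["and"])
    if "or" in rule:
        return any(evaluate(r, payload) for r in rule["or"])
    m = rule["match"]
    return payload.get(m["parameter"]) == m["value"]
```

In a DSL the same condition might read `ref == "refs/heads/master" and (sender == "alice" or sender == "bob")`, which is considerably easier to scan than the nested dict form.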
JSON is definitely nice for the time-vs.-effort balance; you've effectively got the users writing the AST directly. And building tooling to make it easier to write and manage is good.
I've had a couple of projects recently where I was writing a parser, and there is a lot of tooling to make it easier: lex/flex, yacc/bison, PEG-style tools like Python's Parsimonious [0]. (I'm sure there are similar libraries in JavaScript.) Defining the grammar is a bit arcane, but the only other thing you write is the interpreter, which you already have a version of (just with a different AST format).
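Even without a parser generator, a rule DSL this small can be hand-rolled. The grammar below is hypothetical (`and`/`or` over `key == "value"` comparisons, with parentheses); the point is that the parser's output is the same kind of nested-dict AST that a JSON config makes users write by hand:

```python
import re

TOKEN = re.compile(r'\s*([()]|==|"[^"]*"|\w+)')

def tokenize(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(tokens):
    # grammar: expr := term ("or" term)*
    #          term := atom ("and" atom)*
    #          atom := ident "==" string | "(" expr ")"
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "or":
            rhs, i = term(i + 1)
            node = {"or": [node, rhs]}
        return node, i
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "and":
            rhs, i = atom(i + 1)
            node = {"and": [node, rhs]}
        return node, i
    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1  # skip the closing ")"
        name, value = tokens[i], tokens[i + 2]  # tokens[i + 1] is "=="
        return {"match": {"parameter": name, "value": value.strip('"')}}, i + 3
    node, _ = expr(0)
    return node
```

The interpreter you'd pair with this is the same tree-walk you'd already need for JSON rules; only the surface syntax the user types changes.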
I would suggest some sort of generator instead of creating a CLI; if you have many endpoints, a CLI will be a chore.
As an example: we worked on something similar for a school project in Python, where you would simply write classes for each thing, which would then create a JSON file.
In your case, that might not be what you want if you don't want to tie yourself to a single language, though.
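That classes-to-JSON generator idea can be sketched in a few lines. All class and key names here are hypothetical, not the school project's or any tool's actual schema:

```python
import json

class Match:
    def __init__(self, parameter, value):
        self.parameter, self.value = parameter, value
    def to_dict(self):
        return {"match": {"parameter": self.parameter, "value": self.value}}

class And:
    def __init__(self, *rules):
        self.rules = rules
    def to_dict(self):
        return {"and": [r.to_dict() for r in self.rules]}

class Or(And):
    def to_dict(self):
        return {"or": [r.to_dict() for r in self.rules]}

rule = And(Match("ref", "refs/heads/master"),
           Or(Match("sender", "alice"), Match("sender", "bob")))
print(json.dumps(rule.to_dict(), indent=2))  # emit the nested JSON rule
```

You write the readable object tree once and let the program emit (and re-emit) the deeply nested JSON, so nobody edits the generated file by hand.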
I think that using some DSL rather than JSON will make it more obscure, since a lot more people are familiar with JSON coming from front-end development and/or Node. If you want a big userbase and more potential contributors, it's good to stick to what everybody is using.
(Reading my own answer I would be swearing at anyone that told me that about an XML config file a couple of years ago)
For the others who don't know the history... (despite your added comment)
You can take what you've said, replace "JSON" with "XML" and you've got an explanation for Maven and all the other Java tools with a ridiculous amount of logic shoved into some generic markup format that was never intended for it. History has shown that it sucks to deal with. XML was the hot, "obvious", "everybody knows it", format of its day. I agree that JSON is, in most uses, a hell of a lot better, but, like XML it is a terrible place to be defining logic.
It really needs to be something that is actually designed to encode logic, not a data-structure format. A custom DSL is a decent, and probably very easy (for users), idea, but if the author is really not interested in the design and debugging of such a thing, then you could use Lua or some other embedded language.
Seems cool! Given the example talks about "redeploy," I assume this was built with longer running tasks in mind? Is there a way to see what hooks are processing and get output from them (or send that output somewhere)?
The -verbose flag logs pretty much everything you need to stdout & stderr: when the hook gets triggered, what arguments and environment are being passed, which command was executed, what the output was, etc.
It is also possible to wait for the command to finish and return the response as part of the HTTP response, or just return 200 Ok and run the script in the background.
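Those two response modes can be sketched with plain subprocess and threading calls. This is an illustration of the behavior described, not the tool's actual implementation:

```python
import subprocess
import threading

def handle_hook(command, wait=True):
    """Either block until the hook command finishes and return its output,
    or fire it off in a background thread and answer 200 OK immediately."""
    if wait:
        result = subprocess.run(command, capture_output=True, text=True)
        return result.stdout  # becomes the HTTP response body
    threading.Thread(target=subprocess.run, args=(command,), daemon=True).start()
    return "200 OK"
```

The trade-off is the usual one: waiting gives the webhook sender real feedback but risks hitting its delivery timeout on long deploys; backgrounding responds instantly but pushes all observability onto the logs.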
Yes. I believe that it's better to be verbose than obscure. I'm of the belief that it's easier for someone to understand the original intention of "pass-arguments-to-command" than of an "arguments" keyword. The verbosity shouldn't affect performance significantly.
Verbosity is sometimes good, but in this context, where all you are doing is giving commands and giving them arguments, there are not so many things that can lead to confusion.
They are a little verbose, though that is likely not to cause problems here. I suppose it may slow things down a very tiny bit. The author could have been trying to future-proof it with a bit more verbosity, allowing for other keys (execute-mode, execute-permissions, etc.).
Sigh, it's HTTP by default. I get that HTTPS is more setup and it's nice to have something that works out of the box with minimal configuration, but the idea of command execution over an insecure channel makes my eye twitch.
I don't like insecure defaults. I might remember to take the steps to secure this thing, but not everyone will. They might forget, they might not realize, it might be an innocent mistake, but it will happen. It shouldn't be possible to send commands over an insecure channel.
I'm also a bit concerned about the authentication story. You can require the sender to provide a hash signature, but you have to specify the secret in plain text in the webhook config file. That's not best practice. Better would be LDAP or Kerberos integration. And again, there should be secure defaults.
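For reference, the hash-signature check being discussed is typically an HMAC over the raw request body, GitHub-style (`X-Hub-Signature-256: sha256=<hex>`). A minimal verification sketch with only the standard library:

```python
import hashlib
import hmac

def verify_signature(secret, body, signature_header):
    """Compare a GitHub-style 'sha256=<hex>' signature header against an
    HMAC-SHA256 of the raw request body, in constant time."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Note this only authenticates the sender; the shared secret still has to live somewhere on the server, and the complaint above is that it sits in plain text in the config file rather than in a secrets store.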
https://github.com/donatj/hookah