In fact, reading the project's motivation in the README, it seems the problem it solved was the authentication part, which it replaced with... no authentication?
It's just an extra thing to install, update, maintain and hack.
- less error prone
For some use cases, these can feel bloated, so I understand there could be room for a more minimalistic option.
I'm not against generating configuration, but that should not require an always-running instance.
I don't understand why you think a GUI necessitates losing version control though, or why you think version control is a "feature" of text files.
If you need version control of always-running live-changing nginx configuration then you can run `nginx -T` to dump the parsed configuration to stdout. Just save that to version control.
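As a sketch of that idea, a small script that snapshots the dumped config into git (the repo path, filename, and commit message are my own assumptions, not from the comment):

```shell
#!/bin/sh
# Dump the fully resolved nginx configuration (all includes expanded)
# and commit it if anything changed. Assumes /srv/nginx-config is an
# existing git repository; the path is just an example.
set -eu

# nginx -T writes the config dump to stdout and status lines to stderr
nginx -T > /srv/nginx-config/nginx-full.conf 2>/dev/null

cd /srv/nginx-config
if ! git diff --quiet -- nginx-full.conf; then
    git add nginx-full.conf
    git commit -m "nginx config snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
fi
```

Run from cron or a deploy hook; `nginx -T` also validates the config, so a broken config aborts the script before anything is committed.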
I wouldn't say it is "state of the art". Solaris had containers in 2004/2005.
state of the art (phrase)
the most recent stage in the development of a product, incorporating the newest ideas and the most up-to-date features.
"a new state-of-the-art hospital"
something can be quite old, but if nothing has replaced it, it's still "newest" and "most up-to-date"
(The key insight here is that "containerization" in the modern sense is about norms/expectations/practices and an ecosystem of tooling around a particular design that's convenient for certain use cases; it's not about a technical facility in the OS.)
Maybe do script injection by doing some fancy file loading? Not sure how much the nginx config language allows.
That would probably not mean that the infrastructure is in git as code, though.
I don't mean this to be offensive, but my personal approach is to fix the underlying issue.
If you add more tests etc., you will have to do that less often, until it is no longer the norm.
Even if you don't use Kubernetes or Docker images, you do want to have it in code, right?
Having something like Ansible would do the same thing, but lets you back up your config with git and recreate the environment without any issues.
How do you keep your mean time to recovery low? How do you document your setup? How do you run automated tests if you change your config manually?
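To sketch what "something like Ansible" could look like here, a minimal playbook that templates an nginx vhost out of git and reloads on change (all hosts, paths, and the template name are hypothetical):

```yaml
# playbook.yml - hypothetical minimal example
- hosts: webservers
  become: true
  tasks:
    - name: Deploy nginx site config from a template kept in git
      template:
        src: templates/example.conf.j2
        dest: /etc/nginx/conf.d/example.conf
      notify: reload nginx

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```

The handler only fires when the rendered file actually changes, so repeated runs are idempotent and the playbook itself doubles as documentation of the setup.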
I have done this for 10+ years and it's still working flawlessly; I don't see a reason to add more complexity to something that is working fine for my needs. I don't have to change config often, maybe once a year or so.
I would do it due to:
- keeping 'mean time to recovery' low by having my configs in git -> fast recovery
- using something like Ansible for your small case to have it in git (instead of a shell script) -> backup
- perhaps a Jenkins pipeline that tests the config before applying it automatically -> quality
- automation for the fun of it, seeing what is out there besides ssh and manual editing
At the end of the day, I assume this automated process is also less dependent on me. Otherwise, I'm not sure who does things when you are on holiday or sick.
I now run a small k8s cluster at home. Nothing else has let me keep my whole setup in git so easily, with auto-healing, domain management and high availability.
If anything, this tool makes editing easier and may lead to more misconfiguration.
sudo vi /etc/nginx/...
Mind you, I've had more trouble with that with Apache given how they changed their config files around on multiple occasions.
Have we already forgotten how Microsoft Exchange singlehandedly launched an entire spam industry?
In case you didn't know, the configuration parsing library we created for the NGINX Amplify agent is open source under Apache 2.0:
https://github.com/nginxinc/crossplane (credit to @aluttik who was the primary author)
It parses NGINX configuration files into JSON and vice versa.
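For example, via its command-line interface (subcommand names as I recall them from the project's README; treat the exact flags as an assumption):

```shell
# Install the parser (it's a Python package)
pip install crossplane

# Parse a full nginx config (following include directives) into JSON
crossplane parse /etc/nginx/nginx.conf > nginx.json

# ...edit nginx.json programmatically, then rebuild nginx syntax from it
crossplane build nginx.json
```

This round-trip is what makes programmatic editing of nginx configs tractable: tools can operate on the JSON payload instead of hand-parsing the config grammar.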
Edit file, reload... Voilà
Nonetheless, they had a problem, they fixed it, and they shared the solution when they could've just kept it to themselves.