
Caddy is amazing, but on production machines remember to disable the JSON-based admin API: it is unauthenticated, enabled by default, and bound to localhost:2019, and it can be a serious security risk in certain deployments.

Put the following at the top of your Caddyfile, in the global options block, to disable it:

  {
      admin off
  }



PSA: systemctl reload caddy calls this API. If you disable it, reloading the server will no longer work.
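
For reference, Caddy's official systemd unit implements reload by re-pushing the config through the admin endpoint (paths as shipped in the upstream packaging; your distro may differ):

    [Service]
    ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
    ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force

`caddy reload` parses the file and pushes it to the admin API, which is why `admin off` breaks it.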


Too good for SIGHUP? I deplore an API when signals would work perfectly fine.


Signals don't work on Windows, and we want a unified API for all platforms. Also, signals can't carry arguments, which are necessary to push a new config. At runtime, Caddy doesn't know where the config came from, because config is just data.


Apologies in advance, I'm not trying to be mean spirited or too critical. I'm limited in my ability to express at the moment, on mobile.

The API can still be there; I'm just asking for better integration where feasible: signal handling on Linux and similar platforms.

It's silly to tell my init process to go out 'to the network' to do something it can do directly against the child.

I would not expect turning off an admin API to effectively limit my way to administer the process.

Services will generally ordain a path for a config, overridable with arguments. The same file used then is what is re-read on reload.

Changing arguments or the command line during a reload isn't a thing; that's restarting. We give it a config file as an argument (or an implicit default) so that it can be reloaded.

It's uncommon to start a process with one file, decide you want a new file path, but keep the PID.


See https://news.ycombinator.com/item?id=37482096; the plan is to switch to a Unix socket by default in certain distributions of Caddy.

But it won't be possible to add signal support. We've thought hard about it, but it's simply not a fit. There's discussion on GitHub: https://github.com/caddyserver/caddy/issues/3967


Sockets will definitely be appreciated! It leaves some corner cases I can imagine, but it's definitely a step in the right direction.

i.e.: with the admin interface disabled, I can't reload to bring it back... because reloading depends on it.

With sockets we gain a permission model; a user or another service that merely happens to be in the 'localhost' scope can't do funny/scary things.

Thank you for the discussion, I'll give it a read - have a meeting then I can finally use my computer to 'catch up'


Earlier versions of Caddy were just a single binary that accepted signals to reload; the newer versions add a bunch of process-management stuff that just got in the way of our existing tooling (...why remove the signals? ugh!), so we just switched back to Nginx.


The only signal we don't support anymore is USR1. (It's not a powerful-enough API for config reloads.) That was why you switched your entire web server stack?


This default behaviour makes sense. If you're going to use it for hosting in production, read the documentation; it's trivial to disable. If I recall correctly, AWS has a similar default for some services. That is, access to the subnet (VPC) gives you full access to the attached service, no password required.


Disagree, to a degree. It's fine to offer this for extended use cases (e.g. restarting from a second, trusted host).

It would be more appropriate to handle signals, particularly SIGHUP. That's how most services have been handling reloads.

It's fine to offer an admin API, especially if I want a peer to be able to affect the local instance, but this shouldn't be the position init is placed in.

Put simply, the init process is what we depend on if everything else fails.


And Mongo, and many other packages with insane defaults.

What if rm by default just deleted everything, because it assumes that makes sense? Stupid comparison, I know - also a stupid default.


Do you recall which AWS services are like this? Thinking I better check a few things!


If you have any old S3 buckets: bucket index listing used to be enabled by default (quite a long time ago, in relative terms).


Thanks!


At least, have it respond only to authenticated requests. Caddy supports client certificate authentication:

https://caddyserver.com/docs/json/admin/remote/access_contro...


Caddy also supports Unix sockets, which should be rather more difficult to smuggle requests to, and which can be protected by file permissions:

    {
        admin unix//var/run/caddy/admin.sock
    }
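
For anyone configuring Caddy with native JSON rather than a Caddyfile, the equivalent (assuming the same socket path) would be:

    {
        "admin": {
            "listen": "unix//var/run/caddy/admin.sock"
        }
    }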


Honestly, this is what the default should be (if they really must leave the functionality enabled by default). I still can't fathom why that isn't the case!


Caddy maintainer here: we're looking to move to a Unix socket by default for Linux distributions. See https://github.com/caddyserver/caddy/issues/5317; the plan is to set this env var in the default service config, but I'm trying to be careful about backwards compatibility, so I haven't pushed the change to our deb package yet. Will likely do it soon.


I'll see about getting it made the default for the FreeBSD port at least.


I would imagine it's so that the default behaviour can be identical across platforms.


I imagine it's for Windows users. But yes, it could very sensibly be the default on Unix.


While you're right, remember that there are usually additional layers of security (and if not, there should be). At the network level, you would only allow ports 80/443 to reach the machine. And if you use a containerized deployment, you would only expose 80/443 as well.


If your application can be used to make outbound requests to the internet (and so many apps can be), you can easily make a GET against localhost. There are ways to lock that down, but they aren’t automatic.


Just another weird and stupid default waiting to be exploited.


Works on localhost. It is not a big deal.


This is how trivial bugs turn into full-fledged threats. Increasing attack surfaces without any justification is bad cyber security.


If you're hosting your applications on localhost it can be a security risk.

A blind SSRF vulnerability (with payload control) in your application could be used to gain full control over the reverse proxy resulting in the attacker gaining full unfettered access to your network.

If you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it, see: https://owasp.org/Top10/A05_2021-Security_Misconfiguration/


It is absolutely a big deal. Any server software should be secure by default, period.


If your server reaches out to user-provided URLs, it can be a big deal. Especially with DNS rebinding: remote attackers can bind domains to 127.0.0.1, which bypasses CORS-like protections.


We mitigate both DNS rebinding and cross-origin in the admin endpoint by verifying Host and Origin headers -- by default.
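
The shape of that Host-header check, as a simplified sketch in Python (Caddy's actual implementation is in Go, and the exact allowlist here is illustrative):

```python
# A DNS-rebinding request does reach 127.0.0.1 at the TCP level, but its
# Host header still names the attacker's domain, so a Host allowlist stops it.
ALLOWED_HOSTS = {"localhost:2019", "127.0.0.1:2019", "[::1]:2019"}

def host_allowed(host_header: str) -> bool:
    return host_header.lower() in ALLOWED_HOSTS

print(host_allowed("localhost:2019"))     # True: legitimate local client
print(host_allowed("evil.example:2019"))  # False: rebound domain is rejected
```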


Alternatively don't serve your site over HTTP at all. Just redirect to HTTPS.

Edit: I just checked the Caddyfile for one of my sites. There is no config for redirecting HTTP to HTTPS; it does it automatically. So this is entirely unnecessary.


No, what I'm talking about here is the unauthenticated JSON-based configuration API that Caddy hosts on port 2019 on localhost of the machine it runs on.

This is unrelated to sites served over HTTP. I was clumsily using the term "HTTP" to refer to the fact that this configuration mechanism is based on HTTP communication.


So if I understand this correctly, anyone can bring down a site behind a Caddy server just by running `curl -X POST "https://example.com:2019/stop"`? [0]

Seems counter to their objective of having secure defaults.

[0] https://caddyserver.com/docs/api#post-stop


No, by default it listens on localhost. So only processes running on the same machine can connect to that port.


Okay that makes sense. So why would you bother disabling this thing?

I'd imagine if someone already has local access to the server, it's already too late.


Not really. If someone logs in as user A on the machine and Caddy runs as user B, then unless A has sudo access, A cannot modify Caddy. But with this admin HTTP endpoint, user A can now arbitrarily modify Caddy.


This does kind of raise the question: who is sharing their load balancer / reverse proxy?


That's true, but I think if your production web server is running on a system that you expect to have other users log into and do things on while having the Unix permissions prevent them from interfering with the production server, then your whole architecture and process is deeply broken far beyond the ability of any Caddy design decisions to address.


That's another really good point, even if it's less common these days to see this type of shared machine.


Most people would expect `sudo` and `curl localhost:2019` to be very different permission levels - yet a curl with POST payload `-d '{"admin":{"remote":{"listen":"0.0.0.0:2019"}}}'` can reconfigure the server, and you'd only have to convince an existing process to make that request.


In some cases, localhost is not just accessible from localhost :) https://unit42.paloaltonetworks.com/cve-2020-8558/

Also SSRF risks as mentioned elsewhere ...


SSRF in an application is a serious issue on its own, that's true, but in combination with the Caddy admin endpoint it can give an attacker full access to your local network.

You could have a blind SSRF vulnerability in an application and while that's not great, it is difficult for an attacker to exploit successfully.

If the attacker knows or guesses that you're hosting Caddy on the same machine, they know you most likely have an admin interface on localhost:2019. They can use it to make further local-network requests - and, unlike with the blind SSRF vulnerability hypothesised above, also to read the results of those requests.

Basically, if you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it, see: https://owasp.org/Top10/A05_2021-Security_Misconfiguration/


> Basically, if you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it

Actually, almost everyone wants zero-downtime config reloads, and the API is necessary to perform them.

As others have said, you may use a unix socket instead for the admin endpoint. And see https://news.ycombinator.com/item?id=37482096, we plan to make that the default in certain distributions.


> The API is necessary to perform config reloads.

Of course it isn't. It could reload the config from the same path it loaded the config from in the first place. Like practically all other software has done for decades.


The source of a config doesn't necessarily need to be a config file; config loading is abstracted. It requires input, and signals provide no way to pass arguments, so they're not workable. See https://github.com/caddyserver/caddy/issues/3967


This sounds like a design decision you've made, not an inherent limitation. You can read config from files, like practically all other software has done for decades.


I get the thing about config reloads - I don't think it's worth the security risks of the current default, but I get it.

Happy to hear you're moving to sockets by default on *nix!

However, I'd like to point out that the default should be in the binary, not in the distros' default environment variables; otherwise it won't reach people who build their own binary. And depending on how you start your Caddy server, you may clear environment variables for that process and end up with the insecure HTTP-based admin endpoint enabled by accident.


The only default that works on all platforms is a TCP socket. We can't write a Unix socket file by default because the path to the socket needs to be writable, and there's no single default path that's guaranteed to work, so it needs to be dictated by config in some way or another. It's better for it to actually work by default than to possibly not work because of a bad default.


So detect the OS and choose the more secure default where possible? I know it's less elegant, but a much more secure model is worth some sacrifices.


It's not only the OS, it's the environment. File permissions are not a guarantee, no matter the OS.


Then you throw an error in the log; you have to leave something for the admin to do to set their system up correctly. It's better for Caddy to fail to enable the admin endpoint than to enable it insecurely.


You're overestimating the users; a large % of them would not understand how to resolve that on their own and would complain to us that they can't start Caddy without errors. And I fundamentally disagree that the TCP socket is so insecure that it must never be used as a default; it's only insecure if your server is otherwise compromised. It's a sufficient default for 99.99% of users.


Said large percentage of users will be installing through a package manager anyway, where you can make sure that Caddy has a path the user it runs as can write to.

If you're correct that I'm overestimating users, then what are you guys doing? You're expecting users to know how to secure their Caddy configuration when, in reality, most users probably have no idea that this API even exists; they'll put their config in a Caddyfile, start the server, and be done with it.

We should be expecting that they don't know anything about the risks involved with leaving an unauthenticated HTTP API on localhost, and instead shipping a default that doesn't place their system and network at unnecessary risk.


> Said large percentage of users will be installing through a package manager anyway

Exactly, which is why the environment variable approach is perfectly fine. The env var will be set in the systemd config.

> You're expecting users to know how to secure their systems

Again, our view is that the TCP socket for admin is secure enough for 99.99% of users, and has been for over 3 years since Caddy v2 was released. We've still not seen any evidence of a practical exploit in the wild.
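
The env var approach described above would look roughly like this in the packaged unit file (the variable name CADDY_ADMIN and the socket path come from the linked GitHub issue, so treat them as illustrative):

    [Service]
    Environment=CADDY_ADMIN=unix//run/caddy/admin.sock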


You should disable it if you don't need it or at least move it behind authentication if you do need it.

Security follows the Swiss cheese model: each individual measure has known limitations but by layering them, you reduce the overall number of attack vectors.

Getting the server to make arbitrary HTTP requests is bad, yes, but limiting what the attacker can do with that makes it less dangerous if you somehow screw that one thing up.


No, it is only bound to localhost.



