If you were to disable or lower privileged ports entirely, you also need to realise there are, as always, a bunch of implicit security boundaries related to this that you need to account for. As a simple example, if you can trigger a local DoS to force a process to exit, then you can race to bind its port and potentially do something nefarious - depending on the protocol, winning the race may give you some ability to deceive a remote host, etc. Which is not to say you shouldn't do it in general, but one must be aware of these implications when changing a literally decades-old design decision that, like all good bugs, many people depend on.
>As a simple example, if you can trigger a local DoS to force a process to exit, then you can race to bind its port and potentially
Which is why what we really need is the ability to set properties on each network port and have those persist across reboots.
Give me some path in /sys where I can write a message of some kind, or an ioctl, which allows me to say "user httpd can bind to port 80". Or even as specific as user, port, protocol (TCP/UDP/etc) and ip address.
Requiring that I have to be root to set CAP_NET_BIND_SERVICE every time I replace my web server executable sucks.
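For context, that dance looks roughly like this (binary path hypothetical; setcap(8) stores the capability in the file's extended attributes, so replacing the file silently drops it):

```shell
# must be repeated, as root, after every deploy that replaces the binary
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/httpd

# verify the capability took
getcap /usr/local/bin/httpd
```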
> Which is why what we really need is the ability to set properties on each network port and have those persist across reboots.
SELinux can effectively do this via the 'semanage port' options[1] -- so you could configure it and then set privileged ports to 0 and achieve what you're looking for.
Admittedly this seems like using a crane and wrecking ball to hang a painting.
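For anyone curious, the SELinux incantation is along these lines (the port number is just an example):

```shell
# let the confined httpd domain bind an extra port
sudo semanage port -a -t http_port_t -p tcp 8081

# inspect current port labels
sudo semanage port -l | grep http_port_t
```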
Anything you invent that can confine software from the outside will become SELinux. SELinux is actually really simple, you tag stuff and write rules about what tags can and can’t do to other tags. And writing your own policies, while strange and esoteric because it uses m4, is straightforward.
SELinux is hard because you really have to know what’s going on with your software to actually write these policies. The complexity is in creating a maintainable system of tags and rules for software that does all kinds of crazy weird things.
The complexity is irreducible. The only thing you can really do is create tools to handle common cases and that’s semanage.
I disagree that the complexity is irreducible. SELinux really complicates things by having compiled policies with their own toolchain, and (although this is definitely debatable) by using filesystem labelling rather than just paths (like AppArmor, for example). I know there are shortcomings to paths versus labels, but administrators are much more used to working with paths and a simple config file than with labels and compiled policies written in a DSL. In addition, the auditing system for understanding what SELinux is actually doing is complicated -- so much so that there are additional tools you can pipe the logs into to get something resembling a human explanation of what’s going on.
None of this is irreducible. SELinux does itself no favours by having a complicated tool chain with a terrible user experience. And I say this as someone who always leaves SELinux enabled…
I think we’re maybe mostly agreeing. I’m not going to sit here and defend SELinux’s toolchain or audit messages — they’re weird as fuck.
My only point is that once you get over the oddities of the toolchain, the DSL, and the odd smattering of unconnected tools, and learn to read the audit messages (no simple task, mind you -- it’s stupid and shouldn’t be so hard), only then do you hit the real complexity, which is actually knowing enough about the software internals and the deep kernel and userspace magics to write intelligible policies. SELinux could go a long, long way on making the easy stuff easier and more friendly, but once you go down the rabbit hole it’s all the same.
It really can be as simple as deploying your service to a clean system, then grepping the audit log and choosing whether or not you're comfortable feeding that to audit2allow
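The workflow being described, roughly (module name hypothetical):

```shell
# run the service, collect the denials, and generate a candidate policy module
sudo grep denied /var/log/audit/audit.log | audit2allow -M myservice

# read myservice.te yourself -- then, if you're comfortable, load it
sudo semodule -i myservice.pp
```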
More often than not you probably are in this clean-room situation, but that doesn't mean you should ignore all reason.
Follow things like the FHS and the CIS benchmark. Don't put your application data in /etc/. Don't expect to execute things in /tmp/.
It's guard rails for things you should be doing, the policies come from what the industry has determined are reasonable defaults.
I understand it's foreign to people, but disabling it is silly. It's easy to do it this way, admittedly 'poorly', and still glean some benefit.
True, at the end of the day there is still a huge amount of work to be done modelling the access your application needs. But I do think that can be conceptually simple, and all the horridness of the SELinux toolchain makes it far more scary than it needs to be.
That being said, I feel like with containers and a generic SELinux/AppArmor policy that restricts the containerized app, there's really no need to even bother anymore.
And yet I recall managing launchd security policies being a fraction of the complexity, and easier to verify that you were doing the thing you intended -- compared with Android, where I observed SELinux requiring far more manpower to maintain, more complexity, and more foot-guns.
Now maybe SELinux is indeed more powerful. I’m not well versed enough in that problem domain to do a comparison. In terms of successfully getting broad adoption across teams of various skill sets, I’d say the launchd approach is better holistically.
I thought launchd was more akin to systemd than to selinux or apparmor. Some brief googling didn't show anything about security policies. Do you have a link where I could learn some more?
I don't know that there's anything external and it's possible I may have misremembered. Performance limits are managed in launchd (maximum memory etc). File access security is managed by setting a class on the file via ioctl which controls when the file is accessible (after first unlock, while screen is unlocked, etc).
Maybe there isn't a secondary security layer? Can't recall now but I swear I thought the launchd plist described it all. I don't have a Mac anymore so I can't even look at the plist contents to double-check.
> Give me some path in /sys where I can write a message of some kind, or an ioctl, which allows me to say "user httpd can bind to port 80". Or even as specific as user, port, protocol (TCP/UDP/etc) and ip address.
So, regular file permissions for ports. It would be nice if we had 9p's concept of a network filesystem here. No ioctl() needed.
Isn't this provided by inetd and systemd? They bind the socket and hand it to the server, who doesn't need permission to bind any port at all? (with the added optional benefit of "socket activation", i.e. starting the server on-demand when the first connection comes in)
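For reference, the systemd side is a pair of units along these lines (unit and binary names are hypothetical):

```ini
# myapp.socket -- systemd, running as root, binds port 80 on our behalf
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# myapp.service -- runs as an unprivileged user and inherits the
# already-bound socket as an extra file descriptor (fd 3)
[Service]
User=myapp
ExecStart=/usr/local/bin/myapp
```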
This article is nonsense. Privileged ports are a security feature. They have literally nothing to do with mainframes. On multi-user systems, they're incredibly important because they give external clients confidence that the services provided on them are authorized by the system and not just by any user -- some of whom may not be as trustworthy as others. Most systems in this era of cheap hardware are single user, BUT NOT ALL SYSTEMS. It's fine for Windows and Mac OS to do without them and it's fine to configure your own Linux system to disable them if that's what you want, but it's completely insane to argue that they're a security flaw because some people work around them using insecure practices. There are plenty of secure ways to work around them, most obviously by USING A NON-PRIVILEGED PORT. Start your service on port 8080, for example, and give out a URL like http://example.com:8080/path. It's really that simple. Take the time to understand the actual purpose of a feature before urging others to abolish it.
> Most systems in this era of cheap hardware are single user, BUT NOT ALL SYSTEMS
OP quite clearly argues that multi-user systems can still have the old behavior if they so choose with explicit configuration. OP makes an argument about what should be the sensible default in 2022, and who should do explicit configuration.
I think the point that nowadays easy single user linux configuration should be preferred over multi-user configurations is good.
IMO distros could make this easy by turning privileged ports into an (advanced) installation question. Then the clearly single-user-focused distros would default to 80, and the more server-oriented or conservative distros would default to the current behavior, but both could do which ever.
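For distros (or users) that want the relaxed default, it's a one-line drop-in (file name hypothetical; 0 disables the privileged range entirely, 1024 restores the traditional behavior):

```ini
# /etc/sysctl.d/90-unprivileged-ports.conf
net.ipv4.ip_unprivileged_port_start = 80
```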
> Start your service on port 8080, for example.
In the era of Let's Encrypt, this is really about 80 and 443. The background for the OP is probably to host "normal" sites on single user hardware.
If you give non-technical users addresses like "http://example.org:8080" which then transform into "https://example.org:8443", that's just horrible UX. Those kinds of numbers in the URL probably also look, to many people, like someone is trying to hack them. Furthermore, addresses are also communicated by word of mouth: "go to example dot org".
So no, using unprivileged ports is not an actual workaround for the use cases OP is referring to.
Stop pretending that non-technical users want to run web servers. That's just not a thing. What you're actually arguing is that some technical users are so important that making them tweak a default setting is too much to ask, but others are so insignificant that pulling the rug out from under them after decades of practice is perfectly reasonable. I disagree.
Do you not grasp WHY Let's Encrypt requires port 80 (for one particular challenge type)? Think about that for just one second. Okay, I'll spell it out for you: the convention that ports under 1024 are privileged gives Let's Encrypt some confidence that a service on a privileged port is sanctioned by the system administrator and not some tenant -- which is exactly the point I was making. So thanks for providing more support for my argument, I guess?
And while we're at it, can we stop pretending you care about user experience? You can't even be bothered to type those two words! You cite no studies and make no technical arguments. All you're offering is the claim that the default you prefer "should be preferred" based on your intuition.
I probably shouldn't even dignify your claim that people think port numbers in URLs mean they're being attacked with a response, but I'll bite. Do you have even a shred of evidence for this claim, or did you just make it up on the spot? Obviously the latter, but even if it were true, the solution would be to educate people about what port numbers mean. Unless you want to argue that any feature of any internet service that some people are confused about should be abolished? I guess we'll have to shut the entire internet down.
Changing the port number is a perfectly reasonable solution for many use cases, but it's far from the only option. Alternatives include CAP_NET_BIND_SERVICE, net.ipv4.ip_unprivileged_port_start, port mapping in containers and many more. Pick your favorite and stop wasting everyone's time.
I think you're missing the fact that in your scenario services are started with explicit privileges by init. This has nothing to do with unprivileged login users.
Back in the day, init ran as root. Init ran services as root, and services were responsible for becoming another user if they didn't need root after binding a port. It was a very simple interface.
Nowadays we have much more complicated interfaces and, as a result, more flexibility. Init (now systemd) must still run as root, but we can tell the component that executes a service to drop privileges before the service is started. These privileges are also more granular: root is no longer needed to bind to a low port; we only have to grant the CAP_NET_BIND_SERVICE capability, rather than running as the root user.
On top of this we now have network namespaces and containers, so services might be granted their own interfaces visible only to that service, with specific permissions tailored to only that service.
If you are running a persistent webserver you do not need to worry about whether or not unprivileged users can bind to port 443. The webserver is managed by init and the permissions granted to services are explicit and granular -- and typically all this is configured by the distro by default.
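Concretely, a unit for a hypothetical webserver.service only needs something like:

```ini
[Service]
User=www
# keep the ability to bind ports < 1024 while otherwise unprivileged
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/webserver
```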
>The webserver is managed by init and the permissions granted to services are explicit and granular -- and typically all this is configured by the distro by default.
Only if you're using a web server that's prepackaged by the distro.
If you're installing third party services, and exposing them to the internet, or even local networks, it's not out of line to expect you to know what you're doing. At least enough to turn off the 'dont shoot yourself in the foot' protections
People who "don't know what they're doing" will just disable SELinux if it's too difficult to configure the policy that they want. It's better to make security policies easy to configure correctly so that more people do it correctly.
Also, what does "third party" really mean in this context? I am neither the author of the Linux distro that I'm using nor the web server that I want to use. It's all third party software from my point of view.
You need some level of expertise to turn off SELinux, or even to know what it is. If they know enough to turn it off, they know enough to understand the consequences.
By third party, I mean stuff not in the distro. It's extra effort to install something else. You're likely to have at least a link to the docs in your face when you went and got the binaries.
I don't think this is true in general. For example, in one instance I wanted to use caddy, which wasn't packaged for CentOS at the time. I don't think I had trouble getting it to run as a non-root user, but getting it to work with SELinux was a big PITA.
The Linux community seems to have a pervasive "if you're doing X then you should be able to Y" kind of attitude. But sometimes you just need to do X, and you know what you know.
This was covered in the article, albeit a little sarcastically:
>And for the three folks in Finland who administer multi-user Linux instances and rely on privileged ports for their mainframe-era security properties, they can always run sysctl and set their port limit to 1024 as it was before.
If you wanted to block non-root users from binding certain ports, you'd still be able to do so. It just doesn't make sense to have this as the default anymore, as it tends to cause more security issues than it prevents.
No one has yet provided evidence that privileged ports create even a single security issue. While there's lots of huffing and puffing in the original article (including someone confusing privileged ports with IP based authentication, which is effectively dead), the closest it gets is to say that dropping privileges in a server is a pain. And then they go on to show off a one line configuration change that would disable privileged ports system wide. So do that if it makes sense for you.
What doesn't make sense would be to abandon decades of practice that real people (more than three in Finland, actually) rely upon because some people are too lazy to make a trivial change to their systems. And let's get real: 99.9% of people are never going to want to run a web server on their computers. You're just arguing that your portion of that 0.1% -- let's be a LITTLE sarcastic and call them five penguins in Antarctica! -- are more important than the rest.
One example I can think of is a local server handling some sensitive stuff. E.g. a webserver for a CNC machine that takes a password to log in. If that webserver is offline another user could start their own webserver with a fake login prompt. Other users who go to http://localhost as usual would not notice.
> It's fine for Windows and Mac OS to do without them and it's fine to configure your own Linux system to disable them if that's what you want, but it's completely insane to argue that they're a security flaw because some people work around them using insecure practices.
I guess we will also be making distros with `sudo` without a password to stop some people writing insecure scripts that try to pass it via plaintext stdin. This should be default for _every_ Linux system because some Linux server admins copy insecure code from StackOverflow. /s
Agreed. In the HPC and research space large multi user systems are still king.
It's quite common for users to stand up their own versions of privileged services on unprivileged ports. Bad actors aside, this prevents users from accidentally mimicking a service that would effectively break shared resources. These are nice guardrails.
Security and guardrails should be optional, but it should be opt out, not opt in.
Right, that's the way it should be done for ports that try to listen to non-local connections. That would actually strongly increase security, as non-privileged ports can be abused a lot too.
They did, it's called 1025-65535. This is literally ancient tech. There are more modern (and perhaps more granular) ways to do it today with cgroups and nftables I'm sure.
I myself am an avid SELinux user and I know for sure you can restrict ports to user roles there.
Did you... actually read the article this comment responded to? Or even the comment you're replying to? The original article proposed making all ports non-privileged because most systems serve a single user in practice. Are you really going to argue that changing the port to 8080 is insecure because someone could snipe it but making all ports non-privileged is better because... now someone can snipe 80 as well?
I agree with the article that privileged ports are a bad idea. I disagree that making them free-for-all, or using a free-for-all port, is a good solution. You propose it as a simple solution, but it has many issues.
IMHO privileged ports are still useful to distinguish ports assigned by system administrator and ports assigned by any user. Otherwise some user service can (inadvertently) take some port before a system service that was supposed to use that port started and cause completely preventable failure of the system service.
The minor issue with privileges is already solved by CAP_NET_BIND_SERVICE, the init system could just give this capability to every system service, instead of disabling privileged ports system-wide.
> Otherwise some user service can (inadvertently) take some port before a system service that was supposed to use that port started and cause completely preventable failure of the system service.
There are plenty of cases where a web server is just a reverse proxy to :8080 or some other non-privileged port that could potentially be taken over in such a manner.
> There are plenty of cases where a web server is just a reverse proxy to :8080 or some other non-privileged port that could potentially be taken over in such a manner
True, but you can also proxy through a named unix domain socket on the filesystem and control access to it that way. At least nginx, haproxy, and caddy can all use a unix domain socket as an upstream.
This should generally be preferred when reverse proxying to localhost, as you get to adjust permissions on the socket. Unfortunately people aren't always doing this, and sometimes the upstream app isn't capable.
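In nginx, for example, the upstream side of this looks like (socket path hypothetical):

```nginx
location / {
    # a unix socket upstream instead of 127.0.0.1:8080; who may connect
    # is controlled by the socket file's owner/group/mode, not by a race
    proxy_pass http://unix:/run/myapp/myapp.sock;
}
```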
When you get right down to it the issue is port numbers just plain don't work for this. SELinux had some concept of sending SELinux security contexts over the network so you could get positive claims about what you were talking to, but it was never developed into a full system AFAIK.
Which is ultimately what you want: TLS "I'm securely talking to the controller of this domain name" and then a secondary "within that controllers namespace, I'm definitely talking to the service I think I am".
Perhaps services could have different priorities, and if the wonderful systemd starts a rando user service before the system service, the system service starts anyway and the existing user service gets booted off the port.
Put on your black hat for a moment. You want to pretend to be the sshd on your shared university server and collect passwords or session data from your fellow students foolish enough to ignore host key warnings.
1) You find a way to crash sshd. This might be easier than it seems. Maybe you can fill up a disk partition and cause a fatal logging error, or drive the machine so low on memory that it is killed. If sshd seems too farfetched, pretend you're compromising a less robust system service like the printing subsystem.
2) Try to start your own listener before sshd can be restarted. It's a race, and with enough tries you will eventually win.
3) If you're really diabolical, you might notice that there are certain cronjobs or management systems like puppet, chef, ansible, etc that restart system services at known times or after known events (after updating a config file). You write a script to watch for these events (use inotify to event on file updates), and race it to the finish line.
There really isn't a good mechanism from userspace for systemd to police ports without race conditions. Privileged ports is a fairly reasonable way to do it.
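To make steps 1 and 2 concrete: once the real daemon exits, the port is simply first come, first served. A minimal sketch (using a high localhost port so it runs unprivileged; with privileged ports disabled, port 22 itself would behave the same way):

```python
import socket

# A "victim" service binds its port, then dies (simulated with close()).
victim = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
victim.bind(("127.0.0.1", 18022))  # stand-in for sshd's port 22
victim.listen()
victim.close()  # step 1: the DoS -- the port is now up for grabs

# Step 2: any local user can claim the freed port, no privilege required.
attacker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
attacker.bind(("127.0.0.1", 18022))
attacker.listen()
print("attacker now owns the port")
attacker.close()
```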
This attack succeeds because "Cookies do not provide isolation by port" (RFC6265 Section 8.5).
What is the fix? If only the cookie spec allowed binding to specific ports...
But an alternate fix could be requiring web browsers to only connect to privileged ports. 80 and 443, or any port <1024, thwarting the unprivileged user from exfiltrating cookies.
Unfortunately this ship has sailed and web browsers now have to support unprivileged ports forever. A more practical defense is to consider this scenario out of scope, and/or implement application-level authentication. I am with you, and would have advocated privileged ports as a defense against these attacks (with HTTP and SSH and other services), but am not optimistic it will gain any traction. The world has moved on, and even multi-user shell servers are becoming increasingly rare (as much as I use them - still a proud Super Dimension Fortress member).
Well, that's a bug in the HTTP cookie spec. Regrettable, but as you note something that should have been foreseen. There's absolutely no excuse, as RFC6265 itself notes "cookies contain a number of security and privacy infelicities."
Also putting on a black hat: what can be done by stealing a well-known port is basically a MitM attack. And MitM attacks should be prevented by other methods anyway, like public-key auth.
Creating identities is useful, but also impractical at scale for internal access. Your system depends on non-authenticated privileges. I'll explain:
Privileged ports are a permission applied to bind() to a sockaddr of type AF_INET.
Privileges are also checked when bind()ing to type AF_UNIX -- aka a unix domain socket -- aka a path to a file in your filesystem.
Privileges are also checked when open()ing a local file.
It is entirely reasonable to rely on filesystem privileges to control access to regular files like /etc/shadow, or /etc/ssh/ssh_host_ed25519_key.
It is entirely reasonable to rely on filesystem privileges to control access to unix domain sockets.
It is entirely reasonable to rely on interface privileges to control access to inet sockets.
The commonality in all of the above scenarios is that it is reasonable to trust the local system regarding its own access controls. There is no such practical concept as a MiTM between a process and open() to a local file, nor is there such thing as MiTM between a process and a bind() to a local resource.
In fact, this local trust is a required component to implement a key based system (note: the need of sshd to trust file permissions to store a host key)
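The unix domain socket case is worth making concrete, since it shows filesystem permissions doing exactly the job port numbers can't. A small sketch (socket path hypothetical):

```python
import os
import socket
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)          # bind() creates a node in the filesystem...
os.chmod(path, 0o660)   # ...so ordinary file permissions gate connect()
srv.listen()

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o660 -- only owner and group may connect
srv.close()
```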
You can enforce public-key auth server-side, but you can't enforce that your clients won't prompt for a password if the (attacker-run) sshd a client connects to requires it.
Clients should be configured correctly to never ask. But many won't be and that's out of your control.
In case anyone wonders what the cooperation is, normally when you start a program on Linux you get three open file handles: standard input, standard output and standard error. But if you use systemd socket activation, then when your program starts you get an additional open file handle per port.
So rather than binding to a port, you ask your webserver library to listen for requests on the extra file handle. De facto, this means the application author needs cooperation from their webserver library - it needs to have been written so that it has the option to listen from an open file handle rather than binding to a port and listening to the file handle it gets as a result of that bind. But since this is a reasonably common usecase, and basically just half of the normal process, you might discover that the library you're already using has support for it.
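For illustration, the receiving side can be sketched in a few lines of Python (this hand-rolls what sd_listen_fds(3) does, and skips the LISTEN_PID check a real implementation should perform):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes inherited sockets starting at fd 3

def get_activated_socket():
    """Return the socket systemd handed us, or None if started manually."""
    if os.environ.get("LISTEN_FDS") == "1":
        # Wrap the already-bound, already-listening fd; no bind() needed,
        # so the process itself never needs permission for the port.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    return None

# Started by hand (no LISTEN_FDS in the environment) -> falls back to None
print(get_activated_socket())
```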
On the other hand, AFAIK docker doesn't support socket activation, so if you use docker to provide your runtime it's game over.
If it's SSH or HTTPS you aren't - the SSH client will check the fingerprint and abort on unknown, and the HTTPS client should check the certificate and verify that it was issued when a trusted client could access the same server via the same name. But if you have access to bind to a port, you can trick the trusted HTTP client as much as the end-user client. Maybe SSH is a slightly stronger guarantee here - if port binding permissions are holding out to the extent we haven't disabled them, then file read permissions are probably holding out too.
Yeah I would argue if you want security you need to be using something like mutual tls, or a framework/system/etc., that provides the same.
That's from the perspective of the developer though, from an end user's perspective I guess you just hope your application is, or that it's something where it doesn't matter.
Reserved ports are a good idea, but just blanket "everything under 1024 is for root" is a broken solution, there should be some configuration where you pick individual ports to reserve for specific users - port 80/443 are only for the nginx user, etc
It's possible to accept/drop(/reject) traffic from a specified port if the listening process's owner is this-or-that (see iptables' "owner" match, or nftables' meta skuid), but it cannot prevent other users from binding that port -- only prevent them from actually communicating on it.
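With nftables that's roughly (table/chain names and the httpd user are hypothetical):

```shell
# drop packets leaving from source port 80 unless the owning
# socket belongs to the httpd user
nft add rule inet filter output tcp sport 80 meta skuid != httpd drop
```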
> The minor issue with privileges is already solved by CAP_NET_BIND_SERVICE, the init system could just give this capability to every system service, instead of disabling privileged ports system-wide.
This is one of the better suggestions here, but an issue I see is that users are now only able to get ports through the init system. What if some user wants to restart a server they own? Only root can restart a systemd service right?
> but an issue I see is that users are now only able to get ports through the init system
The capabilities can also be specified via a filesystem attribute using setcap(8). This works like setuid, in that the capability is set whenever the file in question is exec'd. *
> Only root can restart a systemd service right
By default, only root can manage system-level systemd services, however, you can add policykit rules to allow specific users (or groups) to perform actions on specific services.
* Certain sandboxing techniques will break this (like no_new_privs)
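A polkit rule granting one user control of one service looks roughly like this (user, unit, and file names hypothetical):

```javascript
// /etc/polkit-1/rules.d/10-webadmin-httpd.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "httpd.service" &&
        subject.user == "webadmin") {
        return polkit.Result.YES;
    }
});
```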
You wanna make an argument that things should work differently, fine, you wanna even make a custom distro with brave new behavior, fine, but a package installer should not be changing such a default system behavior that will from then-on affect unknown other services.
This violates least surprise in a big way and is not excused merely because the package author thinks that crufty old behavior is stupid.
Even if there's a confirmation that's just more wrong than the wrong they're complaining about.
I don't think it should even offer to so much as add a firewall rule just for the single port it wants to use, let alone make a change that changes the behavior and breaks universal assumptions about all ports.
To give a user that level of unthinking unknowing convenience, do that in the form of a container image. Otherwise just directions, and directions that say how to configure a firewall rule not advice to change a default behavior across the board.
If the user does not already know enough to ignore the bad advice and arrange for the listening mechanism themselves, or follow it but fully educated and conscious, then they are trusting you not to do bad things and take liberties with their system.
That's the equivalent of "just run with sudo". People don't want to deal with different configurations, policies, etc. So to provide a "one-click" Installation, they'll just change the whole environment to whatever they think they need. Good luck installing multiple such programs.
It's like homebrew making /usr/local itself and everything in it all owned by whatever user installed homebrew for years until Apple finally just took over the directory.
"most macs are really single user anyway"
Really? That's the excuse? I'm supposed to feel good about software coming from this quality engineering?
That's almost universal for non-package based installation instructions on Linux, to be fair. And it is safer than "add this trusted repo to your list of repositories", since at least it's one time, not something that will be accessed every time you run "$PACKAGE_MANAGER upgrade".
There's no guarantee that their shell script doesn't also add their repo to your list of repositories, and in fact many do - I've written scripts just like that myself.
I trust an application that wants me to add its repo: it's pretty clear what it's doing, I can see and verify the package installation after it's done, and I can order my package manager to download the packages before installing them, inspect the package contents, and then install them afterwards. It's been demonstrated numerous times that it's easy to determine[1] whether a shell script is being downloaded via curl or executed via curl | bash; that provides an opportunity for attackers to serve you different data depending on whether they think you're paying attention.
But quite simply: setting up a package repository is a professional operation and a professional concern. It's not hard, but it's not trivial either. What it speaks to is a level of professionalism and integration - the company has decided to stick to your package manager's conventions rather than invent their own. That's humility. That's what I want from a company that I buy from.
So you don't bother to read the source before running when it's open?
How is that any different from running npm which is even harder to audit with many dependencies?
OS packages may have maintainers and some eyes before being packaged but for other ecosystems where people can just upload, it seems it's on the same level.
There isn't, if your sudo is set up to cache credentials and you have sudo'ed something recently enough [0]:
# Reduce unprivileged ports for this session...
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80 2>&1 1>/dev/null
It could be polite to `sudo -k` before that, so people who use NOPASSWD sudo (and I guess don't care as much about security on that machine) don't have the speedbump whereas people who would otherwise have to type their password indeed end up typing their password.. but still, this is too much for an installer.
Just print something to stdout, make it red if you want, and include a note in the documentation to the same effect.
While I was looking, I saw this a few lines down:
sudo chmod 0777 "${sysctlConfigurationFilePath}"
sysctl.conf(5) on my system doesn't say anything about what mode bits files in /etc/sysctl.d need to have, but +x is not often used for configuration files. I haven't looked further, but this is a smell.
+x isn't as concerning as world-writable. This basically allows any user to escalate to root privileges through exploiting sysctl parameters like kernel.modprobe (albeit after a reboot).
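By contrast, a sane drop-in under /etc/sysctl.d would be root-owned and writable by root alone; a sketch (the filename is hypothetical):

```shell
# hypothetical drop-in file; readable by everyone, writable only by root
sudo chown root:root /etc/sysctl.d/99-example.conf
sudo chmod 0644 /etc/sysctl.d/99-example.conf
```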
...and that's coming from someone who has the audacity to comment on and criticize well-established security practices because they find them inconvenient.
Wow.
For the record, this isn't just "a smell". This is effectively a (likely unintended) backdoor for any user to gain root privileges. Any kind of user-based separation becomes moot at that point.
Software that changes my system configuration for their own benefit is something I try to avoid for eternity. I wouldn't trust such a development team to make smarter choices in the future.
Exactly. The quickest way to have your software put on the shitlist at any organization of any size by its security team is to do something exactly like this.
If you don't understand why that was ever a thing: This comes from a time where in networks (and, to some degree at some point, the entire Internet), basically only non-administrator users were not trusted. Besides this weird idea of "privileged ports" that would prevent a non-administrator user to pose as a system service to other computers, it was also entirely common to just use IP address ranges for authentication.
It was an entirely different world. The thought that someone rogue could plug in an entire malicious machine into a network, rather than just being a random student having user-level access, was present, but often not considered a serious enough threat. And it may not have been, given what a lot of networks--and computers--were used for back then, compared to today.
This is also only half the story in some ways. SRV records didn't exist because the DNS didn't exist. Email "was" port 25 because that was the assigned port. It was uplifted into the WKS (well-known services) model as IANA port allocations, then reified by "ask IANA for a port", and a low-number assignment implicitly meant a service daemon which had to setuid() to the user for specific delivery events (most things were store-and-forward).
Once SRV records were invented, at the cost of 2+ DNS RTTs you can ask "hey, where do I talk SMTP to this place" and be told what port to connect to. But lots of stuff was godfathered in, great-grandfathered in. Too many people think SSH is on 22 and don't want to incur the DNS costs.
Before that, it was 25 for mail, 110 for POP, 123 for NTP, for pretty good reasons: we didn't have anything else except inter-host agreement about which ports bound to which service, and the services were almost exclusively launched by root from the init process. The trend of dropping the effective UID came pretty damn quickly after people realized SMTP as root meant remote actors sending Subject: "|bad stuff; even worse" subject lines to exec() calls which did implicit shell uplift.
(I am eliding this a bit)
I run SSH on a shifted port. If I could convince myself that URLs with :port embedded were "cool" I'd move off 443 and 80.
Before SRV records existed, there was portmap for RPC services like NFS. Most system and network administrators hated portmap, still hate portmap, and the requirement for portmap was removed from NFSv4, at least for querying the target port of remote services.
People are too accustomed to relying on firewall rules. We can't have nice things like SRV not because they're too difficult to use, but because they're too difficult to control with simple, static, external ACLs lower down the networking stack.
Not sure, I think it was more complex than that. The "tcpmux" port 1 in /etc/services hints to some of the story. And wasn't there an early competing name system (called NAMES maybe?) that included the port as well, which lost out against DNS for... some reason? I think there were some nuances that are long forgotten.
TCP and UDP communication doesn't just go over a network.
If you're talking to a service on the same machine, and the port number is in the privileged range, then (in theory) you have the assurance that it's a root process and that sending messages to it is as secure as passing data to the kernel.
Unix domain socket permissions are a quagmire of portability, aren't they?
It adds complexity to the app if it also speaks over the network. It's more than just changing how you create the sockets and a few details around that.
You will find that the detailed semantics of some of the socket operations will be different here and there.
E.g. exactly how a non-blocking connect works over a AF_UNIX versus AF_INET in all the cases that may arise; things like that.
The configuration may be exposed to the user who may have to learn to specify a Unix socket for the local case or else an IP address. If there is some configuration language, it may have to be tweaked to indicate the address type. (Could be as simple as a hack somewhere that if there is a leading slash, it's AF_UNIX).
> The workaround
>
> On modern Linux systems, you can configure privileged ports using sysctl:
Or you could just run your service on a non-privileged port - you don't have to stick to 80/443.
(If you really must have access to the service on the standard ports, you could set up an iptables/firewall rule to forward packets headed to port 80/443 to whatever port you're actually running your service on.)
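A sketch of that rule with iptables (interface and frontend details vary by distro; nftables users would write the equivalent nat rule):

```shell
# external clients hitting :80 get redirected to the unprivileged :8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
# traffic generated on the host itself skips PREROUTING; cover it separately
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080
```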
This is missing the whole point of privileged ports. The problem is that if I tell you "connect to my webserver on port 80", you want to connect to the actual webserver that I expect to be there. If port 80 isn't privileged, any user on the system can bind that port before the webserver starts or if it crashes. Even with TLS this is a DoS attack.
What I really want is to be able to delegate specific ports to different processes. CAP_NET_BIND_SERVICE can be used to allow a process to bind any port but I want to be able to allow it to bind just a single port. My webserver can bind 80 and 443 but not 25. The MTA can bind 25 but not 80.
Currently there are two main approaches to this.
1. Socket activation/passing. This has a huge number of other benefits as well. The service manager can keep the socket open across restarts, crashes or even do on-demand startup to save resources without losing a single connection.
2. A proxy that binds the privileged ports and then proxies to UNIX sockets or other things. This is fairly similar to 1 except that it doesn't necessarily have to spawn the program. This way only a single process has permission to bind all of these ports; every other service just creates a UNIX socket, which is protected by filesystem permissions.
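For approach 1, a minimal systemd sketch (unit and binary names are made up) where the service manager binds the privileged port and the service itself runs unprivileged:

```
# web.socket -- systemd binds :80 as root before the service exists
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# web.service -- started on demand; receives the socket as fd 3
[Service]
User=www-data
ExecStart=/usr/local/bin/myserver
```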
The real model I want isn't privileged and unprivileged, it is "assigned" and "ephemeral" ports. Assigned ports can only be bound by services that were explicitly granted access to that port. Ephemeral ports are always random and never collide with assigned ports. But the privileged/unprivileged split is close enough.
I'm in the opposite camp, not because I think that you should be forced to do the privileged ports dance, but because I think you should bind to higher ports and it should be your load balancer's work to do the translation. Everyone is running some form of load balancer, whether it be a NAT box, Nginx, Kube-proxy, etc.
The low-numbered internet ports are useful to avoid binding so that your machine that's only a shell host isn't used as an exploit server when a single user on it is compromised. It's useful to provide a minor barrier to understanding how a server works and port allocations are made before someone deploys their cool WebApp on a random ec2 machine.
Probably good advice if you're maintaining a base image for a k8s cluster or on the load balancer box itself or whatever to simply set the minimum port to 1, though.
Nothing prevents that. Your average newbie is more likely to notice the port in the URL, though, that it's not an "Official" bank link or whatever.
If you've got root, you will misconfigure no matter what you do. I'm not arguing against a root user - I'm arguing about machines you don't administer.
> And for the three folks in Finland who administer multi-user Linux instances and rely on privileged ports for their mainframe-era security properties, they can always run sysctl and set their port limit to 1024 as it was before.
What is this reference to? Feels like a cheap snipe at something...
Considering the origins of Linux I don't think the joke is all there is to it. It would not surprise me at all if there were multiple such systems still maintained in Finnish universities; quite the opposite.
Privileged ports can be grabbed by a designated service and then handed off to an application. The application itself does not have to start as root. I believe systemd does this.
The point is to prevent a literal nobody user on the system from grabbing 22 and MITM SSH or some similar shenanigans.
It is not effective if the "attack" you actually have to defend against is not a local user but someone plugging a Raspberry Pi with some nefarious software into the same network.
But yes, while I think preventing random users from opening ports listening to the world is a good idea, I don't think it should be done this way. First of all, it should possibly affect all ports, and there should be a configuration file regulating which user can open which port. Tying that privilege to the root id is the worst possible way, as discussed. The right way is to define a mapping of which userid may open a port below 1024, or at all. That would also stop e.g. malware from operating a backdoor server on a non-privileged port, and it would mean that no service needs root rights.
These things you could do to improve, not weaken the initial concept.
Windows runs some services on predefined ports, namely SMB on 445. Responder abuses weak auth via poisoning attacks which trigger SMB connections to 445.
Permitting standard users on a Linux machine to open 445 has security impacts for Windows machines, and enables low-privilege system compromises to be directly leveraged (i.e. without privesc) for lateral movement.
There are many other edge cases which rely on existing behaviour that would make changing this a dangerous change.
This is a good example of the antiquated security model we need to put firmly in the past. If you’re depending on Linux’s privileged port mechanism to protect your Windows boxes, many things have already failed:
1. Your network should be segmented and authenticated: a Linux box listening on port 445 shouldn’t matter because it’s not in a network group which receives that.
2. Your Windows boxes should have their firewalls enabled since they shouldn’t normally be making SMB connections to your Linux servers.
3. Your Linux boxes should have their firewalls configured.
4. Your Windows boxes shouldn’t be accepting requests from strangers to connect over SMB.
Another way to think about it: if there’s a major Windows vulnerability, is it more likely that it’ll be attacked by a Linux box or someone in accounting clicking on the wrong PDF? Ultimately you just can’t trust the client - the percentage of times where someone can run code on your Linux box but can’t get root or attack something on a different port is just not enough to rely on for anything. Since you already know you need to protect against those other things, just focus on doing that.
> No one is depending on it. Security is an onion. The more layers, the better.
>
> I still haven't read a good argument on why that layer has to be removed at all.
The article is a good place to start. The privileged port security theater means that there have been generations of exploits caused by code which either ran as root when it would otherwise not be necessary, didn't correctly drop privileges, or had some kind of exploitable hand-off between a process running as root which has the bound port.
I mention that because it's important to understand for the first point: layering can be useful if it's not duplicative. In this case the problem is better handled by firewall policies, which are a complete solution, but there's been a cost in exploitable security bugs and complexity due to what the original author described as a gross hack. When the benefits are minimal, those costs should guide the decision, and since the two other popular operating systems have already done this, it's pretty hard to make the case that this gives us a benefit worth keeping.
It’s more that we should get rid of things which add risk and don’t offer much security and focus on getting people to use the real security measures. I’m still seeing vendors claim they have to run Java as root on Linux because it needs to bind on port 80/443 and it’d be nice to stop having to haggle.
Yeah. Active Directory is broken and has been for 25 years so that won't be fixed. As strange as it sounds, aside from external implications (e.g. SMB), a Java web service running as root _shouldn't_ have much impact - if you drop a shell on the host, it shouldn't be doing anything more than running that one app. So dropping to a jboss user or whatever should have no real-world impact for low-sophistication attacks, as irrespective of user they're still going to have access to db credentials etc.
There are hypervisor breakouts etc which may be possible if you are root but not std user, but if you’re able to break out of hypervisors via 0day you probably also have privesc 0day.
Edit: I say ‘shouldn’t matter’. If it does matter, you’ve probably got architectural issues in system design.
> Permitting standard users on a Linux machine to open 445 has security impacts for windows machines
I can connect my phone to your WiFi and now I'm not a "standard user on a Linux machine" anymore but an administrator. The only way to protect the Windows users are with proper network security/firewalling.
When I first learned about how privileged ports worked, I thought a better system would be to have per port security that resembles filesystem security. That is, each port would have an owner that can decide which users may use the port. A program could run before starting network services that would load the security information into the kernel, and equivalents to chown and chmod would be available to update the persistence file and the kernel in tandem (or separately).
Yeap, I don't understand this security stuff, I don't know why we just don't turn it all off.
As an aside, my life has been much easier since I learned you can 'sudo su' your shell and then everything just works.
I believe this has been the recommendation for quite some time. It's how users are able to use ping, for example.
That said, I kinda agree with the author that it feels weird today. And the vast majority of people will just reach for the sudo hammer instead of researching to find setcap, which is a security nightmare.
I'd probably argue keeping the idea of privileged ports, but not having it enabled by default anymore, I guess.
Most systems do tend to use different users for running services though; so while there's a single "human user", the multi-user support is being used quite a lot.
Which is another bandaid - "system users" is a hack that should be better handled by a system designed to handle permissions, etc in a more granular method than "this is a user".
I don't want to delegate the port to a program. I want to delegate it to a user. The user should be able to open the port with any server he wants. Apache today, Django tomorrow - fine by me. Furthermore no one else should be able to run the server, they could set malicious arguments which again I don't want. Oh and finally it can get any port it wants with that CAP. I want it to only be allowed one specific port.
I'll be happy to implement that for you. Already have a solution in mind; please feel free to reach out and we'll discuss my fees for Linux sysadmin work.
I've thought about this and I think it would be neat if you could define permissions for a user/group to bind to some specific port (without going all in with selinux or something). If listen ports would be represented as named objects in filesystem tree, then you could manage them by simple chmod, but I guess even something cludgier would be good enough (and more linuxy..).
I've thought about that too. Then you could say the apache user can bind to port 80 and 443, the ssh user can bind to port 22, the bind user can bind to port 53, etc.
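authbind already works roughly like this: permission to bind a low port is expressed as file permissions on a per-port file. The /etc/authbind/byport layout is authbind's own; the user and server names here are examples:

```shell
sudo touch /etc/authbind/byport/80
sudo chown www-data /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80     # the execute bit grants bind rights
authbind --deep /usr/local/bin/myserver    # run the server under the wrapper
```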
There are billions of Linux systems in the world. They have a default. If you change the default for your own distribution, you're the oddball, and the fact that there are two different values for the default can itself be an issue. It's not possible to change the default everywhere and have everyone adopt your new default. The kernel developers are very careful not to make breaking changes.
Just the same, you can write your non-root server and arrange to be launched by a service manager that turns on the appropriate capability to allow it to bind low-numbered ports, but nothing else.
> There are billions of Linux systems in the world.
There are not billions of Linux servers, desktops and phones, probably high single digit millions or low double digit millions, but I suppose if you include microcontrollers, perhaps.
One should be able to configure Linux to reconfigure itself any which way one wants on reboot, without making fundamental changes to all Linux systems everywhere. OP really can't figure out a way to run a custom configuration script on reboot? RLY?? I would consider this academic on BSD. Is this something that systemd broke, that one can't pump the special settings back in and they are just lost on every reboot? Then never reboot. Problem solved. Otherwise, init ftw.
That is a very big number, but I wonder how many of those are multiuser, such that there may be concern an unprivileged user might leverage privileged ports to intentionally cause havoc. I know... all of them. Users aren't necessarily people, and usually not.
All Android devices are multiuser: Android will assign a different unprivileged user ID to each app, because it can't be assumed that apps should trust each other. You definitely don't want random apps to be able to take control of privileged ports.
Unix has been around for about 50 years now and it doesn't look like it is getting replaced any time soon. If anything, the problem of Unix is that in these 50 years there was far too little cleanup going on. It is about housekeeping. Do we want a system shaped by arbitrary decisions made 50 years ago, or do we constantly clean up those corners to keep the system clean?
While there are issues with certain well-known ports also being privileged, this entire article ignores the fact that multi-user machines still exist (and multi-user Unix machines were mostly not mainframes). On such systems, having ports that are known to be administrator-only still has a use.
What… huh. I tried it and sure enough. I had … no idea. I don't like that, mostly because it flies in the face of my "But I need sudo access as a developer" arguments. Still can't edit /etc/hosts though
I was oblivious because I have used Windows when using port 80 locally, 8080/3000 when using Linux, or 3000/whatever in a container. I guess most projects end up working around the problem - and I only use 80 locally when it is the default set by whatever set that up - usually Visual Studio (not Code).
Been a while since I set up a server with nginx and all that. I don't recall having to do anything special, maybe nginx runs privileged. In any case it typically reverse proxies into your app running on a higher port.
This is one of the things I never really understood why it wasn't fixed. Privileged ports always looked like a somewhat-working hack from the times when there were only a few machines on the net and the typical network services were few and well known. And it was a time when having root as an active user account was normal.
But things have changed since those times; even in the '90s it wasn't really reasonable any more - and solutions like sudo were implemented to give more reasonable access to higher privileges.
Why wasn't anything done about the ports? I don't like the proposals presented in the article. It is perfectly fine to have a port range which isn't available to every user on the system, but for those ports there should be a clear mechanism to configure who is allowed to open them. So why not have an /etc/ports with a port:uid mapping, or a special directory (somewhere in /dev?) with a file for each reserved port, where only the owner of that file may open that port?
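The proposed /etc/ports format is simple enough that a parser fits in a few lines. To be clear, the file and its "port:user" syntax are this comment's proposal, not anything Linux actually ships:

```python
def parse_ports_map(text):
    """Parse hypothetical /etc/ports lines of the form 'port:user'.

    Blank lines and '#' comments are ignored.
    """
    mapping = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        port, user = line.split(":", 1)
        mapping[int(port)] = user.strip()
    return mapping
```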
I didn't see any explanation why it was so important his server run on port 80.
Allowing any user-level process to listen on any of the common ports will result in security holes. E.g. a process could constantly attempt to bind to port 22 (SSH). If the SSH process dies or is restarted for any reason, the user-level process can harvest usernames and passwords.
> If the SSH process dies or is restarted for any reason, the user level process can harvest usernames and passwords.
That wouldn't work because the user level process wouldn't have read access to the machine's host keys in /etc/ssh/*_key, so when things connect, they would get known_hosts warnings and nobody clicks through those
It is still a DoS attack. And of course SSH is usually TOFU so that isn't a strong defense.
Imagine this at a company network.
1. The attacker compromises an application process on one machine and manages to get code execution.
2. They wait for SSH to restart (updates, trigger a crash, OOM?) then bind port 22.
3. Wait for some developer to connect to this machine for the first time. They (unexpectedly) see a new host key so accept it.
Now if that developer has forwarding enabled you can log into other machines (probably with root/sudo access). You have just escalated your privileges from a non-root user on one machine to the entire network.
> The fix is easy: ship Linux distributions so that privileged ports start from 80 to begin with.
net.ipv4.ip_unprivileged_port_start=80
But then this could potentially break other security models the author is not aware of that are built on this assumption. I think the middle ground here would be to have a setting to de-root a specific port.
Or whitelist a specific process for binding. Or create a server user (www is often used) that's purpose is only to have root for the sake of binding, and nothing else. Better yet, it would only be able to access resources within a tightly restricted range (file directories, ports, RAM, CPU, etc).
I don't see how unrestricted 80-1024 is actually the best answer here, given the legacy cruft that will take time to deal with.
A better default may be making ports for the 127.0.0.1/8 address space completely unprivileged (excluding 127.0.0.1). Using alternative localhost addresses is very useful for development. It's more convenient than using random ports since /etc/hosts can be used to give each address a fake domain name and this way HTTPS certificates work fine and page redirects don't break your manually entered port number.
I'm not sure if it's possible to make the unprivileged ports configurable per listening address; I just think not all listening addresses have identical security concerns.
That said it's simple to add the "cap_net_bind_service+eip" capability to any program in question; on NixOS it's just the "security.wrappers" module option.
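The alternative-loopback trick can be seen in a few lines. On Linux all of 127.0.0.0/8 is routed to the loopback interface, so no aliasing is needed (other systems may require adding the addresses first):

```python
import socket

# Two independent listeners sharing one port number on different
# loopback addresses.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.2", 0))          # let the kernel pick a free port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.3", port))       # same port, different address: no conflict
a.listen()
b.listen()
```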
> A better default may be making ports for the 127.0.0.1/8 address space completely unprivileged
I might be remembering the wrong thing, but it seems to me that I've read that there were discussions about fractioning that address space for use on the public internet, since it's enormous and basically wasted when allocated to localhost.
Yes, but it needs cooperation from the process. Specifically, the process needs to know how to get the file descriptor from the environment variable and use that rather than binding to the port itself. And not all services are set up to do that.
Privileged ports can't die until we start using SRV records. As a bonus that would probably negate the need for ipv6, since we'd extend the ipv4 space to essentially 48 bits.
You still need privileged ports to ensure that the right service binds the port. SRV records are great because I can run on any port but if any compromised process on the system can bind that port you have greatly increased your attack surface. This can maybe be mitigated by dynamically updating the SRV record but that adds complexity and has caching issues.
You can run your service on a local Unix socket and proxy to it. That way only the proxy needs to bind an IP port. Or, you know, run it in a container and use the Docker proxying. Or use an orchestrated ingress. Or proxy from a separate load balancer to high ports where the service runs.
Why is it important to expose the application server directly to the edge in this day and age anyway? The author wants to talk security but wants to put every random application on a reachable public port?
Starting with process ID 1, this daemon (who shall not be named) should have zero active network capability but should be able to mete out such a privilege.
I have just set my services to run on 8080 and port mapped 80 to 8080 with iptables. Been doing this for years and never had any issue. Is this a bad idea?
I agree with the article and that this is archaic, but find it very strange how this person gets on a soapbox... only to screw over everyone using ports under 80 in the end with the proposal (I guess nobody needs SSH?). If anyone is going to go through the work of changing the default unprivileged port start point, hopefully they would just start at 0.
As others have said, CAP_NET_BIND_SERVICE already solves this issue without root, as do the myriad of other configurable options in the linux networking stack. You can configure different port handling rules on a per-container basis, such as granting CAP_NET_BIND_SERVICE for the entire container -- and the network namespace will prevent tomfoolery on interfaces which aren't part of the container.
The argument against privileged ports is mostly a collection of straw men. It's true they can't create remote trust (because you can't trust the integrity of the remote system), but this doesn't mean privileged ports are useless: They still guarantee no reservation issues among system processes. For example, privileged ports prevent a user from crashing sshd and then screwing around and binding their own process to port 22.
This is a rant written by someone with just enough understanding to be dangerous, but not quite enough wisdom to know why things are still the way they are. Most of the complaints raised are subtly inaccurate.
I would absolutely not trust running the kitten webserver after reading this article.
> I would absolutely not trust running the kitten webserver after reading this article.
Indeed, after it points out that the installer changes a sysctl automatically, that's shockingly poor practice for someone that claims to like security.
It's fair to disagree and critique an aspect of operating system design. It's even fair to recommend to users to perform the configuration task themselves, after understanding any risks involved. It's a whole other league to just take the initiative to alter system configuration without even informing the user!
This fact alone makes me put Kitten firmly in the untrustworthy category. A shame or not? I don't know, I hadn't even heard about it before this article.
> This is a rant written by someone with just enough understanding to be dangerous, but not quite enough wisdom to know why things are still the way they are. Most of the complaints raised are subtly inaccurate.
Yet curiously it's completely unmentioned in this article, despite the fact that this is probably what started the author's dislike of privileged ports. I guess it was inconvenient, as it got in the way of angrily ranting.
I think more than a few of the author's complaints are deliberately disingenuous. For example, the nonsense about making a sysctl setting look as difficult and obscure as possible.
The side effect with node he mentions is actually a setuid binary issue and is related to filesystem capabilities, not CAP_NET_BIND_SERVICE itself. When a setuid binary is run, including executables with capabilities set in xattrs, the loader does things a bit differently to preserve a secure execution environment. In this case it ignores some environment variables that would otherwise let the user modify the program's behavior - in particular LD_LIBRARY_PATH.
The right way to do this is to launch node from whatever process launcher is in use, for example by setting capabilities in the systemd service file. There shouldn't be any need to set extended filesystem attributes on the binary itself.
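With systemd, that looks roughly like the following unit fragment (user and paths are examples); the capability is granted to the process at launch, so the node binary on disk stays unmodified:

```
[Service]
User=app
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/bin/node /srv/app/server.js
```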
I think the deal here is that the author, Aral Balkan, frequently sees these kinds of topics as amenable to solution through activism: raising the issue in a public statement, garnering widespread attention, and then using that to put pressure on those best placed to fix the issue. You can see more of this approach in his earlier blog posts, and on his Mastodon feed: https://mastodon.ar.al/@aral
I don't entirely disagree with this as an approach in some circumstances, but you definitely need to have a clear idea of your target, and what incentives they have. I'm not sure what the target here is, though it's possible that he will change enough minds among distribution builders to shift things so that the default for privileged ports will be dropped to 80. The fact that Macs don't have this limit any more was certainly new to me.
Yeah, the argument (effectively) that most systems aren't multi-user flies in the face of well, everything modern too, the author just doesn't realize it.
You can do this by launching with "capsh", and you should also be able to configure a systemd service to do it.
There seem to be some gotchas though, so I haven't tested it with systemd, but some references: https://serverfault.com/questions/916807/net-bind-capability...
https://stackoverflow.com/questions/413807/is-there-a-way-fo...