`fuser` (from the `psmisc`[0] Linux package; commonly installed IME, and it also supplies things like `pstree`) lets you identify processes which have files or sockets open, and it has a `-k` switch for killing them. So you can run e.g.
fuser -k 8080/tcp # kill processes with TCP :8080 open, IPv4 and IPv6
fuser -k -6 8080/tcp # same but IPv6 only
fuser -k 8080/udp # UDP instead of TCP
Does this also release the port? I sometimes have an application listen on a port and when I kill it, the port is still reserved or blocked for a minute or two.
As a somewhat experienced admin, I can't imagine killing a process on a port without first understanding what process is using that port and why. If a process is supervised, I can't imagine killing it when there might be process-specific logic encoded in a stop script that should be tried first.
The Rust script itself uses kill -9, which probably isn't the ideal behavior. SIGTERM, then SIGKILL after a timeout, is probably the appropriate behavior for a script, but even more appropriate would be to figure out whether it's supervised and `service stop` it.
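The TERM-then-KILL sequence is roughly this (a sketch; assumes you already have the PID):

    kill "$pid"                                    # SIGTERM: ask it to exit cleanly
    sleep 5                                        # give it a moment to shut down
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"   # still alive? force it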
This program finds processes to kill AND it kills them AND it kills their children and it does each of those things poorly.
For this program to be better than `lsof` and `kill`, it would need to implement command-line options for which type of signal to send. By the time that is implemented, you will have reimplemented a program with the same interface as kill, and it would be better for that to be its own program that could be piped to. What's left would be a program that finds a process listening on a specific port, and there are already plenty of good programs for that.
The counter-argument is that I don't care about any of those arguments for the use case where I shut my laptop lid and now can't boot my dev server during local development.
Yeah, that is fair. For that problem, from an admin perspective, killport becomes a bandaid for not fixing the bug of poor behavior when the laptop lid closes. Instead of finding an automatic technical solution that does the right thing, every developer must toil by running operational commands to resolve their hung server.
If you have a "cargo cult" culture rather than an "understand the problem" culture, what happens is that you hire a new dev and teach them that when the lid closes, you run killport to get it running again. The environment takes 10 seconds to set up, and that's fine. By the time you are 100 employees, you have 100 employees killporting their dev servers, and now it takes 2 minutes to restart.
If you're lucky you get an altruistic hero dev who says "screw this, I don't want to deal with this any more" and they solve it. If you're unlucky, the behavior gets scaled out to 1000 employees, your dev server takes 10 minutes to initialize, and your developer tools team is under-resourced and has higher-priority stuff to deal with, since "dev server hung" already has a solution. That's a little hyperbolic, but I've seen similar things.
Maybe what you really want is a cron job or supervisor that health-checks your environment and restarts it when it's hung. Maybe there's a place to shim open/close-lid behavior. Maybe spinning disks are being used instead of SSDs. Maybe the problem is the local dev environment instead of a remote one. Most of those suggestions are automations or mitigations. The real solution might be as simple as a configuration change.
It seems like a very solvable problem.
> now can't boot my dev server during local development
Because you can't have two processes using the same port, and the process that was using the port is now not responding. Perhaps you don't even want to go through the process of learning what PID it has, you just want the port back so you can relaunch the dev server on localhost.
Your criticisms of this program do not seem well-founded, because you're assuming that it will be used improperly. In many cases this is a very simple problem with a simple solution.
> the process that was using the port is now not responding.
Yes, but why?
Between strace, ps, atop, lsof, logging, and understanding the OS and TCP, it shouldn't be horrific to figure out what is happening. The very same skills to understand the problem are valuable to a lot of problems and investing in one simple problem is a further investment in future problems. "I don't have the telemetry to understand this" is, if nothing else, a deferred problem. Having devs that aren't prepared for oncall is also a problem.
"Why is my process hung?" is something I hope most devs can solve.
Not solving a frequently hanging process is short-termism. Not solving problems creates a learned helplessness about systemic problems. They build up and get addressed too late.
> Because you can't have two processes using the same port
I am a little harsh because it increases complexity when no increased complexity is needed. That is my only strong criticism, a whole new program is invoked when we have plenty of very sufficient (composable) tools already.
The progression you’re outlining here sounds perfectly acceptable to me. I encounter many problems per day and they aren’t always worth understanding and solving, especially if there is a 10s bandaid for it. When the bandaid becomes expensive, that’s when I’ll put in the effort to fix it.
I bet this wasn't motivated by the sys admin use case; as you say, people like you don't need this because they will do it manually with shell commands after due diligence.
This is for the development environment: you know that tool foo always runs on port 8888; you want a clean slate before rerunning your tests/dev server/whatever.
(But sure, personally I would say people should learn enough shell programming to use shell commands for this, or even make their own utility as a shell script/function, rather than use a one-off tool).
Well, the simple explanation is probably that the author isn't aware of `lsof` and might even learn something from your comment.
lsof isn't something you automatically pick up if it's not your daily business (like so many other things). I for myself often forget about it, but I have some usual ports to kill. So these days I simply search my shell history by port to find lsof again.
I keep finding uses for more of the ls family: knew lsusb and lspci for ages, lsof is handy, and most recently I learned lsblk which is always worth using before dd'ing an image onto a disk, etc.
Back in the DOS era, there were few enough files on the system that I could just explore around and learn what everything did. That's no longer practical, but what's the alternative? There doesn't seem to be a "top 100 commands to get familiar with", or whatever. The ss64.com pages are pretty great, but...
Choose a random entry you don't recognise and open the man page for it. There's not that many on most systems - I mean, there's likely a couple hundred or more, but some are convenience symlinks and many you'll already know.
1621 on Pop, which is still about 15x more than I can reasonably remember. Hence the whole problem: it's not digestible unless you make digesting it your whole hobby. Ah well. Thanks for confirming, I guess.
I've known about lsof and about kill, but I've never put them together like this. I find this tool compelling simply because now I don't have to remember all this stuff.
What we really need is a tool like this for Windows. Find the process that keeps this file open and kill it. It's so annoying when I can't delete a file because some process has it open but I can't tell which one.
Useful, but talk about too little too late! The fact that this is an external tool, not part of the operating system, and that you are expected to, in 2023, blindly guess which application is holding your flash drive or documents open is pretty pathetic.
When killing by port number, you usually want to restrict to the LISTENING server port. Otherwise you will likely kill things connecting to the port. For example, in a local development environment the above command would kill both the web server and ALL web browsers with an open page to that server.
That is the full browser instance, not just the specific tab's connection to the server.
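With lsof you can restrict the match to listening sockets, e.g. (a sketch using lsof's TCP state filter):

    # kill only the listener on :8080, not clients (-r is GNU xargs; drop it on BSD)
    lsof -t -iTCP:8080 -sTCP:LISTEN | xargs -r kill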
I was thinking the same thing. Quite a bit of effort: 3 branches, 26 commits, and Rust (the effort that goes into taming the beast).
The hacker mindset is usually to avoid over-engineering, saving effort by not re-solving a solved problem.
You don't have to do it exactly like that, or even 'google the Stack Overflow', once you've used kill & lsof (whether for this purpose or something else). My thought, like the GP's, as soon as I read the title was 'lsof & kill?' - I doubt they looked up the pretty simple command they gave.
And as they said, 'which you can easily turn into an alias [well, a function]', which you could call 'killport' if you want for exactly the same functionality.
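For instance (a sketch; the name, the default TERM signal, and the lack of error handling are all illustrative):

    killport() {
        lsof -t -i ":$1" | xargs -r kill   # -r (GNU xargs) skips kill when nothing matched
    }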
Developers find themselves in this situation sometimes and are not sysadmins. I had this situation before, and I would need to google to know what lsof does and how to use it. And so do many others who are commenting positively here.
Experienced users (developer or otherwise) of unixy systems already know what lsof is. Inexperienced developers only need to learn it once, then they join the ranks of the aforementioned. Using these sort of systems for any length of time in the capacity of a developer or even just a hobby enthusiast will effectively make you a 'sysadmin', at least w.r.t. knowing the basic standard tools. Not least because lsof is useful for many sort of tasks (like figuring out which program is using a drive you're trying to umount), so chances are high that people in these situations already know about lsof.
And besides having to learn that lsof exists, you'd also have to learn that this even more obscure tool exists. And what happens if you prefer for the program to terminate itself gracefully, cleaning up whatever mess it might have made, instead of sending SIGKILL? Now you have to look up the PID or program name anyway, probably using lsof..
This is the sort of thing which Copilot on the command line is great at, actually. Copilot X is decent, too, but I prefer `copilot.vim` in my neovim plus `EDITOR=nvim` plus C-x C-e "# kill all processes listening on port 2134" which then proceeds to put it into your history and with a comment explaining it and keywording it so you can find it easier.
Don't look basic Unix stuff up on the web. Look it up in your Unix environment. The documentation is self-contained, and accessing it internally is way, way more efficient.
So instead, try one of these:
- use `man lsof`, search 'EXAMPLES' to jump down to the examples section, and search for the word 'port'
- search directly through examples: `tldr lsof | grep -A2 -i port`
- look through your shell history with grep or CTRL+R
- type the first few letters of the command and let history-based autocompletion hint the rest to you
Those options may seem strange if you're not used to them. But if you're not using techniques like that, and instead going to your web browser to look up CLI usage, you don't need to learn the CLI— you need to learn how to learn the CLI. Once you do that, a ton of things will become much easier and quicker for you to deal with.
As someone who is good at unix but watches other people struggle with it: the advantage is that people don’t understand signals, subshells, what ls stands for, what of stands for, don’t know the manual, don’t know what man stands for, don’t know the manual has sections, and aren’t interested in finding out.
I was going to say use kill and lsof, and also something similar to what you said here. I would add that even many seasoned developers don’t know UNIX-like systems well enough to do anything with native tooling and therefore replicate native functionality poorly and slowly.
For many people, it’s faster to write an entirely new tool than it is to go learn the platform for which they’re writing and therefore they do it.
I am not good at knowing Linux at all beyond some general concepts. If I did, my brain would have no room left. However, when dealing with it, I always assume that it has everything and do a google search. I do not remember a single case when I wanted to write some utility. The ones available, along with bash, always cover all of my needs, leaving me more time to design and implement the actual software I am being paid for. My whole CI/CD, or whatever complete management is called, is a few bash scripts.
That definitely says something about the discoverability of the platform's functionality.
The same thing also regularly happens in medium-to-large codebases; that's how you end up with four slightly different implementations of, e.g., "extract value by path from a JSON object, with some traversal options": it's easier to just write the damn loop yourself than to find out a) whether it's already been written, and if yes, b) how the hell you use it.
> That definitely says something about the discoverability of the platform's functionality.
Meh. Back in ye olde days, when developers were either self-taught from the roots or came from universities, they just knew that stuff from experience.
Nowadays, a lot of "developers" come from three months "coder bootcamps". They may even be reasonably proficient in whatever flavor of JS toolkit the camps teach at the moment, but they will have zero knowledge about what makes their computer tick.
The issue isn't inexperienced devs not knowing about stuff. The issue is whether devs with no experience are willing to learn new stuff or not. We all had zero experience at some point.
>"That definitely says something about the discoverability of the platform's functionality."
I think it says more about people being lazy and not willing to use that mighty tool called search (add ChatGPT now). Even here on HN I often see people asking what is X when that X is one web search away.
Yeah, agreed, we should change human psychology to fit our current technological stack better, that sounds way easier than doing the opposite.
On a serious note, I do hope that built-in help systems with ChatGPT glued on (and additionally trained on its contents) will become a de-facto standard in the near future: it should be possible to just ask your computing environment "how do I do X with you?" and get an answer or at least some guidance pointers.
The scope of a modern OS, its utilities, and its commands is so large that good and comprehensive documentation would be huge. And that is just the description of commands with their options. "How do I do X" is even more complex and may require things that go way beyond simple docs and will require books. This is just not scalable and impractical. Having piles of common knowledge like Stack Overflow / forums / etc., augmented with search and now ChatGPT, is way more practical.
I've personally written what is a very comprehensive doc for one of my software products. It described every nook and cranny. Still, I ended up supporting a forum with the most common questions percolated into a dedicated How Do I Do XYZ section, and it just keeps growing. In the beginning I was answering questions there. Now I just see people supporting themselves.
> don’t know what man stands for, don’t know the manual has sections, and aren’t interested in finding out
People that unwilling to learn— uninterested in even discovering that there is a manual and how to open it— don't belong in software.
The rest of the things you mention... whatever. Many people haven't heard a good pitch. They haven't had the relevant experience to see how investing in learning those things can pay off for them. But disinterest in finding out that there's a manual? That's wilful incompetence of a kind I've never seen and hope I never will.
So, as someone who's not very good at UNIX, who knows that it has signals and subshells, knows what ls is, doesn't know 'of', knows there's a manual, has invoked man many times, and is interested in finding out...
How would I use man to find that lsof exists so that I could come up with the incantation mentioned in OP? I tried "man -k 'open'" but none of the answers returns lsof on my system. I checked that "man lsof" works as expected.
man -k 'open files'

should surface lsof on your system. On mine it only has the description 'open files', though, which might not be obviously relevant to many people.
I'm not sure what macOS or FreeBSD `man` have, but GNU `man` has `-K` as well as `-k`. The uppercase version searches the manpages globally, instead of just their short descriptions. So something like
man -K -a 8 "list open" port
will turn up `lsof` on GNU, even if `apropos` doesn't.
If you have a tldr client installed, that can also be useful for searching through common examples including for programs you may not have installed. For example with tealdeer, the following (Fish) surfaces both lsof and netstat as options:
for page in (tldr -l)
tldr $page | rg -A2 -i '\b(find|list)\b.*\bport'
end
and
for page in (tldr -l)
tldr $page | rg -A2 -i '\bkill\b.*\bport'
end
shows that fuser and fkill can both do the same thing as killport.
Grep sucks as a search interface for tldr, though, so you probably want a better tldr client than I have, which will do global search, or an FZF script that will let you filter through tldr results.
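Something like this might do, assuming tealdeer plus fzf (an untested sketch):

    tldr -l | fzf --preview 'tldr {}' | xargs -r tldr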
Discovering new (to you) tools which are not yet installed is a good reason to use the web to look up Unix stuff, especially after you've checked the manpages and your package manager.
Thanks for the detailed response. Seems -K is not available on the UNIX I have (TrueNAS, FreeBSD-based). That should indeed have helped.
> Discovering new (to you) tools which are not yet installed is a good reason to use the web to look up Unix stuff
Of course a web search can help, however I often find that the times when I need to do such searches is quite correlated with times when I don't have internet access. Less frequent now with my "smart" phone, but still happens.
I really don't think I trust this approach, despite it being vouched for by so many responders to this thread. When I run this:
sudo lsof -t -i :22
I expected to get sshd and nothing else, but it actually includes outgoing ssh connections too - which I don't want to kill. Likewise, :443 gives clients, like the PIDs of Firefox and Signal. I'm gonna stick with `fuser`.
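The catch is that `-i :22` matches either endpoint of a connection; lsof can be limited to listeners, though fuser's local-port matching is terser (a sketch):

    sudo lsof -t -iTCP:22 -sTCP:LISTEN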
Well my comment was actually a reference to the manual [0].
But I agree with it, because aliases are just a string substitution for interactive shells, so they are limited. I found that I often end up wanting to add additional logic (such as arguments or local variables) and to have it available for use in scripts later.
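For example (names illustrative):

    alias lsp80='lsof -t -i :80'   # alias: a fixed string, the port can't be a parameter
    lsp() {                        # function: takes the port as an argument
        local port="$1"
        lsof -t -i ":$port"
    }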
That's why you have a set of dotfiles that you use as your base wherever you go. Store it at a url you know and as long as you have wget or curl it's 1 line away. Then you don't have to remember whatever the author decided to name the package.
> The problem is less that people don't understand shell commands but more there is too much stuff to remember.
But is it really better in the long run to add more and more additional tools?
Is it less work to understand and remember more uses of fewer commands or is it less work to understand and remember more specialized commands? The latter is clearly more administrative work, because you have to install all those tools on your machines. On the other hand, finding more uses of a few select tools creates those associations in your brain that you need to cover new usages for them yourself.
To me that's a clear win for the side of fewer well-understood tools.
That doesn't work on Windows and it isn't portable.
At least with Rust it can be abstracted and made to be portable, to work on other platforms like Windows.
Why learn 3 different commands with slightly non-standard arguments to find and kill a process on different OSes, when you can use one unified command and the arguments work the same as expected?
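For the record, the per-OS incantations look roughly like this (the Windows pair is from memory, so treat it as a sketch):

    # Linux:   fuser -k 8080/tcp
    # macOS:   lsof -t -i :8080 | xargs kill
    # Windows: netstat -ano | findstr :8080    then    taskkill /PID <pid> /F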
"Different OSes"? There is only Linux in different tastes and maybe some BSD in some obscure places but no other OSes exist (that are worth to get oneself acquainted with), I am pretty sure of it.
> There is only Linux in different flavors and maybe some BSD in some obscure places.
This is why most people (and developers) still use the Windows Desktop and not the thousands of Linux Desktop(s) out there.
> But no other OSes exist (that are worth getting oneself acquainted with)
macOS exists, and this works on it. Rust supports Windows as well, so it is certainly possible to port it to Windows. Literally, someone asked for Windows. [0]
> This is why most people (and developers) still use the Windows Desktop and not the thousands of Linux Desktop(s) out there.
I'm failing to see the logic here. As a Linux user, trust me there are tons of people out there for which macOS is the only OS that exists or matters. It doesn't prevent people from switching to macOS. Years back that was true of nearly every Windows user as well (only Windows existed). I'm not understanding the causative chain here.
A loop over all fds across all processes on a box will be very slow on any large machine with, say, a few hundred thousand processes, some of which may have up to hundreds of thousands or millions of FDs open.
Especially since it looks like it's reading the owning process's cmdline per-fd.
AFAIK there's no other way to do this on Linux (link a given TCP connection to a process). The network uAPIs (netlink or /proc/net/tcp?) will give you an inode, and to link the inode to a PID you need to go through every open fd in /proc/*/fd. ss, fuser, and lsof (mentioned in other comments) do this too.
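For example, ss does the inode-to-PID resolution for you when you ask for process info (root is needed to see other users' sockets):

    sudo ss -ltnp 'sport = :8080'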
Yup, ss and friends do this, but they don't read "/proc/$pid/cmdline" N times (where N is the number of file descriptors the process has) in the hotloop.
I phrased it poorly. Doing the loop is what it is, but doing the loop and allocating inside (which 'process.cmdline()' does) on every loop is something I'm fairly sure none of the other tools do.
> Yup, ss and friends do this, but they don't read "/proc/$pid/cmdline" N times (where N is the number of file descriptors the process has) in the hotloop.
This one doesn't either. The code structure could be clearer IMHO, but `kill_processes_by_inode` reads the cmdline within the `if target_inode == inode` block, which breaks out of the `for fd in fds` loop at the end. So it only looks at the cmdline once per process that has the target inode.
That said, if `find_target_inodes` returns n inodes, `kill_processes_by_port` will call `kill_processes_by_inode` n times. It'd be better to find all fds only once and compare each to all the target inodes at once with a hash set (if n might be large) or by bisecting a sorted slice. Multiple inodes per port could happen in a few different ways: different processes listening to the same port on different IPv4/IPv6 addresses, an old-fashioned pre-forked sort of server model, a bunch of individually spawned single-threaded servers listening on the same port via `SO_REUSEPORT`/`SO_REUSEADDR`.
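A sketch of the single-pass shape (everything beyond `all_processes` is my assumption about the procfs crate's API, not the tool's actual code):

    use std::collections::HashSet;
    use procfs::process::FDTarget;

    // Collect the PIDs holding any of the target socket inodes in one /proc pass.
    fn pids_with_inodes(targets: &HashSet<u64>) -> Vec<i32> {
        let mut pids = Vec::new();
        for p in procfs::process::all_processes().unwrap() {
            let Ok(process) = p else { continue };        // process may have exited
            let Ok(fds) = process.fd() else { continue }; // or its fds be unreadable
            for fd in fds.flatten() {
                if let FDTarget::Socket(inode) = fd.target {
                    if targets.contains(&inode) {
                        pids.push(process.pid);
                        break; // one match per process is enough
                    }
                }
            }
        }
        pids
    }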
> A loop over all fds across all processes on a box will be very slow on any large machine with, say, a few hundred thousand processes, some of which may have up to hundreds of thousands or millions of FDs open.
Any implementation of this objective has the same limitation, though.
The heavy use of unwrap() is also suspicious. Processes can vanish while you are inspecting them, so there is a good chance that this tool will panic on busy machines.
let processes = procfs::process::all_processes().unwrap();
...
let process = p.unwrap();
Doesn't this use of unwrap cancel out one of the advantages of Rust which is to handle errors quite comprehensively?
Here process could be null, and we try to access it the line after.
edit: actually, process won't be null; unwrap will panic. Got it. I still think a good error message would be better than exposing the user to a crash, but that's a matter of opinion. It seems like the whole point of using Rust, and also of such a tool, is to be robust and user-friendly (since it implements a UNIX one-liner), and an unhandled crash could lower the user's confidence in the tool.
Let's say that you're unable to list all processes. What do you do to handle that error? You exit the program. What does unwrap do? It safely exits the program. Seems to be handling the error just fine to me.
The only downside is that the error message won't look great, but for a tool like this, that seems fine.
> Here process could be null, and we try to access it the line after.
Process cannot be null. Rust does not have nulls outside of unsafe code.
The type of 'p' there is 'Result<Process, ProcError>', and so 'process' is of type 'Process', which can't be null.
That's one of the actual benefits of Rust: it is memory safe outside of unsafe code, and doesn't make the "null" mistake.
`expect` can be used instead of `unwrap` to print an error. That's quite idiomatic I'd say for situations where there's no way to recover anyway, as you mention.
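e.g. (a sketch):

    let processes = procfs::process::all_processes()
        .expect("failed to list processes under /proc");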
I'd say `expect` is idiomatic for situations in which the error is due to an unrecoverable logic flaw. If it's due to user error, I'd want to do something else just to customize the error message format.
Also, as another commenter mentioned, the `let process = p.unwrap()` is suspicious. I imagine this can happen if a process simply exits between being returned from the `/proc` traversal and `opendir("/proc/<pid>")`. If the error is `ENOENT`, it should simply continue to the next pid without printing an error at all.
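Something with this shape, perhaps (the `ProcError::NotFound` variant is my reading of the procfs crate, so double-check it):

    let process = match p {
        Ok(process) => process,
        // the process went away between listing /proc and reading its entry
        Err(procfs::ProcError::NotFound(_)) => continue,
        Err(e) => panic!("unexpected /proc error: {e}"),
    };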
But seriously kids, take the time to learn Unix tools so you can save yourself from needing to write hundreds of lines of Rust when one shell command will do the same thing.
Well yeah, because blindly killing the process that's listening on the port will do the wrong thing for Docker and you can't predict without asking Docker what container/process is the one that owns the port.
If, for example, you run nginx in a default configuration, you'll notice both the master and worker processes will have an fd bound to the same port (port 80 or whatever), so it's also not exactly unusual.
Even without SO_REUSEPORT, you can have multiple things on one port if you have multiple IPs, like something listening on "127.0.0.1:80", and a different process listening on "192.168.1.10:80"
So if you had one process listening to the same port on multiple IPs, this xargs would fail on the second invocation? Seems like piping through uniq may be a good idea.
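e.g. (a sketch; `lsof -t` may already deduplicate PIDs, but sort -u is cheap insurance):

    lsof -i :8080 | awk 'NR > 1 { print $2 }' | sort -u | xargs -r kill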
This is a good idea. Tracking down which process owns a port in `netstat` can be tedious. My only suggestion for improvement is to clarify in the documentation whether this is only for TCP or also for UDP.
If it's "npx kill-port", it does seem to spawn quite a lot of separate processes to get the job done (lsof, grep, awk, xargs, kill). So perhaps this is more efficient, though it seems to have it's own issues from reading the other comments.
If it's "npx killport", it seems to not work on Windows.
I have no problem with that. But the colloquial, error-prone approach is out of place when we are discussing interactions that are more invasive and have OS-level consequences.