Show HN: Killport – CLI tool to kill processes running on a specified port (github.com/jkfran)
153 points by jkfran on April 25, 2023 | 136 comments



`fuser` (from `psmisc`[0] Linux package; commonly installed IME, also supplies things like `pstree`) lets you identify processes which have files or sockets open, and has a `-k` switch for killing them. So you can run e.g.

    fuser -k 8080/tcp       # kill processes with TCP :8080 open, IPv4 and IPv6
    fuser -k -6 8080/tcp    # same but IPv6 only
    fuser -k 8080/udp       # UDP instead of TCP
[0] https://gitlab.com/psmisc/psmisc


Does this also release the port? I sometimes have an application listen on a port and when I kill it, the port is still reserved or blocked for a minute or two.


fuser is so handy - I am surprised more people do not know about it. It is one of my favourite tricks when pair programming etc.

You can also use it to kill processes that have a file open too, which is also handy if your hung server is also hanging onto files as well as ports.


As a somewhat experienced admin, I can't imagine killing a process on a port without first understanding what process is using that port and why it is using that port. If a process is supervised, then I can't imagine killing a process when there might be process specific logic encoded into a script to try first.

  lsof -P | grep LISTEN | grep $PORT
  kill $RELEVANT_PID
The Rust tool itself uses kill -9, which probably isn't the ideal behavior. SIGTERM followed by SIGKILL after a timeout would be more appropriate for a script, but even more appropriate would be to figure out whether it's supervised and `service stop` it.
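A minimal sketch of that TERM-then-KILL escalation, in bash (`term_then_kill` is a made-up name; this is not what killport itself does):

```shell
# Send SIGTERM, give the process a chance to clean up, then escalate to SIGKILL.
# Usage: term_then_kill PID [TIMEOUT_SECONDS]
term_then_kill() {
  local pid=$1 timeout=${2:-5} i
  kill -TERM "$pid" 2>/dev/null || return 0   # already gone
  for ((i = 0; i < timeout; i++)); do
    kill -0 "$pid" 2>/dev/null || return 0    # exited on its own
    sleep 1
  done
  kill -KILL "$pid" 2>/dev/null               # still alive: force it
}
```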

The real problem with this script is that it violates the unix philosophy (https://en.wikipedia.org/wiki/Unix_philosophy). "Do one thing and do it well" is the main problem.

This program finds processes to kill AND it kills them AND it kills their children and it does each of those things poorly.

For this program to be better than `lsof` and `kill`, it would need to implement command-line options for which signal to send. By the time that is implemented, it will have reimplemented a program with the same interface as kill, and it would be better for that to be its own program that could be piped to. What's left would be a program that finds a process listening on a specific port, and there are already plenty of good programs for that.


Counter argument is that I don't care about any of those arguments for the use case where I shut my laptop lid and now can't boot my dev server during local development


Yeah, that is fair. For that problem, from an admin perspective, killport becomes a bandaid for not fixing the bug of poor behavior when the laptop lid closes. Instead of finding an automatic technical solution that does the right thing, every developer must toil by running operational commands to resolve their hung server.

If you have a "cargo cult" culture rather than an "understand the problem" culture, what happens is that you hire a new dev and teach them that when the lid closes, you run killport to get it running again. The environment takes 10 seconds to set up and that's fine. By the time you're at 100 employees, you have 100 employees killporting their dev server, and now it takes 2 minutes to restart.

If you're lucky you get an altruistic hero dev who says "screw this, I don't want to deal with this any more" and they solve it. If you're unlucky, the behavior gets scaled out to 1000 employees, your dev server takes 10 minutes to initialize, and your developer tools team is under-resourced with higher-priority stuff to deal with, since "dev server hung" already has a solution. That's a little hyperbolic, but I've seen similar things.

Maybe what you really want is a cron job or supervisor that health checks your environment and restarts it when it's hung. Maybe there's a place to shim open/close lid behavior. Maybe spinning disks are being used instead of SSD's. Maybe the problem is local dev environment instead of remote. Most of those suggestions are automations or mitigations. The real solution might be as simple as a configuration change.

It seems like a very solvable problem.

> now cant boot my dev server during local development

Why?


> Why?

Because you can't have two processes using the same port, and the process that was using the port is now not responding. Perhaps you don't even want to go through the process of learning what PID it has, you just want the port back so you can relaunch the dev server on localhost.

Your criticisms of this program do not seem well-founded, because you're assuming that it will be used improperly. In many cases this is a very simple problem with a simple solution.


> the process that was using the port is now not responding.

Yes, but why?

Between strace, ps, atop, lsof, logging, and understanding the OS and TCP, it shouldn't be horrific to figure out what is happening. The very same skills to understand the problem are valuable to a lot of problems and investing in one simple problem is a further investment in future problems. "I don't have the telemetry to understand this" is, if nothing else, a deferred problem. Having devs that aren't prepared for oncall is also a problem.

"Why is my process hung?" is something I hope most devs can solve.

Not solving a frequently hanging process is short-termism. Not solving problems creates a learned helplessness about systemic problems. They build up and get addressed too late.

> Because you can't have two processes using the same port

https://lwn.net/Articles/542629/

Maybe this is relevant: https://hea-www.harvard.edu/~fine/Tech/addrinuse.html
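On that note, the "port blocked for a minute or two after killing" symptom mentioned upthread is usually TIME_WAIT, which no process owns, so no amount of killing frees it. A quick way to check (assuming iproute2's `ss` is installed):

```shell
# List sockets lingering in TIME_WAIT on port 8080. No process owns these;
# the kernel releases them by itself (typically after ~60s). A server that
# sets SO_REUSEADDR before bind() can rebind immediately despite them.
ss -tan state time-wait '( sport = :8080 )'
```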

> criticisms of this program

I am a little harsh because it increases complexity when no increased complexity is needed. That is my only strong criticism, a whole new program is invoked when we have plenty of very sufficient (composable) tools already.


The progression you’re outlining here sounds perfectly acceptable to me. I encounter many problems per day and they aren’t always worth understanding and solving, especially if there is a 10s bandaid for it. When the bandaid becomes expensive, that’s when I’ll put in the effort to fix it.


I bet this wasn't motivated by the sys admin use case; as you say, people like you don't need this because they will do it manually with shell commands after due diligence.

This is for development environment: you know that tool foo always runs on port 8888; you want a clean slate before rerunning your tests/dev server/ whatever.

(But sure, personally I would say people should learn enough shell programming to use shell commands for this, or even make their own utility as a shell script/function, rather than use a one-off tool).


To avoid any unwanted matches with grep (e.g. ports 1999 and 19999), use:

    $ lsof -i:$port | grep LISTEN
    $ kill $pid


grep isn't necessary at all here:

  lsof -s TCP:LISTEN -i :$port


You could also add the -t option for direct piping to kill.


For me it would be

netstat -nap | grep port (to see the proc and get its pid)

and then kill -9 pid to kill it


Who told you to kill a process on a port without first understanding what process is using that port?


Advantages over the usual command line approach? (which you can easily turn into an alias)

    kill $(lsof -t -i:8080)


Well, the simple explanation is probably that the author isn't aware of `lsof` and might even learn something from your comment.

lsof isn't something you automatically pick up if it's not your daily business (like so many other things). I myself often forget about it, but I have some usual ports to kill, so these days I simply search my shell history by port to find lsof again.


I keep finding uses for more of the ls family: knew lsusb and lspci for ages, lsof is handy, and most recently I learned lsblk which is always worth using before dd'ing an image onto a disk, etc.

Back in the DOS era, there were few enough files on the system that I could just explore around and learn what everything did. That's no longer practical, but what's the alternative? There doesn't seem to be a "top 100 commands to get familiar with", or whatever. The ss64.com pages are pretty great, but...


ls /usr/bin

Choose a random entry you don't recognise and open the man page for it. There's not that many on most systems - I mean, there's likely a couple hundred or more, but some are convenience symlinks and many you'll already know.

For things you may not have by default, there's a few "awesome" lists available, like https://github.com/agarrharr/awesome-cli-apps


Hmm, tried that but that directory seems to contain ~3000 files on my old Ubuntu.


1621 on Pop, which is still about 15x more than I can reasonably remember. Hence the whole problem: it's not digestible unless you make digesting it your whole hobby. Ah well. Thanks for confirming, I guess.


I've known about lsof and about kill, but I've never put them together like this. I find this tool compelling simply because now I don't have to remember all this stuff.

What we really need is a tool like this for Windows. Find the process that keeps this file open and kill it. It's so annoying when I can't delete a file because some process has it open but I can't tell which one.


`handle` from sysinternals gets you some of the way there. Maybe all of the way there? (I haven't used Windows in over a decade)

https://learn.microsoft.com/en-us/sysinternals/downloads/han...


Powertoys for Windows has added a "What is blocking this file?" feature recently.

https://learn.microsoft.com/en-us/windows/powertoys/file-loc...


Useful, but talk about too little too late! The fact that this is an external tool, not part of the operating system, and that you are expected to, in 2023, blindly guess which application is holding your flash drive or documents open is pretty pathetic.


> What we really need is a tool like this for Windows.

Nevermind Windows, I want an app like this for android.


True, but there's no shortage of Stack Overflow answers: https://www.google.com/search?hl=en&q=kill%20process%20on%20...


> lsof isn't something you automatically pickup if it's not your daily business

It's a good thing we now have ChatGPT to help us :)


When killing by port number, you usually want to restrict to the LISTENING server port. Otherwise you will likely kill things connecting to the port. For example, in a local development environment the above command would kill both the web server and ALL the web browsers with an open page to that server.

That is the full browser instance, not just the specific tab's connection to the server.

This is the command I would use:

    kill -9 $(lsof -nP -t -iTCP:8080 -sTCP:LISTEN)


I was thinking the same thing. Quite a bit of effort: 3 branches, 26 commits, and Rust (the effort that goes into taming the beast). The hacker mindset is usually to avoid over-engineering, saving effort by not re-solving a solved problem.


Typical rust programmer



The UX of "killport 8080" is way better than googling the Stack Overflow answer for what you just typed every time.


You don't have to do it exactly like that, or probably 'google the stack overflow' once you've used kill & lsof (whether for this purpose or something else). My thought, same as GP's, as soon as I read the title was 'lsof & kill?' - I doubt they looked up the pretty simple command they gave.

And as they said, 'which you can easily turn into an alias [well, a function]', which you could call 'killport' if you want for exactly the same functionality.


You could do that too. UNIX gives you a lot of options.


Do what too?


As a sysadmin, `lsof` is part of my standard toolkit; I wouldn't be able to do my job if I had to look stuff like that up on Stackoverflow every time.


Nice flex but what about every other person that is not a sysadmin?


They don't find themselves in the situation of sitting before a terminal emulator needing to kill a process by port in the first place?


Developers find themselves in this situation sometimes and are not sysadmins. I had this situation before, and I would need to google to know what lsof does and how to use it. And so do many others that are commenting positively here.


Experienced users (developer or otherwise) of unixy systems already know what lsof is. Inexperienced developers only need to learn it once, then they join the ranks of the aforementioned. Using these sort of systems for any length of time in the capacity of a developer or even just a hobby enthusiast will effectively make you a 'sysadmin', at least w.r.t. knowing the basic standard tools. Not least because lsof is useful for many sort of tasks (like figuring out which program is using a drive you're trying to umount), so chances are high that people in these situations already know about lsof.

And besides having to learn that lsof exists, you'd also have to learn that this even more obscure tool exists. And what happens if you prefer for the program to terminate itself gracefully, cleaning up whatever mess it might have made, instead of sending SIGKILL? Now you have to look up the PID or program name anyway, probably using lsof..


I'd say it's a toss-up whether the stackoverflow/chatgpt query is slower than remembering the name of the alternate tool.


Get tldr; the examples it gives almost always save me from man or Google. So handy!


Meanwhile other people are developers who only occasionally need to do this kind of thing. So they/we forget.


This is the sort of thing which Copilot on the command line is great at, actually. Copilot X is decent, too, but I prefer `copilot.vim` in my neovim plus `EDITOR=nvim` plus C-x C-e "# kill all processes listening on port 2134" which then proceeds to put it into your history and with a comment explaining it and keywording it so you can find it easier.


function killport() { kill $(lsof -t -i:"$1"); }


> Googling the stack overflow [...] every time

Don't look basic Unix stuff up on the web. Look it up in your Unix environment. The documentation is self-contained, and accessing it internally is way, way more efficient.

So instead, try one of these:

  - use `man lsof`, search 'EXAMPLES' to jump down to the examples section, and search for the word 'port'
  - search directly though examples: `tldr lsof | grep -A2 -i port`
  - look through your shell history with grep or CTRL+R
  - type the first few letters of the command and let history-based autocompletion hint the rest to you
Those options may seem strange if you're not used to them. But if you're not using techniques like that, and instead going to your web browser to look up CLI usage, you don't need to learn the CLI— you need to learn how to learn the CLI. Once you do that, a ton of things will become much easier and quicker for you to deal with.


As someone who is good at unix but watches other people struggle with it: the advantage is that people don’t understand signals, subshells, what ls stands for, what of stands for, don’t know the manual, don’t know what man stands for, don’t know the manual has sections, and aren’t interested in finding out.


I was going to say use kill and lsof, and also something similar to what you said here. I would add that even many seasoned developers don’t know UNIX-like systems well enough to do anything with native tooling and therefore replicate native functionality poorly and slowly.

For many people, it’s faster to write an entirely new tool than it is to go learn the platform for which they’re writing and therefore they do it.


I am not good at knowing Linux at all beyond some general concepts. If I did, my brain would have no room left. However, when dealing with it I always assume that it has everything and do a web search. I do not remember a single case where I wanted to write some utility. The ones available, along with bash, always cover all of my needs, leaving me more time to design and implement the actual software I am being paid for. My whole CI/CD - or whatever complete management is called - is a few bash scripts.


That definitely says something about the discoverability of the platform's functionality.

A similar thing also regularly happens in medium-to-large codebases; that's how you end up with four slightly different implementations of e.g. "extract value by path from a JSON object, with some traversal options": it's easier to just write the damn loop yourself than to find out a) if it's already been written, and if yes, b) how the hell you use it.


> That definitely says something about the discoverability of the platform's functionality.

Meh. Back in ye olde days, when developers were either self-taught from the roots or came from universities, they just knew that stuff from experience.

Nowadays, a lot of "developers" come from three months "coder bootcamps". They may even be reasonably proficient in whatever flavor of JS toolkit the camps teach at the moment, but they will have zero knowledge about what makes their computer tick.


The issue isn't inexperienced devs not knowing about stuff. The issue is whether devs with no experience are willing to learn new stuff or not. We all had zero experience at some point.


>"That definitely says something about the discoverability of the platform's functionality."

I think it says more about people being lazy and not willing to use that mighty tool called search (add ChatGPT now). Even here on HN I often see people asking what is X when that X is one web search away.


Yeah, agreed, we should change human psychology to fit our current technological stack better, that sounds way easier than doing the opposite.

On a serious note, I do hope that built-in help systems with ChatGPT glued on (and additionally trained on its contents) will become a de-facto standard in the near future: it should be possible to just ask your computing environment "how do I do X with you?" and get an answer or at least some guidance pointers.


The scope of a modern OS, its utilities and commands is so large that a good and comprehensive doc would be huge. And that is just the description of commands with options. "How do I do X" is even more complex and may require things that go way beyond simple docs and will require books. This is just not scalable and impractical. Having piles of common knowledge like Stack Overflow / forums / etc. augmented with search and now ChatGPT is way more practical.

I've personally written what is a very comprehensive doc for one of my software products. It described every nook and cranny. Still, I ended up supporting a forum with the most common questions percolated to a dedicated How Do I Do XYZ section, and it just keeps growing. In the beginning I was answering questions there. Now I just see people supporting themselves.


That one is at least defensible, a Q and A in a public forum can be a seed that generates further discussion and provides knowledge to others.


> don’t know what man stands for, don’t know the manual has sections, and aren’t interested in finding out

People that unwilling to learn— uninterested in even discovering that there is a manual and how to open it— don't belong in software.

The rest of the things you mention... whatever. Many people haven't heard a good pitch. They haven't had the relevant experience to see how investing in learning those things can pay off for them. But disinterest in finding out that there's a manual? That's wilful incompetence of a kind I've never seen and hope I never will.


So as someone who's not very good at UNIX, knows that it has signals, subshell, what ls is, don't know of, know there's a manual and has invoked man many times and is interested in finding out...

How would I use man to find that lsof exists so that I could come up with the incantation mentioned in OP? I tried "man -k 'open'" but none of the answers returns lsof on my system. I checked that "man lsof" works as expected.


  man -k open
should surface lsof on your system. On mine it only has the description 'open files', though, which might not be obviously relevant to many people.

I'm not sure what macOS or FreeBSD `man` have, but GNU `man` has `-K` as well as `-k`. The uppercase version searches the manpages globally, instead of just their short descriptions. So something like

  man -K -a 8 "list open" port
will turn up `lsof` on GNU, even if `apropos` doesn't.

If you have a tldr client installed, that can also be useful for searching through common examples including for programs you may not have installed. For example with tealdeer, the following (Fish) surfaces both lsof and netstat as options:

  for page in (tldr -l)
    tldr $page | rg -A2 -i '\b(find|list)\b.*\bport'
  end
and

  for page in (tldr -l)
    tldr $page | rg -A2 -i '\bkill\b.*\bport'
  end
shows that fuser and fkill can both do the same thing as killport.

Grep sucks as a search interface for tldr, though, so you probably want a better tldr client than I have, which will do global search, or an FZF script that will let you filter through tldr results.

Discovering new (to you) tools which are not yet installed is a good reason to use the web to look up Unix stuff, especially after you've checked the manpages and your package manager.


Thanks for the detailed response. Seems -K is not available on the UNIX I have (TrueNAS, FreeBSD-based). That should indeed have helped.

> Discovering new (to you) tools which are not yet installed is a good reason to use the web to look up Unix stuff

Of course a web search can help; however, I often find that the times when I need to do such searches are quite correlated with times when I don't have internet access. Less frequent now with my "smart" phone, but it still happens.


I really don't think I trust this approach, despite it being vouched for by so many responders to this thread. When I run this:

    sudo lsof -t -i :22
I expected to get sshd and nothing else, but it actually includes outgoing ssh connections too - which I don't want to kill. Likewise :443 gives clients like the pids of firefox and signal. I'm gonna stick with `fuser`.


I’m not sure either. It looks more like a contrived implementation for the sake of using Rust.

P.S.: functions are almost always preferable to aliases


(Heavy alias user here. Why are functions to be preferred over aliases?)


Well my comment was actually a reference to the manual [0].

But I agree with it, because aliases are just a string substitution for interactive shells, so they are limited. I found that I often end up wanting to add additional logic (such as arguments or local variables) and to have it available for use in scripts later.

[0] https://www.gnu.org/software/bash/manual/html_node/Aliases.h...
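To make the manual's point concrete, a small sketch (bash; `greet`/`greet_fn` are throwaway names):

```shell
#!/usr/bin/env bash
# Aliases are plain text substitution, and by default bash only expands them
# in interactive shells; a script needs this opt-in:
shopt -s expand_aliases
alias greet='echo hello'
greet world        # expands to: echo hello world

# A function takes positional parameters, can validate them, and works in
# scripts and pipelines without any opt-in:
greet_fn() {
  [ $# -eq 1 ] || { echo "usage: greet_fn NAME" >&2; return 2; }
  echo "hello, $1"
}
greet_fn world     # prints: hello, world
```

The alias can only append its arguments at the end of the expansion; the function can place `$1` anywhere and add logic around it, which is exactly what a `killport`-style helper needs.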


Huh I had no idea lsof could be used like that. I've always used netstat to find what process was using a given port.


I see one advantage: it allowed me to run into your comment :-)


Does this work on Windows? I've been using an npm package for this, "port-kill".

The problem is less that people don't understand shell commands and more that there is too much stuff to remember.

An alias works well for this stuff but then you have to set it up everywhere you work. Sometimes downloading a package is just easier.

    npx kill-port 8080


That's why you have a set of dotfiles that you use as your base wherever you go. Store it at a url you know and as long as you have wget or curl it's 1 line away. Then you don't have to remember whatever the author decided to name the package.


> The problem is less that people don't understand shell commands but more there is too much stuff to remember.

But is it really better in the long run to add more and more additional tools?

Is it less work to understand and remember more uses of fewer commands or is it less work to understand and remember more specialized commands? The latter is clearly more administrative work, because you have to install all those tools on your machines. On the other hand, finding more uses of a few select tools creates those associations in your brain that you need to cover new usages for them yourself.

To me that's a clear win for the side of fewer well-understood tools.


I can see writing such a tool when learning e.g. network programming in a new language, here Rust.


I’ll only argue that lsof is a command for which I need to look up the argument structure every time.

Relevant xkcd: https://xkcd.com/1168/


At least it didn't say "no man" :) Anyways, I always wanted to map pid/user/port, so TIL: lsof.

I may not come up with that one-liner when I need it, but `lsof -i` (i for internet) is OK for me to resolve some issues. Err, `sudo lsof -i`.


That doesn't work on Windows and it isn't portable.

At least with Rust it can be abstracted and made to be portable, to work on other platforms like Windows.

Why learn 3 different commands with slightly non-standard arguments to find and kill a process on different OSes, when you can use one unified command and the arguments work the same as expected?


"Different OSes"? There is only Linux in different tastes and maybe some BSD in some obscure places but no other OSes exist (that are worth to get oneself acquainted with), I am pretty sure of it.


> There is only Linux in different tastes and maybe some BSD in some obscure places.

This is why most people (and developers) still use the Windows Desktop and not the thousands of Linux Desktop(s) out there.

> But no other OSes exist (that are worth to get oneself acquainted with)

macOS exists and it works with that. Rust supports Windows as well so it is certainly possible to port it to Windows. Literally someone asked for Windows. [0]

> I am pretty sure of it.

Hence, I don't think you are even sure.

[0] https://news.ycombinator.com/item?id=35698882


> This is why most people (and developers) still use the Windows Desktop and not the thousands of Linux Desktop(s) out there.

I'm failing to see the logic here. As a Linux user, trust me there are tons of people out there for which macOS is the only OS that exists or matters. It doesn't prevent people from switching to macOS. Years back that was true of nearly every Windows user as well (only Windows existed). I'm not understanding the causative chain here.


> macOS exists and it works with that

Your parent poster is obviously trolling, but macOS is some kind of BSD, so they actually took this case into account.


My point still stands about Windows, regardless.


Windows now includes the basics of GNU/Linux out of the box.



My other holy smokes moment was the install sh script hosted on bitly

> curl -sL https://bit.ly/killport | sh


A shame it doesn't require "sudo sh" to install.


Hah, I was about to rant about the same. This tool is obviously not battle-proven and has never run outside the developer's machine.


care to enlighten us mere mortals? why is this a holy smokes?


A loop over all fds across all processes on a box will be very slow on any large machine with, say, a few hundred thousand processes, some of which may have up to hundreds of thousands or millions of FDs open.

Especially since it looks like it's reading the owning process's cmdline per-fd.


AFAIK there's no other way to do this on Linux: link a given TCP connection to a process. The network uAPIs (netlink or /proc/net/tcp) will give you an inode, and to link the inode to a PID you need to go through every open fd in /proc/*/fd. ss, fuser, and lsof (mentioned in other comments) do this too.


Yup, ss and friends do this, but they don't read "/proc/$pid/cmdline" N times (where N is the number of file descriptors the process has) in the hotloop.

I phrased it poorly. Doing the loop is what it is, but doing the loop and allocating inside (which 'process.cmdline()' does) on every loop is something I'm fairly sure none of the other tools do.


> Yup, ss and friends do this, but they don't read "/proc/$pid/cmdline" N times (where N is the number of file descriptors the process has) in the hotloop.

This one doesn't either. The code structure could be clearer IMHO, but `kill_processes_by_inode` reads the cmdline within the `if target_inode == inode` block, which breaks out of the `for fd in fds` loop at the end. So it only looks at the cmdline once per process that has the target inode.

That said, if `find_target_inodes` returns n inodes, `kill_processes_by_port` will call `kill_processes_by_inode` n times. It'd be better to find all fds only once and compare each to all the target inodes at once with a hash set (if n might be large) or by bisecting a sorted slice. Multiple inodes per port could happen in a few different ways: different processes listening to the same port on different IPv4/IPv6 addresses, an old-fashioned pre-forked sort of server model, a bunch of individually spawned single-threaded servers listening on the same port via `SO_REUSEPORT`/`SO_REUSEADDR`.


Oh yeah, definitely. In my implementation, even skipping reading comm (once!) if not needed gave me better perf than ss:

https://github.com/anisse/tcpkill/blob/cfd96d5dec438a3722edb...


> A loop over all fds across all processes on a box will be very slow on any large machine with, say, a few hundred thousand processes, some of which may have up to hundreds of thousands or millions of FDs open.

Any implementation of this objective has the same limitation, though.


The heavy use of unwrap() is also suspicious. Processes can vanish while you are inspecting them, so there is a good chance that this tool will panic on busy machines.


Curious about the following lines [1] [2]:

    let processes = procfs::process::all_processes().unwrap();
    ...
    let process = p.unwrap();
Doesn't this use of unwrap cancel out one of the advantages of Rust which is to handle errors quite comprehensively?

Here process could be null, and we try to access it the line after.

edit: actually process won't be null; unwrap will panic. Got it. I still think a good error message would be better than exposing the user to a crash, but that's a matter of opinion. It seems like the whole point of using Rust, and also of such a tool (since it implements a UNIX one-liner), is to be robust and user-friendly, and an unhandled crash could lower the user's confidence in the tool.

[1] https://github.com/jkfran/killport/blob/3a43d037c08ab3aec730...

[2] https://github.com/jkfran/killport/blob/3a43d037c08ab3aec730...


Let's say that you're unable to list all processes. What do you do to handle that error? You exit the program. What does unwrap do? It safely exits the program. Seems to be handling the error just fine to me.

The only downside is that the error message won't look great, but for a tool like this, that seems fine.

> Here process could be null, and we try to access it the line after.

Process cannot be null. Rust does not have nulls outside of unsafe code.

The type of 'p' there is 'Result<Process, ProcError>', and so 'process' is of type 'Process', which can't be null.

That's one of the actual benefits of Rust: it is memory safe outside of unsafe code, and doesn't make the "null" mistake.


>"Let's say that you're unable to list all processes. What do you do to handle that error? You exit the program."

No, you list what you can and give meaningful error message for any particular offenders.


`expect` can be used instead of `unwrap` to print an error. That's quite idiomatic I'd say for situations where there's no way to recover anyway, as you mention.


I'd say `expect` is idiomatic for situations in which the error is due to an unrecoverable logic flaw. If it's due to user error, I'd want to do something else just to customize the error message format.

Also, as another commenter mentioned, the `let process = p.unwrap()` is suspicious. I imagine this can happen if a process simply exits between being returned from the `/proc` traversal and `opendir("/proc/<pid>")`. If the error is `ENOENT`, it should simply continue to the next pid without printing an error at all.


I love how there's a special case behavior for the hardcoded process name "docker." Just incredible: https://github.com/jkfran/killport/blob/main/src/linux.rs#L9...

But seriously kids, take the time to learn Unix tools so you can save yourself from needing to write hundreds of lines of Rust when one shell command will do the same thing.


Well yeah, because blindly killing the process that's listening on the port will do the wrong thing for Docker, and you can't know which container/process actually owns the port without asking Docker.


Only because Docker is broken. No other process manager would be confused by that.

Maybe we should patch kill itself? It wouldn't surprise me one bit if that pull request exists.


To support multiple PIDs:

    killport() { lsof -ti :"$1" | xargs kill -9; }


kill already supports multiple pids

  kill $(lsof -t -i TCP:$1 -sTCP:LISTEN)
will do just fine. And you probably want listening ports only, hence -s.

Unless you want it to work with millions of listening processes, which would overflow the kernel's argument-length limit (ARG_MAX) and require the use of xargs.

In that case you probably want GNU xargs with -n and -P to batch the job and run the kills in parallel, because that's going to take a while.
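A sketch of that batching (GNU xargs assumed; the port and batch sizes are illustrative):

```shell
# Kill listeners on :8080 in batches of 64 PIDs, 4 kills running
# in parallel. -r (GNU) skips the command entirely on empty input.
lsof -ti :8080 -sTCP:LISTEN | xargs -r -n 64 -P 4 kill
```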


When would you have multiple pids?


See SO_REUSEPORT in 'man 7 socket'

If, for example, you run nginx in a default configuration, you'll notice both the master and worker processes will have an fd bound to the same port (port 80 or whatever), so it's also not exactly unusual.

Even without SO_REUSEPORT, you can have multiple things on one port if you have multiple IPs, like something listening on "127.0.0.1:80", and a different process listening on "192.168.1.10:80"


So if you had one process listening to the same port on multiple ips, this xargs would fail on the second invocation? Seems like piping through uniq may be a good idea
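Something like this sketch (assumes lsof; `xargs -r` is the GNU flag that skips the kill when nothing matched):

```shell
# Dedupe PIDs first, since lsof can report the same process once
# per bound address; then kill whatever remains.
killport() {
  lsof -ti :"$1" -sTCP:LISTEN | sort -u | xargs -r kill
}
```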


Probably when binding a socket with SO_REUSEADDR or SO_REUSEPORT.


sshd does this.

    sshd 3891262 root   4u IPv4 15450342 0t0 TCP ibpmaas-testing:https->10.10.47.11:49064 (ESTABLISHED)
    sshd 3891351 ubuntu 4u IPv4 15450342 0t0 TCP ibpmaas-testing:https->10.10.47.11:49064 (ESTABLISHED)

    # lsof -ti ":49064"
    3891262
    3891351

edit for readability


kill accepts multiple pids. So there is no need to use xargs unless you expect so many pids that the command line is exhausted.


I made an amazing CLI utility for this! Simply add this to your .bashrc or .zshrc file:

alias portkill='function killit(){kill -9 "$(lsof -t -i:$1)" &>/dev/null || true};killit'

then run portkill PORT


Why define the alias as a function instead of the whole thing as a function?
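i.e. presumably the same thing could just be (untested sketch):

```shell
# Plain function, no alias indirection needed.
portkill() {
  kill -9 "$(lsof -t -i:"$1")" &>/dev/null || true
}
```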


It is amazing, otherwise it would just be normal


Different processes can listen on the same port number for TCP and UDP and IPv4 and IPv6. How does it differentiate between those?
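With lsof, at least, you can restrict the match by address family and protocol yourself (port number illustrative):

```shell
lsof -ti 4TCP:8080 -sTCP:LISTEN   # IPv4 TCP listeners only
lsof -ti 6TCP:8080 -sTCP:LISTEN   # IPv6 TCP listeners only
lsof -ti UDP:8080                 # UDP sockets (UDP has no LISTEN state)
```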



I've come up with this simple bash function for a while already: https://gist.github.com/kschiffer/912d95ca552112820d34f59ec6...

Just add it to your shell config (e.g. `.zshrc`) and use it like so: `$ killport 8080`


In case you're interested in only closing the TCP socket, and not killing the process, I wrote a linux-only PoC: https://github.com/anisse/tcpkill/

It has two implemented methods: one uses pidfd, the other uses netlink INET_DIAG_DESTROY (like ss -K).

CLI UX could be improved, but it does the job.


If you prefer shell, here's a simple script to look up port processes. The script can use lsof, netstat, ss, fuser, or sockstat.

https://github.com/sixarm/port-to-process


This is a good idea. Tracking down which process owns a port in `netstat` can be tedious. My only suggestion for improvement is to clarify in the documentation whether this handles only TCP or also UDP.


"netstat -anp | grep $PORT" is tedious?


ss is from pkg iproute2 (installed by default on Void Linux)

   ss -tunlp '(dst = :8080 or src = :8080)'
   ss -K '(dport = :8080 or sport = :8080)'
BSD has fstat(1) which will list open file descriptors including network connections.

NetBSD has sockstat(1) which lists open sockets.

OpenBSD has tcpdrop(1) but that's only TCP.

macOS has none of these, apparently. It looks like it ships with fuser and lsof instead.


ss is the right tool for this. Additionally, it may be commonly necessary to find processes in network namespaces.

  ip -all netns exec ss -K …


ss (from iproute2) can also free up ports with -K flag. It doesn't kill the process, it just closes the sockets


I'll use a one-liner for that:

https://github.com/helpermethod/pk/blob/main/pk

Also handles port reuse


You can also use npm's for that: `npx kill-port 8080`


Wait what’s the difference between this and npx killport?


If it's "npx kill-port", it does seem to spawn quite a lot of separate processes to get the job done (lsof, grep, awk, xargs, kill). So perhaps this is more efficient, though it seems to have its own issues from reading the other comments.

If it's "npx killport", it seems to not work on Windows.


I would consider needing an npm install a huge difference from a static binary


I consider it an advantage because you get a version-locked dependency on the tool that is easily updated along your other dependencies.


Wondering how it's different from `kill -9 $(lsof -t -i:4000 -i:8081)`?


This is what GPT-4 comes up with:

    kill $(lsof -t -i :PORT_NUMBER)


I use this, forcibly kill, no logging, never errors, works on mac and linux:

    kill -9 "$(lsof -t -i:5000)" &>/dev/null || true


I like how the install script alone for this tool is much longer than the bash-ism


… and its answer is wrong, for reasons discussed in the very many other threads on this post that suggest this same snippet.

(On my laptop, this kills stern, Chrome, Slack, Firefox, and 1Pass…)
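The usual fix is to restrict the match to listening sockets, so clients that merely have connections open to that port are left alone:

```shell
# Only PIDs with a LISTEN socket on :8080 (unquoted $() so
# multiple PIDs expand to separate kill arguments).
kill $(lsof -t -i tcp:8080 -sTCP:LISTEN)
```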


Processes do not run on a port. Processes listen on a port.


Useless pedantry, people say server processes are “running on port XYZ” all the time


I have no problem with that. But the colloquial, error-prone phrasing is out of place when we're discussing interactions that are more invasive and have OS-level consequences.


John walks to the stadium to watch the game.

John is not walking to the game.



