Pause/unpause any X11 application (vermaden.wordpress.com)
119 points by vermaden 6 months ago | 49 comments



Very cool. I tried it on KDE Neon (Ubuntu 18.04) and it worked perfectly. It didn't even leave any artifacts. However, all input events were queued up by KWin, and the application handled them as soon as I resumed it.

Would be seriously cool if KWin natively supported this so apps would truly freeze. Would fit in perfectly with Plasma's Activity system.


You can suspend a whole Plasma activity already. Isn't that more useful?


Really? I've never noticed it. Apps running in another Plasma activity always show up as running instead of stopped and continue to consume resources. I only tested by switching between activities.


They don't suspend by default. I don't use KDE any longer so maybe it's changed, but you just needed to right-click on the activity in the activity panel and there should be a menu item to suspend the whole activity. It's really a non-obvious feature... :)

It's very handy and it's the main thing I miss from KDE, b/c you can have more stuff "running" on a wimpy computer. I'd usually have a whole activity for some project I only touched once a week, and it'd just sit suspended in the background till I needed it again.


It would be wonderful to have a pause button on the titlebar of applications.


Great idea. It would probably need to be WM-dependent. I started looking at the Gnome path and realized adding a window title bar menu item is probably easier, because they seem pretty locked into the Big Three buttons, and you'd get into layout and icon issues.


It's totally possible :)


This is actually very neat, I hadn't thought about pausing desktop applications to save battery/CPU before. Tried it out real quick with Safari on my work laptop (macOS), worked like a charm with

   killall -STOP "Safari"
Safari is paused/suspended

   killall -CONT "Safari"
Responding again.

Shouldn't be too hard to hook it up to a keyboard shortcut which grabs the active window and sends stop/continue signals to its process.
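
For X11, a minimal toggle sketch along those lines (assuming xdotool is installed; `ps -o state=` reports "T" for a stopped process on both Linux and the BSDs) might look like:

    # Hypothetical toggle.sh - bind it to a hotkey in your WM.
    PID=$(xdotool getactivewindow getwindowpid)   # PID of the focused window
    STATE=$(ps -o state= -p "$PID" | cut -c1)     # 'T' means stopped
    if [ "$STATE" = "T" ]; then
        kill -CONT "$PID"
    else
        kill -STOP "$PID"
    fi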


`xdotool` is available via Homebrew (https://brew.sh), so you can just use the provided `desktop-pause.sh` script with a shortcut, which does exactly that - "grabs the active window and sends stop/continue signals to that window".


Does this work for Aqua applications as well?

I'm thinking we'd like to use something like Hammerspoon or AppleScript to fetch the PID of the currently focused window.

https://www.hammerspoon.org
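
On the macOS side, a rough sketch without Hammerspoon - assuming System Events exposes the frontmost process's `unix id` property to osascript/AppleScript:

    # Hypothetical: fetch the PID of the frontmost application, then freeze/thaw it.
    PID=$(osascript -e 'tell application "System Events" to get unix id of first process whose frontmost is true')
    kill -STOP "$PID"    # pause
    kill -CONT "$PID"    # resume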


Heh, nice. I also wrote a simple script a long while ago, after someone wrote Universal Pause Button for Windows. https://gist.github.com/auscompgeek/5da8f27e50feb185d1e2

There are a couple of functional differences I can see:

- I directly read /proc to grab the current process status, which probably isn't very portable... (a sketch of that check follows below)

- vermaden's will also stop the children of the process.

Unfortunately I don't really use this any more, since I use Wayland these days.
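
For reference, the /proc check amounts to something like this (Linux-only; the comm field in /proc/PID/stat is parenthesized and may contain spaces, hence the sed):

    pid=$1
    # State is the first field after "(comm)" - "T" means stopped.
    state=$(sed 's/^.*) //' "/proc/$pid/stat" | cut -d' ' -f1)
    if [ "$state" = "T" ]; then
        kill -CONT "$pid"
    else
        kill -STOP "$pid"
    fi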


Handy, although that's not how I was expecting it to work - SIGSTOP is all well and good, but what if your window is coming from a remote system? Or does nobody use ssh X forwarding any more?

What I was expecting was a means of suspending the X event loop, so the application remains "running" but idle.


> Or does nobody use ssh X forwarding any more?

For a few years I worked with a bit of a peculiar setup: I had a laptop with Windows and Xming (an X implementation for Windows) and a headless desktop with FreeBSD under my desk. I didn't actually use SSH X forwarding - if I remember correctly, I configured it such that X apps running on the desktop would display on my laptop, communicating over the LAN with the raw X protocol. So I would SSH under my desk and run gVIM, RXVT and a few other tools which displayed as windows on Windows. It was cool in more ways than one, but - even in that setup - the latency, or lag, between a keystroke and its effect was visible and occasionally irritating. It was still much better than doing the same over SSH - which was the reason why I opted for the insecure open-to-the-world X setup in the first place - and that was over a wire less than 2 meters long.

Needless to say, the lag when working on remote servers over SSH was much greater than that. It was kind of OK for displaying things which (optionally) updated on their own once a second or so, but I would be very unhappy, to the point of quitting after a week, if I had to actually work over such a connection. Well, maybe I'm exaggerating and I could get used to it - still, the experience, when compared to running things locally, or remotely but in a terminal (i.e. VIM instead of gVIM), can be summarized with one word: LAG. Lots and lots of it :)


Not many people use X forwarding, and X isn't really network transparent anyway. The X wire protocol is so chatty that modern clients do anything they can to work around it. See this Daniel Stone talk: https://www.youtube.com/watch?v=RIctzAQOe44


> The X wire protocol is so chatty that modern clients do anything they can to work around it.

So there are two things here. Part one is that modern clients render everything on the client, instead of the original design of X where clients more or less sent geometric primitives down the wire; this makes for much nicer-looking and more functional windows, but it is also a lot more data. Classic X has a single stream, so you get everything great about that; that's part of why everything works better locally - the big chunks of data skip the socket, and there's much rejoicing.

The other thing is not a fault of the wire protocol, but of programming (much of it in xlib) - at around 23:14 in the video, gedit startup is described as 130 blocking InternAtom calls, 34 blocking GetProperty calls, and 116 property change requests, resulting in upwards of 25 ms spent waiting for the X server. But there's no reason you can't send all the atoms you want in one go; maybe some are dependent on earlier responses, but even then, it should be a couple of round trips, not 130 for atoms. Once you have enough atoms, you can start the GetProperty calls; there are likely to be more dependencies there, but still.

These calls that block but shouldn't, combined with the realities of shipping large data over the same pipe as commands, are what has made remote X11 work worse over time, to the point where it's fairly unworkable.

There are a couple of X11 protocol issues that make window management really hard (some things should really have been made synchronous), but there has been no ability to change the protocol, which is clearly a problem.

The reason people are still really attached to networked X11, even though it's often crappy and has gotten worse over time, is that when it's useful, it's really useful.

Honestly, all the discussions of Wayland seem to be "X is terrible, we're doing something else, but not everything is ready; oh, and there's a lot of stuff in flux still." Maybe that's an unfair summary, and I feel better about Wayland after watching the talk; but it would be nice if there were a way to use it to fit my workflows. And I'll always wonder whether making xlib not suck, and being willing to change the X11 protocol over time, wouldn't have gotten us to a better place faster.


> Part one is modern clients render everything on the client, instead of the original design of X where clients more or less sent geometric things down the wire

Ever since we've moved to the paradigm where window rendering and compositing happens on the GPU, I've been wondering whether the X11 protocol would make a comeback. After all, from the CPU's perspective, we're essentially doing what X11 does again: sending descriptions of geometry and texmaps to "somewhere else", where they're put together, and then referring to the elements we've sent by handles we got back from the "somewhere else." That somewhere else is the GPU.

In the case of combining GPU rendering with X11, we're doing a weird sort of dance (direct rendering) where—just in this one case—the X11 server allocates a GPU handle, and then we get an X11 handle wrapping the GPU handle, from which we can extract the GPU handle and reference it in GPU commands, with X11 being able to use the result as, essentially, an X11 texmap in the compositing process. (Wayland is the same, just with more efficient compositing and no other primitives than the DRI surface.)

But what if we went the other way: gave the X11 protocol primitives for all of the primitives that GPU drivers expose, such that X11 is essentially acting as a proxy for the GPU? So, instead of speaking to the GPU and getting GPU handles back, we'd be speaking to X11 and getting X11 handles back, which - on the X11 server side - would be toll-free-bridged to GPU handles?

You could take any code that makes OpenGL calls, and just switch out the DRI OpenGL driver for an X11Client OpenGL driver, and it would all suddenly be network-transparent (with not even that much overhead for the loopback case, because these are high-level commands streaming over the socket, not pixbufs.) An X11 "display server" would be made into, in effect, a network-transparent "virtual GPU." (Which, of course, you could also implement yourself in software—xvfb would handle X11GL messages just fine, doing internal Mesa software rendering in response.)


I'm rather uninformed about the nuts and bolts of rendering: my dabbling in X includes getting things to more or less work as a user of an X desktop, with varying degrees of success, since about 1997, but also a bit of wire protocol tweaking with ex11, an Erlang X11 client protocol library - just enough so I could output images from an Erlang program at 3 frames per second without the background color flashing I was getting with the wxWindow bindings. I debugged some crashes I was having, and added interfaces to set window titles and window dimensions via the ICCCM standards.

But, with that meager background, it sounds like you may be describing GLX? Originally, that supported OpenGL over the network, although later versions stopped being supported, because it was hard and/or nobody cared enough.

Although, the spec says that GLX 1.4 only supports up to OpenGL 1.3; and my understanding is the newer versions of OpenGL are a lot easier to work with / better specified. And/or Vulkan is supposed to be cool? :)


There's a machine I need to SSH into, but I don't know if I can do X over SSH to it, because it isn't publicly available.

I SSH into another machine and then SSH from there to that machine. I don't even know how to search for this question. The machine in the middle doesn't have a display. X over SSH is difficult.


It can work if X forwarding is enabled on both hops. However, X11 forwarding may be disabled in the sshd_config on the remote system.

It doesn't require a physical display. If it's working, sshd will set the $DISPLAY environment variable inside the ssh session.
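
A quick way to check from the first hop (host names hypothetical):

    ssh -X middlehost 'echo $DISPLAY'      # prints e.g. localhost:10.0 if forwarding works
    ssh -X -t middlehost ssh -X innerhost  # chain a second forwarded hop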


Another thing required for X11 forwarding is xauth installed on the server side (that means even the middle host(s)); nothing else X11-related is required.


Try setting up a $HOME/.ssh/config entry for the inner machine and specifying a ProxyJump directive referencing the outer machine? It should end up relaying a direct ssh connection across the outer ssh session's TCP port forwarding mechanism (I think).
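
A minimal sketch of such an entry, with hypothetical host names (ProxyJump needs OpenSSH 7.3 or newer):

    Host outer
        HostName outer.example.com
    Host inner
        HostName inner.internal
        ProxyJump outer
        ForwardX11 yes

Then a plain "ssh inner" tunnels through outer and forwards X11 end to end.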


> Can you imagine opening xterm(1) terminal and searching for all Chromium or Firefox processes and then freezing them one by one every time you need it? Me neither.

I can, "pkill -STOP firefox"?

pkill takes regex too.

Y'all have heard of ctrl+z too I hope.


You are right about pkill.

As for CTRL-Z and other tricks like it, I once wrote about them :)

https://vermaden.wordpress.com/2018/07/08/ghost-in-the-shell...


Your style is really easy to read, but I thought you meant there was something wrong with ctrl+z: I thought I was going to get schooled!

Nice blog posts, well done.


> I can, "pkill -STOP firefox"?

That doesn't stop Firefox's separate rendering processes, though. They run with the process name Web Content.
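
A sketch that also catches the direct children (pkill -P matches on parent PID; deeper process trees would need recursion):

    for pid in $(pgrep -x firefox); do
        pkill -STOP -P "$pid"    # stop the children first (the Web Content renderers)
        kill -STOP "$pid"        # then the main process
    done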


The only times I have ever wanted to freeze an application, it was to stop it hogging CPU time in favor of some interactive program, and I had no better way of doing that. On Linux, Con Kolivas's MuQSS CPU scheduler introduces a new scheduling policy called SCHED_IDLEPRIO, which runs a program only if there is absolutely nothing else to run. It is perfect for long-running programs that don't need interactive performance.

EDIT: The example given in this article is also IMO better solved by using SCHED_IDLEPRIO:

> Other example may be some heavy processing. For example you started RawTherapee or Darktable processing of large amount of photos and you are not able to smoothly watch a video. Just pause it, watch the video and unpause it again to finish its work.
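
For mainline kernels, the closest equivalent is SCHED_IDLE via chrt(1) from util-linux; the RawTherapee invocation below is hypothetical:

    chrt --idle 0 rawtherapee-cli -c ~/photos    # start a batch job under SCHED_IDLE
    chrt --idle -p 0 1234                        # or demote an already-running PID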


This is really useful, especially for browsers.

I created a WIP recipe for awesomeWM a while ago, which supports stopping certain clients (browsers, Thunderbird, Slack) after a given timeout of being unfocused, and also when minimizing them.

https://github.com/awesomeWM/awesome-www/pull/111/files


The cgroup freezer is much better (reliable).

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


In what way is it more reliable?

It works only on Linux.

My approach works on Linux/FreeBSD/OpenBSD/Illumos/Solaris/UNIX/...


More reliable, not more available. It's because programs in userspace can mess with signal handling, while they can't interfere with cgroup freezing.


Sounds interesting, and I've created a note to look into this for my recipe that I use with awesomeWM (https://github.com/awesomeWM/awesome-www/pull/111/files).

However, at first glance it is much more difficult to set up (cgroup permissions, creating task groups, etc.). I wonder if there is already a wrapper that behaves more like "kill -STOP"?
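
For reference, a rough sketch of such a wrapper over the cgroup v1 freezer (assumes the freezer controller is mounted at the usual path and root privileges; cgroup v2 uses a cgroup.freeze file instead):

    #!/bin/sh
    # Hypothetical usage: cgfreeze freeze PID | cgfreeze thaw
    CG=/sys/fs/cgroup/freezer/paused
    mkdir -p "$CG"
    case "$1" in
        freeze) echo "$2" > "$CG/tasks"            # move the process into the cgroup
                echo FROZEN > "$CG/freezer.state" ;;
        thaw)   echo THAWED > "$CG/freezer.state" ;;
    esac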


It's interesting to see that stopping a process causes visual artifacts when a window on top of it is moved. This is really odd, since it seems like applications shouldn't be able to affect each other in that way; that is, isn't this isolated so that a frozen window shows a snapshot of how it looked when it was sent the signal, and nothing else?


That depends on the window manager. The application state is not really affected; it just stops updating (redrawing its area). When another window moves away, the window manager asks the "underlying" application to repaint that area of the screen. It's stopped, so the screen keeps showing the last thing that was there, until something else happens in that spot.

On the other hand, compositing window managers will dedicate a separate buffer to each application, where they have exclusive access. That kind of a window manager would not have to ask the application to update anything - it would just take the image from the dedicated application's buffer and update the screen with it. Since the application's buffer can't be modified by anything else, it would have the last state of the application in it. That would in turn find its way to the screen. No glitches.


It also partially depends on how the client program has configured its windows, and how the X server handles related details.


Your sleep command could also iconify/minimize the app's windows.


Good idea.

Just added -A and -S options that also minimize a window.

  -a  -  Do pause/resume active window.
  -A  -  Do pause/resume active window and minimize it.
  -s  -  Do pause/resume interactively selected window.
  -S  -  Do pause/resume interactively selected window and minimize it.
https://github.com/vermaden/scripts/commit/03591a138b14ceded...


This is great! I've just added support for this to my Phoenix [1] setup.

[1] https://github.com/fabiospampinato/phoenix


I'm nitpicking, but is there a reason why you use numeric signal numbers on Linux? On my machine "kill -SIGSTOP" works as expected, and I don't remember a time when you couldn't use signal names like that.


Neither "kill -SIGSTOP" nor "kill -17" is portable. (In fact 17 is SIGCHLD on x86 Linux…)

You should use "kill -STOP".
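
You can see the platform-specific mapping with kill -l:

    kill -l 17    # CHLD on x86 Linux, STOP on FreeBSD
    kill -l 19    # STOP on x86 Linux, CONT on FreeBSD

Which presumably also explains the blog's "kill -17" and "kill -19": those are FreeBSD's STOP and CONT numbers.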


Fixed, thank you for pointing that out.


The blog post still uses "kill -17" and "kill -19".


This is a seriously cool idea.

But could you not get the same effect with nice?


No, you cannot. A given task running at maximum "niceness" (i.e., the lowest "priority" in regard to its peers) will happily consume 100% CPU if it has enough work to do - at least in the total absence of other consumers of that resource.

SIGSTOP effectively tells the scheduler to just not allocate any CPU time to the task any more, which makes the task unable to perform any work (and also not consume any additional resources, of course).
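
A quick way to see the difference (top's -p flag assumes procps):

    nice -n 19 sh -c 'while :; do :; done' &
    PID=$!
    top -p "$PID"       # still ~100% of a core on an otherwise idle machine
    kill -STOP "$PID"   # removed from the run queue: 0% CPU until SIGCONT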


Awesome, for those of you wondering just like me - it's SIGCONT.


SIGSTOP is pulling the car over. Nice is like pulling over if there's other traffic on the road, but going full speed ahead if it's empty.


Thanks.

The `nice` (or also `idprio` on FreeBSD) approach will only lower the priority in the queue for CPU time, which means that if nothing else needs the CPU, the process will still get as much CPU time as it wants, thus still eating battery.


One thing about the title: it's not `xdotool` that does the pause/unpause; `xdotool` is used only to get the PID of the active application with the -a option.

Same with the -s option, where you select the target window with the `xprop` command.

The pause/unpause is done by the `kill` command, of course.

This title was not set by me.


Apropos of nothing, you can use xdotool for rudimentary GUI macros, like very simple game bots and such.

Your post made me think perhaps xkill had an option to send signals other than abort, but it doesn't appear to without modification.


I believe xkill doesn't send any signals at all. Rather, it tells the X server to destroy a window and close its connection with the program that opened the window. If programs die, it's only because they weren't expecting their connection to suddenly be closed, as most wouldn't.



