Would be seriously cool if KWin natively supported this so apps would truly freeze. Would fit in perfectly with Plasma's Activity system.
It's very handy and it's the main thing I miss from KDE, b/c you can have more stuff "running" on a wimpy computer. I'd usually have a whole activity for some project I only touched once a week, and it'd just sit suspended in the background till I needed it again.
killall -STOP "Safari"
killall -CONT "Safari"
Shouldn't be too hard to hook this up to a keyboard shortcut that grabs the active window and sends stop/continue signals to its process.
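A minimal sketch of the toggle half of that, with the window-to-PID lookup left out (that part would come from xdotool, Hammerspoon, or whatever your platform offers; here the PID is just taken as an argument):

```shell
#!/bin/sh
# Toggle SIGSTOP/SIGCONT on a PID. The window-to-PID lookup is assumed
# to happen elsewhere; this takes the PID as $1.
toggle_pause() {
  pid=$1
  # Single-letter state from ps: T means stopped.
  state=$(ps -o state= -p "$pid" | tr -d ' ')
  if [ "$state" = "T" ]; then
    kill -CONT "$pid"
  else
    kill -STOP "$pid"
  fi
}

# Demo on a throwaway process:
sleep 60 &
demo=$!
toggle_pause "$demo"        # now stopped
sleep 1
ps -o state= -p "$demo"     # T
toggle_pause "$demo"        # running again
kill "$demo"
```

Bind that to a hotkey and you have the poor man's app freezer.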
I'm thinking we'd like to use something like Hammerspoon or AppleScript to fetch the PID of the currently focused window.
There are a couple of functional differences I can see:
- I directly read /proc to grab the current process status, which probably isn't very portable...
- vermaden's will also stop the children of the process.
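For reference, the /proc read in question can be as small as this (Linux-only, as noted; a sketch, not the author's actual code):

```shell
#!/bin/sh
# Linux-only: pull the one-letter process state out of /proc/PID/status
# (T = stopped). /proc/PID/stat has the same info but is annoying to
# parse when the process name contains spaces (e.g. "Web Content").
proc_state() {
  awk '/^State:/ {print $2}' "/proc/$1/status"
}

sleep 60 &
pid=$!
kill -STOP "$pid"
sleep 1
proc_state "$pid"   # T
kill -CONT "$pid"
kill "$pid"
```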
Unfortunately I don't really use this any more, since I use Wayland these days.
What I was expecting was a means of suspending the X event loop, so the application remains "running" but idle.
For a few years I worked with a bit of a peculiar setup: I had a laptop with Windows and Xming (an X implementation for Windows) and a headless desktop with FreeBSD under my desk. I didn't actually use SSH X forwarding - if I remember correctly, I configured it so that X apps running on the desktop would display on my laptop, communicating over the LAN with raw X protocol. So I would SSH under my desk and run gVIM, RXVT and a few other tools, which displayed as windows on Windows. It was cool in more ways than one, but - even in that setup - the latency, or lag, between a keystroke and its effect was visible and occasionally irritating. It was still much better than doing the same over SSH - which was the reason I opted for the insecure, open-to-the-world X setup in the first place - and that was over a wire less than 2 meters long.
Needless to say, the lag when working on remote servers over SSH was much greater than that. It was kind of OK for displaying things which (optionally) updated on their own once a second or so, but I would be very unhappy, to the point of quitting after a week, if I had to actually work over such a connection. Well, maybe I'm exaggerating and I could get used to it - still, the experience, compared to running things locally, or remotely but in a terminal (i.e. VIM instead of gVIM), can be summarized with one word: LAG. Lots and lots of it :)
So there are two things here. Part one is that modern clients render everything on the client, instead of the original design of X where clients more or less sent geometric primitives down the wire; this makes much nicer looking and more functional windows, but it is also a lot more data. Classic X has a single ordered stream, so you get everything great about that; that's part of why everything works better locally -- the big chunks of data can skip the socket (e.g. via shared memory), and there's much rejoicing.
The other thing is not a fault of the wire protocol, but of programming (much of it in xlib) -- at around 23:14 in the video, gedit startup is described as 130 blocking InternAtom calls, 34 blocking GetProperty calls, and 116 property change requests, resulting in upwards of 25 ms spent waiting on the X server. But --- there's no reason you can't send all the atom requests you want in one go; maybe some are dependent on earlier responses, but even then, it should be a couple of round trips, not 130 for atoms. Once you have enough atoms, you can start the GetProperty calls; there are likely to be more dependencies there, but still.
These things that block, but shouldn't, combined with the realities of shipping large data over the same pipe as commands is what has made remote X11 work worse over time, to the point where it's fairly unworkable.
There are a couple of X11 protocol issues that make window management really hard (some things should really have been made synchronous), but there has been no ability to change the protocol, which is clearly a problem.
The reason that people are still really attached to networked X11 is that, even though it's often crappy, and it's gotten worse over time, when it's useful, it's really useful.
Honestly, all the discussions of Wayland seem to be "X is terrible, we're doing something else, but not everything is ready; oh, and there's a lot of stuff in flux still." Maybe that's an unfair summary, and I feel better about Wayland after watching the talk; but it would be nice if there were a way to use it to fit my workflows. And I'll always wonder if making xlib not suck, and being willing to change the X11 protocol over time, wouldn't have gotten us to a better place faster.
Ever since we've moved to the paradigm where window rendering and compositing happens on the GPU, I've been wondering whether the X11 protocol would make a comeback. After all, from the CPU's perspective, we're essentially doing what X11 does again: sending descriptions of geometry and texmaps to "somewhere else", where they're put together, and then referring to the elements we've sent by handles we got back from the "somewhere else." That somewhere else is the GPU.
In the case of combining GPU rendering with X11, we're doing a weird sort of dance (direct rendering) where—just in this one case—the X11 server allocates a GPU handle, and then we get an X11 handle wrapping the GPU handle, from which we can extract the GPU handle and reference it in GPU commands, with X11 being able to use the result as, essentially, an X11 texmap in the compositing process. (Wayland is the same, just with more efficient compositing and no other primitives than the DRI surface.)
But what if we went the other way: gave the X11 protocol primitives for all of the primitives that GPU drivers expose, such that X11 is essentially acting as a proxy for the GPU? So, instead of speaking to the GPU and getting GPU handles back, we'd be speaking to X11 and getting X11 handles back, which—on the X11 server side—would be toll-free-bridged to GPU handles?
You could take any code that makes OpenGL calls, and just switch out the DRI OpenGL driver for an X11Client OpenGL driver, and it would all suddenly be network-transparent (with not even that much overhead for the loopback case, because these are high-level commands streaming over the socket, not pixbufs.) An X11 "display server" would be made into, in effect, a network-transparent "virtual GPU." (Which, of course, you could also implement yourself in software—xvfb would handle X11GL messages just fine, doing internal Mesa software rendering in response.)
But, with that meager background, it sounds like you may be describing GLX? Originally, that supported OpenGL over the network, although in later versions that stopped working, because it was hard / nobody cared enough.
Although, the spec says that GLX 1.4 only supports up to OpenGL 1.3; and my understanding is the newer versions of OpenGL are a lot easier to work with / better specified. And/or Vulkan is supposed to be cool? :)
I ssh into one machine and then ssh from there to another machine. I don't even know how to search for this question. The machine in the middle doesn't have a display. X over ssh is difficult.
It doesn't require a physical display. If it's working then it should set the $DISPLAY environment variable inside the ssh session.
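For the two-hop case, something like this should work (run interactively; `middle` and `target` are placeholder hostnames):

```shell
# Hop 1: laptop -> middle. -X enables X11 forwarding; the middle box
# needs no physical display, just sshd's X forwarding plus xauth, which
# give you a proxy $DISPLAY there (e.g. localhost:10.0).
ssh -X middle

# Hop 2 (typed on "middle"): forwards again, chaining the proxies.
ssh -X target

# On "target": $DISPLAY should now be set; X apps pop up on the laptop.
echo $DISPLAY
```

Newer OpenSSH can also do it in one command with `ssh -X -J middle target`; with `-J` the middle machine is only a TCP relay and doesn't need X forwarding or xauth at all.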
I can just "pkill -STOP firefox", no?
pkill takes regex too.
Y'all have heard of ctrl+z too I hope.
I once wrote about CTRL-Z and other tricks like it :)
Nice blog posts, well done.
That doesn't stop Firefox's separate rendering processes, though. They run with the process name Web Content.
EDIT: The example given in this article is also IMO better solved by using SCHED_IDLEPRIO
>Other example may be some heavy processing. For example you started RawTherapee or Darktable processing of large amount of photos and you are not able to smoothly watch a video. Just pause it, watch the video and unpause it again to finish its work.
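For what it's worth, SCHED_IDLEPRIO is schedtool's name for Linux's SCHED_IDLE class; `chrt` from util-linux can do the same, and lowering your own priority needs no special permissions. A sketch (the batch command is a placeholder):

```shell
# Run a batch job in the idle class: it only gets CPU time that nothing
# else wants, so a video stays smooth without pausing anything:
#   chrt -i 0 <your-batch-command>
# Quick check that the class actually applies (chrt is from util-linux):
chrt -i 0 sh -c 'chrt -p $$'   # reports SCHED_IDLE, priority 0
```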
I created a WIP recipe for awesomeWM a while ago which supports stopping particular clients (browsers, Thunderbird, Slack) after a given timeout of being unfocused, and also when minimizing them.
It works only on Linux.
My approach works on Linux/FreeBSD/OpenBSD/Illumos/Solaris/UNIX/...
However, at first glance it is much more difficult to set up (cgroup permissions, creating task groups, etc.). I wonder if there is already a wrapper that behaves more like "kill -STOP"?
On the other hand, compositing window managers dedicate a separate buffer to each application, to which that application has exclusive access. That kind of window manager would not have to ask the application to update anything - it would just take the image from the application's dedicated buffer and update the screen with it. Since the application's buffer can't be modified by anything else, it holds the last state of the application, which in turn finds its way to the screen. No glitches.
Just added -A and -S options that also minimize a window.
-a - Pause/resume the active window.
-A - Pause/resume the active window and minimize it.
-s - Pause/resume an interactively selected window.
-S - Pause/resume an interactively selected window and minimize it.
You should use "kill -STOP".
But could you not get the same effect with nice?
SIGSTOP effectively tells the scheduler to just not allocate any CPU time to the task any more, which makes the task unable to perform any work (and also not consume any additional resources, of course).
`nice` (or `idprio` on FreeBSD) only lowers the priority in the queue for CPU time, which means that if nothing else needs the CPU, the process will still get as much CPU time as it wants, thus still eating battery.
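A rough way to see the difference on an otherwise idle Linux box (a sketch; field 14 of /proc/PID/stat is the process's user-mode CPU time in clock ticks):

```shell
#!/bin/sh
# A niced busy-loop still burns CPU when nothing competes with it...
nice -n 19 sh -c 'while :; do :; done' &
pid=$!
sleep 2
ps -o time= -p "$pid"   # nonzero CPU time despite nice 19

# ...while a stopped one burns none at all:
kill -STOP "$pid"
sleep 1
t1=$(awk '{print $14}' "/proc/$pid/stat")   # utime in clock ticks
sleep 2
t2=$(awk '{print $14}' "/proc/$pid/stat")
[ "$t1" = "$t2" ] && echo "no CPU consumed while stopped"
kill -CONT "$pid"
kill "$pid"
```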
Same with the -s option, where you select the target window with the `xprop` command.
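i.e. roughly this, assuming the window's toolkit sets _NET_WM_PID (most do, not all); it needs a running X session, so it's only an illustration:

```shell
# Click a window to select it; xprop prints its properties.
pid=$(xprop _NET_WM_PID | awk '{print $NF}')
kill -STOP "$pid"   # pause
# ...later:
kill -CONT "$pid"   # resume
```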
The pause/unpause is done by the `kill` command of course.
This title was not set by me.
Your post made me think perhaps xkill had an option to send signals other than abort, but it doesn't appear to, without modification.