Desktop Environments Resource Usage Comparison (vermaden.wordpress.com)
86 points by vermaden on July 11, 2022 | 84 comments



I shouldn’t be surprised, but desktop environments using >1GB of RAM as standard strikes me as insane.

I get that they’re doing a lot but my first custom built PC had 1G of memory for the entire system (and that was absurdly large by those days’ standards; somewhere around 2005).

I don’t see terribly large improvements in functionality between 2005’s gnome2 and what exists now in mate or XFCE, bearing in mind that I was playing with beryl/compiz at the time too.

Does anyone know what’s contributing?


I feel similarly about Windows. Way, way more resource hungry. Tens to hundreds of times, depending on the metric. Barely does more than Win95, as far as user-facing features. Shit I bet most of the important features would have fit in 5-10MB of "power tools" addons with barely-measurable effects on memory and cycle use. Win95 ran very comfortably on 64MB (with an M) of memory and a 4GB disk. You can't even install the OS on a disk that small anymore, let alone have room left for anything else.

As for Linux, I remember using Windowmaker for a long time, and later XFCE, because Gnome and especially KDE (funny how those have reversed position performance-wise now, eh?) would make my poor Celeron laptop cry. But they did run, and didn't even come close to maxing out my 384MB of RAM in that machine.


>I feel similarly about Windows. Way, way more resource hungry. Tens to hundreds of times, depending on the metric.

You're incorrect here, as that's not an apples-to-apples comparison. If you've got the RAM, Windows (and macOS too, I bet) does a lot of caching on boot of frequently used apps and files to RAM, as any sane desktop-focused OS should do to give you a good experience, and it frees it back up as it starts to be needed by other apps. If you've got free RAM, why shouldn't the OS put it to good use to improve the UX?

I never understood the obsession many enthusiasts have with speccing PCs with huge amounts of RAM and then smugly competing for the least amount of idle RAM used by their system. That's not the metric of a good desktop OS. What do you gain by staring at large amounts of RAM that you paid for sitting idle and unused by anything? RAM is there to be used, and if free RAM is available and the OS is smart about using it and freeing it on the fly to boost the UX, then please, by all means, go ahead and use as much as you want if that will improve the UX.

My parents run Windows 10 on my 11-year-old 4GB laptop with a spinning-rust drive, using Chrome as a browser, and it works without ever crashing or needing to kill apps, so IMO Windows is stellar at managing memory/resources even when they're in very short supply. Meanwhile, on the same laptop, Ubuntu's Gnome would just completely lock up after a long Chrome browsing session, or its OOM killer would just straight up kill Chrome, both cases yielding a much worse UX than on Windows, making Windows the far superior desktop OS in this case.

IMHO a better and more realistic metric would be idle CPU usage and how the OS deals with low-memory scenarios, like the one I described above, not how little idle RAM it uses on PCs with large amounts of it, as that's just pointless.


I think that's missing the point of the GP, because our Windows 9x machines with 64 MB of RAM also had plenty of room for cache. Of course, there have been real advances in GUIs since Windows 9x, and the 9x kernel was hardly state of the art even for its time. But I don't know if a minimum of 2 GB of RAM, and probably more like 4 GB for a usable system, is really necessary.


You can't do an apples-to-apples comparison of Windows 95's RAM usage logic with the one implemented in Windows 10/11. Of course old OSs used less.


>Of course old OSs used less.

"Why" is the question here. New cars don't use more petrol. And new bikes are not heavier then old ones.


But then they turn around and burn 100x (or more) the cycles-at-idle of 90s operating systems and constantly hit disk for no clear reason despite all this supposed performance-enhancing caching (try using Vista or newer on spinning rust—it's very clear that they're constantly doing disk I/O for normal shit like opening the start menu or just mousing over some UI elements, so much for "smart" caching of frequently-used things at boot).

I absolutely do not think intelligent caching is behind enough of the increased memory use to absolve them of all the waste. I don't believe Microsoft are being more respectful of memory than they are of other system resources. It also fails to explain the huge increase in memory footprint of Linux desktop environments and window managers. I don't think XFCE is caching your most-used "apps" in memory at launch.


Free RAM is used as disk cache and sacrificed to web 2.0 jabbscript SPA blogs that try to improve your browsing experience by consuming more RAM than you have.


OS RAM usage and website browser RAM usage are two different things.


They consume the same RAM. There's no infinite web 2.0 RAM.


My point was about who's using it. You can't blame the OS that bloated websites and browsers need too much RAM.


I’ve got 128GB of RAM and I would _love_ my OS to use it. But unless I’m explicitly blocking out memory by running a VM or loading a large dataset, I’ve never seen it go above 20GB, and even that’s after turning off paging and tweaking Windows to cache stuff more aggressively.


Try ZFS.


That’s a common argument, but in Linux these are reported differently.

I believe it is the same on Windows and macOS too.

So “used RAM” from the OS perspective cannot be serving as filesystem cache.

I doubt they’re increasing performance either, as the overall performance of such a system is so low.

It’s also hard to argue increased performance when your app could have instead fit in L3 cache.


What hardware giveth, the software developer taketh away. For a variety of reasons, today's devs are terrible at writing software that is even remotely efficient.


    # run the long slow test a million times; if it's the last one, run it again just to be sure
    for i in {0..1000000}; do long_slow_test; echo "not last" > /dev/null; if last_thing; then long_slow_test; echo "last"; fi; done


> Win95 ran very comfortably on 64MB (with an M) of memory and a 4GB disk.

I ran it on 8MB RAM / an 850MB disk. I did upgrade to 16MB RAM and the performance boost was palpable, but, again, I used it at 8MB for a while. Also, a fresh install took up a whopping 50MB!


Right, hence the "very comfortably". That wasn't min specs, that was a very happy Win95 system.

> Also, a fresh install took up a whopping 50MB

I do wonder WTF modern Windows is doing with tens of GB of disk space, not even counting swap and such. It seems like it jumped up dramatically after WinXP; even the otherwise-kinda-decent Win7 used crazy amounts of disk. "They started including a bunch of drivers"? Bullshit: Linux has always included loads of drivers in most default installations, and it's so much smaller that just calling it "smaller" doesn't do the difference justice. Plus, a lot of those drivers could surely be made a download or an optional default-off package (extremely niche or very old hardware).


> my first custom built PC had 1G of memory for the entire system

Windows 95 ran on 4 MB, and very well on 8 MB. I believe I ran Linux with fvwm2 on 8 MB as well.

Of course 64-bit and higher resolution increase RAM requirements to some degree, but I’m still not sure what Openbox needs 600 MB for. ;)


> A Spellchecker Used to Be a Major Feat of Software Engineering

> Fast forward to today. A program to load /usr/share/dict/words into a hash table is 3-5 lines of Perl or Python, depending on how terse you mind being. Looking up a word in this hash table dictionary is a trivial expression, one built into the language. And that's it. Sure, you could come up with some ways to decrease the load time or reduce the memory footprint, but that's icing and likely won't be needed.

https://prog21.dadgum.com/29.html

5MB here. 5MB there. No one notices ;)


I wonder how much of that comes from careless programming (accidental complexity), and how much is essential complexity in keeping with user expectations for polish & responsiveness (driven by Apple, etc).

Another possible source of the problem might be the now-ubiquitous style of reaching for high-level languages and importing libraries on a whim. If the language does not strip out unused code paths in dependencies, sizes can quickly add up, with the same libraries being re-imported multiple times.


When few resources are available, one has to be more careful (about not accidentally creating O(N^2) where O(N) would work, and not wasting RAM on multiple copies of the same data), and at the same time one has to refuse to add non-essential features with a bad hardware-cost-to-user-value ratio.

In corporate software it is often easy to find low-hanging fruit that reduces CPU/RAM usage by 10x (though the incentives are stacked against doing this). In open-source software like KDE/Gnome I think it would be hard to find blatantly inefficient code; the main reason is the overall complexity, which is hard to reduce without sacrificing some features.


In 2005 my resolution was 1024x768 on a 60(?)Hz screen. That's about 2MB per frame. Right now I have 2x 2560x1440 HDR monitors with 144Hz refresh rates hooked up to my machine; with double buffering that's roughly 100MB just for screen buffers. I believe it was Windows Vista (apologies, I'm not confident in my Linux versions) that introduced the DWM compositor, meaning that all of a sudden you need all of the frames of all of the windows in memory, times 2.
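For a rough sense of scale, here is the back-of-the-envelope math for that setup (the bytes-per-pixel figures are assumptions: 4 for plain 32-bit SDR, 8 for an FP16 HDR format):

    # 2 monitors at 2560x1440, double buffered
    echo $(( 2 * 2560*1440 * 4 * 2 / 1048576 )) MiB   # 4 bytes/pixel -> 56 MiB
    echo $(( 2 * 2560*1440 * 8 * 2 / 1048576 )) MiB   # 8 bytes/pixel -> 112 MiB

So the ~100MB figure lines up once HDR pixel formats are involved.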


When you feel your desktop is too heavy, bury your woes by firing up a web browser.


I started using Ubuntu 8.10 with Gnome 2 and Compiz in January 2009. I stopped for a while to think about what I'm getting with Gnome 3 on Ubuntu 20.04 now that I didn't have in 2009. The only important feature I came up with is gsconnect/kdeconnect, which maybe would have been impossible with Gnome 2, but maybe not. Everything else feels like nice-to-have small aesthetic improvements which I could do without, if I think of all the time I had to spend configuring away all the new stuff I didn't like. I used Gnome Flashback until the upgrade to 18.04, when I was sure I could put together a dozen Gnome Shell extensions to get a desktop I could live with. Switching back to Windows or to a Mac with the OS X GUI would have been worse for me. Tastes.

Then there is the inevitable maintenance to keep the desktop environment compatible with the world that is changing around it. Kernels, drivers, etc. "It takes all the running you can do, to keep in the same place."


Isn't summing RSS going to end up with the wrong numbers?

My understanding is that RSS is memory allocated (heap + stack) + executable pages inclusive of shared libraries.

i.e., if 10 processes load libfoobar, which is 10MB of code, each process's RSS value will be 10MB higher. So if you sum RSS you see 10 x 10MB, or 100MB of extra usage. But the pages for that library will be shared across processes, with the result that "real RAM usage" only goes up by 10MB.

Please correct me if I'm wrong. This is based on my understanding of Linux too. FreeBSD might be different.


Just a quick experiment:

I started 4 copies of VLC. `top` shows this:

     PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 1806287 adam      20   0 1091068  74664  56228 S   0.0   0.2   0:00.22 vlc
 1806426 adam      20   0 1091064  74548  56128 S   0.0   0.2   0:00.21 vlc
 1806537 adam      20   0 1091064  74572  56144 S   0.0   0.2   0:00.22 vlc
 1806600 adam      20   0 1091064  74960  56536 S   0.0   0.2   0:00.23 vlc
Summing RES (RSS) gives 298744.
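For reference, the same sum can be pulled straight from ps (a quick sketch assuming procps ps; RSS is reported in kB):

    ps -C vlc -o rss= | awk '{ sum += $1 } END { print sum " kB" }'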

VLC loads many shared libraries:

    $ lsof -p 1806426 | grep .so | wc -l
    195
I'd hope some/most of those are shared between the instances of VLC and are included in SHR (if I'm reading the top man page correctly).

Doing RES-SHR isn't totally correct either. Those shared pages might merely be "marked as shareable" (e.g., a .so that's only loaded by one process). But let's assume that most of what VLC counts as SHR is shared libraries, and that they're used by each instance.

RES-SHR = 73708

I also ran `free` before and after launching the 4 instances of VLC. Used increased by 66068. Hard to rely on the accuracy of this number since this is a desktop system and I'd imagine used goes up and down all the time. But it's strikingly close to the RES-SHR figure.

IMHO just summing RES isn't correct.

Linux memory usage is complex.


On Linux, you also have PSS which is like RSS, but the shared part is divided among the processes sharing the same memory.


> … RES (RSS) …

If RES is resident memory <https://www.freebsd.org/cgi/man.cgi?query=top&sektion=1&manp...>, what's RSS?


SHR is not "shared" size, it is "potentially shareable size". Process might have 100 Mb of SHR and none of them actually shared.

You should use PSS which accounts for shared memory pages (by dividing their size across all processes).


You are right. RSS values cannot be added. Imagine there are 3 processes, each using 100 MB of shared memory and 100 MB of private memory. Summing the RSS we get 200 + 200 + 200 = 600 MB; however, the true memory consumption is 100 (shared) + 100 + 100 + 100 = 400 MB.

To count the memory properly, you should divide the size of each shared segment by the number of processes using it. This is how PSS is implemented in Linux. You can safely add the PSS values of different processes. If you have swap, you should also sum the swap usage of the processes.

As I understand it, you can reliably count memory usage only on Linux, because other OSes like Windows or macOS do not provide PSS in their task managers. Windows Task Manager at least has documentation that describes what the "memory" column means; I failed to find such documentation for the Mac, so we can safely assume it displays random numbers remotely related to memory usage.
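If you just want the headline number on Linux, a minimal sketch of that PSS + SwapPss summation (assuming a kernel with /proc/<pid>/smaps_rollup, i.e. 4.14 or newer; run it as root so other users' processes are readable, and close the user-visible applications first):

    cat /proc/[0-9]*/smaps_rollup 2>/dev/null |
      awk '/^(Pss|SwapPss):/ { sum += $2 } END { printf "%.1f MiB\n", sum/1024 }'

The values in smaps_rollup are in kB, hence the division by 1024.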


Recently I was looking at DEs on Debian 11 and measured their relative number of installed packages. My reason was to get a first order approximation of "vulnerability surface" based on the assumption that more packages would probably imply more things to have vulnerabilities. I realize that's a gross oversimplification but I thought it was an interesting metric.

  DE       Added packages
  Plasma:  964
  XFCE:    417
  Openbox: 103

Of course Plasma (and XFCE) give you way more functionality than Openbox, and I found that if you start installing other programs to replace all that (network config GUI, USB storage GUI, file manager, etc.) you quickly approach at least XFCE levels of packages. Still, I found it an interesting exercise. You could start with Openbox and potentially make better choices, choosing more secure programs (for some measure of secure). Or assume that because Plasma has more developer energy behind it, it gets security updates much faster.

It's nice to see the number of packages and RAM usage correlate. That seems intuitive at least.
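For anyone wanting to reproduce a count like this, one possible way on Debian (a sketch only; the task-* metapackage names are assumptions, not necessarily the exact packages counted above) is to let apt simulate the install and count the packages it would pull in:

    apt-get -s install task-kde-desktop  | grep -c '^Inst'
    apt-get -s install task-xfce-desktop | grep -c '^Inst'
    apt-get -s install openbox           | grep -c '^Inst'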


> I realize that's a gross oversimplification

In part, yes, especially since KDE deliberately breaks its core libraries up into multiple sub-libraries (https://develop.kde.org/products/frameworks/) in order to make them available for reuse by other Qt projects, which is not the case for XFCE. KDE also depends on Qt, which is similarly broken up into smaller packages (e.g. https://packages.debian.org/buster/libqt5gui5 is depended on by https://packages.debian.org/buster/libkf5auth5), even though the actual attack surface in terms of exposed functionality is similar.


XFCE does have modularity in many ways. The reality is it has a tenth of the complexity of Plasma.


> My reason was to get a first order approximation of "vulnerability surface" based on the assumption that more packages would probably imply more things to have vulnerabilities.

Hmm, I'd think the opposite - DEs with a large number of dependencies are probably well factored and following good development practices, DEs that are a giant blob of undifferentiated C would seem much more likely to have vulnerabilities.


Just for reference, I am using KDE Neon 5.25 and cold boot mem usage is around 800 megs. I'm not sure how the author got his numbers, but something is way off.


These numbers are bad because they are not really about what the desktop environments use but about what the entire OS uses, since they rely heavily on how the OS is configured.

A much better approach to checking how many resources DEs use is to capture memory use before and after the DE itself starts, ignoring even the X server (as it too can use different amounts of memory depending, e.g., on the drivers, hardware and/or other configuration parameters, all of which are outside the control of a DE).

As an example, by placing `free > free-pre` in my .xinitrc before starting Window Maker and then `free > free-post` at the end of Window Maker's own startup script (which is run after the desktop is launched), I found that my WM setup uses 11MB of RAM (on a 64-bit x86-64 machine; it'd most likely use less on a 32-bit machine). This is, however, also on my setup where I have a bunch of dockapps launched that aren't part of a default Window Maker installation; disabling those puts the resources Window Maker needs down to 8MB of RAM.

These numbers are way more realistic for Window Maker (and basically what I'd expect it to use), since my OS (openSUSE) at startup uses around 3% of my available RAM (32GB) and I'm certain that Window Maker doesn't need 1GB of RAM to work; pretty much all of that is used by other processes, like systemd, postgres, a bunch of managers, etc.
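In concrete terms, the measurement looks roughly like this (a sketch; the file locations and the wmaker autostart hook are just examples):

    # in ~/.xinitrc, right before launching the window manager:
    free -k > /tmp/free-pre
    exec wmaker

    # in Window Maker's own autostart script (runs once the desktop is up):
    free -k > /tmp/free-post

    # afterwards, compare the "used" column of the two snapshots:
    awk 'NR==FNR && /^Mem/ {pre=$3} NR!=FNR && /^Mem/ {printf "%.1f MiB\n", ($3-pre)/1024}' /tmp/free-pre /tmp/free-post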


Ignoring the X server (or Wayland compositor) is wrong, because a desktop environment can allocate memory in the X server's process and that won't be accounted for.
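For the X11 case there is at least a way to see that server-side memory (assuming the xrestop utility is installed): it lists pixmap memory held inside the X server per client, which per-process RSS/PSS numbers will never attribute to the DE.

    xrestop   # per-client pixmap memory held inside the X server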


Most numbers seem too high to me. KDE and Gnome are definitely below 2GB out of the box and Openbox using >600MB also seems pretty high. A lightweight system like Alpine or Void can be at ~150MB of RAM usage with a window manager.

RAM usage comparisons including the whole OS aren't that interesting imho, run a couple programs and you'll never reach those numbers again until the next reboot. RAM isn't intended to be empty.


Same, I did an install with OpenSUSE Tumbleweed and the latest KDE desktop environment with all optional packages and goodies, and after a fresh boot into KDE with Konsole opened, the RAM consumption was from 580MB to under 760MB, similar to XFCE.

I don't know how the author got those huge numbers, but I feel like something is off with his FreeBSD installation/configuration and is skewing the results.

I'd take those results with a huge boulder of salt.


I am pretty sure his numbers are skewed by summing RES, which includes each process's shared memory. So he is basically summing things that shouldn't be summed.


This was my first thought as well. If each KDE process loads a few K libs, those will add up really quickly.


You are checking with htop right?



Exactly what i meant ;)


Nope, I used top, but it's basically the same


Agreed, something is really off here. I used Fedora Plasma until 2 months ago and on boot it was always less than 1GB. Once I started IDEs and things like IntelliJ, I would see RAM usage in the 2 to 3 GB range.


What do you use now?


M1 pro MacBook.

Nice hardware, but I miss Linux everyday.


Same-ish numbers here. I thought KDE doing just as well as (or even slightly better than) XFCE in the RAM department was kind of a well-established fact at this point.


A lot of these types of comparisons done with distribution provided packages aren't really ever comparing what they think they are. KDE has a TON of components which can be optionally compiled/installed(like baloo or the PIM), which are not essential and offer features far beyond what a regular XFCE install will provide(and which most non-corporate users don't want). In addition there are lots of different flags and ways to compile each of these environments(and their related libraries) that can make all of them much leaner if that is the goal.

A fairer comparison would be to look at the most minimal install with similar compilation flags, or look at only what applications it would take to achieve some goal and compile and install only those parts needed to perform it using the desktop environment's applications.


I'm running Plasma on a couple of repurposed Chromebooks: Celeron 2 core @ 1Ghz with 4GB of RAM and the DE is quite snappy. Inkscape, GIMP and the Godot IDE have run great. Light gaming: minecraft on somewhat reduced settings, older windows games on Wine, terraria, and stardew valley all fine.

Anything in a browser? Horrible.


Use uBlock Origin in the browser. Also, Chrome-based browsers have a --light switch.


I feel like it's gotten too hard to judge the size of gigabytes of RAM.

I come from a game development background where traditionally the bulk of the space usage went into media content. It felt easy to justify a game of X megabytes when 90% of that was in graphics. That doesn't seem to be where the space goes for things like this though.

To put this into perspective, the calculation I make is 1920x1080x3 (an uncompressed HD screen) = 6220800 bytes, roughly 6 megabytes. That makes a gigabyte hold 170-ish full screens of uncompressed graphics. I'm OK with something using a gig if it's throwing that much imagery around, but if not, where does all the space go?


So, to start with, it's 1920x1080x4: even if the alpha channel isn't used, we align pixels to 4-byte words for performance (if you want to trigger panic quickly in a crowded room full of Linux graphics developers, say "Cirrus 24-bit shadowfb" out loud and see how quickly you get thrown out of the room).

But not only that -- for compositing, we need to store the full window pixmap contents for each window. That's what's necessary for things like the blur-behind effect, or antialiased window corners.

But you probably also want tiny previews of these windows, so you're going to need mipmapped versions of them stored. The mip chain for a texture is just the successive power-of-two reductions summed together, roughly an extra 33% of memory.

And we haven't even counted things like double- or triple-buffering! In order to not see tears as the GPU draws to a framebuffer, it needs a framebuffer to scan out, and a different framebuffer to draw to. So take all of the above, and multiply it again.

This is not counting all the additional bookkeeping that apps can provide. Shoutouts to SDL for only uploading a 256x256 version of window icons -- even when the game provides a 16x16 variant, SDL will internally upscale it to 256x256 before handing it to the window manager. And you probably want to display it back down at 16x16 or maybe 64x64 for your alt-tab, so that's a full mip chain on a 256x256 texture.

Oh, and window frames! On a reparenting WM under X11, you wrap the window in a slightly larger window, and draw the window frame in the larger one. So if you have a maximized window that's using OpenGL to draw, the app has its own pair of backbuffers, then you have a window frame in its own window pixmap, which the compositor then draws to its 1920x1080 backbuffer.

You probably also want a window title on that window frame, so that means a font glyph texture, but that probably fits in a single 1024x1024 R8 texture without mips.

Anyway, these things quickly add up! I've done memory profiling like this before. There are still many, many gains left on the floor, absolutely, but I've had people plug in 3 1920x1080 monitors and then complain that the window manager was using 20MB of GPU VRAM.
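To make that concrete, a crude back-of-the-envelope along the lines above (assuming 4 bytes/pixel, the ~33% mipmap overhead, and two full-size buffers per window; the window count is made up):

    WIN=$(( 1920*1080*4 ))                                 # one full-screen pixmap, ~8 MiB
    echo $(( WIN * 4 / 3 * 2 / 1048576 )) MiB per window   # ~21 MiB
    echo $(( 10 * WIN * 4 / 3 * 2 / 1048576 )) MiB total   # ~210 MiB for ten such windows, before the compositor's own backbuffers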


You've mentioned a bunch of stuff, but even then the numbers don't add up: double and triple buffering add just another few buffers per screen. Multiple buffers for every window still don't add up to the memory-use levels some of these desktop environments reach. 50 maximized windows with multiple buffers still come in under a gig.

Mipmapping on dynamic content is just daft. If you can generate mipmaps as they change without slowing things down then you can generate them on demand when they are needed.


The usual memory usage I was seeing was in the 200M-300M range -- this was on mutter/gnome-shell. As others mentioned, they can't reproduce the 1GB figure. But there was still a lot of noise about that, so I figured I'd explain where some of it is coming from.

> Mipmapping on dynamic content is just daft. If you can generate mipmaps as they change without slowing things down then you can generate them on demand when they are needed.

We do generate the contents on demand, but mipmapping requires the texture memory be available for the full chain. In theory, you can use sparse textures on OpenGL/Vulkan, but they have several drawbacks that make them unsuitable for compositor work -- changing the mipchain configuration for a single texture can often take 1-2ms, which wasn't fast enough for our performance targets. I did the investigation!


In the context of games: There's a lot of space used for high resolution textures which users normally never see... but they can see if they zoom in, put their nose up to a wall, etc.


Where'd you get the 3 from? I thought it'd be 24 or 32, since that's how many bits you'd need to represent the color.


3 bytes. 24 bit.


"Desktop Environments Resource Usage Comparison" [...] on an OS practically nobody uses for desktop, FreeBSD. I cannot use this comparison, since I don't use FreeBSD for desktop.


Someone actually tried doing that across various Linux desktop distros a while ago: https://www.reddit.com/r/linux/comments/5kdq92/linux_distros...

Also there was another post on Reddit 2 years ago, where Ubuntu was used as the base OS: https://www.reddit.com/r/xfce/comments/kb0d87/i_compared_the...

Curiously, the average values for memory usage were pretty close, though more information about the methodology would be nice.


A very unscientific and anecdotal comparison of KDE Plasma on Kubuntu 22.04 and my install on FreeBSD, at the very moment I am looking at both: the memory usage seems similar overall, but individual processes seem oddly different. For example, kded5 on Kubuntu (KDE version 5.24.4) is using ~57MB of memory; on FreeBSD (KDE version 5.24.5) it uses ~112MB. For kwin_x11, it's ~61MB on Kubuntu, 140MB on FreeBSD. I restarted Xorg on FreeBSD and that's what they booted up to.

It is interesting that it uses a bit more memory on FreeBSD, but there is probably some reason behind it.


Take a look at the end of the article, where I also added freecolor(1) and htop(1) memory usage; it seems to be a lot lower than the RSS field from the top(1) command.


I use Sway and it definitely is a lot faster than Gnome or KDE. KDE is more of an ecosystem though. Installing any package on Arch that has some dependency on KDE wants to pull in a bunch of large dependencies.


I closed everything that was using more RAM than plasmashell on my machine and ended up with about 3.3GB of memory usage (down from 12-ish; mostly Firefox, but Evolution and MS Teams were also pretty RAM hungry).

Notable not-kde things that were using significant ram after doing this:

Various PipeWire processes, adding up to at least 100MB

systemd-journald, also about 100MB

Several things enabled to run gnome applications (dconf-service, xdg-desktop-portal-gnome, goa-daemon) also about 100MB total

One thing I noticed about plasma is how many processes there are; plasmashell is only about 10% of the usage of a plasma system doing "nothing" and it's the single largest process.


Interesting, I'm using KDE Plasma as well, and with only a terminal open, my RAM usage is about 1.25GB, and < 3GB with Firefox opened (I don't open a lot of tabs though).


I must say I was looking for an update on this. It is absolutely fantastic that someone is taking the time to do this. Thank you so much!

Edit: Ok, no, those numbers are totally off.


These numbers seem completely wrong to me. You cannot add RSS; you should add PSS (proportional set size) and swap PSS to obtain true numbers. PSS values allow shared memory to be accounted for properly.

Here are the results that I got from my Linux system. I excluded all user-visible applications (like Firefox, the terminal, ssh or a text editor) and added up what was remaining. This includes systemd, system services like udevd, Gnome, and user services like pulseaudio. Total memory consumption is 730 megabytes (RAM + swap added).

I think the reasons for such high memory consumption are that Gnome Shell is written in JavaScript (gnome-shell alone uses 200 MB) and that there are many small daemons: for example, the NTFS daemon uses 30 MB, Xwayland uses 45 MB, the IME daemons use around 55 MB, and so on. Most daemons use somewhere around 5-15 MB, but there are so many of them that the result is this big.

For comparison, Firefox processes use somewhere around 1.2 GB, Skype (Electron-based) uses 600 MB, and desktop Telegram uses 600 MB as well despite being written in C++.

If you wish to try this on your system, you can use this Python script: [1]. Don't forget to edit the `get_group()` method to exclude user-visible applications (or close them). You need to run the script as root, because otherwise you will only be able to read your own `smaps_rollup` files. The script also saves detailed stats in CSV format to /tmp/ram.csv.

If you don't want to use my script you can use htop and enable PSS and SWAP columns there (and you can remove useless SHR/RES).

[1] https://gist.github.com/codedokode/ddaedf4ae44cbfa16ca44dde6...


Could you not use htop for this? I was kind of surprised when it showed i3 using about 500MB while pretty much idle. I do have a few extras installed: lxpolkit for mounting with Nautilus, feh for a background, nm-applet, picom, etc. There weren't any big whales, either. All of the processes were using only a point or so of RAM.


My i3bar is using 162MB, but I use a pretty extensively customized py3status. The /usr/bin/i3 executable itself reports not even a couple hundred KB for me.


A couple hundred KB is very impressive for i3. Even the ridiculously minimalist dwm (with a few patches applied) clocks in at 15,000 KB on my desktop. It does ship with a status bar though.


I was reading the wrong line in htop. The i3 executable clocks in at about 22,000 KB on my setup, so dwm probably still wins. I had to double check, because coming in smaller than your report of dwm failed my sanity check. :)

(I use st as my terminal, so I am familiar.)


Not really the 'same' (but good enough for me, better in fact): Sway comes in under 40MB of RSS.


Sway is a window manager, not a desktop environment. Not a fair comparison, since you have to install everything DEs provide separately.


No doubt, but I do challenge the assumption that you need 'everything'. I get by pretty well with about 20 apps and DEs tend to just keep adding more and more.


explorer.exe is at 347MB of memory usage, in comparison. We really are in the year of the Linux desktop; those DEs are bloated like Windows used to be.


1) The post has problems with the math and the numbers are wrong.

2) For a fair comparison you should add the memory used by daemons/services; otherwise I could say that gnome-shell uses only 200 MB, which is barely more than half your number.

3) I am not sure you can properly measure memory usage with Windows Task Manager. You need to properly account for private, shared and swapped-out pages for each process.


Lol.. Windows desktop runs a lot more than explorer.exe.


explorer.exe is closer to a window manager than a desktop environment. You'd have to compare it to kwin, mutter, xfwm, or i3.


Windows is still bloated.


Just added freecolor(1) and htop(1) RAM measurements at the end.

Hope that helps.

Regards.


I've mostly just used LXQt with my favourite file manager, Nemo, and called it a day. I don't see the appeal of heavy DEs. Openbox has ludicrous customization potential for basic split views and keyboard navigation, and endless visual tweaks and themes too, with dmenu for launching stuff. What is the stuff you'd miss from KDE/Cinnamon/Gnome?


FWIW: I just put Bodhi Linux v6 (Enlightenment/Moksha) on a late-2011 MacBook that has 16GB of RAM and an SSD. It is now running great under Bodhi but was butt slow with Ubuntu 20.04.



