I like Lubuntu and often use it for low-powered devices, and sometimes for devices that aren't so low-powered. This phrasing is a PR blunder in my opinion, because the old focus is a lot clearer than the new focus. Here's how you should have said it:
We are focusing our resources on developing a fast, clean, functional distribution for devices manufactured in the last ten years.
Then you can go on to explain the rationale (which is all quite sensible, if you don't want to support hardware that's 15-20 years old, you don't have to, it's a pretty niche market anyway).
> This phrasing is a PR blunder in my opinion because the old focus is a lot clearer than the new focus
To me, it seems that they ran the numbers (kudos to them for taking an empirical approach) and figured out that their original USP was now no longer unique enough, and they need to do something different to stay in the game.
But this article makes it seem like they haven't figured out what that something different is. They list a number of things they are going to provide, including:
- leverage modern, Qt-based technologies and programs to give users a functional yet modular experience
- Lubuntu will continue to be a transparent and open distribution
- create and maintain complete documentation
And a few others. But none of these seem to be different enough from the other great choices available as Ubuntu flavors.
That's just my read on the article anyway.
Personally, I think the fact that so many minor-variant distros exist is all the evidence you need that Linux is a terrible platform for customization: if it were simple to customize, everyone would just use Ubuntu and make their own changes, but instead you get all these distros.
If you look at any of the more "hands on" distros then you might find a couple of themed variations, but generally users will just recommend you install the base distro and change your personalisations yourself. As an aside I went through a phase of running a different DE/WM each week on ArchLinux. Never had any issues there.
Most other distros don't have that split - whether it's Arch Linux, Fedora, CentOS, NixOS, or what have you.
Fedora is more like a split - it's the people who like RHEL but want a faster release cycle.
Same goes for being up to date. Always want bleeding-edge stuff that might be unstable at times, or change its GUI ever so slightly all the time? Go for Arch Linux. Don't mind running a somewhat older version of most tools for a while if it's more stable? Pick Debian. Etc.
Right now Lubuntu's niche is really clear to me: if I want a lightweight, general-purpose distro I use Xubuntu. If I want a really lightweight, general-purpose distro I use Lubuntu. (A step beyond that, if I wanted "so damn light there are probably a lot of things I won't be able to do", I would think of Puppy or Damn Small or something, neither of which I've needed to touch in a long time; I'm not sure they're even still maintained.)
So if Lubuntu is no longer my super-light distro, erm, what is it? A competitor to Xubuntu?
(Total side note, I really love the space of light, clean, functional desktop operating systems. The most exciting development in computing for me personally would be if someone figured out how to commercialize one as a competitor to Windows and Mac. If I thought there was a good business model I'd start that company myself.)
Then the community complains "X" is too heavy, we need a lightweight system! and the cycle repeats itself.
Another one to add to your list of tiny Linux distros is Tiny Core Linux, which is indeed tiny while being fully graphical.
I see Puppy's shifted their idea of "lightweight" too, I recall the ISO being 4-5x smaller last time (2007ish).
Yep. "Gets out of the user's way" is what a lot of Ubuntu's goals are about already.
And wait until you hear about Unix.
Same as NTFS file system then
Win 3.1 (1992) used only FAT.
It's been a while since I checked all possible distros out there (which used to be a doable thing 2 decades ago, not so much anymore) but to me this sounds just like a whole bunch of other distros out there? So yes, not the best PR.
> Here's how you should have said it: We are focusing our resources on developing a fast, clean, functional distribution for devices manufactured in the last ten years.
They could have, but it also means something different, because it specifically says something about older devices, whereas it looks like they're actively shifting away from that. I'm saying 'looks like' because honestly, after reading the text, I'm still not sure what exactly the plan is. At one point the core goals don't mention anything about 10 years, and it says it "should still be usable on old systems", which sounds like 'well, maybe, don't care', while on the other hand it also says they "will no longer primarily focus on older hardware", which implies there is still some focus left. Again, yes, not the best PR, as this article just creates confusion because of the wording.
Very few distros want to support devices older than 10 years. That leaves the distros that want to be fast, clean, and functional. That really narrows things down.
Wait, what? 50% faster? Is that all? I mean, many tasks are still single-threaded, and from 2010 to 2018 we have grown only 50% in single-thread and 100% in multi-thread performance? If things were going the "old way", then in 8 years we would've seen a 2^(8/1.5) speedup -- well over thirty times (Moore's law as restated by David House, who predicted that chip performance would double every 18 months). You can see some of that here: http://3dfmaps.com/CPU/cpu.htm
No wonder there's no point focusing on old CPUs any more.
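For concreteness, the parent's "old way" figure works out like this (a quick back-of-the-envelope in Python; the 18-month doubling period is House's number quoted above):

```python
# Back-of-the-envelope: House's 18-month doubling applied to 2010 -> 2018.
years = 8
doubling_period = 1.5                     # years per doubling (House's number)
predicted = 2 ** (years / doubling_period)
print(f"{predicted:.1f}x")                # 40.3x, i.e. "well over thirty times"
```

Against an observed ~1.5x single-thread gain over the same span, the gap is stark.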
Well, the X5670 was a high-end 2P server CPU sold for $1400 (https://ark.intel.com/products/47920/Intel-Xeon-Processor-X5...) so to fairly measure how performance has improved, you should compare it to a similarly priced product, like the AMD EPYC 7401 (https://www.amd.com/en/products/cpu/amd-epyc-7401). And, oh, look at this: the SPECint_rate "h264ref" benchmark subcomponent, which measures multithreaded video encoding performance, shows a 4.2× improvement (472 to 2000):
• X5670: https://www.spec.org/cpu2006/results/res2010q2/cpu2006-20100...
• EPYC 7401: https://www.spec.org/cpu2006/results/res2017q4/cpu2006-20171...
Edit: oops, fixed major screw up. I had linked to SPECint numbers, instead of SPECint_rate. The real improvement is 4.2× not 45×! Far from what Moore's Law predicts (2^(8/1.5)) but still a very notable improvement. I'm sure your friend would be happy to transcode 4× faster...
To compare, let's presume (this is a great exaggeration, I know, but it serves the purpose) that each Chrome tab takes equal CPU time. Say, on the current computer, the 100th tab makes it unusably sluggish. At 2x, the previous computer could open 50; even at 4x we are looking at 25. At 100x, we are looking at not being able to run Chrome at all.
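The same thought experiment in code (a toy model; the equal-cost-per-tab assumption is the stated exaggeration):

```python
# Toy model from the comment above: tabs cost equal CPU time, and the
# current machine becomes unusably sluggish at its 100th tab.
def max_tabs(current_limit: int, slowdown: float) -> int:
    """How many tabs an N-times-slower machine could open on the same budget."""
    return int(current_limit / slowdown)

print(max_tabs(100, 2))    # 50
print(max_tabs(100, 4))    # 25
print(max_tabs(100, 100))  # 1 -- effectively unable to run Chrome
```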
It's also worth noting that an X5670 cost ~$1400, so you're comparing ~$2800 worth of CPUs to ~$350 worth of CPU in something like an 8700K, and that doesn't cover platform cost differences. Another way to look at this is that 8 years after the release of the X5670, you can get better performance for ~10% of the price; that's not bad. If you did plop down ~$2800 today for a CPU or two, you'd have 28+ cores at your disposal.
In just over a week's time you will be able to get a 32-core / 64-thread AMD Ryzen Threadripper 2990X for around $1,500-$1,800, which is pretty amazing.
The Ryzen is only 20% better in single-threaded performance. I could easily overclock my CPU to close the gap further. Just need to upgrade my GPU and I'm good for a few more years. These CPUs have aged quite well.
Double the performance isn't too bad obviously, but that's about the same timespan as between the 486 and the Pentium III.
I made my desktop in 2014 with a Haswell i5-4670K @ 3.4GHz. Overclocked it to 4.3GHz, and I still don't have a reason to upgrade. I really want a Ryzen 2700X also but staying patient. Newer and better hardware will keep coming out so I'm in no rush. (but my next CPU will definitely be AMD)
My current thing right now is compiling the Linux kernel exactly catered to my Haswell machine, and using the latest gcc with "-march=native -O2" optimizations. It might seem minor, but man if she ain't screaming right now. I should mention I have an AMD RX 480 graphics card also. So I'm up-to-date in that world and enjoying the best of team red and blue right now. New drivers, software, kernels, and optimizations keep coming out that make my current hardware faster. But then came Spectre and Meltdown lol.
One thing I will say, compiling your own has many benefits, as all the distribution kernels use Generic x86-64 and miss out on all the Intel/AMD optimizations.
For example, just by switching from "Generic-x86-64" to "Core 2 or newer" in Processor type and features -> Processor family in kernel config, adds these 5 optimizations:
You can say it's splitting hairs, but hey, that's what I want. Not many people have the time, know-how, run Linux, etc., to be able to make these optimizations.
My much newer one (six-core 6850 / 950 NVMe / 1080 Ti / 32 GB) really isn't that much faster, GPU aside.
For example: https://www.ebay.com/itm/Intel-i7-4770-3-4GHz-Quad-Core-Syst...
Intel® Virtualization Technology for Directed I/O (VT-d) ‡
And that's not even covering motherboard incompatibilities with Linux (e.g. sleep/shutdown not working, WiFi/Bluetooth broken, etc.)
Have those problems been fixed definitively? Or should one stick to Intel for Linux at the moment?
Also, since I use the Arch distro, installing packages from the AUR requires compiling stuff. So far ffmpeg has been among the packages with large source code base that I compile regularly and I've never had issues. I would like to compile the kernel too, but I'm unsure as to whether the difference will be noticeable for my daily usage. On my old machine, the Zen-kernel did improve responsiveness, but on my current one, I'm sticking to the LTS kernel.
Word is that the 2nd generation of Ryzen, a.k.a. Raven Ridge, is more stable and worth it if you can afford it (watch out for motherboard compatibility, though).
I for one couldn't suffer a machine where I occasionally need to reset, destroying flow.
Most of your gripes come from running Linux on a laptop machine. If you are a person who likes to work on powerful desk workstations, Linux behaves great on those. Granted, some things, such as multifunction printers (which are mostly software-driven) don't have support. You do have to be careful when selecting hardware, unfortunately.
Yes, of course!
> What about compiling large source code bases? There were some scary bugs reported on Linux when Ryzen came out.
Are you referring to this? https://www.extremetech.com/computing/254750-amd-replaces-ry...
The later revisions of Ryzen 1000 CPUs have been fixed, you could RMA your CPU if it was affected (I did this myself, AMD's support was excellent). Ryzen 2000 CPUs aren't affected.
> Have those problems been fixed definitively? Or should one stick to Intel for Linux at the moment?
I've never heard of any motherboard incompatibilities. The WiFi chips in some of those AMD motherboards are actually from Intel btw.
I've been running an AMD Ryzen workstation with Linux for over a year now and it works perfectly.
In other words: new CPU architectures don't make old algorithms faster†. Instead, new CPU architectures enable newer, faster algorithms.
† Yes, this used to happen, in the 90s, when "new CPU" meant "appreciably higher clock speed." Nobody is making computers faster by increasing clock speeds any more, because higher clock speeds = higher power-draw and higher TDP. CPUs are now advanced by making them work smarter, not harder: accomplishing the same abstract tasks faster by providing hardware that enables newer algorithms to be used for those tasks, while using the same or less energy than the previous implementation would have used.
Like client-server vs local, progress is a cycle.
If your language is inherently heap bound and/or targets bytecode with ad-hoc JIT trickery, no amount of static typing or hard inference is going to help.
And it's not setting any speed records among automatic-memory-management languages either. Swift is well ahead in performance without breaking a sweat. Some Common Lisp implementations generate faster code than Java without any JIT overhead or a billion-dollar investment in development.
It doesn't use full automatic GC, so of course it is going to be faster than Java.
Swift also has performance gotchas, like its strings. Its strings are very easy to use in a non-performant way, because they go for full Unicode correctness even when you don't want it.
But it's also a young language with a whole bunch of low hanging fruit in the perf realm.
My perf scale is also more oriented towards business logic / app-type things. If you're really thinking about your perf and doing a rewrite in fully static C/C++, then I have no idea how much better you can get :)
Is Common Lisp fully GC'd?
And yes, CL is fully GC'd. But my point is that that's not the culprit. If your problem calls for allocation and management of heap memory, it has to be done one way or another, manually or via GC. GC overhead in that light can really be negligible, and it's other concerns (like scheduling and real-time constraints) that typically make its use problematic.
Does not explode, just burns
The only annoying issue is having to use a piggyback PSU for my graphics card, an RX480. Dell just had to use a proprietary ATX-style connector, or else I would have just replaced the stock PSU altogether.
The thing is, many of the modern Linux desktops seem to be heavily wedded to a lot of dconf and systemd infrastructure. OpenBSD has managed to build analogs for enough of the heavily Linux-centric infrastructure that Gnome 3 depends on, but man it’s clear that Gnome 3 was written with Linux and only Linux in mind.
I didn’t see KDE in the OpenBSD ports when I looked, and I suspect it too is because KDE has become wedded to Linux-only services.
XFCE remains the most portable full-featured Desktop environment and runs well on OpenBSD.
But GNUStep... GNUStep is my biggest regret. GNUStep isn’t entirely comparable to XFCE, because GNUStep isn’t a desktop. It’s more like all GTK plus dconf plus all the desktop services required for interaction between GUI apps plus a uniform display layer.
In short, GNUStep is more or less like Cocoa from macOS. Add GWorkspace, and you have a solid reimplementation of macOS’s Finder.
I wish GNUStep had caught on or that someone would inject new life into it. There are apps that compile on both GNUstep and macOS. For whatever reason macOS seems to attract better desktop apps—both Free and commercial.
Had GNUStep and not KDE or Gnome become the defacto *Nix desktop, I imagine people would be writing macOS apps for free on Linux, and macOS users would be recompiling apps to run on Linux... and not just Linux, but all the platforms that GNUStep supports, like OpenBSD and Windows.
EDIT: Oh yeah, and why I thought to write this, is because GTK vs QT both feel like lower potential toolkits. And GNUStep also feels very lightweight and portable.
I doubt this is the issue. KDE Plasma 5 seemed okay on FreeBSD when I tried it recently, although I don't have any recent KDE+Linux experience to compare it with. More likely there just hasn't been enough developer interest to get it properly ported to OpenBSD.
Edit: To get more at your actual point, I've had similar feelings about GNUStep, but when I dug into it I felt like there was just too much of a mismatch with the rest of the FOSS GUI ecosystem. Maybe I'd be a die-hard devotee if I'd tried it back in the '90s as my first thing after fvwm-configured-to-look-like-mwm.
Definitely. It probably has the largest mismatch outside of exotic options like, I don’t know, a Smalltalk image.
I think that’s why the Free *nix desktop struggles to this day. We never built the core infrastructure that allowed useful inter-application messaging. We never built the application frameworks to enable a really great, consistent UX. When we finally did, we built dconf and systemd.
I can’t say these things are horrible, but neither can I say they offer anything more than macOS’s system services or Cocoa APIs. And the price of being different, is that macOS is far and away the best GUI platform today (I’m not a fan of the Mac these days for other reasons, but the UI consistency is first rate, owing to their frameworks and the NeXT legacy). We could have shared (maybe still could share) so much with the Mac. And importantly, we could have benefited from Apple’s singular vision while also providing an escape from the walled garden.
But yeah, the FOSS GUI ecosystem chased FVWM bling for a long time. Oh my goodness, I spent hours tweaking FVWM configs, later Window Maker, Enlightenment, and who knows what else. But it was all skin-deep looks. We spent so long chasing window themes, but ignoring the hard work of building infrastructure.
KDE and Gnome were ultimately late, divisive, and different from everything else. So much of Linux and the BSDs is great, it bums me to see us so far behind Apple, when we could have chased tail lights until we overtook them.
At that moment I knew that Linux would never win on the desktop. All those wasted developer hours.
I am still heartbroken to this day. In capitalism, competition benefits the user, even if the products are virtually identical, because then they compete on price. In the free software world, competition between two indistinguishable products actively harms the user.
There are people who prefer GNOME over KDE, would you force those people to use KDE? There are people who prefer KDE over GNOME, would you force those people to use GNOME? There are people who prefer Window Maker over any desktop environment, would you force them to use KDE or GNOME? There are people who prefer i3 over any DE or WM that uses an overlapping window UI paradigm, would you force them to use KDE or GNOME or Window Maker?
If you did so, you'd be making their experience with their computers worse - you'd be acting against their choices. If people didn't value having options, they'd flock to a single desktop environment (or WM) and the rest would be a mere curiosity at most. But this doesn't happen.
And of course what Slashdot predicted didn't happen either: in addition to GNOME and KDE we also have MATE, XFCE, Budgie, Pantheon, Enlightenment, Cinnamon, LXDE/LXQt and a TON of standalone window managers to choose from - we even got the rising popularity of tiled window managers, which was certainly not a thing back when KDE and GNOME were new.
Windows and Mac OS X don't allow much customization. Not because Microsoft and Apple are lazy, but because they want the average user to be able to pick up any Windows PC or Mac and be immediately familiar with how to use it.
If we didn't have the Gnome and KDE wars, we would have likely ended with a "standard" Linux desktop.
It looks like he’s done some custom work to the Preferences app. Impressive!
The project looks pretty dead, though.
Imagining an alternative timeline where GNUStep became the dominant Linux Desktop makes me feel like I live in the darkest timeline.
No reason GNUStep couldn't be as beautiful as anything else. Oddly, I find the NeXT aesthetic fresh-looking today. I was struck by how good it looked when I took a trip to the Living Computer Museum and played on their NeXT cube. What's old is new, I guess.
Still, not supporting themes from the get-go was definitely a reason GNUStep didn't get traction early on, I'd say.
I'd love to switch to a GNUstep-powered OS (maybe with Ubuntu underneath), but it's just not good enough yet.
I'm with you on the general idea that the GNUstep developers don't seem to be in a rush to take GNUstep out of the shadows and let it be a glory on Linux. More aggressively imitating Cocoa, adopting Swift and filling in some of its holes on Linux, and abandoning the wholly useless enterprise of source-level compatibility with the OPENSTEP APIs of the 1990s… these would be good.
And distros not having the old versions, too!
LXDE based Lubuntu struck the perfect balance (for me) between:
- Being minimal and lightweight
- but works out of the box with enough batteries included
- tons of community resources due to being a ~buntu
- very, very customizable (I love my undecorated windows and no bullshit shortcuts/decorations etc)
- sane user experience, without any unnecessary bloat
- LTS support (albeit shorter than other ~buntu releases).
Yes, I can achieve these things some other way, but Lubuntu hit this sweet spot out of the box with minimal tweaking.
Maybe LXQt will be great, maybe not, but at this point it reminds me of Ubuntu 10.10, which, IMHO, was as refined as Ubuntu (or any Linux distro at the time) could be, and then they just decided to throw all that maturity and refinement away and start from scratch.
I truly hope LXQt based Lubuntu succeeds, but it's a sad day to me.
I don't know if you know this, but LXDE is GTK2-based, so the decision to "throw all that maturity away" isn't quite the same here. Ubuntu bet on a convergence that never happened; Lubuntu has to eventually accept that GTK2 is getting long in the tooth. And since the LXDE maintainers moved to the LXQt project, Qt it is.
EDIT: Also, given the choice between GTK3 and Qt, the latter is the less resource-hungry, IIRC, at least when sticking to the basics.
I'm not old enough to have experienced that, but I can say that we have the same thing happening with 16.04
They had it almost perfect and decided to throw it all out.
History does repeat itself...
However, I wasn't into it to the extent that I could follow the linux distro culture. Don't even remember the version now; must have been 10 or 11.
There was also the fact that I had limited access to the internet in the early days (that's just how it was in India).
I was just sick of trying to make pirated software work (including windows), so was looking for something I can just download and run.
I do remember that I liked how easy it was to install games, compared to my windows desktop.
Only now, that I am familiar with software development cycles do I understand how stupid it is of Ubuntu to discontinue fully featured, polished software.
For example, Mac has changed so little over the years (haven't personally used it much), which probably is the reason why it's so much more mature as compared to linux DEs...
P.S. I am 19
Yes, 16.04 (Unity 7) is peak desktop for years to come. Nothing else works quite as well for me. Luckily, you can use Unity 7 with 18.04 without too much friction.
'rename'  is a good way to do it anyway.
Budgie is great, but it's not as mature as Unity 7.
So the "blame" lies not with the Lubuntu community but with LXDE.
I don't know where you're getting that. I like LXDE well enough. The only laptop I have running Linux runs Lubuntu. I would not call it "very" customizable. I can't easily rearrange the menu (which I blame the braindead .desktop file standard for), I can't add my own menus (to my knowledge), I can't change the application menus to be Mac-style top-of-screen menus like I could in KDE 3, etc.
I meant that from a point of view of comparing it to others like gtk3, unity, cinnamon etc for some specific features that I find useful.
Of course, not all distros will lend themselves to all types of customizations easily and customizability, at least to some degree, is relative.
With that in mind, yes, maybe I should tone it down to 'customizable,' instead of 'very, very customizable.'
For instance, why do we need one single global menu with all the apps categorized according to a .desktop file? That's overcomplicated and restrictive garbage.
Here's a simpler alternative: let me place as many panels on the desktop as I want, let me place widgets on the panels, let one of those widgets be a button that presents a menu-ized view of a folder structure. Now I can have as many menus as I want, with whatever layout I want, and organize my application launchers as best suits my workflow.
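A minimal sketch of that folder-as-menu idea (in Python; the folder layout and function name are hypothetical, not an existing widget API): each subfolder becomes a submenu, each file a launcher entry.

```python
# Hedged sketch of the proposal above: build nested menus from a folder tree.
# Subfolders map to submenus; files map to launcher entries (paths to invoke).
from pathlib import Path

def build_menu(root: Path) -> dict:
    """Recursively map a folder tree to nested menu dictionaries."""
    menu = {}
    for entry in sorted(root.iterdir()):
        if entry.is_dir():
            menu[entry.name] = build_menu(entry)  # submenu
        else:
            menu[entry.name] = str(entry)         # launcher -> path
    return menu
```

Organizing launchers then becomes plain file management: move a launcher between folders and the menu layout follows, no categorization standard needed.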
I used to use Gnome Do as my primary launch tool. Now I just use a shortcut (Windows Key + Space Bar) for the 'Run' dialog (usually found by Alt + F2) and I know the first few chars of my favorite tools after which they autocomplete.
For everyday use, it's hard to beat the speed and simplicity of that.
On the other side, I always loved GTK apps and their feel/design, but never got to make anything big myself in GTK.
There are other points to this debate than RAM/Resources: Looks and feel, philosophy, feature bloatedness, etc.
Akonadi, purely associated with the KDEPim stuff. You can just uninstall it all without a fuss.
Baloo, the file scanner. You can disable this outright.
Plasmashell, which is the desktop applet engine. This is where I feel LXQt can find its niche: by providing a simpler shell that uses fewer resources than the plasmoid JS-script-container style the Plasma Shell uses.
Without any of these running, a KDE desktop with system services etc. included is under 200 MB, often under 100 MB.
edit: found this http://www.linux-databook.info/?page_id=3728
But into the future I am very interested in restoring older laptops with ChromeOS / Neverware, especially if they meet the requirements for running the Android / Linux container and Studio. By the mid-2020s, Chromebooks could account for 5% of global PC market share.
But you are of course right that it would be less work for all people involved if these were instead Debian Derivatives or Fedora Spins where the useful infrastructure, developer processes etc. are already in place. I guess it all comes down to branding, and the fact that starting over is easier right at the beginning.
After the installer runs, if it actually works, you aren't left with something much different from a normal Arch installation.
The actual distro developers have a lot of work to do as far as assembling a wide range of components and often writing their own like package management systems and recipes for building thousands of packages.
It's unsurprising that writing 0.1% of a distro, badly, doesn't require much work.
Comments like these, which are common, are what pushed me away from open source. Everyone feels entitled. Fix it, report bugs, ask for features. Frankly, as an open source contributor, I want it to work for people, but, if it doesn't work for your specific setup and you offer nothing of value, I don't care that it doesn't.
In my case (not the person you replied to) the Antergos website and IRC (sorry, I know reporting on IRC is not as useful as a proper bug report) people gave me the impression it was all good and it worked for everyone. I spent 2 hours hacking the installer to try to make it work, until eventually giving up and using the Arch iso to install in 10 minutes.
The entitled behaviour one might see is therefore sometimes not just entitlement, but could also be frustration from "feeling misled". We all know it's free, and I'm grateful for that, but I'm not grateful for feeling promised something to work and then spending 2 hours too much on that promise.
"I worked with the Antergos team, a team of 4 people, to write a distro that let you pick and choose everything, from DE to browsers. While I've mainly faded away they are still hard at it. In short, if we can support nearly every DE with 4 volunteers, what are these *buntus spending their resources on?"
You compared flavors of Ubuntu unfavorably with flavors of antergos and it's neither throwing a fit nor entitled to point out that their resources presumably are spent on making things that work.
Regarding Antergos: I consulted their bug tracker; my issue had already been reported and wasn't fixed within the month I was interested in the matter, and it was impossible to use the older version without the bug, because once online it helpfully updated itself to the broken version. Further, as far as I could tell, the bug was generic enough that it would have affected most users. It simply crashed most of the way through a generic Ext4 install to bog-standard desktop hardware.
People throw around "entitled" as if giving away something for free exempts your work from all criticism. Sure, you owe me nothing, but if your work over-promises and under-delivers, I'm apt to say something so others don't waste their time.
Secondly, you didn't report your issues, enough said. When you are a volunteer team, you can't buy every piece of hardware. You absolutely depend on logs and more importantly people willing to test. I had an early Ubuntu beta format my partition when I hadn't selected to do so. I wouldn't say Ubuntu sucks and should give up. It was addressed as a bug and fixed. Such is the nature of software.
I don't think any promises were made. Antergos is an easy installer for arch. I don't know if you know all that goes into an installer. It's way more than I assumed when I signed up. Just writing py3 bindings for libparted, for example, didn't exist. I upstreamed those to RedHat. For a distro, I'd probably say the installer is 20 to 30 percent of the effort. Packaging is the large remaining majority. *buntu flavors reuse packaging, and reuse an installer.
I will never claim Antergos is perfect. It's not, and it does break as Arch changes things. But saying 'it doesn't work' or 'start over' is dismissive and borderline ignorant. Most of the issues over the years were due to changes in Arch packaging, not due to bad code.
I won't reply further, feel free to stand on your soapbox and continue badmouthing something you did nothing to help.
I have a 2004 nx7010 (awesome laptop, at the time and also 8 years later) that I sometimes use at home or grab to play around with and try out different distros and OSes. I used to use ArchLinux but they stopped supporting x86 a while ago. VoidLinux is also out. Right now I'm running OpenBSD 6.2 on it, which is nice - but the short release cycle for a machine you only start up like 3 times before the OS is EOLed kinda sucks. (Not OpenBSD's fault, of course.)
So we'll see how long one can get a decent Ubuntu fallback solution (as in, it's big enough that enough people work on it that it will mostly work out of the box for these hobby projects of running old but not ancient hardware) for x86.
Sorry if it sounded like complaining; that wasn't my intention. I just think x86 is dying a slow death. There are not many "enthusiasts" like there are for, e.g., 1980s or 1990s hardware, where you have "this one ancient build everyone uses", and x86 has no discernible advantage over x64 these days (please don't name the RAM requirements for 32-bit builds now :P), so x64 is in 99% of cases superior, and that's why it's rightfully supported. VAX, Sun, SGI, and C64 stuff is exciting; x86 in the age of x64 is just "older and slower" to most people.
A modern Linux distribution in a VM, running on your laptop, typically isn't all that great an experience.
Where Linux is very successful, it's focused on audiences like sysadmins and devops people.
But there's no serious focused distro for web and cross platform app designers/developers and other creators. That's something companies that make design tools could pull off, where the graphical framework of their tool becomes the framework of the OS, similar to how GTK (Gimp Tool Kit) was born, but in a more professional way.
One of the main reasons I use Lubuntu is the fact that it got out of my way (install Lubuntu, remove the handful of stuff that came pre-installed, install i3 and get to work).
With my above setup, I would idle at just 250 MB of RAM usage and almost no CPU usage. I loved that, especially as we have more and more Electron apps (Slack, Teams, Atom, GitKraken, etc.).
I hope that bit of focus is not lost, as not everyone can afford, or has access to, a 32 GB i9 MacBook Pro to develop on. (My personal machine is a ThinkPad T420 with 8 GB RAM and a 250 GB SSD.)
Anyway, back to your question. Razor-qt was pretty snappy on my machine, leading me to conclude that KDE was slow because it was KDE. Akonadi, slow animations, buggy widgets, you name it. I haven't tried KDE since then (this was back in 2012 or so), but I've heard that the latest incarnations are pretty snappy and light. Even if they aren't, I highly doubt LXQt will have any problems with lag.
Because otherwise they have to develop it?
> A skin is superficial, I would do anything to avoid QTumor.
Do me a favour.
Wonder if they will also (have already?) adopt LXQt?
Up until now Lubuntu had a clear goal: Create a usable Ubuntu-compatible distribution that runs on old hardware.
Now the goals are so vague and subjective that they lead to nowhere. Sure, this won't matter in the short term, but sooner or later the project will get completely derailed because of different interpretations of this dumb statement.
They can still make a good lightweight OS that runs on 10 year old hardware. There's no reason to change the goal.
Sure, it's no longer a massive challenge, but it doesn't have to be. If the developers want to try new things they can fork it.
Lubuntu worked great on one of the first netbooks from 2007, where other distributions did not work out of the box.
If people want modular or super configurable, then Arch Linux has that use case pretty well covered, does it not?
It's difficult to find a phone with less than 1 GB RAM. And although ARM CPUs are much less powerful than x86 at the same clock, 1 GHz also seems to be a minimum.
That being said, I'm not clear on what the real difference between Lubuntu and Kubuntu will be now.
Not really for GPUs though. AMD progressed a lot and all the new features are landing in amdgpu, not in radeon. That's important especially for Vulkan support. Older GPUs are locked out of that completely.
CPU progress is less rapid, but something like Ryzen is quite a breakthrough.
It's cool to see LXQt in a state where Lubuntu is adopting it as its primary environment.
It's terrible that we can't count on computers getting twice as fast every year any more. I guess it couldn't last forever...
For those kinds of hard problems, the advances seem to be coming from massively parallel computing (GPGPU has transformed ML, for example).
This miiiiight be just you. Even Sublime Text (never mind vim or another CLI editor) runs fine on ancient hardware.
What type of files are you editing?
> Or use entire sticks of RAM at rest.
I'd hope my text editor would use as much RAM as it needs in order to give me the smoothest experience. If possible, that means storing the whole file in RAM (up to a set limit).
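The "whole file in RAM up to a set limit" idea is easy to sketch. This is a hypothetical illustration (the function name and the 64 MB cap are made up, not any real editor's code), just to show the design choice:

```python
import os

# Arbitrary cap for illustration: files at or below this size are read
# entirely into memory; anything bigger falls back to a lazy file handle.
RAM_LIMIT = 64 * 1024 * 1024  # 64 MB

def open_for_editing(path):
    """Return the whole file as bytes if it fits under RAM_LIMIT,
    otherwise return an open file object to be read incrementally."""
    if os.path.getsize(path) <= RAM_LIMIT:
        # Small file: keep the full buffer in RAM for smooth editing.
        with open(path, "rb") as f:
            return f.read()
    # Large file: stream it instead of holding one giant buffer.
    return open(path, "rb")
```

Real editors are more elaborate (gap buffers, ropes, mmap), but the trade-off is the same one described above: spend RAM for smoothness, with an escape hatch for huge files.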
You must not throw much at it
It can play absolutely every single modern game, with the only component upgrade in its 8 year life being the GPU.
Either my CPU is slower than it should be, your CPU is faster than it should be, or you haven't checked out one of the poster-child AAA game series.
I cannot think of any modern, graphically demanding game which is bottlenecked by the CPU. I imagine you are harming your GPU performance to some degree with such an old CPU though. Not sure I buy what you're saying here. What settings are you using?
Yeah, me neither.
As an aside, to answer your question: I can run pretty much any game on its highest settings just fine with this machine, which has a GTX 1070, a 2600K, and 16 GB of RAM driving a 120 Hz 1080p display. It was originally built with two HD5990s in CrossFire.
If the purpose of the device is to play games, and it is incapable of doing so without this component, I'd think of it as a bit more than an "add-on card".
For 90% of computers, merely putting in an SSD (no matter how old the machine is, as long as it has, or can fit, at least 2 GB of RAM to deal with piggy "modern" Electron/CEF/etc. apps and similar idiocy in webapps in browsers) can sometimes make it seem a decade newer.
For those who game, more often than not, just upgrading the GPU helps.
I have an i7-4771 @ 3.9 GHz with 32 GB of DDR3-2133. This is not a new system, but no one has made anything newer that would really improve actual performance for me... what holds me back is a similarly old GPU that I've finally decided to swap out with a GTX 1180 when they come out next month.
My 7970 served me for many many years and games are finally starting to punish it at 1080p. Still won't need to update my CPU, however.
I think for the purposes of OEM software licensing, they use CPU + motherboard as the defining hardware but I'm not sure.
Even then, you're going to have issues with the motherboard and probably the CPU bottlenecking you as well.
Also, issues with CPU throttling are not too bad, unless you are already pushing the limits of your GPU, which gets more and more difficult each year.