Why You Should Use Tumbleweed (rootco.de)
123 points by rbrownsuse on March 29, 2016 | hide | past | favorite | 66 comments

"It uses openCV and a library of reference screenshots (with areas of interest selected to allow openQA to ignore things we don’t care about) which we call ‘needles’ ..."

OpenQA is a fascinating project. The tests are written in perl and I think the OpenCV magical stuff is around the "check_screen()" function.

For example, a test for Firefox:

    x11_start_program("firefox https://html5test.com/index.html", 6, {valid => 1});

    # dismiss the "make Firefox your default browser" prompt if it appears
    if (check_screen('firefox_default_browser')) {
        assert_and_click 'firefox_default_browser_yes';
    }

check_screen is a non-fatal check, really useful for cases like the one you cite in that example: "tell openQA to see if something is on the screen, then react to it"

assert_screen is its fatal cousin: "make sure the screen has this needle on it, or die"

assert_and_click is its mouse-controlling companion: "make sure the screen has this needle on it, then click on it"


The above doc covers the basics, we also have a whole bunch of tutorials on YouTube

https://www.youtube.com/watch?v=-fqvaSO6nKE
https://www.youtube.com/watch?v=a8LmqhwpVvg
https://www.youtube.com/watch?v=EM3XmaQXcLg

That's great!

I found the piece of code I was looking for inside "tinycv" (about OpenCV): https://github.com/os-autoinst/os-autoinst/blob/3c2f5edda09b...

I switched to Debian from Fedora a year ago, during the "freeze" in the release cycle. I wonder how the stability of Tumbleweed compares to a standard (non-beta) Fedora or to Debian testing.

Is this the right repo for the Tumbleweed openQA tests? https://github.com/os-autoinst/os-autoinst-distri-opensuse

Yes, we share one test repo amongst all the openSUSE distributions and SUSE distributions

SUSE use openQA for testing their enterprise distributions too, and keeping all our tests together and reusing them as heavily as possible really helps in all directions (and proves the point that maintaining openQA tests isn't hard, even when dealing with multiple codebases-under-test all moving at different paces)

I use Tumbleweed as my daily driver on all my machines besides my server (Leap).

In 2 years I've had no problems that I didn't cause myself (e.g. rm -rf /), and even then snapper saved the day and let me roll back to a working snapshot (Tumbleweed, like all openSUSE/SUSE distributions, has btrfs snapshots and rollback by default)
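For anyone curious what that rescue looks like in practice, the rollback flow is roughly this (a sketch following openSUSE's boot-from-snapshot workflow; output and snapshot numbers are illustrative):

```shell
# list the snapshots snapper has taken (configured out of the box on btrfs roots)
sudo snapper list

# after booting a known-good read-only snapshot from the GRUB menu,
# promote it to the new default and reboot into it
sudo snapper rollback
sudo reboot
```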

So I'd say it's comparable to Fedora or Debian testing

And we'd like to see more contributors: not only people adding and maintaining packages, but also contributors to openQA, so that Tumbleweed's quality doesn't just stay at this level but gets even higher

Thanks a lot. I'll try Tumbleweed.

This is very cool. But it irritates me a bit: another cool piece is reproducible builds, as implemented by guix challenge [1], and I can't have both!

I also tend to think most package managers lack a sound foundation, unlike nix. It builds everything upon a basic but powerful idea: don't force the whole package tree to stay in sync.

[1] https://www.gnu.org/software/guix/manual/html_node/Invoking-...

This does all sound quite compelling. Somehow OpenSUSE has flown under my radar. I've been an Arch user for a few years, and a Gentoo user before that, and I'd only ever be interested in rolling distros, but I wasn't aware that OpenSUSE had Tumbleweed for this.

I did find dependency conflicts all too common on Gentoo, and keeping the system up to date was a frequent struggle. I've not had any major issues with Arch recently, and any that occur are usually fixed by doing a full system upgrade, but they do occur from time to time. I'd be interested to see how SUSE differs.

One thing I've noticed when doing a quick bit of reading on your Wiki - it sounds like you can't update Nvidia drivers from the package manager in Tumbleweed, and it has to be done manually. It also sounds like this process has to be done any time you update the kernel or X. This sounds like quite a pain - manually installing the nvidia drivers isn't exactly complex, but it's a number of manual steps that I'd rather not have to go through outside of the package manager every time I do a system update. Am I missing something here, or is this the case?

It also sounds like it supports installing with sysvinit rather than systemd, which is a rarity these days, and might appeal to some people.

For years I only used distributions based on Debian; Slackware was my first distro, but it was such a burden to get Wi-Fi going. I gave SUSE a shot and have not looked back; anything I need to install is available. I think my only issue with openSUSE was getting DMD (the D language compiler) to cooperate for some reason. I'll likely just use Vagrant with Ubuntu as a workaround. My only other issue is the fact I can't have it on DigitalOcean as a one-click install. :)

People have been asking for that for quite a while..


There are plenty of hosting providers who do offer Leap and sometimes even Tumbleweed images.

My blog is hosted on Hetzner (they're local ;))

We have a solid presence on all the major Public Clouds https://en.opensuse.org/Portal:Cloud

Would love to be everywhere, so if any hosting companies want to get in touch about using openSUSE on their platform, my contact details are on the bottom of that article

Thank you for the UserVoice link, I added in my vote, hopefully others here in HN who enjoy SUSE do as well. I would also love to see SUSE more in the cloud, it is definitely a great distro.

Hetzner is really awesome. They even offer a painless Arch Linux installation which works great.

I agree with you on Gentoo. I loved the project from the rolling-distribution standpoint and ran it for a while, but long-term maintenance was a mess. I'd reserve a few hours for doing updates, since every emerge would have me validating, via diffs, every configuration file (which it, unintelligently, wanted to return to the "sample" state).

Tumbleweed struck the right balance for me. Bleeding edge, rolling distribution with a mix of easy maintenance and configuration (and at the time all of Gentoo's emerge operations involved compiling from source, so you could add "faster maintenance" to the list).

I have a side-project I'm going to start this weekend that'll have me checking out Gentoo again, is it still this ugly or has that part improved?

Having been a Gentoo user for years: if your issue is the config file updates, I'd say they're nearly, if not exactly, the same. AFAIK there are some better tools than etc-update that may make it easier.

As for compiling from source: you can avoid some of that with binary packages, but that's not the focus of the distro; it has always been about compiling from source.

About updates: I currently run it on a server and plan on moving my laptop from Windows to Gentoo. On the server, at least, there are no issues. I have a cron job that runs a sync every day and sends me an email report; when there are too many new packages I update, which is usually pretty quick unless there are new versions of gcc or other large packages (gcc takes ~15 min). I would say give it a try again.

Many thanks for the info -- actually, the reason I'm looking into Gentoo, again, is specifically because of the "compile from source". I have a particular need where this is the only solution that suits it well so I'm glad they haven't abandoned that (I seem to recall that about the time I left Gentoo, they had just introduced binary sources for the kernel and such to speed up getting a system working "from nothing").

I'll poke around for information on alternatives to etc-update. That really was a bear, and where I'm planning on using this I'll want it to work with a lot less overhead.

nvidia drivers - we don't recommend them because we have a habit of adding kernels quicker than NVIDIA can keep up with their proprietary drivers

If you really want to use them, I'd recommend the dkms drivers we have in OBS https://en.opensuse.org/SDB:NVIDIA_the_hard_way#Further_read...

And yes, I believe if you want you can install with sysvinit, though we are primarily a systemd distribution first (with extensive sysvinit compatibility.. people still love those runlevels)

> people still love those runlevels

Ts'o has an interesting use case for runlevels that didn't translate well to systemd.

He set up a runlevel on his laptop that was the same as the graphical one, except it stopped various daemons etc to keep battery drain low. He would then switch to said runlevel when he would be away from a socket for an extended period.

When he tried to translate that to systemd, he ended up with all kinds of side effects. One of them being that he lost access to the Fn combos, so he could no longer control the screen brightness etc.
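For what it's worth, the closest systemd analogue of that runlevel trick is a custom target. A minimal, hypothetical sketch (the target name and the list of services to stop are illustrative):

```ini
# /etc/systemd/system/battery.target  (hypothetical)
# A graphical session minus the power-hungry services.
[Unit]
Description=Low-power graphical session
Requires=multi-user.target
After=multi-user.target
# services stopped whenever this target is started
Conflicts=bluetooth.service cups.service ModemManager.service
AllowIsolate=yes
```

Switching would then be `systemctl isolate battery.target`, and back again with `systemctl isolate graphical.target`. But `isolate` stops everything outside the target's dependency tree, which is exactly where surprising side effects like the ones described above tend to come from.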

Ah, that's fine in theory then - I use a dkms package on Arch too since I use a non standard kernel with VFIO patches for PCI passthrough. Interestingly I see people have reported getting PCI passthrough working with the standard SUSE kernel too, so I'll have to give that a try.

As far as I can see the x11-video-nvidia package in the page you linked has quite an old nvidia driver version for Tumbleweed - https://software.opensuse.org/package/x11-video-nvidia

Again, I might be missing something, and in practice it might be easy to install the latest drivers, but this is a bit off-putting. I don't know the details of how your build system works or why keeping up with Nvidia drivers is complicated, but this seems like quite a fundamental thing for a lot of users. The vast majority of people who play games on Linux are going to want recent proprietary Nvidia drivers, so having that as a footnote at the bottom of an article that says it's not recommended/supported doesn't inspire confidence.

Still, I'm intrigued enough to give it a try.

The search seems to be doing something weird at the moment..

There is a second "openSUSE Tumbleweed" under the "unsupported distributions" category, and that has the more up to date nvidia packages I'd expect

If someone would like to take a look at the lovely ruby behind software.opensuse.org and fix that bug, I'm sure we'd love the pull requests :) https://github.com/openSUSE/software-o-o

The main problem these days is that people don't give a fuck about packaging their software for the constellations of distributions that exist. OpenSUSE topic-repositories usually offer the latest packaged version of most things for the Stable version too (say the Games repository), where available. But in many cases, latest versions are just not there for anyone.

We are in a model now where OS repositories have been replaced by npm, pip, rubygems in the best case. By pulling from Github's master branch in the worst (wink at Golang). In a sense, your system is already a rolling distro, except plugged into a third-party-managed repository which you trust to install packages produced by total strangers.

That does sound pretty cool.

Who owns Suse again? Do they have a social contract? How's their democratic decision making? Do they properly separate non-free software from their free packages?

- Debian User.

I wrote a rather long mailinglist post that should answer a lot of those questions..


and one question that doesn't get answered by that link - yes, we properly separate non-free software from our free packages

"Who Owns It" seems to be a company owned by a company floated on the stock markets. But with excellent community input.

That company is apparently actually making money selling at-least-mostly-free software, while supporting the properly-free openSuse.

Is that about right?

That's probably better than being Shuttleworth's toy.

Looks easily enough forked if needed.

I think I'd feel weird about going back to an OS supported by a for-profit rather than a charity, but that test suite does sound top.

I may give it a go for a while at least, next time I have to reinstall.

Cheers for making me pay some attention to it ;)

I'd say that's a fair assessment :)

SUSE isn't your typical 'for-profit', and the nature of the relationship with openSUSE reflects that

Thanks for the consideration

I have some relatively fond memories of SuSE, because it was my first GNU/Linux distribution. Back then, it had the advantage of being pretty newbie-friendly (graphical installer / system control utility) and good ISDN support. Since ISDN apparently never became popular outside of Germany, not many distros had good support for it, and setting it up manually could be a real pain (at least for a newbie - a couple of years later, I set up a dial-on-demand ISDN router using NetBSD). So SuSE was, in retrospect a good choice for a first distro.

Eventually, though, I got fed up with YaST, because it used to overwrite config files and thus wipe out all manual changes one made to them. Does SuSE still do that? I remember reading about plans to rewrite YaST from scratch, so hopefully they found a better way of dealing with config files. From a UI perspective, YaST was a neat tool for non-techies.

YaST was rewritten in Ruby, yes.

These days YaST no longer overwrites config files, except from a very few corner cases where it really really wants to be in control of specific config files for very specific reasons. But if it notices local changes, it warns the admin and doesn't take over unless the admin consents

So, no more unexpected config file obliteration :)

(and even if it did, YaST is integrated with openSUSE's default btrfs snapshot tooling, so YaST takes a snapshot before and after it changes anything, so you could always revert)

Thanks for the info! I think I'll give it a try over the next extended weekend.

So it automatically ships the latest versions of e.g. screen and vim, no manual recompiling/installing necessary?

That's always been my biggest gripe with other distros, they'd have a version of an app or tool that was built once years ago and never touched again. I'm sold, I'll be taking Tumbleweed for a test drive tonight.

It is a rolling release; that means each time a new version of a package is available, they try to incorporate it. The difference from Gentoo, Arch, et al. is apparently that they have a large CI system that actually inspects screenshots to verify that the new package didn't break anything.

I run antergos (arch based) at home, having the bleeding edge is nice, but the AUR is really what seals the deal for me. That being said I have been cut by the bleeding edge at least twice, once so badly that it booted up without the eth0 interface. Maybe tumbleweed wouldn't have this problem as often.

It's probably worth mentioning that the openSUSE Build Service pretty much fills the same role as AUR

http://software.opensuse.org lets you search across all the available repositories and easily install packages on your Tumbleweed installation

Sure, those repositories are not tested, unlike the main Tumbleweed distribution, but then again, AUR isn't exactly expected to be a wholly reliable resource, just a useful one :)
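For the command-line route, adding an OBS repository and installing from it looks something like this (a sketch; the project and package names are made up, and the `obs://` shorthand assumes an openSUSE host):

```shell
# add a devel project from the openSUSE Build Service (illustrative name)
sudo zypper addrepo obs://devel:tools devel-tools
sudo zypper refresh

# install a package from it (hypothetical name)
sudo zypper install some-package
```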

CI tests as described are very good for minor patches, but new features and significant UI changes seem to require significant manual updates to tests anyway.

You'd be surprised how little a problem it is in the real world

In the case of UI changes, openQA can have its 'needles' updated in its webUI. Look at the new screenshot, compare it to the old one, tell openQA to accept the new state, done.

In the case of functionality or workflow changes, yeah, tests need to be adapted, but the openQA test writing language is pretty human friendly, describing the steps that humans actually do, so it's not that hard to alter

And because we test at the point of submission before anything is merged to the distribution, we generally catch the behaviour changes as part of the package submission, then we have developers keen to get their stuff in the distribution happy to help tune up the tests ;)

That doesn't sound very different from Debian Sid: new and updated packages are rolled in as they come through. This means fairly frequent breakages, especially for complex stacks.

A lot of Debian-based distributions (including the first Ubuntu releases) are basically Sid snapshots at some point in time. It's been like that for a long time.

Apart from one difference: Debian Sid has no testing, but Tumbleweed does, so those breakages don't happen

Most (if not all) packages installed in any *nix have tests, which (hopefully) are run during the system-specific package creation.

Testing packages, and testing that an entire distribution works with those packages on it, are two very different things..

Well, aren't they saying this is Sid but with good QA?

Ok, I'm a long time suse fan (ever since I dropped mandrake). But mostly I run it in desktop VMs (openSUSE) or on server-class machines (SLES). With the late 12.x series, it was working so well as a desktop OS in the VM that I decided to install it on a real piece of hardware and use it as a daily driver. That was a first for me in about 10 years. And it mostly worked on the laptop I installed it on. Wifi (including fn enable/disable buttons, and transitions between networks/wired/etc), suspend/resume, 3D acceleration/OpenGL, keyboard volume controls, etc. all just worked. About the only things that required futzing were the LCD brightness keyboard bindings, which were dorked (even though it's a standard i3/etc machine), and the bluetooth, which was also broken. Three or four hours of recompiling modules and messing with KDE/etc and everything on the machine worked. That made it the _ONLY_ laptop I've ever seen that worked without compromises in linux.

Fantastic experience, but with the 13.x series it seemed to go downhill a little. Enough that I stopped upgrading it because things break and I have to screw with it.

I wanted to like the leap concept, but so many things seemed broken that, after spending two days trying to install it on a fairly normal X99 machine with an NVMe drive, I reverted to running it in a Windows-hosted VM.

I can't imagine what tumbleweed is like. Getting the kernel to boot and X to display graphics are just the first steps in my book. Plus, don't get me started on the UI changes. I want my desktop machine and my servers to be _STABLE_. Screwing around with the bluetooth driver every time the kernel gets upgraded (or something similar) isn't something I want to be dealing with when I have actual work that needs finishing.

So, before someone tells me to try ubuntu/etc I'm going to say that I regularly run fedora/ubuntu/etc and they all have enough hangups that I continue to run opensuse based products at home (work has a different set of requirements, where I've been fedora/RHEL based recently rather than opensuse/sles).

Anyway, I really wish someone would take the leap-type concept and stabilize the base OS/X/KDE and maintain it for a long time period while ensuring that newer versions of linux applications continue to work on that platform. You know, like windows (before 10). That way I won't have to worry that my 1-year-old KDE version is too old to run a CD-ripping application (or whatever).

This is just anecdotal as well, but I've been running Tumbleweed for at least a year now on my Thinkpad. It's been rock solid. I'm a Gnome user though which I think makes a huge difference in this case, I see a lot of noise about problems with KDE.

Seems like a step forward in solving the "stale versus reliable" conundrum. Over the years I've repeatedly been irked by how stale the package repositories are in some distros. At the fresh end of the spectrum, I used Gentoo for a while, and I do like many things about Gentoo, but I came to the conclusion that it is essentially impossible to QA Gentoo. Plus, falling behind in Gentoo opens you to a world of hurt trying to update.

So I'll probably try Tumbleweed. I think the acid test will be how hard it is to update a few-months old installation. Testing each release is one thing -- testing all variations of update paths, again, leads to combinatorial explosion. Perhaps they have an update mechanism that deals with migration issues -- I'd like to understand how they approach that problem.

zypper takes care of most of that for us - it really is an exceptional package manager

We have some tests in openQA that keep an eye on upgrades, as well as some users who like to stagger their Tumbleweed updates for some weeks.

I think the record I heard was like 3 months (and our gcc5 migration happened during those months), and then they upgraded with no problem
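For context, that upgrade path is essentially a single operation; per the openSUSE docs, Tumbleweed is updated with a full distribution upgrade rather than a plain package update:

```shell
# refresh metadata and pull the next Tumbleweed snapshot
sudo zypper refresh
sudo zypper dist-upgrade   # commonly abbreviated as 'zypper dup'
```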

3 months? That's peanuts. Having too many Arch boxes, back in '12/'13 I had one that was 9 months behind, with the big filesystem movearound and sysvinit->systemd change happening in that period. It took a couple of hours of work, but the upgrade went through fine in the end.

I'd suggest you do automated testing of at least 6 month old system upgrades. Because users are going to do it, and much complaining will ensue when it breaks their system.

We do automated testing from our 'traditional' distributions, including versions that existed BEFORE Tumbleweed in its current form, like 13.1, 12.3, etc.

So in that case, we're going back years ;)

Is this guy leaving out periods as some kind of intentional stylistic thing, or did he just fail at proofreading? Lots of other typos too. A little ironic when you're writing about doing things correctly and testing comprehensively!

You've unfortunately been massively downvoted for saying this, but I think you're right. Also, in Tested Well just before the screenshot I couldn't understand the sentence "And if we don’t test it well enough we want to and you can contribute tests as everything in [openQA is 100% open source]."

The author mentions at the bottom that it was inspired by a Reddit article, so I think this was just thrown together in rapid-fire response style with little proofreading.

I think it's the nice color scheme and well-picked fonts that clash - we've all come to expect perfect typography when presented with good site design, and it's a bit of a shock.

Definitely hope the article gets cleaned up, because the content is really compelling.

Will do..thanks for the critique

It probably didn't come across as well as I was hoping, but I wasn't going for a scathing-review motif. I could probably have couched my response a little more comfortably though. Kudos for the brave face and the positive reaction :)

Also, I didn't make the connection that you were the article author (!) - your followup on here has been admirable and impressive. Not many of the people who submit OC (of sorts) also follow up.

And thanks for writing this, it's definitely made me very curious about trying out OpenSuSE - or more accurately why I would try it. I'm using Arch at the moment, but I might be exploring in future :)

The AUR is pretty amazing, though... I don't use it that much, but it's nice that I've been able to install the stuff I have wanted with a single command. I get the impression there isn't a comparable alternative to this for SuSE...?

There is the graphical option, using http://software.opensuse.org and the 1-click install functionality there to add packages direct from OBS

on the commandline, there is a plugin for osc (the OBS commandline tool) https://build.opensuse.org/package/show/openSUSE:Tools/osc-p...

This enables 'osc install $foo' which will search the build service for package $foo and install it, which I believe to be the closest approximation of what you expect from AUR

I noticed the 1-click install system, that's kinda cute :P (I have a vaguely similar experience when I go, er, Slackware package fishing.)

It seems to me that OpenSuSE (and SuSE itself) follows a philosophy of using centralized build management and verification, with a policy that supports minimal (if any) local from-source recompilation. Basically the exact opposite approach to Gentoo, the only distribution where gcc is more important than eth0 :P

This centralized model is actually exactly what I've been pining for for a very long time - an approach that a) verifies that XYZ works right in a central location, then distributes that known-working configuration, and b) creates an environment where clients are built solely from such known-working configuration objects, and are thus relatively easily reproducible at scale.

I'm obviously testing OpenSuSE sometime in the short to medium term :D here's hoping it works out well for me in practice!

I'd heard of the OBS, but I didn't know the (Open)SuSE ecosystem was wrapped around it quite as tightly as I now see it is.

The one question I do have now is, where do you think OpenSuSE sits in relation to functional (ie static/absolute) package management? The distributed-known-working-blob approach sounds like it would fit in incredibly well with a model like what the Nix package manager uses.

Of course the current OpenSuSE ecosystem doesn't use this approach so adding it tomorrow would provide nothing unless everyone shifted mindsets, which would naturally not happen anytime soon. I'm just curious about what would happen if the two ideas were combined, since they don't seem to be particularly mutually exclusive or conflicting, and the result sounds like it might be potentially shiny and interesting (and possibly very powerful). And non-relative configuration sounds like the future (to me at least).

Things that prevented me from using OpenSuSE:

1. no single letter user name allowed in the installer

2. yast rewriting config files

3. I couldn't get the partitioning tool to do what I wanted, which is plain simple: /boot plus encrypted swap and root.

1. kinda true, though the current version of the installer lets you skip the username creation so you can do whatever you want afterwards :)

2. formerly true - YaST lives happily with config files these days. Where possible it co-exists; only a few specific YaST modules need that absolute control, and they only rewrite files after warning you well in advance. i.e. not true any more

3. I think we fixed that..it's a radio button now.. Encrypted LVM-based proposal https://lizards.opensuse.org/2016/03/15/highlights-of-develo...

where can I find exact documentation about which config files are still manipulated / changed by yast and which are not?

the only one I can think of is the apache one. If you use the YaST apache configuration module it will warn you before taking over and changing any local changes. It actually does its best to do a merge of local changes and its own, but it's the one case I know of where that merge can be destructive.

Thank you very much for your attention - however I asked where I can find exact documentation about these things?

"Mr Brown from Suse wrote in a HN thread..." will not be enough as a reliable source of information. Yes, I could just read the source, however I am expecting such an extraordinary important thing that will change my config files to be documented. I need precise information here.

Also, it would be important to know how this conflicts with configuration management systems like chef, ansible, puppet etc. - or is yast able to manage multiple workstations itself, so that it is a replacement for these tools?


There is pretty extensive documentation on http://yast.github.io/documentation.html

It doesn't conflict with other configuration management systems. I've used openSUSE extensively with puppet and saltstack.

Many of SUSE's products use other configuration management systems as part of their toolchain, and SUSE are shipping SUSE Manager 3.0 with SaltStack, so their customers are expected to be able to use YaST alongside such a system

Multiple 404s on that page:



I am not trying to make you angry, but I think I put the finger right on the place where it hurts very much.

There is no clear documentation about what yast does or what it will not do - basically a blackbox. Maybe it will change apache config. Maybe not. Who knows.

:) fair point..I'll let the YaST team know (or you could if you happen to be on freenode, they live in the #yast channel)

Users know - YaST doesn't change stuff without telling you, that was the point I was trying to make earlier. Users of YaST no longer have to worry about it silently taking over config files, it either co-exists or doesn't do anything without telling the user with great big pop-up boxes first

OpenQA seems really brilliant, but I guess someone could eventually just port it to Arch, Gentoo, Debian, or Fedora Rawhide? I hope they do.

Sure they can (and Fedora are already using and contributing to openQA)

But you really need to be using OBS too in order to produce those builds in the fast, easily contributed way that we then (ab)use with Tumbleweed

Of course, other projects could use OBS also..either hosting their own or by using our servers even..after all it builds Arch, Debian, Ubuntu, Fedora and other packages

But that's what being open is all about right? Doing cool things because you need them and then benefiting even more when everyone else finally catches up and starts working with you on it :)

So for every library Foo, it tracks all of the packages that depend on Foo? What if I add my own package -- do I have to tell Foo about mine as well? What if Foo's API changed or breaks my package?

Yes, the open build service takes care of that for you

You don't need to tell Foo about your package, the build service will detect those relationships

If you build against Foo, and Foo changes, the OBS will rebuild your package for you

If it breaks your package, the OBS will keep the last version published while you can debug the build failure
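The relationship is declared the usual RPM way; a minimal, hypothetical spec fragment is enough for OBS to wire up the rebuild (all names are illustrative):

```
# mypackage.spec (fragment, hypothetical names)
Name:           mypackage
Version:        1.0
Release:        0
# OBS sees this line and automatically rebuilds mypackage
# whenever libfoo-devel changes anywhere in the project
BuildRequires:  libfoo-devel
```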

Well, this might just replace my current Debian setup as a daily driver. I've been wanting something like this for a while but had no idea how to do it technically in a feasible way. I guess you've done it for me.

Thanks for posting this and (if you still read this) the thorough followup in this thread.

My pleasure :) thanks for reading!

I've stuck with openSuSE/SuSE since the early days, and doing Tumbleweed since about when it surfaced. There are several reasons I didn't switch over to Ubuntu (and a few reasons I wanted to, but the pros outweigh the downsides for me).

1. Active Directory: I'm not a Unix shop and everywhere I've had a need for it, AD has been the predominant identity service in the environment. Getting openSuSE setup, via yast, to work with my AD account is really simple. It can be done everywhere else, too, but being able to set it up during installation without touching a config file is nice.

2. Bleeding edge: I like to play, and Tumbleweed was always on a later kernel (and other core bits) than other distributions, giving me access to the new toys early on with the "feeling" that I'm not running a "beta" (and feelings aside, I've not run into any stability issues yet). Never having to think about doing a "distribution upgrade" (because you're basically doing it in pieces every time you update) is nice, too.

3. YaST: Much like the easy AD integration, there's a whole mess of administrative things that can be done without editing configuration files or using console tools that I'm not in nearly enough to memorize their usage. Simple things like opening a port in the firewall are as easy as they are on a consumer router and the ncurses interface is perfect over an SSH connection (I haven't used GUI YaST but it looks nice, too). People always say Ubuntu is the most "n00b" friendly Linux distribution, and it probably is when the large community behind it is factored in, but I find SuSE to be superior, personally.

On the downside, though:

There's less available in Zypper. I get jealous when every answer on StackExchange includes an apt-get, and about 15% of the time no package even exists for whatever it is I'm being told to apt-get. But it's a little more pleasant to watch on the console.

The smaller community makes StackExchange less useful, and the tools are different. Much of that is due to the bleeding-edge nature of Tumbleweed, and I've read plenty of "it's better, that old stuff will be deprecated everywhere else, too". I can't speak to the merits of any of that, but when my network connection goes south, I instinctively reach for "ifconfig" and it's not there, and all of the help out there points me at things that don't work, either[1]. And because it's a rolling distribution, these kinds of changes don't pop up when you expect them (like when you're doing a distribution upgrade...).

Tumbleweed's rolling update model has thoroughly broken on me once, but not in a "hey, it just decided not to boot this time" kind of way. I believe this happened when they switched to "Leap" and "Tumbleweed" as the only options. I thought I was up to date, but I was pointing at repos that were invalid, and had to modify my configs and download an .rpm to fix it. Other than not seeing software updates (which is pretty odd, since there's usually something every time I update), there was no indication that anything was wrong.
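For anyone hitting the same thing: the repo definitions live in /etc/zypp/repos.d/ as plain ini-style files, and a working Tumbleweed OSS entry looks something like this (URL correct as of this writing; verify against the openSUSE download page before copying):

```ini
[repo-oss]
name=openSUSE-Tumbleweed-Oss
enabled=1
autorefresh=1
baseurl=http://download.opensuse.org/tumbleweed/repo/oss/
```

`zypper lr -u` lists the configured repos with their URLs so you can spot a stale one, and `zypper ar -f <url> <alias>` adds a replacement with autorefresh enabled.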

Out of the box, sudo isn't set up (maybe it is now; I haven't done a fresh install in a long time). It's not hard to get working, though.
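If I remember right, the usual fix looks like this (a sketch; openSUSE's stock sudoers ships with "Defaults targetpw", which is why sudo asks for root's password rather than yours out of the box). Edit with `visudo` as root, never directly:

```
# Comment out the openSUSE defaults that make sudo ask for root's password:
#Defaults targetpw   # ask for the password of the target user i.e. root
#ALL   ALL=(ALL) ALL # WARNING: only use this together with 'Defaults targetpw'!

# Then grant the wheel group (or your user) normal sudo rights:
%wheel ALL=(ALL) ALL
```

After that, `usermod -aG wheel yourname` and a re-login gets you conventional Ubuntu-style sudo behaviour.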

[1] It's one of the new "ip something" commands I can never remember when I need it: http://www.linuxfoundation.org/collaborate/workgroups/networ...
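Since that link seems to be dead, here's the iproute2 cheat sheet I eventually had to write down (from memory; flags may vary slightly between versions):

```shell
# ifconfig            ->  ip addr              (show addresses on all interfaces)
# ifconfig eth0 up    ->  ip link set eth0 up  (bring an interface up)
# route -n            ->  ip route             (show the routing table)
# arp -a              ->  ip neigh             (show the ARP/neighbour table)

# Loopback is a safe interface to poke at on any Linux box:
ip addr show lo
```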

With the recent announcement of Neon by KDE, the rising popularity of Arch, and Fedora trying to get more modular (with multiple release versions), I can't help but feel the market is way oversaturated, yet everyone is bringing something to the table better than everyone else. If we had a unified distro set rather than basically four reimplementations of the same thing, we could be seeing a lot more momentum towards desktop Linux adoption.

In general, there are two users of desktop distros:

A. Professionals and grandparents, who want stable, unchanging systems with long support. Think your SUSE Leap, your RHEL, your Ubuntu LTS, your Debian Stable, and kind-of-not-really Manjaro.[1]

B. Enthusiasts, developers, power-users, and gamers(!) who want the latest and greatest software. These are the SUSE Tumbleweed, Debian Testing, Ubuntu-nonLTS users (with a ton of PPAs), Fedora Rawhide, and Arch users. They don't want unstable software, they just want to get new stuff when the makers actually release it rather than have computers frozen in time, which in practice no other OS does.

Nobody else is trying to ship an OS that intentionally holds back new upstream releases, unless you count the tiered versions of OSX / Windows / Android, except their availability is based more on your willingness to spend money or upgrade your device than on the releases simply being delayed.

[1] I think there might be an even further schism between these two: grandparents probably want Neon-style desktops just because individual distros are not honoring the release cadences of major projects like KDE or Gnome. Plasma 5.6 just came out and will not be in Ubuntu 16.04 alongside Qt 5.6, despite both fixing a tremendous number of bugs and issues with the desktop. That isn't cutting-edge features, that is bug fixes, and I don't really believe in holding back major project releases like KDE or Gnome regardless of distro. Just have a switch to pin them if it's corporate. But there is a tremendous lack of communication with upstream when major software projects like KDE cannot get bug fixes to their major user bases for years (e.g. Ubuntu LTS).

A great comparison would be the Android world. Google devices now see continuous monthly releases for their support periods, like fixed-release distros. CyanogenMod has daily releases, like a rolling distro. Any non-Google phone basically never gets updates and plays Debian Stable. But all of them, when using the Play Store, will upgrade every user-facing APK on the system whenever updates are available, because those updates are not Google's responsibility, they are the developers', and users are used to knowing it is the app maker's fault when something breaks, not the distributor's (Google Play). General adoption of the Linux desktop requires this distinction be made, where developers can both easily offer their software to Linux users and be held responsible when they break it, rather than having everything filtered through distros that will basically throw the baby out with the bathwater by freezing all software because they trust nothing.

But on topic, it seems like we now have a half dozen different solutions to the same problem. Why can't I have a distro with SUSE's infrastructure, Ubuntu's brand and userbase, Debian's community, and Arch's software availability? It is like everyone has perfected their own piece of the pie, but we never get a whole pie that solves either of the first two use cases (the server use case, I'd say, is extremely well provided for nowadays; it's the only domain Linux distros are doing a good job in, between RHEL / Stable / Ubuntu Server). It isn't even really about different distros for different desktops anymore, since pretty much every desktop is available on every distro, either officially, as in Arch / Debian, or through the software universe of Ubuntu / SUSE / Fedora. The distinguishing factors are just the holes the other distros will not fill because they are too busy fixing another problem unaddressed at home.

It does not help that the work is then duplicated across all these projects. Packaging Qt and KDE is a pain in the ass, but there are four release teams across these distros maintaining those releases independently. There is no real way to fix this, since the Debian, SUSE, Ubuntu, Fedora, and Arch devs have already dug in within their camps, but it is still sad that desktop Linux remains niche: rather than working together and combining the best of everything, everyone is working on their own thing while the whole is lackluster against the competition.

Yes, you should use weed.
