
Why You Should Use Tumbleweed - rbrownsuse
http://rootco.de/2016-03-28-why-use-tumbleweed/
======
vmorgulis
"It uses openCV and a library of reference screenshots (with areas of interest
selected to allow openQA to ignore things we don’t care about) which we call
‘needles’ ..."

openQA is a fascinating project. The tests are written in Perl, and I think the
OpenCV magic is around the "check_screen()" function.

For example, a test for Firefox:

        x11_start_program("firefox https://html5test.com/index.html", 6, {valid => 1});

        # make Firefox the default browser if it asks
        if (check_screen('firefox_default_browser')) {
            assert_and_click 'firefox_default_browser_yes';
        }

[https://github.com/os-autoinst/os-autoinst-distri-opensuse/b...](https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/x11/firefox.pm)

~~~
rbrownsuse
check_screen is a non-fatal check, really useful for stuff like you cite in
that example: "tell openQA to see if something is on the screen, then react
to it"

assert_screen is its fatal cousin: "make sure the screen has this needle on
it, or die"

assert_and_click is its mouse-controlling companion: "make sure the screen
has this needle on it, then click on it"
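A tiny Python model of those three primitives may make the distinction concrete. The real implementations are Perl in os-autoinst; the `found` and `click` parameters below are invented stand-ins for the screen-matching machinery:

```python
# Hypothetical stand-ins for the three openQA primitives described above.
# The real API is Perl (os-autoinst); `found` and `click` are invented
# here so the fatal vs. non-fatal semantics can be shown in isolation.

def check_screen(needle, found):
    """Non-fatal: just report whether the needle matched."""
    return found

def assert_screen(needle, found):
    """The fatal cousin: raise ("die") unless the needle matched."""
    if not found:
        raise RuntimeError(f"needle {needle!r} not on screen")
    return True

def assert_and_click(needle, found, click):
    """Fatal match, then drive the mouse to the matched area."""
    assert_screen(needle, found)
    click(needle)

# Reacting to an optional dialog, as in the Firefox example:
clicks = []
if check_screen("firefox_default_browser", found=True):
    assert_and_click("firefox_default_browser_yes", found=True,
                     click=clicks.append)
# clicks is now ["firefox_default_browser_yes"]
```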

[https://github.com/os-autoinst/openQA/blob/master/docs/Writi...](https://github.com/os-autoinst/openQA/blob/master/docs/WritingTests.asciidoc)

The above doc covers the basics, we also have a whole bunch of tutorials on
YouTube

[https://www.youtube.com/watch?v=-fqvaSO6nKE](https://www.youtube.com/watch?v=-fqvaSO6nKE)
[https://www.youtube.com/watch?v=a8LmqhwpVvg](https://www.youtube.com/watch?v=a8LmqhwpVvg)
[https://www.youtube.com/watch?v=EM3XmaQXcLg](https://www.youtube.com/watch?v=EM3XmaQXcLg)

~~~
vmorgulis
That's great!

I found the piece of OpenCV-related code I was looking for inside "tinycv":
[https://github.com/os-autoinst/os-autoinst/blob/3c2f5edda09b...](https://github.com/os-autoinst/os-autoinst/blob/3c2f5edda09bff3246f5438d47ad84123a5cf151/ppmclibs/tinycv_impl.cc#L200)
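For a rough feel of what OpenCV-style template matching does, here's a deliberately naive pure-Python toy (the real tinycv code operates on actual screenshots through OpenCV; here images are just 2-D lists of grayscale values, and we slide the needle over the haystack looking for the smallest sum of squared differences):

```python
# Toy template matching: return (score, row, col) of the best placement
# of `needle` inside `haystack`, where score 0 means an exact match.

def match_template(haystack, needle):
    H, W = len(haystack), len(haystack[0])
    h, w = len(needle), len(needle[0])
    best = None  # (score, row, col)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(
                (haystack[r + i][c + j] - needle[i][j]) ** 2
                for i in range(h) for j in range(w)
            )
            if best is None or score < best[0]:
                best = (score, r, c)
    return best

screen = [[0, 0, 0, 0],
          [0, 9, 8, 0],
          [0, 7, 6, 0],
          [0, 0, 0, 0]]
needle = [[9, 8],
          [7, 6]]
# match_template(screen, needle) → (0, 1, 1): exact match at row 1, col 1
```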

I switched to Debian from Fedora a year ago during the "freeze" in the release
cycle. I wonder how the stability of Tumbleweed compares to a standard
(non-beta) Fedora or a Debian testing.

Is this the right repo for Tumbleweed openQA tests? [https://github.com/os-autoinst/os-autoinst-distri-opensuse](https://github.com/os-autoinst/os-autoinst-distri-opensuse)

~~~
rbrownsuse
Yes, we share one test repo amongst all the openSUSE distributions and SUSE
distributions

SUSE use openQA for testing their enterprise distributions too, and keeping
all our tests together and reused as heavily as possible really helps in all
directions (and proves the argument that maintaining openQA tests isn't hard
even when dealing with multiple codebases-under-test all moving at different
paces)

I use Tumbleweed as my daily driver on all my machines besides my server
(Leap).

In 2 years I've had no problems that I didn't cause myself (e.g. rm -rf /) and
even then snapper saved the day and let me roll back to a working snapshot
(Tumbleweed, like all openSUSE/SUSE distributions, has btrfs snapshots and
rollback by default)

So I'd say it's comparable to Fedora or Debian testing

And we'd like to see more contributors, not only adding more packages and
maintaining them, but also to openQA, so that Tumbleweed's quality doesn't
just stay at that level but gets even higher

~~~
vmorgulis
Thanks a lot. I'll try Tumbleweed.

------
wjoe
This does all sound quite compelling. Somehow openSUSE has flown under my
radar; I've been an Arch user for a few years, and a Gentoo user before that,
and I'd only ever be interested in rolling distros, but I wasn't aware that
openSUSE had Tumbleweed for this.

I did find dependency conflicts all too common on Gentoo, and keeping the
system up to date was a frequent struggle. I've not had any major issues with
Arch recently, and any that occur are usually fixed by doing a full system
upgrade, but they do occur from time to time. I'd be interested to see how
SUSE differs.

One thing I've noticed when doing a quick bit of reading on your Wiki - it
sounds like you can't update Nvidia drivers from the package manager in
Tumbleweed, and it has to be done manually. It also sounds like this process
has to be done any time you update the kernel or X. This sounds like quite a
pain - manually installing the nvidia drivers isn't exactly complex, but it's
a number of manual steps that I'd rather not have to go through outside of the
package manager every time I do a system update. Am I missing something here,
or is this the case?

It also sounds like it supports installing with sysvinit rather than systemd,
which is a rarity these days, and might appeal to some people.

~~~
rbrownsuse
nvidia drivers - we don't recommend them because we have a habit of adding
kernels quicker than NVIDIA can keep up with their proprietary drivers

If you really want to use them, I'd recommend the dkms drivers we have in OBS
[https://en.opensuse.org/SDB:NVIDIA_the_hard_way#Further_read...](https://en.opensuse.org/SDB:NVIDIA_the_hard_way#Further_readings)

And yes, I believe if you want you can install with sysvinit, though we are
primarily a systemd distribution first (with extensive sysvinit
compatibility.. people still love those runlevels)

~~~
wjoe
Ah, that's fine in theory then - I use a dkms package on Arch too since I use
a non-standard kernel with VFIO patches for PCI passthrough. Interestingly I
see people have reported getting PCI passthrough working with the standard
SUSE kernel too, so I'll have to give that a try.

As far as I can see the x11-video-nvidia package in the page you linked has
quite an old nvidia driver version for Tumbleweed -
[https://software.opensuse.org/package/x11-video-nvidia](https://software.opensuse.org/package/x11-video-nvidia)

Again I might be missing something, and in practice it might be easy to
install the latest drivers, but this is a bit off-putting. I don't know the
details of how your build system works or why keeping up with Nvidia drivers
is complicated, but this seems like quite a fundamental thing for a lot of
users. The vast majority of people who play games on Linux are going to want
recent proprietary Nvidia drivers, so having that as a footnote at the bottom
of an article that says it's not recommended/supported doesn't inspire
confidence.

Still, I'm intrigued enough to give it a try.

~~~
rbrownsuse
The search seems to be doing something weird at the moment..

There is a second "openSUSE Tumbleweed" under the "unsupported distributions"
category, and that has the more up to date nvidia packages I'd expect

If someone would like to take a look at the lovely Ruby behind
software.opensuse.org and fix that bug, I'm sure we'd love the pull requests
:) [https://github.com/openSUSE/software-o-o](https://github.com/openSUSE/software-o-o)

------
alexandrerond
The main problem these days is that people don't give a fuck about packaging
their software for the constellations of distributions that exist. OpenSUSE
topic-repositories usually offer the latest packaged version of most things
for the Stable version too (say the Games repository), where available. But in
many cases, latest versions are just not there for anyone.

We are in a model now where OS repositories have been replaced by npm, pip,
and rubygems in the best case, and by pulling from GitHub's master branch in
the worst (wink at Golang). In a sense, your system is already a rolling
distro, except plugged into a third-party-managed repository which you trust
to install packages produced by total strangers.

------
pre
That does sound pretty cool.

Who owns Suse again? Do they have a social contract? How's their democratic
decision making? Do they properly separate non-free software from their free
packages?

- Debian User.

~~~
rbrownsuse
and one question that doesn't get answered by that link - yes, we properly
separate non-free software from our free packages

~~~
pre
"Who Owns It" seems to be a company owned by a company floated on the stock
markets. But with excellent community input.

That company is apparently actually making money selling at-least-mostly-free
software, while supporting the properly-free openSuse.

Is that about right?

That's probably better than being Shuttleworth's toy.

Looks easily enough forked if needed.

I think I'd feel weird about going back to an OS supported by a for-profit
rather than a charity, but that test suite does sound top.

I may give it a go for a while at least, next time I have to reinstall.

Cheers for making me pay some attention to it ;)

~~~
rbrownsuse
I'd say that's a fair assessment :)

SUSE isn't your typical 'for-profit', and the nature of the relationship with
openSUSE reflects that

Thanks for the consideration

------
krylon
I have some relatively fond memories of SuSE, because it was my first
GNU/Linux distribution. Back then, it had the advantage of being pretty
newbie-friendly (graphical installer / system control utility) and good ISDN
support. Since ISDN apparently never became popular outside of Germany, not
many distros had good support for it, and setting it up manually could be a
real pain (at least for a newbie - a couple of years later, I set up a dial-
on-demand ISDN router using NetBSD). So SuSE was, in retrospect, a good
choice for a first distro.

Eventually, though, I got fed up with YaST, because it used to overwrite
config files and thus wipe out all manual changes one made to them. Does SuSE
still do that? I remember reading about plans to rewrite YaST from scratch, so
hopefully they found a better way of dealing with config files. From a UI
perspective, YaST was a neat tool for non-techies.

~~~
rbrownsuse
YaST was rewritten in Ruby, yes.

These days YaST no longer overwrites config files, except for a very few
corner cases where it really, really wants to be in control of specific
config files for very specific reasons. And if it notices local changes, it
warns the admin and doesn't take over unless the admin consents

So, no more unexpected config file obliteration :)

(and even if it did, YaST is integrated with openSUSE's default btrfs snapshot
tooling, so YaST takes a snapshot before and after it changes anything, so you
could always revert)
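The policy described above can be sketched in a few lines of Python. To be clear, this is a toy model, not YaST's actual code; the function and its parameters are invented purely for illustration:

```python
# Toy model of "don't silently clobber locally modified config files":
# only rewrite a file if it still matches what we last wrote, otherwise
# warn and require explicit consent.

last_written = {}  # path -> content we last wrote

def write_config(path, new_content, current, consent=False):
    """Return the content that ends up on disk."""
    modified = path in last_written and current != last_written[path]
    if modified and not consent:
        print(f"warning: {path} was changed locally, leaving it alone")
        return current
    last_written[path] = new_content
    return new_content

write_config("/etc/foo.conf", "managed=1", current="")   # first write: applied
write_config("/etc/foo.conf", "managed=2",
             current="managed=1 # tweaked by hand")      # backs off, keeps local edit
write_config("/etc/foo.conf", "managed=2",
             current="managed=1 # tweaked by hand",
             consent=True)                               # admin consented: applied
```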

~~~
krylon
Thanks for the info! I think I'll give it a try over the next extended
weekend.

------
Zikes
So it automatically ships the latest versions of e.g. screen and vim, no
manual recompiling/installing necessary?

That's always been my biggest gripe with other distros, they'd have a version
of an app or tool that was built once years ago and never touched again. I'm
_sold,_ I'll be taking Tumbleweed for a test drive tonight.

~~~
placeybordeaux
It is a rolling release; that means each time a new version of a package is
available they will try to incorporate it. Apparently the difference from
Gentoo, Arch et al. is that they have a large CI that actually inspects
screenshots to verify that the new package didn't break anything.

I run antergos (arch based) at home, having the bleeding edge is nice, but the
AUR is really what seals the deal for me. That being said I have been cut by
the bleeding edge at least twice, once so badly that it booted up without the
eth0 interface. Maybe tumbleweed wouldn't have this problem as often.

~~~
toyg
CI tests as described are very good for minor patches, but new features and
significant UI changes seem to require significant manual updates to tests
anyway.

~~~
rbrownsuse
You'd be surprised how little a problem it is in the real world

In the case of UI changes, openQA can have its 'needles' updated in its
webUI. Look at the new screenshot, compare it to the old screenshot, tell
openQA to like the new stuff, done.

In the case of functionality or workflow changes, yeah, tests need to be
adapted, but the openQA test-writing language is pretty human-friendly,
describing the steps that humans actually do, so it's not that hard to alter

And because we test at the point of submission before anything is merged to
the distribution, we generally catch the behaviour changes as part of the
package submission, then we have developers keen to get their stuff in the
distribution happy to help tune up the tests ;)

------
StillBored
Ok, I'm a long-time SUSE fan (ever since I dropped Mandrake). But mostly I
run it in desktop VMs (openSUSE) or on server-class machines (SLES). With the
late 12.x series, it was working so well as a desktop OS in the VM that I
decided to install it on a real piece of hardware and use it as a daily
driver. That was a first for me in about 10 years. And it mostly worked on the
laptop I installed it on. Wifi (including fn/enabled/disable buttons, and
transitions between networks/wired/etc), suspend/resume, 3d
acceleration/opengl, keyboard volume controls, etc all just worked. About the
only thing that required futzing was the LCD brightness keyboard bindings were
dorked (even though its a standard i3/etc machine) and the bluetooth was also
broken. Three or four hours of recompiling modules, and messing with KDE/etc
and everything on the machine worked. That made it the _ONLY_ laptop I've ever
seen that worked without compromises in linux.

Fantastic experience, but with the 13.x series it seemed to go downhill a
little. Enough that I stopped upgrading it because things break and I have to
screw with it.

I wanted to like the Leap concept, but so many things seemed broken that,
after the two days I spent trying to install it on a fairly normal X99
machine with an NVMe drive, I reverted to running it in a Windows-hosted VM.

I can't imagine what Tumbleweed is like. Getting the kernel to boot and X to
display graphics are just the first steps in my book. Plus, don't get me
started on the UI changes. I want my desktop machine, and my servers to be
_STABLE_. Screwing around with the bluetooth driver every time the kernel gets
upgraded (or something similar) isn't something I want to be dealing with when
I have actual work that needs finishing.

So, before someone tells me to try ubuntu/etc I'm going to say that I
regularly run fedora/ubuntu/etc and they all have enough hangups that I
continue to run opensuse based products at home (work has a different set of
requirements, where I've been fedora/RHEL based recently rather than
opensuse/sles).

Anyway, I really wish someone would take the Leap-type concept and stabilize
the base OS/X/KDE and maintain it for a long time period while ensuring that
newer versions of Linux applications continue to work on that platform. You
know, like Windows (before 10). That way I won't have to worry that my
1-year-old KDE version is too old to run a CD-ripping application (or
whatever).

~~~
krig
This is just anecdotal as well, but I've been running Tumbleweed for at least
a year now on my Thinkpad. It's been rock solid. I'm a Gnome user though which
I think makes a huge difference in this case, I see a lot of noise about
problems with KDE.

------
dbcurtis
Seems like a step forward in solving the "stale versus reliable" conundrum.
Over the years I've repeatedly been irked by how stale the package
repositories are in some distros. At the fresh end of the spectrum, I used
Gentoo for a while, and I do like many things about Gentoo, but I came to the
conclusion that it is essentially impossible to QA Gentoo. Plus, falling
behind in Gentoo opens you to a world of hurt trying to update.

So I'll probably try Tumbleweed. I think the acid test will be how hard it is
to update a few-months old installation. Testing each release is one thing --
testing all variations of update paths, again, leads to combinatorial
explosion. Perhaps they have an update mechanism that deals with migration
issues -- I'd like to understand how they approach that problem.

~~~
rbrownsuse
zypper takes care of most of that for us - it really is an exceptional package
manager

We have some tests in openQA that keep an eye on upgrades, as well as some
users who like to stagger their Tumbleweed updates for some weeks.

I think the record I heard was like 3 months (and our gcc5 migration happened
during those months), and then they upgraded with no problem

~~~
semi-extrinsic
3 months? That's peanuts. Having too many Arch boxes, back in '12/'13 I had
one that was 9 months behind, with the big filesystem movearound and
sysvinit->systemd change happening in that period. It took a couple of hours
of work, but the upgrade went through fine in the end.

I'd suggest you do automated testing of at least 6 month old system upgrades.
Because users are going to do it, and much complaining will ensue when it
breaks their system.

~~~
rbrownsuse
We do automated testing from our 'traditional' distributions, including
versions that existed BEFORE Tumbleweed in its current form, like 13.1, 12.3,
etc..

So in that case, we're going back years ;)

------
foobarbecue
Is this guy leaving out periods as some kind of intentional stylistic thing,
or did he just fail at proofreading? Lots of other typos too. A little ironic
when you're writing about doing things correctly and testing comprehensively!

~~~
i336_
You've unfortunately been massively downvoted for saying this, but I think
you're right. Also, in _Tested Well_ just before the screenshot I couldn't
understand the sentence "And if we don’t test it well enough we want to and
you can contribute tests as everything in [openQA is 100% open source]."

The author mentions at the bottom that it was inspired by a Reddit article, so
I think this was just thrown together in rapid-fire response style with little
proofreading.

I think it's the nice color scheme and well-picked fonts that clash - we've
all come to expect perfect typography when presented with good site design,
and it's a bit of a shock.

Definitely hope the article gets cleaned up, because the content is really
compelling.

~~~
rbrownsuse
Will do..thanks for the critique

~~~
i336_
It probably didn't come across as well as I was hoping, but I wasn't going for
a scathing-review motif. I could probably have couched my response a little
more comfortably though. Kudos for the brave face and the positive reaction :)

Also, I didn't make the connection that you were the article author (!) - your
followup on here has been admirable and impressive. Not many of the people who
submit OC (of sorts) also follow up.

And thanks for writing this, it's definitely made me very curious about trying
out OpenSuSE - or more accurately why I _would_ try it. I'm using Arch at the
moment, but I might be exploring in future :)

The AUR is pretty amazing, though... I don't use it that much, but it's nice
that I've been able to install the stuff I have wanted with a single command.
I get the impression there isn't a comparable alternative to this for SuSE...?

~~~
rbrownsuse
There is the graphical option, using
[http://software.opensuse.org](http://software.opensuse.org) and the 1-click
install functionality there to add packages direct from OBS

On the command line, there is a plugin for osc (the OBS command-line tool):
[https://build.opensuse.org/package/show/openSUSE:Tools/osc-p...](https://build.opensuse.org/package/show/openSUSE:Tools/osc-plugin-install)

This enables 'osc install $foo' which will search the build service for
package $foo and install it, which I believe to be the closest approximation
of what you expect from AUR

~~~
i336_
I noticed the 1-click install system, that's kinda cute :P (I have a vaguely
similar experience when I go, er, Slackware package fishing.)

It seems to me that OpenSuSE (and SuSE itself) follows a philosophy of using
centralized build management and verification, with a policy that supports
minimal (if any) local from-source recompilation. Basically the exact opposite
approach to Gentoo, the only distribution where gcc is more important than
eth0 :P

This centralized model is actually exactly what I've been pining for for a
very long time: an approach that a) verifies that XYZ works right in a
central location, then distributes that known-working configuration, and b)
creates an environment where clients are built solely from such known-working
configuration objects, and are thus relatively easily reproducible at scale.

I'm obviously testing OpenSuSE sometime in the short to medium term :D here's
hoping it works out well for me in practice!

I'd heard of the OBS, but I didn't know the (Open)SuSE ecosystem was wrapped
around it quite the way I now see it is.

The one question I do have now is, where do you think OpenSuSE sits in
relation to functional (ie static/absolute) package management? The
distributed-known-working-blob approach sounds like it would fit in incredibly
well with a model like what the Nix package manager uses.

Of course the current OpenSuSE ecosystem doesn't use this approach so adding
it tomorrow would provide nothing unless everyone shifted mindsets, which
would naturally not happen anytime soon. I'm just curious about what would
happen if the two ideas were combined, since they don't seem to be
particularly mutually exclusive or conflicting, and the result sounds like it
might be potentially shiny and interesting (and possibly very powerful). And
non-relative configuration sounds like the future (to me at least).

------
cm3
Things that prevented me from using OpenSuSE:

1. no single-letter user name allowed in the installer

2. YaST rewriting config files

3. I couldn't get the partitioning tool to do what I wanted, which is a plain
simple /boot plus encrypted swap and root.

~~~
rbrownsuse
1. kinda true, though the current version of the installer lets you skip the
username creation so you can do whatever you want afterwards :)

2. formerly true - YaST lives happily with config files these days. Where
possible it co-exists; only a few specific YaST modules need that absolute
control, and they only rewrite files after warning you well in advance. i.e.
Not true any more

3. I think we fixed that.. it's a radio button now.. Encrypted LVM-based
proposal [https://lizards.opensuse.org/2016/03/15/highlights-of-develo...](https://lizards.opensuse.org/2016/03/15/highlights-of-development-sprint-16/)

~~~
code_research
where can I find exact documentation about which config files are still
manipulated / changed by YaST and which are not?

~~~
rbrownsuse
the only one I can think of is the apache one. If you use the YaST apache
configuration module it will warn you before taking over and changing any
local changes. It actually does its best to do a merge of local changes and
its own, but it's the one case I know of where that merge can be destructive.

~~~
code_research
Thank you very much for your attention - however I asked where I can find
exact documentation about these things?

"Mr Brown from Suse wrote in a HN thread..." will not be enough as a reliable
source of information. Yes, I could just read the source, however I am
expecting such an extraordinary important thing that will change my config
files to be documented. I need precise information here.

Also, it would be important to know how this conflicts with configuration
management systems like Chef, Ansible, Puppet etc. - or is YaST able to
manage multiple workstations itself, so it is a replacement for these tools?

Thanks!

~~~
rbrownsuse
There is pretty extensive documentation on
[http://yast.github.io/documentation.html](http://yast.github.io/documentation.html)

It doesn't conflict with other configuration management systems. I've used
openSUSE extensively with puppet and saltstack.

Many of SUSE's products use other configuration management systems as part of
their toolchain, and SUSE are shipping SUSE Manager 3.0 with SaltStack, so
their customers are expected to be able to use YaST alongside such a system

~~~
code_research
Multiple 404 on that page:

[http://www.rubydoc.info/github/yast/yast-yast2/](http://www.rubydoc.info/github/yast/yast-yast2/)

[http://yast-core.readthedocs.org/en/latest/](http://yast-core.readthedocs.org/en/latest/)

I am not trying to make you angry, but I think I've put my finger right on a
sore spot.

There is no clear documentation about what yast does or what it will not do -
basically a blackbox. Maybe it will change apache config. Maybe not. Who
knows.

~~~
rbrownsuse
:) fair point..I'll let the YaST team know (or you could if you happen to be
on freenode, they live in the #yast channel)

Users know - YaST doesn't change stuff without telling you, that was the point
I was trying to make earlier. Users of YaST no longer have to worry about it
silently taking over config files, it either co-exists or doesn't do anything
without telling the user with great big pop-up boxes first

------
mehrzad
OpenQA seems really brilliant, but I guess someone could eventually just port
it to Arch, Gentoo, Debian, or Fedora Rawhide? I hope they do.

~~~
rbrownsuse
Sure they can (and Fedora are already using and contributing to openQA)

But you really need to be using OBS too in order to produce those builds in
the fast, easily contributed way that we then (ab)use with Tumbleweed

Of course, other projects could use OBS also, either hosting their own or
even using our servers.. after all, it builds Arch, Debian, Ubuntu, Fedora
and other packages

But that's what being open is all about right? Doing cool things because you
need them and then benefiting even more when everyone else finally catches up
and starts working with you on it :)

------
waynecochran
So for every library Foo, it tracks all of the packages that depend on Foo?
What if I add my own package -- do I have to tell Foo about mine as well? What
if Foo's API changed or breaks my package?

~~~
rbrownsuse
Yes, the open build service takes care of that for you

You don't need to tell Foo about your package, the build service will detect
those relationships

If you build against Foo, and Foo changes, the OBS will rebuild your package
for you

If it breaks your package, the OBS will keep the last version published while
you can debug the build failure
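That rebuild-and-publish behaviour can be sketched as a toy model (this is not real OBS code; the package names and the `build` callback are invented for illustration):

```python
# Toy model: when a package changes, everything built against it is
# rebuilt, and a failed rebuild leaves the last good version published.

deps = {"bar": ["foo"], "baz": ["foo"]}  # package -> build dependencies
published = {"foo": "1.0", "bar": "1.0", "baz": "1.0"}

def on_change(pkg, version, build):
    """Publish pkg's new version and rebuild everything depending on it."""
    published[pkg] = version
    for dependent, wants in deps.items():
        if pkg in wants:
            if build(dependent, version):
                published[dependent] = version  # fresh build published
            # on failure: keep the previously published version

# Suppose bar rebuilds fine against foo 2.0 but baz's build breaks:
on_change("foo", "2.0", build=lambda p, v: p != "baz")
# published is now {"foo": "2.0", "bar": "2.0", "baz": "1.0"}
```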

------
breakingcups
Well, this might just replace my current Debian setup as a daily driver. I've
been wanting something like this for a while but had no idea how to do it
technically in a feasible way. I guess you've done it for me.

Thanks for posting this and (if you still read this) the thorough followup in
this thread.

~~~
rbrownsuse
My pleasure :) thanks for reading!

------
mdip
I've stuck with openSuSE/SuSE since the early days, and doing Tumbleweed since
about when it surfaced. There are several reasons I didn't switch over to
Ubuntu (and a few reasons I wanted to, but the pros outweigh the downsides for
me).

1. Active Directory: I'm not a Unix shop and everywhere I've had a need for
it, AD has been the predominant identity service in the environment. Getting
openSuSE setup, via yast, to work with my AD account is _really simple_. It
can be done everywhere else, too, but being able to set it up during
installation without touching a config file is nice.

2. Bleeding edge: I like to play, and Tumbleweed was always on a later kernel
(and other core bits) than other distributions, giving me access to the new
toys early on with the "feeling" that I'm not running "beta" (and feelings
aside, I've not run into anything on the stability front, yet). Never having
to think about doing a "distribution upgrade" (because you're basically doing
it in pieces every time you update) is nice, too.

3. YaST: Much like the easy AD integration, there's a whole mess of
administrative things that can be done without editing configuration files or
using console tools that I'm not in nearly enough to memorize their usage.
Simple things like opening a port in the firewall are as easy as they are on a
consumer router and the ncurses interface is perfect over an SSH connection (I
haven't used GUI YaST but it looks nice, too). People always say Ubuntu is the
most "n00b" friendly Linux distribution, and it probably is when the large
community behind it is factored in, but I find SuSE to be superior,
personally.

On the downside, though:

There's less available in zypper. I get jealous when every answer on
StackExchange includes an apt-get, and about 15% of the time no package even
exists for whatever it is that I'm being told to apt-get. But it's a little
more pleasant to watch on the console.

The smaller community makes StackExchange less useful and they use different
tools. Much of it is due to the bleeding edge nature of Tumbleweed and I've
read plenty of "it's better, that old stuff will be deprecated everywhere
else, too" and I can't speak to the merits of any of that, but when my network
connection goes south, I instinctively reach for "ifconfig" and it's not
there. And all of the help out there points me at things that don't work,
either[1]. And because it's a rolling distribution, these kinds of changes
don't pop up when you expect them (like when you're doing a distribution
upgrade...)

Tumbleweed's rolling update model has thoroughly broken on me, once, but not
in a "hey, it just decided not to boot this time" kind of way. I believe this
happened when they switched to "Leap" and "Tumbleweed" as the only options. I
thought I was up-to-date, but I was pointing to repos that were invalid and
had to modify my configs and download an .rpm to fix it. Other than not seeing
software updates (which is pretty odd since there's usually something _every
time I update_ ), there was no indication that anything was wrong.

Out of the box, sudo isn't setup (maybe it is, now, I haven't done a fresh
install in a long time). It's not hard to get working, though.

[1] It's one of the new "ip _something_ " commands I can never remember when I
need it:
[http://www.linuxfoundation.org/collaborate/workgroups/networ...](http://www.linuxfoundation.org/collaborate/workgroups/networking/net-tools)

------
zanny
With the recent announcement of Neon by KDE, the rising popularity of Arch,
and Fedora trying to get more modular (with multiple release versions), I
can't help but feel the market is way oversaturated, yet everyone is bringing
something to the table better than everyone else. If we had a unified distro
set rather than basically four reimplementations of the same thing, we could
be seeing a lot more momentum towards desktop Linux's adoption.

In general, there are two users of desktop distros:

A. Professionals and grandparents, who want stable, unchanging systems with
long support. Think your SUSE Leap, your RHEL, your Ubuntu LTS, your Debian
Stable, and kind-of-not-really Manjaro.[1]

B. Enthusiasts, developers, power-users, and gamers(!) who want the latest and
greatest software. These are the SUSE Tumbleweed, Debian Testing, Ubuntu-
nonLTS users (with a ton of PPAs), Fedora Rawhide, and Arch users. They don't
want unstable software, they just want to get new stuff when the makers
actually release it rather than have computers frozen in time - which in
practice, no other OS actually does.

Nobody else is trying to ship an OS that intentionally holds back upstream
released new releases, unless you consider the tiered versions of OSX /
Windows / Android like that, except their availability is more based on your
willingness to spend money / upgrade your device to get them than them just
being delayed.

[1] I think there might be a schism between these two even further -
grandparents probably want Neon style desktops just because individual distros
are not honoring the release cadences of major projects like KDE or Gnome.
Plasma 5.6 just came out and will not be in Ubuntu 16.04 alongside Qt 5.6,
despite both fixing a tremendous number of bugs and issues with the desktop.
That isn't cutting-edge features, that is bug fixes, and I don't really
believe in holding back major project releases like KDE or Gnome regardless
of distro. Just have a switch to pin it if it's corporate. But there is a
tremendous lack of communication with upstream when major software projects
like KDE cannot bugfix their major userbases at all for years (i.e., Ubuntu
LTS).

A great comparison would be to the Android world. Google devices see
continuous releases for their support periods monthly now, like fixed release
distros. Cyanogenmod has daily releases like a rolling distro. Any non-Google
phone basically never gets updates and plays Debian Stable. But _all_ of them,
when using the Play Store, will upgrade every user-facing APK on the system
whenever updates are available, because those updates are not Google's
responsibility, they are the developers', and users are used to knowing it is
the app maker's fault when something breaks, not the distributor's (Google
Play).
General adoption of the Linux desktop requires this distinction be made, where
developers can both easily offer their software to Linux users and be held
responsible when they break it, rather than having everything filtered through
distros that will basically throw the baby out with the bathwater by freezing
all software because they trust nothing.

But on topic, it seems like we now have a half dozen different solutions to
the same problem. Why can't I have a distro with SUSE's infrastructure,
Ubuntu's brand and userbase, Debian's community, and Arch's software
availability? It is like everyone has perfected their own piece of the pie,
but we never get a whole pie for any of these two or three use cases (the
server use case, I'd say, is the one domain Linux distros serve _extremely_
well nowadays, between RHEL / Debian Stable / Ubuntu Server). It isn't even
really about different distros for different
desktops anymore, since pretty much every desktop is available on every distro
- either officially like in Arch / Debian, or through the software universe of
Ubuntu / SUSE / Fedora. The distinguishing factors are just the holes the
other distros will not fill because they are too busy fixing another problem
unaddressed at home.

It does not help that the work is then duplicated across all these projects.
Packaging Qt and KDE are a pain in the ass, but there are four release teams
between all these distros maintaining those releases independently. There is
no actual way to fix this - the Debian, SUSE, Ubuntu, Fedora, and Arch devs
have already dug in within their camps - but it is still sad that desktop
Linux remains niche because rather than working together and combining the
best of everything we have everyone is working on their own thing while the
whole is lackluster against the competition.

------
random55643
Yes, you should use weed.

