It's well known that Canonical likes to do things their way and maybe suffers from a slight case of NIH syndrome. But do people have to be so upset about it? Maybe Rogers's explanation is completely wrong and the real reason is "we thought it would be fun to write a new display server!" Does it matter? Saying "you're increasing desktop fragmentation" is silly and implies that writing Mir is somehow worse than writing no code at all.
Graphics on desktop Linux currently kind of sucks. Windows and OS X are years ahead while we are stuck on X11 with flaky, sometimes-working, compiz-quality 3D effects. I think any effort to remedy that situation should be cherished.
As a non-Linux user, it's funny to me how Linux users complain about how this will bring fragmentation to Linux, when Linux is one of the most fragmented ecosystems ever. There are tens or hundreds of different distros and at least several different package formats, and I'm probably not aware of a lot of other stuff, too.
I support Canonical in this move. They do need a very optimized display server and interface for mobile hardware, and they are also the ones who've pushed Linux the most into the mainstream, sometimes by taking steps that are very annoying to regular Linux users but at the same time very pragmatic in bringing Linux closer to non-technical current and future Linux users.
Fragmentation can take very different forms. The nastiest type is hardware fragmentation caused by incompatible drivers. The most obvious example is Android: its choice of bionic libc makes running a normal glibc Linux with Android drivers impossible (unless you attempt to translate between glibc and bionic, as libhybris does). That fragmentation creates a very strong barrier to using the hardware, and everyone would agree such rifts are not a pleasant thing. Luckily, in Mir's case Canonical seems interested in avoiding driver fragmentation with Wayland.
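To make the bionic/glibc barrier concrete, here's a minimal sketch (plain POSIX dlopen with a hypothetical driver path, not libhybris' actual API) of the naive approach that doesn't work: on a glibc system you can't simply load a bionic-linked vendor libEGL.so, which is exactly the gap libhybris papers over by re-implementing Android's linker.

```cpp
// Minimal sketch, not real libhybris code: naively trying to reuse an Android
// vendor EGL driver from a glibc userspace. The driver path is hypothetical.
#include <dlfcn.h>
#include <cstdio>

int main() {
    // A bionic-linked vendor driver, as shipped on an Android device.
    void* egl = dlopen("/system/lib/libEGL.so", RTLD_NOW | RTLD_LOCAL);
    if (!egl) {
        // On a glibc distro this is where it ends: the .so expects bionic's
        // linker, libc symbols and TLS layout, not glibc's. libhybris exists
        // to bridge exactly this gap.
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // If it did load, entry points would be resolved the usual way.
    using GetDisplayFn = void* (*)(void*);
    auto get_display = reinterpret_cast<GetDisplayFn>(dlsym(egl, "eglGetDisplay"));
    std::printf("eglGetDisplay resolved at %p\n",
                reinterpret_cast<void*>(get_display));
    dlclose(egl);
    return 0;
}
```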
I didn't understand the question. libc lies at the core of the system, and if libc is incompatible, that translates to non-reusable drivers, which translates to "Android-only" hardware. What do you call a "non-standard GNU extension"?
> Linux is one of the most fragmented ecosystems ever. There are tens or hundreds of different distros and at least several different package formats
That kind of fragmentation is hardly noticeable or meaningful to a developer, though. A completely different display server, on the other hand, will cause a world of trouble.
> I support Canonical in this move, because they do need a very optimized display server and interface for mobile hardware
I think it boils down to people being upset because Canonical, through Ubuntu, actually has the power to make Mir a real contender just like that, and it may very well leave Wayland in the dust on the strength of internal decisions at Canonical rather than by gaining acceptance from the community first.
Everything else seems like excuses to me. And while I use Ubuntu, and Mir sounds appealing, especially given my experience with other Ubuntu projects (including Unity, which I love), I don't particularly like this approach either. Though I have no reason to think I won't stick with Ubuntu.
On OS X, things generally just work, and work quickly. 3D performance is good. I've never had graphics hardware that was not supported. Granted, OS X doesn't have to deal with a lot of different hardware configurations like Linux does, but still.
On Linux, things are flaky. Sometimes it works, sometimes it doesn't. Some things work, other things do not. Alt-Tab out of full-screen games? Upgrading the NVIDIA or ATI graphics driver without dropping out of X? Not possible. 3D performance is extremely variable: depending on your driver and hardware, some 3D operations are hardware accelerated and others are not. On OS X, everything that should be hardware accelerated always is. Ditto on Windows.
The Windows Display Driver Model allows upgrading the display driver without rebooting, and without even logging back in. It runs the display driver partially in user mode so that if it crashes it can be restarted. I don't know how far Linux is from a similar feat.
I was kinda meaning user-experience stuff, rather than the quality of the backend. What do the OS X or Windows desktops do with the 3D capability, in terms of improved experience or workflow, that Linux doesn't?
I'm not really a gamer any more so I don't often run into any issues with full-screen gaming.
I'm pretty sure the video driver on Linux can be updated without a reboot, but you'd probably need to log out of X, yes.
Just read up on why Wayland was created and what problems it intends to correct. I'm not interested in enumerating X11's latency issues, compositing and font rendering limitations, etc. when someone has already done the job much better than me. You may also try using a recent Windows version for a few days and see if you can spot the differences.
Why? What if Wayland and Mir have different goals? What if the Venn-diagram overlap of those goals is less than ideal for Ubuntu and where it is going?
Look at the absolute MESS that Compiz, X and other things in Linux have become b/c of the "support all the things" attitude.
The simple FACT is that it is much easier to be exceedingly great at a small subset of things than at a large superset of things. Mir is specific and exact to what Canonical wants and needs out of Ubuntu. Wayland is a GENERAL solution that, sure, Canonical could influence, but it would ultimately need to be more than just what Canonical needs. That means, you guessed it, bugs, rough corners, things that aren't supported.
It's called focus. Canonical is focusing on delivering an EXPERIENCE, and they decided they could do it better with this approach. I agree with this b/c of 1. history, 2. experience and 3. the current state of the various projects.
And don't forget that the code is going to be open source.
And I don't understand why this is a problem in the Linux world when in the web world it is typically encouraged as "the best project will win". Do we need Derby, Backbone, Meteor, Tower, Express, Knockout, etc. etc.? What about Rails, Sinatra, Camping, Padrino? Django, Flask, Pyramid? Riak, Mongo, Couch? Why can't they all join forces and "contribute code and influence"? In the web world this is kind of encouraged and seen as good for competition. In the Linux world it is seen as "not being a good community member". It's fuckin' bullshit.
As far as it actually goes, it's only a missed opportunity for collaboration, but if Mir and Wayland end up being less compatible than Ubuntu is promising, it could also mean a lot of extra effort for others.
Specifically, if they don't end up being able to use the same desktop drivers then this schism will cause a lot of extra work for driver writers. And if applications can't simply link against the Ubuntu-provided Mir version of libwayland then that'll be a lot of extra work for application writers.
Whoa there. Yes, X sucks for a huge number of reasons, but please don't assume that all existing compositors are as buggy as compiz. Mutter and KWin work quite well.
Mutter works...now. Ubuntu used Mutter at one point and it was TERRIBLE. Performance, crashes and the lot. I recall they tried to improve it and got massive pushback from GNOME.
This led them to use Compiz, which had its own issues.
I can totally see why they would do something like Mir given the history of non-supportiveness they got from the various upstream projects in the past.
I don't have a problem with Mir, mostly just the way it was announced by dumping on Wayland. Frankly, with Canonical working closely with Nvidia on an EGL driver that Wayland could also use... I see a net win for everyone when the dust settles.
It's funny, anyone remember Beryl? Compiz-fusion? :) Fun times in OSS land.
> Weston, the reference Wayland compositor, is a test-bed. It's for the development of the Wayland protocol, not for being an actual desktop shell. We could have forked Weston and bent it to our will, but we're on a bit of an automated-testing run at the moment, and it's generally hard to retro-fit tests onto an existing codebase. Weston has some tests, but we want super-awesome-tested code.
I'd accept this as reason alone to go with Mir, assuming it's correct. My experience with window managers on Linux has been riddled with constant bugs and regressions that are the hallmark of software that isn't adequately tested.
And he's absolutely right. If code isn't designed ahead of time to be modular and decoupled (testable), you're going to have a hell of a time refactoring it later to get good test coverage.
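As a generic illustration of what "designed to be modular and decoupled" buys you (nothing to do with Weston's or Mir's actual code; the names are made up): if the policy logic only talks to an interface, a test can substitute a fake and verify behaviour without any real hardware.

```cpp
// Generic testability sketch; the types are invented for illustration.
#include <cassert>
#include <utility>
#include <vector>

// The seam: everything hardware-facing hides behind this interface.
struct Output {
    virtual ~Output() = default;
    virtual void set_mode(int width, int height) = 0;
};

// Pure policy, trivially unit-testable because it never touches hardware.
class DisplayConfig {
public:
    explicit DisplayConfig(Output& out) : out_(out) {}
    void apply_preferred(const std::vector<std::pair<int, int>>& modes) {
        if (!modes.empty())
            out_.set_mode(modes.front().first, modes.front().second);
    }
private:
    Output& out_;
};

// A fake used only by tests.
struct FakeOutput : Output {
    int width = 0, height = 0;
    void set_mode(int w, int h) override { width = w; height = h; }
};

int main() {
    FakeOutput fake;
    DisplayConfig config(fake);
    config.apply_preferred({{1920, 1080}, {1280, 720}});
    assert(fake.width == 1920 && fake.height == 1080);  // no GPU required
    return 0;
}
```

Retrofitting that kind of seam onto code that calls straight into the driver everywhere is the expensive part Rogers is alluding to.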
I'm not asserting Weston is these things. But, accepting what Christopher says here, Canonical might actually be justified in creating Mir. I'd especially be interested in hearing if someone can refute the mentioned point.
The section the GP quoted was not the entire article. Rogers went on to end the quoted paragraph with the rhetorical question "We don't want Weston, but maybe we want Wayland?", and went on to address that question in the rest of the article.
Whether or not you agree with the chain of reasoning, pretending he never wrote anything other than a single step in it, and calling that a "strawman" on the grounds that it isn't a whole chain, is a little silly.
Perhaps you meant to say "it depends on the circumstances, the goals and current state of the other projects, but in my experience refactoring is a bit easier than rewriting"?
If not, wow...blanket statements are rarely ever true.
> yes, it is much easier to start from _something_ than from nothing
I disagree. I have worked on multiple projects where the code base needed to be nuked from orbit and redone. You use the old code as a guide (which is what I'm assuming they're doing with Mir), but by the time you're done it's an entirely different codebase. It gets especially fun when you run into code that is almost certainly buggy; without tests to tell you what the old developer was looking for, you have to make an educated guess about fixing it.
This kind of rewrite is far beyond a typical project fork. I don't know enough about Wayland or Weston to competently state if that's necessary here. I can just tell you I have done something like this before.
Perhaps you're lucky enough to work with only disciplined, competent programmers. I have not had such luck in the past.
I find it interesting in the comments that most people are claiming that Canonical's time would be better spent contributing to existing projects, but isn't the nature of Open Source to be constantly forking, iterating, and contributing new code when you think the existing stuff isn't good enough?
Aren't these attitudes the basis of why many subsystems in Linux have been around for decades?
Others complain about fragmentation, which is another obtuse argument when you count the number of Linux distributions available, the number of open source packages, and the entire nature of GitHub with its "please fork me" button. Open source is by nature fragmented.
If they want to go run off and do their own thing, what's the problem? It's their time and money. If what they write is crap, no other distributions will use it. If it's awesome, it has a lot of potential, but the community should decide after it's released, not before.
> isn't the nature of Open Source to be constantly forking, iterating, and contributing new code when you think the existing stuff isn't good enough?
You can always start from scratch if existing projects don't cut it. When talking about a window manager (like Unity) or display server (like Mir), starting from scratch is, financially, a pretty big decision, so you should make it an informed one. The weird thing is that Canonical never even contacted the Wayland developers. It's weird because they should be interested in any mistakes Wayland made along the way (so as not to make them again), and because it's so easy to do in the open source community. If you find a bug in your Intel graphics driver, you just go to freenode and talk to the developer who wrote that code. The same goes for Wayland and most other projects.
If they really want to be better than Wayland, they should learn from Wayland's mistakes, not make the same mistakes again.
Forking, iterating, and contributing new code are all good things. Reinventing the wheel, coming up with a square, and then getting the majority of users (by virtue of being the default, not by being the best) is not so good. The complaint is that Canonical is doing the latter with a lot of things.
I didn't get the answer. Why couldn't they work with Wayland to build the missing pieces? Or did the Wayland developers have some principled objections? It looks like Canonical didn't even make an attempt to collaborate. That's the main point of criticism.
> the upsides of doing our own thing - we can do exactly and only what we want, we can build an easily-testable codebase, we can use our own infrastructure, we don't have an additional layer of upstream review
That's exactly the NIH problem. So in essence: no real reason, except selfish interests.
> I'm particularly excited about our engagements with NVIDIA and AMD; although it's early days, I'm hopeful we can get a solution for “but what about proprietary drivers?” not just for Mir, but for everyone.
At least this part sounds good. If drivers can be shared, hardware fragmentation will be avoided (i.e. Wayland can reuse the same drivers, especially on mobile).
Shuttleworth actually goes on to say in the comments:
"The very people protesting their super-collaborative credentials have a long history of being super-antagonistic to Ubuntu in practice."
I don't think it's even the NIH problem; it's just sheer delusion.
Something was said about it in IRC a few days ago. I am not competent enough to paraphrase the reason, so here is the quote from http://pastebin.com/KjRm3be1 (a rough sketch of the two buffer-allocation models under discussion follows the log):
00:15 <Prf_Jakob> RAOF: why do you guys want server-allocated buffers?
00:15 <RAOF> Prf_Jakob: Mostly arm damage, there.
00:15 <Prf_Jakob> RAOF: so thats whats its really about then? Closed source drivers?
00:16 <RAOF> No - arm hardware (apparently) has insane constraints that make server allocation more attractive.
00:16 <RAOF> eg - you have at most 6 buffers that are framebuffer-compatible.
00:16 <krh> we have those GPUs at intel too
00:17 <krh> weston runs on those
(Note that Daniel Stone said:
00:35 <daniels> RAOF: fwiw, i've got a wayland backend for arm hardware which does server-side allocation right now. didn't require one single change to any of the clients, or even compositors. it's all internal to the egl stack.
)
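For anyone not following the jargon in the log: the disagreement is about which side of the protocol owns buffer allocation. A rough, hypothetical sketch of the two models (neither is Mir's or Wayland's real API):

```cpp
// Hypothetical interfaces (not Mir's or Wayland's actual API) showing the two
// allocation models the IRC log is arguing about.
#include <cstdint>

// Client-allocated: the client creates its own buffer (EGL/GBM/dmabuf) and
// merely hands the compositor a handle to it. This is Wayland's usual model.
struct ClientAllocatedSurface {
    virtual ~ClientAllocatedSurface() = default;
    virtual void attach(int dmabuf_fd, uint32_t width, uint32_t height) = 0;
};

// Server-allocated: the client must ask the compositor for a buffer. The
// claimed advantage on some ARM hardware is that only the server knows which
// of the handful of "framebuffer-compatible" buffers are still free.
struct ServerAllocatedSurface {
    virtual ~ServerAllocatedSurface() = default;
    virtual int  acquire_buffer(uint32_t width, uint32_t height) = 0;  // returns a handle
    virtual void submit(int buffer_handle) = 0;
};
```

Daniel Stone's point at the end of the log is that this choice can live entirely inside the EGL stack rather than the wire protocol, which is why his ARM backend needed no client or compositor changes.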
I have a bad feeling this G+ post is about to be ripped apart as badly as the initial wiki debacle.
I wonder when people will realize, though, that as Canonical tries to justify itself on technical merits and has to backpedal every time, their real answer of "we want to control it so we can move it and change it as we want without having to wait for consensus or maintain a fork" is not a good reason, but it is a legitimate one.
Something... something... support Android graphics drivers. Even though the Wayland devs have said that this is possible in Wayland or with some modifications.
"ranging from the sensible (we want to write our own
display serve so that we can control it)"
is sensible to him.
What the hell is going on at Canonical? I can't imagine how BADLY Mir will damage desktop development in Linux if Canonical actually forces its adoption and then holds the main repository.
It would have to be forked to get it away from them. Are we even allowed to fork CCA code?
Why do I get the feeling that if any of this happens, Canonical is going to be suing Linux end users?
Why wouldn't you be able to fork Mir? The open source licenses used explicitly allow that. Disagree with Canonical on strategy if you want, but spreading FUD about lawsuits is misinformed at best, maliciously dishonest at worst.
If you fork Mir, you can't accomplish much by doing so, because the drivers target the reference implementation. That is why you don't see prolific display servers all about the way you do text editors, terminal emulators, or music players.
And as Mir changes, and as those changes prompt changes in the drivers that run Mir, any derivative project ends up having to adopt those changes to continue using newer drivers. And unless the developers of a Mir fork also plan on maintaining patches to all the GUI toolkits and compositors to support their fork, they also need to emulate Mir's behavior in relation to the rest of the upward stack.
The kernel doesn't have a stable driver ABI, so don't expect Canonical to provide stable ABIs for Mir either. And the fact that it is meant to use SurfaceFlinger drivers gives two degrees of instability to contend with, because if Google changes SurfaceFlinger at all and demands revised drivers, those changes have to make their way into Mir for it to use any revised drivers.
I'm assuming that these drivers are userspace drivers, whatever interface they have with the kernel is another problem.
The presence of SurfaceFlinger in these decisions seems to be derived from some speculation in an article, not based on what Canonical themselves have said.
When it comes to embedded graphics hardware, the industry is coalescing around Android as a kind of HAL/BSP, so other projects that want to support off-the-shelf consumer hardware either end up having to adopt Bionic or develop a solution like libhybris. This allows non-Android systems to use the same hardware-accelerated EGL drivers.
SurfaceFlinger is one option for managing hardware buffers and blitting between surfaces in the main and GPU memory.
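The layer everyone wants to share is the EGL driver interface itself. A minimal sketch of the standard EGL bring-up that any compositor performs (link with -lEGL; whether Mesa or, via libhybris, a vendor Android blob answers these calls is precisely the part worth not fragmenting):

```cpp
// Standard EGL initialization, the vendor-provided interface that Weston,
// Mir and SurfaceFlinger all sit on top of. Sketch only; no window surface
// is created here.
#include <EGL/egl.h>
#include <cstdio>

int main() {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY) return 1;

    EGLint major = 0, minor = 0;
    if (!eglInitialize(dpy, &major, &minor)) return 1;

    const EGLint config_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;
    eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs);

    const EGLint context_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, context_attribs);

    std::printf("EGL %d.%d initialized, context %p\n",
                major, minor, static_cast<void*>(ctx));
    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    return 0;
}
```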
Some core concepts of upstart and systemd are incompatible. Even if they patched upstart instead of starting from scratch it would now be a completely different system.
In his initial blog post about systemd, Lennart Poettering writes: "Well, the point of the part about Upstart above was to show that the core design of Upstart is flawed, in our opinion. Starting completely from scratch suggests itself if the existing solution appears flawed in its core. However, note that we took a lot of inspiration from Upstart's code-base otherwise."
Unlike the Mir specs, this post is really detailed. Upstart is certainly an improvement on sysvinit, but falls short compared to more modern init systems like Apple's launchd.
Mir likewise states that inherent incompatibilities are exactly why they're creating a new project. Though a big difference is that Upstart is actually in use, whereas no one's using Wayland.
To be quite honest, a lot of Poettering's rationalizations for systemd, especially when discussing other inits like launchd, seem to be pure NIH. He could do this work and organize with what's already out there, but he'd rather do his own thing. Instead of creating a system that is backward compatible, he's telling daemon writers to target a completely new specification and ignore any pretense of compatibility, like not double forking. More discouraging are the intentional incompatibilities, and the statements that they're only targeting the Linux kernel and udev and will reject patches that make it more compatible with other systems.
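For context on the "not double forking" remark: this is the classic SysV-style daemonization ritual that systemd's new-style daemon guidance says to drop entirely, in favour of running in the foreground and letting the init system do the supervision. A generic sketch of the old ritual (not taken from any real daemon):

```cpp
// Classic SysV-style double-fork daemonization, the ritual new-style
// (systemd-supervised) daemons are told to skip. Error handling omitted.
#include <cstdlib>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

static void daemonize() {
    if (fork() > 0) std::exit(0);  // parent exits; child is re-parented to init
    setsid();                      // new session, detach from the controlling tty
    if (fork() > 0) std::exit(0);  // second fork: can never reacquire a tty
    umask(0);
    if (chdir("/") != 0) std::exit(1);

    // Point stdio at /dev/null so nothing stays tied to the original terminal.
    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO) close(devnull);
}

int main() {
    daemonize();
    for (;;) pause();  // the daemon's actual work would go here
}
```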
>This comment literally makes no sense. Can you try explaining it again?
SURE! Let's give it a go!
>Canonical can't force any other project to use Mir. So, what are you saying?
Yes they can: Canonical can force adoption through their user base. Ubuntu has made itself the de facto distribution. Denying that Canonical can force adoption of a standard in Linux is almost as bad as denying that Microsoft can force adoption of a standard in general. Lookout everybody! Here comes C# and .NET! Choo choo.
>Any open source project can be forked. So, what are you saying?
Actually, no, they can't. Freely licensed open source (FOSS) projects definitely can be forked! But the CCA is not GPL compatible. Worse, it's not clear exactly what Canonical can do with the CCA.
The CCA gives Canonical the right to change the license at whim. To be honest, this is fear, uncertainty, and doubt, but no spreading necessary.
Canonical changed to a new, not well understood license, which gave them permission to change to other licenses.
F I don't trust canonical,
U I'm not certain exactly what they can do with that license
D and I doubt it's anything good.
Shoe fits.
>Canonical suing linux END users? What? What are you trying to say here?
I have a feeling that if any group of developers actively deprived Canonical of its control over any of these projects by forking them and becoming more popular, they would manipulate the terms of their license and sue the party that forked the repository.
This was speculation; I have no idea, to be honest, what they are capable of. Of course, I wouldn't be worried at ALL if they were just using the freaking GPL.
The CCA isn't a software license issued to end users, like the GPL; it's an agreement contributors sign giving Canonical rights to their code. It looks like a less-restrictive alternative to copyright assignment, which the FSF requires for contributions. What is GPL incompatible about it?
Note that it includes "a promise back to the contributor to release their contribution under the license in place when they made the contribution" (http://www.canonical.com/contributors/faq).
Sooooooo....I'm not sure if you are massively misinformed, a troll, or simply trying to spread misinformation. I'll assume you are misinformed, as that seems the most charitable.
>>Canonical can't force any other project to use Mir. So, what are you saying?
>Yes they can: Canonical can force adoption through their user base. Ubuntu has made itself the de facto distribution. Denying that Canonical can force adoption of a standard in Linux is almost as bad as denying that Microsoft can force adoption of a standard in general. Lookout everybody! Here comes C# and .NET! Choo choo.
Uh, no. Of course Canonical decides, to a large degree, what goes into Ubuntu, though the community shapes this as well. The entire repository system is set up to allow someone with interest and access to put something in and make it available. Install Erlang (sudo apt-get install erlang). It's there, but it isn't part of Ubuntu proper (as is true of literally thousands of other packages).
And, again, Canonical CANNOT force another project to use something. Every project, even the Ubuntu derivatives, is free to choose. Otherwise you'd see Mint with Unity instead of MATE and Cinnamon. And do you think Fedora is going to give a rat's ass what Canonical says? Heck, look at GNOME these days; they are doing things counter to Ubuntu out of spite.
>>Any open source project can be forked. So, what are you saying?
>Actually, no, they can't. Freely licensed open source (FOSS) projects definitely can be forked! But the CCA is not GPL compatible. Worse, it's not clear exactly what Canonical can do with the CCA.
>The CCA gives Canonical the right to change the license at whim. To be honest, this is fear, uncertainty, and doubt, but no spreading necessary.
>Canonical changed to a new, not well understood license, which gave them permission to change to other licenses.
>F I don't trust canonical,
>U I'm not certain exactly what they can do with that license
>D and I doubt it's anything good.
>Shoe fits
The person below already said this, but the CCA is not a licence; it is an agreement between contributor and entity (this entity being Canonical). ALL THE CODE IS STILL GPL (look it up)!
>>Canonical suing linux END users? What? What are you trying to say here?
>I have a feeling that if any group of developers actively deprived Canonical of its control over any of these projects by forking them and becoming more popular, they would manipulate the terms of their license and sue the party that forked the repository.
>This was speculation; I have no idea, to be honest, what they are capable of. Of course, I wouldn't be worried at ALL if they were just using the freaking GPL.
Hope that answered your questions.
Uh, they are using the freaking GPL. ALL THE CODE IS GPL (look it up)! Every project Canonical has released is either GPLv2 or v3.
So which was it, just curious: misinformed, a troll, or willfully spreading misinformation?
I do wish that Canonical would fix their SSL issue. Here is what I see when I visit from Chrome [1]. This is particularly frustrating because when I click proceed, this is the next thing I see [2]. I surely am not going to make an SSL exception in our corporate proxy just because Canonical can't fix their certificates.
What will be the effects of this fragmentation? For example, will we see Firefox for Mir, Firefox for X and Firefox for Wayland? Is there a risk that one window system will be left without applications? I am too distant from this to draw those conclusions myself.
No one can screw up Linux. It's open source and there's a huge and disparate community behind many projects spanning thousands of use cases. Even if Ubuntu somehow managed to go closed source and charge a mandatory license fee, Linux would still be Linux. Ubuntu is not the only project, by far. And you wonder why Shuttleworth is getting pissed at you guys.
I wouldn't say it was needless. As far as I'm concerned, it is the KDE and Wayland guys who have come out screaming in a very public, very vitriolic attack against Ubuntu. No matter what side of the argument you stand on, you can't deny that they have been taking a shit on the Ubuntu brand (although a lot of the worst comments have been deleted it appears). It is no wonder Mark is reacting in a caustic manner in my opinion.
> it is the KDE and Wayland guys who have come out screaming in a very public, very vitriolic attack against Ubuntu
When blatantly false things are stated about your project by an organization as big as Canonical, one really can't blame them for expressing frustration. Especially when there is FUD about Wayland everywhere you go.
I sympathize with them. I'm still seeing mistruths about Wayland all over the Internet as a result of that highly inaccurate (at least initially; it's since been edited) MirSpec document.
Well, the Linux community has been needlessly caustic in their attacks on Ubuntu for years now. I have to believe the community's resistance to any change or consensus is partly to blame for Ubuntu becoming more insular. If change needs to get done, it's not coming from the outside. All that's coming from outside is hate.
I agree. I've been noticing community people at Canonical (especially Mark) doing this kind of thing a lot. It's making it a lot harder for me to respect them or their project, even though I'd like to. One of the reasons I started using Ubuntu was its level-headedness and pragmatism. Lately I'm seeing a lot of blog posts from these guys, full of half-truths and accusations of massive FUD-spreading conspiracies, spaced between walls of vacuous vitriol. That definitely is not the sensible project I remember.
Present facts, assume the other party means well (we're all here making free software!), and remember that typing at someone on the internet sounds very different from talking with them in person. It isn't hard to do, but sometimes it's easy to forget, and then stuff like that Google+ thread happens.
Does anyone think the licensing issues may be important? The comments in the original article about the CLA that Canonical uses were interesting.
The entire exchange reads like disgruntled co-workers doing nothing but accusing and blaming each other, and while Mark's comments may be needlessly volatile, they're not misdirected.
It's a shining example of office politics spilling over to the open-source community.