What is the point of Mir existing? I get that Mir added support for Android GPU drivers, but hasn't Wayland since added that in response? Are there any other advantages besides satiating their "not invented here" problem?
Control over a critical part of the stack. Canonical can bend Mir whichever way they want without waiting for the upstream to implement something or accept patches or wait for releases that contain features Unity doesn't need.
They implement only what they want, how they want (full-blown automated testing) and get to dictate the direction of the display server which is critical for Ubuntu across devices.
Upstreams don't always share the vision and often do things that don't align well with the distro. Remember the Nautilus changes a couple of cycles back. I'm not saying Gnome devs did anything wrong with Nautilus, but pointing out how it was a big pain for Ubuntu developers/users.
I believe you have to control all the critical parts of your product and not live at the mercy of upstreams.
Canonical is not here to donate resources to help make your Linux ecosystem better. That is only a side effect of it being a member of that ecosystem. Canonical's primary goal is to create an awesome product that can disrupt multiple markets, so I would say it makes perfect sense for Mir to exist.
The free drivers like Nouveau are way behind what they should be. The reason is that there is no commercial interest for companies to create good GPU drivers on desktop linux.
Nvidia is not contributing to Wayland, and so on.
In Android, there is.
So the situation is this: iOS already has fantastic GPU support, and so do Windows 8 and Android. Desktop Linux is behind the curve.
You can't build many things on the desktop because you can't assume GPU acceleration.
So Mark wants to change that with what already exists.
Android devices already ship with drivers capable of real-time zoom effects and every other capability of the device, as measured by OpenGL extension support.
They are not that well documented, but they work OK, because if they didn't, the manufacturer couldn't sell the device.
Mir inherits Android's graphics system on purpose.
With Wayland you need open drivers like Nouveau, which are very slow because power management is incomplete, or the developers aren't confident enough to enable it (it could toast thousands of cards):
http://nouveau.freedesktop.org/wiki/FeatureMatrix/
The life of those developers is hard: all day fixing bugs, and new bugs appear as if it's never going to end (you don't see the light at the end of the tunnel).
I don't get it, why does Wayland require Nouveau? The closed-source NVidia drivers are feature-compatible with their closed-source Windows drivers, so why would Wayland (or MIR, for that matter) be limited by them? What more do Wayland or MIR need besides shaders and OpenCL?
Or is all of this just because the Wayland and MIR developers only want to use open-source drivers?
@c0un7d0wn below (somehow I can't reply directly?)
I didn't know that, makes sense. That said, Mesa appears to provide EGL on top of GLX, which probably isn't as efficient as a driver that directly provides an EGL interface, but likely 'good enough' for basic desktop rendering. So I still don't really understand why I would need Nouveau to use Wayland or Mir.
For me, using Nouveau would be a definite showstopper for whatever new display server becomes the default in Linux distros. Not to talk down the efforts of the Nouveau developers, but I only have bad experiences with the driver, ranging from crashes and screen corruption to unbootable installations, terrible performance, and the loss of indispensable features such as proper multi-monitor support and hardware video decoding. One of the first steps in installing a new Linux system on Nvidia hardware is usually to remove, purge and blacklist Nouveau, and install the closed driver, because that just works (tm). I'd even use software rendering over llvmpipe before I'd consider Nouveau.
If the future of the Linux desktop on Nvidia hardware requires a reverse-engineered GPU driver that lacks all the features provided by the closed-source driver, we can postpone the 'year of the Linux desktop' for at least another decade.
Weston requires KMS, Wayland does not.
This is why Wayland isn't being actively pushed on non-Linux systems, since the reference compositor won't work with UMS drivers.
Mir heavily emulates Android's SurfaceFlinger design, and Canonical claims it'll be a better fit for Ubuntu for smartphones. Highly debatable whether that'll actually hold true.
In any case, there's no fundamental design flaw in Wayland; this just seems like a common case of NIH.
> window management is tightly coupled to display management.
Yes, it is, both in general and in Wayland.
Specifically, what would once have been called just a window manager now needs to be a compositing manager, managing window contents and compositing them onto the screen. That compositing manager also needs to handle input, since input depends on where, how, and if windows are rendered. And once you're handling all of that, there's nothing useful left that corresponds to an X server other than graphics memory buffer management, which is the kernel's job these days.
Wayland handles software modularity by providing a library for use by prospective compositing managers. Moving the compositing manager into a separate process from the thing actually doing the rendering does not add a useful form of modularity.
Citation? X11 window managers are decoupled from the rest of X and the OS by well-specified protocols.
> That compositing manager also needs to handle input
Oh yes, I forgot how Wayland couples the WM to the OS as well.
> Wayland handles software modularity by providing a library for use by prospective compositing managers. Moving the compositing manager into a separate process from the thing actually doing the rendering does not add a useful form of modularity.
1) Can you point me to an API specification for Wayland compositor libraries? If not, it's not decoupled.
2) Can one switch window managers without restarting the compositor (and thus losing window state) with Wayland? If not, it's missing a "useful form of modularity" provided by X11.
The focus within Mir is Unity, not anything else; Wayland wants to work with every desktop environment. Further, Mir's developers have not given a stability guarantee; Wayland's have.
Software modularity is overrated, but in this case I think you got it wrong.
It made sense to me. You praised Mir's modularity. bkor doesn't find Mir to be modular, but doesn't reckon modularity to be as important as many people think.
What is the point of Chrome existing? I get that Chrome added a JIT javascript compiler, but hasn't Firefox since added that in response? Are there any other advantages besides satiating their "not invented here" problem?
But chromium and chrome do not lead to duplication of efforts:
Chrome is basically Chromium + add-ons, and Chrome devs contribute to Chromium; they supplement each other.
Chromium/Chrome are based on Webkit (forked into Blink since Chrome 27), which in turn is based on KHTML/KJS. Chrome was built in large part by former Mozilla engineers, and Mozilla Firefox itself was built based on the browser engine open-sourced by Netscape as its swan-song gift to the open web.
> I get that Chrome added a JIT javascript compiler, but hasn't Firefox since added that in response?
In addition to what @ergo14 said, Firefox's JS engine, SpiderMonkey, gained a JIT compiler (TraceMonkey) around the time Chrome was released [1]. With that said, I'm sure there's cross-pollination both ways between IonMonkey (SpiderMonkey's current JIT) and Chrome's V8.
Chrome performs a lot better, even now, which was its stated goal and what it was advertised as being about. The Ubuntu team has consistently failed to articulate what Mir is supposed to give us over Wayland.
Every FOSS project has the possibility of creating fragmentation, but it's not necessarily a bad thing.
Greater software selection allows more choice and offers a better environment for competition.
Some will win over others, some will find a niche market to cater to, and others will be in continual competition with each other with an almost evenly split marketshare.
And while you may feel as though Mir's only aim is to fragment the Linux community, keep in mind that Ubuntu will be using QML extensively and that will be compatible with Wayland, and through the use of libhybris both Mir and Wayland can target the same display driver.
Thank you. I am tired of people telling developers to stop making things, especially other developers. This is how evolution works. You have to try multiple approaches and see what works. If you have the money and developers to scratch an itch the way you want to, do it. If it works out, everyone benefits. If it doesn't work, everyone benefits by learning what went wrong. Those advocating that everyone pick one standard and stick to it are missing the point of distributed development.
It ended up with fragmentation everywhere, code full of #ifdef spaghetti to cater for all UNIX variants that one needs to support and increased development costs.
The difference between the competition we have now and the 'UNIX wars' is all a matter of licensing.
With the latter, proprietary licensing caused a large part of this fragmentation as there was no easy way of using code from one project in another to preserve compatibility.
Today, most of these projects are FOSS, so the risk of another '*nix war' has been negated to the point of being a non-issue.
Nothing is stopping the users of those Linux distributions from grabbing the source code and dependencies and compiling any program not in their package repository, or from grabbing a different package manager like pkgsrc, and many proprietary Linux programs that are tied to a particular distro e.g. Steam have been shown to work on other distros with minimal effort.
And in regards to the BSDs, what exactly are you referring to?
If it's Wayland, that's being ported. If it's KMS drivers, Intel has been ported and radeon is in the process. Nouveau probably won't be ported due to the fact that they already have a pretty stable binary Nvidia driver.
The only two things I can think of that aren't compatible with BSD have both been designed exclusively for Linux by the developers heading those projects, systemd and ALSA, and even then the BSDs already have suitable substitutes for those: init and OSS.
So, since there's greater cross-compatibility than ever before, how would you suppose that another nix war would erupt?
This Linux community is a fragment of the Unix community.
That Linux community is further fragmented within itself.
The BSD community is decidedly less fragmented than the Linux community.
At this point, who cares what Ubuntu does?
If they come up with some better ideas, then those ideas may gain traction within the greater Unix community, and if not, I don't care.
So how would the Gtk+ developers be able to ensure that it works under Mir? Mir is distribution specific. Same for an application running under Mir.
It creates lots of extra work for people who do not run Ubuntu. It will result in extra bugs as things cannot be tested. "choice", short-term benefit of trying to be first. Long term it seems really bad.
Gtk works under X, Windows and OS X. It already has a system in place that can support multiple backends and I am sure Canonical will be more than happy to write the Mir backend for Gtk. All Gtk devs have to do is to accept the patch.
So don't take into consideration DragonflyBSD, OpenBSD, or the biggest elephant in the BSD-fork room, OS X.
FOSS will fragment because people have different opinions and are passionate about the choices they make in the software to create and use. There is no alternative, because in free software the developer has last say. There is no school of "the one way" of how to do anything in foss, the only possible proxy being the popularity of the software created.
I like that old adage. I actually had never heard it.
I think BSD's success with i386 has brought the stark pragmatism of the "server" to the PC. The dream of BSDI. The reliability, robustness and the uniformity. Maybe BSD UNIX is not as "fun" as MS and Apple, but it _works_ the best. Relatively minimal bloat. And things can be removed relatively easily. Getting small is not frowned upon or ignored as a worthy objective. "The hero is the negative coder." (credit: Doug McIlroy, one of the original UNIX developers)
Linux OTOH (a horde of wannabe kernel hackers and a gazillion idiosyncratic GNU programs) has brought the chaos of the PC[1] to the idea of "UNIX". There is bloat and endless tweaking around every corner. The vigilance and skill it takes to make your Linux small and simple and actually keep it that way is, IME, quite the burden. The respect for the user also differs. Man pages? I guess the assumption is we'll just read the code when we have a question. But then there's also an expectation we'll use binary packages. Linux just makes my head spin. I'm just not smart enough to use it. I wish I was... because they have better hardware support.
Based on what I've seen, I doubt many a Linux developer would think "The hero is the negative coder." Linux just keeps growing and becoming more complex and unwieldy. And so by comparison, IMO, Linux less embodies the idea of UNIX than BSD.
1. The Windows experience of constantly searching for some non-MS program to do some task that base Windows itself can't even begin to handle... and you end up with an ad hoc mess of programs that barely do what you need, and most of them poorly written by who knows who.
I had an appreciation for FreeBSD. They were smart about some things. Especially circa 1996 era. It was an easy install. Pop in a 3.5" disk and do a network install. It had drivers and automagic support for obscure ethernet cards which Linux did not support at that time. It even had an easy modem install. When I needed a small server quick I'd buy a PC from a store, pop a disk in it and network install FreeBSD. Get up a DNS, mail or web server in no time. Making the install process easy can help an OS get out there a lot. The servers were sturdy and good too. A nice ports system for applications - better than dpkg or rpm at the time. It just worked. The install itself was a thing of beauty - no hassles. FreeBSD 1996 beats Gentoo 2013 in that department.
As far as bloat, some BSDs are more elegant, but I do not think Linux is that bloated. It is small enough to succeed on embedded systems and Androids. The device driver section is bloated, but the core system is not as bad.
Yes, Ubuntu is not small and simple, but Debian is if you want it to be.
"It is small enough to succeed on embedded systems and Androids."
Right. (In fact, it can succeed anywhere. And it does.) But my point is that it's not _presented to the user_ in a form that makes reducing it to embeddable size or porting it to ARM easy. I compile my BSD systems from source. And small modifications are (compared to Linux) easy.
Linux From Scratch (LFS) and Busybox were the closest I saw to what I was looking for. But when you look at the Linux ecosystem, LFS seems so obscure. And how many people create their own customized Busybox? To pursue these objectives with Linux feels like moving against the grain. No one aims to assist you by making the system easy to comprehend and modify.
The people I see modifying Linux at the level I would want to modify it are just too far beyond my comfort level with C and assembly. I can't do what they do and believe my system will still be reliable. Not only that, it would take forever to do it. I'm just not smart enough to use Linux. As I said above, it's just too much work.
It was an easy install as long as you didn't mind wiping your whole hard disk. Many people tried Linux first because they could do that without having to blow away Windows, and then stayed there.
I've dropped Fedora and switched to Debian stable in anticipation of avoiding Wayland/Mir for as many years as possible. I'm not worried about fragmentation, I'm just tired of being an early adopter.
I must be missing something. My understanding is that they implement the X protocol via a wrapper layer. Aside from a display manager/compositor, what is going to fragment?
Well now there is Mir, Wayland, and X, instead of just X. Just right there you now have three codebases instead of one, so that is fragmentation in itself. Is it harmful fragmentation? I think that remains to be seen.
I'm not worried about fragmentation though.
My position is that currently X "Just Works" for me and I am not willing to sacrifice this stability so that I can test out the pet projects of a bunch of Red Hat and Canonical employees. I figure I have at least 3 years, hopefully 5, before I will be forced to switch. (3 years left on wheezy, and hopefully 2 more years with jessie assuming X is still the default and viable). 3-5 years to let them work out all of the kinks? Yes please, I'll take that.
A lot of the Wayland developers are X developers. It basically is X12, just that calling it X results in a lot of unneeded steps (loads of people will have a say into the spec, etc). Daniel Stone explained this in a video.
I doubt he is; it has been hard to get FreeBSD working on any laptop that isn't specifically supported by someone.
You can see many non-working things here: http://laptop.bsdgroup.de/freebsd/ from no sound to can't boot. Power management is also notoriously bad so you need your laptop to be plugged in.
It is reminiscent of Linux a few years ago when everyone with brand new laptops tried to install it.
FreeBSD is still fine on desktop systems and even works quite well for a desktop in some cases, but Linux is just so big at this point that it supports more things, while FreeBSD has always seemed more server oriented.
Apparently you have not tried FreeBSD on an Acer Aspire One netbook, because if you did you would realize that your last paragraph is outdated by at least 3 minor version iterations.
I first tried PC-BSD 8.x in 2009, and with KDE as the default DE it was unusably slow, but when I tried FreeBSD 8.x last year I was pleasantly surprised with how well the system ran on my little netbook.
Then, when I upgraded to 9.0 I realized that I had finally found what I had been searching for all these years. Now, with 9.1 the system is even better.
I used to run a PC-BSD system and it pretty much worked out of the box, that is until I got an AMD 6* series video card but I hear the radeon driver is being developed and very close to being integrated into FreeBSD 10 and backported to 9.
Yes, FreeBSD lacks some features that Linux has, but it's still being developed and it has its uses outside of the server market too. For example, Sony is reportedly using it as the core OS for the new PS4 and there's a project that is aiming to create a HUD device which is also based on FreeBSD, http://hmdviking.blogspot.jp/
I've been running FreeBSD on my netbook since last autumn and it's more responsive, and noticeably faster than #!, which I tried out last month. (I just reinstalled FBSD last night)
Crunchbang just doesn't compare as far as speed goes.
I tested both OSes with Xfce 4.1, as well as with no DE using dwm & dmenu. In each case FreeBSD blows Crunchbang away, and this is on an Intel Atom N270 with 1 GB RAM.
The only thing #! has going for it is ease of setup, but FreeBSD isn't that hard to setup and use, and all questions I've had are easily answered by reading the FreeBSD Handbook.
Some will decry FBSD's support for suspend/hibernate, but since I never use those features it's a non issue for me.
That said, I do believe that suspend, etc, is working for some laptops, although YMMV.
How do you pack up your laptop and bring it somewhere else without it melting down in the bag? Just make sure nothing is burning CPU, pack it up, and hope for the best?
Depends what you meant by 'save your sessions.' If you have an entire environment set up with multiple applications open in various states, then no, you can't 'save state' in any way that I know of (other than suspend-to-disk). It can also be a pain in the ass to set up said environment again.
On the flip side:
- Using suspend-to-RAM is less secure.
- Enabling secure swap (at least on Linux in a password-less setup[1]) doesn't work with suspend-to-disk.
[1] In a password-less setup, there is no way to recover the swap after a fresh boot since the key is regenerated on every boot (at least it was the case the last time that I manually setup encrypted swap).
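For reference, a random-key encrypted swap setup of the kind described above typically looks like the following on Linux with dm-crypt. This is a hedged sketch: the device name is a placeholder, and exact option names vary between distros and crypttab implementations.

```
# /etc/crypttab -- swap is re-keyed with a fresh random key on every boot,
# which is exactly why suspend-to-disk cannot be resumed afterwards.
cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

# /etc/fstab -- use the mapped device as swap
/dev/mapper/cryptswap  none  swap  sw  0  0
```

Because the key lives only in kernel memory and `/dev/urandom` produces a new one each boot, the swap contents (including a hibernation image) are unrecoverable after a restart.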
I have been using my notebook lately while on the move as an external battery for the tablet. Just plug in the USB cable, put the laptop in suspend, and you have 12 hours of decent performance ahead of you, so suspend is somewhat important for me right now. (8 hours on a bus is a testing experience.)
One of my favorite features on my current laptop is the USB port that is connected directly to the battery, so it charges even when the laptop is not turned on.
What laptop is this? Is this an advertised feature or something that you only got information on in some obscure way (newsgroups, forums, etc vs. the official manual)?
Alienware M11X R2. It's not really advertised, I just noticed the option in the bios one day - http://www.techmonsters.com/DellTraining/bin/Foundation2010/... . It's ridiculously useful when travelling - if I don't use my laptop I get an extra 2/3 days worth of battery for my phone.
Fragmentation has a cost, because higher levels (Qt,Gtk,etc in this case) must use portability layers. However, staying in one project also has costs: much more discussion and politics.
Both Qt and Gtk have long had explicit support for different backends. Whether the user uses an X backend, a Wayland backend or a Mir backend should have minimal impact.
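As a concrete illustration of that backend support: both toolkits let you pick the backend at launch time through environment variables. `GDK_BACKEND` is GTK 3's backend selector and `QT_QPA_PLATFORM` is Qt 5's platform-plugin selector; which values actually work depends on which backends the toolkit was compiled with, so treat this as a sketch rather than a guarantee.

```shell
# Ask GTK 3 for its Wayland backend and Qt 5 for its Wayland platform plugin.
# Accepted values depend on compiled-in backends (e.g. x11/wayland for GTK,
# xcb/wayland for Qt). A Mir backend would slot in the same way.
export GDK_BACKEND=wayland
export QT_QPA_PLATFORM=wayland

# Launching an app (app names here are placeholders) would look like:
#   GDK_BACKEND=x11 some-gtk-app
#   QT_QPA_PLATFORM=xcb some-qt-app
echo "GTK backend: $GDK_BACKEND, Qt platform: $QT_QPA_PLATFORM"
```

The application code itself doesn't change; the toolkit picks the matching backend at startup, which is why a new display server mostly needs one new backend per toolkit rather than per application.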
Mir and Wayland seem to be pretty related though. The same gaps that Wayland has (needing development time) also exist in Mir, AFAIK. So once they're fixed for Wayland, Mir will have it easy :P
Note that real Mir support is only planned for 14.10. At the moment it is just XMir, which is totally different from what is going on with Wayland (the goal there is native support, which is much more difficult). I have seen the development needed to get Wayland really working; Mir seems way behind on this, though they can probably copy what was done for Wayland and pretend they did it :P
I'm thinking that before too long there will be WayMir and MirLand shims to let you run wayland-only programs on mir and vice-versa. Then you can run Unity in Mir in XMir through MirLand on Wayland on XWayland on WayMir on Mir on Mir (the international space station).
Realpolitik. Although Wayland 'could' support Android-based/styled graphics drivers, Mir should be able to work easily with these drivers.
My understanding of the GPU vendor world is that generic Linux GPU support is way below the priority given to Android. This isn't to say that there couldn't be some alignment/commonality for a driver in Linux or Android, but Android will get a lot more testing and biz focus.
The Wayland and 'open software community' GPU advocates either have not considered this, don't care or hope that GPU stack implementations will move towards their world rather than Android's. Although ironically the licensing for Wayland and libdrm and whatnot are MIT rather than GPL like Mir.
There is a good link below with some technical rationale (I think the most important part, at least according to Canonical developers, is "Server Allocated Buffers in Mir"), but I'm actually wondering - has Wayland officially added support for working on the Android graphics stack? There have been at least two proof-of-concept projects, but I don't see either of them actually being committed to the Wayland repository. Given that Canonical is working against the clock, it's not that surprising that they are going with their own solution - they obviously believe that they can get it working faster.
The lead developer of libhybris (the project that makes it possible to run Android drivers on GNU/Linux and also what Canonical uses with Mir) is working on support for Android drivers on Wayland.
"the multimillionaire and erstwhile astronaut wrote"
It seems like whenever someone writes about Shuttleworth, they feel the need to point out he's a multimillionaire or something similar. More so than most other people they write about. To me, it just sounds like poor writing. Trying to "fluff" things up a bit.
I've actually thought the opposite. I'm surprised that his personal story doesn't get talked about more in the press. He's a guy who started a company, made a huge amount of money, became the first South African in space, and now has a company that is trying to build an operating system to dethrone Windows and OSX.
In Ubuntu 13.10, Mir will be inserted between X and the graphics drivers (roughly speaking). Although that is one step toward eventually eliminating the need for X, X will still be needed (e.g., to run Chromium or Firefox) in Ubuntu 13.10.
I.e., the title of this story, "Ubuntu 13.10 to ship with Mir instead of X," (emphasis mine) is false.
Well the work done to prep for moving to Wayland has probably helped a lot for moving to any other not-X, I would imagine. E.g. Qt has a lot better abstraction capabilities now than it did for Qt 3.
I'm well aware of where Qt has (and has not) been able to run.
What I'm saying is that it's a lot easier to make new ports now than it was back then. This was mostly driven by Nokia's requirements, but it's still useful in the context of making a version that would run on Mir.
> I'm impressed they've essentially replaced it in <6 months
They've spent the past few years moving most of the hardware-interfacing parts into the kernel, making the gap between graphics API and functional desktop pretty small
I believe one of the large reasons why it took so long is that the linux graphics stack had to be completely redone (KMS etc) before this could be started, and decent open source drivers had to be developed as well. I also think a sufficient number of capable people with funding had to feel like there was no hope in salvaging Xorg.
No. Upstart existed before systemd, and was the first of its kind.
Unity exists because GNOME took a different direction from Ubuntu's vision of what the desktop should look like (just look at the number of people who complain about GNOME 3 removing features).
Mir developers have justified their position technically. I understand that their technical position is refuted by Wayland proponents. But if you don't understand the technical arguments yourself, I don't think you can justifiably comment, since there's clearly politics involved in the claims and rebuttals made.
If anything, Ubuntu is infected with a Just Get It Done syndrome. I don't think that's a bad thing. Show me a real world distro with significant user share that ships Wayland, and does it better than Ubuntu. Without that you have no argument.
And even when it was being developed in the open, it still managed to make the vast majority of the community have qualms with the direction GS was heading in.
Hell, Cinnamon is a good case in point: an open-source project that had to fork the GS code base in order to add features that they felt users would prefer.
Gnome 3 has its own NIH syndrome. XFCE was NIH because other desktops existed. LXDE was NIH because other desktops existed. Almost every new thing back to the beginning of time was NIH because there was something approximately similar before. There is truly nothing new under the sun. So what?
Linux already has great marketshare - Android - but no perceptible effect on society; most people still don't know what Linux is, and certainly not what freedom in software is about. Which sucks, but that's the way it is, and once that changes, then we're in for great times.
Does Mir's existence have anything to do with Wayland's slow development? I read (source unavailable, sorry) that Canonical grew frustrated with Wayland's crawl and decided to make their own.
When Mir was initiated, Wayland's future was not as clear and little was working at all. Instead of talking, Canonical went and just did what they thought was the right way to do it.
You can consider this good (more doing, less talking) or bad (not-invented-here syndrome).
Actually, GNOME Shell has had a Wayland compositor for about as long as Mir has existed (counting from when Mir was initiated, not when it was announced). The future was also clear. Whatever reason Canonical had to start Mir, it is not related to Wayland.
Wayland developers had to fix all kinds of things elsewhere. The development is not slow. One of the Mir developers attributed Mir's quicker pace to the work the Wayland developers did. Further, a lot of stuff in Mir is copied/taken from Wayland, e.g. XWayland -> XMir. They did hire the XWayland developer, AFAIK. The Wayland developers want to release a good version; apparently for XWayland to work well they need some kernel changes or something. XMir is reported to have issues (check e.g. comments on LWN.net: two mouse pointers, etc.)
> Shuttleworth said he has been running Mir on his own laptop, an "all-Intel" Dell XPS, for two weeks, and that barring a few minor glitches, the system feels smoother than it did before.
I wonder how well Nvidia/AMD graphics will work with Mir, considering they are closed source and have left a lot to be desired in terms of performance and compatibility.
AMD has very good free drivers, good for anything but "hardcore" Linux gaming (whatever that is). Nvidia similarly has good free drivers, though those, unlike the AMD drivers, are reverse engineered. They still work fine from what I understand.
Mir just uses KMS like Wayland right? It should work just fine unless you want to run Crysis with WINE or something.
To anyone not familiar with this, jlgreco is referring to open-source drivers - both AMD and nVidia provide closed-source driver binaries that tend to outperform their open-source counterparts.
nVidia and AMD's closed-source drivers have advanced noticeably since Valve released the Steam for Linux client.
You misread his comment, the closed source drivers outperform.
The advantage of the open source drivers is they are hassle free. They never piss you off by finding new and creative ways to wreck your system and leave you in a bind. For I suspect most linux users, that is the most important factor, not squeezing every last FPS out of your GPU and every last minute out of your battery.
I agree. I've been using NVidia hardware for the better part of the last decade and switched to Intel HD4000 graphics sometime last year. The experience is really refreshing. Stuff works and I don't have to screw around with the binary drivers. The performance is also acceptable for light gaming. It is definitely much slower than the GTX NVidia card I have in my big bulky box that's gathering dust, but it fits comfortably in the laptop without making it a cooking oven.
For the past 12 years, the only thing I've had to do to get the nVidia driver working again after an upgrade was a reboot. It doesn't wreck the system and it still works better than its AMD/ATi counterparts.
Nouveau (the reverse engineered Nvidia drivers) only work on some of the systems for some of the users some of the time(1). It is an incredible amount of work, and an incredible amount of functionality that they have. And they are constantly chasing advances in Nvidia's hardware, as well as progression in the Linux desktop world.
As an example of Nouveau not working "fully" - I get pointer trails, incomplete drawing, no vsync, monitor not doing DPMS power down and several other glitches. As a third monitor this is okay, but it wouldn't work for a primary.
Nvidia's binary drivers are generally very good providing you only want what they provide. For example it was only very recently (as in months) that they started supporting xrandr 1.4 and even then it doesn't play nicely with others.
(1) The Nouveau folks don't claim any different, and "some" does often turn out to be "a lot"
They will break a lot of compatibility work, specifically in terms of hardware acceleration, meaning that a Linux user with a $500 GeForce 780 will be using software rendering.