What users actually depend on Linux's iron-clad promise to never break binary compatibility? It's curious to me that this issue is considered so sacred and yet I rarely hear the rationale or actual user stories of people who want to run 20-year-old userland on a new kernel.
In particular, note that the promise of kernel binary compatibility does not guarantee that an old binary will run on a modern Linux distro unless the binary is statically linked. Most user-space libraries bump their major version number every so often, so it's unlikely that the required .so's for a very old binary will be present on a new system.
> Why are most open source programs dynamically linked again?
Probably because dynamically-linked binaries are smaller, use less memory (by avoiding duplication), and can get fixes or security updates without rebuilding or re-deploying. When you're a distro it hardly makes sense to ship a copy of libc inside every single binary. Any fix/update to libc would require re-downloading basically the whole system!
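To make the sharing concrete — on a typical glibc-based Linux system you can see that nearly every dynamically linked binary resolves the same single on-disk copy of libc at load time (exact output varies by system; `/bin/sh` is just a convenient example):

```shell
# List the shared libraries the loader will map in for this binary.
# Every other dynamically linked program on the box points at the
# same libc file, so a libc fix updates all of them at once.
ldd /bin/sh | grep -i 'libc'
```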
Maybe the problem is insisting that all programs behave the same way. It would indeed be irritating to have every Unix tool link to its own libraries, but most well-known tools are small programs that don't get constant feature updates. Typical desktop applications are a whole different matter: they are often updated far more frequently than the distros, and no two distros ship the same library versions an app might need to run. Still, distros don't like such applications bundling their own library versions... and that just doesn't work well, except maybe for the handful of high-profile applications with enough maintainers. Even keeping an up-to-date Iceweasel on Debian caused me trouble with library dependencies just a few months ago - and that is probably the most prominent desktop application in the free software world.
It gets even stranger in the world of free games. Games often need library fixes (a very typical situation, since games and engines are deeply interwoven) that are not yet in a distro - and won't be for some time, because the library hasn't been officially updated yet, or a certain patch simply won't be included. On a system like Windows it's no problem: modify the library source and ship the DLL. On Linux distros... well, it's just not easily possible. Which means, funnily enough, that it's easier to modify library sources in the proprietary Microsoft world than in the free software world. We got freedom that comes with library and distro gatekeepers, so to speak... which pretty much sucks (even for the library authors).
Strong versioning is key. Only allow links to be redirected to bug fixes, never feature enhancements. If you wrote your code for 1.0, you'll continue to link against 1.0, barring some bug fix (1.0.1). Your code should never be silently upgraded to link against 1.1. Microsoft also gets this right with the Global Assembly Cache.
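This is roughly what the ELF soname convention encodes on Linux, for what it's worth. A sketch with an invented library name "libfoo": apps record the soname (`libfoo.so.1`) at link time, and only a compatible bug-fix build may replace what that name points at — a 1.1 feature release would ship under a new soname instead:

```shell
# Hypothetical on-disk layout; the soname link is the compatibility contract.
mkdir -p /tmp/soname-demo && cd /tmp/soname-demo
touch libfoo.so.1.0.1                # the actual bug-fix build
ln -sf libfoo.so.1.0.1 libfoo.so.1   # soname link: resolved by the loader at run time
ln -sf libfoo.so.1 libfoo.so         # dev link: used only when building against the lib
readlink libfoo.so.1                 # -> libfoo.so.1.0.1
```

Repointing `libfoo.so.1` at a hypothetical `libfoo.so.1.0.2` silently upgrades every consumer to the bug fix, while a 1.1 release under `libfoo.so.2` touches nothing that linked against 1.0.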
As an aside, DLL Hell was coined by Szyperski, who still works for Microsoft.
Well, they are dynamically linked, but usually only to Apple-provided libraries as far as I know. It's not like every app installs a dozen new DLLs onto your system, and that's what makes the big difference. It's what makes Mac apps (at least traditionally) standalone and runnable from anywhere.
I fear, however, that Apple is currently destroying this design philosophy with the sandboxing. At least the application data folders have become much more complex now, and I wouldn't want to troubleshoot them anymore - something that has always been super easy.
If only Apple wouldn't unnecessarily break their runtime environment every few releases.
What runtime-breaking changes are you thinking of? Apple's been very good about not breaking the binary compatibility -- of the runtime, or of their frameworks between OS releases.
Sure, there are some differences in 64-bit systems from 32-bit systems. But, if you take a 32-bit Mac app from five years ago, and run it, it will still run just as well today as it did when it was first compiled.
>Why are most open source programs dynamically linked again?
To save on the bloat of shipping multiple redundant libraries with every app, and to guard against the perceived 'DLL hell' of Windows (which was fixed a decade ago). Ironic that Linux ended up with 'dependency hell' instead, with RPM and APT packages requiring specific versions of libraries.
Read through this excellent discussion thread if you're really interested in the problems with linking and the lack of a Common Object Model in Linux.
This is incomprehensible. "DLL Hell" is what happens when you use dynamically linked libraries without managing versions and compatibility correctly. DLL Hell can only happen when you are doing dynamic linking, so I'm not sure what you can possibly mean by saying dynamic linking "guards against perceived 'DLL hell'".
Sigh, sneering and snark and also downvotes on my GP comment. That's HN I guess. Must.. resist.. temptation.. to be snarky myself.
Anyway back to the point...
>Oh boy SxS. So now you have separate DLLs for every application.
No, only if the application uses a different version that explicitly is marked as NOT being compatible. If the version used is the same, you do not have separate DLLs for every application.
>How about just linking it in statically and make the applications standalone?
That will needlessly bloat up the application.
Assume 10 applications need library X version 2.3 and one application needs 2.2. With SxS, you will have one 2.3 DLL and one 2.2 DLL. If you link statically, the same code is duplicated in 10 EXEs. Multiply this by all applications and all DLLs shared across applications. Not to mention waiting for 10 apps to update to fix a security bug.
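Back-of-envelope version of that example (the 2 MB library size is invented for illustration):

```shell
LIB_MB=2   # assumed on-disk size of library X
APPS=10    # applications that need version 2.3
echo "static linking: $((LIB_MB * APPS)) MB on disk (one embedded copy per EXE)"
echo "SxS / shared:   $((LIB_MB)) MB on disk (one 2.3 DLL, shared by all ten)"
```

The 2.2 DLL for the eleventh app adds one more copy either way, so the duplication cost is purely on the static-linking side here.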
I can see the benefits in terms of security but not in terms of space. So what if the application folder uses half a gig more, if I never have to use installers for most applications.
And as far as your example goes: from what I understand, most Windows developers just pin a fixed library version number to specify which DLL to use. When that happens, security updates won't do any good either. IMO, as long as APIs can change between library versions, developers will always be responsible for upgrading to the newest libraries themselves. It's a nice idea, but it just doesn't reflect reality in the world of business applications, where incompatibility directly results in monetary losses.
This rant makes little sense to me. For one, linking and COM are mostly orthogonal issues, and ABI compatibility on Windows is not directly tied to COM.
I think the lack of ABI compatibility in Linux has more to do with incentives and economics: innovating while keeping the ABI stable is very difficult and resource-consuming. I don't understand why Miguel would cite Apple, as they are pretty lousy in that department, whereas MS famously spends tons of resources on the issue.
Commercial vendors do. It is surprisingly hard to maintain proprietary software for Linux, as different versions ship with different shared libraries and have different ways of doing things. Even maintaining systems that need to run on RHEL 4 through 6 takes _a lot_ of effort. And when you're creating commercial software, effort means time and money.
The people who write and deploy software to limited markets appreciate it. Have you seen how many Windows 2000 deployments are still out there? It's the same with BSD and Linux - there are a lot of 10+ year old systems out there doing useful work. Functional binaries save hundreds of hours of work to maintain the software.
I don't think your argument applies: you are talking about running new binaries on an old OS/kernel, but this is not guaranteed to work. New system calls and kernel interfaces are added all the time, so new binaries are unlikely to work on old kernels. The promise is in the other direction; old binaries are guaranteed to work on new kernels.
At my office, we actually still have some Windows 3.1 servers in production alongside some Windows 2000. Replacing the software running on it has proven to be a bit more of a trick than previously thought.
Anyone who relies on (for example) Oracle Database on Linux. Not so much for the compatibility itself (Oracle does maintain their software), but because it builds trust between the Linux kernel team and the software vendors shipping stuff on top.
If it wasn't for this promise, I think it's much less likely that people happily developing for Unix would have moved over to Linux.
I suspect Windows end-users don't care so much either, but the enterprise world does. The number of Fortune 500 companies that still use Windows XP because they can't upgrade is most likely significant. People also often underestimate incompetence and things like "we lost the source of this software" or "nobody knows how to upgrade this codebase at an affordable price".
This is the bread and butter of companies like RH (many companies are still on RHEL 5, and most of those have some legacy RHEL3 systems in my experience).
There are definitely occasional stories about people running old binary apps.
But I think the real reason for this policy is its simplicity. I don't know if Linux developers are disciplined enough to follow Solaris-like deprecation schedules. When you allow exceptions, there is a tendency to allow more and more of them.
Has the Linux desktop failed? Really? Correct me if I am wrong, but we are talking about a concept which has spawned thousands of distros still used on millions of computers.
And the main argument is rotten as heck: backward compatibility out of the box? On Mac? Please! You have to install Rosetta for that; without it you have no chance of running Escape Velocity on your shiny MacBook Pro. Windows 7 makes you feel like it can do the backward-compatibility jig, until you meet software that runs ultra-hyper-fast. And no, I am not talking about some console application, but the second incarnation of Saints Row.
I mean, I would have understood if he had simply said "I have started using Mac OS insert-cat-name-here and liked it" - nobody would have any problem with that. I really can't understand the need to declare previously used software dead, bad, or other derogatory terms. You use Mac, I use Linux, and it's still alive and kicking, considering I got security updates this morning.
>Has the Linux desktop failed? Really? Correct me if I am wrong, but we are talking about a concept which has spawned thousands of distros still used on millions of computers.
Desktop Linux has utterly failed, yes, clearly. OS X grew to over 10%. Desktop Linux remains statistical line noise. That doesn't mean that literally no one on the planet uses it, just that a very small number of people do.
That depends on how far Microsoft and Apple go with the tabletization of Windows and OS X in the next few years.
I've been a Mac user for years, but after playing with Mountain Lion, I could see the writing on the wall. It was time for a new laptop, so I went with a ThinkPad running Arch Linux.
I don't think that Microsoft will ever manage to completely force the app store on everyone--solely because of enterprise customers--unless maybe they force you to use it on home/basic and make it optional on business/enterprise. However, I do think that's what they and Apple eventually want. I also think that's probably what the average consumer wants.
We need Linux as a third (and 4th, 5th...) option for all of us edge cases that don't fit the standard consumer mold.
As an aside, which ThinkPad did you go with? I bought a MacBook Air thinking I could easily replace OS X with Linux, but almost bricked the thing due to EFI issues (rEFIt wasn't any help and actually caused the problem). I'm thinking of going with a ThinkPad next time around, but I'm not sure which one, as I've never had one before. The X1 Carbon looked promising, but the battery life is a bit disappointing.
I went with a T430, mainly because it's built like a tank, and it's cheap enough that I'm not afraid to haul it around everywhere. At first glance it looks like an older computer, so I'm not too worried about someone stealing it either.
I love the TrackPoint (I don't have to take my hands off the home keys to use the touchpad). I also love the docking bay: with my Mac I had to unplug the power, speakers, and USB; now I just push a button and it pops off the dock.
I had the same experience with rEFIt. Just unnecessary. What you can do instead is install Linux normally with GRUB, and then access it through the Windows boot option built into the Mac. Hold Command during boot and choose Windows; it takes you to GRUB.
It's funny you switch to Arch Linux, because Arch Linux is exactly the worst example of broken userland that Linus is talking about in the post. Arch Linux values "bleeding edge" over all things including basic usability, and they willfully break compatibility with all other distros even when there is no conceivable benefit.
The best example of this is the Arch mishandling of Python 3. Remember when they decided that "python" was now Python 3? If you are a Python dev, then alarm bells should already be ringing in your head. This is a BAD decision with no excuses.
Python is wonderful because it is very portable. I can write Python scripts on my Mac and run them on Linux or even Windows without significant changes. Python 2 and Python 3 are incompatible, and I usually target Python 2 for anything I want to be portable, since Python 2 ships default on OS X and on lots of Linux distros. This is reality.
My Python 2 scripts begin with a shebang, "#!/usr/bin/env python" which launches the Python 2 executable, wherever it is installed, on OS X or Linux. Unless you are running Arch Linux, in which case you need to go in and manually change the script to point to "python2" instead of "python". Except "python2" is essentially unique to Arch, it doesn't exist on Debian or Fedora or OS X or whatever. So no portable script is EVER going to point to "python2".
But there's no real benefit to making "python" into Python 3. Anyone targeting Python 3 will write "python3", which works wherever Python 3 is installed, including Arch Linux. This is the decision the upstream made, and it's a good decision. Anyone targeting Python 2 writes "python", which works everywhere except Arch, and on Arch you have to edit your scripts. Who wants to keep a separate copy of your scripts for Arch? What if they're not your scripts? Do you write a shell script that runs "sed" to fix them? What if you keep your Python scripts in Git or SVN, and don't want to change them every damn time you check out a fresh copy on Arch?
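For reference, the "just fix it with sed" workaround being criticized here amounts to something like this (the file name and path are invented for the demo); the point stands that you'd have to re-apply it on every fresh checkout:

```shell
# A script with the shebang that is portable everywhere except Arch...
printf '#!/usr/bin/env python\nprint "hello"\n' > /tmp/myscript.py
# ...rewritten in place so line 1 names Arch's interpreter instead.
sed -i '1s|/usr/bin/env python$|/usr/bin/env python2|' /tmp/myscript.py
head -n 1 /tmp/myscript.py    # -> #!/usr/bin/env python2
```

And of course the edited copy now fails on any system that has no `python2` name, which is exactly the portability trade described above.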
Worse yet, lots of Python devs want to install and test with specific versions of Python -- say, the Python version that their servers run. Arch sabotages this, because as soon as you put "python" in your $PATH, a ton of Arch programs break if they target Python 3. And they wouldn't have broken if they just said "python3" in the shebang to begin with.
The result is that Arch has to maintain a fork of EVERY Python script in their repository. For what? No real reason. The only conceivable benefit of this terrible change is that a user can type "python" in the terminal and get Python 3 instead of Python 2. I've been doing this for years with an alias in my .bashrc, and it doesn't break every Python package in the repository.
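The alias approach mentioned above is a one-line ~/.bashrc fragment: it changes what *you* get when typing `python` interactively, without repointing any system symlink (and therefore without breaking packaged scripts):

```shell
# ~/.bashrc -- interactive shells only; packaged scripts with a
# "#!/usr/bin/env python" shebang are unaffected by this alias.
alias python=python3
```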
So in short, the Arch Linux developers promise a new world in which not only is binary compatibility impossible, but source compatibility for SCRIPTING LANGUAGES is impossible as well. They have taken a bad idea and stuck to it, dismissing anyone who disagrees with them by suggesting that they use "sed" to fix the user's "broken" scripts, in such a way that it would break compatibility with every distribution besides Arch.
And I have further rants about Arch with respect to (1) their dismissal of bug reports about serious security vulnerabilities in default configurations, (2) the dismal performance and general usability of their package manager, (3) the inability of their maintainers to create a correct package for something as simple as a font, and (4) the terrible average quality of their wiki, which often gives advice that simply doesn't work for well-documented and easy-to-discover reasons.
Gawd, use anything else. Gentoo, even.
Okay, I want to rant more.
(1) Security vulnerabilities are dismissed with the kind of reasoning like "users who run this package know what they are doing, and don't run the application on untrusted networks." Considering any (non-virtual) network secure is almost certainly a sign of incompetence in your sysop.
(3) Install "terminus-font", it won't work. You have to manually "xset fp" the path to it.
(4) See #3, and then look up the wiki advice for it. It just doesn't work if you run e.g. GDM, which doesn't run ~/.xinitrc.
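For reference, the manual step being described is roughly the following (the font path is the usual Arch location for bitmap fonts; adjust to wherever terminus-font actually lands on your system). Putting it in ~/.xprofile rather than ~/.xinitrc is one way to get it to run under display managers such as GDM, which never read ~/.xinitrc:

```shell
# ~/.xprofile -- sourced by most display managers at login
xset +fp /usr/share/fonts/misc   # prepend the bitmap-font directory to the X font path
xset fp rehash                   # make the X server re-read its font path
```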
I don't really understand the fuss about the Python thing. `#!/usr/bin/python2` should work on pretty much any system, not just an Arch Linux system. Whatever happened to "explicit is better than implicit"?
If the Python community eventually wants Python 3 to become the default, they will all have to deal with this issue sooner or later; Arch is just opting for "sooner", as that's in line with the way Arch approaches all changes. The real problem is that there are two incompatible versions of Python, and most Python developers give too little thought to forward compatibility to include an extra '2' in their shebang lines in order to disambiguate their intentions. Just because python == python2 on most systems today does not mean the status quo will forever be the same, nor that it ought to be.
In any case, there is no problem making Python scripts work on Arch, even if they use the ambiguous shebang. All of the official packages work out of the box with no problem, of course, and anything unofficial (i.e. AUR packages) just needs a single, trivial sed command added to the PKGBUILD. Arch is not "maintaining a separate fork of each Python package". This is the opposite of true; in fact Arch packages are, on the whole, far closer to their upstream counterparts than the same packages from other distros, where it is common for huge divergences to be made from the upstream defaults. Hardly worth ranting about.
You come across as if the Arch developers viciously attacked your family or something. Maybe it's just a distro with different values than yours, man! Relax, there are lots of other choices! It's not supposed to appeal to everyone, but for those of us who align well with the Arch "philosophy", it's fantastic.
There is a serious problem when "pretty much any system" excludes both Debian and Mac OS X, and I don't know which others. The "python2" symlink is frighteningly recent in the Python world, and some of us like to support systems other than the bleeding edge. It makes me kind of skeptical that you read my post, because this was the main point of my complaint.
My point is not that it's hard to get Python scripts to run on Arch, my point is that I shouldn't have to do any work at all.
"Until the conventions described in this PEP are more widely adopted, having python invoke python2 will remain the recommended option."
"This feature [python2 symlink] will first appear in the default installation process in CPython 2.7.3."
Python 2.7.3 was released on August 9, 2012, slightly over one month ago. That means that if you are running a Python 2 that is more than one month old, you won't have the symlink unless you make it yourself, or your distribution provides one for you.
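Making the symlink yourself is a one-liner, with the obvious caveat baked in: it assumes the system's `python` really is a Python 2 (paths here are illustrative — a real setup would probably use ~/bin or /usr/local/bin):

```shell
# Give an older system the "python2" name that PEP 394 / CPython 2.7.3 introduced.
mkdir -p /tmp/compat-bin
ln -sf /usr/bin/python /tmp/compat-bin/python2   # assumes /usr/bin/python is Python 2!
export PATH="/tmp/compat-bin:$PATH"
readlink /tmp/compat-bin/python2                 # -> /usr/bin/python
```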
I've been using Arch Linux for years now and I'm perfectly happy with it. And yes, I'm a Python dev, but I think Arch Linux's naming approach is the right one. The main reason I love Arch is that I like bleeding edge - and I don't really care if it breaks compatibility with other distros (what does that even mean? Except for the Python example, most other software doesn't require any modifications at all).
At work I use OS X machines, and "administrating" them is much more effort than Arch Linux. With Arch I run a full upgrade every once in a while, and if something breaks it's always trivial to fix (if you understand the system). With OS X you have to choose one of multiple broken package management systems (MacPorts, Homebrew, ...). A third of the packages I'd like are missing, the next third are outdated, and the last third don't build at all.
It may well be that Arch doesn't work for you. But stop assuming that everyone is like you. I don't need someone telling me what to use and what not to use. If you like Gentoo (or OS X or whatever) better, then just use it.
This is so lame. Do a `ps -e | grep python` to find the Python apps you normally run (when idling) - I found that on my Arch they use python2, not plain python. So why not remove python and add a symlink to python2 as python? Wouldn't hurt, would it? I've used Arch for about six months, and I only broke the system through my own mistakes; it was never broken through any fault of its own (I actually do read the news on updates when they recommend it). The thing with computers is that they aren't toys that are easy to use - stop trying to make them that way: you'll lose security, performance, or space. If something like chmod +x existed on Windows, virus propagation might be a lot lower, but who bothers to chmod +x an executable downloaded from the net? Or who has the "time"? So they make big, bloated antiviruses to do that stuff, and they don't even do it right!
>That depends on how far Microsoft and Apple go with the tabletization of Windows and OS X in the next few years.
>We need Linux as a third (and 4th, 5th...) option for all of us edge cases that don't fit the standard consumer mold.
Err, what about Android?
>I don't think that Microsoft will ever manage to completely force the app store on everyone--solely because of enterprise customers--unless maybe they force you to use it on home/basic and make it optional on business/enterprise
I think the problem is that some Linux detractors do not realize that there are two separate standards of success (as mentioned in the Google+ discussion). There is the business standard, and there is the community standard.
Linux doesn't need to be a threat to Apple/Microsoft in order to be a success. That is an artificial bar that some people seem to have become obsessed with.
I agree, I often get criticised for running a Linux desktop by Windows and Mac people because "nobody uses that apart from geeks lol". Well I'm a geek and I use it because it's the best system for what I do (writing Linux hosted server side stuff) so why do I care what the rest of the market is doing?
I guess there is some merit to this argument in the sense that if Linux were the de facto standard on the desktop I wouldn't have to maintain a dual boot. On the other hand, I quite like using a niche OS, because it means that I'm not a target for malware/shovelware/adware vendors.
The desktop needs Linux if you care about a free desktop. I spend most of my waking life on the desktop and I just prefer a system that allows me to dig as deep into it as I want. Not because I enjoy the digging, but because I know it's available when I have to (once in a while).
I'm a little confused. Are Linus (and Alan, Ingo and even Ted Ts'o) arguing that the GNOME team experiences breakages because they ignored the kernel team practices and used internal interfaces instead of the public ones?
Also why do they refer to GNOME as a "research" project?
No, they're arguing that the GNOME team defends breaking (GNOME) public APIs by arguing that the kernel team does not hesitate to break internal APIs, instead of recognising that actual public Linux APIs are highly stable.
It's a bit more nuanced than that. Miguel (the founder of GNOME, though uninvolved for the last five years) said that the culture of Linux was to break things (see the driver ABI) and then exploit the fact that the source for most drivers is available to get around it, by just recompiling them against the changes. And that userland adopted that practice (see autoconf), which led to fragmentation of the software platform and library hell, with commercial software staying away for the most part.
As you say, others responded with external vs. internal interfaces, but what is an external and internal interface for things like Gnome or KDE? They have only one API that's used by both their other libraries/applications and application writers.
>Which is a problem they without doubt did not inherit from the kernel team.
Interesting point, Windows, OS X, iOS etc. get derided for having internal private APIs that they try to prevent external devs from using, and not doing the same thing is now a 'problem' for GNOME and KDE? Isn't the whole point of Linux for developers, freedom to use it as you see fit?
> Isn't the whole point of Linux for developers, freedom to use it as you see fit?
I've yet to hear of a philosophy that allows you to both 1) use everything as you see fit, even things marked as "internal" or "private" and 2) give you the right to complain about your software getting broken because you relied on things marked as "internal" or "private".
In other words, you can either restrict yourself to the published APIs and demand compatibility or you can use the guts of the system and have no expectations of stability when the guts change.
How many things can you mark as private in an application GUI toolkit like Gnome or KDE/Qt and make them available to your own applications? Private/public API doesn't really make sense for GUI or sound libraries.
The Linux internal APIs are things that can only be used to construct kernel modules & components - derivative works that make the kernel function differently or support more hardware. Windows/Apple 'internal' APIs enable Apple & Microsoft to create applications with features or performance not available to competitors.
Windows and Apple do it to illegally stifle, or at least lazily under-support, potential competition with their apps. That is a non-issue in free software. And with Windows and Apple we are talking about company-private userland APIs, not kernel-internal APIs.
I don't really consider the Linux desktop "dead". Besides Android, it just has a marginal market share because you mostly need to be tech oriented to use it or install it.
I think the risk for the future is that the traditional desktop is dying out. In some respects the touch-capable tablet UIs are much more user friendly. On the other side, the lightweight web-only interfaces (Chrome OS) are becoming increasingly powerful and useful.
Windows and Mac have already moved to hybrid designs.
Off topic: App.net says it is "a real-time social feed without the ads". This G+ page is quite on topic, with some heavyweight participants, a good-looking design, and no ads. Why would I want to pay for App.net?
For one thing, G+ doesn't have a serious API yet, whereas App.Net already has Android and iPhone apps by third-parties using its API.
But overall, I've found it interesting how little G+ comes up in discussions about Twitter and App.Net. I happen to think it's an absolutely excellent product and find it much more engaging than Twitter. Most people I see complaining about it have made hardly any posts and haven't taken the time to build up a network.
> But overall, I've found it interesting how little G+ comes up in discussions about Twitter and App.Net.
Probably because it's completely Apples-to-Oranges. Twitter/App.net is for short status updates. I deeply value the enforced brevity. G+ posts tend toward blog-style bloviating; the use cases really don't overlap.
Fair point. I've noticed G+ doesn't work at all for live event updates, so expecting them to build a Facebook-like "heartbeat" thing on the side at some stage.
At the same time, I can say I've moved a lot of link sharing to G+, so for that use case, it's a fairly direct substitute. The rich embedding is better and I can say what I want without having to cram it. It's like that Pascal quote about not having time to make it concise...G+ just lets you write a couple of sentences, or more on whim, without having to then manicure it into 140 characters.
I don't enjoy it either. The tampering of the scrollbar and having the text stuffed into some sort of frame ruins the experience. Those two problems make it difficult to read on G+ so I usually just avoid it.
At first I thought this would just be an easy CSS fix, and it almost is -- but the reason this problem exists is because they put that huge sidebar at the top of the page with the guy's face and four or five links under it.
Seeing as long posts and discussions are starting to become pretty common on G+ I hope that the G+ team invests some time into figuring out how to make everything more readable.
If you are referring to the fairly durable schism between the folks who define "open" and "free" one way and the people who define them a different way, then I don't think a whole lot would have happened. I tried to get Sun back into the workstation/desktop game from afar (I had already left by then), but it wasn't to be. Their original business model is just as durable today as it was when they were founded: spec the hardware, do enough software work that the user experience is capable, and let others build on that. I keep hoping someone will do that with FreeBSD, but so far no such luck.
We can use the existence proof that FreeBSD exists with a BSD license, and it hasn't materially changed the Linux outcome.
I think that iXsystems' support of PC-BSD is something like an attempt to do that, but it seems like they're not doing much workstation stuff these days, whereas previously they were at least trying to do slightly more. PC-BSD is getting a reasonable open source ecosystem and user experience around it, though, such that someone who can handle the business development and hardware side of things would have a rather easier time than someone starting from stock FreeBSD.
That's easy: some BigCo would have co-opted it, paid developers to go the last mile and polish it to make it usable, made it incompatible with everything out there, and sold it. Perhaps Sun or IBM or even Oracle. E.g., see OS X and, to a lesser extent, Linspire/Lindows.
It hasn't happened to PostgreSQL, and I doubt it would have happened to Linux. The leader and community are pretty strong with Linux, and that is not about the GPL. I would imagine some folks would fork it, but that is always a losing game from a business point of view (paying for a fork vs. patching and getting everyone to pay). Having a BSD-licensed kernel at that point in history, during the AT&T BSD lawsuit, would have been seriously interesting.
Maintaining a fork isn't a big issue if you have enough resources, considering that if Linux was BSD licensed they would still have had their proprietary fork mirror it "close enough" to pull in at least some of the big changes from the main Linux tree.
Android seems to work in this manner to a certain extent (although they open-source patches back when they have to because of the GPL).
You would most likely have ended up with a bleeding edge open source version and several more conservative closed ports.
Bear in mind that a lot of the reason Linux development is strong is that many of its developers are paid to work on it and then contribute back (again because of the GPL). If a company could gain a competitive advantage by paying the same people to work on a closed fork..
I think the problem with maintaining the fork is the speed of development of the community, and let's face it, it is much easier to monetize hardware than software in the open source world. The fear of being left behind and weird vendor specific errors should not be discounted.
I agree though, someone would try. I just think Linux would have done just as well with a BSD license being released at the time it was. The people and the timing deserve more credit than the license.
See also the http://en.wikipedia.org/wiki/Unix_wars, during which the industry mostly lost a decade of work entombed in proprietary forks. Tit-for-tat has better survival characteristics, and the GPL is no exception.
I strongly disagree with this phrase. I'm not saying making Linux usable is 90% of the work; it isn't, but it isn't "the last mile" either (maybe that + customer support + developer relations and evangelism equals 15-20%).
I meant last mile as in developers scratching their itch. It's much more appealing to work on new and interesting things (that mostly break existing things) than to do the grunt work on backwards compatibility, bugs, or documentation, not to mention enforcing a central vision and design on developers (ever hear of OSS devs quitting or making forks because of differences in opinion?).
Here are a few places it falls short in reaching mainstream consumers. I realize many people may not care about that, but I think it could be a viable alternative to Windows or OS X for many people if it were easier to get started with and use.
1. It's not clear how easy it is to dual-boot install alongside Windows for non-tech users, i.e. it's difficult to get started.
2. Make it faster; the latest Ubuntu is sluggish. The UI gets non-tech users excited, but a few minutes of using it and they are going to bounce.
3. Make installing programs easier. I know how to get them running, but I still don't know how to get them properly installed. Average Joe has no chance.
4. Support all common user tasks out of the box, for example viewing videos online.
To me it feels like Ubuntu is 90% of the way there, just a few things stop it from being a good alternative (these things of course are probably not trivial to fix, but I would say they are critical).
I don't think so. It's so hideously designed that the less people use it (even if they have a feeling it's great, having heard others like it), the better for them. Now I'm trained not to open G+ posts unless I'm absolutely sure there's something I will enjoy or learn from. YMMV.
To wit: that stupid "Join G+" black bar, the absolutely unnecessary chrome around the content that takes up more than 83% of the screen real estate on my 15" MBP, and no links to comments.
The good thing about Ubuntu is that it has so many forks: Xubuntu (Xfce), Kubuntu (KDE), Lubuntu (LXDE). They all use the same Ubuntu repositories. For example, you can get a Lubuntu system by removing the Unity and GNOME packages from Ubuntu and installing the LXDE ones. I held off upgrading my Ubuntu machine from 10 because of the Unity thing. I installed Lubuntu 12 on another laptop last week; everything works great and there are pretty much no surprises going from 10.
What's annoying isn't Unity but the way Ubuntu changes the way startup scripts are done and small things like .xsession don't run by default (and if you turn them on, suddenly .Xmodmap stops working...). That's true for the Ubuntu derivatives too. But that's a fair trade-off for good drivers (even if they are proprietary) and up to date packages. Otherwise I'd go with Debian testing.
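For anyone hit by the same thing, the usual workaround is a hand-rolled session script. This is only a minimal sketch: it assumes your display manager offers a session that honors `~/.xsession` at all, and `awesome` stands in for whatever window manager you actually run.

```shell
#!/bin/sh
# Minimal ~/.xsession. Re-apply custom key mappings explicitly, since some
# session setups stop sourcing ~/.Xmodmap once .xsession takes over startup.
[ -f "$HOME/.Xmodmap" ] && xmodmap "$HOME/.Xmodmap"
# exec the window manager last; the X session ends when this process exits.
exec awesome
```

The file needs to be executable (`chmod +x ~/.xsession`), and on Ubuntu you typically also have to pick a "Custom" or "Xsession" entry at the login screen for it to be used.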
Yeah, at first I was annoyed with the lack of configurability in Gnome 3. But when I discovered you could press the super key and just start typing the name of a program to start it, I found it nicer to use than Gnome 2.
I recall trying Netbook remix on my Aspire One and quickly reverting back to stock Android, but I can't remember what I didn't like about it. OTOH Cinnamon reminds me pleasantly of Gnome Classic but feels less hobbled, compared to the old Gnome desktop.
I've been using Unity for about a year and it's not as bad as many proclaim it to be. My biggest complaint is that advanced features are hidden away. If all those crazy Compiz editors and whatnot were easier to stumble upon, it would all be fine. A separate "For Advanced Users Only" section in "Settings" would fix it and probably spare tens of thousands of web searches.
While I agree to an extent (I used to be a Unity basher, and now I use it as my main desktop and think it's by far the most polished Linux desktop UI), I can totally understand why it doesn't fit a lot of people's workflows.
It has a very opinionated approach to things and throws away a lot of customisation options (focus follows mouse, you can't move the launcher or go back to a standard taskbar etc).
I love some features but it just causes too many troubles when working with it.
I sometimes need right-click menus, for example, and those are simply not supported. The only workaround is installing packages that get me application menus back, which is then a mess where the buttons to close a window switch from left to right depending on whether it's fullscreen or windowed. Not that I liked top menus to start with anyway. And certainly lots of applications just work badly with the top menu, simply because Linux apps were never written for that. So you get things like double menus, even with some applications that are in the official distro.
Or the idea to use F10 as a system key. Sorry, but it just isn't one and never was, which is why it's used by lots of applications that are now suddenly missing a key. Who came up with this shit?
There is one reason Ubuntu stays on my laptop: it works well enough with the hardware most of the time. And I can live with Unity most of the time. But Unity is the reason Ubuntu doesn't make it onto my desktop.
I've been using it for a few months and becoming more and more upset with it. I have very high inertia and so I've stuck with it to give it a chance, but the deal-breaker is probably going to be that sliding my mouse to a new window ought to result in me immediately being able to type in the new window.
That's not the only mouse-related thing it does poorly. The workstation I use at work came with Unity as the default. I decided to give myself a week to get used to it but gave up after only 3 days. Using Synergy with Unity is a complete pain in the ass with that menu thing that is on the left side of the screen. It was constantly activating when I moved my mouse to the other computer or not activating at all when I wanted it to.
After 3 days I just installed Awesome and went from there. Unity provides me absolutely nothing that I want that Awesome doesn't. The only thing it does for me is get in the way.
The Linux desktop will never succeed as long as nobody can profit from it. As long as Microsoft prevents the distribution of Linux through tithes or threats of patent litigation, why would anyone spend a penny making the desktop look better?
>Those developers are very unlikely to find Gnome 3 or Unity too constraining for their tastes
The problem is that most of those developers have already left Linux and would be unwilling to switch either because of the hardware they like or the Mac app ecosystem or toolchain they use (for example iOS development). I doubt a significant percentage would come back even with a better Linux.
Where are the lawsuits against System76? If there really was demand, you think a company wouldn't start up and offer Linux? In fact there have been OEMs shipping Linux, but they stopped or hid their offerings after the high rate of returns from people buying and returning them. And the tech-savvy people who are the real customers can buy ANY PC out there and throw Linux on it (or just get a Macbook Air). There's not much money in it for the OEMs, just high support costs, return rates, and a whole toolchain to build Linux images (hard on thin margins).
If there is real mass consumer demand, a company can ship Linux profitably; MS won't care about them until they start making hundreds of millions a quarter, at which point successfully fighting a lawsuit based on antitrust precedent on PCs, or paying a small amount per PC, won't really hurt the manufacturer. So stop painting MS as the bogeyman in this discussion; Windows' and Office's dominance (with Open/LibreOffice not being up to par for various reasons, including Office compatibility) is a well-known reason.
1). Yes, companies ship Android devices, but if you read any of the articles you would have seen that those companies pay $8-15 for a Windows Mobile license for each phone.
2). You are right about Microsoft not trying to litigate for Linux servers, but I'd take a guess that is because most companies already have a heterogeneous environment, and if push came to shove I would assume that most companies would consider a full switch to Linux if Microsoft forced their hand.
4). Dell does not currently distribute any consumer laptop with Linux as an option, at least not when I tried the top 10 popular laptops on their site.
5). They don't profit from putting System76 out of business, and in fact would probably cause much more financial harm in terms of what it would do their image as picking on the little guy.
6). You are absolutely right about the low consumer demand. However, don't you think it's weird that NOBODY is pushing it? Not even as an experiment? No smartbook running Android? Why is the Asus Transformer the closest attempt we see?
> Dell does not currently distribute any consumer laptop with Linux as an option,
>However, don't you think it's weird that NOBODY is pushing it? Not even as an experiment? No smartbook running Android?
Wrong. It's funny that you're commenting so vehemently, with so much anger against MS in this and previous threads, and you haven't even heard of Dell's Sputnik?
Microsoft is plenty evil, but there's no need to make up things.
Edit: How many would buy Dell's laptop? Most devs would prefer their own choice, perhaps a Macbook Air, or to throw Linux on a Thinkpad. How does Dell get a leg up on Lenovo (with Windows) here without making 20 laptops with Linux?
Can you buy a Sputnik laptop? No, it's not for sale. Until you can, it's just another Linux laptop that gets press and mysteriously never gets released; see every smartbook (aside from the recent Chromebooks, which corps. still have to pay to suppress patent litigation) http://www.techradar.com/us/news/phone-and-communications/mo....
This strategy is very risky. Microsoft can litigate any OEM into the ground, and the fees they extract from Android makers would eat away any profit margin in the x86 PC space. The fact that they convinced Android OEMs to pay can be used to legitimize those patents in the PC space, and many of their "licensees" are already PC makers (and thus will have a harder time litigating those patents).
Any shakedown by MS in the PC space is extremely unlikely to fly given the terms of settlement in the antitrust case in both the US and EU, and the restrictions still in place in the EU on the browser choice screen, requirements to publish network and document specs, etc. Hell, B&N even tried the antitrust angle in the Android tablet patents case.
I find this discussion increasingly irrelevant. It's hard to make the case that companies like Samsung or Dell are not pushing Linux more just because MS may increase their Android requirements.
Even a hint of any such thing would result in a multibillion-dollar fine paid to the OEMs and an injunction against MS in the courts. What about the fact that there is little interest among the general public in Linux (for many reasons, including the Wintel and Office monopolies), and that the OEMs are hampered more by that and by the reasons in my OP above than by your weak reasoning?
Have you seen how few Chromebooks have been sold? Do you really think MS hampered them with their patent licensing deals(there have been a couple) or do you think there are much bigger reasons that ChromeOS/Linux hasn't taken off ? What about new companies like System76? Why don't you start a new company that sells Linux PCs? Do you know how Dell or Compaq started? It was just one or two people in a garage assembling PCs and throwing DOS on them. The difference was that people bought a lot of them.
You argue there is no demand, and I say that, even if there were demand, PC margins would not support the Microsoft taxes associated with Linux. You say Microsoft would not extort PC makers like they do Android makers, but there is no evidence for that. If you have any documents suggesting otherwise, it would be nice if you could produce them.
Besides, when all PC makers depend so much on Microsoft, there are many other creative ways to punish OEMs who want to be more independent without raising too many eyebrows.
>A. Microsoft shall not retaliate against an OEM by altering Microsoft's commercial relations with that OEM, or by withholding newly introduced forms of non-monetary Consideration (including but not limited to new versions of existing forms of non-monetary Consideration) from that OEM, because it is known to Microsoft that the OEM is or is contemplating:
> developing, distributing, promoting, using, selling, or licensing any software that competes with Microsoft Platform Software or any product or service that distributes or promotes any Non-Microsoft Middleware;
> shipping a Personal Computer that (a) includes both a Windows Operating System Product and a non-Microsoft Operating System, or (b) will boot with more than one Operating System; or
> exercising any of the options or alternatives provided for under this Final Judgment.
>Besides, when all PC makers depend so much on Microsoft, there are many other creative ways to punish OEMs who want to be more independent without raising too many eyebrows
If an OEM can prove it, billions of dollars to them in court.
Maybe System76 can be the big Linux OEM, except that Linux folks buy Thinkpads or Macs.