Major Linux Problems on the Desktop, 2016 Edition (narod.ru)
329 points by gerbilly 711 days ago | 371 comments



He's right.

Many of the driver problems come from the fact that Linux finally worked on the desktop about the time desktop machines were replaced by laptops. Desktops with slots tended to have relatively well-defined hardware, and plugging in third party hardware was normal. This is much less true for laptops. OS development for laptops requires that laptop. It needs a Q/A organization which has one of everything you support. Linux lacks that.

Microsoft got drivers under control with the Static Driver Verifier, which uses automatic formal proofs of correctness to determine whether a driver can crash the kernel. (The driver may not control the device correctly, but at least it won't blither over kernel memory or make a kernel API call with bogus parameters. So driver bugs just mean a device doesn't work, and you know which device and driver.) All signed Windows drivers since Windows 7 have passed that. This has eliminated most system crashes caused by drivers. Before that, more than half of Windows crashes were driver related. Linux has no comparable technology.
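To give a flavor of the class of bug such a rule checker is meant to rule out, here is a hypothetical, Linux-flavored fragment (SDV itself works on Windows driver APIs and its own rule language; this is only an illustration): a kernel API is called in a context where it isn't allowed, which a checker can flag without ever running the driver.

    #include <linux/slab.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(dev_lock);

    /* Hypothetical driver helper: kmalloc(GFP_KERNEL) may sleep, which is
     * not allowed while holding a spinlock. A static rule checker flags the
     * call site; at runtime this could deadlock or crash the kernel. */
    static void *buggy_alloc(size_t len)
    {
            void *buf;

            spin_lock(&dev_lock);
            buf = kmalloc(len, GFP_KERNEL);  /* BUG: may sleep under spinlock */
            spin_unlock(&dev_lock);
            return buf;
    }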

The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.


> The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.

The Linux kernel is about 20MSLOC. The Windows kernel is about 50MSLOC. IIRC, OS X used to be ~80KSLOC.

Problems with debugging are endemic to any monolithic kernel. Neither Windows nor OS X is easier to debug technologically, but Microsoft and Apple both have many employees and lots of money invested compared to Linux.

(Also, Apple solved the problem by making their OS specific to their computers, so they had the whole thing under their control.)


>Microsoft and Apple both have many employees and lots of money invested compared to Linux

There are a lot of people who are paid to work on the kernel full time from Red Hat, Google, IBM and many others. If I had to guess, I'd say there are probably more than for the other two; it would be interesting to find out. But if you include people where it's not 100% of their job, but still an official part of their job, I'd say it's almost certainly more for Linux (not even counting unpaid contributions).

Linux is the most popular platform for servers and HPC because most of the time it's the better kernel. It's so dominant that Apple has basically left the server area and the handful of showcase supercomputers built on Apple gear have long since faded from view. Linux went from having one supercomputer in the top 500, a fraction of a percent, in 1998 to 98.8% of the top 500 currently.[0] The other six seem to be IBM machines running something else. OS X's first release was 2001 and it and Microsoft offerings are simply not present in that top section of the HPC space.

Linux is also the kernel on the most popular smartphone platform so it's not all computational either, when the hardware is tightly controlled it works fine. The problem isn't the number of developers on kernel or how hard it is to debug, it's that laptops and desktops aren't offered from a single company that can tie everything together.

[0]http://www.top500.org/statistics/details/osfam/1


> Red Hat, Google, IBM and many others

And they are all working on desktop/laptop support, right? Linux is great in the data center because it has big guns behind it in the data center. Linux runs well on cell phones because Google put in the effort. As soon as someone is willing and able to put in the effort on desktop Linux, it will be as good as it is in those other areas.


I think everyone can agree that Linux's biggest shortcomings on desktops/laptops are the graphical and audio part. So, even though I agree with "As soon as someone is willing and able to put in the effort on desktop Linux, it will be as good as it is in those other areas." I would replace "someone" with "every vendor".

I feel like most vendors (NVIDIA/ATI/Wacom/whatnot) concentrate much more of their effort on supporting Windows and even OSX because that's their audience.

Also, I remember reading somewhere that NVIDIA/ATI work closely together with Microsoft because of DirectX [citation needed, though]. I had the opportunity to work with DirectX (the new API) and I found it much more pleasant than working with OpenGL (even though I ended up using OGL in the end; I used Windows and DX to simplify prototyping, because doing the same thing in OGL required much more dev time, at first at least).

EDIT: Also, let's not forget how the majority of the Linux developer community neglects GUIs and overall end-user friendliness, and how the environment is in most cases quite hostile towards UX/GUI designers in general. There are, of course, exceptions, but those are few.


While it is a vendor issue, they don't just randomly support hardware. They go where the big guys are. As you say, NVIDIA/ATI contribute to DirectX, but it happens because MS makes it happen. Google dragged the cell vendors into supporting Android. Desktop Linux just needs a backer that is big enough to make them put forth the effort.


"...making their OS specific to their computers..."

As a non-Linux open source project OS user, I am continually faced with driver deficiencies as a result of hardware specs being under NDA.

A recurring idea I have which I am here sharing for the first time (apologies!) is: why not just pick a single item of hardware and build an open source, free OS project around it?

Why? Hopefully, more control, to the extent possible (notwithstanding Intel ME, etc.). Coreboot, support for as many peripherals as possible, etc. Most importantly, the elimination of the issue of hardware support and the notion of a list of "supported hardware".

Why not? Performance, latest advances, etc.

Hasn't this been done? Maybe. OpenWRT, etc.? But my understanding is that the use of Linux on this router was initially the non-public work of a company, Linksys, and the open sourcing by Cisco was neither anticipated nor intentional.

How is my idea different? The project would be free, open source, but intentionally focused on a _single_ target. Big tradeoff, but maybe some interesting gains.

To be clear, I like the idea of hardware that is more or less "OS agnostic", e.g., RPi and booting from SD card.

But I am tired of watching volunteers struggle to keep up with the latest hardware (many thanks to the OpenBSD and FreeBSD contributors who write drivers for networking, etc.), or having to settle for binary blobs.

Maybe I am just dreaming, but I could foresee such a project potentially growing into a symbiotic relationship with some manufacturer if the OS developed a sufficiently large, growing user base, and these users were all purchasing a very specific item(s) of hardware, known to be supported by this OS.

If you comment, please remember I am not a Linux user. And hardware support is not quite the same under BSD. As such, it is something I often have to think about and cannot just take for granted.


Linux mostly works on most laptops. However, my next laptop is definitely going to at the very least be on Ubuntu's supported list and probably going to be either a Dell or system 76 with pre-installed Ubuntu.


Oops, I meant ~80MSLOC for OS X.


Kernel debuggers exist. Of course, we're all aware of Linus' opinion on the matter.


Not all debug tools are single-step debuggers. There are memory verifiers, formal proof techniques and self-testing frameworks that Linux uses that are very, very useful.


What are those tools?


sparse -- adds annotations to kernel code which can be checked by the compiler. It is a little bit like a parallel type system which provides domain-specific knowledge like "this function takes lock A and then lock B" or "this function runs in interrupt context." See https://sparse.wiki.kernel.org/index.php/Main_Page
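For instance, a made-up fragment just to show the flavor of the annotations (names here are invented for illustration):

    #include <linux/errno.h>
    #include <linux/spinlock.h>
    #include <linux/uaccess.h>

    static DEFINE_SPINLOCK(example_lock);
    static int example_count;

    /* Hypothetical helper: __must_hold lets sparse's context checking warn
     * when this is called without example_lock held. */
    static void example_bump(void) __must_hold(&example_lock)
    {
            example_count++;
    }

    static long example_ioctl(int __user *arg)
    {
            int val;

            /* __user marks a userspace pointer; dereferencing arg directly
             * (val = *arg) would draw a sparse warning. */
            if (copy_from_user(&val, arg, sizeof(val)))
                    return -EFAULT;

            spin_lock(&example_lock);
            example_bump();
            spin_unlock(&example_lock);
            return val ? 0 : -EINVAL;
    }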

kmemcheck -- sort of like valgrind, but for the kernel.

CONFIG_FAULT_INJECTION -- inject random faults at runtime (such as in memory allocation) to test infrequently encountered error paths.
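A hypothetical example of the kind of rarely-taken error path this exercises: with failslab fault injection enabled, kmalloc() is forced to fail at random, so the cleanup branch below actually runs during testing instead of only on a badly loaded machine.

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct example_dev {
            void *buf;
    };

    /* Hypothetical setup function; the !dev->buf branch is almost never hit
     * in normal runs, so fault injection is what gives it test coverage. */
    static int example_setup(struct example_dev *dev)
    {
            dev->buf = kmalloc(4096, GFP_KERNEL);
            if (!dev->buf)
                    return -ENOMEM;
            return 0;
    }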

CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK -- run expensive mutex validation checks at runtime.

coccinelle -- a source code matching and transformation engine. You can use it in some of the same contexts as sed or awk. Unlike those tools, it is aware of the C language so it can do smarter things like add an extra final argument to all occurrences of a call to do_foo_bar_baz(). See http://coccinelle.lip6.fr/
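A made-up before/after to show the kind of tree-wide rewrite it automates (the patch itself is written in coccinelle's own SmPL language; only the C effect is sketched here):

    #include <linux/slab.h>
    #include <linux/string.h>

    /* Before: a pattern a semantic patch can match everywhere in the tree. */
    static void *alloc_cleared_before(size_t len)
    {
            void *buf = kmalloc(len, GFP_KERNEL);

            if (!buf)
                    return NULL;
            memset(buf, 0, len);
            return buf;
    }

    /* After: every matched kmalloc+memset pair collapsed into kzalloc. */
    static void *alloc_cleared_after(size_t len)
    {
            return kzalloc(len, GFP_KERNEL);
    }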

checkpatch.pl -- Checks a patch to see if it conforms to the kernel style guide. Simple things like enforcing 80-column lines, but also more complicated things as well like variable naming, whitespace, etc.

smatch, flawfinder -- static analysis tools that are similar in principle to Coverity. Like Coverity, they are unsound, but often helpful.


Thanks for the summary.

However, what do we have on the formal proof side?


I'm not -- does he not like kernel debuggers or something?


"No" would be an understatement: https://lwn.net/2000/0914/a/lt-debugger.php3


He apparently softened somewhat on the issue and did eventually (2008 / 2.6.26) merge a debugger into mainline. I don't think he ever really had a good answer to Alan Cox pointing out that you can't always reason your way through hardware misbehavior.


His argument also necessarily presupposes that you're never going to use a debugger to root out latent bugs; that is, you're only using a debugger because you've just written some buggy code and you're trying to figure out where the problem lies. Which is silly. There are bugs in the kernel right now, there were bugs when that was written (he even alludes to it, but seemingly fails to understand the implication of it), there will be bugs introduced in the future, and someone at some point might like to find those bugs.

Some of the best "What is the worst bug you ever encountered?" war stories are cases where people end up narrowing things down to a bug in what they were previously treating as bedrock.


> And quite frankly, I don't care. I don't think kernel development should be "easy". I do not condone single-stepping through code to find the bug. I do not think that extra visibility into the system is necessarily a good thing.

That's insane.


That's a really interesting post, and I'm glad I read it. I find myself agreeing with Linus quite a bit, and until now I did not realize that there is programming being done without step-by-step debugging. And I think Linus's claim that people would be more careful when first designing and writing code if they didn't have a debugger to help them going forward makes a lot of sense.

Unfortunately I work with too many people whose approach to programming is "Read spec, code, debug why code isn't working to spec, fix that specific bug". Design, architecture, etc. are simply not part of the process. I can see how a lack of easy debugging may force some forethought into the development process.

That being said, how valid would this be in a development environment (like the kernel) where a lot of the work you are doing is making changes designed and created by someone else? Linus says the solution is to make sure you were careful at the start. But what if you weren't even there at the start, and had to step in later?


The point is his view about kernel debuggers is complete nonsense, and I'm baffled that you and so many others could ever take it seriously.

Yes, people would be more careful without debuggers, the same way they'd be more careful with cooperative multitasking hanging the system when they forget to yield. That doesn't make it good.


Quite often I read posts by Linus and find myself unable to take him seriously. He's arrogant, abrasive, and some of his opinions (security, debuggers apparently) are ridiculous.


Honestly, Linus is a genius about the social side of big projects. He managed to scale Linux development very well, even writing Git along the way. On the technical side, judged in the context of its era, the record is more mixed: before the 2.6 series the kernel had far too many architectural flaws to be considered for serious workloads (even if I enjoyed using it from the 2.0 series onward); and today I don't know a single programmer around me who thinks he is right about his approach to security. On the other hand, he was one of the first high-profile programmers to call modern compiler writers' interpretation of C undefined behavior insane.

But at least the early technical shortcomings can be explained by the fact that the Linux kernel hackers of that time were inexperienced, and several times they quietly learned from (some) NT designs and/or the big Unixes of that era (probably mainly Solaris) and adopted them, not long after trolling about why their previous, simpler approach was good enough (when it clearly was not). The security approach of today, coming from people who have worked that long in the industry, is inexplicable. (Maybe it is just too early and they will quietly convert to reasonable opinions in 2016? :p)


No, he's not. He is a little arrogant - it's hard to be such a significant tech lead without being so - but if you actually read what he writes, he's not actually that abrasive.

What happens is that some sensationalist idiot will see Torvalds lose his patience and post an angry retort to someone, and post just that response to places like HN, completely out of context. I've lost count of the number of times it's happened here, when everyone's tutting about Torvalds' behaviour and how he's "so aggressive right out of the gate", when no-one looks at the thread history to see him being patient and explaining things.

In any case, zero of the tech heads from Jobs on down satisfy your demands to be taken seriously. I really don't understand why Torvalds gets held to this higher standard of hippie-level friendliness when people don't expect the same from other major tech project managers. I mean hell, Jobs was adored for having opinions that people considered ridiculous.


And people would be more careful drivers if you replaced airbags with a metal spike.

You would wind up with more careful drivers, but anyone who thinks that's the only goal is misguided.


I think the better analogy would be driving with a very, very dirty windshield. Or a slippery steering wheel.

It's not that the absence of a debugger would make you (in Linus' opinion) more careful because it's risky, but because it's really inconvenient.


And yet the kernel that he created, and that is developed with his methods, runs on the greatest variety of systems, from toasters right up to being the predominant OS in the top 500 supercomputers (by a wide margin).


Not so sure I really disagree. If I had to guess, I'd think there are a lot more people working on the Linux kernel in aggregate than on either the Windows kernel or the OS X kernel.


Aren't a significant chunk of those people "working on the linux kernel" 1-commit wonders? My guess is if we rank by man-hours, Windows would top the list, possibly followed by OS X.


The desktop computer market is a pretty small part of the overall kernel developer market. Embedded systems is by far the biggest employer. There are just many more phones and other devices out there than PCs-- a trend that seems set to continue. Linux dominates embedded systems and the server market, so I would expect it to dominate the overall mindshare numbers as well. I would expect OS X to have the least number of kernel developers because there just aren't that many hardware configurations for iDevices, and everyone who is bringing up an OS X board works in Cupertino (which has good and bad aspects, of course).


I wasn't sure OS X would be ahead of Linux, that's why I hedged my statement.

Also, while the embedded Linux ecosystem is rather large, what fraction is kernel code that gets upstreamed/mainlined? As an outsider, my guess is that it's a minority (some of the work is on non-kernel code, and not all embedded kernel code changes are or will be in the mainline kernel, so I would not count those as actual contributions).

I say this as a Linux desktop user/layman.


There are a large number of people who do kernel-level integration work-- configuring things, downloading drivers, setting things up, and otherwise dealing with the hardware. This kind of work isn't the most glamorous, but it definitely counts as "actual contribution" to the community. Integrators find bugs, make suggestions on mailing lists, and help develop products that make investing in Linux worthwhile for everyone involved. Most integrators that I've known like to submit patches every now and then, just for fun.

I would guess that there are more people whose primary job is integration than people whose primary job is kernel development. I don't really have any concrete numbers, though. On the driver development side, companies are starting to get smarter about getting their code upstream first-- look at NVidia and Intel's recent efforts in that area, for example.


Most committers only review or care about a small area of code, and thus have a lower frequency of commits. That's a good thing, IMO.


> Microsoft got drivers under control with the Static Driver Verifier, which uses automatic formal proofs of correctness to determine whether a driver can crash the kernel. ... Linux has no comparable technology.

That's a pretty cool idea (also similar to the static verification in NaCl.) Is there a reason one couldn't implement a static verifier for Linux drivers? Would the problem be harder in any sense for Linux, because of e.g. number of exposed kernel APIs? Or could a static verifier "read off" the kernel's API with no manual annotation required?


AFAIK MSR sponsored lots of research in this area. Program verification is still not that easy.

[edit] http://research.microsoft.com/en-us/projects/slam/


Not really; people are doing some experiments with the Linux kernel as well. https://www.usenix.org/system/files/conference/osdi14/osdi14...


Seems like a good way to cope with this, from a market standpoint, would be for Linux development to focus on specific laptops. If, say, Dell were to pay Ubuntu to test and verify some of their specific laptops, and Ubuntu subsequently was able to list "100% Certified" laptops for purchase, it could create an opening.

Same thing with developing to put it on Apple laptops with that very specific hardware set, etc.


Isn't this already happening? http://www.ubuntu.com/certification/desktop/


Beware. I went out of my way to get an Ubuntu certified laptop[1]. It took me months to get it to a usable state. Graphics drivers crashed or corrupted the screen[2]. Bluetooth didn't work. Wifi didn't work. While suspended to RAM, it drained 10% of the battery every hour. In short, it was a nightmare.

I've had it for almost two years now, and I've given up on getting Bluetooth to work. After resuming from suspend, the wifi works about half the time, and screen brightness is always set to maximum. It's tolerable, but I only use the laptop when I have to.

I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.

1. The Lenovo X140e: http://www.ubuntu.com/certification/hardware/201309-14195/

2. https://twitter.com/ggreer/status/548923450640056321


> Beware. I went out of my way to get an Ubuntu certified laptop[1]. It took me months to get it to a usable state.

Why didn't you return it and get -say- a Thinkpad? (Were you -perhaps- just curious how shitty the "Ubuntu Certified Laptop" program is?) It clearly failed the "Fitness for advertised purpose" test. AFAIK -if you're in the US- the seller can't refuse to accept your return... unless it was sold as-is.

> I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.

I've found great success with the following method:

* Get a detailed list of the parts inside a given laptop. (lspci info from the target model is a really good sign)

* Run screaming if the video card is made by Nvidia. [0]

* Find if there are in-tree kernel drivers for each of the parts. (If there are, this is a really good sign.)

* Find the out-of-tree drivers for the remaining parts, and see if there are solid plans to get them in-tree. (If there are any such plans, that's a good sign.)

* Discover the known issues for all of those drivers.

* If the drivers seem to do everything that I need them to, and the known issues list doesn't contain any show-stoppers, the laptop will likely work just fine. :)

[0] I know that this is a controversial opinion. I've had awful luck with the nouveau driver and really bad luck with the official Nvidia driver. Other people haven't. I'll stick with Intel-powered laptop video cards if I can. :)


> Why didn't you return it and get -say- a Thinkpad?

The Lenovo X140e is a ThinkPad.[1] I didn't blindly trust Ubuntu's certification. I made sure to get a brand that historically has had good Linux support. I also knew about Nvidia graphics and avoided them. Still, I got burned.

I don't doubt your checklist is good advice for buying a Linux laptop, but it's simply too time consuming to check all of those things. Even if it wasn't, the likelihood of everything working well is low. All it takes is one bad driver for one piece of hardware and the laptop becomes a constant annoyance. Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong. Maybe audio won't automatically switch between headphone and speaker output. Maybe the fan will run at a few discrete speeds instead of gradually ramping up/down. Maybe it will wake from sleep if you open the lid, but not if you hit a key on the keyboard.

I'd rather just pay money and get something that I know will work. That's why my main development machine is a MacBook. I wish there was a competing brand of unix laptops, but so far… no dice. :(

1. http://shop.lenovo.com/us/en/laptops/thinkpad/x-series/x140e... Though after I purchased it, some people told me it wasn't a true ThinkPad, whatever that means.


> The Lenovo X140e is a ThinkPad.

Oh, heh. Derp. Edit: I mean to say: My bad. I overlooked that. :(

I see at [0] that the only Ubuntu Certified configuration is with a rather ancient pre-installed version of Ubuntu. Did you get the system in that configuration, or did you purchase it and put Linux on it? [1]

Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?

> [I]t's simply too time consuming to check all of those things.

Odd. I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years. Perhaps my opinion is atypical.

> Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong.

I guess I've had fantastic luck with my personal selections and the recommendations that I've given to others. Given that luck is my super power, I'm somewhat willing to believe that my experience is somewhat atypical. :)

Anyway. Good luck with your projects and such, and I hope that Apple keeps producing hardware that meets your needs.

[0] http://www.ubuntu.com/certification/hardware/201309-14195/

[1] Still... one would expect that any Ubuntu Certified Laptop that has a supported hardware configuration would be detected by the Ubuntu installer and configured appropriately (or you'd get a big fat warning when the hardware isn't "supported" by a later Ubuntu release). OTOH, Canonical isn't the best at getting things right, so... :-/


> I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years. Perhaps my opinion is atypical.

Aargh, this is precisely the sort of attitude that is causing problems in the first place! Of course some research is necessary prior to any purchase, but the issue here is that clear and correct information doesn't even exist!

The expectation of nerds that people have this sort of time to take off simply to get a working computer in this day and age is mind boggling. People shouldn't HAVE to take a week to do research to get basic things like this to work.

It's not reasonable and your opinion is atypical for people who have full time jobs with long hours and families to look after. I really want to use and support Free/Libre Gnu/Linux, whilst still being productive. But many Linux users have better things to do than trawl through lspci, do literature reviews of ancient threads on email lists and bug reports, etc, etc, and then finally somehow manage to design a hardware/software configuration that even mostly kind of works. The very worst thing of all is that the typical Linux nerd thinks that this is normal.


So just get a laptop with pre-installed Linux then...


Can you give me a recommendation?


I think that Dell does some. There is a company called System 76 that sells pre-installed Ubuntu boxes. I can't speak for either company's products, having personally tried neither myself. I will be using one of them for my next purchase but I'm not sure which.


> Did you get the system in that configuration, or did you purchase it and put Linux on it?

I tried to get it with Ubuntu preinstalled, but neither Lenovo's website nor their phone support could configure it that way. After about 20 minutes on the phone, I managed to get the exact hardware configuration shown on Ubuntu's certification page: AMD A4-5000, Broadcom BCM43142, etc. In hindsight, I doubt Lenovo ever sold an X140e with Ubuntu preinstalled.

> Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?

When I first turned it on, I noticed the X140e had several annoying LEDs. Both "ThinkPad" logos had glowing red dots in their i's. There was also a large green LED near the camera. It glowed whenever wifi was powered-up (pretty much all the time). I found these annoying, so I painted them over. Oops. Next time I'll use nail polish, which can be removed with acetone.

> I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years.

Our thoughts on this matter are quite similar. On average, I spend almost 10 hours a day using my primary development machine. I upgrade every 1-2 years, which works out to 5,400 hours of use. That's a lot of time interacting with one piece of hardware. I definitely want to make sure I get the best tool for the job. That 5,400 hours has another implication: Amortized over the life of the machine, even a $3,000 laptop will only cost ≈50 cents per hour. That makes me extremely insensitive to price. I simply want whatever works best.[1]

As peatfreak said, research won't guarantee satisfaction. The only way to really know if a piece of hardware will work for you is to actually use it. That's one huge advantage of Apple (and now Microsoft) products: I can walk into a store and test the hardware/software combo. In just a few minutes, I can tell if it lacks the annoyances in my list[2]. These details are extremely hard to verify without actually using the machine.

I would be much more open to getting a Linux laptop if I could try it out before buying. Unfortunately, I don't think the market is big enough to make brick-and-mortar stores feasible.

1. I've written about this in more detail at http://geoff.greer.fm/2010/10/30/expensive-computers-are-wor...

2. http://geoff.greer.fm/2015/07/25/laptop-annoyances-or-why-i-...


> Run screaming if the video card is made by Nvidia.

In my experience, it's the opposite. 5-6 years ago, ATI was the friendly one and nvidia gave you hell trying to get it to work. Now it's flipped - the ATI cards I've tried just plain don't work, whereas the nvidia ones will work, and with a few choice harsh words, will work well. Just my anecdata, though, and this is with desktop cards, not laptops (I use thinkpads with intel graphics...)


Are the ATI cards you've been using the absolute newest ones, or a couple of generations back?

Also, are you using the closed-source or the open-source ATI drivers?

And, are you using Ubuntu, or are you using some other distro? (My experience with non-LTS Ubuntu has been... substantially less than stellar over the past several years.)


Well then. It's as if the people at Ubuntu are pretty smart and know what they're doing. :-)


Except they certify systems with lots of problems. Dell had to stop selling their XPS 13 with Ubuntu for a while due to the number of issues.

http://bartongeorge.net/2015/07/20/xps-13-developer-edition-...


I happily paid a premium to System76 for their Linux laptop. Then I took a calculated risk and replaced their Ubuntu distribution with CentOS 7. I've had small hardware troubles (the biggest being video initialization during boot - sometimes it just locks up at a black screen; power cycle eventually resolves the issue, which does not recur until the next power cycle) - but my biggest troubles come from my employer's enthusiastic embrace of proprietary Microsoft protocols. For which I use the company issued Windows 7 machine, and the Mac users get a virtual machine image.


Nonconformity, customization, and choice are major reasons people use Linux instead of OSX, just as they are for Androids over iPhones. People who are okay with One Good Product Per Category are using Apple hardware already.


Only a minority of the problems in the list are laptop-specific. The hybrid graphics mess is an unfortunate one, though most people are well served by laptops with integrated-only video. People who want to play 3D games on laptops get the short stick.


> though most people are well served by laptops with integrated-only video

You're right and it's amazing. I use Linux exclusively on my laptops and had stuck with dedicated GPUs running Nvidia binary drivers since 2009. But this summer I bit the bullet and got a new Broadwell laptop, and I honestly do not miss a single thing about the Nvidia graphics. My Intel 5500 can push well over 10 million pixels across three displays, 3D-accelerated, drawing minimal power.


Intel drivers are awesome (and made by Intel, mind you).


> The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.

That makes no sense. Most of that is drivers and platforms you won't ever use. And some forms of symbolic checkers exist for Linux (maybe not as advanced as for the Windows kernel space, I don't know -- although even simple local heuristic checks, run automatically, are quite useful, sometimes finding even more real bugs than very complex solvers).


"Most of that are drivers and platforms you won't ever use"

That's the point -- developers can't test all possible hardware configurations and resulting driver mixes. Some of the most often used architectures and configurations usually work pretty well out of the box, but if you have something a little more exotic, prepare to get your hands dirty. Honestly, I don't see a solution to this problem, unless some "openness" is sacrificed by signing drivers (Microsoft style).


Drivers and support for platforms should be separate from the kernel and work via a stable and well-designed API & ABI. The lack of this has been Linux's single biggest design flaw from the beginning.


Is Tanenbaum going to have the last laugh?


Tanenbaum was vindicated two decades ago, but most people have yet to get the memo. A lot are under the impression that basic software design principles apply to everything besides operating system kernels, for some reason. And the myth is highly persistent.


It's quite telling though that most microkernel proponents tend not to be kernel developers.


It is also quite telling that micro-kernel haters fail to acknowledge that microkernels won in the embedded space and in software systems for high-integrity deployments.


I... never said anything about these spaces, which I suspect you would agree require significantly different kernel design from a "desktop system" which is what this topic is supposedly about.


"Desktop system" is vague, but obviously u-kernels can serve as workstations, and hence desktops. CTOS was a major example of its day, if not the purest possible: https://en.wikipedia.org/wiki/Convergent_Technologies_Operat...


FWIW, CTOS was surprisingly unportable, which is hugely contradictory to the claimed advantages in this particular discussion.


Right, because most Linux proponents are kernel hackers. It's quite telling that you have no leg to stand on but to resort to hapless diversions. Or I suppose the flood of researchers who have built viable and innovative microkernel-based architectures throughout the decades are all a bunch of phonies?


"Right, because most Linux proponents are kernel hackers."

They don't have to be. All they have to do is not go around smugly suggesting they know better than kernel developers and they're ok in my books.

Incidentally, precisely how many of those research kernels have become widely used, mainstream kernels capable of high-throughput?

And do you really think it has turned out that way because the whole industry is full of blind dumbasses? I think it's a far more likely proposition that they understand something you don't.


I'm not sure what fantasy world you live in where the software industry is always adopting the most technologically superior solutions by default. No industry works like this.

YMMV on mainstream (they are widely adopted, though), but: OKL4, PikeOS, QNX...

It's quite obvious you have no background on the issues and are using this as an opportunity for provocation.


"OKL4, PikeOS, QNX"

High throughput, mister, high throughput.

Realtime != high throughput. It just means deterministic throughput. FSVO deterministic.

Show me people running big farms of servers running these operating systems where even single-percentage computational overheads really matter.

(added:) The reason for this is that it costs one hell of a lot flipping your page tables and flushing your TLBs every time you have to switch to ("pass a message", whatever) to a different subservice of your kernel.

(also added:) Oh and interestingly many (most?) users of OKL4 go on to host Linux inside it because, hey, it turns out that doing all your work in a microkernel ain't always all that great. So 90% of the "kernel" work in these systems is happening in a monolithic kernel.


QNX4 is high throughput, mister.

Other contenders include eMCOS and FFMK, though those are obscure.

That said, I don't even understand the logic. HPC clusters where single-percentage overheads really matter are an extremely specialized use case, so of course COTS u-kernels might not cut it. Where's the shocker here?

Response to added: Not necessarily with message passing properly integrated with the CPU scheduler.

Response to added #2: Hosting a single-server is a valid microkernel use case. What's your problem? Isolation and separation kernels are a major research and usage interest.


"QNX4 is high throughput, mister."

Ok then, show me the server farms...

I'm not even really talking about HPC, just the massive datacentres that run everyone's lives. All for the most part running monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSs that are able to cut it in these circumstances.

Even in a mobile device, you don't really want to waste battery doing context switches inside the kernel.

Microkernels have their place, but believing that the world that chooses not to use them are just clearly dumbasses is bullshit dogma.


You appear to be assuming (not being a mind-reader, whether you actually are or not is of course unknown to me) that QNX would automatically be used in server farms if it was high throughput; and, since it's not visibly used there, it is not high-throughput.

(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have a lower throughput relative to a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)

As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.

I can't say what's technically superior, but even if QNX was, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.

An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.

Just sharing my perspective here.


Entertain for a moment the idea that someone's rationale in choosing to deploy a given OS lies deeper than the 1-dimensional rubric you're suggesting, and instead may have something to do with questions like "how easy is it going to be to support this?" and other network effects.

You're getting all red in the face using some really dubious arguments to back you up here.


(response to 2:) Er, so bypassing the microkernel for the vast majority of your work is a vindication of the "microkernels are just better" line is it?


It's not bypassing the microkernel. It's using it either as a hypervisor or as a separation kernel [1].

[1] https://en.wikipedia.org/wiki/Separation_kernel


You missed Tron/ETron and the *tron variants, which is likely the most used kernel in the world.


> they are widely adopted, though

For some definition of "widely".

And feel free to stop adding personal attacks to your comments. They do not enhance the credibility of your posts.


"Widely adopted" is not a synonym for "known by the average consumer".


I know it's not. And I know about QNX, at least (the others are new to me).

And I know that you didn't claim they are mainstream, so we may be quibbling about where we draw lines around the word "widely". But...

What's the installed base of systems running QNX, say? (Throw in the others if you wish.) Estimates are acceptable, too, if you don't have hard numbers.


BlackBerry doesn't seem to give out hard numbers for installations, but they have overviews of the market and lists of particular customers per category here: http://www.qnx.com/solutions/industries/automotive/index.htm...

It's worth looking not only at how many, but at what. They're in vehicles, medical devices, industrial automation, military and telecom. Those are all areas where blunders lead to loss of lives, not just annoying downtimes. Insofar as infotainment and telematics are concerned, they estimated a 60% share in 2011, so it's likely your car runs QNX.


OK, if it's in cars (even if only one CPU per car, or even only in high-end cars), then yes, that certainly is "widely used". (In terms of numbers shipped, not necessarily in terms of "design wins" - but then, Windows doesn't have that many "design wins" either.)


So now you're moving the goalposts with "design wins". Just what are the design wins of a SysV Unix clone like Linux, pray tell? It's hard not to be on the offensive when you seem to beg for it. Where did the Windows comparison come from?

The design wins, of course, should be obvious to anyone willing to do a modicum of research.


Nope, not moving the goalposts. Re-read my previous post.

To clarify: Windows is, by any definition, both "mainstream" and "widely used". Yet it has very few "design wins". Therefore, the argument that cars are "only a few design wins" cannot be used to say that QNX, say, is not widely used or mainstream, since Windows is obviously mainstream and widely used.

> It's hard not to be on the offensive when you seem to beg for it.

You need to re-calibrate your sensitivity. You seem eager to take offense at nearly everything. Very little of it is worthy of your outrage.


The "design win" comment should be read as a concession to your earlier point about the role of technological superiority in industry decisions.


Every iPhone now shipping, and most Android devices too I believe, are running their main operating system kernel as a layer on top of an L4 kernel. The L4 layer mostly handles low-level security and the cell modem and stays out of the way except for that. Still, I think that should certainly count as widely adopted.


Correction: in Apple's case, the Secure Enclave, which runs L4, does not run on the application processor but on a separate ARM processor integrated on chip. Competitors tend to use TrustZone and hypervisor mode for this, but Apple currently uses them only for kernel patch protection rather than anything more important.

Not that that changes the core fact that Apple is shipping L4.


Your comment would imply that Javascript may not be the most technologically advanced solution for execution on remote clients. This is obviously wrong, so by implication the Software Industry DOES adopt the most technologically superior solution by default.


There are zillions of academic solutions that if implemented properly would be better than the industrial version. Academics are just notoriously bad at building real-world systems. I think this is mostly because it's a waste of time and money as far as publications are concerned.


This is basically the "everyone's a dumbass" argument.


I didn't say people in industry weren't smart. There's plenty of stuff that gets published at conferences where the industry guys are like, we did that 15 years ago.

My argument is that there's lots of great-in-theory but untested-in-practice stuff in academia, and that you can't discount something altogether just because it's untested. It's hardly fair to compare the output of a few grad students over a few years with all of the effort that goes into a major industrial product.

And anyway, the architecture of Linux originated in academia too.


Some of them are.


I don't disagree.


Marginally off-topic, but Minix 3.3 runs the (almost) complete NetBSD userland now. To use X11 you have to compile the development branch, however.

It's a real, POSIX-compliant, reliable microkernel that can run useful things now.


Which is cool and all, but the isolation is reportedly (USB driver crash) a very leaky abstraction and the entire thing is written in C. I'm cheering on Redox OS, which looks better from my very limited level of understanding.


It's also got almost no one actively developing it and very little project or organizational infrastructure to speak of.


[flagged]


Real time OSes used in high integrity systems, mobile radio stacks, car control systems, ...


Lower throughput. Almost by definition, real-time systems focus on latency at the expense of throughput.


QnX.


QNX is making a comeback, via robotics and automatic driving. All the Boston Dynamics robots ran QNX; all the hydraulic valves were coordinated by one CPU. The valve servoloop ran at 1KHz and the balance loop at 100Hz.

QNX is behind some automotive dashboards, and they're moving into automatic driving. They have some big announcement coming at CES in January.

But nobody runs QNX on the desktop any more. This year, they finally stopped supporting the self-hosted development environment. Ten years ago, you could run Firebird (pre-Firefox) and Thunderbird on a QNX desktop. But when QNX stopped offering a free version, free software development for it stopped.


I have a barebones re-implementation of QnX for the 32 bit x86, I don't have time enough to clean it up or port it to the 64 bit model. It blew the doors off the competition back in the day (about 2 decades ago), 200K context switches per second on a 33 MHz 486. Fast enough for real time control of all kinds of hardware and with a seamless path from a self hosting desktop environment to embedded hardware. I never got around to porting 'X' to it, but I did build a rudimentary window manager and some apps (terminal, calculator, some graphic demos to test the blitting software). Best demo was 250 tasks running independent graphics demos in windows without any noticeable lag or stutter. I really liked that project, some of the best code I ever wrote.


Is there source available for this or QNX?


Not for my stuff, I do have it (obviously) but it is definitely not worth sharing in the state it is in. Essentially it is a kernel, some userland device drivers and a rudimentary (but functional) network stack cobbled together from various bits and pieces. The toolchain was GCC and djgpp to bootstrap the development until it was capable of self hosting. It would need serious work (several man-months) before it could be opened up.

QnX had an open source program going for a while but it was shut down again. See: http://www.qnx.com/news/pr_2471_1.html


Did anyone here manage to snag the code before the open source program was shut down?


Even if they did, it all depends on the terms of the license whether or not it could have been used to fork it or to base a free version on it.

" Access to QNX source code is free, but commercial deployments of QNX Neutrino runtime components still require royalties, and commercial developers will continue to pay for QNX Momentics® development seats. However, noncommercial developers, academic faculty members, and qualified partners will be given access to QNX development tools and runtime products at no charge.

Customer and community members will also have the ability to participate in the QNX development process, similar to projects in the open source world. Through a transparent development process, software designers at QNX will publish development plans, post builds and bug fixes, and provide moderated support to the development process. They will also collaborate with customers and the QNX community, using public forums, wikis, and source code repositories."

Suggests that it was open source more in name than in fact.


I'm more curious to read it than to use it, though.


Wasn't there a patent on the message passing part of the QnX kernel?


I don't know. Frankly I don't give a rats ass if there is or isn't but if there is such a patent it would now be the property of RIM (or whoever gets it after RIM folds).

Message passing pre-dated QnX by a considerable time, they just did a really nice and clean implementation of it.

I'd absolutely support their copyright claims on their code, at the time their implementation was unique but I'd totally object against any patent claims, message passing systems had been used widely by that time, also at the kernel level. QnX may have been the first microkernel on that principle that received wide adoption because of the strength of the implementation.


High throughput. As I've said elsewhere, realtime != high throughput. Just deterministic throughput. Users of these systems are willing to use slightly overpowered hardware if it means hitting processing deadlines.


For the desktop, most users care about responsiveness, not throughput. That high throughput Linux kernel makes it utterly craptastic for professional audio usage, with insane amounts of latency. Of course, that has more to do with ALSA being a steaming pile than the kernel in general, but it's one example that shows throughput isn't the only thing that matters /especially/ on the desktop.


Linux users often take the idea of 'self-examination' way too far and it turns into 'self-disparagement'.

The fact is, if you vet your hardware and use a major distro (Ubuntu, OpenSUSE, Fedora) you'll wind up with a perfectly functioning Linux desktop or laptop.

If you think about it, OSX only runs on a few laptops. Linux runs perfectly on more laptops than exist for OSX. Windows runs on many laptops, more often than not quite well, though not always perfectly, despite being bundled together.

I've been using Ubuntu on a ThinkPad T530 for several years, it just works. Couldn't be happier. Everything works BTW, function keys, fingerprint scanner, everything.

As for the Linux eco-system - major browsers work, Steam works, there's a phenomenal ecosystem around Linux if you do any sort of programming, data science, etc... I really have nothing to complain about these days.


Exactly. A non-negligible part of his rant was about proprietary graphics drivers. Free software developers can't do anything about that, and anyway what is the point? Today, if you want to game, just use Windows; if you want a fine Linux laptop, get one with something like a Haswell with integrated graphics (or maybe Broadwell; still wait a little for Skylake).

And anyway, if you look at what the "competition" sells while being this critical, you can probably write something at least as long. Even the latest MS flagship devices running the latest Windows 10 versions are full of bugs now -- and likewise for major PC vendors like Dell -- so GNU/Linux distributions might as well become attractive just because Windows devices are of terrible quality today :p


I thought a similar thing when I was reading it, but regardless of who is responsible for a problem, it's still a problem that should be acknowledged in such a list. Graphics on linux is a hard problem to solve cleanly. It's not anyone in particular's fault, and it's entirely reasonable when you understand the context, but it's still a problem. In context, linux does very well given the restrictions, but it's still not as buttery as the proprietary offerings.


> It's not anyone in particular's fault

If the company producing and selling the hardware is not giving the specs to their users, then it's their fault. Perhaps that's a bit too RMS for some people, but in this case I basically agree with him. It's mine, I bought it, I want to run whatever I want to on it.


> If the company producing and selling the hardware is not giving the specs to their users, then it's their fault.

AMD started releasing the low-level documentation for their GPUs in 2008, and although the FOSS drivers have benefitted stability-wise they're still lagging in API features and often offer less than half the performance of the proprietary counterparts. As far as I know we don't have a complete FOSS OpenCL 1.0 (ca. 2009) implementation for any ISA, nevermind newer versions or competitive performance.

Unfortunately GPUs are so complex that specs alone don't guarantee good drivers.


> A non negligible part of his rant was about proprietary graphic drivers. Free software developers can't do anything about that, and anyway what is the point?

Whether or not it is fixable by the community is unimportant to an end-user that just wants something that works.


I'm really waiting for the day an open source gpu with decent capabilities and clean design for simple drivers hit the market. The reverse engineering headaches are getting absurd these days.


I don't think this is about self-disparagement, but about holding ourselves to a high standard. I love Linux, and have been using it without major problems for some time now (mainly Ubuntu-based distros). Especially if you are into programming, I'd say it beats MS hands down.

However, I do acknowledge that there are still many problems that bar Linux from being an operating system "for the masses" (i.e. all those people who are not computer nerds). Many small problems can be fixed with a few commands in terminal - but which grandmother/stressed office worker/gamer kid is willing to learn how to use a UNIX shell or configure fstab just to do their stuff? And there are other problems that aren't solved as easily. I help out at the Ubuntu Forums, and I see plenty of posters with problems that the combined wisdom of a few thousand experienced Linux users can't solve. (Just have a look at the "Unanswered Posts" section.)

So yes, Linux is a fantastic OS with great software available, and by all means let's keep advertising it. But let's not pretend that "it just works, right out of the box!"(TM) every time.


> But let's not pretend that "it just works, right out of the box!"

It does though. Get an Ubuntu laptop from Dell, it works. Get a Chromebook from Google and friends, it works.

Yes, install Linux on a random 5 year old laptop, and you may have problems. Ever built a 'hackintosh'? Same deal. Ever install Windows? It's a pain.

As for shell commands, Windows has that. So does OSX. Linux also has GUIs that can install packages, that can change settings. The shell is quick, but it's not the only way.

You have to compare apples to apples. And the fact is, if you install a popular distro on popular, well supported hardware, it does work. If you buy a laptop/desktop/server that comes with Linux, it works.


>Yes, install Linux on a random 5 year old laptop, and you may have problems

This is in fact the opposite of reality. Old hardware works relatively well. New hardware can take quite a while to get support because it's not a priority for vendors. GPUs especially, which the article harps on, are a very risky gamble -- even if it works, "works well" is misleading: it's always behind Windows performance-wise, and very often power management is inferior too.


Please don't misquote me. What I actually said was:

> But let's not pretend that "it just works, right out of the box!"(TM) every time. [Emphasis added]

Of course Linux often works. Perhaps even most times. A couple of weeks ago I reinstalled my laptop, switching from Ubuntu to elementaryOS. Took me about two hours (not counting most of the backups). On Windows, I would have needed two days. I have done an OS install dozens of times, with various versions of Windows and various Linux distros. And I find that Linux is often easier than Windows, because you can install all the software from one official repository instead of hunting through the download pages of a dozen vendors. BUT, and here comes the big but - that still doesn't mean that it always works smoothly. (Not that Windows always works smoothly, but that's not what we're talking about. We're talking about Linux' problems right now.) To claim that there are never problems is simply not true.

About shell commands: of course Windows has those. But when do you ever really have to use them? (If you are a sysadmin, perhaps, but again, that's not what we're talking about. We are just considering "normal" users.) Everything that needs to be done can be done graphically. The various Linux DEs have made a lot of progress in that area in the past few years, but don't kid yourself. There's still a lot you can't do with a GUI.

And on a final (not quite serious) note:

> You have to compare apples to apples.

If I did that, I would never get away from Mac OS X, would I? :D


It's nice to hold oneself to a high standard, but this is not what the article is about. Most issues cited are non-issues or have been fixed over the past year. It seems just like a random rant.


I spent a whole afternoon+evening reading before recommending my brother a laptop that is a) available where he happened to be and b) would probably just work fine with Linux.

The article discusses several points that would substantially improve Linux. And it addresses your concerns several times, like here:

"There's a great chance that you, as a user, won't ever encounter any of them (if you have the right hardware, never mess with your system and use quite a limited set of software from your distro exclusively)."

Ignoring criticism and possibilities of how to improve Linux ("cause it works for me already") doesn't do any good. IMHO the article is a great write-up that could help to improve these issues in the long run.


Exactly, I've always used whatever hardware I wanted, and Linux mostly just worked.

The occasional hiccups are proprietary Nvidia drivers, and some wifi chips being too recent to have an open source driver readily available.

I use Ubuntu on a dual boot, I hardly ever boot Windows. And I'm a gamer! Steam + wine is enough for me.


I've been using Ubuntu Linux as my primary OS since 2006, and I can't disagree with many of the annoyances. Especially with regards to graphics drivers and the switch to the Unity desktop. I tried Linux Mint with the Cinnamon desktop and it is nicer, but couldn't commit to it I guess.

But every time I use Windows and Macs again, I get even more annoyed. Mainly with how slow things are. Even on a new Windows 10 desktop or laptop, it's not uncommon to have to wait minutes for things to settle down after booting and logging in, or waiting while updates are installed during startup or shutdown. And on OS X, the spinning beachball was one of the main reasons why I stopped using Macs for the most part in 2002 or so. I figured it was just a byproduct of OS X still being early in development and computers not being fast enough or having enough RAM. But no, every time I try a brand new Mac, it still pops up, especially when trying to type in a URL or something like that. The bouncing in the dock and not knowing whether an app is running or shut down is annoying, too.

But as long as you keep your stuff backed up or in the cloud (like Google Drive, Google Music, etc.), and stick mostly to stuff that works cross-platform (Chrome browser, LibreOffice, cross-platform IDEs, etc.), it's very painless to switch between OSes or devices or to wipe and reinstall things. Even Microsoft Word and Excel work in the browser nowadays, though I still usually just stick with Google Docs.


> it's not uncommon to have to wait minutes for things to settle down after booting and logging in

This is no longer my experience with Windows on an SSD. I'm always a bit shocked when I reboot and I'm back at my desktop in under 30 seconds.

On a machine without an SSD, I'm annoyed at how slow everything is -- not just booting.


I've got a Windows 10 machine with a pretty fast NVMe SSD. It takes 10 seconds to get to the login screen, and another three minutes to load up all of the services that are set to run on login. I'm convinced that the NTFS driver must be a nightmare of blocking I/O.


That's definitely not the norm. On both my Windows 10 machines the desktop becomes usable within seconds after login. Might be worth having a look at what's taking so long on your machine...

As far as I remember there is even a Microsoft tool that highlights startup jobs that are slow to run, isn't there?



Task manager shows that.


I have to agree with others who point out that this is not at all typical. I have two SSD-backed Windows 10 machines (one at home, one at work), both with much less fancy SATA SSDs than yours. Neither takes more than 20 seconds to go from powered off to fully usable (unless I mistype my password a few times).

I'd check my autoruns if I were you.


As others have pointed out, that's not normal. I have a huge number of services, not only Windows development related services but also MySQL and multiple instances of Apache and it literally has no noticeable impact.

It might be worth turning off any applications that launch on login that you don't need. This is now easily accessible from the task manager. It even tells you how much startup impact each app has, so you can find out which ones might be causing you issues.


There must be something wrong with your installation, or you have an insane number of services. I have around 16 auto-starting background programs, including MS SQL Server, and it's about 5s to the login screen and another 10-20 seconds until Chrome is running and I can start doing stuff.


That's actually the reason Windows is and will stay my main desktop system. Formerly Windows 7, now an "unfucked" Windows 10 Enterprise. It's just smoother than OS X or the Linux desktop, and I always test some Linux distros when I get new hardware. Last time, in November, I upgraded to an i7-6700K, Titan X, and Samsung 950 Pro 1TB. Of course it's necessary to always have at least one Linux-based server VM running that I use PuTTY to connect to. That's the desktop setup I have been using for years.

IMHO X11 needs to be replaced as fast as possible.


Linux's biggest problem is perpetual rewrites. X11 is pretty much the only subsystem that hasn't suffered a backwards-incompatible rewrite in the past 10 years.

X11 could use some improvement, but I doubt a new system will address the core problems, which are basically the result of hardware drivers being written for Windows with Linux as an afterthought.


I agree, and thankfully Wayland is well on its way. GDM uses it by default as of 3.16, and the GNOME DE just needs a --session=gnome-wayland parameter.

It has actually gotten to the point where it's stable enough for daily use now.


> It has actually gotten to the point where it's stable enough for daily use now.

Really? What about X11 compatibility?


I really recommend Fedora 23 - log in using Wayland. You will be truly surprised. FYI, you can try it out without installing (as a live CD).


I was surprised by Fedora 23 when it asked me to reboot in order to apply patches. And these were not kernel updates. Huge step backwards.


I would settle for a high resolution tty that works without much fiddling.


> Former Windows 7, now an "unfucked" Windows 10 Enterprise.

Congratulations ;-)

http://www.networkworld.com/article/2956574/microsoft-subnet...

> IMHO X11 needs to be replaced as fast as possible.

Why? If you don't like X11 you can switch to Wayland right now. I have been content with X11 for many years.


"IMHO X11 needs to be replaced as fast as possible."

X11 has very little to do with any of your complaints.


On what machine, btw?

I have a minimal system, but it boots in 4 seconds from BIOS to ion3 (arch). It's funny when the BIOS takes longer than booting. #coreboot


What hardware are you running coreboot on?


I'm not (at least for now, I have a spare compatible thinkpad), I was just implying that the BIOS is now the bottleneck.


Oh, I see. I also ran Arch (on MBA 2012, perfect compatibility). It's wicked fast to boot using the kernel as an EFI payload (efistub).
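
(For anyone wanting to try the same, here's a rough sketch of registering an EFISTUB entry with efibootmgr - the disk, partition and root= values below are placeholders for your own layout, and the kernel/initramfs have to live on the ESP:)

    # Register the kernel itself as a UEFI boot entry (no separate bootloader)
    efibootmgr --create --disk /dev/sda --part 1 \
      --label "Arch Linux" --loader /vmlinuz-linux \
      --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'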


> I'm always a bit shocked when I reboot and I'm back at my desktop in under 30 seconds.

30 seconds? That's still a lot of time. Debian Linux starts within 7s on a dual core 2GHz with SSD.
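
(If anyone wants to see where their own boot time goes, systemd-based distros ship a tool for exactly that:)

    systemd-analyze         # total time, split into firmware / loader / kernel / userspace
    systemd-analyze blame   # per-service startup times, slowest first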


It seems to me that Linux is slightly faster on an SSD and a hell of a lot faster on an HDD than Windows 10 when it first boots up. I have the exact same problems the parent describes.


FWIW -- I have my own complaints about OS X (a lot of them), but I can't remember the last time I got beachballed, certainly not something like typing a URL.


The entry-level Mac mini is terrible and beach balls while surfing. I am surprised they sell it. (Can't upgrade the memory either.)


What would you consider entry level? I use a Mac mini from 2012 (i7 with 8GB) every day and I'm not seeing any beach balls unless Flash crashes some web page.


http://www.apple.com/shop/buy-mac/mac-mini?product=MGEM2LL/A...

Don't add any extras... it's simply not usable on the net.

Hardware

    1.4GHz Dual-Core Intel Core i5 (Turbo Boost up to 2.7GHz)
    4GB 1600MHz LPDDR3 SDRAM
    500GB Serial ATA Drive @ 5400 rpm
    Intel HD Graphics 5000
    User's Guide (English)
    Accessory Kit


Maybe you have a faulty unit or are driving a very high resolution display. Otherwise at first glance that seems powerful enough to browse the net.


"The net" is a very varied place these days. It would be helpful to mention how many tabs you're using, the sites you're visiting, etc.

My work computer is a 3GHz i7 Mac Mini bought in early 2015, with 16GB ram, and I still have times when news sites freeze and won't scroll because someone has done something stupid when coding the site.

But my system uptime is 89 days and I get to the point where chrome shows a red bar because it hasn't been updated recently. There are a lot of irritating little bugs, but the only things that seem unstable are individual tabs and MyEclipse.


The memory's a little low but yeah that's an otherwise fine machine.


Own one, can confirm that OSX is unusable on a 5400rpm drive. Replacing it with any SSD makes it decent.


Not usable on the net?? Shit, this is close in spec to my compiling machine from a few years ago.


I think the issue is usually the really poor-quality HDs supplied by Apple on low-end machines.


It's slow, but not poor quality.


I'm not sure /which/ entry level mac mini you're talking about, but the current entry level ($500) is neck & neck with the $1000 model of 2012[0]. I wouldn't recommend the mac mini to someone trying to squeeze every last penny out of their computing dollar, but it's not a rip-off either.

[0] http://cpuboss.com/cpus/Intel-Core-i7-3615QM-vs-Intel-Core-i...


Sounds like your hard drive is faulty: I recommend you hold down the D key or the Command and D keys while the Mac mini is booting to run the Apple Hardware Test. (One can replace the hard drive on that mini.)


I have my things backed up in the cloud and I used Linux for a while as my development system. Apparently for MacBooks there is something screwed up with ACPI so that it won't suspend correctly. Also, every time I ran out of battery the system would have problems booting up. Eventually I went back to Mac OS X after running the computer out of battery and then having it unable to start.


My MacBook Pro is easily the fastest, most responsive system I have ever used.


When visiting this blog post on my Android phone, the ads (or something) redirect me to a page with popups telling me my phone has a virus and encouraging me to install an app to 'clean' it, using Google branding on some dodgy domain. I seem to get a different one each time.

Reported it at https://www.google.com/safebrowsing/report_phish/ but mentioning here so that others don't fall victim to it.


Easy to see for yourself that this happens by using user agent spoofing in your browser's dev tools: http://i.imgur.com/vlLV8PW.png

Hard to track what exactly is causing this because of the ~200 requests this website makes before redirecting your browser.

The article itself links to archive.org's copy, which I would recommend using over the original website since it appears to be free of malicious redirects:

http://web.archive.org/web/20151230152933/http://linuxfonts....


The site is now blacklisted for Chrome users. I've emailed the author letting him know about this issue.


http://sitereview.bluecoat.com/sitereview.jsp#/?search=http%...

The page you want reviewed is http://linuxfonts.narod.ru/

This page is currently categorized as Suspicious

Last Time Rated/Reviewed: > 7 days

This page has a risk level of High


AV on Windows flagged this site as being malicious.


Yep same here.


I stopped using Linux as my primary desktop OS around 2012. Until then I was an Arch user with my own desktop environment built on StumpWM and a hodge-podge of hand-selected tools. There was no Gnome or KDE in my setup. I liked it quite a bit.

I used Ubuntu on my laptops since I wanted to spend less time administering drivers and arcane configuration formats.

This is a good list.

I just got tired of the configuration formats, crappy drivers, inconsistencies, dependencies... I was irritated at how easy it was for Apple users to plug in a projector and have it just work. I was irritated by every update to some random library that would cause a sub-system to stop working. I hated having to spend any amount of time administering my desktop environment. To me it should just work and the less time I have to spend trawling forums, logs, and restarting processes to find the correct incantations of dependencies and configuration variables the better.

I've stuck with my MacBook Pro Retina, despite my early trepidation about a GUI-driven proprietary OS, because I've spent probably less than an hour in the last 4 years administering it. It's still snappy and works as well as it did on day one (also, the hardware is nothing short of amazing). The only thing that sucks at this point is that the OpenGL drivers Apple ships are woefully out of date, and I'm thinking of jumping to Windows unless something changes (damnit, I wants me compute shaders).

I still use Linux every single day... just in a VM, container, or on some server.


My counterpoint: as a full-time Mageia user on my laptop/workstation since 2011, my linux configuration knowledge has atrophied to the point of near-uselessness.

Mageia is the first and only distribution I've ever tried that, on a variety of hardware, Just Works. Every hardware feature works out of the box, and with as little surprise as you'd expect (e.g. plugging in a monitor, using wifi out at a coffee shop, plugging in a printer). I've even done the distribution upgrades (e.g. version 3 to 4), which I was always scared of, and somehow those too just work. I literally cannot remember the last time I hand-edited a config file (not counting Apache configs on a Linode VPS with Ubuntu Server).

I've had a chance to use recent versions of OS X, and I even experienced more crashing there (not a lot, mind, but my Linux laptop never crashes) than on this stock HP with Mageia.

Happy that you've found a solution that works for you, but I'm also extremely happy that I get to keep using Linux without spending one more second thinking about hand-editing configs! :)


We're soon in 2016, 4 years later. You might want to try again (with Arch in particular).

I've had zero issues with Plasma 5. I plug my laptop into the projector and it just works ;-) I used to have that problem as well, like everyone else, but honestly it's been a thing of the past for 2 years or so.

Also, using Plasma ensures I don't have to fiddle with configuration. It's not as leet as StumpWM or many others, but it's certainly more customizable, and more easily so, than OS X - while still working out of the box, just like OS X.


Oh... and integration with my devices too. I use hand-off to take calls from my phone on my MBPr all the time. I also use the Keynote app on my phone which controls the application on my laptop. The sync features I don't use as much only because I trust their cloud about as far as I can throw it.

I don't remember Bluetooth or any device integration working well or at all on any Linux distro I've tried. Maybe that has changed but that kind of stuff is nice!


I have a friend who went through much the same process. I converted him to full-time Linux usage, and he stuck with it for a good 5-6 years, but eventually went back to Windows as his host OS so that he could conveniently play games, connect projectors, modify photos and video, and do all of the other basic things that only Windows and OS X users get to do reliably. He still does all of his development and sysadmin work from a Linux VM that runs on his Windows host.

I've considered doing this myself, especially with WINE taking eons to implement adequate DX11 support (most games released in the last 3 years won't work), but it's a plunge I haven't been able to make yet. I think I'm going to just set a Windows box right next to my Linux workstation and move the mouse, kb, and monitors over when I want to do something Windowsy.

The ideal would be getting hardware that supports raw GPU passthrough, getting a spare GPU, and doing all of the Windows stuff in a VM; this lets you keep 90-95% of full GPU performance and have the card controlled by the Windows guest. However, I found that even the full technical manuals rarely mention whether the hardware has an IOMMU, the required capability (both the processor and the motherboard must support it), and that by far the vast majority of hardware doesn't.
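
(If you want to check a machine you already own, this is roughly what I look for - note that intel_iommu=on or amd_iommu=on may need to be added to the kernel command line first:)

    # Did the kernel find an IOMMU? (Intel VT-d shows up as DMAR, AMD as AMD-Vi)
    dmesg | grep -i -e dmar -e iommu

    # If this directory has entries, the IOMMU is active and passthrough is at least possible
    ls /sys/kernel/iommu_groups/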

I have a dual-boot partition now, but I use it probably about once every 9 months, and mostly just to poke around for a second and see if it has spontaneously combusted whilst unsupervised before rebooting back into my workstation OS. It's simply too much hassle to have my primary workstation offline for hours at a time while I game, because it hosts many services that my house relies on. I also have a Windows VM that I do most of my photo editing in.


That's about the date I went back to Linux.

I really hated that the old KDE4 was just a clone of WinXP, and that Gnome, until 2010, was just a clone of OS X.

Ubuntu's Unity is the first Linux desktop that's proud to be itself, and it shows.


from StumpWM to OSX? ouch


I was not happy at first. :)


Yep. I'm glad that someone compiles a list like this every year so I'm not tempted to lose another week trying Linux again.


As a daily user of Linux on the desktop for over 15 years, I can appreciate many of these complaints, but despite its flaws Linux on the desktop is a pleasure to use and is better than any of the other options. This is a nice resource for kernel developers and OSS contributors who wish to make a difference and solve tough problems.


Same here: started using Linux in 2000. Back then hardware support was a real big issue, software was missing, graphical environments weren't completely ready for normal people, etc. I think things really started to change after Ubuntu, and I would say that Linux got "ready for desktop" around 2008-9. My parents for instance have been using Ubuntu for many years (they also have a Windows 7 laptop that is completely owned by malware) and they are very happy with it. Things changed a lot, but yeah, Linux isn't perfect yet, and I agree with some points (for instance the DPI thing). The only way that Linux could be massively used on the desktop is by having a company like Microsoft making deals with every single manufacturer in the world for decades.


This has been my experience as well. I started using Linux some time around 1993-1995. Linux was inferior to FreeBSD in many ways back then (and for the following few years). Things got better and better and by 2008 it was a lot better on the desktop, and I think Ubuntu gets a lot of credit for that. It still had some trouble with wireless drivers and that sort of thing, but all these problems started clearing up over time.


Same here. I'm so used to Ubuntu that Windows annoys me tremendously, which goes to show that a huge part of the user experience is familiarity with the environment.

In my experience some things work better on Ubuntu than on the last Windows version I used, Windows 7. Detection of wireless HP printers, for example. Some things are exactly the same. Finally, some things work slightly worse: drivers for 3D gaming are still a bit worse on Ubuntu (unless you luck out and find the one driver that runs wonderfully and doesn't require any magical incantations), and configuring everything to work just right with some games can still be a pain. Less demanding games are just fine, and I use my Linux laptop a lot for this (heavy user of GOG.com, Humble Bundle and Steam here!).

The jury is out on OS X for me. Using my wife's MacBook drives me nuts, mostly due to lack of familiarity. Some things are just as bizarre as with any Linux/Windows computer; for example, the other day I was advised to hard-reset the MacBook because it wouldn't recognize any USB device... if this nice bit of WTF-ery had happened with Linux, the response would have been "what did you expect? Nothing 'just works' with Linux". But I guess computers will be computers regardless of the OS...


Agreed. I've been using Linux as my only desktop OS for over a decade, and if you pick the right hardware it's just great. Usually it takes a bit to get it working just right, but once you do that, a well configured Linux box with Gnome is a superb desktop environment.



Gotta say I agree with the basic gist (admittedly I haven't read this very long, excellently detailed article yet), and I've run Linux as my primary OS on various ultrabooks for the past several years.

Just skimming, I found this bit on-point and amusing:

> It's worth noting that the most vocal participants of the Open Source community are extremely bitchy and overly idealistic people peremptorily requiring everything to be open source and free or it has no right to exist at all in Linux.


I must not be a vocal participant of Linux, then. Adobe! Shut up and take my money, already!


I have one more which this article doesn't mention: Bluetooth. It's an extremely fragile house of cards (as far as I can figure, there are a few kernel modules, dbus, a Bluetooth daemon and PulseAudio involved) and every upgrade you roll is extremely risky. Currently my BT works, but after a day or so of uptime it will simply stop working and nothing short of a reboot helps. (More here: https://bbs.archlinux.org/viewtopic.php?id=206032)

And yes, I have been running Linux since before there was 1.0. And no, I don't like it. But the alternatives are even worse IMO.


I constantly battle Bluetooth issues on every platform. It sucks and needs to die.


This is true even on my Mac with wireless Apple accessories. Pairing issues galore if I have to change the batteries.


The basic thing is that Bluetooth didn't start on a multi-user OS. It started on Ericsson mobile phones.

And there it was at first basically a way to do handsfree without a wire.

From that, you have OBEX (borrowed from IrDA) added to handle various data transfers, and a concept of profiles that are half in hardware, half in software.

In essence the very design jumps back and forth between kernel/hardware and user space.

Still, the place of failure I most often encounter is something getting stuck in dbus. Meaning that I can restart all the daemons, but unless I restart dbus shit all changes once a failure happens. But restarting dbus at best kills my desktop, at worst reboots the computer (hello systemd).


Bluetooth is a complete mess. Every new bluez version breaks backwards compatibility (all applications depending on it break!) and removes features.


Perhaps one of the major problems with Bluetooth is that it requires such a deep stack of confusing software that it's hard to find logs when things are acting up.

For me, it turned out one piece of my stack had been set up by debian by default (or perhaps KDE's bluedevil?) to try and put uploaded files into a directory I didn't have permissions on. Leading to a near-silent error when trying to push files. Go figure.
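
(On a systemd/BlueZ setup, the two places I'd look first are roughly these - assuming the daemon runs as the usual bluetooth service:)

    journalctl -u bluetooth -f     # follow the bluetooth daemon's log while reproducing the failure
    dmesg | grep -i bluetooth      # kernel side: module loads, firmware errors, etc.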


I solved it and all of my wireless-networking issues by installing Intel wireless cards in all of my computers. Broadcom and Atheros have always given me problems.
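
(Before buying, it's also worth checking what chip a machine actually has and which driver claims it - something along these lines:)

    # Show network controllers plus the kernel driver currently bound to them
    lspci -nnk | grep -iA3 net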


Does Intel make USB bluetooth sticks? If my laptop didn't come with a combo miniPCIe card I doubt the necessary antenna is there. In a desktop, I looked for mPCIe - PCIe x1 adapters with lots of antennas but I can't really find any in low profile. So... how did you do that?


For desktop, perhaps the Gigabyte GC-WB867D-I?

Specs: http://www.gigabyte.com/products/product-page.aspx?pid=5157

Price: https://pcpartpicker.com/part/gigabyte-wireless-network-card...

It's basically an Intel 7260 Wireless-AC/Bluetooth chip plus a PCIe adapter.


Intel has a desktop version and it comes with both brackets! http://www.intel.com/content/www/us/en/wireless-products/dua... http://www.amazon.com/Intel-Wireless-AC-Desktop-Network-7260... Also, I will be adding it to a somewhat old HP SFF, so I am using the single PCI slot to add an additional USB header (Rosewill RC-100). PCI speed is plenty for USB 2.0.


Thanks! However, it looks like this does not come with a low-profile bracket: http://www.amazon.com/forum/-/Tx18X5FNV15HHU8/ref=ask_dp_dpm... And are two antennas enough? Even the 6300 in my laptop has three.


I don't understand - I never see a mention of the biggest annoyance to developers on Linux: consistent copy/paste. It is a cognitive exercise to copy from the terminal or paste to the browser... or (the horror) copy from the terminal and paste into vim.

It does not help that this works beautifully on the Mac.

Is this not an annoyance to anyone else... and more importantly, considering all distros are now using libinput, can I compile my own libinput that will universally copy/paste using Win+C?


Is the issue specific to terminal or are you seeing it somewhere else?

The issue is that the default terminal behavior of ctrl+c sends a SIGINT to the running application[1], and so terminal programs override it with shift+ctrl+c, and likely add the shift for the others for consistency.

I use "Terminator" as my terminal app - which allows you to set your keybindings - so I drop the "shift" from copy / paste, and it's worked great for me for years. Unfortunately it doesn't allow you to override the SIG* keys, so it can be an issue when using `watch`, which removes your selection when it updates; and then if you don't tap ctrl+x in time, it will send SIGINT to the watch command, which is the exact opposite of what you want at that moment.

As for VIM, being sure you're in insert mode is essential no matter what OS you're using.

Is there anywhere else besides the terminal where copy / paste are inconsistent? I haven't seen any that I recall in quite a few years of Ubuntu on my desktop, laptop, media PCs, etc.

1: https://en.wikipedia.org/wiki/Control-C


No, please - I'm not talking about ctrl-C. I'm talking about CUA - https://en.wikipedia.org/wiki/IBM_Common_User_Access

Even on OSX, the bindings that are used are cmd-c and cmd-v. It is universal - if you have never used the terminal, vim and firefox on a mac... I really urge you to do that.

You'll see what I mean.


I see, thank you for clarifying. Having used all versions of Windows up until 8[1] and 8+ years of ubuntu, I'm surprised I never realized this standard existed. It makes perfect sense that it does - I just didn't know it. I agree that it's unfortunate it's not as well supported on Linux as it should be.

Coming from the other side of things, I had a lot of trouble when I was lent a brand new Macbook Pro for travel during my last job. It seemed some things worked using cmd and others using ctrl. I don't remember the specifics, as I didn't use it often enough, except that I'd find myself mashing keys on occasion trying to figure out the right combination.

It was far worse when I dual booted with ubuntu. The keys made no sense to me there either. Finally, I replaced OSX completely and the keys were "normal" again (ctrl+* for everything).

1: Also lots of DOS, but I don't even remember what versions. I had managed lots of desktops and servers running DOS for a few years in IT a couple lifetimes ago.


You've got the causality reversed. The Apple Lisa introduced the command-Z/X/C/V keyboard equivalents for undo/cut/copy/paste, and they were kept with the Macintosh and Apple IIgs.

IBM CUA and Microsoft Windows adopted them afterwards, adapted to the PC keyboard; I think Windows only adopted them with Windows 95 and NT4.


No, they were already there in Windows 3.x


> Unfortunately it doesn't allow you to override the SIG* keys

You should be able to set these with `stty` (they're not actually a function of the terminal but of the tty, a detail no one should have to know).
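
Something like this, for example (note it changes the tty's special characters, not the terminal emulator's copy/paste bindings, so the emulator still has to map Ctrl+C to copy by itself):

    stty intr '^T'   # move SIGINT from Ctrl-C to Ctrl-T for this tty
    stty -a          # show all current special characters to verify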


Highlight to copy, middle-click to paste. Works every time for me.

Obviously, if you want to paste into vim you have to be in insert mode.


Technically vim exposes the x11 clipboards (plural) as registers - I never use registers, but I think

  "+p
[Ed: for those not familiar with vim: " can be thought of as 'with/from/into register named:', + is the register and p is paste. Similarly, "+yw is 'yank/copy current word into register named +'.]

should paste the x11 selection. I don't normally use gvim/vim as an x11 application, but it looks like * (star) is typically the system clipboard:

http://vim.wikia.com/wiki/Accessing_the_system_clipboard


Why does it work on OS X then? Is there something special happening at the OS level there?


AFAIK ctrl-v works in gvim? (I think it's just bound to "+p or similar.)


Only with :behave ms (default on windows), else ctrl-v starts block visual mode.


Yes, except when you have to select something in between the time you copy & paste, e.g. change focus to the URL bar of a browser. Then ctrl-C is your friend, and works everywhere except the terminal.

But the terminal is a special case when it comes to copy/paste and always will be a little weird on any platform. Especially when it comes to line-wrapping in full-terminal applications like vi.


Pretty much, never had a problem with it.


My laptop does not have a middle button, so I have to do a weird left-right button click. Very inconvenient for me.

I truly envy the OSX people just on this one aspect.


shift-insert works in X if you can't middle click. In gvim too, out-of-the-box if you're in insert mode.


I was HOPING for that. It's part of IBM CUA. Unfortunately it doesn't work everywhere, especially the terminal (unless you remap keybindings).


Oh, this was an annoyance until I installed a clipboard manager (parcellite in my case) which automatically syncs the different clipboards.
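
(For the curious, what such managers do is roughly the equivalent of this one-liner, run whenever the selection changes - assuming xsel is installed:)

    # Copy the current PRIMARY selection (highlighted text) into the CLIPBOARD (ctrl-v / paste menu)
    xsel --primary --output | xsel --clipboard --input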


But how do you paste? I mean, I have all sorts of keymaps to enable paste in vim using CUA (shift-insert), but try explaining that to a first-time user.

It's the small things that OS X does right.


Why would you need keymaps? As far as I know, the terminal emulator (urxvt in my case) takes care of pasting the text into Vim. I never had to configure anything to use shift-insert or the wheel button to paste.


ahh.. I see.

s/vim/gvim/g


Why would a "first time user" use a program that has a user interface designed in 1976? Why would a "first time user" expect this program to obey interface guidelines that did not exist until 11 years after that user interface was designed?


ctrl-shift-C on the terminal, ctrl-V on everything else. How is this a significant annoyance to you?


Tried double-tapping in the article text to get it to reflow, and got one of those bogus popups claiming my phone is infected with a virus. Thanks for trying to make my new year interesting. :-(


The site is now blocked by Google Safe Browsing as a result of these complaints. The author has been emailed asking that he correct this behavior in order to be removed from the blacklist.


This happens to me as soon as I open the website on my mobile: I get forwarded to one of those websites. Although I'm not sure why I want to read yet another person complaining about Linux anyway, when I find it excellent for certain purposes.


Me too. I can't read the article at all because of it.


Flagged the link. But surely I'm not the first. How does this malicious post stay up for four hours?


Yeah, is this considered acceptable in today's world? I go to your site and get attacked by your ad service?


Me too: my response was a) f@#$ this article/website, b) f@#$ Chrome for allowing websites access to the phone's vibration and message pop-up window, and c) close Chrome.


Mine was 'You were selected! Click OK to claim blah blah!'


Happened to me using Hacker News 2 on Android, which made the "Malware Warning" popup show over the HN comments. Tried a few different ways to actually read the article, since the conversation was interesting, but in the end, just gave up and figured I'd read it later on my desktop.


The site auto-redirected to malware for me. Didn't have to click anything.


> X.org 2D acceleration technologies and APIs aren't as mature and fast as Direct2D and DirectWrite in Windows.

This complaint is confused. X is the wrong place for that stuff (the failure of XRender to live up to expectations being a testament to this). The windowing server should be multiplexing GPU buffers and that's it.

Direct2D and DirectWrite are user mode, client side libraries on Windows. (Maybe the author has them confused with the legacy GDI, which lives in the kernel?)

What this should say is that Skia-GL/Ganesh and Cairo-GL, which are the open source Direct2D competitors, are not at performance parity with Direct2D. I've heard this in the past, though I don't know how accurate it is anymore. Thanks to the work of Google, which depends on Skia-GL for Chrome and Android, it's made rapid progress lately. In fact, in my experience, Skia-GL is basically at performance parity with CG::OGL (i.e. the Mac/iOS 2D rendering backend)—so if the Mac is your benchmark, Linux has caught up there, if you use the latest Skia and configure it properly. Very few Linux desktop apps use Skia-GL in practice, though, which is a separate, and unfortunate, issue.

(Finally, I should mention that Direct2D and its competitors constitute a really low bar if you compare to the actual state of the art in GPU vector graphics, which is stuff like Scaleform.)


Oh, here's another I noticed:

> Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. More eyes, less vulnerabilities you say, right?

Glancing through the list that was linked to, most of these vulnerabilities affected Chrome on Windows/Mac and/or Safari on Mac too. It's not fair to use general vulnerabilities in widely used Web browser engines as an indictment of Linux specifically. Nor is "look at the number of vulnerabilities in Web browser engines!" a particularly good proxy for anything other than how popular, and security-critical, Web browser engines are. (Some of the other security criticisms, for example of X, seem fair though—e.g. DRI2 is really bad.)


The website is hard to read on mobile because it is wider than my screen. Bit unfortunate.

Regarding VA-API: since gstreamer-vaapi 0.7.0, Totem finally works with VA-API. Before, only mpv worked. IMO if you used mplayer you'd be better off just switching to mpv. It still doesn't work perfectly in VLC, though AFAIK VA-API support should be built in. In my experience, filing bugs was enough to get improvements. Once it works well enough, distributions should install it by default. IMO it's pretty reasonable to achieve "VA-API installed by default" for major distributions during 2016, provided enough people put their time in. I heard that gstreamer-vaapi will be merged into gstreamer itself, so that'll help a lot as well.
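
(A quick way to check whether VA-API is actually wired up on a given box - the file name is just an example:)

    vainfo                        # lists the decode/encode profiles the driver exposes
    mpv --hwdec=vaapi video.mkv   # ask mpv to decode this file through VA-API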

Regarding laptop support: Canonical seems to do a lot in this area. Testing Ubuntu and adding (upstreaming) workarounds/quirks to e.g. the kernel.

Some problematic items I wasn't aware of (good to have this list), e.g. font anti-aliasing. Stuff like that is why freedesktop.org started: to have a way to agree on such things.

At least for some of the items on this list I have the idea that I can help to improve things. Some items seem impossible. Still, just a little bit of effort can help a lot. Often I spend less than 30 minutes to get improvements (though mostly thanks to actual paid developers putting in their effort).

Edit: The site should remove the swearing. You can be explicit in your disagreement without needing to swear.


Up until fairly recently I had a number of issues with VLC and VA-API / VDPAU, but now it all seems to work well. I even have working accelerated video in Chromium by removing the GPU blacklist, and it works fine. The only thing missing now on the video front for Linux is good encoder support, but I can get that right now with ffmpeg - it's just missing in a lot of capture software.
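
(With an ffmpeg built with VA-API support, the hardware-encode incantation looks roughly like this - the render node path and file names are placeholders for your own setup:)

    # Hardware H.264 encode via VA-API
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
      -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4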

Though we really should all converge on OpenMAX at some point. Like, all encode / decode with OpenMAX in the future. It just makes bloody sense.


To me this reads as "I can't use hardware xyz on Linux". Well, that's no surprise. There's "wxy" hardware that you can't use with Windows or OS X as well.

Today Linux supports the same amount of hardware (or more) as Windows does. Hardware support is not a reason to use or not use Linux. The software ecosystem is much more important - not to mention the freedom aspect...


I often see people proposing others install Linux without actually considering hardware, and that easily and quickly loses potential adopters.

No, Linux does not support every piece of computer hardware ever made. You cannot blame Linux for that: if, as in the case of, e.g., Broadcom, the company does not want to provide any documentation, support, drivers, or anything whatsoever, there is nothing you can practically expect this community to do about it. It is insane to think that, if a hardware vendor won't support their own products, it's my job - just because I told you you should use Linux - to then reverse engineer drivers for everything, and somehow magically break the signed-firmware bullshit on a lot of modern hardware. In the same way your 1998 Lexmark printer probably doesn't work at all with Windows 8+, <insert arbitrary random thing from a decade ago> won't get support in Linux if it isn't already there.

But I also blame the advocates who sell false promises about supreme compatibility - it's an operating system. It supports a lot of hardware. A lot of companies support it. That does not mean all companies support it, or that all computers can work with it. But there is a difference between not supporting any hardware in a product class and supporting enough to be functional, and Linux is absolutely functional. (For example: I have great experience with Epson XP-XXX printers and always buy them for Linux clients, because they work out of the box with Gutenprint on CUPS - scanning, printing, and ink levels - and do rotation / double-sided properly, no problem. And I use Atheros AR9462 wifi cards everywhere because they have great Bluetooth + wifi that are stable as hell out of the box without configuration.)


No, there's no consumer-level desktop hardware you can't use on either Windows or OS X or both. That's false.

Hardware support is indeed a reason not to use Linux, but the question is who needs to pick up the slack. With other operating systems, manufacturers put (more or less) effort into writing device drivers that are functional. With Linux, they really do not. It's a large and moving target, and the market value of doing so is not clear to them.


Really? There's plenty of old hardware which no longer works in recent Windows versions. It's not in a manufacturer's interest to update a driver for recent OS versions. In Linux, drivers usually remain for a long time.


Example: Sony VAIO laptops

http://esupport.sony.com/LA/perl/os10upgrade.pl?stage1=24&st...

"IMPORTANT:

Drivers for other hardware components such as Network cards, etc..., may be available on the manufacturer's web site. Sony software applications that originally shipped with the computer may not work after installing Windows 10. Sony cannot guarantee your system capability after installing Windows 10."

Had a friend with this. Sony sold the division to some other corporation. Which leaves Win10-qualified drivers for such optional things as, oh, WIRELESS NETWORKING in limbo after a Win10 upgrade.

Sure, you or I would pull the wifi card model, go to the manufacturer's site, and use the drivers from there. But Joe, Sally, Bob, or Jane public aren't going to get past "Wifi stopped working, and it wasn't on the update screen".


Yeah. GP's point was overstated. But there's a weaker claim that is just as important and, I think, indisputably true: Windows supports virtually all hardware that is popular among desktop users. Or, even better, Windows supports far more hardware than desktop Linux, especially if you weight hardware by popularity.

It's understandable, of course--Windows has a greater consumer installed base resulting in greater manufacturer interest in supporting the platform. For Linux desktop users, unfortunately, the dynamic is often the opposite: the manufacturer has no interest in supporting the platform, and users/volunteer developers have to do all the work...which the manufacturer may then break at will.

But that doesn't mean that hardware support is not still a major problem for Linux desktop adoption.


Whether there's continuing support for a product is a very different issue. At some point, all consumer hardware was functional in a version of Windows or OS X, since it was released precisely with support for one or both of those OSs. Yes, updates break things, but that is a very different problem than the one facing Linux. The problem facing Linux is that manufacturers are not supporting (or not seeking with much vigor to support) any Linux distribution upon initial release of the hardware.


It is a very different issue, but it can be very much in favor of GNU/Linux distributions when they work on a given piece of hardware: you are far more likely to be able to get version N+1 of said distribution working than Windows N+1. Now Windows 10 might bring the best of both worlds for the Windows world, but for now that does not work all that well.


Sure, there's lots of old hardware that the manufacturer no longer supports. That, to me, seems like a very different problem, as the older hardware /was/ supported by an older version of windows. It's true that this traps you in an un-patched OS, but (as pointed out many other times in this thread) patching Linux components also risks creating hardware incompatibilities.


Probably, but I'd also wager that the majority of those devices use physical connections that machines capable of running recent Windows versions do not have. E.g. DB-25 ports.

I will also wager that the only real "wall" that would prevent old hardware from working was the removal of 16-bit support from x64 versions of windows.


> No, there's no consumer-level desktop hardware you can't use on either Windows or OS X or both. That's false.

Oh, I can show you a multitude of printers, scanners, and TV tuners that do not work with Windows or OS X. Or they work with exactly one Windows version and that's it.


Maybe old printers, old scanners and old tv tuners?

The reality is that when a new laptop is released, Windows will support everything in it from day one. Whether that's because of Microsoft's or the manufacturer's efforts doesn't matter to regular users.

On Linux, you buy the laptop and wait 1-2 years for bugs to be ironed out or even for things to barely work for the first time. Afterwards, Linux will continue to support that hardware for a very long time, while Windows may drop support after 2-3 versions because the manufacturer has no interest in upgrading drivers for the newest Windows releases. Especially for something that is not selling anymore.

Most people I know in the open source world barely use any fancy features like TV tuners, fingerprint scanners, touchscreens, or even printers. From reading some comments here, these are usually the people saying "everything is fine". I do think everything is fine for open source developers, but not so much for regular users. And if the focus can be on the latter, then I guess everybody will benefit at the same time.

EDIT: typing this from Fedora 23 on a laptop with an NVIDIA card that is disabled, because I got tired of bugs in nouveau and the constant hassle of kernel updates breaking the GPU drivers. I'm happy with the integrated graphics though, so my next laptop probably won't have a dGPU.


I think just the opposite - most of my software runs on Linux, but hardware is the main obstacle.

I've installed Linux many times on desktops and laptops, and I have yet to simply install a graphics driver and have it work first try. My laptop locks up the cursor after two minutes of use, doesn't support two-finger touchpad scrolling, and exhibits constant graphical artifacts, all under multiple driver and kernel combinations.

Speaking as a big fan of Linux, the hardware support is abysmal, except where the money is: server hardware.


The parent is correct, modern Linux supports a lot more hardware than any single Windows version. The difference is that Linux is much better at supporting old hardware, and Windows is slightly better at supporting new hardware. Try using a 20 year old non-generic printer on Windows 10, for example.


As it so happens, I'm pretty sure the marginal Linux laptop installation is on a new machine. So from the universe of possible hardware installations, Linux might be superior; from the universe of actual installations in 2015, it lags well behind Windows.


That used to be a problem, but I don't think it is any more. Linux "just works" on both my laptop (including sound, multitouch touchpad, wifi, and graphics, also when I'm not using it, I only ever put it to sleep by closing the lid, and I've never had a problem) and my desktop, which has an AMD graphics card (Radeon HD 7950) and an external USB audio interface (Focusrite Scarlett 2i2).


I haven't tested but I suspect there is still a distro issue - I haven't had any hardware compatibility problems on Ubuntu, but there are plenty of groanworthy desktop distro recommendations on the net.


No, functionality is important. It doesn't matter how free the software is if it doesn't work. I don't know where you live, but in both of the countries I live in you cannot buy Linux-compatible wifi adaptors over the counter. I can't close my office or loungeroom doors at the moment, because none of the three wifi adaptors I own work with Ubuntu, so there's a massive network cable snaking through both. The hardware compatibility lists for distros are regularly out of date and rarely comprehensive. If you expect the average citizen of the world to trawl forums for compatibility info, pay hundreds of dollars in shipping, and wait weeks to get compatible hardware that only seems to be sold in the US, you've got Buckley's. To be frank, your attitude is what holds back desktop Linux. We need to admit these flaws and fix them, not blame the world for being insufficiently hardcore.


"I can't close my office or loungeroom doors at the moment, because none of the three wifi adaptors I own work with ubuntu, so theres a massive network cable snaking through both."

Homeplug[1] devices any good?

I used a pair of those for a bit to save putting cat 5 in the wall (I'm lazy about DIY). Worked OK but I didn't need super-fast.

[1] https://en.wikipedia.org/wiki/HomePlug


They're very expensive compared to wifi, especially compared to their performance, and based on anecdotal reports their lifespan isn't very good. How's your mileage been?


I used a couple of slower ones (German make, around £30 for the two from PC World, not the most economical outlet) for about three years until I got rid of the desktop PC. No issues. Just worked. Throughput fine at 4 Mb/s Internet speeds, which is all I used it for.

If you need AV speeds, and if you have 'unusual' wiring, best ask on an appropriate forum supporting home networking or something.


Driver availability is only a small portion of that list.
