Many of the driver problems come from the fact that Linux finally worked on the desktop about the time desktop machines were replaced by laptops. Desktops with slots tended to have relatively well-defined hardware, and plugging in third-party hardware was normal. This is much less true for laptops. OS development for a laptop requires having that laptop in hand. It needs a QA organization with one of everything you support. Linux lacks that.
Microsoft got drivers under control with the Static Driver Verifier, which uses automatic formal proofs of correctness to determine whether a driver can crash the kernel. (The driver may not control the device correctly, but at least it won't blither over kernel memory or make a kernel API call with bogus parameters. So driver bugs just mean a device doesn't work, and you know which device and driver.) All signed Windows drivers since Windows 7 have passed that. This has eliminated most system crashes caused by drivers. Before that, more than half of Windows crashes were driver related. Linux has no comparable technology.
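To make that concrete, here is a minimal C sketch of the sort of property such a tool proves about every path through a driver. All names here are hypothetical stand-ins, stubbed so it compiles; this is not real Windows or Linux driver code, and not how SDV rules are actually expressed. The point is just that kernel APIs have usage contracts, and violating them can take the whole system down even when the device itself is driven correctly.

    /* Toy sketch only: hypothetical kernel-style primitives, stubbed so it compiles. */
    #include <stddef.h>

    static int dev_lock;
    static void spin_lock(int *lock)   { *lock = 1; }   /* holder must not sleep */
    static void spin_unlock(int *lock) { *lock = 0; }
    static void *alloc_may_sleep(size_t n) { (void)n; return NULL; }  /* may block */

    int handle_request(size_t len)
    {
        spin_lock(&dev_lock);

        /* Rule violation a verifier flags: calling a possibly-sleeping
         * allocator while a spinlock is held can hang the kernel, no
         * matter how well the device itself is handled. */
        void *buf = alloc_may_sleep(len);
        if (!buf)
            return -1;   /* second violation: this error path leaks the lock */

        spin_unlock(&dev_lock);
        return 0;
    }

A verifier explores both paths and rejects the driver, which is exactly the "it may not control the device correctly, but it won't take the kernel down" guarantee described above.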
The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.
The Linux kernel is about 20MSLOC. The Windows kernel is about 50MSLOC. IIRC, OS X used to be ~80MSLOC.
Problems with debugging are endemic to any monolithic kernel.
Neither Windows nor OS X is easier to debug technologically, but Microsoft and Apple both have many employees and lots of money invested compared to Linux.
(Also, Apple solved the problem by making their OS specific to their computers, so they had the whole thing under their control.)
There are a lot of people who are paid to work on the kernel full time from Red Hat, Google, IBM and many others. If I had to guess, I'd say there are probably more of them than for the other two kernels; it would be interesting to find out. But if you include people for whom it's not 100% of their job, but still an official part of their job, I'd say it's almost certainly more for Linux (not even counting unpaid contributions).
Linux is the most popular platform for servers and HPC because most of the time it's the better kernel. It's so dominant that Apple has basically left the server space, and the handful of showcase supercomputers built on Apple gear have long since faded from view. Linux went from having one supercomputer in the top 500 (a fraction of a percent) in 1998 to 98.8% of the top 500 currently. The other six seem to be IBM machines running something else. OS X's first release was in 2001, and it and Microsoft's offerings are simply not present in that top section of the HPC space.
Linux is also the kernel on the most popular smartphone platform, so it's not all servers and number-crunching either; when the hardware is tightly controlled, it works fine. The problem isn't the number of developers on the kernel or how hard it is to debug, it's that laptops and desktops aren't offered by a single company that can tie everything together.
And they are all working on desktop/laptop support, right? Linux is great in the data center because it has big guns behind it in the data center. Linux runs well on cell phones because Google put in the effort. As soon as someone is willing and able to put in the effort on desktop Linux, it will be as good as it is in those other areas.
I feel like most vendors (NVIDIA/ATI/Wacom/whatnot) concentrate much more of their effort on supporting Windows and even OSX because that's their audience.
Also, I remember reading somewhere that NVIDIA/ATI work closely with Microsoft because of DirectX [citation needed, though]. I had the opportunity to work with DirectX (the new API) and I found it much more pleasant than working with OpenGL (even though I ended up using OGL in the end; I used Windows and DX to simplify prototyping, because doing the same thing in OGL required much more dev time, at least at first).
EDIT: Also, let's not forget how the majority of the Linux developer community neglects GUI and overall end-user friendliness, and how the environment is in most cases quite hostile towards UX/GUI designers in general. There are, of course, exceptions, but those are few.
As a non-Linux open source project OS user, I am continually faced with driver deficiencies as a result of hardware specs being under NDA.
A recurring idea I have which I am here sharing for the first time (apologies!) is: why not just pick a single item of hardware and build an open source, free OS project around it?
Hopefully, more control, to the extent possible (notwithstanding Intel ME, etc.). Coreboot, support for as many peripherals as possible, etc. Most importantly, the elimination of the issue of hardware support and the notion of a list of "supported hardware".
Performance, latest advances, etc.
Hasn't this been done?
Maybe. OpenWRT, etc.? But my understanding is that the use of Linux on this router was initially the non-public work of a company, Linksys, and the open sourcing by Cisco was neither anticipated nor intentional.
How is my idea different?
The project would be free, open source, but intentionally focused on a _single_ target. Big tradeoff, but maybe some interesting gains.
To be clear, I like the idea of hardware that is more or less "OS agnostic", e.g., RPi and booting from SD card.
But I am tired of watching volunteers struggle to keep up with the latest hardware (many thanks to the OpenBSD and FreeBSD contributors who write drivers for networking, etc.), or having to settle for binary blobs.
Maybe I am just dreaming, but I could foresee such a project potentially growing into a symbiotic relationship with some manufacturer if the OS developed a sufficiently large, growing user base, with all of these users purchasing a very specific item(s) of hardware known to be supported by this OS.
If you comment, please remember I am not a Linux user. And hardware support is not quite the same under BSD. As such, it is something I often have to think about and cannot just take for granted.
kmemcheck -- sort of like valgrind, but for the kernel.
CONFIG_FAULT_INJECTION -- inject random faults at runtime (such as in memory allocation) to test infrequently encountered error paths (a toy sketch of the kind of error path this exposes follows this list).
CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK -- run expensive mutex validation checks at runtime.
coccinelle -- a source code matching and transformation engine. You can use it in some of the same contexts as sed or awk. Unlike those tools, it is aware of the C language so it can do smarter things like add an extra final argument to all occurrences of a call to do_foo_bar_baz(). See http://coccinelle.lip6.fr/
checkpatch.pl -- checks a patch to see if it conforms to the kernel style guide. Simple things like enforcing 80-column lines, but also more complicated things like variable naming, whitespace, etc.
smatch, flawfinder -- static analysis tools that are similar in principle to Coverity. Like Coverity, they are unsound, but often helpful.
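As a concrete illustration of the fault-injection point above, here is a toy userspace sketch (plain malloc standing in for kmalloc; the names are invented for illustration) of the kind of error path that is almost never exercised in normal testing:

    #include <stdlib.h>

    struct session {
        char *rx_buf;
        char *tx_buf;
    };

    struct session *session_create(size_t bufsize)
    {
        struct session *s = malloc(sizeof(*s));
        if (!s)
            return NULL;

        s->rx_buf = malloc(bufsize);
        if (!s->rx_buf) {
            free(s);
            return NULL;
        }

        s->tx_buf = malloc(bufsize);
        if (!s->tx_buf) {
            /* BUG: forgot free(s->rx_buf), so this path leaks -- but it only
             * runs when the third allocation fails, which practically never
             * happens on a test box. Injecting allocation failures makes the
             * path trivial to hit and the leak easy to spot. */
            free(s);
            return NULL;
        }
        return s;
    }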
However, what do we have on the formal proof side?
Some of the best "What is the worst bug you ever encountered?" war stories are cases where people end up narrowing things down to a bug in what they were previously treating as bedrock.
Unfortunately I work with too many people whose approach to programming is "Read spec, code, debug why code isn't working to spec, fix that specific bug". Design, architecture, etc. are simply not part of the process. I can see how a lack of easy debugging may force some forethought into the development process.
That being said, how valid would this be in a development environment (like the kernel) where a lot of the work you are doing is making changes designed and created by someone else? Linus says the solution is to make sure you were careful at the start. But what if you weren't even there at the start, and had to step in later?
Yes, people would be more careful without debuggers, the same way they'd be more careful with cooperative multitasking hanging the system when they forget to yield. That doesn't make it good.
But at least the early technical shortcomings could be explained by the fact that Linux kernel hackers of that era were inexperienced, and they did, several times, quietly learn from (some) NT designs and/or maybe the big Unixes of that epoch (probably mainly Solaris) and adopt them, not too long after trolling about why their previous, simpler approach was good enough (when it clearly was not).
The security approach today, coming from people who have worked that long in the industry, is inexplicable. (Maybe it is just too early and they will silently convert to reasonable opinions in 2016? :p)
What happens is that some sensationalist idiot will see Torvalds lose his patience and post an angry retort to someone, and post just that response to places like HN, completely out of context. I've lost count of the number of times it's happened here, when everyone's tutting about Torvalds' behaviour and how he's "so aggressive right out of the gate", when no-one looks at the thread history to see him being patient and explaining things.
In any case, zero of the tech heads from Jobs on down satisfy your demands to be taken seriously. I really don't understand why Torvalds gets held to this higher standard of hippie-level friendliness when people don't expect the same from other major tech project managers. I mean hell, Jobs was adored for having opinions that people considered ridiculous.
You would wind up with more careful drivers, but anyone who thinks that's the only goal is misguided.
It's not that the absence of a debugger would make you (in Linus's opinion) more careful because it's risky, but because it's really inconvenient.
Also, while the embedded Linux ecosystem is rather large, what fraction of it is kernel code that gets upstreamed/mainlined? As an outsider, my guess is that it's a minority (some of the work is on non-kernel code, and not all embedded kernel code changes are/will be in the mainline kernel, so I would not count those as actual contributions).
I say this as a Linux desktop user/layman.
I would guess that there are more people whose primary job is integration than people whose primary job is kernel development. I don't really have any concrete numbers, though. On the driver development side, companies are starting to get smarter about getting their code upstream first-- look at NVidia and Intel's recent efforts in that area, for example.
That's a pretty cool idea (also similar to the static verification in NaCl.) Is there a reason one couldn't implement a static verifier for Linux drivers? Would the problem be harder in any sense for Linux, because of e.g. number of exposed kernel APIs? Or could a static verifier "read off" the kernel's API with no manual annotation required?
Same thing with developing to put it on Apple laptops with that very specific hardware set, etc.
I've had it for almost two years now, and I've given up on getting Bluetooth to work. After resuming from suspend, the wifi works about half the time, and screen brightness is always set to maximum. It's tolerable, but I only use the laptop when I have to.
I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.
1. The Lenovo X140e: http://www.ubuntu.com/certification/hardware/201309-14195/
Why didn't you return it and get -say- a Thinkpad? (Were you -perhaps- just curious how shitty the "Ubuntu Certified Laptop" program is?) It clearly failed the "Fitness for advertised purpose" test. AFAIK -if you're in the US- the seller can't refuse to accept your return... unless it was sold as-is.
> I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.
I've found great success with the following method:
* Get a detailed list of the parts inside a given laptop. (lspci info from the target model is a really good sign)
* Run screaming if the video card is made by Nvidia. 
* Find if there are in-tree kernel drivers for each of the parts. (If there are, this is a really good sign.)
* Find the out-of-tree drivers for the remaining parts, and see if there are solid plans to get them in-tree. (If there are any such plans, that's a good sign.)
* Discover the known issues for all of those drivers.
* If the drivers seem to do everything that I need them to, and the known issues list doesn't contain any show-stoppers, the laptop will likely work just fine. :)
 I know that this is a controversial opinion. I've had awful luck with the nouveau driver and really bad luck with the official Nvidia driver. Other people haven't. I'll stick with Intel-powered laptop video cards if I can. :)
The Lenovo X140e is a ThinkPad. I didn't blindly trust Ubuntu's certification. I made sure to get a brand that historically has had good Linux support. I also knew about Nvidia graphics and avoided them. Still, I got burned.
I don't doubt your checklist is good advice for buying a Linux laptop, but it's simply too time consuming to check all of those things. Even if it wasn't, the likelihood of everything working well is low. All it takes is one bad driver for one piece of hardware and the laptop becomes a constant annoyance. Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong. Maybe audio won't automatically switch between headphone and speaker output. Maybe the fan will run at a few discrete speeds instead of gradually ramping up/down. Maybe it will wake from sleep if you open the lid, but not if you hit a key on the keyboard.
I'd rather just pay money and get something that I know will work. That's why my main development machine is a MacBook. I wish there was a competing brand of unix laptops, but so far… no dice. :(
1. http://shop.lenovo.com/us/en/laptops/thinkpad/x-series/x140e... Though after I purchased it, some people told me it wasn't a true ThinkPad, whatever that means.
Oh, heh. Derp. Edit: I mean to say: My bad. I overlooked that. :(
I see on that certification page that the only Ubuntu Certified configuration is with a rather ancient pre-installed version of Ubuntu. Did you get the system in that configuration, or did you purchase it and put Linux on it?
Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?
> [I]t's simply too time consuming to check all of those things.
Odd. I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years. Perhaps my opinion is atypical.
> Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong.
I guess I've had fantastic luck with my personal selections and the recommendations that I've given to others. Given that luck is my super power, I'm somewhat willing to believe that my experience is somewhat atypical. :)
Anyway. Good luck with your projects and such, and I hope that Apple keeps producing hardware that meets your needs.
 Still... one would expect that any Ubuntu Certified Laptop that has a supported hardware configuration would be detected by the Ubuntu installer and configured appropriately (or you'd get a big fat warning when the hardware isn't "supported" by a later Ubuntu release). OTOH, Canonical isn't the best at getting things right, so... :-/
Aargh, this is precisely the sort of attitude that is causing problems in the first place! Of course some research is necessary prior to any purchase, but the issue here is that clear and correct information doesn't even exist!!
The expectation of nerds that people have this sort of time to take off simply to get a working computer in this day and age is mind boggling. People shouldn't HAVE to take a week to do research to get basic things like this to work.
It's not reasonable and your opinion is atypical for people who have full time jobs with long hours and families to look after. I really want to use and support Free/Libre Gnu/Linux, whilst still being productive. But many Linux users have better things to do than trawl through lspci, do literature reviews of ancient threads on email lists and bug reports, etc, etc, and then finally somehow manage to design a hardware/software configuration that even mostly kind of works. The very worst thing of all is that the typical Linux nerd thinks that this is normal.
I tried to get it with Ubuntu preinstalled, but neither Lenovo's website nor their phone support could configure it that way. After about 20 minutes on the phone, I managed to get the exact hardware configuration shown on Ubuntu's certification page: AMD A4-5000, Broadcom BCM43142, etc. In hindsight, I doubt Lenovo ever sold an X140e with Ubuntu preinstalled.
> Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?
When I first turned it on, I noticed the X140e had several annoying LEDs. Both "ThinkPad" logos had glowing red dots in their i's. There was also a large green LED near the camera. It glowed whenever wifi was powered-up (pretty much all the time). I found these annoying, so I painted them over. Oops. Next time I'll use nail polish, which can be removed with acetone.
> I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years.
Our thoughts on this matter are quite similar. On average, I spend almost 10 hours a day using my primary development machine. I upgrade every 1-2 years, which works out to 5,400 hours of use. That's a lot of time interacting with one piece of hardware. I definitely want to make sure I get the best tool for the job. That 5,400 hours has another implication: Amortized over the life of the machine, even a $3,000 laptop will only cost ≈50 cents per hour. That makes me extremely insensitive to price. I simply want whatever works best.
As peatfreak said, research won't guarantee satisfaction. The only way to really know if a piece of hardware will work for you is to actually use it. That's one huge advantage of Apple (and now Microsoft) products: I can walk into a store and test the hardware/software combo. In just a few minutes, I can tell if it lacks the annoyances on my list. These details are extremely hard to verify without actually using the machine.
I would be much more open to getting a Linux laptop if I could try it out before buying. Unfortunately, I don't think the market is big enough to make brick-and-mortar stores feasible.
1. I've written about this in more detail at http://geoff.greer.fm/2010/10/30/expensive-computers-are-wor...
In my experience, it's the opposite. 5-6 years ago, ATI was the friendly one and nvidia gave you hell trying to get it to work. Now it's flipped - the ATI cards I've tried just plain don't work, whereas the nvidia ones will work, and with a few choice harsh words, will work well. Just my anecdata, though, and this is with desktop cards, not laptops (I use thinkpads with intel graphics...)
Also, are you using the closed-source or the open-source ATI drivers?
And, are you using Ubuntu, or are you using some other distro? (My experience with non-LTS Ubuntu has been... substantially less than stellar over the past several years.)
You're right and it's amazing. I use Linux exclusively on my laptops and had stuck with dedicated GPUs running Nvidia binary drivers since 2009. But this summer I bit the bullet and got a new Broadwell laptop, and I honestly do not miss a single thing about the Nvidia graphics. My Intel 5500 can push well over 10 million pixels across three displays, 3D-accelerated, drawing minimal power.
That makes no sense. Most of that is drivers and platforms you won't ever use. And some forms of symbolic checkers exist for Linux (maybe not as advanced as for the Windows kernel space, though, I don't know -- although even simple local heuristic checks, run automatically, are quite useful at finding real bugs -- sometimes even more so than very complex solvers).
That's the point -- developers can't test all possible hardware configurations and resulting driver mixes. The most often used architectures and configurations usually work pretty well out of the box, but if you have something a little more exotic, prepare to get your hands dirty. Honestly, I don't see a solution to this problem, unless some "openness" is sacrificed by signing drivers (Microsoft style).
They don't have to be. All they have to do is not go around smugly suggesting they know better than kernel developers and they're ok in my books.
Incidentally, precisely how many of those research kernels have become widely used, mainstream kernels capable of high-throughput?
And do you really think it has turned out that way because the whole industry is full of blind dumbasses? I think it's a far more likely proposition that they understand something you don't.
YMMV on mainstream (they are widely adopted, though), but: OKL4, PikeOS, QNX...
It's quite obvious you have no background on the issues and are using this as an opportunity for provocation.
High throughput, mister, high throughput.
Realtime != high throughput. It just means deterministic throughput. FSVO deterministic.
Show me people running big farms of servers running these operating systems where even single-percentage computational overheads really matter.
(added:) The reason for this is that it costs one hell of a lot to flip your page tables and flush your TLBs every time you have to switch ("pass a message", whatever) to a different subservice of your kernel.
(also added:) Oh and interestingly many (most?) users of OKL4 go on to host Linux inside it because, hey, it turns out that doing all your work in a microkernel ain't always all that great. So 90% of the "kernel" work in these systems is happening in a monolithic kernel.
Other contenders include eMCOS and FFMK, though those are obscure.
That said, I don't even understand the logic. HPC clusters where single-percentage overheads really matter are an extremely specialized use case, so of course COTS u-kernels might not cut it. Where's the shocker here?
Response to added: Not necessarily with message passing properly integrated with the CPU scheduler.
Response to added #2: Hosting a single-server is a valid microkernel use case. What's your problem? Isolation and separation kernels are a major research and usage interest.
Ok then, show me the server farms...
I'm not even really talking about HPC, just the massive datacentres that run everyone's lives. All for the most part running monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSs that are able to cut it in these circumstances.
Even in a mobile device, you don't really want to waste battery doing context switches inside the kernel.
Microkernels have their place, but believing that the world that chooses not to use them are just clearly dumbasses is bullshit dogma.
(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have a lower throughput relative to a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)
As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.
I can't say what's technically superior, but even if QNX was, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.
An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.
Just sharing my perspective here.
You're getting all red in the face using some really dubious arguments to back you up here.
For some definition of "widely".
And feel free to stop adding personal attacks to your comments. They do not enhance the credibility of your posts.
And I know that you didn't claim they are mainstream, so we may be quibbling about where we draw lines around the word "widely". But...
What's the installed base of systems running QNX, say? (Throw in the others if you wish.) Estimates are acceptable, too, if you don't have hard numbers.
It's not only worth looking at how many, but what. They're in vehicles, medical devices, industrial automation, military and telecom. Those are all areas where blunders lead to loss of lives, not just annoying downtime. As far as infotainment and telematics are concerned, estimates put them at around 60% of the market in 2011, so it's likely your car runs QNX.
The design wins, of course, should be obvious to anyone willing to do a modicum of research.
To clarify: Windows is, by any definition, both "mainstream" and "widely used". Yet it has very few "design wins". Therefore, the argument that cars are "only a few design wins" cannot be used to say that QNX, say, is not widely used or mainstream, since Windows is obviously mainstream and widely used.
> It's hard not to be on the offensive when you seem to beg for it.
You need to re-calibrate your sensitivity. You seem eager to take offense at nearly everything. Very little of it is worthy of your outrage.
Not that that changes the core fact that Apple is shipping L4.
My argument is that there's lots of great-in-theory but untested-in-practice stuff in academia, and that you can't discount something altogether just because it's untested. It's hardly fair to compare the output of a few grad students over a few years with all of the effort that goes into a major industrial product.
And anyway, the architecture of Linux originated in academia too.
It's a real, POSIX-compliant, reliable microkernel that can run useful things now.
QNX is behind some automotive dashboards, and they're moving into automatic driving. They have some big announcement coming at CES in January.
But nobody runs QNX on the desktop any more. This year, they finally stopped supporting the self-hosted development environment. Ten years ago, you could run Firebird (pre-Firefox) and Thunderbird on a QNX desktop. But when QNX stopped offering a free version, free software development for it stopped.
QnX had an open source program going for a while but it was shut down again. See: http://www.qnx.com/news/pr_2471_1.html
" Access to QNX source code is free, but commercial deployments of QNX Neutrino runtime components still require royalties, and commercial developers will continue to pay for QNX Momentics® development seats. However, noncommercial developers, academic faculty members, and qualified partners will be given access to QNX development tools and runtime products at no charge.
Customer and community members will also have the ability to participate in the QNX development process, similar to projects in the open source world. Through a transparent development process, software designers at QNX will publish development plans, post builds and bug fixes, and provide moderated support to the development process. They will also collaborate with customers and the QNX community, using public forums, wikis, and source code repositories."
Suggests that it was open source more in name than in fact.
Message passing pre-dated QnX by a considerable time, they just did a really nice and clean implementation of it.
I'd absolutely support their copyright claims on their code (at the time their implementation was unique), but I'd totally object to any patent claims; message-passing systems had been used widely by that time, also at the kernel level. QnX may have been the first microkernel on that principle that received wide adoption, because of the strength of the implementation.
The fact is, if you vet your hardware and use a major distro (Ubuntu, OpenSUSE, Fedora) you'll wind up with a perfectly functioning Linux desktop or laptop.
Think about it: OSX only runs on a few laptops. Linux runs perfectly on more laptops than exist for OSX. Windows runs on many laptops, more often than not quite well, though not always perfectly, despite being bundled with them.
I've been using Ubuntu on a ThinkPad T530 for several years, it just works. Couldn't be happier. Everything works BTW, function keys, fingerprint scanner, everything.
As for the Linux eco-system - major browsers work, Steam works, there's a phenomenal ecosystem around Linux if you do any sort of programming, data science, etc... I really have nothing to complain about these days.
And anyway, if you looked at what the "competition" sells with this much of a critical eye, you could probably write something at least as long. Even the latest MS flagship devices running the latest Windows 10 versions are full of bugs now -- and likewise for major PC vendors like Dell -- so GNU/Linux distributions might well become attractive just because Windows devices are of terrible quality today :p
Whether or not it is fixable by the community is unimportant to an end-user that just wants something that works.
If the company producing and selling the hardware is not giving the specs to their users, then it's their fault. Perhaps that's a bit too RMS for some people, but in this case I basically agree with him. It's mine, I bought it, I want to run whatever I want to on it.
AMD started releasing the low-level documentation for their GPUs in 2008, and although the FOSS drivers have benefitted stability-wise they're still lagging in API features and often offer less than half the performance of the proprietary counterparts. As far as I know we don't have a complete FOSS OpenCL 1.0 (ca. 2009) implementation for any ISA, nevermind newer versions or competitive performance.
Unfortunately GPUs are so complex that specs alone don't guarantee good drivers.
However, I do acknowledge that there are still many problems that bar Linux from being an operating system "for the masses" (i.e. all those people who are not computer nerds). Many small problems can be fixed with a few commands in terminal - but which grandmother/stressed office worker/gamer kid is willing to learn how to use a UNIX shell or configure fstab just to do their stuff? And there are other problems that aren't solved as easily. I help out at the Ubuntu Forums, and I see plenty of posters with problems that the combined wisdom of a few thousand experienced Linux users can't solve. (Just have a look at the "Unanswered Posts" section.)
So yes, Linux is a fantastic OS with great software available, and by all means let's keep advertising it. But let's not pretend that "it just works, right out of the box!"(TM) every time.
It does though. Get an Ubuntu laptop from Dell, it works. Get a Chromebook from Google and friends, it works.
Yes, install Linux on a random 5 year old laptop, and you may have problems. Ever built a 'hackintosh'? Same deal. Ever install Windows? It's a pain.
As for shell commands, Windows has that. So does OSX. Linux also has GUIs that can install packages, that can change settings. The shell is quick, but it's not the only way.
You have to compare apples to apples. And the fact is, if you install a popular distro on popular, well supported hardware, it does work. If you buy a laptop/desktop/server that comes with Linux, it works.
This is in fact the opposite of reality. Old hardware works relatively well. New hardware can take quite a while to get support because it's not a priority for vendors. GPUs especially, which the article harps on, are a very risky gamble - even when they work, "works well" is misleading - they're always behind Windows performance-wise, and very often power management is inferior too.
> But let's not pretend that "it just works, right out of the box!"(TM) every time. [Emphasis added]
Of course Linux often works. Perhaps even most times. A couple of weeks ago I reinstalled my laptop, switching from Ubuntu to elementaryOS. Took me about two hours (not counting most of the backups). On Windows, I would have needed two days. I have done an OS install dozens of times, with various versions of Windows and various Linux distros. And I find that Linux is often easier than Windows, because you can install all the software from one official repository instead of hunting through the download pages of a dozen vendors. BUT, and here comes the big but - that still doesn't mean that it always works smoothly. (Not that Windows always works smoothly, but that's not what we're talking about. We're talking about Linux's problems right now.) To claim that there are never problems is simply not true.
About shell commands: of course Windows has those. But when do you ever really have to use them? (If you are a sysadmin, perhaps, but again, that's not what we're talking about. We are just considering "normal" users.) Everything that needs to be done can be done graphically. The various Linux DEs have made a lot of progress in that area in the past few years, but don't kid yourself. There's still a lot you can't do with a GUI.
And on a final (not quite serious) note:
> You have to compare apples to apples.
If I did that, I would never get away from Mac OS X, would I? :D
The article discusses several points that would substantially improve Linux. And it addresses your concerns several times, like here:
"There's a great chance that you, as a user, won't ever encounter any of them (if you have the right hardware, never mess with your system and use quite a limited set of software from your distro exclusively)."
Ignoring criticism and possibilities of how to improve Linux ("cause it works for me already") doesn't do any good. IMHO the article is a great write-up that could help to improve these issues in the long run.
The occasional hiccups are proprietary Nvidia drivers, and some wifi chips being too recent to have an open source driver readily available.
I use Ubuntu on a dual boot, I hardly ever boot Windows. And I'm a gamer! Steam + wine is enough for me.
But every time I use Windows and Macs again, I get even more annoyed. Mainly with how slow things are. Even on a new Windows 10 desktop or laptop, it's not uncommon to have to wait minutes for things to settle down after booting and logging in, or waiting while updates are installed during startup or shutdown. And on OS X, the spinning beachball was one of the main reasons why I stopped using Macs for the most part in 2002 or so. I figured it was just a byproduct of OS X still being early in development and computers not being fast enough or having enough RAM. But no, every time I try a brand new Mac, it still pops up, especially when trying to type in a URL or something like that. The bouncing in the dock and not knowing if an app is running or shut down is annoying, too.
But as long as you keep your stuff backed up or in the cloud (like Google Drive, Google Music, etc.), and stick mostly to stuff that works cross-platform (Chrome browser, LibreOffice, cross-platform IDEs, etc.), it's very painless to switch between OSes or devices or to wipe and reinstall things. Even Microsoft Word and Excel work in the browser nowadays, though I still usually just stick with Google Docs.
This is no longer my experience with Windows on an SSD. I'm always a bit shocked when I reboot and I'm back at my desktop in under 30 seconds.
On a machine without an SSD, I'm annoyed at how slow everything is -- not just booting.
As far as I remember there is even a Microsoft tool that highlights startup jobs that are slow to run, isn't there?
I'd check my autoruns if I were you.
It might be worth turning off any applications that launch on login that you don't need. This is now easily accessible from the Task Manager. It even tells you how much startup impact each app has, so you can find out which ones might be causing you issues.
IMHO X11 needs to be replaced as fast as possible.
X11 could use some improvement, but I doubt a new system will address the core problems, which are basically the result of hardware drivers being written for Windows with Linux as an afterthought.
It has actually gotten to the point where it's stable enough for daily use now.
Really? What about X11 compatibility?
> IMHO X11 needs to be replaced as fast as possible.
Why? If you don't like X11 you can switch to Wayland right now. I have been content with X11 for many years.
X11 has very little to do with any of your complaints.
I have a minimal system, but it boots in 4 seconds from BIOS to ion3 (arch). It's funny when the BIOS takes longer than booting. #coreboot
30 seconds? That's still a lot of time. Debian Linux starts within 7s on a dual core 2GHz with SSD.
Don't add any extras... it's simply not usable on the net
1.4GHz Dual-Core Intel Core i5 (Turbo Boost up to 2.7GHz)
4GB 1600MHz LPDDR3 SDRAM
500GB Serial ATA Drive @ 5400 rpm
Intel HD Graphics 5000
User's Guide (English)
My work computer is a 3GHz i7 Mac Mini bought in early 2015, with 16GB ram, and I still have times when news sites freeze and won't scroll because someone has done something stupid when coding the site.
But my system uptime is 89 days and I get to the point where chrome shows a red bar because it hasn't been updated recently. There are a lot of irritating little bugs, but the only things that seem unstable are individual tabs and MyEclipse.
Reported it at https://www.google.com/safebrowsing/report_phish/ but mentioning here so that others don't fall victim to it.
Hard to track what exactly is causing this because of the ~200 requests this website makes before redirecting your browser.
The article itself links archive.org's copy of the article which I would recommend using over the original website since it appears to be free of malicious redirects:
The page you want reviewed is http://linuxfonts.narod.ru/
This page is currently categorized as Suspicious
Last Time Rated/Reviewed: > 7 days
This page has a risk level of High
I used Ubuntu on my laptops since I wanted to spend less time administering drivers and arcane configuration formats.
This is a good list.
I just got tired of the configuration formats, crappy drivers, inconsistencies, dependencies... I was irritated at how easy it was for Apple users to plug in a projector and have it just work. I was irritated by every update to some random library that would cause a sub-system to stop working. I hated having to spend any amount of time administering my desktop environment. To me it should just work and the less time I have to spend trawling forums, logs, and restarting processes to find the correct incantations of dependencies and configuration variables the better.
I've stuck with my MacBook Pro Retina, despite my early trepidation about a GUI-driven proprietary OS, because I've spent probably less than an hour in the last 4 years administering it. It's still snappy and works as well as it did on day one (also the hardware is nothing short of amazing). The only thing that sucks at this point is that the OpenGL drivers Apple ships are woefully out of date and I'm thinking of jumping to Windows unless something changes (damnit I wants me compute shaders).
I still use Linux every single day... just in a VM, container, or on some server.
Mageia is the first and only distribution I've ever tried that, on a variety of hardware, Just Works. Every hardware feature works out of the box, and with as little surprise as you'd expect (e.g. plugging in a monitor, using wifi out at a coffee shop, plugging in a printer). I've even done the distribution upgrades (e.g. version 3 to 4), which I was always scared of, and somehow those too just work. I literally cannot remember the last time I hand-edited a config file (not counting Apache configs on a Linode VPS with Ubuntu Server).
I've had a chance to use recent versions of OS X, and I even experienced more crashing (not a lot, mind, but my linux laptop never crashes) than on this stock HP with mageia.
Happy that you've found a solution that works for you, but I'm also extremely happy that I get to keep using Linux without spending one more second thinking about hand-editing configs! :)
I've zero issues with Plasma 5. I plug my laptop into the projector and it just works ;-) I used to have that problem as well, like everyone else. But honestly it's been a thing of the past for 2 years or so.
Also, using Plasma ensures I don't have to fiddle with configuration. It's not as leet as stumpwm or many others, but it's certainly more customizable, and more easily so, than OSX - while still working out of the box, just like OSX.
I don't remember Bluetooth or any device integration working well or at all on any Linux distro I've tried. Maybe that has changed but that kind of stuff is nice!
I've considered doing this myself, especially with WINE taking eons to implement adequate DX11 support (most games released in the last 3 years won't work), but it's a plunge I haven't been able to make yet. I think I'm going to just set a Windows box right next to my Linux workstation and move the mouse, kb, and monitors over when I want to do something Windowsy.
The ideal would be getting hardware that supports raw GPU passthrough, getting a spare GPU, and doing all of the Windows stuff in a VM (this allows you to get 90-95% of your full GPU performance, and have the card controlled by the Windows system), which would really be the ideal solution IMO. However, I found that even the full technical manuals rarely mention whether the hardware has an IOMMU, the required capability (which both the processor and the motherboard must support), and by far the vast majority of hardware doesn't.
I have a dual-boot partition now, but I use it probably about once every 9 months, and mostly just to poke around for a second and see if it has spontaneously combusted whilst unsupervised before rebooting back into my workstation OS. It's simply too much hassle to have my primary workstation offline for hours at a time while I game, because it hosts many services that my house relies on. I also have a Windows VM that I do most of my photo editing in.
I really hated that the old KDE4 was just a clone of WinXP, and later Gnome until 2010 was just a clone of OSX.
Ubuntu's Unity is the first linux that's proud to be itself, and it shows.
In my experience some things work better on Ubuntu than on the last Windows version I used, Windows 7. Detection of wireless HP printers, for example. Some things are exactly the same. Finally, some things work slightly worse: drivers for 3D gaming are still a bit worse on Ubuntu (unless you luck out and find the one driver that runs wonderfully and doesn't require any magical incantations), and configuring everything to work just right with some games can still be a pain. Less demanding games are just fine, and I use my Linux laptop a lot for this (heavy user of GOG.com, Humble Bundle and Steam here!).
The jury is out on OS X for me. Using my wife's MacBook drives me nuts, mostly due to lack of familiarity. Some things are just as bizarre as with any Linux/Windows computer; for example, the other day I was advised to hard-reset the MacBook because it wouldn't recognize any USB device... if this nice bit of WTF-ery had happened with Linux, the response would have been "what did you expect? Nothing 'just works' with Linux". But I guess computers will be computers regardless of the OS...
Just skimming, I found this bit on-point and amusing:
> It's worth noting that the most vocal participants of the Open Source community are extremely bitchy and overly idealistic people peremptorily requiring everything to be open source and free or it has no right to exist at all in Linux.
And yes, I have been running Linux since before there was 1.0. And no, I don't like it. But the alternatives are even worse IMO.
And there it was, at first, basically a way to do hands-free without a wire.
From there you have OBEX, carried over from IrDA, added to handle various data transfers, and a concept of profiles that are half in hardware, half in software.
In essence the very design jumps back and forth between kernel/hardware and user space.
Still, the point of failure I most often encounter is something getting stuck in dbus. Meaning that I can restart all the daemons, but unless I restart dbus, shit all changes once a failure happens. But restarting dbus at best kills my desktop, at worst reboots the computer (hello systemd).
For me, it turned out one piece of my stack had been set up by debian by default (or perhaps KDE's bluedevil?) to try and put uploaded files into a directory I didn't have permissions on. Leading to a near-silent error when trying to push files. Go figure.
It's basically an Intel 7260 Wireless-AC/Bluetooth chip plus a PCIe adapter.
It does not help that this works beautifully on the Mac.
Is this not an annoyance to anyone else... and more importantly, considering all distros are now using libinput, can I compile my own libinput that will universally copy/paste using win+c?
The issue is that the default terminal behavior of ctrl+c is to send a SIGINT to the running application, so terminal programs bind copy to shift+ctrl+c instead, and likely add the shift to the others for consistency.
I use "Terminator" as my terminal app - which allows you to set your keybindings - so I replace the "shift" in copy / paste and it's worked great for me for years. Unfortunately it doesn't allow you to override the SIG* keys, so it can be an issue when using `watch`, which removes your selection when it updates and then if you don't tap ctrl+x in time, it will send SIGINT the watch command, which is the exact opposite if what you want at that moment.
As for VIM, being sure you're in insert mode is essential no matter what OS you're using.
Is there anywhere else besides the terminal where copy / paste are inconsistent? I haven't seen any that I recall in quite a few years of Ubuntu on my desktop, laptop, media PCs, etc.
Even on OSX, the bindings that are used are cmd-c and cmd-v. It is universal - if you have never used the terminal, vim and firefox on a mac... I really urge you to do that.
You'll see what I mean.
Coming from the other side of things, I had a lot of trouble when I was lent a brand new Macbook Pro for travel during my last job. It seemed some things worked using cmd and others using ctrl. I don't remember the specifics, as I didn't use it often enough, except that I'd find myself mashing keys on occasion trying to figure out the right combination.
It was far worse when I dual booted with ubuntu. The keys made no sense to me there either. Finally, I replaced OSX completely and the keys were "normal" again (ctrl+* for everything).
1: Also lots of DOS, but I don't even remember what versions. I had managed lots of desktops and servers running DOS for a few years in IT a couple lifetimes ago.
IBM CUA and Microsoft Windows adopted them afterwards, adapted to the PC keyboard; I think Windows only adopted them with Windows 95 and NT4.
You should be able to set these with `stty` (they're not actually a function of the terminal but of the tty, a detail no one should have to know).
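A small POSIX C sketch of that point: the interrupt character lives in the tty's termios settings (c_cc[VINTR]), which is exactly what `stty intr` pokes at; remap it and ctrl+c stops meaning SIGINT for every program on that tty.

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }

        /* The default is 0x03, i.e. ctrl+C. */
        printf("current intr char: 0x%02x (ctrl+%c)\n",
               (unsigned)t.c_cc[VINTR], t.c_cc[VINTR] + '@');

        /* Uncomment to move SIGINT to ctrl+G, freeing ctrl+c -- at the cost
         * of surprising every program on this tty that expects the default:
         * t.c_cc[VINTR] = 0x07;
         * tcsetattr(STDIN_FILENO, TCSANOW, &t);
         */
        return 0;
    }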
Obviously, if you want to paste into vim you have to be in insert mode.
should be paste x11 selection. I don't normally use gvim/vim as an x11 application, but looks like * (star) is typically system clipboard:
But the terminal is a special case when it comes to copy/paste and always will be a little weird on any platform. Especially when it comes to line-wrapping in full-terminal applications like vi.
I truly envy the OSX people just on this one aspect.
It's the small things that OSX does right.