The sad state of Linux Wi-Fi (sesse.net)
432 points by tshepang on July 23, 2014 | 266 comments

One problem is that WiFi is completely opaque, especially but not only on Linux. You get no low-level and no debugging information. It just connects and you get some bars, or it doesn't. I never know where to start when debugging a bad connection, and I wouldn't know where to start if I wanted to improve Linux WiFi.

For example, sometimes I can see a network, but can't connect. Why? I'd like to see something like "sent 100 low level packets, checksum failed on 88 of them, disconnecting".

Or I'd like some way to see whether receiving or sending is the problem - do I get garbled packets, or do I get good ones, but no answers to the ones I send.
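On Linux at least, `iw dev wlan0 station dump` (interface name assumed) exposes per-station TX retry and failure counters, which gives a hint at which direction is failing. A rough sketch of summarising that output; the parsing follows iw's `key: value` lines, but which fields appear varies by driver:

```python
# Summarise TX/RX health from `iw dev <iface> station dump` output.
# Field names follow iw's "key: value" lines; which counters appear
# varies by driver, so missing ones default to 0.
def link_health(station_dump):
    stats = {}
    for line in station_dump.splitlines():
        key, sep, val = line.partition(":")
        if sep:
            stats[key.strip()] = val.strip()
    return {
        "rx_packets": int(stats.get("rx packets", 0)),
        "tx_packets": int(stats.get("tx packets", 0)),
        "tx_retries": int(stats.get("tx retries", 0)),
        "tx_failed": int(stats.get("tx failed", 0)),
    }

# Illustrative sample, not verbatim driver output
sample = """\
Station aa:bb:cc:dd:ee:ff (on wlan0)
    rx packets:     1200
    tx packets:     800
    tx retries:     90
    tx failed:      12
"""
# High tx retries/failed relative to tx packets points at the sending side
print(link_health(sample))
```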

Sometimes I know a connection is WPA2, but it stubbornly tries another encryption method. Why? Does the AP suggest it, or is it my configuration? Sometimes I can't enter a text password, it only accepts a fixed-length hexadecimal string (happens on Windows a lot). Again why? There is no good central low-level log file or debug tool that lets me see what is going on.

I've never dug in to it before, but some googling just suggested


  # cat /sys/kernel/debug/ieee80211/phy*/netdev:*/stations/*/rc_stats

  rate      throughput  ewma prob  this prob  this succ/attempt   success    attempts
       1         0.6       68.0      100.0             0(  0)         9          12
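If you want to consume this file programmatically, a rough parser for rows like the sample above might look like this (the helper name is mine, and the column layout varies across kernel versions and rate-control algorithms, so the regex matches the sample format only):

```python
import re

# Parse one data row of minstrel's rc_stats output (sample format above).
ROW_RE = re.compile(
    r"\s*(?P<rate>\S+)\s+(?P<throughput>[\d.]+)\s+(?P<ewma_prob>[\d.]+)"
    r"\s+(?P<this_prob>[\d.]+)\s+(?P<this_succ>\d+)\(\s*(?P<this_attempt>\d+)\)"
    r"\s+(?P<success>\d+)\s+(?P<attempts>\d+)"
)

def parse_rc_stats_line(line):
    m = ROW_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognised rc_stats row: {line!r}")
    d = m.groupdict()
    for key in ("throughput", "ewma_prob", "this_prob"):
        d[key] = float(d[key])
    for key in ("this_succ", "this_attempt", "success", "attempts"):
        d[key] = int(d[key])
    return d

sample = "1         0.6       68.0      100.0             0(  0)         9          12"
print(parse_rc_stats_line(sample)["ewma_prob"])  # 68.0
```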

Looks interesting, but which devices and drivers does it apply to? I have no expectation that this would work over every common wifi interface in current laptops (to say nothing of the typical home gateway with their embedded linux distribution and custom proprietary drivers).

Any driver which supports debugfs. Proprietary drivers of course will always be problematic.

Manufacturers could always agree on some kind of debugging protocol... I'm sure we'd end up with only one "standard" ;-)

I've wondered the same thing. Up until now I've avoided digging deep into my issues -- I've never found good docs, googling leads you to misleading forum posts and generally has bad signal-to-noise, and the WiFi stack is nested so deep that it requires a lot of knowledge. But I guess I assumed that the underlying stuff was generally correct and written by experts in the field. Maybe it's time a motivated layman like me dug in and started asking dumb questions.

It's opaque on Linux, but that's not the big problem. It's opaque on OS X and Windows as well, but nobody cares because it works.

I wonder if we'll eventually get the kind of stability that we currently have with ethernet drivers... once the speeds are high enough that we stop upgrading, once the chips go into long-term production instead of changing every few years, once things settle down...

I disagree. It's possible to get advanced output from various command line tools in Linux, and it's possible to get networking information. It's much more difficult to do that on other OSs, especially Windows. I loathe debugging network troubles in Windows, because there is essentially no information. OS X has most of your standard tools, though some are in odd places; e.g., printing the routing table isn't done with `route`.

Furthermore, I care. My 2013 Macbook Pro is terribly slow to associate to even nearby APs, and drops the connection often. I have no idea why, because it is opaque: how do you list nearby APs? their signal strength or encryption type? I didn't learn this until just now, and it's,

    /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources/airport -s
Seriously. There's no way I'll remember it, and if you're reaching for it, you likely can't Google it. And it appears to be deprecated. It's a much shorter command in Linux, and in general I find it easier to determine whether the problem is the network connection or a DHCP failure. IIRC, Network Manager's icon changes (admittedly not much) depending on which one it's on, so even without command line tools, I have some idea what's going on.

OS X has the troubleshooter dialog, but that's never been able to fix the problem. (In general, I feel, those things never do. They also never tell you what they're doing. For all I know, they're just progress bars and timers.) WiFi off, WiFi on fixes a lot of problems.

Open Wireless Diagnostics in /System/Library/CoreServices/Applications, then click Window -> Utilities. Gives lots of detailed info, frame capture, logging, etc.


You can access it more easily by holding down the "option" key and selecting the Wifi menu: http://imgb.mp/jxy.jpg

>I disagree. It's possible to get advanced output from various command line tools in Linux

Don't worry, Lennart Poettering (systemd/PulseAudio) is working to solve this problem


Oh man, the audio problems I sometimes had with pulseaudio...

airport -s is incredibly important in setting up roaming correctly with heterogeneous APs. You absolutely want SECURITY column to be identical, else devices won't roam.

> There's no way I'll remember it

I have history(1) and a symlink in a toolbox git repo.

> you likely can't Google it

"!g OSX airport command" works fine

> route

I found netstat -r is the most "Unix-portable" way of querying the routing table (Windows route is a peculiar beast anyway). route is deprecated on linux in favor of ip route.

> > you likely can't Google it

> "!g OSX airport command" works fine

Of course. The parent was referring to a situation when someone lacks internet access, due to a networking problem they are trying to fix, rendering them unable to use Google to find the command.

On OSX, just press option and click on your wifi-icon. You'll see the signal strength, channel, speed, encryption type etc of the AP you're connected to expanded in the dropdown menu.

I wish. On OSX, I often see a connection with full bars, which fails to connect. I have no idea why. It's incredibly frustrating.

Yeah, OSX is good but it still has problems.

    . chooses 2.4 over 5
    . not automatically connecting to my mobile wifi even though I use it all the time
    . simply tries and fails to connect over and over and over .. I have to stop wifi and start it, and then it just works.
My 2013 Air has had three updates including firmware that mention wifi fixes but it still has problems.

OTOH, my Fedora 20 install has been behaving beautifully (though, to be fair, in far less challenging environments)

Given that wifi has been constantly evolving since it started, it's not going to settle on chips any time soon. 802.11ac has just been released, prompting a new round of chips.

Wireshark is a good tool for inspecting 802.11 frames, but it's still a pain. If you do go down this path, I highly recommend adding a capture filter for your adapter's MAC, so you don't get flooded with management frames. Filter string: wlan addr 00:00:00:00:00:00.
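The filter string is simple enough to generate; a hypothetical helper (name mine) that validates the MAC first, so a typo fails loudly instead of silently capturing nothing:

```python
import re

# Build a Wireshark/tcpdump capture filter limiting capture to one
# adapter's MAC address, validating the address format first.
MAC_RE = re.compile(r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def wlan_capture_filter(mac):
    if not MAC_RE.match(mac):
        raise ValueError(f"not a MAC address: {mac!r}")
    return f"wlan addr {mac.lower()}"

print(wlan_capture_filter("00:1B:44:11:3A:B7"))  # wlan addr 00:1b:44:11:3a:b7
```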

NetworkManager doesn't relay any error information to the interface at all, but its log files are quite detailed.

At least on systemd based distributions (tested on arch), running

  journalctl -fu NetworkManager
will show you the live logs of NetworkManager interleaved with the logs of its subprocesses; this has lots of juicy information that I've found quite useful for debugging.
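The interesting lines in there are the device state transitions, which you can pull out mechanically. A hedged sketch; the exact log format differs across NetworkManager versions, and the sample lines below are illustrative, not verbatim output:

```python
import re

# Extract device state transitions from NetworkManager log text.
# Format varies by NM version; this matches "state change: X -> Y" lines.
STATE_RE = re.compile(
    r"device \((?P<dev>[^)]+)\): state change: (?P<old>[\w-]+) -> (?P<new>[\w-]+)"
)

def state_changes(log_text):
    return [(m["dev"], m["old"], m["new"]) for m in STATE_RE.finditer(log_text)]

sample = """\
<info> device (wlan0): state change: prepare -> config
<info> device (wlan0): state change: config -> failed
"""
print(state_changes(sample)[-1])  # ('wlan0', 'config', 'failed')
```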

> journalctl -fu NetworkManager

Perfectly sums up my feelings about NetworkManager.

What's so bad about NetworkManager? nmcli is really useful.

Unnecessarily complex daemon; three lines in /etc/network/interfaces and a few shell scripts will suffice, thank you very much.

What's good for you might not be the best option for the majority of people.

Making things that "just work" means handling complexity for users, and leads to somewhat complex code. Debian's static network configuration is great for servers and okay for desktops that never move. But it's nothing that you should put on laptops operated by enterprise users (the people that pay for Linux desktop development). Imagine users calling support from a Starbucks, trying to edit the wifi config files.

Handling complexity by piling more complexity on top is sadly very common, so common that people think it's inevitable. But it's not the only way. The other way can sometimes be harder, and it tends to take more thought, but in the end you may actually solve the problem and get rid of the complexity.

Complexity can be caused by the developers of the software, and in that case it is unwarranted. But my assumption is that NetworkManager developers are reasonably competent, and take good decisions in all the trade-offs they have to make.

In their case, they need to support lots of features, and make all of them work seamlessly: multiple Ethernet, Wifi and VPN interfaces, IPv4/IPv6 configuration, modems, firewall policies, and so on. n^m different states. To get rid of this complexity, you'd need to remove options at the bottom of the stack, e.g. only allow communication via serial port at a fixed rate.
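The n^m remark is easy to make concrete: each independent subsystem multiplies the configuration space. The numbers below are invented purely for illustration:

```python
# Toy illustration of the n^m state-space remark: with n possible states
# per subsystem and m independent subsystems, every combination is a
# distinct configuration the software may have to handle.
def n_states(options_per_subsystem, subsystems):
    return options_per_subsystem ** subsystems

print(n_states(4, 6))  # 4096 combinations from just 6 subsystems x 4 states
```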

You're right. This is incredibly frustrating. Are there any good workarounds?

I feel the same way on Windows, but I've never attempted to dig deeper.

>Sometimes I can't enter a text password, it only accepts a fixed-length hexadecimal string (happens on Windows a lot). Again why?

Um, maybe because WEP keys are fixed-length hexadecimal strings?

Right here: https://wikidevi.com/wiki/Atheros_AR5B22

This thing works. I just buy them in bulk. I can get one for $13 off ebay. Any device I run Linux on, I replace whatever half size mpcie card it has with one. I don't fuck with the realtek or broadcom chips I get, because ath9k is all-open, no proprietary firmware, works out of the box. Drivers are in any kernel since the 3 series started, the bluetooth is just a generic HCI bluetooth adapter over PCI, shows up as a usb device and works with bluez no problem.

My best speeds on 5ghz with one of these has been around 20MB/s. Across my house I usually get around 8 - 10 MB/s. I think these chips are supposed to get much higher average throughput, but it works well enough I don't care. Works under Debian, Suse, Fedora, Arch, Mint, Mageia, even Slackware.

Ath10k makes me mad, since they are now shipping firmware blobs. Again. And they were doing so well.

I trust these chips so much when I'm doing IT support and trying to advocate Linux to customers. I have been able to join every wifi network I've thrown the thing at, from ancient wireless-a routers to wireless-AC ASUS routers with 2GB throughput.

Point is, different vendors have different quality. The ath drivers have been great for me, but I've only ever bought this specific chip because of the value proposition.

Much appreciated tip, thanks. I've known some atheros chips had really good drivers, but this is more specific than my vague impressions.

I suppose a major cause of the situation is that hardly anyone buys a laptop with the wifi chip as the first concern. Most laptops for sale don't clearly show the wifi chip, unless you go into the online configurator where you might be able to pay for an upgrade (yes I'm one of the weird people who considers the wifi chip, having once worked at a wifi ap vendor). I also doubt it's possible to replace the wifi chip on a macbook air or retina. So it's rare to have "cult favorite" chips of this type that enthusiasts can gravitate to, usually we just deal with whatever we end up with.

It relates back to a real problem in the Linux ecosystem today - the assumption Windows Computer == Linux Computer. A false assumption, especially when you get into the realms of device support for specific motherboard features, wifi cards, expansion cards, etc.

If anything, the real problem is the lack of an easy-to-reference directory of Linux hardware from the buyer's perspective, rather than from the owner's perspective. I.e., "I want to buy <insert part> (or <notebook>) that supports Linux, all the parts manufacturers provide open drivers or documentation, and all the parts are compatible."

The lack of such a resource probably turns a lot of potential Linux converts off.

Thanks to the previous comment, I just ordered a card on eBay for under $10 to test my theory: any laptop with a PCIe wifi card can be "upgraded" to an Atheros card. And I might even get a few bucks back, since laptop-specific cards usually cost more on eBay :))

In theory, if PCIe cards are completely interchangeable, I should be able to just swap them in and out. We'll see if it works out.

Lenovo laptops tend to have BIOS-enforced whitelists, so if you have one, don't be surprised if the new card doesn't work.

I have a Dell; we'll see how it works out. My current card works OK, so I can always go back.

That was a valuable tip. Thank you.

I stopped using NetworkManager years ago, I just use wpa_supplicant and dhcpcd directly (well I use the systemd services for these so they're started automatically). I also always name the 2.4GHz and 5GHz networks differently, and connect explicitly to the one I want.
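For reference, a minimal setup along these lines might look like the following (the interface name, file path and unit names are assumptions that match common systemd packaging; check your distro):

```
# /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
ctrl_interface=/run/wpa_supplicant
network={
    ssid="Home-5GHz"
    psk="YourPassword123"
}
```

Then something like `systemctl enable wpa_supplicant@wlan0.service dhcpcd@wlan0.service` starts both automatically at boot.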

Yes, it would be great if NetworkManager did the right things automatically, but the task might be impossible due to hardware and driver quirks. WiFi chips are among the buggiest chips in computers today. Really, the chip itself. I've seen ridiculous workarounds in drivers for multiple vendors' chips. The drivers are complicated and buggy too, and the Linux drivers don't get quite enough full-time attention from the chip vendors to work solidly. Those Windows and Mac wifi driver teams are pretty big.

If you haven't used Network Manager in years, I'd suggest taking another look. It (like PulseAudio) was shipped far too early, souring early-adopters on its use.

Later kernels have (somewhat) cleaned up the wireless driver interface so these days I don't have problems with day-to-day usage of Network Manager.

PulseAudio still requires you to be a hacker to get sound over the network - something it promised to do well, but oh, it doesn't.

If you aren't blessed with luck, one might say that PulseAudio still requires you to be a hacker to get sound. Over and over again. Every time you think it's fixed for good, it'll prove you wrong.

On the large majority of laptops I've installed Ubuntu on recently, sound has just worked out of the box, so here's to hoping my lucky streak continues!

Sure, sound works fine for a bit, but wait until you want to plug a USB headset in. First time, great. Second time, fail. Log out, log back in, headset works again. Lame!

It's all about having a common hardware profile.

Add any external sound hardware, use an optical or SPDIF port instead, add some speakers. It'll break sooner or later, and in my own experience it tends to be whenever my personal computer gets away from the 'typical' desktop profile.

Try plugging in a headset and getting the sound out of it, then unplugging it and getting the sound from the laptop speakers, without going all hacker on it; or just try to use your bluetooth headset. Yeah. Nope.

Or if you're even less lucky, PulseAudio still requires you to mute the "system sounds" thing because once skype starts, you get persistent noise out of the audio jack, after which you try to email Lennart following the guide on his page, only to receive no reply.

My current regular experience is:

Pulse audio + reconnecting bluetooth speaker after it powers off = bad day

+1 I tried for an hour to get PulseAudio to work with Bluetooth and A2DP and use my phone as a sound source. Decided I wasn't going to waste time on this. Gave up and ended up buying an external A2DP box off Amazon and plugged it into my Line In port.

It is these little things that made me move away from GNU/Linux as my main laptop OS.

The OS is great when it works, but then those little things are always around the corner, turning weekend actions into weeks.

All of my computers run Linux, but that's because I can deal with most of the flaws. This is definitely part of the reason why desktop Linux has never taken off with the general public.

Also, the people who manage Linux distributions seem to absolutely love suddenly getting rid of things that work and replacing them with incomplete alternatives, without any kind of migration of user data and settings. Those alternatives should be pushed out as developer previews until they either

(1) match each and every feature of whatever they are replacing AND are capable of importing all settings


(2) warn the user months ahead of time with a list of features that are going to disappear in the replacement


(3) provide an easy, 1-click option to let the user continue using whatever they were using as their default, with continued support and updates


But it is still my main OS everywhere; I refuse to bow to proprietary overlords, despite these downsides.

I started working when proprietary overlords were the only option available, so it doesn't matter to me that much.

What do you mean?

I too perform work for enterprises, getting paid to work with and on all things Linux, Java and Python.

Just because it's freedom software doesn't mean there is no money to be made.

I mean when I started working there wasn't open source as such.

Maybe the local computer club or some rich guys that could pay for BBS connections exchanging stuff.

Everything from hardware, text processing, drawing, music, compilers, editors,..., was provided by proprietary overlords.

Oh, you are ancient - respect!

I was lucky: I started around 1999, when Red Hat 6.2 existed, with gcc, all from a free book and CD from the public library.

Years ago it didn't take me that long to get PulseAudio setup to play from my HTPC through my laptop (so that I could use the headphone jack on the laptop). That said, it wasn't plug-and-play. I didn't use it too often because there were too many moving parts every time I wanted to get it setup (i.e. setup the laptop to receive audio, then get the HTPC to connect to the laptop and send audio... then disable it all to get things back to normal afterwards).

What would have been preferable would be for the HTPC to advertise itself as an audio source, and for the laptop to be able to list sources and let the user select one.

I had been using wicd on my old laptop for a few years. Then, when configuring a new computer, wicd failed to work on it. I tried probably everything, even the most low-level CLI tools, but the connection kept failing, and frustratingly, it was providing almost no information about what was going wrong. Then I installed XFCE to have a temporary GUI, and its NM connected to WiFi. Now I'm using NM, and still have no idea why the other tools did not work for me.

Years ago, I had issues with several hotel wifis on the east coast. I found that completely disabling NetworkManager and manually connecting was the only work-around. But NetworkManager had to be disabled from startup. If it was started at any point, even if the service was stopped (and a manual connection was attempted) I couldn't connect to the network.

It's hit-or-miss.

Yeah, NM rarely works correctly

"Ah why don't you open a bug", because people are not interested in making it work (like a lot of "modern" Linux stuff)

If it works manually (dhcp, wpa_supplicant, etc.), I don't see what the problem is. Or rather, I do: NM doesn't work.

Yeah, unfortunately you're right about WiFi chips. It's about making the bare minimum hardware and shoving everything onto the drivers.

NM tries to be too clever. I have no idea what some of the options in the interface mean.

Contrary to what other commenters are saying, I've been using NetworkManager on my ArchLinux Thinkpads for years and it works great. (And I don't even have a desktop environment, I have DWM, so I am usually a "do it yourself" person.)

Whereas I could never get the whole mess working well without NetworkManager.

My experience is: spend hours messing with configuration shit, or just let NetworkManager do all that automatically for you, which in my case it does well.

Network Manager uses wpa_supplicant underneath, and delegates roaming and AP selection to it. So I don't know how what should be a dumb program on top gets such a bad rap.

Not that I don't use wpa_supplicant directly anyway, but when I have used NM, it seems to do the same job.

I'm also a DIY person. Using networkd already along with systemd and wpa_supplicant. It's very simple and reliable.

I use wpa_supplicant on Raspberry Pi. It works fine even with bad WiFi signal, though I have trouble with Samba client. It forgets to reconnect to SMB server every few weeks after dozens of WiFi reconnects. Tracking down the issue and filing a bug is a bit complicated (automated embedded device, WiFi signal noise, etc).

Your voltage on the USB port might be too low.

I always had troubles with the managers too, and now use wpa_supplicant directly combined with systemd-networkd. When a connection is established networkd automatically starts dhcp. I think it connects faster, and I've had no problems with wifi since I switched.
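For what it's worth, the networkd side of such a setup can be a single small file (the filename and interface name below are illustrative, not mandated):

```
# /etc/systemd/network/25-wlan.network
# systemd-networkd starts its DHCP client once wpa_supplicant brings the link up
[Match]
Name=wlan0

[Network]
DHCP=yes
```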

Same here. I kind of never got NetworkManager to do any good, so I just gave up; mii-tool, dhcpcd, iwconfig and wpa_supplicant are well-used tools, and basically it works fine, when I actually have the drivers. These tools don't bail out on me or try to hide complexities behind more complex abstractions like NetworkManager does. If I'm going to be troubleshooting anything, it's going to be the networking issue and not the dbus-to-NetworkManager interface or policy files.

So basically you need four different tools to set up networking? Well, it sure sounds like the complexities are there and in plain sight then. I can't help but wonder though if it isn't possible to get rid of some of these complexities and make life simpler?

It's not as complicated as it sounds in practice and can take the form of a simple config block in /etc/network/interfaces:

        iface wlan0 inet dhcp
                wpa-ssid "Your SSID"
                wpa-psk "YourPassword123"

(For a WPA network, the wpa-ssid/wpa-psk pair is all ifupdown needs; the older wireless-* options are for unencrypted or WEP networks.)

I wrote a python app/CLI program for dealing with those couple of programs.


It works ok and just wraps ifup and iwlist. It also reads and writes /etc/network/interfaces-style config, so you can see what's going on under the hood.

But I agree. Getting it all work the first time (or when I encounter a new type of network) is just ridiculous.
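A minimal sketch of the kind of stanza parsing such a wrapper needs (the helper name is mine; directives like auto/allow-hotplug/source are skipped rather than handled):

```python
# Collect each "iface" stanza of an /etc/network/interfaces-style file,
# along with its indented options, into a dict keyed by interface name.
def parse_interfaces(text):
    stanzas = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        words = line.split()
        if words[0] == "iface":
            # e.g. "iface wlan0 inet dhcp"
            current = {"family": words[2], "method": words[3], "options": {}}
            stanzas[words[1]] = current
        elif words[0] in ("auto", "allow-hotplug", "source", "mapping"):
            current = None  # directive, not part of an iface stanza
        elif current is not None:
            current["options"][words[0]] = " ".join(words[1:])
    return stanzas

conf = """
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "Your SSID"
    wpa-psk "YourPassword123"
"""
print(parse_interfaces(conf)["wlan0"]["method"])  # dhcp
```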

I don't know; possibly. I see now that mii-tool is packaged together with ip. So really, it depends; it's different layers, different tools, but they could still all be packaged together into one package with one syntax for everything. wpa_supplicant and dhcpcd should go under the net-tools/ip package too.

ip link should also show the link-speed information from mii-tool, and ip addr should take a dhcp argument instead of requiring a separate dhcpcd. Then merge wpa_supplicant into ip, perhaps as an ip-wlink that requires an SSID/keyphrase to set the link to the "up" state.

So wpa_supplicant, dhcpcd and iwconfig/iwlist/scan tools should be merged into net-tools and have the same syntax as the rest of the net-tools package.

But yeah, even then you need to learn the layers and set them up. In my opinion Windows got it wrong, its user interface is horrible and people often complain about networking problems which are really UI problems or bugs in Windows/drivers there too.

This problem is classic: networking is a stack, and that's a fact, and the best tools/UIs are ones which allow you to dig through that stack layer by layer. On Windows it's all or nothing.

I think a big portion of the problems Linux has these days is how unbelievably bloated and convoluted distributions have become. Every single function has three, four or more ways to do the same thing, often with different results. When trying to get something working I never know if I should run the /etc/init.d script, restart the service manually, edit a config file, use the command line config utility or use the GUI settings option. Sometimes you edit a config file and get the thing to work, only to find out some other utility will overwrite it on the next reboot. Other times you get strange errors or just nothing at all, because it's a deprecated method and you should have used the brand new tool-du-jour instead. It's a mess.

I tried setting proxy configuration so that apt-get would work with my employer's proxy. I put my (plain-text) username and password into a dozen different configuration files before it finally worked. Then I changed my password, and couldn't remember all the places I had put my credentials. I ended up having to reinstall the whole OS because I couldn't figure out how to undo the proxy settings that took me forever to figure out in the first place. Skipping over the fact that I had to put the password in plain text.

So yes, I agree.
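For what it's worth, apt itself will read proxy settings from a single drop-in file, which avoids the scatter described above (the filename is conventional, not mandated, and the credentials still end up in plain text):

```
# /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://user:password@proxy.example.com:8080/";
Acquire::https::Proxy "http://user:password@proxy.example.com:8080/";
```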

I had similar problems with Linux. This is why I have now a rather long textfile in a dropbox folder in which I write down all problems/solution/tweaks I had to deal with. If the same thing pops up again after half a year I can just look it up.

Great idea. I keep a folder full of Markdown files that I push to a Bitbucket wiki.

I do the same thing with regex one-liners.

Yes, apt ignoring standard HTTP_PROXY settings by default is a shame!

Try Slackware, http://www.slackware.com/, none of that auto-magical stuff of the other distros, with it you control your system instead of your system controlling you.

It's for this reason that I love openSUSE - or, more specifically, YaST. Thanks to YaST, you don't have to worry about all the crap that goes with getting a service to run and be configured properly.

I have a good number of servers and desktops running openSUSE, and only once have I had to manually edit a configuration file (in order to work around some obscure DBus glitch on a coworker's personal desktop); everything else has been manageable with YaST alone.

Copy-paste, printing and networking: the biggest usability misses of Linux.

I don't consider these technical issues, but usability issues. Things like insisting on keeping the install ISO at 750 MB leave seamless driver support off the todo list, because... hey, there are no drivers anyway.

I'm really surprised there is not a paid version of Linux with these features baked in. I would GLADLY pay for all these (as well as the royalties for mp3,flash, etc)

EDIT: does anyone know if systemd-networkd would make things better? [1]

[1] https://wiki.archlinux.org/index.php/Systemd-networkd

Why is copy-paste broken? I never had issues.

I don't have issues anymore with printing and networking. I understand that this stuff is per vendor so I usually do my research before buying (I have thinkpads, and my printers are all pretty friendly). I do see many windows computer fall over when printers connect to new wireless networks, though.

As for mp3 and flash, isn't it as simple as enabling a non-free repository and letting it do its thing? If you use Ubuntu the option to enable that is right in the installer. I've never had the issue since I switched to Linux full time (11.04)

In X, you have N selections referenced by atom. Two atoms (PRIMARY and CLIPBOARD) are sometimes used interchangeably by various software, leading to all sorts of shenanigans where, for example, selecting then middle-clicking pastes something that was Ctrl+C'd elsewhere, or vice versa.

Additionally, X selections aren't buffers - they're handles used asynchronously. So, when you paste, if the source application is dead or has mis-handled its state internally, you don't get what you expect.

These behaviors are patched over by clipboard managers which manage PRIMARY and CLIPBOARD interactions and which immediately copy the selection into a buffer to make it long-lived. However, each desktop environment's clipboard manager has gradually expanded to include all kinds of strange environment-specific metadata possibilities (to enable, e.g., "Paste Special" options from a spreadsheet).

This has some nice little side effects, like I can use the highlight -> middle-click action to get around JS in the browser that is triggered on copying text.

Not that I'm a fan of the situation; Keepass2/Mono break my routine every day.

> Why is copy-paste broken? I never had issues.

Different programs use different conventions. For example, shift+insert does not do the same thing in say, xterm and firefox.

Interesting. I only noticed now that shift-insert in Firefox doesn't use PRIMARY but CLIPBOARD. I'm so used to middle mouse button paste that I never noticed. This is probably due to Firefox not using a native graphical toolkit.

Don't they use Gtk2 & Cairo?

Chromium is slow to deliver the clipboard too (just containing text), so you press ctrl+v and wait for several seconds.

Chromium and some others take the clipboard with them when the app is quit; suddenly the clipboard is cleared.

Really? I use Chromium on the odd occasion and never have to wait, nor have I ever noticed the clipboard being cleared.

Depends on the programs you use, and whether or not you're running a full DE or a bare WM. Most programs use Ctrl+X/C/V, Shift+Delete/Shift+Insert/Control+Insert, and/or highlight/middle-click in various unpredictable combinations.

I've found this to be less of a problem in modern desktop environments (especially KDE, in my experience), since most DEs nowadays feature their own clipboard/buffer management.

Huh. Interesting. I only ever use C-x C-c C-v, and then C-Shift-c C-shift-v in my terminal emulator so I never notice these issues.

For me even that's annoying enough. On a mac it's consistently command-x command-c command-v everywhere, whereas in Linux I have to think "oh, I'm in a terminal now, use shift", and if you get it wrong things screw up (e.g. C-shift-v opens the inspector in Firefox if memory serves).

Same here - I didn't know that Linux copy-paste still had problems, but that could be just because of how we're using it.

Sound isn't too great either. The BlueZ developers, for example, decided to drop HFP/HSP support in BlueZ 5.x [1], which means there is no way to get a bluetooth headset (with a microphone) working on Linux anymore; there's just no way.

Although it's not like sound is great on Windows either.

[1] http://www.freedesktop.org/wiki/Software/PulseAudio/Notes/5....

> Although it's not like sound is great on Windows either.

Pre-Vista I might have agreed with you, but now? I think Windows has quite a good audio stack.



Vista was the first time I suddenly found myself unable to use my headphones on a Windows machine. Sound was forced to go out through the speakers. I eventually rebooted to fix it.

I feel things are moving backwards.

Your single piece of anecdotal evidence about one non-updated driver doesn't really prove anything about "going backwards" or say anything about the merit of the new audio stack.

The irony of this to me is that I started using Linux back in 1998 or so (though I've used it off and on for the past 15 years, I'm far from an expert, just a hobbyist)

But in 1998, Sound on Linux was almost impossible to get working correctly (that's hyperbole, but it wasn't too far off)

It's gotten better, but not by much. But considering 15 years have gone by, that's faint praise indeed.

I'm just here (on my laptop with speakers that have mysteriously stopped working) to second any comments relating to the sorry state of Linux audio.

It's a good thing my brightness settings are stuck at 100% or I wouldn't be able to upvote this comment.

Does xbacklight work?

My macbook pro (running linux) has a strange issue where I have to run setpci -v -H1 -s 00:01.00 BRIDGE_CONTROL=0 before the backlight works (when using official NVIDIA drivers, nouveau works fine).

My T410 thinkpad required some xorg.conf setting to get the backlight working. It really is strange how bad the NVIDIA drivers are with this backlight stuff.

Apparently the BlueZ team only works on Android now, even though it's no longer shipped with AOSP (or even any phones?)

The reason appears to be that Intel wants Android to work well on their hardware, and the new BlueDroid stack doesn't support any Intel-specific features. Instead of fixing the sketchy (details in article) BlueDroid code, they decided to make BlueZ a drop-in replacement for BlueDroid:


I haven't had trouble with printing in years; in fact, I can basically print on everything with no configuration, while people with OS X and Windows laptops seem to have lots of problems.

While this is another "well, it works for me" kind of response, I've never had a problem printing from a laptop with OS X. Windows, yes. But OS X generally just finds the printer and goes, with no configuration necessary at all.

Aren't most Linux distributions still using CUPS, the system OS X uses under the hood?

(More on topic for the actual post here, the last couple of times I've tried to install Linux on a laptop getting wifi going has been a bit finicky, but usually so has getting things like booting into a GUI. I've attributed that less to Linux than to me trying to stick it on obsolete Apple hardware, though.)

As I mentioned, it has less to do with the underlying technology than the effort it takes to tie them together.

Setting up HP WiFi printers is not seamless. I know how to do it (using hp-plugin), but things could have been packaged to work seamlessly.

I was just printing on someone else's computer running barebones Lubuntu. I found that going to localhost:631 (cups web admin) has all the settings needed to add the printer, which came up as "Detected network printers". I think the OSX one has a GUI frontend for that. I just tested the GNOME gui on my laptop, it doesn't seem to detect the printer but localhost:631 does.

That's strange, since GNU/Linux and OS X both (last I checked) use CUPS for printing.

That said, it really depends on what printer(s) you buy. I've had wonderful experiences with HP printers, and tend to recommend them. Brother, Konica Minolta, and Epson printers work reasonably well with some tinkering and research in my experience.

Canon? Godspeed ;)

Is there any sort of certification scheme for CUPS? Given that's the standard for Mac and Linux, I'm surprised companies don't make 'CUPS compliant' printers.

I see a lot of recommendations for Brother printers, but they seem to require quite an involved process to set up.

My Lexmark X4850 begs to differ. No drivers, doesn't work, period. Lexmark doesn't care, and apparently no Linux driver developers have this printer, so no one has made support for it yet.

Lexmark have historically been badly supported in Linux because of their DRM shenanigans and their lack of interest.

Don't buy Lexmark, basically.

Used many wireless printers or scanners?

There are no royalties for Flash. Well, apart from the yearly cap that Adobe pays.

Similarly, Fluendo have been paying the cap for MP3 decoding since 2005 on their open source decoder for Linux.

(MP3 decode should now be out of patent, as should most encode tasks but "intellectual property" likes to keep its boundary lines as vague as possible so you never know if you're trespassing and just pay out of fear/habit.)

Printing has also been pretty good for a while, so much so that Apple adopted the same solution in 2002.

Are you sure your complaints are still valid?

I have only used Linux for the last 12 years. I use a thinkpad in an office full of macs and HP printers and 5ghz networks.

None of it works as seamlessly as on a Mac. I'm pretty comfortable compiling my own custom kernels, so I'm more than the average Joe trying to set up networking.

There is still a reason why every Linux install asks you to explicitly select that you want mp3 codecs installed. Flash is not installed by default. I don't know what the legal reasons are, but I'm willing to pay my share to not have to deal with it.

"There is still a reason why every Linux install asks you to explicitly select that you want mp3 codecs installed." Ubuntu is the only installer I know of that does this...

Mint does too, IIRC.

They use the same (or an extremely similar) installer last I checked.

> Copy paste, printing and networking. Biggest usability misses of Linux.

Hah. 5 years ago I switched my desktop from Linux to Windows because copy-paste suddenly stopped working: it became impossible to copy a URL from the terminal into FF's address bar. Then I said to myself I didn't have time for this s*t. I've never looked back after switching.

Concerning networkd, I don't think so. It's mostly geared towards (and I believe even originating from) CoreOS, and environments similar to it. Mostly for managing network devices in containers and virtualized deployments. Red Hat's been sponsoring Project Atomic recently.

> Copy paste

Out of curiosity, what do you feel is bad about copy-pasting under linux? It's inconsistent, but it's still the best copy-paste functionality I've experienced so far on any operating system. It'd be nice if all applications understood the dual clipboard, and if terminal applications behaved a bit better, but still.. by far the best of any OS as far as I'm concerned.

Inconsistency - again, this is not about Ctrl-C specifically but rather about some (any) consistency.

I truly envy the OSX guys, their consistency

P.S. Google for IBM CUA. Linux is supposed to follow it, but doesn't. MacVim is more consistent than gvim!

Every time I use OSX or Windows I catch myself marking a portion of text and trying to paste with middle mouse button.

good point. Except... all new laptops are coming without a middle mouse button.

This is the trend in all the latest Thinkpads as well a lot of other computers (I know of Asus and Acer as well).

If you don't feel like doing the research to make sure that your hardware will work correctly under Linux, you can buy a computer with Linux pre-installed.

Which is to say, you know, the one Dell laptop and the System76 behemoths. So, yeah, two options out of ... thousands.

And, research or not, _no_ hardware really, fully supports linux (or, vice-versa). Which is sort of the point of this whole thread.

ZaReason is the bee's knees: http://zareason.com/shop/UltraLap-440.html

They'll ship with the distro of your choice preinstalled, and I've had a wonderful experience with their support (they once suggested a kernel upgrade for me, a few versions higher than what was shipping with my preferred distro). Every model except the UltraLap comes with a mini-screwdriver and encouragement to use it. The UltraLap is their answer to 'ultrabooks', though, so it's not put together as nicely as the others: no screwdriver there :(

They just test everything, and only send you stuff that works.

Awesome, did not know about them. Thanks!

there's a bunch of sources nowadays.


You completely forgot the many GPU issues that still exist.

I discovered this for myself after converting my parents' 5-year-old PC from XP to Linux Mint.

As you can see from the bug tracker: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/852190 the kernel drivers are much buggier than the GPL'd Realtek drivers. But the official Realtek drivers don't work with newer kernel versions.

Fortunately there is: https://github.com/pvaret/rtl8192cu-fixes

Does anyone know why the kernel doesn't adopt the official drivers? Is there anything I can do to help? What steps do I need to take?

If those are really under GPL, nothing should stop you from converting them into a kmod using the kernel 802.11 architecture and pushing them upstream. I'd imagine that since it hasn't happened yet there is something else at work impeding it.

They ARE GPLv2. See for yourself. 4.0.2_9000:


Converting them to the latest kernel driver interface is beyond my skills, and beyond the scope of pvaret's rtl8192cu-fixes. Pvaret's remarks on GitHub indicate it's a hack just to get them running, and someone with experience in Linux networking internals is needed for a proper port.

I'd be willing to test and file bug reports. Your 'fix it yourself' attitude is exactly the kind of response that gives Linux a bad name.

But "someone else should fix it" does not help either.

If you do not give Realtek a financial incentive (i.e., by not buying their products) to maintain proper Linux drivers (in-kernel ones are what I would consider proper), then nobody in the world has a financial incentive to make them work for you. If you are buying Realtek products, having them not work, and not demanding a refund for the lack of functionality, then nothing changes.

It is disingenuous to blame kernel developers for not fixing a hardware vendor's shitty driver when said hardware vendor is not paying them squat.

Realtek provided a rock solid, stable, GPLv2 driver. The "proper" driver is a piece of shit, requiring a reboot after a few minutes of use. They shouldn't have changed the driver interface, leaving people like my parents with broken hardware, if they don’t have the manpower to fix this sort of issue.

> Is there anything I can do to help? What steps do I need to do?

Normally I would agree with you but zanny's reply was perfectly legitimate for your above question. I don't see any attitude from his/her side.

One of a handful of reasons I don't have Linux on my laptop anymore, even though I like so many things about Linux so much more. I'm not sure about this point, though:

> Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast.

I remember Linux being slow (and unreliable) at this, but OSX is pretty slow too, at least on my MBP. The OS that I've always had the best experience just with connecting to APs is actually Windows.

This is one of the reasons I've stopped running Linux 'bare metal' and instead always use an OS with good drivers as a VM host. Every time I fire up a Linux VM it makes me feel a little guilty (after all, if everyone does what I'm doing, there's essentially zero reason for hardware vendors to think about Linux drivers), but I've resigned myself to the fact that Linux's server roots are always going to show.

I do the same thing. No reason to feel guilty about it, not that I can see; indeed, when freed of the hardware compatibility issues that it just can't handle, Linux really shines.

Which MBP do you have?

It's fast on Apple products because they 'cheat', and re-use the last-given DHCP address.

http://cafbit.com/entry/rapid_dhcp_or_how_do (and the follow up, http://cafbit.com/entry/rapid_dhcp_redux)

Essentially, they behave as if they've spent the last (however long you were offline) without sending any packets. If their lease is still valid, they should still be able to use the same IP address; i.e., they shouldn't need to re-acquire one. That's not always the case, so they also do a DHCP request along with that.
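
That shortcut is easy to sketch in shell. This is only an illustration (the function name and the numbers are made up; a real client reads the timestamps from its lease database, e.g. dhclient's lease file):

```shell
#!/bin/sh
# Sketch of the "reuse the cached lease" decision described above.
# A real DHCP client reads these values from its lease file; the
# numbers used below are purely illustrative.

lease_still_valid() {
  obtained=$1    # epoch seconds when the lease was granted
  duration=$2    # lease time in seconds
  now=$3
  [ $((now - obtained)) -lt "$duration" ]
}

if lease_still_valid 1000 3600 2000; then
  # Skip the full DISCOVER/OFFER round trip: unicast a DHCPREQUEST
  # for the old address and optimistically start using it right away.
  echo "send DHCPREQUEST for cached address"
else
  echo "fall back to a full DHCPDISCOVER handshake"
fi
```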

I have a late 2012 13". It usually takes ~8 seconds to connect to an AP. Seems to depend on the AP and anecdotally it's very sensitive to the wireless channel. Maybe I just have a bad card or something.

I have found that the MacBook Pro's speed at connecting is dependent on the AP. My MBP connects to my work APs quickly, but it has issues with my home AP. Sometimes the connection to my home AP gets dropped for no reason, and when I try to reconnect it refuses to let me back on, claiming my password is incorrect. My MBP is the only device I own that has this problem (even my iPhone doesn't have this issue)

I've noticed that on my OSX macbook the connection time is slow, but it seems to connect before you log in if it's asleep. I think Apple uses tricks like this to make it seem much faster.

I'm running Arch Linux from a flash stick on a cheap Haswell Celeron. From BIOS to Chromium with wifi up is about 20-30 seconds. And I'm using connman, so it could be faster using wpa_supplicant/dhcpcd directly. On Windows it took up to 2 minutes. So it is definitely an issue with hardware and drivers, not Linux in general. Also, using old hardware (including old wifi dongles) is a recipe for headaches. A cheap Haswell laptop or desktop coupled with a Linux-friendly wifi dongle or card should work fine.

I can sympathize with this article. Every time my kernel is upgraded, I must manually recompile my wireless driver. I'm using a patched version of a Broadcom wireless driver that some kind soul on Github has been maintaining. If I was new to linux, there would be no way that I would have been able to get my wireless interface working in the first place. Linux has made vast improvements over the years in how well it works with so much hardware. There just seems to be more of a need for better wireless driver support.

Look into dkms[0] so the driver is compiled automatically on each kernel upgrade.

[0] - https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
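
For completeness, dkms wants a small config file in the driver's source tree. A sketch, where the package name, version, and module name are hypothetical and should match whatever driver tree you are actually building:

```shell
# /usr/src/rtl8192cu-fixes-1.0/dkms.conf  (hypothetical name/version)
PACKAGE_NAME="rtl8192cu-fixes"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="8192cu"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net/wireless"
# Rebuild automatically whenever a new kernel is installed:
AUTOINSTALL="yes"
```

With the tree under /usr/src/rtl8192cu-fixes-1.0, `dkms add -m rtl8192cu-fixes -v 1.0` followed by `dkms install -m rtl8192cu-fixes -v 1.0` builds it for the running kernel, and AUTOINSTALL="yes" takes care of every kernel upgrade after that.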

Thank you. I will check that out.

The author wrote: "a billion mobile devices running Linux and using Wi-Fi all the time". I would bet that Android has a separate implementation of some of the wifi stack to make it work better.

Under the hood Android just uses wpa_supplicant for connecting to WAPs.
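
For context, the supplicant side is small. A minimal config for a WPA2-PSK network looks roughly like this (the SSID and passphrase are placeholders):

```shell
# /etc/wpa_supplicant.conf -- placeholders, not a real network
ctrl_interface=/run/wpa_supplicant
network={
    ssid="ExampleAP"
    psk="not-a-real-passphrase"
    key_mgmt=WPA-PSK
}
```

Started by hand, that's typically `wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf`; Android drives the same daemon through its control interface instead of a static file.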

Android is open-source. I suppose that these workarounds would make it into the mainline kernel if they existed.

>Android is open-source.

But none of the WiFi drivers of Android devices are. If they've found an improved way to do WiFi, it's baked into the proprietary drivers.

Coming from a guy that worked on pre-production android hardware... you wouldn't want any of those drivers. Ever.


They don't even report the cell signal levels correctly half the time (... like always returning -1 for the error-correction signal level ...).

And as long as our computers don't come with Linux preinstalled it will continue to be this way. This is what happens when your OS is a stranger in a strange land; it doesn't feel like it belongs there.

If desktop Linux ever became popular enough for computers to come with it preinstalled, it would immediately go down the binary blob driver path that Android is on.

For vendors, the reasons to close up their drivers would be identical whatever OS their kernel drivers would run on top.

Of course, but it would at least work.

As the other poster mentioned, the driver and firmware can be closed source. The rest of the Android WiFi stack is very liberally licensed. The Android Java portion is apache and the supplicant is BSD. You can do a lot to tune the stack without sending code upstream.

I recently helped a relative move an old Win XP machine to Ubuntu. I was going to move them to Lubuntu; however, the USB WiFi dongle they are using refused to cooperate -- or rather, the Lubuntu live disk I was testing refused to cooperate with it.

The dongle's chipset was Linux friendly and had apparently worked without major problems up to about Ubuntu 10.10 or so. Google revealed the eruption of numerous reports of problems at that time.

Problems that apparently persisted through several releases, for a good couple of years.

The solution that people found worked was to download driver source code from the chipset manufacturer (RA) and build it, with custom settings, on one's own machine. Some also found success with banning one or more apparently concurrently competing drivers from being loaded on their system. Per some descriptions, multiple compatible drivers would wrestle for control of the device, evincing symptoms matching what I'd experienced.

I was getting ready to start a custom build and/or perhaps whack some driver loads (after installing Lubuntu to get past the fixed Live CD configuration) when I thought to try the plain Ubuntu live disk instead of Lubuntu. Problem gone.

Working stuff breaks. Breaks persist for months if not years. Ostensibly compatible/comparable systems aren't.

I'm not going to complain; it is what we make of it. But still, today, we don't always do such a good job of making -- or maintaining.

Separately, the Ubuntu screen image on my relative's truly mass produced, 17" Dell LCD is shifted slightly to the right -- just enough to hide the rightmost few pixels. An old 17" LCD I have plugged into an old T42, has a similar shifting. Useable, but slightly annoying, particularly with respect to today's anorexic scroll bars.

I did a little research into the problem, months ago, but did not find a ready solution. Not WiFi, but still slightly to moderately awkward.

If it's a VGA monitor you probably need to hit the auto-adjust button on it. Different graphics cards and drivers produce slightly different timings for the same mode, so the monitor has to compensate, and it only adjusts its setting when you first get it or explicitly tell it to.

Thanks! I don't know about my relative's monitor, but the monitor attached to the T42 has a dedicated "Auto" button. I think I had waded through the separate menu choices without success, but I never tried that "Auto" button.

Pressing "Auto", the image futzed around for a few seconds and then apparently aligned properly. It didn't even mess up my brightness setting.

Sigh... I'm getting old.

I guess I'll add that for a long time, this monitor was dual boot, and I wasn't too interested at regaining the pixels in Ubuntu, only to lose them or their corresponding columns on the other side, under Windows. Nowadays, no longer a concern...

You can probably fix your display issue with xvidtune

Thank you; I'll have a look at that.

As I grumped in another comment, I'm getting old. One symptom of such is increasingly "putting up" with marginal cases. Starts becoming "easier" than finding/learning yet "one more thing".

TL;DR: Don't get old...

I would extend it out to laptops and Linux in general. I've tried a few times over the last 10 years or so to use Linux on normal consumer laptops (Dells, Thinkpads, etc.) and it's always a really bad experience. Wifi issues, battery life, etc. have major problems. In the end I just gave up trying and use a MacBook with virtualization for running Linux.

I've been using Linux on ThinkPads since 1999 or so, and I can't remember any issues I couldn't solve. Just last week I installed Debian Wheezy on an X61 and everything worked out of the box. Works like a charm. This is a helpful resource: http://www.thinkwiki.org

Yup, Thinkpad X60 with Intel wifi cards seems to work OK provided you add the non-free firmware iwlwifi package. Some of the X61 and X61s series had an Atheros wifi card and would work from a default Debian install.

I've also had good luck with a Dell Latitude E5420 (i5). This and similar models have a Broadcom wifi card which is a known problem, so I simply purchased an Atheros half-size wifi card and popped it in. Unlike Thinkpads, the Dell BIOS will take hardware changes in its stride.

Of course, we should not have to do these things. Perhaps as laptop sales decrease, a crowdfunded fully free laptop will become economically viable.

The Free Software Foundation lists exactly one laptop that gets their 'Respects Your Freedom' certification.


I think this just depends on the make and model. I did a bit of homework before purchase a few years ago (to ensure good compatibility with GNU/Linux), and my Dell Inspiron N7010 has been serving me faithfully for years. No major wifi issues, no battery life issues.

Similarly, I did some research before I bought my Dell Latitude E4310, and it works flawlessly with Ubuntu, and always has.* Graphics (both on screen and on external monitors), WiFi, sound, even Bluetooth.

It's not a guarantee, of course, but my general impression is that going with "pure Intel" (CPU, GPU, sound, WiFi) laptop helps ensure compatibility.

*It's possible that the battery life is worse than Windows; I wouldn't know, because I've never used Windows on it.

I went Windows instead, but share the same feelings.

I kind of feel like some user-facing stuff on Linux gets developed not until it's in a good or perfect state, but only until it's bearable.

You can work with the GUIs, for instance, but somehow they still feel very sluggish.

The sluggishness, in my experience, depends on several factors:

* Which toolkit is used (if any). Tk-based apps seem to be very quick, since Tk is pretty spartan and basic. GTK-based apps are okay. Qt apps aren't quite as OK. Pure-X apps are zippy, but they're ugly as heck.

* Which WM/DE you're using. GNOME3 and its relatives (Cinnamon, Unity?) are sluggish as heck. KDE and GNOME2/MATE are much more tolerable, with or without desktop effects. (Open|Black|Flux)box are zippy, as are cwm, Emerillon, WindowManager, and virtually all of the tiling WMs. Enlightenment is zippy sometimes, but I don't think I've ever managed to get it to run without crashing back to a login screen within 5 minutes of use.

One curious thing I've noticed is that system resources have absolutely no bearing on UI zippiness. Whether I'm on a PowerBook with 512MB of RAM or a gaming rig with some Intel Core i9-867-5309-Quakemaster-Ludicrous-Gibs-Edition-whatever and terabytes of RAM with some NVidia GeFarce GTXXX 5-million-CUDA-core 8GB SLI monstrosity of a video card with hardware-accelerated 3D grass rendering, GNOME will always act like it's running on a God-damn ENIAC.

This is precisely the problem with Linux. Everyone wants to work on cool kernel-level stuff or daemon-level stuff; nobody wants to bother with the tedious, unglamorous last-mile work of actually delivering a polished user experience around all of that cool stuff.

linux is a pretty vague term which can refer to many _moving_ things

That's another problem with Linux. "Linux" is a kit of 50,000 mostly-compatible little parts. If your hobby is assembling systems out of these parts, that's great; but it's less useful to you if you want to do higher-order work on top of an assembled computer system.

For example, tailing /proc files and compiling new kernel drivers shouldn't be a part of getting wifi to work anymore in 2014. It should just work. Of course, if you want to tinker it's great that you _can_ tail the /proc files, but you shouldn't have to. You should be able to turn your computer on and just have wifi, out of the box.

I think it is unwise of him to generalize his issues to whole of Linux if he has only experience of one vendor and their drivers. Maybe the issues stem from problems in the Linux stack, or maybe they are vendor specific. Problem is that with a single datapoint you can't tell the difference.

I have 3 Linux laptops in the house. One with Intel running Ubuntu, another with Broadcom running Fedora and the third with a Prism interface running CentOS 6. All three work flawlessly.

We also have 3 Macbooks, a white one, a Pro 13 and a Retina 13. All three work flawlessly with our Wi-fi.

My wife had a corporate-issued HP laptop running Windows 7. It connected to our wireless once or twice over a year. I had a network cable for her.

My in-laws have a Windows 7 Dell laptop. It's now running Ubuntu, booted off a USB stick. It's doing so because Windows 7 sometimes connects to the network, sometimes doesn't and I never identified a pattern, so I simply gave up. Under Ubuntu, it works flawlessly.

And yet, somehow, it's the sad state of Linux wi-fi... Go figure...

Yeah, I haven't even so much as bothered to look at the wifi chipsets in computers that I purchase since like... 2007? Since then I just assume that it works, and so far it always has.

The only trouble I have is with this corporate T420 with Windows 7, on my home wifi network. Sometimes it takes 5+ minutes to connect (2.4ghz or 5ghz wireless-n, I have both available and it has trouble with both.)

I've also never had a single problem with NetworkManager, despite what others are saying. Then again, I've also never had problems with the infamous PulseAudio... I bought a cheap-as-hell USB audio card off the internet the other day and it Just Worked™. That didn't even surprise me.

a) Your personal anecdotes may not broadly apply.

b) Just because Windows is worse doesn't mean Linux isn't frustrating.

c) The post complains more about performance (rate, connect time, etc) and complexity (numerous overlapping components) than "Does it work at all".

> a) Your personal anecdotes may not broadly apply.

I'd say that if we were talking about a single computer, but here I have stories of 8 different computers with 8 different CPUs and 5 different wireless interfaces, radios, antennas and software all connected to a single wireless router.

If I got my numbers right, there is 81.41% chance this is not a fluke.

You have your numbers very, very wrong. You're about 10 or 11 orders of magnitude off.

I've used exclusively linux at home, for the last 6 years (except at work), and I did find that some of the nano wi-fi USB things (necessary for rPi) don't work.

Yes I have one that "works" in Linux, but it only ever connects at a glacial speed. In Windows it works at full speed as expected.

("Submit a bug", "Have you tried...", "What does [some command] say?", "It's the fault of the manufacturers" etc.)

This stuff can be tricky.

For years I was using a D-Link USB dongle with a Ralink chip to connect to my home network. At first I had to use a non-mainline driver but eventually mainline caught up. Anyway, either way, it worked pretty well without much fuss.

Then I moved to an apartment building with dozens of repeaters in it for a large university network, and my connection became unbearably slow (even though my laptop running Linux and my Android phone and tablet all worked fine) despite working OK on Windows.

So I ordered another dongle by a different manufacturer with a different chipset, which had many reviews exclaiming how well it worked on Linux. It had the same problems.

Eventually I got a PCIE Intel card and it worked splendidly, with no fiddling whatsoever.

The moral of the story is that there are a huge number of different hardware and software configurations and environments to use them in. And what's more, a configuration that works without issue in one environment can fail spectacularly in another.

> The moral of the story is that there are a huge number of different hardware and software configurations and environments to use them in

True, and it partly makes all these complaints valid for other OSes as well. I had major trouble with my MacBook connecting to my home wifi, until I bought a new router. No trouble anymore.

I believe it works for you, but can only say I am insanely jealous.

What do you mean "flawlessly"? What are your criteria for success? I suspect they're different from the person who wrote the submission.

The best method I found for getting a Windows 7 Dell to stop being stubborn is to restart the wireless driver.

Toggling the wireless power is less reliable than soft restarting.

Making the soft restart easy involves downloading the Windows SDK, to get a copy of DevCon, and then stuffing it into a scheduled task so that you can run it without a UAC prompt.

(I mean this more as 'surprise surprise it's the drivers' than as a defense of windows)

Win8.1 is even better: we had some colleagues swap their simple DSL modems for routers (which is better anyway, but...) because it's just not possible to use a DSL modem reliably with Win8.1!

What routers are you using?

I bought a Netgear wifi-to-ethernet adapter when I switched to Linux. For desktops, it's an ideal solution: zero-maintenance, the computer just knows eth0 has a connection from somewhere. It has a web interface to set up connections. The only downside - and this doesn't matter for most desktop users - no monitor mode, no reaver, nothing related to wireless at all, because as far as the OS is concerned, it's wired.

For laptops, there are always plenty of micro-sized USB adapters with known compatibility if your built-in wifi has bad or no drivers.

Wow. So many people in this thread are missing the point. «I use Linux at home with card X and it works great when I custom-configure wpa_supplicant» or «Just buy a MBP».


  - wireless chips are obscure and buggy
  - audio chips are obscure and buggy

I wish for some organized effort to bring a small set of open hardware chips to replace the proprietary ones that seem to only work easily on proprietary OSes.

That would complement the work of guys like bunnie huang (novena laptop) and would let the linux world enjoy sound hardware for (allegedly) sound software.

Maybe that's just a pipe dream, and the complexity emerges whether or not it's open.

Many chips/hardware pieces already have good Linux support, and the internet is full of info about what will work perfectly and what may cause problems.

"The 5 GHz signal is just as strong" Interesting. My dormitory at MIT has both 2.4GHz and 5GHz signals. The 5GHz is extremely weak but my Android devices love to pick a weak 5GHz signal over the 2.4GHz and subsequently have terrible speeds.

On another note, I wish that browsers and applications would keep spawning and firing requests at a rate beyond human perception, until one succeeds. The state of browsing the web over Wi-Fi while moving from access point to access point is equally sad. I get an IP address, but applications almost universally refuse to retry their connections until the first zombie socket times out. Seriously, I shouldn't have to wait 10 seconds after each access point change. It should be more like <0.1 seconds after getting an IP.

OSes/Applications should be thinking "This is Wi-Fi. Wi-Fi is supposed to be fast. Since no bytes came in for a full 0.5 seconds, something is wrong. I'm going to keep opening/closing sockets like hell, change networks, change frequencies, whatever it takes to get data to come in the next 0.3 seconds and make the user happy."
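
A toy version of that impatience, as a generic shell retry helper. The helper name and timings are illustrative only; the idea is that each attempt gets a fresh, short-lived connection instead of waiting out a zombie socket:

```shell
#!/bin/sh
# retry_fast: run a command up to N times, giving up on each failed
# attempt immediately instead of letting a stale connection linger.
retry_fast() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
  done
  return 1
}

# Illustrative use, wrapping a real fetch so each attempt opens its
# own short-lived socket:
#   retry_fast 10 curl -fsS --connect-timeout 0.5 --max-time 2 "$url"
```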

Building devices with dual Wi-Fi cards may also offer ways to help alleviate the handover problem.

> I wish that browsers and applications would keep spawning and firing requests at a rate beyond human perception, until one succeeds.

I do not think you would enjoy the network conditions that come with that behavior. The point about killing old sockets early when switching wifi makes a lot of sense, however.

Well, the problem is really that the network isn't even connected properly in the first place. I agree that it shouldn't behave badly while actually on the network. It's just odd that madly hitting refresh while wifi is reconnecting actually gives me a sooner and faster page load than letting the machine load it by itself. That means there is something that could be automated but unfortunately isn't being automated.

Network stacks are layered, and even after 40+ years, on every OS I've seen, L4 (TCP) never queries or uses any L1/L2/L3 link quality/stability/availability information when computing retransmission intervals, etc. Doing so would indeed be an aesthetically distasteful layering violation, but it would enable much more optimal behavior in a lot of wifi and cell network scenarios, as you've said.

It will probably happen eventually, at least in Linux, after a few more years of commercial pressure to make it suck less.

Honestly, it's almost easier these days just to virtualize a linux guest on your MBA/rMBP/whatever windows box and let the host handle the wi-fi.

It only gets worse on headless/embedded/arm machines. The RPi list of Wi-Fi adapters gives something of a picture:


Note the number of ones which require some additional configuration steps, downloads, patches, or other monkeying around.

And don't even get me started on connman and wicd.

This is pretty easily avoided if you just buy an adapter known to work on Linux with only free software. The driver for this is included in the kernel.


wicd is pretty dead. It uses deprecated kernel features that will be removed at some point and with no maintainer for wicd, it will just stop working.

Right, that's the main issue with wicd: it's a dead end, but the usability of its curses text UI is still way nicer than connman's. So you get a devil's bargain between shipping something nice to use that's eventually going to break everywhere vs. something faster, smaller, and newer which is user-hostile but is being actively developed (i.e., it still has some weird bugs not really worked out yet).

Would you care to list those deprecated features, or provide a source? I know wicd is a dead or dying project, but I didn't know it was that critical.

I bet half those problems would go away if Rasbian Wheezy, with its ancient 3.2-based kernel, wasn't the default for many R-Pi users.

Raspbian Wheezy comes with kernel 3.12 (see http://www.raspberrypi.org/downloads/)

The first laptop I got at my current job is now sitting unused because of unreliable wifi in Linux and Windows 7. Due to BIOS restrictions work just got me a new laptop instead of messing with trying to replace the card.

It's incredibly sad how shoddy modern wifi can be, and a testament to the importance of networking that flaky wifi can render a computer useless.

A main reason to use Apple products is that the hardware and software are always bundled and guaranteed to work together without any additional tweaking or messing around. You turn it on and it just works.

Both Windows and Linux suffer from the problem of attempting to support n different hardware configurations in a decentralized fashion, and neither has solved it very well.

Actually, I know people with MacBooks who regularly have problems with hibernation instability.

And the whole reason for having a choice in hardware configuration is that one-size-fits-all doesn't work too well in the real world. It might be fine for all the coders in San Francisco writing iPhone apps and Rails websites, but there are many factors that come into play for other people.

Some people can't afford an expensive laptop, want better specs, want to play games, want a touchscreen, want a full keyboard, etc. There are plenty of reasons for not going with whatever Apple has decided upon from on high.

Pretty sure you can't ship a product that doesn't work on Windows. I'm just saiyan'

I ended up blacklisting the built-in card on my ThinkPad X220 because it was too flaky to rely on:

  $ cat /etc/modprobe.d/blacklist-local.conf
  blacklist rtl8192ce
I use a USB adapter instead (SMCWUSB-N2), which, besides providing wireless internet, is a fine reminder to just get a MacBook next time.

Yeah, the default card in my X220 was a piece of crap. I got one of these for $15 and replaced it myself (took 10 minutes) and it's been muuuuch better.


(I remember this was actually a customization option when I bought it and I stupidly didn't pick it. So you don't have to get a macbook, just read carefully when you get another thinkpad.)

It truly was an option. Patting myself on the back now for taking half an hour to research it two years ago. Although one can disassemble X200 with (relative) ease.

Only get a MacBook if you're hanging on to the USB adapter. The Broadcom chips they come with are terrible (and so is its EFI boot.)

That worked for me very well ... until the USB gizmo fell out. :(

Ironically, I quit OS-X because my macbook was even worse.

How is it that for the last decade I've been running Linux on whatever decent machines I was primarily using and whatever random garbage I could get my hands on and I haven't had any of these problems that people are perpetually complaining about?

Are these people using some kind of exotic hardware? Am I just really lucky?

Just FYI, this response has about the same validity as dismissing online anonymity/pseudonymity concerns as a non-issue: simply because you're not experiencing a problem doesn't mean others aren't, and it doesn't make their frustrations any less valid.

I've used Unix for over 25 years, Linux for over 17. It's my platform of choice, I very, very rarely use anything else.

And my Thinkpad T520i listing a "03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]" under lspci and running Debian GNU/Linux jessie/sid has _never_ had reliable WiFi, and I run it essentially 24/7 with a Cat5 cable plugged into hardwire networking.

I've tried network manager, wicd-cli, wicd-curses, and other tools. I can see networks. I cannot connect to them. Plugging a cable in solves the problem far faster than futzing with a nonintuitive, low-feedback/diagnostics interface.

So yeah, you're probably lucky.

And with that impetus, I've just set up ye olde /etc/network/interfaces configuration, and I've got a WPA2 connection running. One less cord to trip over.

Why I could never get network manager nor wicd to work ... I don't know.
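
(For anyone wanting to do the same: a minimal Debian-style /etc/network/interfaces stanza for WPA2 looks roughly like this. SSID and passphrase are placeholders, and it relies on the wpasupplicant package's ifupdown hooks.)

```
# /etc/network/interfaces (excerpt)
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid myhomenet
    wpa-psk mysecretpassphrase
```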

Yay, I'm lucky too. Not only that but 2 out of 3 MacBook Pros in my office do very badly with WiFi - slow to connect (my android phone is 10x faster) & they drop at the first sign of a flaky signal.

The most robust laptop I've had on wifi was a Samsung running Fedora 20. Very fast to connect and never dropped. The pre-installed Windows 7 dropped continuously and often failed to connect.

I suspect the "issue" may be very hardware dependent.

For my desktop use I just use a TP-Link router in "client mode"; as far as my machine is aware, it's on Ethernet. For my laptop, I buy cheap TP-Link wireless USB adapters, since its internal wireless card is crappy regardless of OS. Sometimes a cheap USB dongle works better than a "top notch" internal card. I would still agree that wireless on Linux is configured terribly. Sometimes you hit bugs whose fixes you have to hunt down on the internet, which you can't get on if your Wi-Fi isn't working at all. You'd think the thing to get the most attention on a Linux OS would be anything related to networking, the most crucial feature of any OS these days.

I've never been impressed with WiFi in general.

Glad to see others in this thread reflecting a similar sentiment.

I still prefer a wired connection, where possible. Not because I like wires, but because it is more reliable and the protocol is easier to understand.

My regular laptop with Win7 gave up the other day and I have been attempting to rebuild an old laptop with Linux as a temporary solution. I bought a cheap usb wifi dongle ... and lo and behold, support nightmare ensued.

Eventually, I realized that I didn't make a right choice in buying that tiny dongle. So now I am on phase two of rebuilding old laptop with Linux, with a different brand of usb wifi (double the price of the first cheap one).

I am not a Windows fan anymore, but everything in the Windows world just works out of the box. And what's with Linux looking to install upgrades for its OS on a daily basis?

Device manufacturers write drivers for Windows. They don't write drivers for Linux. So it's probably (mostly) the fault of the manufacturer.

Another thing is that Windows has an abstraction layer called NDIS that network drivers talk to. This abstraction is complete enough that, with a compatibility layer for NDIS, you can usually use Windows drivers directly on Linux. The project is called ndiswrapper: https://en.wikipedia.org/wiki/NDISwrapper Edit: to be specific, maybe it would be helpful if Linux had a similar stable abstraction for drivers to use?

As for the updates, that depends 100% on your distro and your own settings. If you're on Ubuntu, you can just uninstall the update-notifier, and/or edit /etc/apt/apt.conf.d/50unattended-upgrades to install updates without notifying you all the time.
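
For reference, the relevant knobs live in APT's conf.d directory; something along these lines (the origins list varies by release):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};

// and in /etc/apt/apt.conf.d/20auto-upgrades, to turn the periodic run on/off:
// APT::Periodic::Unattended-Upgrade "1";
```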

Yup, ndiswrapper was part of the support routine. But even when things work, they don't work optimally, as the author is correctly claiming. On the flip side, I should also mention that on another, newer laptop with an embedded Wi-Fi chip, I never had any Wi-Fi problems. Ultimately, I believe, Linux is suffering in reputation compared with Win/Mac due to its multiple flavors and plethora of development end points. The lack of manufacturer support is just one of the issues.

This. Wifi is the #1 reason by far that I ditched my $800 ThinkPad to switch to a $3500 MBP. (to still be in a unix environment)

On my thinkpad, with one of the known, supported wifi chipsets, wifi would work about 8-9/10 times.

But because I'm doing web dev stuff, those 1 or 2 times would basically brick the laptop for doing any kind of productive work. And that's not worth any kind of savings or effort....

afaik it's a driver problem first, before a linux wifi problem, but really I have no idea why it was working or not working.

But if anyone is out there listening, this is how much the problem is worth to me- roughly $2500...

Wouldn't a simple USB WLAN adapter solve your problem in the 2/10 case? These things cost a couple of dollars and are smaller than a USB stick.

If you're using NetworkMangler, God help you. Shit's never worked right for me... even connecting to my home wifi it gets way fewer bars than it should, and keeps making and breaking the association.

Using just wpa_supplicant and dhclient, I've had far fewer problems, particularly with Intel wireless chipsets.
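
For anyone wanting to try the same, the bare-bones setup is roughly this (SSID and passphrase are placeholders):

```
# /etc/wpa_supplicant/wpa_supplicant.conf -- minimal WPA2-PSK example
ctrl_interface=/run/wpa_supplicant
network={
    ssid="myhomenet"
    psk="mysecretpassphrase"
}

# then, as root:
#   wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
#   dhclient wlan0
```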

Linux would benefit from a bit of expectations management. Some of us neither want nor need a less expensive Windows, and Linux in general tends to function a whole lot better if you don't treat it as such.

I've been using NetworkManager for years with few complaints. I love its VPN support and have used it consistently at a couple jobs.

The moral: Anecdotes aren't very useful.

They can be when they are in abundance.

The plural of anecdote is not data, though.

> The plural of anecdote is not data, though.

Of course it is.

What do you think data comprises? It's just an aggregation of anecdotes.

Are you really sure? Data is supposed to have been collected by some method, not just by waiting around in forums for complaints about something.

Thank god you use an Intel card. When I bought a ThinkPad, it came with a Realtek card. I'm still connecting through ethernet; I never managed to make it work.

I have a thinkpad wired into a wifi bridge. Since it almost never moves nowadays, it's a pert-near ideal setup and sure beats the naff kernel support for the thing's internal rtl8192se chipset.

Ah, interesting. I barely move my ThinkPad, too, so it's not that big of a deal, but the whole situation is still very sad. Someone mentioned here that a GitHub user was maintaining a driver for the card he was using, so I decided to check whether someone had a patched version of the Realtek driver for the card in my computer, and it seems that there is: https://github.com/FreedomBen/rtl8188ce-linux-driver

You might want to check out https://github.com/pvaret/rtl8192cu-fixes and https://github.com/dz0ny/rt8192cu

It can be remedied somewhat by adopting two strategies:

- Cherry-pick hardware, in this case cards

- Use very recent software stacks

I just plugged in a Huawei 4G LTE dongle. I spent some time making sure this particular card worked, and discarded many others. I'm running the most recent kernel, systemd, udev, etc. It was a plug & play experience. If I had proceeded otherwise, it'd have been a nightmare.

Not much better on OS X ... my brand new laptop with the most current OS X is totally unreliable about connecting to wi-fi on wake from sleep. Even on reboot half the time it fails. Maybe different cause from that highlighted by the OP's article ... but in the end not much better situation.

Try a different router. This helped me a great deal.

did you try turning off bluetooth? weird, but it worked for me on OS X to make connecting to wifi much faster.

Imho wifi on Linux should also be taken care of by the Core Infrastructure Initiative - wifi is critical today


  Why does my Intel card consistently pick 2.4 GHz over 5 GHz?

"Overall the 5GHz has shorter range compared to the 2.4GHz. It is recommended to select the 2.4 GHz if you are using computers and wireless devices to access the Internet for simple browsing and email. These applications do not take too much bandwidth and work fine at a greater distance.

However, if you are in a place which is crowded with more wireless signals, it is advisable to use the 5GHz network to avoid interferences. Furthermore, the 5GHz is most suited for devices which require uninterrupted wider bandwidth for video/audio streaming or multimedia content."

  I can sit literally right next to an AP and get a connection on the lowest basic rate
It's possible that Linux is more likely to reduce your rate in the face of increasing errors and noise, whereas other OSes/drivers might ignore the noise/errors and keep the rate the same, regardless of literally not having the advertised throughput.

  In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
Neither your DHCP client nor your WPA client is optimized for speed on Linux; they are optimized for reliability.
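
You can trade some of that reliability back for speed in dhclient's config; the values here are illustrative, not recommendations:

```
# /etc/dhcp/dhclient.conf (excerpt)
timeout 10;           # give up on a lease after 10 s instead of the default 60
retry 10;             # retry sooner after a failed attempt
initial-interval 1;   # first retransmit after ~1 s instead of the default 10
```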

  it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available
Yeah; your wireless client isn't going to change APs until it loses signal.
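
If you're driving wpa_supplicant directly, you can make it roam proactively with a background-scan setting (requires a build with bgscan support; the SSID, key, and thresholds here are placeholders):

```
# in the network block of wpa_supplicant.conf:
network={
    ssid="myhomenet"
    psk="mysecretpassphrase"
    # "simple" bgscan: scan every 30 s while signal is below -70 dBm,
    # every 300 s otherwise, and roam to a better AP when one is found
    bgscan="simple:30:-70:300"
}
```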

Most of these complaints are industry issues that have different proprietary fixes intended to appease consumers, but none of them are recommended or required by the industry.

> "Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions."

point-in-fact, this is how 802.11 works.

> And separate ESSIDs for 2.4 GHz and 5 GHz.

I'm not sure if the author thinks this is a good idea, or a bad one.

I understood it that he just wants the one SSID and let his devices connect to the appropriate one. But his Linux system will always try the 2.4ghz first, so he ends up creating something like "mywifi24" and "mywifi5".

Personally, I would prefer doing the two separate names so I can know what I'm connecting to. Being a radio guy, I see it as two separate bands, two separate physical radios. I don't see a point in trying to give them the same name.

Of course, for my home environment, I'm pretty much using just 2.4, and I give all my access points the same SSID so I can "roam" between them. I suppose someone could want to be able to roam between 2.4 and 5ghz (I tend to use 5ghz for backhaul).

I never was able to make my Ralink-based card (D-Link DWA-160) work in 802.11n mode. It only works with b/g. I suspect it's a driver limitation (rt2800usb), but I never got any response from the developers about it.

Maybe because I chose a laptop specifically for use with Linux (ThinkPad T530), but it works perfectly. Everything. Even WiFi, sound, suspend, Fn keys, fingerprint reader, everything.

> between the Intel cards I've always been using

That's probably a big part of the issue; my experience with Intel wireless on Linux hasn't been as great as with, say, Atheros chipsets.

Plus, NetworkManager. Good God is that terrible. To put it in perspective: on one of my laptops (a PowerBook G4 running OpenBSD), I basically run the following by hand to connect to a wireless network:

    sudo ifconfig bwi0 nwid $MY_SSID
    sudo ifconfig bwi0 wpa
    sudo ifconfig bwi0 wpakey $MY_WPA_KEY
    sudo dhclient bwi0
Even that - on a crappy Broadcom wireless NIC, no less - is faster and more reliable than NetworkManager connecting with a good Atheros card.
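
(On OpenBSD the same thing can be made persistent with a hostname.if(5) file, something like the following; the interface name and credentials are placeholders:)

```
# /etc/hostname.bwi0
nwid myssid
wpakey mypassphrase
dhcp
```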

I've got this great thing where it doesn't work when I unhibernate .. well, it's OK since if I shut the lid and open it again then it works.

Literally in the /etc/rc.local of the machine I'm on now:

    /usr/sbin/service networking stop
    echo "STOPPED"
wpa_supplicant is no picnic, but at least I have more of an idea of WTF is going on.

What distro are you on?

On the ones I'm familiar with, the network manager service is called 'NetworkManager' (inc caps) so I need to do 'service NetworkManager stop'

Ubuntu 13.10. NB the process name is still "NetworkManager" despite the string I use to /usr/sbin/service

when linux works, it's wonderful; when not, it's a pain in the a*^

Did you _ever_ try to fix a Windows problem?

this thread (not the article itself) is one fine example of the decline "hacker"-news is going through

if you look at "linux" as a community of users and developers there is lots of get stuff for free attitude and not enough people capable of and willing to work on the tasks waiting (open source drivers for mobile GPUs anyone?)

but hey, aren't we all busy on making lots of $$$ with our übercool startups these days? call it the sad state of hacker ethics or continue to improve the free software world day by day .. choice is yours

