One problem is that WiFi is completely opaque, especially but not only on Linux. You get no low-level information and nothing to debug with. It just connects and you get some bars, or it doesn't. I never know where to start when debugging a bad connection, and I wouldn't know where to start if I wanted to improve Linux WiFi.
For example, sometimes I can see a network, but can't connect. Why? I'd like to see something like "sent 100 low level packets, checksum failed on 88 of them, disconnecting".
Or I'd like some way to see whether receiving or sending is the problem - do I get garbled packets, or do I get good ones, but no answers to the ones I send.
Sometimes I know a connection is WPA2, but it stubbornly tries another encryption method. Why? Does the AP suggest it, or is it my configuration? Sometimes I can't enter a text password, it only accepts a fixed-length hexadecimal string (happens on Windows a lot). Again why? There is no good central low-level log file or debug tool that lets me see what is going on.
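For what it's worth, you can get part of the way there on Linux with the nl80211 tooling. A rough sketch, assuming an iw-capable driver and an interface named wlan0 (names vary per system):

```shell
# Link-level status of the current association (SSID, signal in dBm, tx bitrate):
#   iw dev wlan0 link
# To see whether *receiving* is the problem, capture raw 802.11 frames in
# monitor mode; the radiotap header flags frames with a bad FCS (checksum):
#   sudo ip link set wlan0 down
#   sudo iw dev wlan0 set type monitor
#   sudo ip link set wlan0 up
#   sudo tcpdump -i wlan0 -e -y IEEE802_11_RADIO
# A quick scripted check of signal strength, parsing `iw dev wlan0 link` output:
signal_dbm() { awk '/signal:/ {print $2}'; }
printf '\tsignal: -54 dBm\n' | signal_dbm
```

The per-frame retry and bad-FCS flags in the radiotap header get you at least into "sent N, M failed" territory, though nothing assembles it into a readable summary for you.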
Looks interesting, but which devices and drivers does it apply to? I have no expectation that this would work with every common WiFi interface in current laptops (to say nothing of typical home gateways, with their embedded Linux distributions and custom proprietary drivers).
I've wondered the same thing. Up until now I've avoided digging deep into my issues -- I've never found good docs, googling leads to misleading forum posts with a generally bad signal-to-noise ratio, and the WiFi stack is nested so deep that it takes a lot of knowledge to follow. But I guess I assumed that the underlying stuff was generally correct and written by experts in the field. Maybe it's time a motivated layman like me dug in and started asking dumb questions.
It's opaque on Linux, but that's not the big problem. It's opaque on OS X and Windows as well, but nobody cares because it works.
I wonder if we'll eventually get the kind of stability that we currently have with ethernet drivers... once the speeds are high enough that we stop upgrading, once the chips go into long-term production instead of changing every few years, once things settle down...
I disagree. It's possible to get advanced output from various command line tools in Linux, and it's possible to get networking information. It's much more difficult to do that on other OSs, especially Windows. I loathe debugging network troubles in Windows, because there is essentially no information. OS X has most of your standard tools, though some are in odd places, e.g., printing the routing table isn't done with `route`.
Furthermore, I care. My 2013 Macbook Pro is terribly slow to associate to even nearby APs, and drops the connection often. I have no idea why, because it is opaque: how do you list nearby APs? their signal strength or encryption type? I didn't learn this until just now, and it's,
Seriously. There's no way I'll remember it, and if you're reaching for it, you likely can't Google it. And it appears to be deprecated. It's a much shorter command in Linux, and in general I find it easier there to determine whether the problem is the network connection or a DHCP failure. IIRC, Network Manager's icon changes — admittedly not much — depending on which stage it's at, so even without command line tools, I have some idea what's going on.
OS X has the troubleshooter dialog, but that's never been able to fix the problem. (In general, I feel, those things never do. They also never tell you what they're doing. For all I know, they're just progress bars and timers.) WiFi off, WiFi on fixes a lot of problems.
Open Wireless Diagnostics in /System/Library/CoreServices/Applications, then click Window -> Utilities. Gives lots of detailed info, frame capture, logging, etc.
airport -s is incredibly important when setting up roaming correctly with heterogeneous APs. You absolutely want the SECURITY column to be identical, or devices won't roam.
> There's no way I'll remember it
I have history(1) and a symlink in a toolbox git repo.
> you likely can't Google it
"!g OSX airport command" works fine
> route
I found netstat -r to be the most "Unix-portable" way of querying the routing table (the Windows route is a peculiar beast anyway). route is deprecated on Linux in favor of ip route.
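A quick cheat sheet of the equivalents (the gateway and interface in the sample line are made up):

```shell
# Portable-ish (Linux, BSD, OS X):  netstat -rn
# Modern Linux:                     ip route show
# OS X, for a single route:         route -n get default
# Pulling the default gateway out of `ip route` output:
default_gw() { awk '$1 == "default" {print $3}'; }
echo "default via 192.168.1.1 dev wlan0" | default_gw
```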
Of course. The parent was referring to a situation when someone lacks internet access, due to a networking problem they are trying to fix, rendering them unable to use Google to find the command.
On OSX, just press option and click your WiFi icon. You'll see the signal strength, channel, speed, encryption type, etc. of the AP you're connected to, expanded in the dropdown menu.
. chooses 2.4 over 5
. not automatically connecting to my mobile wifi even though I use it all the time
. simply tries and fails to connect over and over and over... I have to stop wifi and start it, and then it just works.
My 2013 Air has had three updates including firmware that mention wifi fixes but it still has problems.
OTOH, my Fedora 20 install has been behaving beautifully (though, to be fair, in far less challenging environments).
Given that wifi has been constantly evolving since it started, it's not going to settle on chips any time soon. 802.11ac has just been released, prompting a new round of chips.
Wireshark is a good tool for inspecting 802.11 frames, but it's still a pain. If you do go down this path, I highly recommend adding a capture filter for your adapter's MAC, so you don't get flooded with management frames. Filter string: wlan addr 00:00:00:00:00:00.
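The same thing works headless with tshark (ships with Wireshark); the interface name and MAC below are placeholders:

```shell
# Capture on a monitor-mode interface, limited to one station's frames:
#   sudo tshark -i wlan0 -f "wlan addr 00:11:22:33:44:55"
# Small helper to build the capture-filter string from a MAC address:
wlan_filter() { printf 'wlan addr %s' "$1"; }
wlan_filter 00:11:22:33:44:55
```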
NetworkManager doesn't relay any error information to the interface at all, but its log files are quite detailed.
At least on systemd-based distributions (tested on Arch), running

    journalctl -fu NetworkManager

will show you the live logs of NetworkManager interleaved with the logs of its subprocesses; this has lots of juicy information that I've found quite useful for debugging.
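If the full firehose is too much, the device state-change lines alone tell you where an association stalls. A sketch — the log line format below matches what I've seen from NetworkManager logs of this era, so treat it as an assumption:

```shell
# Only the state transitions:
#   journalctl -u NetworkManager | grep 'state change'
# Extract "old -> new" from a state-change line (sample format assumed):
state_change() { sed -n 's/.*state change: \([a-z-]*\) -> \([a-z-]*\).*/\1 -> \2/p'; }
echo "<info> (wlan0): device state change: config -> ip-config (reason 'none')" | state_change
```

Seeing it stall at, say, config -> failed vs. ip-config -> failed at least tells you whether it's the association/WPA handshake or DHCP that's dying.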
What's good for you might not be the best option for the majority of people.
Making things that "just work" means handling complexity for users, and leads to
somewhat complex code. Debian's static network configuration is great for
servers and okay for desktops that never move. But it's nothing that you
should put on laptops operated by enterprise users (the people that pay for
Linux desktop development). Imagine users calling support from a Starbucks,
trying to edit the wifi config files.
Handling complexity by piling more complexity on top is sadly very common, so common that people think it's inevitable. But it's not the only way. The other way can sometimes be harder, and it tends to take more thought, but in the end you may actually get rid of the complexity rather than just hide it.
Complexity can be caused by the developers of the software, and in that case it is unwarranted. But my assumption is that the NetworkManager developers are reasonably competent and make good decisions in all the trade-offs they have to make.
In their case, they need to support lots of features and make all of them work seamlessly: multiple Ethernet, WiFi and VPN interfaces, IPv4/IPv6 configuration, modems, firewall policies, and so on. That's n^m different states. To get rid of this complexity, you'd need to remove options at the bottom of the stack, e.g. only allow communication via a serial port at a fixed rate.
This thing works. I just buy them in bulk; I can get one for $13 off eBay. Any device I run Linux on, I replace whatever half-size mPCIe card it has with one. I don't fuck with the Realtek or Broadcom chips I get, because ath9k is all-open, no proprietary firmware, and works out of the box. Drivers are in every kernel since the 3.x series started, and the Bluetooth is just a generic HCI Bluetooth adapter over PCI; it shows up as a USB device and works with BlueZ no problem.
My best speed on 5GHz with one of these has been around 20 MB/s. Across my house I usually get 8-10 MB/s. I think these chips are supposed to get much higher average throughput, but it works well enough that I don't care. Works under Debian, SUSE, Fedora, Arch, Mint, Mageia, even Slackware.
Ath10k makes me mad, since they are now shipping firmware blobs. Again. And they were doing so well.
I trust these chips so much that I use them when doing IT support and advocating Linux to customers. I've been able to join every WiFi network I've thrown them at, from ancient 802.11a routers to 802.11ac ASUS routers advertising 2 Gbit/s of throughput.
Point is, different vendors have different quality. The ath drivers have been great for me, but I've only ever bought this specific chip because of the value proposition.
Much appreciated tip, thanks. I've known some atheros chips had really good drivers, but this is more specific than my vague impressions.
I suppose a major cause of the situation is that hardly anyone buys a laptop with the WiFi chip as their first concern. Most laptops for sale don't clearly show the WiFi chip, unless you go into the online configurator, where you might be able to pay for an upgrade (yes, I'm one of the weird people who considers the WiFi chip, having once worked at a WiFi AP vendor). I also doubt it's possible to replace the WiFi chip on a MacBook Air or Retina. So it's rare to have "cult favorite" chips of this type that enthusiasts can gravitate to; usually we just deal with whatever we end up with.
It relates back to a real problem in the Linux ecosystem today - the assumption Windows Computer == Linux Computer. A false assumption, especially when you get into the realms of device support for specific motherboard features, wifi cards, expansion cards, etc.
If anything, the real problem is the lack of an easy-to-reference directory of Linux hardware from the buyer's perspective, rather than from the owner's perspective. I.e., "I want to buy <insert part> (or <notebook>) that supports Linux, all the parts manufacturers provide open drivers or documentation, and all the parts are compatible."
The lack of such a resource probably turns a lot of potential Linux converts off.
Thanks to the previous comment I just ordered a card on eBay for under $10 to test my theory: any laptop with a PCIe WiFi card can be "upgraded" to an Atheros card. And I might even get a few bucks back, since laptop-specific cards usually cost more on eBay :))
In theory, if PCIe cards are completely interchangeable, I should be able to just swap them in and out. Will see if it works out.
I stopped using NetworkManager years ago, I just use wpa_supplicant and dhcpcd directly (well I use the systemd services for these so they're started automatically). I also always name the 2.4GHz and 5GHz networks differently, and connect explicitly to the one I want.
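In case anyone wants to replicate this setup, roughly (the interface name, paths, and systemd unit names are the common defaults; adjust to taste):

```
# /etc/wpa_supplicant/wpa_supplicant-wlan0.conf -- values are examples;
# `wpa_passphrase "MyNet-5GHz" "your passphrase"` generates the network block.
ctrl_interface=/run/wpa_supplicant
network={
    ssid="MyNet-5GHz"
    psk="correct horse battery staple"
}
```

Then `systemctl enable --now wpa_supplicant@wlan0.service dhcpcd.service` and that's the whole stack; no NetworkManager involved.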
Yes, it would be great if NetworkManager did the right things automatically, but the task might be impossible due to hardware and driver quirks. WiFi chips are among the buggiest chips in computers today. Really, the chips themselves: I've seen ridiculous workarounds in drivers for multiple vendors' chips. The drivers are complicated and buggy too, and the Linux drivers don't get quite enough full-time attention from the chip vendors to work solidly. Those Windows and Mac WiFi driver teams are pretty big.
If you haven't used Network Manager in years, I'd suggest taking another look. It (like PulseAudio) was shipped far too early, souring early-adopters on its use.
Later kernels have (somewhat) cleaned up the wireless driver interface so these days I don't have problems with day-to-day usage of Network Manager.
If you aren't blessed with luck, one might say that PulseAudio still requires you to be a hacker to get sound. Over and over again. Every time you think it's fixed for good, it'll prove you wrong.
On the large majority of laptops I've installed Ubuntu on recently, sound has just worked out of the box, so here's to hoping my lucky streak continues!
Sure, sound works fine for a bit, but wait until you want to plug a USB headset in. First time, great. Second time, fail. Log out, log back in, headset works again. Lame!
Add any external sound hardware, use an optical or S/PDIF port instead, add some speakers. It'll break sooner or later, and in my own experience it tends to be whenever my personal computer gets away from the 'typical' desktop profile.
Try plugging in a headset and getting sound out of it, then unplugging it and getting sound from the laptop speakers, without going all hacker on it, or for example just try to use your Bluetooth headset. Yeah. Nope.
Or if you're even less lucky, PulseAudio still requires you to mute the "system sounds" thing because once skype starts, you get persistent noise out of the audio jack, after which you try to email Lennart following the guide on his page, only to receive no reply.
+1
I tried for an hour to get PulseAudio to work with Bluetooth and A2DP and use my phone as a sound source. Decided I wasn't going to waste time on this. Gave up and ended up buying an external A2DP box off Amazon and plugged it into my Line In port.
All of my computers run Linux, but that's because I can deal with most of the flaws. This is definitely part of the reason why desktop Linux has never taken off with the general public.
Also, the people who manage Linux distributions seem to absolutely love suddenly getting rid of things that work and replacing them with incomplete alternatives, without any kind of migration of user data and settings. Those alternatives should be pushed out as developer previews until they either
(1) match each and every feature of whatever they are replacing AND capable of importing all settings
OR
(2) warn the user months ahead of time with a list of features that are going to disappear in the replacement
OR
(3) provide an easy, 1-click option to let the user continue using whatever they were using as their default, with continued support and updates
Years ago it didn't take me that long to get PulseAudio set up to play from my HTPC through my laptop (so that I could use the headphone jack on the laptop). That said, it wasn't plug-and-play. I didn't use it too often because there were too many moving parts every time I wanted to set it up (i.e. set up the laptop to receive audio, then get the HTPC to connect to the laptop and send audio... then disable it all to get things back to normal afterwards).
What would have been preferable would be for the HTPC to advertise itself as an audio source, and for the laptop to be able to list sources and let the user select one.
I had been using wicd on my old laptop for a few years. Then, when configuring a new computer, wicd failed to work on it. I tried probably everything, even the lowest-level CLI tools, but the connection kept failing and, frustratingly, with almost no information about what was going wrong.
Then I installed XFCE to have a temporary GUI, and its NM connected to WiFi. Now I'm using NM, and I still have no idea why the other tools did not work for me.
Years ago, I had issues with several hotel wifis on the east coast. I found that completely disabling NetworkManager and manually connecting was the only work-around. But NetworkManager had to be disabled from startup. If it was started at any point, even if the service was stopped (and a manual connection was attempted) I couldn't connect to the network.
Contrary to what other commenters are saying, I've been using NetworkManager on my ArchLinux Thinkpads for years and it works great. (And I don't even have a desktop environment, I have DWM, so I am usually a "do it yourself" person.)
Whereas I could never get things working well without NetworkManager, no matter how much I messed with the pieces.
My experience is, spend hours messing with configuration shit or just let NetworkManager do all that automatically for you, which in my case, it does well.
Network Manager uses wpa_supplicant underneath, and delegates roaming and AP selection to it. So I don't know how what should be a dumb program on top gets such a bad rap.
Not that I don't use wpa_supplicant directly anyway, but when I have used NM, it seems to do the same job.
I use wpa_supplicant on a Raspberry Pi. It works fine even with a bad WiFi signal, though I have trouble with the Samba client: every few weeks, after dozens of WiFi reconnects, it fails to reconnect to the SMB server. Tracking down the issue and filing a bug is a bit complicated (automated embedded device, WiFi signal noise, etc).
I always had troubles with the managers too, and now use wpa_supplicant directly combined with systemd-networkd.
When a connection is established networkd automatically starts dhcp. I think it connects faster, and I've had no problems with wifi since I switched.
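For reference, the networkd side is a tiny unit file; a sketch assuming the interface is named wlan0:

```
# /etc/systemd/network/25-wireless.network
[Match]
Name=wlan0

[Network]
DHCP=yes
```

wpa_supplicant brings the link up, networkd sees the carrier and runs its built-in DHCP client — no separate dhcpcd needed.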
Same here. I kind of never got NetworkManager to do any good, so I just gave up; mii-tool, dhcpcd, iwconfig and wpa_supplicant are well-used tools, and basically it works fine, when I actually have the drivers. These tools don't bail out on me or try to hide complexities behind more complex abstractions like NetworkManager does. If I'm going to be troubleshooting anything, it's going to be the networking issue, not the dbus-to-NetworkManager interface or policy files.
So basically you need four different tools to set up networking? Well, it sure sounds like the complexities are there and in plain sight then. I can't help but wonder though if it isn't possible to get rid of some of these complexities and make life simpler?
It works OK and just wraps ifup and iwlist. It also reads and writes /etc/network/interfaces-style config, so you can see what's going on under the hood.
But I agree. Getting it all work the first time (or when I encounter a new type of network) is just ridiculous.
I don't know, possibly. I see now that mii-tool is packaged together with ip. So really it depends; it is different layers, different tools, but they could still all be packaged together and share one syntax. wpa_supplicant and dhcpcd should go under the net-tools/ip package too.
ip link should also show the link-speed information from mii-tool, and ip addr should take a dhcp argument instead of requiring a separate dhcpcd. Then merge wpa_supplicant into ip; perhaps make an ip-wlink that requires an SSID/passphrase to bring the link to the "up" state.
So wpa_supplicant, dhcpcd and the iwconfig/iwlist/scan tools should be merged into net-tools and given the same syntax as the rest of the package.
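FWIW, part of that merge has already happened: `iw`, the nl80211 replacement for iwconfig/iwlist, uses ip-style subcommand syntax. A sketch, interface name assumed:

```shell
# iw dev wlan0 scan             # replaces iwlist wlan0 scan
# iw dev wlan0 link             # replaces iwconfig wlan0
# iw dev wlan0 connect MyNet    # open networks only; WPA still needs wpa_supplicant
# Pulling SSIDs out of `iw ... scan`-style output:
ssids() { awk -F'SSID: ' '/SSID:/ {print $2}'; }
printf 'BSS aa:bb:cc:dd:ee:ff\n\tSSID: CoffeeShop\n' | ssids
```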
But yeah, even then you need to learn the layers and set them up. In my opinion Windows got it wrong: its user interface is horrible, and people there often complain about networking problems which are really UI problems or bugs in Windows/drivers.
This problem is classic: networking is a stack, and that's a fact, and the best tool/UI is one that lets you dig through that stack layer by layer. On Windows it's all or nothing.
I think a big portion of the problems Linux has these days is how unbelievably bloated and convoluted distributions have become. Every single function has three, four or more ways to do the same thing, often with different results. When trying to get something working I never know if I should run the /etc/init.d script, restart the service manually, edit a config file, use the command-line config utility or use the GUI settings option. Sometimes you edit a config file and get the thing to work, only to find out some other utility will overwrite it on the next reboot. Other times you get strange errors, or just nothing at all, because it's a deprecated method and you should have used the brand-new tool-du-jour instead. It's a mess.
I tried setting proxy configuration so that apt-get would work with my employer's proxy. I put my (plain-text) username and password into a dozen different configuration files before it finally worked. Then I changed my password and couldn't remember all the places I had put my credentials. I ended up having to reinstall the whole OS because I couldn't figure out how to undo the proxy settings that took me forever to figure out in the first place. Skipping over the fact that I had to put the password in plain text.
I had similar problems with Linux. This is why I now have a rather long text file in a Dropbox folder in which I write down all the problems/solutions/tweaks I had to deal with. If the same thing pops up again after half a year, I can just look it up.
Try Slackware, http://www.slackware.com/, none of that auto-magical stuff of the other distros, with it you control your system instead of your system controlling you.
It's for this reason that I love openSUSE - or, more specifically, YaST. Thanks to YaST, you don't have to worry about all the crap that goes with getting a service to run and be configured properly.
I have a good number of servers and desktops running openSUSE, and only once have I had to manually edit a configuration file (in order to work around some obscure DBus glitch on a coworker's personal desktop); everything else has been manageable with YaST alone.
Copy paste, printing and networking. Biggest usability misses of Linux.
I don't consider these technical issues, but usability issues. Things like insisting on keeping the install ISO at 750 MB leave seamless driver support off the todo list, because... hey, there are no drivers anyway.
I'm really surprised there is not a paid version of Linux with these features baked in. I would GLADLY pay for all these (as well as the royalties for mp3,flash, etc)
EDIT: does anyone know if systemd-networkd would make things better?
I don't have issues anymore with printing and networking. I understand that this stuff is per-vendor, so I usually do my research before buying (I have ThinkPads, and my printers are all pretty friendly). I do see many Windows computers fall over when printers connect to new wireless networks, though.
As for mp3 and Flash, isn't it as simple as enabling a non-free repository and letting it do its thing? If you use Ubuntu, the option to enable that is right in the installer. I've never had the issue since I switched to Linux full time (11.04).
In X, you have N selections referenced by atoms. Two atoms (PRIMARY and CLIPBOARD) are sometimes used interchangeably by various software, leading to all sorts of shenanigans where, for example, selecting and then middle-clicking pastes something that was Ctrl+C'ed elsewhere, or vice versa.
Additionally, X selections aren't buffers - they're handles used asynchronously. So, when you paste, if the source application is dead or has mis-handled its state internally, you don't get what you expect.
These behaviors are patched over by clipboard managers, which manage PRIMARY and CLIPBOARD interactions and immediately copy the selection into a buffer to make it long-lived. However, each desktop environment's clipboard manager has gradually expanded to include all kinds of strange environment-specific metadata possibilities (to enable, e.g., "Paste Special" options from a spreadsheet).
This has some nice little side effects, like I can use the highlight -> middle-click action to get around JS in the browser that is triggered on copying text.
Not that I'm a fan of the situation; Keepass2/Mono break my routine every day.
Interesting. I only noticed now that shift-insert in Firefox doesn't use PRIMARY but CLIPBOARD. I'm so used to middle mouse button paste that I never noticed. This is probably due to Firefox not using a native graphical toolkit.
Depends on the programs you use, and whether or not you're running a full DE or a bare WM. Most programs use Ctrl+X/C/V, Shift+Delete/Shift+Insert/Control+Insert, and/or highlight/middle-click in various unpredictable combinations.
I've found this to be less of a problem in modern desktop environments (especially KDE, in my experience), since most DEs nowadays feature their own clipboard/buffer management.
For me even that's annoying enough. On a mac it's consistently command-x command-c command-v everywhere, whereas in Linux I have to think "oh, I'm in a terminal now, use shift", and if you get it wrong things screw up (e.g. C-shift-v opens the inspector in Firefox if memory serves).
Sound isn't too great either. The BlueZ developers, for example, decided to drop HFP/HSP support in BlueZ 5.x, which means there is currently no way at all to get a Bluetooth headset (with a microphone) working on Linux.
Although it's not like sound is great on Windows either.
Vista was the first time I suddenly found myself unable to use my headphones on a Windows machine. Sound was forced to go out through the speakers. I eventually rebooted to fix it.
Your single piece of anecdotal evidence about a single non-updated driver doesn't really prove anything about "going backwards", or say anything about the merits of the new audio stack.
The irony of this to me is that I started using Linux back in 1998 or so (though I've used it off and on for the past 15 years, I'm far from an expert, just a hobbyist).
But in 1998, sound on Linux was almost impossible to get working correctly (that's hyperbole, but it isn't too far off).
It's gotten better, but not by much. Considering 15 years have gone by, that's faint praise indeed.
My macbook pro (running linux) has a strange issue where I have to run setpci -v -H1 -s 00:01.00 BRIDGE_CONTROL=0 before the backlight works (when using official NVIDIA drivers, nouveau works fine).
My T410 ThinkPad required some xorg.conf setting to get the backlight working. It really is strange how bad the NVIDIA drivers are with this backlight stuff.
The reason appears to be that Intel wants Android to work well on their hardware, and the new BlueDroid stack doesn't support any Intel-specific features. Instead of fixing the sketchy (details in article) BlueDroid code, they decided to make BlueZ a drop-in replacement for BlueDroid:
I haven't had trouble with printing in years; in fact, I can basically print on everything with no configuration, while people with OS X and Windows laptops seem to have lots of problems.
While this is another "well, works for me" kind of response, I've never had a problem printing from a laptop with OS X. Windows, yes. But OS X generally just finds the printer and goes, with no configuration necessary at all.
Aren't most Linux distributions still using CUPS, the system OS X uses under the hood?
(More on topic for the actual post here, the last couple of times I've tried to install Linux on a laptop getting wifi going has been a bit finicky, but usually so has getting things like booting into a GUI. I've attributed that less to Linux than to me trying to stick it on obsolete Apple hardware, though.)
I was just printing on someone else's computer running barebones Lubuntu. I found that going to localhost:631 (cups web admin) has all the settings needed to add the printer, which came up as "Detected network printers". I think the OSX one has a GUI frontend for that. I just tested the GNOME gui on my laptop, it doesn't seem to detect the printer but localhost:631 does.
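For anyone scripting it, the CLI equivalents of that web page (the printer name and URI below are placeholders):

```shell
# lpstat -p -d     # list configured printers and the default
# lpinfo -v        # list detected device URIs, like "Detected network printers"
# lpadmin -p office -E -v ipp://192.168.1.50/ipp/print -m everywhere
# Pulling the URI out of an `lpinfo -v`-style line:
device_uri() { awk '{print $2}'; }
echo "network ipp://192.168.1.50/ipp/print" | device_uri
```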
That's strange, since GNU/Linux and OS X both (last I checked) use CUPS for printing.
That said, it really depends on what printer(s) you buy. I've had wonderful experiences with HP printers, and tend to recommend them. Brother, Konica Minolta, and Epson printers work reasonably well with some tinkering and research in my experience.
Is there any sort of certification scheme for CUPS? Given that's the standard for Mac and Linux, I'm surprised companies don't make 'CUPS compliant' printers.
I see a lot of recommendations for Brother printers, but they seem to require quite an involved process to set up.
My Lexmark X4850 begs to differ. No drivers, doesn't work, period. Lexmark doesn't care, and apparently no Linux driver developers have this printer, so no one has made support for it yet.
There are no royalties for Flash. Well, apart from the yearly cap that Adobe pays.
Similarly, Fluendo have been paying the cap for MP3 decoding since 2005 on their open source decoder for Linux.
(MP3 decode should now be out of patent, as should most encode tasks but "intellectual property" likes to keep its boundary lines as vague as possible so you never know if you're trespassing and just pay out of fear/habit.)
Printing has also been pretty good for a while, so much so that Apple adopted the same solution in 2002.
I have only used Linux for the last 12 years. I use a ThinkPad in an office full of Macs, HP printers and 5GHz networks.
None of it works as seamlessly as on a Mac. I'm pretty comfortable compiling my custom kernels, so I'm more than the average Joe trying to setup networking.
There is still a reason why every Linux install asks you to explicitly select that you want mp3 codecs installed. Flash is not installed by default.
I don't know what the legal reasons are, but I'm willing to pay my share to not having to deal with it.
"There is still a reason why every Linux install asks you to explicitly select that you want mp3 codecs insralled." Ubuntu is the only installer I know of that does this...
> Copy paste, printing and networking. Biggest usability misses of Linux.
Hah. 5 years ago I switched my desktop from Linux to Windows because copy-paste suddenly stopped working. It became impossible to copy a URL from the terminal into FF's address bar. Then I said to myself I didn't have time for this s*t. I've never looked back since switching.
Concerning networkd, I don't think so. It's mostly geared towards (and I believe even originating from) CoreOS, and environments similar to it. Mostly for managing network devices in containers and virtualized deployments. Red Hat's been sponsoring Project Atomic recently.
Out of curiosity, what do you feel is bad about copy-pasting under Linux? It's inconsistent, but it's still the best copy-paste functionality I've experienced on any operating system. It'd be nice if all applications understood the dual clipboard, and if terminal applications behaved a bit better, but it's still by far the best of any OS as far as I'm concerned.
If you don't feel like doing the research to make sure that your hardware will work correctly under Linux, you can buy a computer with Linux pre-installed.
They'll ship with the distro of your choice preinstalled, and I've had a wonderful experience with their support (they once suggested a kernel upgrade for me, a few versions higher than what was shipping with my preferred distro). Every model except the UltraLap comes with a mini-screwdriver and encouragement to use it. The UltraLap is their competition with 'ultrabooks' though, so it's not put together as nicely as the others — no screwdriver there :(
They just test everything, and only send you stuff that works.
I discovered this for myself after converting my parents' 5-year-old PC from XP to Linux Mint.
As you can see from the bug tracker:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/852190
the kernel drivers are much buggier than the GPL'd Realtek drivers. But the official Realtek drivers don't work with newer kernel versions.
If those are really under GPL, nothing should stop you from converting them into a kmod using the kernel 802.11 architecture and pushing them upstream. I'd imagine that since it hasn't happened yet there is something else at work impeding it.
Converting them to the latest kernel driver interface is beyond both my skills and pvaret's rtl8192cu-fixes: his remarks on GitHub indicate it's a hack to just get them running, and that someone with Linux networking internals experience is needed for a proper port.
I'd be willing to test and file bug reports. Your 'fix it yourself' attitude is exactly the kind of response that gives Linux a bad name.
But "someone else should fix it" does not help either.
If you do not give Realtek a financial disincentive (i.e., by not buying their products) for failing to maintain proper Linux drivers (in-kernel ones are what I would consider proper), then nobody in the world has a financial incentive to make them work for you. If you are buying Realtek products, having them not work, and not demanding a refund for the lack of functionality, then nothing changes.
It is disingenuous to blame kernel developers for not fixing a hardware vendor's shitty driver when said hardware vendor is not paying them squat.
Realtek provided a rock solid, stable, GPLv2 driver. The "proper" driver is a piece of shit, requiring a reboot after a few minutes of use. They shouldn't have changed the driver interface, leaving people like my parents with broken hardware, if they don’t have the manpower to fix this sort of issue.
One of a handful of reasons I don't have Linux on my laptop anymore, even though I like so many things about Linux so much more. I'm not sure about this point, though:
> Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast.
I remember Linux being slow (and unreliable) at this, but OSX is pretty slow too, at least on my MBP. The OS that I've always had the best experience just with connecting to APs is actually Windows.
This is one of the reasons I've stopped running Linux 'bare metal' and instead always just use an OS with good drivers to serve as a VM host. Every time I fire up a Linux VM it makes me feel a little guilty (after all, if everyone does what I'm doing there's essentially zero reason for hardware providers to think about Linux drivers), but I've resigned myself to the fact that Linux's server roots are always going to show.
I do the same thing. No reason to feel guilty about it, not that I can see; indeed, when freed of the hardware compatibility issues that it just can't handle, Linux really shines.
Essentially, they behave as if they've spent the last (however long you were offline) without sending any packets. If their lease is still valid, they should still be able to use the same IP address; i.e., they shouldn't need to re-acquire one. That's not always the case, so they also do a DHCP request along with that.
I have a late 2012 13". It usually takes ~8 seconds to connect to an AP. Seems to depend on the AP and anecdotally it's very sensitive to the wireless channel. Maybe I just have a bad card or something.
I have found that the MacBook Pro's speed at connecting is dependent on the AP. My MBP connects to my work APs quickly, but it has issues with my home AP. Sometimes the connection to my home AP gets dropped for no reason, and when I try to reconnect it refuses to let me back on, claiming my password is incorrect. My MBP is the only device I own that has this problem (even my iPhone doesn't have this issue).
I've noticed that on my OSX macbook the connection time is slow, but it seems to connect before you log in if it's asleep. I think Apple uses tricks like this to make it seem much faster.
I'm running Arch Linux from a flash stick on a cheap Haswell Celeron. From BIOS to Chromium with WiFi up is about 20-30 seconds. And I'm using connman, so it could be faster using wpa_supplicant/dhcpcd directly. On Windows it took up to two minutes. So it is definitely an issue with hardware and drivers, not Linux in general. Also, using old hardware (including old WiFi dongles) is a recipe for headaches. A cheap Haswell laptop or desktop coupled with a Linux-friendly WiFi dongle or card should work fine.
I can sympathize with this article. Every time my kernel is upgraded, I must manually recompile my wireless driver. I'm using a patched version of a Broadcom wireless driver that some kind soul on GitHub has been maintaining. If I were new to Linux, there is no way I would have been able to get my wireless interface working in the first place. Linux has made vast improvements over the years in how well it works with so much hardware. There just seems to be a need for better wireless driver support.
The author wrote: "a billion mobile devices running Linux and using Wi-Fi all the time". I would bet that Android has a separate implementation of some of the wifi stack to make it work better.
Coming from a guy that worked on pre-production android hardware... you wouldn't want any of those drivers. Ever.
Seriously.
They don't even properly report the cell signal levels correctly half the time (.... like returning -1 always on the error correction signal level...).
And as long as our computers don't come with Linux preinstalled it will continue to be this way. This is what happens when your OS is a stranger in a strange land; it doesn't feel like it belongs there.
If desktop Linux ever became popular enough for computers to come with it preinstalled, it would immediately go down the binary blob driver path that Android is on.
For vendors, the reasons to close up their drivers would be identical whatever OS their kernel drivers would run on top.
As the other poster mentioned, the driver and firmware can be closed source. The rest of the Android WiFi stack is very liberally licensed: the Android Java portion is Apache-licensed and the supplicant is BSD. You can do a lot to tune the stack without sending code upstream.
I recently helped a relative move an old Win XP machine to Ubuntu. I was going to move them to Lubuntu; however, the USB WiFi dongle they are using refused to cooperate -- or rather, the Lubuntu live disk I was testing refused to cooperate with it.
The dongle's chipset was Linux friendly and had apparently worked without major problems up to about Ubuntu 10.10 or so. Google revealed the eruption of numerous reports of problems at that time.
Problems that apparently persisted through several releases, for a good couple of years.
The solution that people found worked was to download driver source code from the chipset manufacturer (RA) and build it, with custom settings, on one's own machine. Some also found success with banning one or more apparently concurrently competing drivers from being loaded on their system. Per some descriptions, multiple compatible drivers would wrestle for control of the device, evincing symptoms matching what I'd experienced.
I was getting ready to start a custom build and/or perhaps whacking driver loads -- after installing Lubuntu to get past the fixed live CD configuration -- when I thought to try the plain Ubuntu live disk as opposed to Lubuntu. Problem gone.
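For reference, the driver-banning mentioned above is usually done with a modprobe blacklist. A minimal sketch, with the caveat that the module names below are examples only; check `lsmod` for the drivers actually wrestling over your device:

```
# /etc/modprobe.d/blacklist-wifi.conf
# Keep competing drivers from grabbing the dongle (module names are examples)
blacklist rt2800usb
blacklist rt73usb
```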
Working stuff breaks. Breaks persist for months if not years. Ostensibly compatible/comparable systems aren't.
I'm not going to complain; it is what we make of it. But still, today, we don't always do such a good job of making -- or maintaining.
Separately, the Ubuntu screen image on my relative's truly mass-produced, 17" Dell LCD is shifted slightly to the right -- just enough to hide the rightmost few pixels. An old 17" LCD I have plugged into an old T42 has a similar shift. Usable, but slightly annoying, particularly with respect to today's anorexic scroll bars.
I did a little research into the problem, months ago, but did not find a ready solution. Not WiFi, but still slightly to moderately awkward.
If it's a VGA monitor you probably need to hit the auto-adjust button on it. Different graphics cards and drivers produce slightly different timings for the same mode, so the monitor has to compensate, and it only adjusts its setting when you first get it or explicitly tell it to.
Thanks! I don't know about my relative's monitor, but the monitor attached to the T42 has a dedicated "Auto" button. I think I had waded through the separate menu choices without success, but I never tried that "Auto" button.
Pressing "Auto", the image futzed around for a few seconds and then apparently aligned properly. It didn't even mess up my brightness setting.
Sigh... I'm getting old.
I guess I'll add that for a long time, this monitor was dual boot, and I wasn't too interested at regaining the pixels in Ubuntu, only to lose them or their corresponding columns on the other side, under Windows. Nowadays, no longer a concern...
As I grumped in another comment, I'm getting old. One symptom of such is increasingly "putting up" with marginal cases. Starts becoming "easier" than finding/learning yet "one more thing".
I would extend it out to laptops and Linux in general. I've tried a few times over the last 10 years or so to use Linux on normal consumer laptops (Dells, Thinkpads, etc.) and it's always a really bad experience. Wifi issues, battery life, etc. have major problems. In the end I just gave up trying and use a MacBook with virtualization for running Linux.
I've been using Linux on ThinkPads since 1999 or so.
I can't remember running into issues I could not solve.
Just last week I installed Debian Wheezy on a X61 and everything worked out of the box. Works like a charm.
This is a helpful resource: http://www.thinkwiki.org
Yup, Thinkpad X60 with Intel wifi cards seems to work OK provided you add the non-free firmware iwlwifi package. Some of the X61 and X61s series had an Atheros wifi card and would work from a default Debian install.
I've also had good luck with a Dell Latitude E5420 (i5). This and similar models have a Broadcom wifi card, which is a known problem, so I simply purchased an Atheros half-size wifi card and popped it in. Unlike Thinkpads, the Dell BIOS will take hardware changes in its stride.
Of course, we should not have to do these things. Perhaps as laptop sales decrease, a crowdfunded fully free laptop will become economically viable.
I think this just depends on the make and model. I did a bit of homework before purchase a few years ago (to ensure good compatibility with GNU/Linux), and my Dell Inspiron N7010 has been serving me faithfully for years. No major wifi issues, no battery life issues.
Similarly, I did some research before I bought my Dell Latitude E4310, and it works flawlessly with Ubuntu, and always has.* Graphics (both on screen and on external monitors), WiFi, sound, even Bluetooth.
It's not a guarantee, of course, but my general impression is that going with "pure Intel" (CPU, GPU, sound, WiFi) laptop helps ensure compatibility.
*It's possible that the battery life is worse than Windows; I wouldn't know, because I've never used Windows on it.
The sluggishness, in my experience, depends on several factors:
* Which toolkit is used (if any). Tk-based apps seem to be very quick, since Tk is pretty spartan and basic. GTK-based apps are okay. Qt apps aren't quite as OK. Pure-X apps are zippy, but they're ugly as heck.
* Which WM/DE you're using. GNOME3 and its relatives (Cinnamon, Unity?) are sluggish as heck. KDE and GNOME2/MATE are much more tolerable, with or without desktop effects. (Open|Black|Flux)box are zippy, as are cwm, Emerillon, WindowManager, and virtually all of the tiling WMs. Enlightenment is zippy sometimes, but I don't think I've ever managed to get it to run without crashing back to a login screen within 5 minutes of use.
One curious thing I've noticed is that system resources have absolutely no bearing on UI zippiness. Whether I'm on a PowerBook with 512MB of RAM or a gaming rig with some Intel Core i9-867-5309-Quakemaster-Ludicrous-Gibs-Edition-whatever and terabytes of RAM with some NVidia GeFarce GTXXX 5-million-CUDA-core 8GB SLI monstrosity of a video card with hardware-accelerated 3D grass rendering, GNOME will always act like it's running on a God-damn ENIAC.
This is precisely the problem with Linux. Everyone wants to work on cool kernel-level stuff or daemon-level stuff; nobody wants to bother with the tedious, unglamorous last-mile work of actually delivering a polished user experience around all of that cool stuff.
That's another problem with Linux. "Linux" is a kit of 50,000 mostly-compatible little parts. If your hobby is assembling systems out of these parts, that's great; but it's less useful to you if you want to do higher-order work on top of an assembled computer system.
For example, tailing /proc files and compiling new kernel drivers shouldn't be a part of getting wifi to work anymore in 2014. It should just work. Of course, if you want to tinker it's great that you _can_ tail the /proc files, but you shouldn't have to. You should be able to turn your computer on and just have wifi, out of the box.
I think it is unwise of him to generalize his issues to the whole of Linux when he only has experience with one vendor and their drivers. Maybe the issues stem from problems in the Linux stack, or maybe they are vendor-specific. The problem is that with a single data point you can't tell the difference.
I have 3 Linux laptops in the house. One with Intel running Ubuntu, another with Broadcom running Fedora and the third with a Prism interface running CentOS 6. All three work flawlessly.
We also have 3 Macbooks, a white one, a Pro 13 and a Retina 13. All three work flawlessly with our Wi-fi.
My wife had a corporate-issued HP laptop running Windows 7. It connected to our wireless once or twice over a year. I had a network cable for her.
My in-laws have a Windows 7 Dell laptop. It's now running Ubuntu, booted off a USB stick. It's doing so because Windows 7 sometimes connects to the network, sometimes doesn't and I never identified a pattern, so I simply gave up. Under Ubuntu, it works flawlessly.
And yet, somehow, it's the sad state of Linux wi-fi... Go figure...
Yeah, I haven't even so much as bothered to look at the wifi chipsets in computers that I purchase since like... 2007? Since then I just assume that it works, and so far it always has.
The only trouble I have is with this corporate T420 with Windows 7, on my home wifi network. Sometimes it takes 5+ minutes to connect (2.4ghz or 5ghz wireless-n, I have both available and it has trouble with both.)
I've also never had a single problem with NetworkManager, despite what others are saying. Then again, I've also never had problems with the infamous PulseAudio... I bought a cheap-as-hell USB audio card off the internet the other day and it Just Worked™. That didn't even surprise me.
> a) Your personal anecdotes may not broadly apply.
I'd say that if we were talking about a single computer, but here I have stories of 8 different computers with 8 different CPUs and 5 different wireless interfaces, radios, antennas and software all connected to a single wireless router.
If I got my numbers right, there is an 81.41% chance this is not a fluke.
I've used exclusively linux at home, for the last 6 years (except at work), and I did find that some of the nano wi-fi USB things (necessary for rPi) don't work.
For years I was using a D-Link USB dongle with a Ralink chip to connect to my home network. At first I had to use a non-mainline driver but eventually mainline caught up. Anyway, either way, it worked pretty well without much fuss.
Then I moved to an apartment building with dozens of repeaters in it for a large university network, and my connection became unbearably slow (even though my laptop running Linux and my Android phone and tablet all worked fine) despite working OK on Windows.
So I ordered another dongle by a different manufacturer with a different chipset, which had many reviews exclaiming how well it worked on Linux. It had the same problems.
Eventually I got a PCIE Intel card and it worked splendidly, with no fiddling whatsoever.
The moral of the story is that there are a huge number of different hardware and software configurations and environments to use them in. And what's more, a configuration that works without issue in one environment can fail spectacularly in another.
> The moral of the story is that there are a huge number
> of different hardware and software configurations and
> environments to use them in
True, and it partly makes all these complaints valid for other OSes as well. I had major trouble with my MacBook connecting to my home wifi, until I bought a new router. No trouble anymore.
The best method I found for getting a Windows 7 Dell to stop being stubborn is to restart the wireless driver.
Toggling the wireless power is less reliable than soft restarting.
Making the soft restart easy involves downloading the Windows SDK, to get a copy of DevCon, and then stuffing it into a scheduled task so that you can run it without a UAC prompt.
(I mean this more as 'surprise surprise it's the drivers' than as a defense of windows)
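A hedged sketch of the DevCon approach described above. `devcon restart` and the `=net` class filter are real DevCon syntax, but the hardware ID below is a placeholder you'd look up yourself:

```
:: List network adapters and their hardware IDs first:
::   devcon hwids =net
:: Then soft-restart the wireless adapter (ID is a placeholder):
devcon restart "PCI\VEN_XXXX&DEV_XXXX"
```

Wrapping that one line in a scheduled task set to run with highest privileges is what lets you trigger it without a UAC prompt, as described.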
Win8.1 is even better: we had some colleagues swap their simple DSL modems for routers (which is better anyway, but...) because it's just not possible to use a DSL modem reliably with Win8.1!
I bought a Netgear wifi-to-ethernet adapter when I switched to Linux. For desktops, it's an ideal solution: zero-maintenance, the computer just knows eth0 has a connection from somewhere. It has a web interface to set up connections. The only downside - and this doesn't matter for most desktop users - no monitor mode, no reaver, nothing related to wireless at all, because as far as the OS is concerned, it's wired.
For laptops, there are always plenty of micro-sized USB adapters with known compatibility if your built-in wifi has bad or no drivers.
Wow. So many people in this thread are missing the point. «I use Linux at home with card X and it works great when I custom-configure wpa_supplicant» or «Just buy a MBP».
- wireless chips are obscure and buggy
- audio chips are obscure and buggy
I wish for some organized effort to bring a small set of open hardware chips to replace the proprietary ones that seem to work easily only on proprietary OSes.
That would complement the work of guys like bunnie huang (novena laptop) and would let the linux world enjoy sound hardware for (allegedly) sound software.
Maybe that's just a pipe dream, and the complexity emerges whether or not it's open.
"The 5 GHz signal is just as strong" Interesting. My dormitory at MIT has both 2.4GHz and 5GHz signals. The 5GHz is extremely weak but my Android devices love to pick a weak 5GHz signal over the 2.4GHz and subsequently have terrible speeds.
On another note, I wish that browsers and applications would keep spawning and firing requests at a rate beyond human perception, until one succeeds. The state of browsing the web over Wi-Fi while moving from access point to access point is equally sad. I get an IP address, but applications almost universally refuse to retry their connections until the first zombie socket times out. Seriously, I shouldn't have to wait 10 seconds after each access point change. It should be more like <0.1 seconds after getting an IP.
OSes/Applications should be thinking "This is Wi-Fi. Wi-Fi is supposed to be fast. Since no bytes came in for a full 0.5 seconds, something is wrong. I'm going to keep opening/closing sockets like hell, change networks, change frequencies, whatever it takes to get data to come in the next 0.3 seconds and make the user happy."
Building devices with dual Wi-Fi cards may also offer ways to help alleviate the handover problem.
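The aggressive-retry idea above can be sketched at the application level: instead of letting one connect() block for the OS default timeout, keep making fresh short-lived attempts until one lands. This is an illustrative sketch of the technique, not how any browser actually implements it:

```python
import socket
import time

def connect_fast(host, port, attempt_timeout=0.3, total_timeout=5.0):
    """Retry TCP connects with short per-attempt timeouts, rather than
    letting a single socket block for the OS default (often 60s+)."""
    deadline = time.monotonic() + total_timeout
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(attempt_timeout)  # give up on a stalled attempt quickly
        try:
            s.connect((host, port))
            s.settimeout(None)  # hand back a normal blocking socket
            return s
        except OSError:
            s.close()  # kill the zombie socket and try fresh
            if time.monotonic() >= deadline:
                raise
            time.sleep(0.05)  # brief pause before the next attempt
```

Hammering a server like this trades network politeness for latency, which is why the per-attempt timeout and overall deadline matter.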
> I wish that browsers and applications would keep spawning and firing requests at a rate beyond human perception, until one succeeds.
I do not think you would enjoy the network conditions that come with that behavior. The point about killing old sockets early when switching wifi makes a lot of sense, however.
Well, the problem is really that the network isn't even connected properly in the first place. I agree that it shouldn't behave badly while actually on the network. It's just odd that madly hitting refresh while wifi is reconnecting actually gives me a sooner and faster page load than letting the machine load the page by itself. That means there is something that could be automated but unfortunately isn't.
Network stacks are layered and even after 40+ years, on every OS I've seen, L4 (TCP) never queries nor uses any L1/L2/L3 link quality/stability/availability information when computing retrans intervals, etc. Doing that would indeed be an aesthetically distasteful layering violation, but it would enable much more optimal behavior in a lot of wifi & cell network scenarios, as you've said.
It will probably happen eventually, at least in Linux, after a few more years of commercial pressure to make it suck less.
This is pretty easily avoided if you just buy an adapter known to work on Linux with only free software. The driver for this is included in the kernel.
Right, that's the main issue with wicd -- it's a dead end, but the usability of its curses text UI is still way nicer than connman's. So you get a devil's bargain between shipping something nice-to-use that's eventually going to break everywhere vs. something faster, smaller, and newer which is user-hostile and still being actively developed (i.e., has some weird bugs not really worked out yet).
The first laptop I got at my current job is now sitting unused because of unreliable wifi in Linux and Windows 7. Due to BIOS restrictions work just got me a new laptop instead of messing with trying to replace the card.
It's incredibly sad how shoddy modern wifi can be, and a testament to the importance of networking that flaky wifi can render a computer useless.
A main reason to use Apple products is that the hardware and software are always bundled and guaranteed to work together without any additional tweaking or messing around. You turn it on and it just works.
Both Windows and Linux suffer from the problem of attempting to support n different hardware configurations in a decentralized fashion, and neither has solved it very well.
Actually, I know people with MacBooks who regularly have problems with hibernation instability.
And the whole reason for having a choice in hardware configuration is that one-size-fits-all doesn't work too well in the real world. It might be fine for all the coders in San Francisco writing iPhone apps and Rails websites, but there are many factors that come into play for other people.
Some people can't afford an expensive laptop, want better specs, want to play games, want a touchscreen, want a full keyboard, etc. There are plenty of reasons for not going with whatever Apple has decided upon from on high.
Yeah, the default card in my X220 was a piece of crap. I got one of these for $15 and replaced it myself (took 10 minutes) and it's been muuuuch better.
(I remember this was actually a customization option when I bought it and I stupidly didn't pick it. So you don't have to get a macbook, just read carefully when you get another thinkpad.)
It truly was an option. Patting myself on the back now for taking half an hour to research it two years ago. Although one can disassemble the X200 with (relative) ease.
How is it that for the last decade I've been running Linux on whatever decent machines I was primarily using and whatever random garbage I could get my hands on and I haven't had any of these problems that people are perpetually complaining about?
Are these people using some kind of exotic hardware? Am I just really lucky?
Just FYI, this response has about the same validity as dismissing online anonymity / pseudonymity concerns as a non-issue: simply because you're not experiencing a problem doesn't mean others aren't, and doesn't make their frustrations any less valid.
I've used Unix for over 25 years, Linux for over 17. It's my platform of choice, I very, very rarely use anything else.
And my Thinkpad T520i listing a "03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]" under lspci and running Debian GNU/Linux jessie/sid has _never_ had reliable WiFi, and I run it essentially 24/7 with a Cat5 cable plugged into hardwire networking.
I've tried network manager, wicd-cli, wicd-curses, and other tools. I can see networks. I cannot connect to them. Plugging a cable in solves the problem far faster than futzing with a nonintuitive, low-feedback/diagnostics interface.
And with that impetus, I've just set up ye olde /etc/network/interfaces configuration, and I've got a WPA2 connection running. One less cord to trip over.
Why I could never get network manager nor wicd to work ... I don't know.
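For anyone wanting the same ye olde setup, a minimal Debian-style /etc/network/interfaces WPA2 stanza looks something like this (interface name, SSID, and passphrase are placeholders):

```
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyNetwork
    wpa-psk  MyPassphrase
```

This relies on the wpasupplicant package's ifupdown hooks, which launch wpa_supplicant for you on `ifup wlan0` with no NetworkManager or wicd involved.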
Yay, I'm lucky too. Not only that but 2 out of 3 MacBook Pros in my office do very badly with WiFi - slow to connect (my android phone is 10x faster) & they drop at the first sign of a flaky signal.
The most robust laptop I've had on wifi was a Samsung running Fedora 20. Very fast to connect and never dropped. The pre-installed Windows 7 dropped continuously and often failed to connect.
I suspect the "issue" may be very hardware dependent.
For my desktop use I just use a TP-Link router in "client mode"; as far as the machine is aware, it's on ethernet. For my laptop, I buy cheap TP-Link wireless USBs, since it has a crappy internal wireless card regardless of OS. Sometimes a cheap USB adapter works better than a "top notch" wireless card. I would still agree that wireless on Linux is configured terribly. Sometimes you get bugs whose fixes you have to hunt down on the internet -- which you can't get on if your WiFi isn't working at all. You'd think the thing to get the most attention on a Linux OS would be anything related to networking, the most crucial feature of any OS these days.
My regular laptop with Win7 gave up the other day and I have been attempting to rebuild an old laptop with Linux as a temporary solution. I bought a cheap usb wifi dongle ... and lo and behold, support nightmare ensued.
Eventually, I realized that I didn't make a right choice in buying that tiny dongle. So now I am on phase two of rebuilding old laptop with Linux, with a different brand of usb wifi (double the price of the first cheap one).
I am not a Windows fan anymore, but everything in the Windows world just works out of the box. And what's with Linux wanting to install OS updates on a daily basis?
Device manufacturers write drivers for Windows. They don't write drivers for Linux. So it's probably (mostly) the fault of the manufacturer.
Another thing is that Windows has an abstraction layer called NDIS which network drivers communicate to. This abstraction is complete enough that if you have a compatibility layer for NDIS, you can usually use Windows drivers directly on Linux. The project is called ndiswrapper. https://en.wikipedia.org/wiki/NDISwrapper Edit: to be specific, maybe it would be helpful if Linux had a similar abstraction for drivers to use?
As for the updates, that depends 100% on your distro and your own settings. If you're on Ubuntu, you can just uninstall the update-notifier, and/or edit /etc/apt/50unattended-upgrades to install updates without notifying you all the time.
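As a sketch of where those knobs live: the files are in /etc/apt/apt.conf.d/, and the companion 20auto-upgrades file is what actually turns the periodic runs on (contents below are the standard enable settings):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```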
Yup, ndiswrapper was part of the support routine. But even when things work, they don't work optimally, as the author correctly claims. On the flip side, I should also mention that on another, newer laptop with an embedded wifi chip, I never had any wifi problems. Ultimately, I believe Linux's reputation suffers -- in comparison with Win/Mac -- due to its multiple flavors and plethora of development endpoints. The lack of manufacturer support is just one of the issues.
This. Wifi is the #1 reason by far that I ditched my $800 thinkpad to switch to a $3500 MBP (to still be in a unix environment).
On my thinkpad, with one of the known, supported wifi chipsets, wifi would work about 8-9/10 times.
But because I'm doing web dev stuff, those 1 or 2 times would basically brick the laptop for doing any kind of productive work. And that's not worth any kind of savings or effort....
afaik it's a driver problem first, before a linux wifi problem, but really I have no idea why it was working or not working.
But if anyone is out there listening, this is how much the problem is worth to me- roughly $2500...
If you're using NetworkMangler, God help you. Shit's never worked right for me... even connecting to my home wifi it gets way fewer bars than it should, and keeps making and breaking the association.
Using just wpa_supplicant and dhclient, I've had far fewer problems, particularly with Intel wireless chipsets.
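As an aside, the passphrase-to-key step wpa_supplicant performs is fully documented: WPA2 derives the 256-bit pre-shared key from the passphrase and SSID with PBKDF2-HMAC-SHA1 over 4096 iterations, which is exactly what `wpa_passphrase` prints as `psk=`. It can be reproduced in a few lines:

```python
import hashlib

def wpa2_psk(ssid: str, passphrase: str) -> str:
    """Derive the 256-bit WPA2 PSK (PBKDF2-HMAC-SHA1, 4096 iterations,
    SSID as salt), matching what wpa_passphrase emits as psk=."""
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32
    ).hex()

# Standard test vector from IEEE 802.11i:
print(wpa2_psk("IEEE", "password"))
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

That 64-hex-digit string is also what a driver that refuses text passwords and demands a fixed-length hexadecimal key is actually asking for.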
Linux would benefit from a bit of expectations management. Some of us neither want nor need a less expensive Windows, and Linux in general tends to function a whole lot better if you don't treat it as such.
Thank god you use an Intel card. When I bought a ThinkPad, it came with a Realtek card. I'm still connecting through ethernet; I never managed to make it work.
I have a thinkpad wired into a wifi bridge. Since it almost never moves nowadays, it's a pert-near ideal setup and sure beats the naff kernel support for the thing's internal rtl8192se chipset.
Ah, interesting. I barely move my Thinkpad, too, so it's not that big of a deal, but it's very sad all the situation. Someone mentioned here a github user was maintaining a driver for the card he was using, so I decided to check out if someone had a patched version of the Realtek I have on my computer, and it seems that there is: https://github.com/FreedomBen/rtl8188ce-linux-driver
It can be remedied somewhat by adopting two strategies:
- Cherry-pick hardware, in this case cards
- Use very recent software stacks
I just plugged in a Huawei 4G LTE dongle. I spent some time making sure this particular card worked, and discarded many others. I'm running the most recent kernel, systemd, udev, etc. It was a plug & play experience. If I had proceeded otherwise, it'd have been a nightmare.
Not much better on OS X ... my brand new laptop with the most current OS X is totally unreliable about connecting to wi-fi on wake from sleep. Even on reboot half the time it fails. Maybe different cause from that highlighted by the OP's article ... but in the end not much better situation.
"Overall the 5GHz has shorter range compared to the 2.4GHz. It is recommended to select the 2.4 GHz if you using computers and wireless devices to access the Internet for simple browsing and email. These applications do not take too much bandwidth and work fine at a greater distance.
However, if you are in a place which is crowded with more wireless signals, it is advisable to use the 5GHz network to avoid interferences. Furthermore, the 5GHz is most suited for devices which require uninterrupted wider bandwidth for video/audio streaming or multimedia content."
I can sit literally right next to an AP and get a connection at the lowest basic rate.
It's possible that Linux is more likely to reduce your rate in the face of increasing errors and noise, whereas other OSes/drivers might ignore the noise/errors and keep the rate the same, even though the advertised throughput is no longer actually being delivered.
In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
Both your DHCP client and your WPA client are not optimized for speed on Linux, they are optimized for reliability.
it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available
Yeah; your wireless client isn't going to change APs until it loses signal.
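For anyone wanting to see which AP and rate the driver actually settled on, `iw` exposes it on Linux (the interface name `wlan0` is an assumption; substitute yours):

```sh
# Which AP am I associated with, and at what negotiated bitrate?
iw dev wlan0 link
# Per-station counters; high "tx retries"/"tx failed" hint at rate-control trouble
iw dev wlan0 station dump
```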
Most of these complaints are industry issues that have different proprietary fixes intended to appease consumers, but none of them are recommended or required by the industry.
I understood it that he just wants the one SSID and let his devices connect to the appropriate one. But his Linux system will always try the 2.4ghz first, so he ends up creating something like "mywifi24" and "mywifi5".
Personally, I would prefer doing the two separate names so I can know what I'm connecting to. Being a radio guy, I see it as two separate bands, two separate physical radios. I don't see a point in trying to give them the same name.
Of course, for my home environment, I'm pretty much using just 2.4, and I give all my access points the same SSID so I can "roam" between them. I suppose someone could want to be able to roam between 2.4 and 5ghz (I tend to use 5ghz for backhaul).
I never was able to make my Ralink-based card (D-Link DWA-160) work in 802.11n mode. It only works with b/g. I suspect it's a driver limitation (rt2800usb), but I never got any response from the developers about it.
Maybe because I chose a laptop specifically for use with Linux (ThinkPad T530), but it works perfectly. Everything. Even WiFi, sound, suspend, Fn keys, fingerprint reader, everything.
That's probably a big part of the issue; my experience with Intel wireless on Linux hasn't been as great as with, say, Atheros chipsets.
Plus, NetworkManager. Good God is that terrible. To put it in perspective: on one of my laptops (a PowerBook G4 running OpenBSD), I basically run the following by hand to connect to a wireless network:
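The commands themselves didn't survive in this copy, but a manual OpenBSD join typically amounts to something like the following (the interface name `bwi0` is a guess for a PowerBook's Broadcom chip; see ifconfig(8) for the nwid/wpakey options):

```sh
ifconfig bwi0 nwid MyNetwork wpakey MyPassphrase
dhclient bwi0
```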
This thread (not the article itself) is one fine example of the decline "hacker" news is going through.
If you look at "Linux" as a community of users and developers, there is a lot of get-stuff-for-free attitude and not enough people capable of and willing to work on the tasks waiting (open source drivers for mobile GPUs, anyone?).
But hey, aren't we all busy making lots of $$$ with our übercool startups these days? Call it the sad state of hacker ethics, or continue to improve the free software world day by day... the choice is yours.
Yep, and that is the beauty but also the pitfall of open source. People can patch things to make them work for them, but as long as no one takes up the task (or forms a group) to fundamentally redo a driver or rewrite the architecture, nothing happens. So you should take up the task to change things (and find people to help you); that's the strength of open source (it's community driven).
rtfm. Linux provides the architecture, the vendors ignore it and do it themselves.
The Intel drivers require blobs. Broadcom used to require blobs that you had to cut out of Windows drivers yourself, though I have no idea what that's like now.
The vendors use the excuse that this stuff requires FCC certification, and if they expose the low-level stuff some hax0rs will turn the wi-fi portion of the spectrum into CB radio circa 1979.
The OpenBSD guys used to recommend Atheros, because the specs were open. I have no idea what the current wisdom is. I run Thinkpads, which are BIOS-locked to only Lenovo provided cards, so I've been in iwl-land for a long time.
The Atheros HAL was reverse engineered for OpenBSD.
adrian@freebsd used to work at Atheros and managed to get them to open source their HAL. The code is now in FreeBSD where the HAL was a blob for years. AFAIK no hardware docs are available for people outside the company.
Ralink used to provide documentation on request. They've been acquired by Mediatek since. Mediatek allows OpenBSD to distribute binary firmware images under free terms (for the run(4) driver). Perhaps they'd still provide docs. I haven't asked yet.
Realtek's engineers tend to be kind and helpful, but they don't provide docs on request, citing NDAs, and management has ignored my requests. There are several open source drivers for Linux written by Realtek engineers which can be used as a reference but they are pretty large so studying them takes time.
Broadcom's open source drivers are unreadable to the point of being more or less equivalent to binary blobs. If you want to see why, try to make sense of this file in the Linux tree:
drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c
Some Broadcom chips are decent, with blobs in the kernel. But it's pathetic that they never implement power saving. The same tends to apply to Intel, sadly.
The result is that my system uses 1.5W more than it would under OS X. Otherwise I have very good power consumption, so it doesn't have much impact on overall battery life. But it's very annoying.
We can't blame NetworkManager on vendors, though. wicd seemed great for a while, but development on it just stopped at some point.
Maybe I should just spend some more time learning about wireless and see if I can hack on wicd or create something new rather than just mourning its loss.
Which ThinkPad are you using that locks to Intel cards?
I know whitelisting is a pretty common occurrence on ThinkPads, but I hadn't heard of one that was strictly manufacturer-specific. Most ThinkPads I've encountered are "use Intel 5400AGN or Broadcom 'blahblahblah'", not "use Intel only", and all of the ThinkPads I've experienced that with have had a BIOS reflash available to work around that particular problem.
P.S. This message isn't intended as snark; it's intended to let me skip whichever ThinkPad it is that's locked down. My bet is on one of the thin, can't-take-the-battery-out ultrabook "ThinkPads", but we'll see!
The strength that left me two months without WiFi on my Ubuntu install, as Canonical devs pushed their half-baked replacement for the Broadcom drivers.
Reversing their decision meant, for many, either spending a few sleepless nights struggling to undo their changes or waiting for feature parity.
I waited; for me, the time for spending nights tweaking Linux distributions somehow ended at the turn of the century. Nowadays either it works out of the box, or I don't care.