To say it "sucks" is a bit harsh. It's delivering multiple hundreds of Mbps to you via an unlicensed contention-based medium. The air interface is like sending packets over a noisy Ethernet hub; it's impressive it works as well as it does. That said, this article's a good primer on some of the protocol's fundamental challenges.
In the coming years we'll hear more about 802.11ax, which is thankfully focused on efficiency vs. raw numbers, but likely won't be ratified until 2019/2020.
Not as fully featured, I assume, but Wireless Diagnostics is free. Ignore the popup, and go to Window -> Info/Scan/Performance
(it's in System -> Library -> CoreServices -> Applications, but I start it via Spotlight)
Protip: it's also easily accessible by holding Option and clicking on the Wifi icon in the menu bar.
You're saying that in a train, if I ride the route once and connect to every open network (assuming no captive portal) and obtain DHCP offers (thereby obtaining IP subnet information and being able to guess an unoccupied IP address later), I can connect to the networks on my way back and get responses from a remote server to one or two packets?
Say there is about 20m of track where a given router is in range (the train won't be running that close to the building and there is a wall in between) and we're doing 130km/h; that gives me about 20/(130/3.6) = 0.55 seconds of range time. A server within the same country (the Netherlands) is usually <40ms RTT on any network, so PHY+SYN+data+responses = 10+40+40 = 90ms, easily within the 550ms window. A tricky part is knowing when I'm in range (I shouldn't wait for a beacon), but with GPS and triangulation of beacons it shouldn't be hard to figure out where each AP is and when I'm in range. Worst case I have to do the track a few times to get enough beacons to see where the signal starts and stops.
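For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope in Python (all values are the assumptions from the comment above, not measurements):

    range_m = 20          # track length where a given AP is in range
    speed_kmh = 130       # train speed
    rtt_ms = 40           # typical in-country RTT in NL
    assoc_ms = 10         # rough guess for PHY association overhead

    window_ms = range_m / (speed_kmh / 3.6) * 1000
    budget_ms = assoc_ms + rtt_ms + rtt_ms  # associate + TCP SYN + data/response
    print(f"in range for ~{window_ms:.0f} ms, need ~{budget_ms} ms")
    # in range for ~554 ms, need ~90 ms -> comfortably feasible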
Big name brand vendors will sell you APs that support some proprietary coordination mechanism between them (usually with a central "wireless network controller" as a separate device), but that is not required for roaming to work. Such solutions (apart from centralized management) are mostly there to keep your APs from interfering with each other (which is a non-issue unless you have more than three and they are placed in unfortunate physical locations).
... as long as they're on the same layer 2 network.
If the client is moving >10 mph relative to a fixed AP (i.e. you're driving past the AP in your car), Wi-Fi doesn't handle that well. Your movement creates a Doppler shift in your RF signal, which Wi-Fi can't adjust for (vs. e.g. LTE, which does).
 ∆f = f0 * (∆v / c) = 2.4 GHz * (4.5 / 3e8) = 36 Hz (using 10 mph ≈ 4.5 m/s)
Edit: I'm certainly not an expert in RF. I checked my intuition against the datasheet of a radio module I've been working with (RFM69HCW), and found in a footnote of an equation (on page 31):
> crystal tolerance is in the range of 50 to 100 ppm
Here's a link to that datasheet: https://cdn.sparkfun.com/datasheets/Wireless/General/RFM69HC...
It seems that the Doppler shift (measured in PPM) would be well within 0.1% of the crystal's tolerance.
Edit #2: Doppler shift is 0.015 PPM; not 0.150 PPM as I originally stated.
A radio is a physical device subject to manufacturing variance, temperature fluctuations, and aging effects. It has to be manufactured to tolerate frequency deviations caused by these effects. What I was trying to show in my analysis is that the effect due to Doppler shift at 10 mph is orders of magnitude less than at least one of these effects. If the Doppler shift at 10 mph makes the signal unintelligible, then the other effects I mentioned should as well.
For example, a transmitter with 50 PPM accuracy could cause a 2.4 GHz signal to drift by up to 120 kHz (0.00012 GHz). That's about 3000 times more than the 36 Hz that I calculated for Doppler shift at 10 mph.
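If you want to redo the arithmetic yourself, a quick Python sketch (same assumed numbers as above):

    f0 = 2.4e9     # carrier frequency, Hz
    c = 3e8        # speed of light, m/s
    v = 4.5        # ~10 mph, in m/s
    xtal_ppm = 50  # crystal tolerance from the RFM69HCW datasheet

    doppler_hz = f0 * v / c          # Doppler shift at 10 mph
    xtal_hz = f0 * xtal_ppm * 1e-6   # worst-case crystal drift
    print(f"Doppler: {doppler_hz:.0f} Hz = {doppler_hz / f0 * 1e6:.3f} PPM")
    print(f"Crystal: {xtal_hz / 1e3:.0f} kHz, {xtal_hz / doppler_hz:.0f}x larger")
    # Doppler: 36 Hz = 0.015 PPM
    # Crystal: 120 kHz, 3333x larger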
Perhaps current-generation WiFi has more of an issue with that, but I've helped build systems that used a much older generation of WiFi in an ad-hoc (no AP) mode between nodes with a speed differential of Mach 1 (between a rocket and the ground). It worked fine, and lost almost no packets over the course of the flight.
1) We had a shoulder-mounted directional antenna pointed at the rocket.
2) The rocket had a cylindrical patch antenna wrapped around it.
3) Wifi channel 1 is in an amateur radio band, which means we can amplify it beyond the normal limits. We ran it at 1W.
To avoid a huge network drop, we used a VPN connection on the client computer so it kept the same IP while jumping from one wifi network over to the other.
I filed an FCC complaint against Frontier for delivering a tenth of their advertised speeds. They gave me an upgraded connection that delivered the full 30Mbps within a week.
The only way corporations are going to learn to back off is when they have to pay for their mistakes.
Even if you forget the ubiquity of it... it saves ridiculous amounts of money. 15 years ago, we'd spend like $50k to wire up new office space for 100-150 people. Now... maybe $5k with cable pull.
I'm now thinking that our occasional VoIP issues probably aren't Time Warner (er, "Spectrum," whatever)'s fault, but might be caused by overwhelming the physical capabilities of our router.
So what sort of hardware should I plead with our CEO to let me purchase? Is there any way I can test, calculate, or prove that upgrading the infrastructure will prevent the operational staff from saying "Hello..? Hello..? Can you hear me?" on a phone call?
It sounds like, from hopping over to the linked CNet article, that maybe putting another router with the same network name and password, hardwired into the opposite corner from the existing router, would let us split the load across the office. But the end of Anand's article says I should throw up my hands and quit because the back room where a new router would go can still hear the original Asus transmitting loud and clear, so client devices would still have delays caused by picking up other networks.
The fix, if you're stuck dealing with crappy equipment that has massively oversized buffers, is to throttle the connection in another device downstream of the modem to slightly under what the modem will throttle you to. On top of that, you can also implement QoS so that your VoIP traffic is put into a different queue, and give that queue a higher priority.
This is all predicated on the assumption that your ISP is actually providing what they say they are 100% of the time. If you pay for 5 Mbps upstream, set the QoS to 4.5 Mbps, and your ISP delivers 4 Mbps 10% of the time, then for that 10% of the time your equipment will be sending too fast and you'll go back to filling up that oversized buffer just like before.
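To see why the oversized buffer is so deadly for VoIP, a tiny illustration (the buffer size here is an assumption for the example, not a measured value):

    buffer_bytes = 256 * 1024  # assumed oversized modem buffer
    upstream_bps = 4e6         # what the ISP actually delivers in the bad 10%

    delay_ms = buffer_bytes * 8 / upstream_bps * 1000
    print(f"queuing delay with a full buffer: ~{delay_ms:.0f} ms")
    # ~524 ms; VoIP generally wants well under ~150 ms one way,
    # so a full buffer alone is enough to produce "Hello..? Hello..?"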
You'll probably have to set up two different ESSIDs. Most devices I've used will keep using the same AP even when its signal is lousy and they have a great signal to the other AP.
My specific use case is moving around the office while being on Google Hangouts on my phone.
Without it, clients can still roam between APs, but you'll get ~1 second of packet drops each time your client roams to a new AP. It won't drop a VoIP call, but you'll notice a moment of silence when it happens.
The future is 802.11r/k/v, which will allow clients to roam themselves quickly between participating BSSIDs without needing to renegotiate the connection each time (as opposed to relying on pure AP-based hacks).
Any Wi-Fi network supports roaming on the network side; you just need all of your access points to use the same SSID, and to dump users onto the same subnet consistently (otherwise you'll need to re-IP after roaming).
It's always a gamble when I get a new phone/laptop/device whether it will work with the multi-AP setup I have in my house, and I'm really tired of getting something and bringing it home only to find it hangs on to the furthest AP for dear life.
As a client, the Intel wifi hardware works quite well, as does the software stack on top of it (at least on Linux). It helps that they're the ones who've written a fair bit of the wifi stack itself.
We're using mostly Apple hardware + Ubiquiti APs, and to be honest I expected everything to "just work" with the defaults...
Think of it this way: Each channel provides a fixed amount of bandwidth, and neighboring APs need to overlap slightly to provide seamless coverage. As a first order approximation, if two APs are on the same channel you'll have twice the range, but half the bandwidth because the spectrum would be shared between them.
(It's actually slightly worse than this, because collisions will happen causing additional overhead.)
In the end, if you have a problem with WiFi (i.e. anything other than the ideal case of transparent walls and no interference), you will have a problem with it forever.
Is the 5GHz band consistently worse than 2.4GHz (i.e. constant connection drops), or is that just my experience?
5GHz is the better of the two bands: it gives you many more channels and there's generally less interference. That sounds like something particular to your environment.
If you have a Mac, I recommend installing WiFi Signal (https://itunes.apple.com/us/app/wifi-signal/id525912054?mt=1...) - it gives you much greater insight into what's happening on the air. I'd start by installing something like that and monitoring for correlation with your drops - what else happens when your connection drops? Does the SNR drop? Does the AP change channels? etc.
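If you'd rather log it yourself, here's a rough Python sketch using macOS's undocumented airport utility (the path and output field names are assumptions; Apple has changed them across versions). Eyeball the log against when your drops happen:

    import re, subprocess, time

    AIRPORT = ("/System/Library/PrivateFrameworks/Apple80211.framework"
               "/Versions/Current/Resources/airport")

    while True:
        out = subprocess.run([AIRPORT, "-I"], capture_output=True, text=True).stdout
        rssi = re.search(r"agrCtlRSSI:\s*(-?\d+)", out)
        noise = re.search(r"agrCtlNoise:\s*(-?\d+)", out)
        chan = re.search(r"channel:\s*(\S+)", out)
        if rssi and noise:
            snr = int(rssi.group(1)) - int(noise.group(1))  # SNR in dB
            print(time.strftime("%H:%M:%S"), f"SNR={snr} dB",
                  f"channel={chan.group(1) if chan else '?'}")
        time.sleep(5)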
Great. Which do you like best?
In general, 5GHz is way better for apartments (less channel congestion, apartments are usually smaller than houses), and 2.4GHz is better for non-urban houses (better wall penetration).
For urban houses (I can see 15+ 2.4GHz networks from my living room; in a larger city it could be worse), the best bet is to have 2 or more APs and connect them with ethernet if possible, and powerline otherwise. My cable internet comes into the house at one corner, under a different roof A-frame than the rest of the house (a previous owner built living space above the garage), so I can't run ethernet through the attic and I use powerline adapters; there's a separate subpanel for this area too, but I still get a minimum of 30Mbps over the powerline, which is good enough for web/e-mail.
My next step is to see if any outlets under the main roof get good signal over the powerline, and then run ethernet throughout the attic; then I can have several routers throughout the house.
I use TP-Link APs running OpenWRT; they are cheap and OpenWRT lets you adjust the power down. If you aren't cost-constrained, I've heard almost nothing bad about Ubiquiti.
Expensive for sure, but setup was a breeze, reliability has been great, and I get 200Mbit+ a floor away from my ISP router, where I used to see 20 on a good day.
I was skeptical of how well a mesh could work, but the performance has sold me.
I have one here at home as well. I'm on a Gigabit connection provided by AT&T at home, and the router sits two rooms away with 2 walls between me and it. Also, my daughter is watching YouTube videos using it right now. With all that, I just ran a speed test using fast.com on my iPhone 6 and it showed 150Mbps. Really solid router.
* MikroTik hAP AC
* Grandstream GWN7610
Last unrelated tangent: how far are we from seeing widespread municipal wifi deployments? Is it possible that school districts might expect to see 100% home internet penetration in a non-trivial percentage of American school district areas?
I've got a MikroTik router at home (very educational), and I've very often heard that UniFi APs can do it properly. Can I do it with MikroTik APs? Will setting it up be hell?
1. All APs use the same SSID
2. All APs bridge users onto the same subnet
But the experience will vary from client to client in terms of how aggressively they roam. Many AP vendors offer knobs to "help" clients roam by kicking clients off at a certain signal threshold, supposedly triggering a roam: in practice, this just upsets many clients.
Re #1: "SSID" is actually shorthand for "ESSID" - Extended service set identifier. Each AP has its own "BSSID" - Basic service set identifier, which is the AP's Wi-Fi MAC address. The whole concept of an "ESSID" is to signal to clients that a certain group of APs belong to the same system.
Re #2: If you need to scale beyond a single subnet, you generally solve for this by terminating users on a centralized packet core / wireless controller, which shards users out onto multiple subnets.
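To make the ESSID/BSSID distinction concrete, here's a sketch using scapy (assuming Linux, scapy installed, and a monitor-mode interface named mon0 - all assumptions on my part) that groups the BSSIDs it hears under each ESSID:

    from collections import defaultdict
    from scapy.all import sniff, Dot11Beacon, Dot11Elt

    networks = defaultdict(set)  # ESSID -> set of BSSIDs

    def handle(pkt):
        if pkt.haslayer(Dot11Beacon):
            essid = pkt[Dot11Elt].info.decode(errors="replace")  # SSID element
            networks[essid].add(pkt.addr3)                       # AP's MAC (BSSID)

    sniff(iface="mon0", prn=handle, timeout=30)
    for essid, bssids in networks.items():
        print(f"{essid!r}: {len(bssids)} BSSID(s) {sorted(bssids)}")

A multi-AP network done right shows up here as one ESSID with several BSSIDs behind it.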
The best I could see from packet sniffing was that certain mobile clients would go into some kind of sleep mode, and when/if packets were transmitted to them, they would not respond with the proper ack (sometimes), and the AP would then hang the whole WLAN waiting for the ack (I guess). I tried several different Netgear ProSafe access points (all of them 802.11n). The only thing that worked was getting rid of my Netgear hardware and replacing it with a Mikrotik wAP ac.
What are the practical ways to find interference? The tools I have are an OpenWRT router and a MacBook.
I normally use 5GHz anyway, but my wireless music device is still 2.4GHz, which is incredibly annoying when it dies.
I love that tool. Tells me signal strength and quality (i.e. how much noise there is). Actually, opening it now, I see there has been an update and it has gained two new indicators. Used it multiple times to find APs. Also works on laptops (though a smartphone is much more handy to walk around with).
I have a wifi network in a building built with brick and mortar, so I need more APs to cover the entire premises; there are going to be some overlaps in the coverage here and there...
Also, if there will be so many APs that they will share channels, you want the APs physically isolated from each other even if that means a little more isolation between AP and client than you'd otherwise have. For example, in a hotel or dorm put an AP in every third room instead of all in the hallway, so they don't hear each other.
Interesting. I helped start Wayport (now AT&T WiFi Services) and subsequently worked at Vivato for a while.
Which public WiFi networks are you responsible for?
I think that's the main thing OP is saying "sucks".
Could you give a few examples? ;)
So German Funk is actually directly related to the English funk (as in music) etymologically.
An open-space office (or any office) for a company that depends on the Internet for any of its work, without Gigabit Ethernet cables sticking out of every workstation, is pretty damn foolish.
The most fascinating piece was what he called the "butter factor": the closer a substance is in consistency to butter, the more it will absorb WiFi signals. Aruba had one heck of a challenge installing a WiFi network in the Land O'Lakes manufacturing facilities. They had to use directional antennas mounted at eye level down each aisle of the factory.
Did they ever mention the permanent, cumulative damage concentrated 2.4GHz RF does to the human eye?
Non-ionising radiation is safe to fairly high levels.
I have been safely working around thousands of watts of RF power for 20+ years and have maintained my 20/15 vision.
You can get very solid wireless in your house if you are willing to spend some money on it, and of course the time on the infrastructure. This means a network drop to each room, and very dense access points.
If you try to play games and avoid the "running wires to a room" part - you're just going to have a bad time. I don't see the "mesh wireless" stuff working long-term, they are simple stop-gaps. Once in the room though you can get away with wireless.
I recently just "made it rain" access points (Ubiquiti in this case), more or less so that anywhere in the house you have, or are very close to having, line of sight to an AP. Since that transition, wireless is as good as wired for me.
It's a bit more expensive, but also makes my quality of life far better. Everything just works. I also took the opportunity to get a quality PoE switch for the telco closet, a solid UPS, etc. Now my network survives power outages for 5-6 hours from a single UPS and switch, powering half a dozen APs throughout the house. Bonus is the PoE IP cameras also stay working during an outage.
In my experience it always worked well, was fairly foolproof, and had fast enough speeds that it was never an issue.
2 years ago I moved into a condo that has 25 wifi APs within range of me right now. Now I get it.
Getting one of the high-end tri-band routers as you said did help, but it's still a difficult experience.
But just the fact that the higher end routers handle the extra congestion much better was more the point.
I bought this one: https://www.amazon.com/gp/product/B0167HG1V6
When I need raw speed (e.g, browsing/editing raw photos that are stored on a NAS) I still plug in. Even though the WiFi is rock solid, it's not as fast as wired.
Surely things haven't gotten this insane outside the iFruitbasket? If so I'm going to have to stock up on upgradeable refurbished laptops...
My kids' Chromebooks don't have an Ethernet port. Good little Acer machines. The school-issued CB I have not been allowed to examine yet.
Why is there a difference between the interference from LTE-U and the interference from paid wifi, like wifi offload? And how big is that difference?
You can't add more users to a frequency without the existing users losing something, regardless of how much the LTE-U people sugar coat it.
Edit: I've seen megahertz written more as MHz than MHZ.
I wrote a more in depth blog about this at https://r1ch.net/blog/wifi-beacon-pollution
802.11n (b/g compatible)
802.11n only (2GHz)
802.11n (a compatible)
802.11n only (5GHz)
I gave away my AirPort a while ago; it was also the first gen of its kind, so I think it only did 802.11a.
* The WPS attack doesn't work with every router that supports it, but allows for an easy way to compromise most modern routers. 
* RADIUS/EAP-TTLS is still rock solid. We all know WEP has already been broken and forgotten.
As for "I doubt anybody would do that" - it's not really a good security argument when we have the means easily available.
No need to reveal the inner workings of the standards committees to the public. Simple numbering would help.
Wired Ethernet used CSMA/CD, and it's one of the reasons it won the LAN networking wars of the '80s and '90s.
Starting with 10gig, full duplex is required so that was the first IEEE speed that did away with CSMA/CD.
I was dismayed at how terrible range was in my new, Silicon Valley-sized house (~1100sqft). Even with a Meraki AP at the front wall, it would work line of sight about 30 feet and start having issues as soon as I stepped behind a wall.
Having a couple APs has mostly solved my issues, but even so, it feels like overkill for such a tiny house. But the neighbors on either side are pretty close, and I see a lot of interference. Worked a case with Meraki support for a long time, and that seems to be the real problem.
I didn't realize I couldn't "shout over" my neighbors though, so I had signal strength set to max.
Back when I was troubleshooting, I tried everything. 5GHz only. 2.4GHz only. Tweaking channels manually. Tweaking everything manually. The funny thing was, nothing helped... but when I set things back to auto (except max power), it all got better. Every incremental change I made caused slightly worse performance, but not enough so to notice. Going back to auto fixed it all.
Hoping auto power helps as well.
> In real life, if you had your devices close enough to each other and to the access point, about the best you could reasonably expect [with 802.11b] was 1 Mbps—about 125 KB/sec.
I used 802.11b a lot. In a non-crowded situation, reaching ~5.5Mbit was not a problem at all. I remember seeing transfer speeds of about 700KB/s.
Why the author ignores the realistic top speed, which is around ~60% of 11Mbit, is beyond me.
Then the author continues with the same thing again:
> your best case scenario [with 802.11g] tended to be about a tenth of that—5 Mbps or so
This again is not true. In a non-crowded situation I had no issues reaching 2-3MB/s, which is closer to the theoretical limits of 802.11g after factoring in some signal loss.
Sure, today when everybody has wifi you would probably not reach 700KB/s on 802.11b or 3MB/s on 802.11g, but back when it began, it was actually feasible.
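Rough numbers, if anyone wants to check (assuming the ~60% MAC efficiency mentioned above):

    def usable_mb_per_s(phy_mbit, mac_efficiency=0.6):
        return phy_mbit * mac_efficiency / 8  # bits -> bytes

    for name, phy in [("802.11b", 11), ("802.11g", 54)]:
        print(f"{name}: ~{usable_mb_per_s(phy):.1f} MB/s usable")
    # 802.11b: ~0.8 MB/s (consistent with the ~700 KB/s I saw)
    # 802.11g: ~4.1 MB/s (2-3 MB/s observed, with some signal loss)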
The usual problem these days is too many overlapping networks. Different APs on the same channel will not only defer to each other when they can hear each other, but also because the AP tends to use a shorter contention window than clients, when they do transmit, they still collide with each other with moderately high probability. Worse, modern 802.11n and 802.11ac only get good performance by forming aggregates of many packets (up to 64KB in 802.11n, more in ac) to reduce the overhead of medium acquisition. Often they don't use RTS/CTS because this reduces performance in benchmarks. When such aggregates collide you lose the whole aggregate, not just one packet.
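A back-of-the-envelope model of that last point (the PHY rate, aggregate size, and overhead below are illustrative assumptions, not measurements): losing a whole aggregate on each collision eats into goodput fast.

    def goodput_mbit(phy_mbit, agg_bytes, p_collision, acquire_us=100):
        # transmit time for the aggregate plus medium-acquisition overhead
        airtime_us = agg_bytes * 8 / phy_mbit + acquire_us
        # a collision loses the entire aggregate, not just one packet
        return agg_bytes * 8 * (1 - p_collision) / airtime_us

    for p in (0.0, 0.1, 0.3):
        print(f"collision prob {p:.0%}: ~{goodput_mbit(300, 64_000, p):.0f} Mbit/s")
    # 0%: ~283, 10%: ~255, 30%: ~198 Mbit/s on a nominal 300 Mbit/s PHY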
WiFi used to be good in the 3G / WCDMA days, but I think the appearance of LTE, with constant innovation and advances from both carriers and phone makers, has made the LTE experience so much better. And it will only get better with LTE Advanced Pro and 5G.
In today's devices, interference is considered as noise, which means that SNR simply drops to a point where the higher MCS indices are not chosen at all. So, even though the device is capable of the advertised XX Gbps bitrate, the SNR isn't high enough to switch to those higher rates.
You're arguing that PHY data rates are low because SNR is low, and SNR is low because interference is high. In my experience, this is not the case. PHY rates can be quite high, but the overall throughput is low due to inefficient channel access at the MAC layer.
Wi-Fi doesn't consider interference as noise. Since it uses CSMA/CA to manage channel access, a device can only transmit if no other interfering device is transmitting, or if background interference is very low. This is why your device can be operating at a high MCS, but actual throughput will be so much lower. If your devices are using a low MCS, it's more likely that they're getting a weak signal from the access point.
It is true that MAC backoffs are also a contributing factor to the low throughput. The CCA (clear channel assessment) procedure detects both wifi preambles and non-WiFi interference (pure energy detection), which is why I say interference is modeled as noise. CCA does not, for example, have some intelligent coexistence algorithm for dealing with ZigBee or LTE-U or other ISM traffic.
Maybe with a Zaurus, Libretto, or other small pocket-sized Linux machine of that era, sure.
I tell my wife [a software dev like me]: "I love it when you talk nerdy"
There's nothing wrong with cracking a WiFi network's WEP password in front of the owner of the network in order to demonstrate WEP's weak security. You would have to actually connect to the network to commit unauthorized access.
I got the story wrong (just checked) - the wifi wasn't secured at all. :-O
There's also the small matter that a dipole is already close to the ERP limit set by the FCC, so most directional antenna setups are not legal.
At the time of adoption, many people were on dial-up or just moving to slightly faster internet speeds and they were accessing the internet via wifi, so they didn't notice a drop in performance. Wifi speed increased along with access to faster internet.
Is it as fast as it can possibly be? No, but it's like having a Ferrari in highway traffic. Most people can't take advantage of the technical capabilities of anything that would be considered better.
Most file transfer dialogs I've seen ("real world"?) display transfer rates in bytes. Advertisers use bits; that little marketing move can actually explain the speed drop to 1/8th of the "advertised" rate.
(a smart girl knows that it is a bad idea to make a nerd choose between her and his PC).
Wifi has a huge advantage, and that's portability. It totally makes sense in this fast-paced, moving, all-encompassing networked world, but wifi still lags behind ethernet in all other aspects:
- Health (I am not sure if the science is settled on whether or not wifi may be detrimental to one's health, but it does raise more concerns than ethernet)
But you're absolutely right that wifi does still bring an essential factor in the equation, the ability to move around.
Realistically, high-power USB 3 is what I should put as ports around my house, and just deliver Ethernet that way.
The charging time (and necessary backups for continuous use), weight, and low power provided all are cited as weaknesses of batteries.
The only advantages batteries have are increased portability and the ability to provide power without existing infrastructure. Which is why we see batteries primarily limited to highly portable devices and backup systems (because they can store power more stably than capacitors).
Wifi sucks for anything that doesn't need to move or doesn't have issues preventing cabling (e.g., it's infeasible to run a line through walls, but you can get signal). Batteries suck for anything that doesn't need to move a lot or doesn't need an independent source of limited power.
That's still valuable, but I was hoping for a more technical view on it. Anyone have an article that explains why the systems are as limited as they are?
What I'm looking for is e.g. an explanation of how MIMO works, or why explicit timeslots were considered useful for 802.11n but not 11b.
If you want a very simplistic explanation, it's like giving you multiple Ethernet cables to improve throughput. (To gloss over all the details about RF and boil it down to the practical benefit: more bandwidth)
Per the article, not everything with MIMO is advertised as 1x1, 2x2, etc. You frequently find WiFi routers mentioning 1T1R (1x1), or 2T2R (2x2). Or maybe just the Chinese routers I look at.
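For intuition on why more streams help, here's an idealized Shannon-capacity sketch (a textbook approximation with assumed numbers, not how any Wi-Fi chipset actually rates itself):

    import math

    def capacity_mbit(streams, bandwidth_mhz, snr_db):
        # idealized MIMO capacity: N independent channels of B*log2(1+SNR)
        snr = 10 ** (snr_db / 10)
        return streams * bandwidth_mhz * math.log2(1 + snr)

    for n in (1, 2, 4):
        print(f"{n}x{n}, 40 MHz @ 25 dB SNR: ~{capacity_mbit(n, 40, 25):.0f} Mbit/s")
    # capacity scales roughly linearly with spatial streams, all else equal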
This article is mostly about interference, which is definitely a big issue in populated areas. But there are definitely issues with Linux and the drivers used by AP manufacturers. There's a project, Make Wi-Fi Fast, which aims to address these issues.
They're making good progress, especially in environments with a lot of clients. Just having one 802.11b or g client can really ruin throughput for newer devices due to the timesharing algorithm used by default.
O'Reilly's "802.11n: A Survival Guide" is fairly OK.
If you read German, I highly recommend "Wireless LANs" by Jörg Rech.
I suggest reading Andrea Goldsmith's or Pramod Viswanath's book on wireless communication. If not, Intro to Communication Systems by Madhow is recommended.
Do you happen to know any useful English literature that covers the MAC layer of modern wifi standards (n, ac, ax)?
Apart from the 802.11 standards, of course.
I've got office space as well as a desk at a regular client. On both desks, there's a USB hub with an ethernet adapter hooked up. I plunk down my laptop, connect to the hub and I'm done.