A deep dive into why Wi-Fi kind of sucks (arstechnica.com)
534 points by nikbackm on March 4, 2017 | hide | past | favorite | 235 comments



I build public Wi-Fi networks, you've probably used one of them. AMA if you have Wi-Fi questions.

To say it "sucks" is a bit harsh. It's delivering multiple hundreds of Mbps to you via an unlicensed contention-based medium. The air interface is like sending packets over a noisy Ethernet hub; it's impressive it works as well as it does. That said, this article's a good primer on some of the protocol's fundamental challenges.

In the coming years we'll hear more about 802.11ax, which is thankfully focused on efficiency vs. raw numbers, but likely won't be ratified until 2019/2020.


Is there a good suite of tools to actually measure the quality of a wifi connection, and to surface or even help diagnose problems?



On macOS, someone below recommended "WiFi Signal", USD 5.

Not as fully featured, I assume, but Wireless Diagnostics is free. Ignore the popup, and go to Window -> Info/Scan/Performance (it's in System -> Library -> CoreServices -> Applications, but I start it via Spotlight)


> Not as fully featured, I assume, but Wireless Diagnostics is free. Ignore the popup, and go to Window -> Info/Scan/Performance (it's in System -> Library -> CoreServices -> Applications, but I start it via Spotlight)

Protip: it's also easily accessible by holding Option and clicking on the Wifi icon in the menu bar.


Fing for services and ping?


That's at least one OSI layer north of what was being asked.


It really depends on your Operating System.


Once you're connected, wifi is great. It's the long connection negotiation, and the slow reconnection process if it gets interrupted or you switch networks, that's infuriating. I get upwards of a minute of downtime when I switch from my router to my AP at home.


Virtually none of the handoff delay is actually due to L1 or L2. The PHY handoff itself can happen in less than 10ms, and with Pre-Auth that handoff is actually happening before you even leave one AP for another.


I've always wondered about this, but given how slowly WiFi connects, I assumed it was impossible. But with this new information....

You're saying that in a train, if I do the track once and connect to every open network (assuming no captive portal) and obtain DHCP offers, thereby obtaining IP subnet information and being able to guess an unoccupied IP address later, I can connect to the networks on my way back and get responses from a remote server to one or two packets?

Say there is about 20m of track where a given router is in range (train won't be running that close to the building and there is a wall in between) and we're doing 130km/h, that gives me about 20/(130/3.6)=0.56 seconds of range time. A server within the same country (the Netherlands) is usually <40ms RTT on any network, so PHY+SYN+data+responses = 10+40+40=90ms, easily in range of 560ms. A tricky part is knowing when I'm in range (I shouldn't wait for a beacon), but with GPS and triangulation of beacons it shouldn't be hard to figure out where each AP is and when I'm in range. Worst case I have to do the track a few times to get enough beacons to see where signal starts and stops.
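
A quick sanity check of that window in Python (same assumptions as above: 20 m of coverage, 130 km/h, 40 ms RTT, 10 ms PHY handoff):

    # Fly-by window vs. the latency budget, per the figures above.
    coverage_m = 20
    speed_ms = 130 / 3.6                  # 130 km/h in m/s, ~36.1
    window_ms = coverage_m / speed_ms * 1000
    budget_ms = 10 + 40 + 40              # PHY handoff + SYN + data round trips
    print(f"window: {window_ms:.0f} ms, needed: {budget_ms} ms")
    # window: 554 ms, needed: 90 ms -- plenty of margin, in theory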


It may simply be the DHCP that's the problem - the step it's hanging on is always "requesting IP address". I've tried fixed IPs and that hasn't helped. Part of the problem is that even though I bought a dedicated access point device instead of a generalist wifi router (and I suspect it's the same device with marginally different firmware), the configuration process to make this stuff work is completely opaque. I have no idea what I'm doing right or wrong, and every guide says something different - at this point I'm just at "give both networks the same SSID and encryption and password and hope it all works out".


Setting all APs in one network to the same ESSID is exactly what you should do, and for half-reasonable APs that works well.

Big name-brand vendors will sell you APs that support some proprietary coordination mechanism between them (usually with a central "wireless network controller" as a separate device), but that is not required for roaming to work. Such solutions (apart from centralized management) are mostly to keep your APs from interfering with each other (which is a non-issue unless you have more than three, and unless they are placed in unfortunate physical locations).


> ... is exactly what you should do ...

... as long as they're on the same layer 2 network.


Have you ever built "moving" wifi networks (trains, planes), and do those have special challenges?


Yes, and nothing special, as long as the clients and AP are moving consistently with each other.

If the client is moving >10MPH relative to a fixed AP (i.e. you're driving past the AP in your car), Wi-Fi doesn't handle that well. Your movement creates Doppler shift in your RF signal, which Wi-Fi can't adjust for (vs. e.g. LTE which does).


Doppler shift in a radio wave at 10 mph? Wouldn't the Doppler shift be around 36 Hz in that case [1]? That's about 0.015 PPM. Seems temperature would have a bigger impact on frequency stability than that.

[1] ∆f = f0 * (∆v/v0) = 2.4 GHz * (4.5 / 3e8) = 36 Hz (using 10 mph = 4.5 m/s)

Edit: I'm certainly not an expert in RF. I checked my intuition against the datasheet of a radio module I've been working with (RFM69HCW), and found in a footnote of an equation (on page 31):

> crystal tolerance is in the range of 50 to 100 ppm

Here's a link to that datasheet: https://cdn.sparkfun.com/datasheets/Wireless/General/RFM69HC...

It seems that the Doppler shift (measured in PPM) would be well within 0.1% of the crystal's tolerance.

Edit #2: Doppler shift is 0.015 PPM; not 0.150 PPM as I originally stated.


Keep in mind, modern WiFi uses OFDM, where the signal is transmitted via a number of simultaneous carriers offset by very small frequencies. So while the center of the 20 MHz band might be shifted just a little, the small carriers might be shifted far enough onto each other that the signal is unintelligible.


If the individual carriers aren't overlapping in the transmitter, they won't be overlapping in the receiver, either. Each one will be shifted proportionally (∆f/f0=∆v/v0) in the same direction. The signal is still intelligible. It's just shifted.

A radio is a physical device subject to manufacturing variance, temperature fluctuations, and aging effects. It has to be manufactured to tolerate frequency deviations caused by these effects. What I was trying to show in my analysis is that the effect due to Doppler shift at 10 mph is orders of magnitude less than at least one of these effects. If the Doppler shift at 10 mph makes the signal unintelligible, then the other effects I mentioned should as well.

For example, a transmitter with 50 PPM accuracy could cause a 2.4 GHz signal to drift by up to 120 KHz (0.00012 GHz). That's about 3000 times more than the 36 Hz that I calculated for Doppler shift at 10 mph.
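
For the record, the two numbers being compared, as a minimal sketch (same assumptions as above):

    # Doppler shift at ~10 mph vs. allowed 50 PPM crystal drift at 2.4 GHz.
    f0 = 2.4e9            # carrier frequency, Hz
    v = 4.5               # 10 mph, in m/s (rounded)
    c = 3e8               # speed of light, m/s
    doppler_hz = f0 * v / c        # ~36 Hz, i.e. ~0.015 PPM
    drift_hz = f0 * 50e-6          # 50 PPM tolerance -> 120 kHz
    print(doppler_hz, drift_hz, drift_hz / doppler_hz)
    # 36.0 120000.0 ~3333x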


> If the client is moving >10MPH relative to a fixed AP (i.e. you're driving past the AP in your car), Wi-Fi doesn't handle that well. Your movement creates Doppler shift in your RF signal, which Wi-Fi can't adjust for (vs. e.g. LTE which does).

Perhaps current-generation WiFi has more of an issue with that, but I've helped build systems that used a much older generation of WiFi in an ad-hoc (no AP) mode between nodes with a speed differential of Mach 1 (between a rocket and the ground). It worked fine, and lost almost no packets over the course of the flight.


Goodness gracious. What was the range of your network? Trying that on a consumer AP would instantly lose the signal, I'd imagine


Miles, easily. But we had three major advantages:

1) We had a shoulder-mounted directional antenna pointed at the rocket.

2) The rocket had a cylindrical patch antenna wrapped around it.

3) Wifi channel 1 is in an amateur radio band, which means we can amplify it beyond the normal limits. We ran it at 1W.


Eh, I run 1W EIRP on consumer equipment and it's legal[1]. Assuming you are in US? Or not?

[1]: https://www.air802.com/fcc-rules-and-regulations.html


US, but TTBOMK it wouldn't have been with the combination of amp and antenna. At the time, we'd looked it up and the legal limit without an amateur license was somewhere in the hundreds-of-milliwatts range.


Normal Wifi is extremely low power by regulation -- the medium itself is capable of much more, and I assume this project used different antennas than you find in your router.


I worked at a place that did roaming from one AP to another on a train in Norway in the late 90s / early 2000s; I think we were the first in the world to do it.

To avoid a huge network drop, we used a VPN connection on the client computer so it kept the same IP while jumping from one wifi AP over to the other.


Can you not sue modem providers for false advertising? America is a litigious country; I wonder why this hasn't happened?

I filed an FCC complaint about Frontier delivering a tenth of their advertised speeds. They gave me an upgraded connection that delivered the full 30Mbps within a week.

The only way Corporations are going to learn how to back off is when they have to pay for their mistakes.


Did you reply to the wrong comment? Or maybe the phone app I'm using misordered...


I think it's the wrong comment, or maybe our wifi connection misordered content.. /s


I wonder if there are also multi-path effects that become relevant with motion.


This was a problem for mobile data ~2008 on Deutsche Telekom networks.


Seriously. Wifi is a miracle product.

Even if you forget the ubiquity of it... it saves ridiculous amounts of money. 15 years ago, we'd spend like $50k to wire up new office space for 100-150 people. Now... maybe $5k with cable pull.


Yeah, it's ubiquitous but I'm not so hot on the reliability. It may save a lot of money to not wire up an office, but I wonder what the production losses are due to less reliability.


Wow, that article taught me that our office is probably using the wrong setup for optimal VoIP performance. When I started, I bought us an Asus RT-N66U to replace the ancient 802.11g router that was powering the 5 computers, 4 smartphones, 3 ethernet VoIP phones and 1 printer. Fast forward two years and now we have 12 computers on wifi using soft phones, 3 engineering rigs on ethernet, 11 smart phones, and that lonely printer.

I'm now thinking that our occasional VoIP issues probably aren't Time Warner (er, "Spectrum," whatever)'s fault, but might be caused by overwhelming the physical capabilities of our router.

So what sort of hardware should I plead with our CEO to let me purchase? Is there any way I can test, calculate, or prove that upgrading the infrastructure will prevent the operational staff from saying "Hello..? Hello..? Can you hear me?" on a phone call?

It sounds like, from hopping over to the linked CNet article, that maybe putting another router with the same network name and password, hardwired into the opposite corner from the existing router, would let us split the load across the office. But the end of Anand's article says I should throw up my hands and quit because the back room where a new router would go can still hear the original Asus transmitting loud and clear, so client devices would still have delays caused by picking up other networks.


The solution is: don't do VoIP on WiFi. You mentioned the 3 ethernet VoIP phones; do those have issues? Also, unless you're running third-party firmware on your Asus router, you're going to start dropping VoIP packets whenever anything maxes out your connection. The packet loss actually isn't really the problem; the problem is that when you max out your connection, whatever cable modem you're using starts filling up a massively oversized buffer instead of dropping packets, and any new packets go to the back of that buffer and take hundreds of milliseconds before they even leave your building.

The fix if you're stuck dealing with crappy equipment that has massively oversized buffers is to throttle the connection in another device downstream of the modem to slightly under what the modem will throttle you to and then in addition to that you can also implement QoS so that your VoIP traffic is put into a different queue and give that queue a higher priority.

This is all predicated on the assumption that your ISP is actually providing what they say they are 100% of the time. If you pay for 5 Mbps upstream, set the QoS to 4.5 Mbps and your ISP delivers 4 Mbps 10% of the time then for that 10% of the time your equipment will be sending too fast and you'll go back to filling up that oversized buffer just like before.
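
To put rough numbers on the oversized-buffer problem (the buffer size here is an assumption for illustration, not a measurement):

    # Queueing delay behind a full upstream buffer: delay = buffer / rate.
    buffer_kbytes = 256        # hypothetical modem buffer
    uplink_mbps = 5            # advertised upstream rate
    delay_ms = buffer_kbytes * 8 / (uplink_mbps * 1000) * 1000
    print(f"~{delay_ms:.0f} ms before a new VoIP packet even leaves the modem")
    # ~410 ms -- far beyond the ~150 ms one-way delay where callers notice lag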


You can put the twin APs on different channels, and the two collision domains shouldn't overlap.


So he's specifically referring to collisions on the same channel, not that a device connected to Router A on 2.4 GHz channels 1-4 (while also able to see Router B on channels 10-13) has to wait to transmit data back to Router A.


Yes, collisions occur on same-channel transmissions.

You'll probably have to set up two different ESSIDs. Most devices I've used will keep using the same AP even when its signal is lousy and they have a great signal to the other AP.


What security measures do you use to prevent someone from knocking a connected user off and redirecting them so they reconnect to an evil twin AP?


Does roaming devices between APs actually work? I was reading through Ubiquiti forums and it looks like they used to support it, but backed out of it.

My specific usecase is moving around the office while being on google hangouts on my phone.


Not the OP, but yes it works great. The APs themselves do not need to support it, the client does. But you can also do zero handoff (quicker, like for voip heavy environments) and that needs AP support. Read https://help.ubnt.com/hc/en-us/articles/205144590-UniFi-What...


It's worth noting that newer Ubiquiti APs no longer support Zero Handoff. Part of the problem was that all the APs needed to be on the same channel, which caused co-channel interference under heavy usage; there was a significant performance impact. (Also, I believe the chipsets in the newer UniFi models just don't support it.)

Without it, clients can still roam between APs, but you'll notice ~1 second of packet drops each time your client roams to a new AP. It won't drop a VoIP call, but you'll notice a moment of silence when it happens.

The future is 802.11r/k/v, which will allow clients to roam themselves quickly between participating BSSIDs without needing to renegotiate the connection each time (as opposed to relying on pure AP-based hacks).


It really depends on the phone and the AP.

https://support.apple.com/en-us/HT202628


I thought you could use aggressive approaches by letting the APs just kick the device off so it's forced to join a better one?


In theory that works, but a lot of phones get angry when the AP stops talking to them. They just assume that you lost WiFi, disconnect, and don't try to rejoin right away.


Thanks, that link is very informative.


Yes it absolutely does, but it is dependent on the station (the client) doing so. One of my gripes about Wi-Fi is that it delegates too much to the station - your experience is thus dictated by whoever wrote your phone/computer's Wi-Fi chipset driver, and how they interpret the standard.

Any Wi-Fi network supports roaming on the network side, you just need all of your access points to use the same SSID, and to dump users onto the same subnet consistently (otherwise you'll need to re-IP after roaming).


So on that note, do you have any brands/manufacturers/chipsets that you see as particularly good in terms of being a client?

It's always a gamble when I get a new phone/laptop/device whether it will work with the multi-AP setup I have in my house, and I'm really tired of getting something and bringing it home only to find it hangs on to the furthest AP for dear life.


> So on that note, do you have any brands/manufacturers/chipsets that you see as particularly good in terms of being a client?

As a client, the Intel wifi hardware works quite well, as does the software stack on top of it (at least on Linux). It helps that they're the ones who've written a fair bit of the wifi stack itself.


Ubiquiti UniFi AC or UniFi AC-LR. (I'm not the poster above that you were asking.) They never need rebooting, and present a single AP name for multiple devices and both 2.4 and 5.x GHz. Reasonable price for home use. Based on personal experience in a large house. grk, roaming works just fine.


Klathmon was asking about clients, not APs.


Should I set channels on the APs in some specific way?

We're using mostly apple hardware + ubiquiti APs, and to be honest I expected everything to "just work" with the defaults...


I've heard that Apple hardware in particular sticks with an AP longer than other devices: the Apple device is not continually looking for the best connection; it sticks with what it started with until the signal gets very weak. Don't take this literally, but it's like Apple waits until you're down to one bar, while some others wait until you're down to two bars, before looking for a stronger connection from another AP.


You should make sure the APs don't interfere with each other. The automatic channel selection didn't work well for me; I had to manually set my two APs at opposite ends of the 2.4GHz spectrum. It's also a good idea to turn on wide-band channels. If you have an Android device somewhere, you can download a wifi analyser to make sure the channels don't overlap.


They don't need to be on different channels, but it's recommended for performance (due to co-channel interference).

Think of it this way: Each channel provides a fixed amount of bandwidth, and neighboring APs need to overlap slightly to provide seamless coverage. As a first order approximation, if two APs are on the same channel you'll have twice the range, but half the bandwidth because the spectrum would be shared between them.

(It's actually slightly worse than this, because collisions will happen causing additional overhead.)


I have a Ubiquiti AP, and as part of the setup it did a scan and picked my channels based on what it found, and it seemed to do a decent job. You can go into the controller software to initiate a scan, though it will kick everything off the network for a few minutes.


If you option-click on the wifi symbol in the macOS menu bar, there's an option to access various diagnostic tools. One of them gives you channel recommendations as well.


Channel selection won't affect roaming, so you shouldn't need to. What specific problem are you having?


I believe it is the responsibility of the client to correctly roam between similarly named APs. I vaguely recall reading into this when I read that my unifi AP didn't support roaming. I could be mistaken though!


It works, but on every border there will be a zone where you can join the AP on either side. You can still be on a weak AP, but the device won't consider it weak enough and won't switch just yet.

In the end, if you have a problem with WiFi (i.e. anything other than the default of transparent walls and no interference), you will have a problem with it forever.


Which router do you recommend?

Is the 5G band consistently worse than the default (i.e. constant connection drops) or is that just my experience?


An unfortunate side effect of being in the industry is that I constantly have a surplus of enterprise-grade hardware at home. It's been a while since I looked at consumer hardware. Apple's Airports were solid, but have been discontinued. I've used both of the OnHubs and both were performant / stable. I've heard generally good things about Google's new Wi-Fi pucks, and Netgear's nighthawk series.

5G is the better of the two bands - it gives you many more channels and there's generally less interference. That sounds like something particular to your environment.

If you have a Mac, I recommend installing Wi-Fi signal (https://itunes.apple.com/us/app/wifi-signal/id525912054?mt=1...) - it gives you much greater insight into what's happening on the air. I'd start by installing something like that, and monitoring for correlation with your drops - what else happens when your connection drops? Does the SNR drop? Does the AP change channels? etc.


"An unfortunate side effect of being in the industry is that I constantly have a surplus of enterprise-grade hardware at home."

Great. Which do you like best?


You can also do channel bonding on 5G and use 40MHz channels. Almost double the capacity!


My phone doesn't do 5GHz, but other than that, 5GHz is significantly better until you get about 3-5 walls between you and the router.

In general, 5GHz is way better for apartments (less channel congestion, apartments are usually smaller than houses), and 2.4GHz is better for non-urban houses (better wall penetration).

For urban houses (I can see 15+ 2.4GHz networks from my living room; in a larger city it could be worse), the best bet is to have 2 or more APs and connect them with ethernet if possible, and powerline otherwise. My cable internet comes in the house in the corner of the house, and is under a different roof a-frame than the rest of the house (a previous owner built living space above the garage), so I can't run ethernet through the attic, and I use powerline adapters; there's a separate subpanel for this area too, but I still get minimum 30Mbps over the powerline, which is good enough for web/e-mail.

My next step is to see if any outlets under the main roof get good signal over the powerline, and then run ethernet throughout the attic; then I can have several routers throughout the house.

I use TP-Link APs running OpenWRT; they are cheap and OpenWRT lets you adjust the power down. If you aren't cost-constrained, I've heard almost nothing bad about Ubiquiti.


Do you use your phone lines? If not, check if they are Cat 5. If so, use that instead of the power lines.


Only have an anecdote, but when I replaced my latest (last?) generation Apple AirPort Extreme with a trio of Eeros for a ~1700 sq foot two-floor house I immediately wished I'd done it sooner.

Expensive for sure, but setup was a breeze, reliability has been great, and I get 200Mbit+ a floor away from my ISP router, where I used to see 20 on a good day.

I was skeptical of how well a mesh could work, but the performance has sold me.


Physics is the issue here. The 5GHz band is going to be better if you are in the same room as the AP. However, it does not penetrate walls as well, so depending on your set up (and the construction of your walls), that might explain your experience.


In two houses on two routers I've had horrendous experience with 5GHz.


Not OP, but we run retail stores that do cell phone and computer repair, so we needed reliable routers that could routinely support at least 10-15 clients grabbing data at any given time. We've been using the Linksys WRT1900AC (now ACS) and they've been great.

I have one here at home as well. I'm on a Gigabit connection provided by AT&T at home, and the router sits two rooms away with 2 walls between me and it. Also, my daughter is watching YouTube videos using it right now. With all that, I just ran a speed test using fast.com on my iPhone 6 and it showed 150Mbps. Really solid router.


For the price, I think the best setup is getting whatever Ethernet-only router has your needed feature set (or just using a box with Linux/Shorewall), and then for wifi, running Cat5 with PoE to some MikroTik wAP access points.


Ubiquiti does "enterprise" APs that are well within a domestic budget. I have a couple of UniFi AC-Lites and an Edgerouter X and no complaints.


Check out these two products:

* MikroTik hAP AC
* Grandstream GWN7610


Do you have suggestions for organizing semi-dense apartment complexes such that there aren't more SSIDs than units, and are there legal ways to safely share wifi with neighbors? It seems to me that most TOS agreements from ISPs don't like the idea that I might share my wifi with my neighbors... Also, comments on 20 MHz vs 40 MHz routers? For high-latency, highly attenuated signal situations where throughput varies from 0.1 Mbps to 3 Mbps, should one mess with MTU on OSX?

Last unrelated tangent: how far are we from seeing widespread municipal wifi deployments? Is it possible that school districts might expect to see 100% home internet penetration in a non-trivial percentage of American school district areas?


What hardware, and what kind of settings do you need to seek to have a working roaming network?

I've a Mikrotik router at home (very educational), and I've very often heard that UniFi APs can do it properly. Can I do it with Mikrotik APs? Will setting it up be hell?


In general see my response to grk, you can roam with any network that meets these criteria:

1. All APs use the same SSID

2. All APs bridge users onto the same subnet

But the experience will vary from client to client in terms of how aggressively they roam. Many AP vendors offer knobs to "help" clients roam by kicking clients off at a certain signal threshold, supposedly triggering a roam: in practice, this just upsets many clients.

Re #1: "SSID" is actually shorthand for "ESSID" - Extended service set identifier. Each AP has its own "BSSID" - Basic service set identifier, which is the AP's Wi-Fi MAC address. The whole concept of an "ESSID" is to signal to clients that a certain group of APs belong to the same system.

Re #2: If you need to scale beyond a single subnet, you generally solve for this by terminating users on a centralized packet core / wireless controller, which shards users out onto multiple subnets.
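
A toy sanity check of those two criteria (the AP data is made up for illustration):

    # Roaming prerequisites: same ESSID everywhere, same subnet everywhere.
    import ipaddress
    aps = [
        {"bssid": "aa:bb:cc:00:00:01", "essid": "office", "subnet": "10.0.1.0/24"},
        {"bssid": "aa:bb:cc:00:00:02", "essid": "office", "subnet": "10.0.1.0/24"},
    ]
    assert len({ap["essid"] for ap in aps}) == 1, "clients won't treat these as one network"
    assert len({ipaddress.ip_network(ap["subnet"]) for ap in aps}) == 1, "clients must re-IP after roaming"
    print("roaming should work; how smoothly is up to each client")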


I switched from a single ESSID to multiple ESSIDs when I found most laptops would not switch BSSID until they completely lost connectivity. It was a shame, because both Android phones and the one iPhone I tried worked great under the single-ESSID setup.


I've got a weird problem at my home where WiFi connection on my laptop gets really bad as soon as some particular other laptop connects to the WiFi. Reconnecting fixes the issue. Any idea what might cause this or how to gather more information on potential causes?


I had the same issue with a $300 Netgear ProSafe AP. When certain 802.11ac clients would connect (mostly cell phones), it would randomly get periods of complete gridlock in the WLAN. All clients would have good signal strength, but they had no data throughput. Disconnecting and reconnecting the offending client would restore the WLAN throughput.

The best I could see from packet sniffing was that these certain mobile clients would go into some kind of sleep mode, and when/if packets were transmitted to them, they would (sometimes) not respond with the proper ACK; the AP would then hang the whole WLAN waiting for the ACK (I guess). I tried several different Netgear ProSafe access points (all of them 802.11n). The only thing that worked was getting rid of my Netgear hardware and replacing it with a Mikrotik wAP ac.


Try to switch your router to 802.11n-only mode (no compatibility with b and g).


802.11ax won't mean anything unless you reduce interference (see my reply below). Without high SNR, all these new efficiency techniques won't get you anything. Shannon capacity is still what limits performance.


Any idea why we do not have public, encrypted wifi? Seems like it should be easy / trivial to fix, and protects people from passive listeners.


Not OP, but there are some ideas like EAP-UNAUTH-TLS but it's not well (at all?) supported. Some details: https://lists.eff.org/pipermail/tech/2012-December/000333.ht...


Not sure what you mean by this. Https exists, no?


HTTP is not the only traffic on a network. DNS is rarely encrypted, and many email providers use STARTTLS, which starts unencrypted.


Sadly, https is pervasive but not universal.


At my townhouse there are times when the 2.4 GHz network dies completely on the main floor. Usually it's evenings; sometimes it feels periodic. Latency skyrockets even when I am 1 meter from the router.

What are the practical ways to find interference? The tools I have is openwrt router and a macbook.

I normally use 5GHz anyway, but my wireless music device is still 2.4GHz, which is incredibly annoying once it dies.


We had a specific lounge chair, which when you sat in it with a laptop, if someone was using the microwave, the wi-fi would drop out. If you stood up, everything was fine. We took that lounge suite to the holiday house. Good riddance :)


It's almost certainly a microwave or cordless phone. I still forget sometimes and knock out my own wireless with a microwave.


It's definitely not mine. The effect is crazy tho. I wanna at least find a way to find the source/direction of it.


apt-get install wavemon

I love that tool. Tells me signal strength and quality (i.e. how much noise there is). Actually, opening it now, I see there has been an update and it has gained two new indicators. Used it multiple times to find APs. Also works on laptops (though a smartphone is much more handy to walk around with).


How does Wi-Fi compare to other alternatives for gaming? Which would you suggest to minimize lag introduced on the last mile?


On the topic of building a multi-AP wifi network with the same SSID: do I choose the same channel too, or not?

I have a wifi network in a building built with brick and mortar, so I need more APs to cover the entire premises; there are going to be some overlaps in the coverage here and there...


No, set every AP to use the quietest channel. Using something like Wifi Analyzer on a phone positioned where the AP will be but with the AP off, see what's got the least going on... any loud APs on the same channel whether they're yours or not would be detrimental. Also completely avoid overlap. On 2.4 GHz that means 1, 6, 11.

Also, if there will be so many APs that they will share channels, you want the APs physically isolated from each other even if that means a little more isolation between AP and client than you'd otherwise have. For example, in a hotel or dorm put an AP in every third room instead of all in the hallway, so they don't hear each other.


Some APs let you lower transmit power, which will help in isolating them.


> I build public Wi-Fi networks, you've probably used one of them. AMA if you have Wi-Fi questions.

Interesting. I helped start Wayport (now AT&T WiFi Services) and subsequently worked at Vivato for a while.

Which public WiFi networks are you responsible for?


How do you feel about the way WiFi is marketed to the public?

I think that's the main thing OP is saying "sucks".


and then someone turns on a leaky microwave oven and you get nothing


You should put contact info in your profile.


Why does authentication take so damn long? Why can't it happen in the blink of an eye?


> you've probably used one of them

Could you give a few examples? ;)


There is a German tech saying that goes "Wer Funk kennt, nimmt Kabel" ("Those who know wireless, use wires").


Funk means wireless??


Funk means RF/radio/wireless. The term originates from the first radio transmitter ever, which used spark gaps (spark = Funke, gap = Strecke/Spalte) to generate detectable RF.

So German Funk is actually directly related to the English funk (as in music) etymologically.


Thanks for the detailed explanation.


In this context, yes. More literally, Funk means radio, and the word for wireless would be "drahtlos".


Means "radio".


This article should be read by everyone working in open space offices across the world expecting to get decent speed and reliability out of Wi-Fi with more than a dozen people in a single room.

An open space office (or any office) for a company that depends on the Internet for any of its work without Gigabit Ethernet cables sticking out of every workstation is pretty damn foolish.


5 GHz has plenty of non-overlapping channels (24?), so in this situation you can use multiple APs in different room corners set to low power.


Only if the other floors in your building aren't doing the same thing. Which is exactly why this approach is no longer viable with 2.4GHz. ;)


I once had the opportunity to listen to an extremely talented WiFi expert from Aruba Networks explain on a whiteboard to a rapt audience of infrastructure engineers how this works. And how adding multiple SSIDs is a contributing factor to this problem.

The most fascinating piece was what he called the "butter factor": the closer a substance is in consistency to butter, the more it will absorb WiFi signals. Aruba had one heck of a challenge installing a WiFi network in the Land O'Lakes manufacturing facilities. They had to use directional antennae mounted at eye level down each aisle of the factory.


> They had to use directional antennae mounted at eye level down each aisle of the factory.

Did they ever mention the permanent, cumulative damage concentrated 2.4ghz RF does to the human eye?


Care to cite a credible source for this claim?


I have a counterclaim: near-face high-powered WiFi router, 20/10 vision.

Non-ionising radiation is safe to fairly high levels.


Agreed. The primary concern is tissue heating that cannot be sufficiently dissipated. This is nearly impossible at "wifi" power levels (below 1 watt) without carefully contrived situations -- such as placing your eyeball (which is the least-equipped organ to dissipate heat) at the exact focal point of a specially designed parabolic dish.

I have been safely working around thousands of watts of RF power for 20+ years and have maintained my 20/15 vision.


My day-to-day happiness quotient shot up the day I ditched wifi and ran Ethernet cables through my spaces. I'd realized I was constantly noticing wifi issues, and an Apple MBP I use for email seemed to drop wifi every half hour. But no longer, with them all wired up! Realizing there are USB3-to-Ethernet gadgets really hits home how our modern technology is market-driven-dumb and anti-consumer: all modern laptops don't even have Ethernet ports anymore!


I think a lot of this is that the 2004 model of plopping an integrated router/access point/modem in a single location in the house simply doesn't work any longer, at least for any sort of urban deployment. Many large companies are also realizing this and trying to bring products to market.

You can get very solid wireless in your house if you are willing to spend some money on it, and of course the time on the infrastructure. This means a network drop to each room, and very dense access points.

If you try to play games and avoid the "running wires to a room" part - you're just going to have a bad time. I don't see the "mesh wireless" stuff working long-term, they are simple stop-gaps. Once in the room though you can get away with wireless.

I recently just "made it rain" access points (Ubiquiti in this case), more or less, so that anywhere in the house you have, or are very close to having, line of sight to an AP. Since that transition, wireless is as good as wired for me.

It's a bit more expensive, but also makes my quality of life far better. Everything just works. I also took the opportunity to get a quality PoE switch for the telco closet, a solid UPS, etc. Now my network survives power outages for 5-6 hours from a single UPS and switch, powering half a dozen APs throughout the house. Bonus is the PoE IP cameras also stay working during an outage.


I splurged on one of the expensive tri-band routers and honestly now the wireless is good enough that I don't care.


I used to live about 1/4 mile away from any other houses, and I never got why people always complained about wifi.

In my experience it always worked well, was fairly foolproof, and had fast enough speeds that it was never an issue.

2 years ago I moved into a condo that has 25 wifi APs within range of me right now. Now I get it.

Getting one of the high-end tri-band routers as you said did help, but it's still a difficult experience.


I never heard of a tri-band router before. It's a misnomer: there is no third band; it just means it can handle 2.4GHz but will use two channels of 5GHz at once. There have been other routers in the past that used more than one channel, but it sounds very selfish to use in a dense apartment/condo living situation.


Well, luckily there is a massive chunk of 5GHz channels that nobody is using right now, so I'm okay, and the routers that use the two 5GHz channels actually will step down to only one if they see someone else stepping on either of them.

But just the fact that the higher end routers handle the extra congestion much better was more the point.


It's worse than that. There is a third band, 802.11ad running at 60GHz. But as you say "tri-band" actually means tri-channel.


Good point, they should be called simultaneous tri channel.


'Power Line' or similar adapters can work quite well in apartments. You can use it for a wired connection where practical, and otherwise have a couple 5GHz routers for a good wireless access.


Hm, I'm not quite that isolated, but I do live in a single-family home, so that might explain it.


If they haven't done so already, maybe we'll see copper-clad drywall or flooring for such environments? Guess that would take down cellular service too, though.


I'm just waiting for a standard which smartly reduces transmit power so they all don't need to shout over each other.


Reducing transmit power reduces interference, but it also reduces capacity since the received power (and hence the SNR at the receiver) is reduced! The Shannon limit on channel capacity dictates exactly how much bit rate (C=B*log2(1+SNR)) is achievable over a channel. SNR is a fundamental ingredient of this equation.
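
Plugging illustrative numbers into that formula shows how directly SNR caps the rate:

    # C = B * log2(1 + SNR) for a 20 MHz channel at three example SNRs.
    from math import log2
    B = 20e6                          # 20 MHz channel
    for snr_db in (30, 20, 10):
        snr = 10 ** (snr_db / 10)
        print(f"{snr_db} dB -> {B * log2(1 + snr) / 1e6:.0f} Mbps ceiling")
    # 30 dB -> 199 Mbps, 20 dB -> 133 Mbps, 10 dB -> 69 Mbps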


Also, copper is pretty expensive. A mesh of cheaper metal should do the job; might even be possible to tune it to the right frequency so you don't block cellphones, but I don't know


My wifi is fine between one wireless device and a wired device, but if I'm say, using my laptop to beam a video off my NAS onto my TV, it all breaks down.


What is tri-band? Can you recommend one?


I had to google it, it sounds selfish when the trouble is caused by many neighbors having their own APs nearby.


Doesn't it add to the problem further though? (if yes, then yes, wifi kinda sucks)


Not really, as I believe any router that uses more than one channel at once must disable that feature if any other APs are trying to use any of those channels.


Can't speak for anyone else, but that was not my problem.


It uses two 5GHz channels and one 2.4GHz channel.

I bought this one: https://www.amazon.com/gp/product/B0167HG1V6


All of my WiFi problems were solved with a UniFi AP - dropouts, connections randomly slowing down, etc all completely disappeared once I got that AP up and running.

When I need raw speed (e.g, browsing/editing raw photos that are stored on a NAS) I still plug in. Even though the WiFi is rock solid, it's not as fast as wired.


> all modern laptops don't even have Ethernet ports anymore!

Surely things haven't gotten this insane outside the iFruitbasket? If so I'm going to have to stock up on upgradeable refurbished laptops...


Usually it's just a factor of the thinness of a laptop. Ethernet is not a small port, and laptops that aren't heavy-duty ultrabooks are regularly thinner than one, even at their thickest point.


You can get USB->Ethernet adapters.


Exactly. They're very cheap, anywhere between $5 and $15. Cheap enough to basically add one to every ethernet drop where you regularly sit.


For simple things, I actually love my HP Stream 11 laptop. Basically a ChromeBook sold with Windows. No Ethernet.

My kids' ChromeBooks don't have an Ethernet port. Good little Acer machines. School-issued CB, I have not been allowed to examine yet.


That's simply not the experience of the vast, vast majority of users. That the market doesn't bend over backwards so you won't have to spend $10 on a USB ethernet adapter is neither dumb nor anti-consumer, it's perfectly rational. Especially on recent generations of laptops that are literally thinner than an ethernet socket.


As somebody who deploys high-density wifi for a living, I can agree that WiFi sucks. 5GHz is already super crowded and it's only getting worse. The new LTE over 5GHz is going to kill 5GHz, I think, once it gets deployed. Some of the cameras in arenas run on 80mhz 5Ghz frequencies that hop around and can't be channel planned. They are the worst.


At least it doesn't penetrate walls, so if you are inside and have a 5GHz router you can be fairly sure of empty channels.


You mean LTE-U, right?

Why is there a difference between the interference from LTE-U and the interference from paid wifi, like wifi offload? And how big is that difference?


The biggest issue is that we would be the company providing both the WiFi offload and the paid WiFi, so we have control over most of the frequency and can channel plan and all that stuff. With LTE-U it's going to be a lot harder to channel plan with the cell phone carriers. A lot of the LTE-U installs, I bet, are going to be contracted out, so getting to the right people to channel plan is going to involve a lot of layers. And they may not even want to channel plan; since it's unlicensed, some people refuse to channel plan.


And they have a monetary interest in wifi not working. While any intentional act to cause interference with wifi users would be illegal, there are tons of things they can do or not do that will cause interference with wifi that might not be provable as intentional interference.

You can't add more users to a frequency without the existing users losing something, regardless of how much the LTE-U people sugar coat it.


There is a huge difference between MHz and mHz


Do you really think the person you are replying to doesn't know that already? I think you can take it as given that most people here know the difference between MHz and mHz and are perfectly capable of realising it is a typo.


I wouldn't be so sure. You can go pretty far without knowing what a MHz really means.

Edit: I've seen megahertz written more often as mhz than MHz.


If someone doesn't know what it stands for, that's also not a problem. You would have to have an extremely specific partial understanding to both notice the capitalization difference and not realize it's the same thing.


Lots of legacy tech is also one of the reasons why Wi-Fi sucks. If you do one thing today, disable 802.11b on your router. 802.11b beacons alone can completely jam a 2.4 GHz channel in dense deployments, exacerbated by those ISPs that broadcast their own SSIDs from your home router.

I wrote a more in depth blog about this at https://r1ch.net/blog/wifi-beacon-pollution
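
The beacon airtime math is what makes this so damaging; a rough sketch with assumed (but typical-ish) figures:

    # Legacy 802.11b beacons go out at the 1 Mbps mandatory rate.
    beacon_bytes = 300            # illustrative beacon frame size
    per_beacon_s = beacon_bytes * 8 / 1e6     # ~2.4 ms each at 1 Mbps
    beacons_per_sec = 10          # default ~102.4 ms interval, per SSID
    ssids = 40                    # dense block; ISP-injected SSIDs count too
    print(f"{per_beacon_s * beacons_per_sec * ssids:.0%} of airtime is beacons")
    # ~96% of every second, before any user data is sent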


Huh, I have an older Airport Extreme, it doesn't look like I can disable 802.11b. I can't disable the 2.4GHz band either. Airports were never known for their configurability.


On my iPad right now. My first-generation "Time Capsule" has four options I can set via the iOS Airport Utility.

802.11n (b/g compatible)
802.11n only (2GHz)
802.11n (a compatible)
802.11n only (5GHz)

I gave away my AirPort a while ago, it was also first-gen of its kind so I think it only did 802.11a


Sounds like "everything is amazing and nobody is happy" syndrome [1] :-)

1. https://www.youtube.com/watch?v=dgEvjW1Pq4I


It also sucks from the security point of view, even though the problems could in most cases be fixed with solutions known to the current state of cryptography:

https://github.com/d33tah/call-for-wpa3


* True, but I doubt anybody is going to sniff my wireless, force me to de-auth, and capture the 4-way handshake, which she saves to crack offline in the comfort of her home, using cloud computing or GPUs. [0]

* The WPS attack doesn't work with every router that supports it, but allows for an easy way to compromise most modern routers. [1]

* RADIUS/EAP-TTLS is still rock solid. We all know WEP has already been broken and forgotten.

[0]: https://wiki.installgentoo.com/index.php/Breaking_WPA2
[1]: https://docs.google.com/spreadsheets/d/1uJE5YYSP-wHUu5-smIMT...


She doesn't necessarily have to go offline, given that everyone has a phone with internet access nowadays. Also, with weak passwords, dictionary attacks, rainbow tables, etc., you might actually get compromised in minutes. And instead of de-auth, she might just as well wait.

As for "I doubt anybody would do that" - it's not really a good security argument when we have the means easily available.


Many people also don't understand how easy it is to forge a deauth frame and disconnect clients from APs. Hotels are known to use this to kick you off your own personal hotspot on your phone. Thankfully 802.11w PMF solves this, and the FCC has started imposing fines on hotels doing this.


If anyone is involved in the standards committees -- please try to make the naming more user-friendly. My non-techie friends are totally confused by A, AC, AX, B, G, N, WiMax.

No need to reveal the inner workings of the standards committees to the public. Simple numbering would help.


You mean like 3G/4G/LTE?


The author kinda bungles his analogy/explanation of collision avoidance and detection. Wireless networks don't use CSMA/CD. They use CSMA/CA. There's a huge difference, and it's one big reason why wireless throughput won't ever come close to PHY speed.

Wired ethernet uses CSMA/CD and it's one of the reasons it won the LAN networking wars of the 80's and 90's.
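
For the curious: the "CA" part exists because a radio can't hear a collision while it's transmitting, so stations avoid collisions with random backoff instead of detecting them. A simplified sketch of 802.11 DCF's binary exponential backoff (constants are common 802.11 values, but the model is stripped down):

    # Wait a random number of slots; double the contention window on each
    # failed attempt, up to a cap. This idle waiting is pure overhead,
    # which is one reason throughput never approaches the PHY rate.
    import random

    SLOT_US, CW_MIN, CW_MAX = 9, 15, 1023

    def backoff_us(retries):
        cw = min((CW_MIN + 1) * 2 ** retries - 1, CW_MAX)
        return random.randint(0, cw) * SLOT_US

    for retry in range(5):
        print(f"attempt {retry}: drew {backoff_us(retry)} us of backoff")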


Do today's wired networks (1G+/FD/Cat6/fiber) still use CSMA/CD? I was under the impression that they don't.


No, since wired networks are full duplex and are their own collision domains. No need to do carrier sensing at all.


Not entirely true. You can run half duplex gigabit connections on hubs. Not sure why you would, but you can. IEEE 802.3 covers all Ethernet standards and it still lists CSMA/CD. Largely for historical purposes, but it is in the spec.

Starting with 10gig, full duplex is required so that was the first IEEE speed that did away with CSMA/CD.


Switched networks are full duplex, point-to-point links, so there's no possibility of a collision from the transmitting station. That said, 802.3 still mentions CSMA/CD for half-duplex backward compatibility. Everything 10gig and above requires full duplex though.


adjusts Meraki APs to auto power, 40mhz channel width on 5ghz

I was dismayed at how terrible range was in my new, Silicon Valley-sized house (~1100sqft). Even with a Meraki AP at the front wall, it would work line of sight about 30 feet and start having issues as soon as I stepped behind a wall.

Having a couple APs has mostly solved my issues, but even so, it feels like overkill for such a tiny house. But the neighbors on either side are pretty close, and I see a lot of interference. Worked a case with Meraki support for a long time, and that seems to be the real problem.

I didn't realize I couldn't "shout over" my neighbors though, so I had signal strength set to max.

Back when I was troubleshooting, I tried everything. 5ghz only. 2.4ghz only. Tweaking channels manually. Tweaking everything manually. The funny thing was, nothing helped.. but when I set things back to auto (Except max power), it all got better. Every incremental change I made caused slightly worse performance, but not enough so to notice. Going back to auto fixed it all.

Hoping auto power helps as well.


What's going on in this article? Is it the author speaking about having bad hardware and configuration?

> In real life, if you had your devices close enough to each other and to the access point, about the best you could reasonably expect [with 802.11b] was 1 Mbps—about 125 KB/sec.

I used 802.11b a lot. In a non crowded situation reaching ~5.5mbit was not a problem at all. I remember seeing transfer speeds of about 700KB/s.

Why the author ignores the theoretical top speed which is something around ~60% of 11mbit is beyond me.

Then the author continues with the same thing again;

>your best case scenario [with 802.11g] tended to be about a tenth of that—5 Mbps or so

This again is not true. In a non crowded situation I had no issues reaching 2-3MB/s, which is closer to the theoretical limits of 802.11g after factoring in some signal loss.

Surely, today when everybody is having wifi you would probably not reach 700KB/s on 802.11b or 3MB/s on 802.11g, but back when it began it was actually feasible.
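
Rough numbers behind that, using the ~60% efficiency figure cited above (the conversion to bytes is just units):

    # PHY rate -> realistic goodput -> file-dialog bytes.
    for name, phy_mbps in (("802.11b", 11), ("802.11g", 54)):
        goodput = phy_mbps * 0.6          # ~60% of PHY rate, per the comment
        print(f"{name}: ~{goodput:.1f} Mbit/s = ~{goodput / 8 * 1000:.0f} KB/s")
    # 802.11b: ~6.6 Mbit/s = ~825 KB/s (the ~700 KB/s observed is in range)
    # 802.11g: ~32.4 Mbit/s = ~4050 KB/s (2-3 MB/s observed is in range)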


I suggest reading up on how WiFi works and some of its problems, like hidden nodes [1]. Sometimes I'm amazed WiFi works at all.

[1] https://en.wikipedia.org/wiki/Hidden_node_problem


The classical hidden terminal problem, where two clients of the same AP can't receive each other and so collide when they transmit, isn't such a big deal with WiFi. First, there are, by definition, no hidden terminals for the downlink (and most traffic is downstream), or the clients couldn't associate with the AP. Second, although two clients can't receive each other's transmissions, if they're associated with the same AP they can usually hear each other well enough for carrier sense to work.

The usual problem these days is too many overlapping networks. Different APs on the same channel will not only defer to each other when they can hear each other, but also because the AP tends to use a shorter contention window than clients, when they do transmit, they still collide with each other with moderately high probability. Worse, modern 802.11n and 802.11ac only get good performance by forming aggregates of many packets (up to 64KB in 802.11n, more in ac) to reduce the overhead of medium acquisition. Often they don't use RTS/CTS because this reduces performance in benchmarks. When such aggregates collide you lose the whole aggregate, not just one packet.


Thanks for the explanation. It's more evidence to my point that I'm always amazed it works at all :)


Some even more in depth slides about why wifi doesn't always perform: http://apenwarr.ca/diary/wifi-data-apenwarr-201602.pdf


Is there any reason why we can't use LTE as a standard for the Wi-Fi use case? So we'd have an in-house LTE router with a wired connection, and when you are out of range you are still on LTE with your carrier. This is not LTE-U in Rel 12 or LAA in Rel 13, which both require a functioning LTE connection as an anchor point.

WiFi used to be good in the 3G / WCDMA days, but I think the appearance of LTE, with constant innovation and advances from both carriers and phone makers, has made the LTE experience so much better. And it will only get better with LTE Advanced Pro and 5G.


What software tools do people use (OS X, Linux, Windows) to test out and debug wifi connections?


On GNU/Linux (either Android/Cyanogenmod or Debian) I use wavemon. It's just an apt-get away and tells me more than I can understand (which is fairly rare, especially compared to Android apps which are universally underwhelming).


The article misses out on explaining WHY we cannot get the promised bitrates. The answer is Shannon's limit on channel capacity, which mandates that you pay in either bandwidth, higher power, or lower noise (SNR) to get higher capacity. Now, these WiFi devices have internal rate-adaptation algorithms that choose a particular modulation and coding scheme (MCS) index based on the measured SNR. A higher MCS index means more bits per symbol (modulation), lower-overhead code rates, and more antennas (spatial streams), which is how you get the xx Gbps bandwidth advertised on the box. List of MCS indices: http://mcsindex.com/

In today's devices, interference is considered as noise, which means that SNR simply drops to a point where the higher MCS indices are not chosen at all. So, even though the device is capable of the advertised XX Gbps bitrate, the SNR isn't high enough to switch to those higher rates.
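
A toy version of what a rate-adaptation algorithm does with the measured SNR; the thresholds below are rough illustrations, not values from the standard:

    # Pick the highest MCS whose (illustrative) SNR requirement is met.
    # Interference treated as noise lowers SNR, silently capping the rate.
    MCS_TABLE = [                 # (index, min SNR dB, rate Mbps) -- assumed
        (0, 5, 6.5), (3, 14, 26.0), (5, 20, 52.0), (7, 27, 65.0),
    ]

    def pick_rate(snr_db):
        rate = 0.0
        for mcs, min_snr, mbps in MCS_TABLE:
            if snr_db >= min_snr:
                rate = mbps
        return rate

    print(pick_rate(30))   # quiet channel: 65.0 Mbps
    print(pick_rate(16))   # interference eats the SNR: 26.0 Mbps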


> interference is considered as noise, which means that SNR simply drops

You're arguing that PHY data rates are low because SNR is low, and SNR is low because interference is high. In my experience, this is not the case. PHY rates can be quite high, but the overall throughput is low due to inefficient channel access at the MAC layer.

Wi-Fi doesn't consider interference as noise. Since it uses CSMA/CA to manage channel access, a device can only transmit if no other interfering device is transmitting, or if background interference is very low. This is why your device can be operating at a high MCS, but actual throughput will be so much lower. If your devices are using a low MCS, it's more likely that they're getting a weak signal from the access point.


In my experience WiFi clients at 2.4GHz never end up using the high MCS indices whenever they transmit (whenever the channel is clear). This has something to do with the fact that the CCA thresholds are fairly high, and that high SNR (30-50dB) is required for activating the high MCS indices[1]. I don't think these SNRs are achievable in a typical setting.

It is true that MAC backoffs are also a contributing factor to the throughput being low. The CCA (clear channel assessment) procedure detects both wifi preambles and non-WiFi interference (pure energy detection), which is why I say interference is modeled as noise. CCA does not, for example, have some intelligent coexistence algorithm for dealing with Zigbee or LTE-U or other ISM traffic.

[1]: http://www.revolutionwifi.net/revolutionwifi/2014/09/wi-fi-s...


My loved one went for a junior sysadmin job. They'd decided to remove all the wiring and use wifi for everything because of just the sorta hype mentioned here. Loved one pulled out a Palm Tungsten C and proceeded to crack all their WEP passwords there in the interview. Got the job too ... and the task of putting quite a bit of wiring back.


The Tungsten doesn't support RFMON, and thus can't do packet injection, so I'm not sure how this was achieved. The most Palm OS could do was run NetStumbler, which does discovery by active beacon probing.

Maybe with a Zaurus, Libretto, or other small pocket-sized Linux machine of that era, sure.


I just asked and it was indeed a Tungsten C (the Tungsten C was a pretty l33t beast for a handheld in that era, I had one too and loved it a lot), but I got the story wrong - the wifi wasn't secured at all. Let me remind you again that they'd replaced their wired network with this unsecured wifi network. Their business was online marketing for companies even more clueless about technology than they were. (Replacing the unsecured wifi with wired was indeed the first job.)


That is soo cool!

I tell my wife [a software dev like me]: "I love it when you talk nerdy"


Lucky he/she didn't get arrested.


For what? Cracking passwords offline isn't illegal as long as you don't use the passwords to gain unauthorized access to a system and you don't share the password with anyone other than the rightful owners.

There's nothing wrong with cracking a WiFi network's WEP password in front of the owner of the network in order to demonstrate WEP's weak security. You would have to actually connect to the network to commit unauthorized access.


See other side comments - I got the story wrong, the wifi wasn't secured at all.


WEP passwords? What year is it again?


Palm Tungsten C dates the anecdote more than using WEP.


2005ish, when WEP was frequently all that wifi points did.

I got the story wrong (just checked) - the wifi wasn't secured at all. :-O


The article doesn't seem to speak about directivity. Sending a narrow, directed beam of information could reduce contention issues, and reduce power requirements. But you'd need a more advanced antenna (probably an array), more advanced signal processing, and smarter software.


Directivity is great for open-space p2p links, but indoors it's much less useful as walls scatter 2.4GHz with aplomb.

There's also the small matter that a dipole is already close to the ERP limit set by the FCC, so most directional antenna setups are not legal


This statement points to why it doesn't suck for most people: "In practice, it wasn't a whole lot better than dial-up Internet—in speed or reliability."

At the time of adoption, many people were on dial-up or just moving to slightly faster internet speeds and they were accessing the internet via wifi, so they didn't notice a drop in performance. Wifi speed increased along with access to faster internet.

Is it as fast as it can possibly be? No, but it's like having a Ferrari in highway traffic. Most people can't take advantage of the technical capabilities of anything that would be considered better.


My trading terminal I have directly wired in, as issues with wifi can become costly in the middle of a trade. That said, my leisure laptop and the rest I'm totally fine with on wifi. Then again, the leisure laptop doesn't really get heavy use such as gaming; some lectures on YouTube are probably the biggest test it gets. My phone (iPhone 6), despite being relatively new, has always been terrible with wifi, which I always found weird.


I think it's getting bits and bytes a little mixed up. You can expect about 1/10th the speed?

Most file transfer dialogs I've seen ("real world"?) display transfer rate in bytes. Advertisers use bits; that little marketing move alone explains a speed drop to 1/8th of the "advertised" rate.


I believe the article takes that into account and describes the speed loss you'd see over and above the "loss" from conversion. (It does use bytes somewhere in between, so you know they're aware of the difference.)


That's not what this is about. After converting the manufacturer's bit speeds to bytes or after converting every other tool on the planet's byte speeds to bits, you still get lower performance than the packaging claimed.
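
The arithmetic is simple enough to sanity-check yourself; a quick sketch (the 40% real-world efficiency figure is purely illustrative, not a measurement):

    # Advertised link rate vs. what a file-transfer dialog shows.
    advertised_mbit = 300               # e.g., a "300 Mbps" 802.11n router

    # Honest unit conversion: 8 bits per byte.
    ideal_mbyte = advertised_mbit / 8   # 37.5 MB/s

    # Protocol overhead, contention, and distance then cut real
    # throughput well below the PHY rate (illustrative 40% here).
    realistic_mbyte = ideal_mbyte * 0.40

    print(f"{advertised_mbit} Mbit/s = {ideal_mbyte:.1f} MB/s before overhead")
    print(f"realistically more like {realistic_mbyte:.1f} MB/s")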


He's testing performance with cheap laptops and blaming the adapter? Pushing data isn't free. If you have a bad CPU or memory architecture, or even bad drivers, you're not going to get rated network speeds. This has been true for wired as well since the beginning of Ethernet.


Wifi sucks because Ethernet is still better in all aspects but mobility to this day.


Wow, it's better "in all aspects but mobility"? You're kidding. You may as well complain that batteries suck because the only advantage they have over just plugging something in is portability.


I suppose the difference is that I don't expect my television to run on batteries, but I do expect it to work on wifi.


Why? TVs aren't very portable, carrying content over a wire makes much more sense.


It's easier for most consumers to just put their new TV where they want rather than knock a bunch of holes in the wall (or ask their landlord to do the work)


Right, and you accept a degraded experience for the portability. The speaker built into your phone isn't as good as stand-up speakers either.


It's even easier to put their new TV where the wire is.


I have a partner who hates wires. Wifi is a very normal thing for TVs, games consoles, Chromecasts, etc.


The question rather is: what would you rather give up, Ethernet or your partner? :-)

(A smart girl knows that it is a bad idea to make a nerd choose between her and his PC.)


Casual sexism is casual.


The phrasing was not optimal on my part. What I meant was:

Wifi has a huge advantage and that's portability. It totally makes sense in this fast-paced, moving, all-encompassing networked world, but wifi still lags behind ethernet in all other aspects:

- Security

- Stability

- Bandwidth

- Health (I am not sure if the science is settled on whether or not wifi may be detrimental to one's health, but it does raise more concerns than ethernet)

But you're absolutely right that wifi does still bring an essential factor in the equation, the ability to move around.


In a practical sense, I use wifi mostly about 2m from an AP, and usually on a laptop that's already plugged in.

Realistically, high-power USB 3 ports are what I should put around my house, delivering Ethernet that way.


That is a very interesting idea. I've seen power sockets with USB-A ports. When we move to USB-C, that same port could actually identify itself as a dock, offering both power and Ethernet.


I mean, isn't that true and why almost all of our electronics plug in?


It's true, but it's so obvious, and obviously the entire point of the technology, that I wonder why anyone would bother remarking on it.


One difference is you don't see people going around on forums saying "batteries suck because you have to charge them"


Sure you do: look at forums on portable tools with both battery and cord versions.

The charging time (and the backup batteries needed for continuous use), the weight, and the low power output are all cited as weaknesses of batteries.

The only advantages batteries have are increased portability and the ability to provide power without existing infrastructure. Which is why we see batteries primarily limited to highly portable devices and backup systems (because they can store power more stably than capacitors).

Wifi sucks for anything that doesn't need to move, unless something prevents cabling (e.g., it's infeasible to run a line through the walls, but you can still get a signal). Batteries suck for anything that doesn't need to move a lot or doesn't need an independent source of limited power.


A lot of people choose wireless even for stationary machines because you don't have to bother running the cable.


This article is all about unwrapping the marketing copy and explaining why it sucks.

That's still valuable, but I was hoping for a more technical view on it. Anyone have an article that explains why the systems are as limited as they are?


Read more (page 2) - there is a considerable discussion of channel congestion and interference due to overpowered zones and a large number of devices.


I did. It's still fairly superficial, and not the focus of the article.

What I'm looking for is e.g. an explanation of how MIMO works, or why explicit timeslots were considered useful for 802.11n but not 11b.


MIMO isn't that complex. It's using multiple antennas to take advantage of multipath propagation. [0]

If you want a very simplistic explanation, it's like giving you multiple Ethernet cables to improve throughput. (To gloss over all the details about RF and boil it down to the practical benefit: more bandwidth)

Per the article, not everything with MIMO is advertised as 1x1, 2x2, etc. You frequently find WiFi routers mentioning 1T1R (1x1), or 2T2R (2x2). Or maybe just the Chinese routers I look at.
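
To make the "multiple Ethernet cables" analogy concrete, here's a toy 2x2 sketch (my own illustration, not from the article): two symbol streams are sent at once on the same frequency, the channel mixes them, and the receiver separates them again by inverting the channel matrix (zero-forcing):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two independent QPSK symbol streams, transmitted simultaneously
    # on the same frequency from two antennas.
    tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(2, 1000))

    # 2x2 channel: each RX antenna hears a mix of both TX antennas.
    # Multipath makes the mixing coefficients differ, so H is invertible.
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    noise = 0.01 * (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape))
    rx = H @ tx + noise

    # Zero-forcing receiver: undo the mixing with the channel inverse
    # (real receivers estimate H from preambles and use smarter equalizers).
    est = np.linalg.inv(H) @ rx

    errors = np.sum(np.sign(est.real) != np.sign(tx.real))
    print(f"errors: {errors} / {tx.size}")  # ~0: both streams recovered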

This article is mostly about interference, which is definitely a big issue in populated areas. But there are definitely issues with Linux and the drivers used by AP manufacturers. There's a project, Make Wi-Fi Fast [1], which aims to address these issues.

They're making good progress, especially in environments with a lot of clients. Just having one 802.11b or g client can really ruin throughput for newer devices due to the timesharing algorithm used by default.

[0] https://en.m.wikipedia.org/wiki/MIMO

[1] https://www.bufferbloat.net/projects/make-wifi-fast/wiki/
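
On the "one slow client ruins it" point above, back-of-the-envelope math (illustrative rates, my own sketch) shows why per-packet fairness collapses aggregate throughput and airtime fairness rescues it:

    # Two clients on one AP: fast 802.11g at 54 Mbit/s, legacy 11b at 1 Mbit/s.
    fast, slow = 54.0, 1.0   # PHY rates in Mbit/s (illustrative)

    # Per-packet (round-robin) fairness: equal data per client, so each
    # megabit from the slow client costs 54x the airtime of the fast one.
    t = 1 / fast + 1 / slow            # seconds to move 1 Mbit per client
    print(f"per-packet fairness: {2 / t:.2f} Mbit/s aggregate")          # ~1.96

    # Airtime fairness: split the *time* in half instead.
    print(f"airtime fairness:    {0.5 * fast + 0.5 * slow:.2f} Mbit/s")  # 27.50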


I would recommend reading books instead of blogs if you really want to understand this stuff.

O'Reilly's "802.11n: A Survival Guide" is fairly OK.

If you read German, I highly recommend "Wireless LANs" by author Jörg Rech.


See my comment https://news.ycombinator.com/item?id=13793876.

I suggest reading Andrea Goldsmith's or Pramod Viswanath's book on Wireless Communication. If not, Intro to Communication Systems by Madhow is recommended.


These look nice but they focus on the physical layer.

Do you happen to know any useful English literature that covers the MAC layer of modern wifi standards (n, ac, ax)? Apart from the 802.11 standards, of course.



Thanks!


Is there a way I can just not obey the collision domain and transmit over someone else anyway? Would that allow me to skip the queue or would nobody get any service?


More likely, not obeying the rules would lead to collisions -> damaged packets -> retransmissions -> higher channel utilization -> bigger probability of further collisions; and hence lower speed for everyone.
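
A crude slotted-ALOHA-style simulation (my own toy model; real 802.11 carrier sense, ACKs, and retransmissions are deliberately left out) of one node transmitting every slot while five polite nodes offer traffic probabilistically:

    import random

    random.seed(1)
    SLOTS, N, P = 100_000, 5, 0.15   # slots, polite nodes, per-slot tx prob.

    def run(rogue: bool):
        polite_ok = rogue_ok = 0
        for _ in range(SLOTS):
            polite_tx = sum(random.random() < P for _ in range(N))
            total_tx = polite_tx + (1 if rogue else 0)
            if total_tx == 1:                 # any overlap is a collision
                if rogue and polite_tx == 0:
                    rogue_ok += 1
                else:
                    polite_ok += 1
        return polite_ok / SLOTS, rogue_ok / SLOTS

    print("without rogue: polite %.1f%%, rogue %.1f%%"
          % tuple(100 * x for x in run(False)))
    print("with rogue:    polite %.1f%%, rogue %.1f%%"
          % tuple(100 * x for x in run(True)))

In this naive model the cheater does "skip the queue", at the cost of starving everyone else completely; add back the omitted ACKs and retransmissions and the starved nodes keep re-offering their damaged frames, which is the spiral described above.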


Can we have "home" (home Wi-Fi) added to the title? I agree with others here - the author doesn't talk about how amazing it is that you can connect to a busy cafe WiFi in a busy street with five other networks coming in strong, and a dozen (or few dozen) people using the one you're on, and it still more or less works! (Wow.) Their only focus seems to be on fixed installations, specifically at home.


Fortunately my Mac doesn't have Ethernet.

Wait...


I think that if I were to buy a computer and noticed it was missing an ethernet port, I would return it as defective...


It's not a big deal. USB-based adapters usually don't need drivers and are $5 to $15, cheap enough to just put one on every desk you usually sit at.

I've got office space as well as a desk at a regular client. On both desks, there's a USB hub with an ethernet adapter hooked up. I plunk down my laptop, connect to the hub and I'm done.


Is it right in this instance to lay blame on the marketers?


Excellent article. Thanks.


If it's a mess then as a consumer they're doing a great job hiding it from me. I have 20 devices connected to my wifi and everything is running particularly smoothly. I did get frustrated with my 802.11n router four years ago and dropped $250 on the best ac wifi router on the market, and I've had no complaints since then. Sure, it could be more efficient from a technical standpoint, but that's not something I'm interested in, and I'm happy with how seamless things are for me right now.


Same here. I have a Roku sitting literally four inches from my router and I haven't hooked it up by wire because I've been too lazy to go get a patch cable (where "go get" means to my basement, not to the store.) I know wired would be faster but wireless has long been fast enough for me not to care.


I hate wires. But the only thing worse than them is wireless technologies.



