Main differences with 802.11ac:
- denser subcarrier (or "tone") spacing: they are spaced 78.125 kHz apart instead of 312.5 kHz, so for example an 80 MHz channel now allows 980 data subcarriers, up from 234. That's a 4.19× improvement (the 4× higher density doesn't translate into exactly a 4× improvement because 11ac dedicated proportionally more subcarriers to "pilot" or "null" roles not used for data.)
- new 1024QAM mode (encodes 10 bits per symbol, up from 8 bits with 256QAM). That's a 1.25× improvement.
- the downside is that the symbol duration had to be increased from 3.2 µs to 12.8 µs, and the guard interval from 0.4 µs to 0.8 µs, so each symbol (including guard) now takes 13.6 µs instead of 3.6 µs. That's a 3.78× reduction in symbol rate.
So the maximum data rate for a single stream on an 80 MHz channel increased by 4.19×1.25/3.78 = 1.39× between WiFi 5 and WiFi 6 (you can confirm it with the published max data rates: 802.11ac = 433 Mbit/s, 802.11ax = 600 Mbit/s; and 600/433 = 1.39).
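If you want to check that arithmetic yourself, here's a quick sketch (assuming the top MCS with 5/6 coding on both sides, and the symbol/guard durations above; results come out in Mbit/s because bits per µs = Mbit/s):

    # Max PHY rate for one spatial stream on an 80 MHz channel
    def max_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate,
                      symbol_us, guard_us):
        bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
        return bits_per_ofdm_symbol / (symbol_us + guard_us)

    wifi5 = max_rate_mbps(234, 8, 5/6, 3.2, 0.4)    # ~433.3 (802.11ac, 256QAM)
    wifi6 = max_rate_mbps(980, 10, 5/6, 12.8, 0.8)  # ~600.5 (802.11ax, 1024QAM)
    print(wifi5, wifi6, wifi6 / wifi5)              # ratio ~1.39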
But the best feature of WiFi 6 is that different subcarriers can be concurrently used by different users, which they dubbed OFDMA (https://en.wikipedia.org/wiki/Orthogonal_frequency-division_...). This means that during the same 12.8 µs timeslot, even on a small channel like 20 MHz, you can have 9 concurrent users, each assigned 26 subcarriers and each transmitting 26 different symbols (a total of 234 symbols transmitted concurrently). Whereas with WiFi 5, all subcarriers of the 20 MHz channel have to be used by the same user.
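For reference, these are the standard resource-unit (RU) sizes I'm aware of for a 20 MHz channel; the scheduler can also mix sizes, and pilot tones are glossed over in this toy breakdown:

    # How a 20 MHz 802.11ax channel can be split into OFDMA resource units
    ru_splits_20mhz = {26: 9, 52: 4, 106: 2, 242: 1}  # tones per RU: RU count

    for tones, users in ru_splits_20mhz.items():
        print(f"{users} users x {tones}-tone RUs = {users * tones} tones")
    # 9 x 26 = 234 tones is the "9 concurrent users" case above;
    # 1 x 242 is the whole channel given to a single user, which is
    # effectively what WiFi 5 always did.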
Like the parent comment said, the actual felt speed gains in a real-world situation are going to come from OFDMA allowing concurrent use at slower speeds.
Interestingly Intel gave up on making 3x3 cards a couple of years ago but will sell you a 160 MHz 2x2 ¯\_(ツ)_/¯
D-Link can claim "11 Gbit/s" because:
- first channel at 2.4 GHz is 40 MHz wide (468 data subcarriers) where 802.11ax can operate at 286.8 Mbit/s, multiplied by 4 streams = 1147 Mbit/s
- second channel at 5 GHz is 160 MHz wide (1960 data subcarriers) where 802.11ax can operate at 1201.0 Mbit/s, multiplied by 4 streams = 4804 Mbit/s
- third channel also provides 4804 Mbit/s
1147 + 4804 + 4804 = 10755 Mbit/s, which D-Link's marketing team rounds up to "11 Gbit/s." It goes without saying that a typical client WiFi device (phone, laptop) will never reach 11 Gbit/s. For starters, it will only use 1 of the 3 channels (4804 Mbit/s maximum), and most devices are 2×2, thus capable of 2 streams (2402 Mbit/s maximum). Even if a device is 3×3, you'd be lucky in real-world conditions to get about one and a half to two times the bandwidth of one stream.
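The marketing math is literally just per-stream rate times streams, summed across the radios (numbers taken from the breakdown above):

    # How "AX11000"-class marketing numbers are assembled
    radios = [
        (286.8, 4),   # 2.4 GHz radio, 40 MHz wide: per-stream Mbit/s, streams
        (1201.0, 4),  # first 5 GHz radio, 160 MHz wide
        (1201.0, 4),  # second 5 GHz radio, 160 MHz wide
    ]
    total = sum(rate * streams for rate, streams in radios)
    print(total)  # ~10755 Mbit/s, rounded up to "11 Gbit/s"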
With unlimited pipes from my ISP and a fat 802.11ax router, what can I theoretically expect on speedtest.net?
I feel like your analysis neglected using multiple bands or channels or whatever at once, and how routers can already do dual, tri, or quad bands and might do more for this technology.
so we talking 3 Gbit/s?
Sounds complicated and tricky, and not likely to work very well for devices where bandwidth requirements are extremely bursty and unpredictable.
As far as I can tell, all of these numbers are new. At least this naming is a good deal clearer than HDMI's confusing version and feature mix or USB's Speed names.
What isn't clear is how much control the Wi-Fi Alliance has over the tech industry and how their branding is used, but it looks like they might be able to compel a lot of companies to adopt this new naming. They've got standards for logos on things like your computer or phone, so we'll see if these start getting adopted by major manufacturers.
Oh, I'm so happy it's a number that seems to correspond to something (as long as the next one is 7 or 8 or some integer slightly larger than 6).
I look at wifi tech so rarely that I've mostly skipped a standard once or twice, but that just means I'm confused when I look at what's being offered. 802.11n? 802.11ac? Which is better? If it's not clear cut, which one came out later and is likely to be backwards compatible with the prior one?
If it were a single letter increasing without large gaps, that might have been easier, but from what I remember, b wasn't strictly better than a when they were both first out (from looking at the info now, a has a higher rate but suffers more from obstructions and other interference).
> At least this naming is a good deal clearer than HDMI's confusing version and feature mix or USB's Speed names.
Yes. As I understand it, USB is particularly bad: there's often confusion over the connectors, cables, and protocols, especially at the USB-C level.
That's not an accurate statement. "802.11ac" denotes a set of features, and many of them apply to and benefit the 2.4 GHz band. For example, 256QAM encoding boosts the max data rate on this band by 20% or 33% for 20 and 40 MHz channels respectively.
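My understanding of where the two different percentages come from (treat the MCS details here as an assumption on my part): at 20 MHz the highest 256QAM rate uses 3/4 coding, while at 40 MHz the 5/6-coded MCS is allowed, so:

    old_64qam = 6 * 5/6   # bits per subcarrier per symbol with 64QAM, 5/6 coding
    new_20mhz = 8 * 3/4   # 256QAM, 3/4 coding (highest allowed at 20 MHz)
    new_40mhz = 8 * 5/6   # 256QAM, 5/6 coding (allowed at 40 MHz)

    print(new_20mhz / old_64qam - 1)  # 0.20  -> +20%
    print(new_40mhz / old_64qam - 1)  # 0.333 -> +33%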
I always heard that 802.11ac was 5GHz only, and that on dual-band, stations would stick with 802.11n for 2.4GHz
The only problem is that your phone might not be able to talk to your router through your bathroom wall any more. But that capability was always a sign of wi-fi being too noisy—imagine a light that can manage to penetrate a bathroom wall, and tell me you don’t think it’s maybe a bit too powerful for consumer use ;)
Sometimes. 5GHz is great for apartments (where you have lots of noise and a tiny area) and offices (where you have lots of noise and a large budget), but for a villa 2.4GHz usually hits the sweet spot between noise and coverage.
Visible light has a wavelength of a few hundred nanometers; wifi is around 10 centimeters. Material penetrability varies enormously between different wavelengths (and between materials). Harm to humans also varies enormously, and in a way very different from penetrability (for example, some wavelengths of UVC are very harmful to human skin but can be almost completely blocked by a thin sheet of clear glass). Equating penetrability with either 'power' or harmfulness is not a useful intuition.
(incidentally, an incandescent light bulb emits on a blackbody spectrum, so some very small proportion of the energy it uses will be emitted at e.g. radio wave frequencies, which will certainly penetrate the bathroom wall)
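For concreteness, wavelength = c/f gives roughly the numbers mentioned above:

    c = 3e8  # speed of light, m/s
    for name, f in [("2.4 GHz WiFi", 2.4e9), ("5 GHz WiFi", 5e9),
                    ("green light", 5.4e14)]:
        print(name, c / f, "m")  # ~0.125 m, ~0.06 m, ~5.6e-7 m (560 nm)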
Most 802.11b and 802.11g devices didn't support 802.11a (due to it requiring 5 GHz), so I'm guessing version 1 is 802.11 and 802.11a isn't part of the version numbers.
The best description I've heard for 1-3 was to use roughly the consumer hardware adoption curve which was 1 (b), [briefly] 2 (a), 3 (g).
(This is, of course, a Very Important Question.)
From that perspective A, B, and G were all sold as three separate things to consumers, so it makes sense to count all three.
802.11b is part of 802.11-1999. 802.11-1997 describes the older Direct Sequence (DS) and Frequency Hopping (FH) standards in 2.4 GHz.
802.11a and 802.11b were both ratified on 16 September 1999.
802.11g wasn't ratified until 12 June 2003.
Other notable offenders that jump to mind: the Xbox range (Xbox, Xbox 360, Xbox One, Xbox One X), and the iPhone SE and XR (since they break the pattern all the rest fit in).
"Chromebook Pixel" (a Chromebook released in 2013, commonly called just "Pixel")
"Chromebook Pixel" (a Chromebook released in 2015, commonly called "Pixel 2")
"Pixel" (a phone released in 2016)
"Pixel 2" (a phone released in 2017)
"Pixelbook" (a Chromebook released in 2017)
"Moto G" (a phone released in 2013)
"Moto G" (a phone released in 2014)
"Moto G" (a phone released in 2015)
Moto G5 (5th generation of their mid-range phone)
Nokia 5.1 Plus / Nokia X5
Nokia 6.1 Plus / Nokia X6
Nokia 7 Plus
Nokia 8 Sirocco
Windows 7, Windows 8, Windows 10
iPhone 7, iPhone 8, iPhone X
Wifi 6, Wifi 7, Wifi 8, Wifi Ten
>Though the nominal data rate is just 37% higher than IEEE 802.11ac, the new amendment is expected to achieve a 4× increase to user throughput due to more efficient spectrum utilization.
>IEEE 802.11ax is due to be publicly released sometime in 2019. Devices were presented at CES 2018 that showed a top speed of 11 Gbit/s.
Sounds good, but with limited practical value while broadband speeds at home stay so limited. For businesses this would make a big difference (for example, editing HD video over WiFi).
The timescales involved in making Wifi connections should be milliseconds or less, not human-noticeable (on the order of 10 seconds).
I've got good LTE signal in my neighborhood, and managed to convince AT&T's retention team to grant me 50GB/mo (whether or not I choose to tether my laptop, none of this "unlimited with tiny cap for hotspot usage" nonsense) ... I digress. I tend to turn my phone's wifi OFF, even when nominally in range of a trusted AP, to ensure a reliable connection. I'll take a steady 4-5 MBps over a fragile, on-again/off-again 60 MBps anytime.
Connman doesn't seem to handle all possible situations like NetworkManager, though, and its developers only seem to do (or approve patches for) whatever Intel management tells them to do.
I can definitely see perks both in "hey, that's the newer network, it's got a higher version number", and "oh, I guess I do need to buy a new Wi-Fi device, it only supports version 4", etc.
Wi-Fi 6/802.11ax should last for a while, so I hope the Wi-Fi Alliance starts focusing on an actual long-range standard that's more of a competitor to LTE but that works in the unlicensed spectrum and for distances of 1km or longer. Then it needs to incentivize smartphone makers or smartphone modem makers to adopt it so that everyone will have it.
This would remove the biggest obstacle towards having a real meshnet.
These are totally conflicting goals. The reason LTE works at those ranges is that a cell tower puts out tens of watts, so that the size-optimized electronics in a phone can receive the signal at that range. That's tens of times more than the limit in the unlicensed band (1 W).
Furthermore, the bane of the unlicensed band is that it's full of uncooperative signals. The hard power limit keeps a lid on that, minimizing the number of uncooperative signals any given receiver/transmitter pair has to contend with. Jacking up the range to a kilometer compounds that problem dramatically, since the number of contending devices grows roughly with the square of the range.
Physics hates mesh networks. To optimize performance, you need a single network (or at least cooperative networks) using each chunk of spectrum.
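To put the power gap in rough numbers, here's a toy free-space calculation (ignoring antenna gains, fading, and walls, so treat it as illustrative only) showing the head start that tens of watts buys a tower at 1 km:

    import math

    def fspl_db(distance_m, freq_hz):
        # free-space path loss in dB
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

    loss = fspl_db(1000, 2.4e9)            # ~100 dB at 1 km, 2.4 GHz
    for watts in (1, 40):                  # unlicensed limit vs a macro cell
        tx_dbm = 10 * math.log10(watts * 1000)
        print(watts, "W ->", round(tx_dbm - loss, 1), "dBm received")
    # 1 W -> ~-70 dBm, 40 W -> ~-54 dBm: a ~16 dB difference before any
    # real-world obstructions eat into the link budget.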
That's not to say that improvements to 2.4 aren't welcome; they are simply suitable for different applications.
Looking at the shitshow that is the 2.4 GHz band, what are the mass-market uses of unlicensed 1km range in the 1kbps bandwidth range? IoT?
Although WiMax was technically superior to LTE, phone companies (outside of Sprint, in the US at least) chose not to deploy it because it was not based on existing protocols (GSM and CDMA). That isn't actually a legitimate reason, as "true" LTE requires VoLTE deployment, which phone companies refused to do until forced to; VoLTE-enabled handsets perform a lot better than their pre-VoLTE or VoLTE-disabled siblings purely because they don't need to waste valuable spectrum on 3G connections, allowing that band to be reassigned to a 4G radio on the tower.
On top of the VoLTE debacle, companies invested in the GSM and CDMA monopoly tried to claim WiMax did not perform well at long distances, the same distances that LTE does not work well at today (found frequently in rural areas in the US, or similarly in areas heavily shadowed by hills or tall buildings); however, the adoption of 600/700 MHz to fill in those gaps (plus the forced adoption of VoLTE to improve spectrum usage) has proven that to be false.
LTE in areas that are partly covered by existing bands, with the gaps filled in by 600/700 MHz, has finally caught up to WiMax in real-world testing.
Interestingly, Asia has adopted WiMax heavily but may be switching to LTE in the future for 5G deployment due to these continued misconceptions, even though WiMax beat LTE-A to the commercial gigabit deployment milestone. Africa's few networks that were WiMAX have switched (or are switching) to LTE, driving up the cost and lowering the reliability of their networks.
The one place WiMax survived in the US was fixed broadband links (which is what WiMax was originally designed for, until it merged with Korea's WiBro spec more than a decade ago), but even that finally seems to be being replaced in favor of LTE-A's fixed profiles.
WiMax will probably beat LTE to working gigabit deployments, although you'll need to live in Asia for this to be relevant to you.
WiMax is not related to WiFi; although the underlying technology of all these standards is rapidly converging, they are designed for different purposes.
I live in Asia and actually have a WiMAX plan still active (since it's grandfathered in on the only still remaining "really really unlimited" plan) but it always sucked in performance even when I got it, and since then they've refarmed half the spectrum. They sell "WiMAX 2" now, but that's just branding - it's just LTE.
WiMAX in a lot of markets is not 4G, but neither is LTE in a lot of markets. In other words, a lot of markets do not, and seemingly never will, have 4G as defined by IMT Advanced. I live in a market that is LTE, is sold to me as 4G, and will never meet 4G requirements.
WiMAX was developed with proposals like IMT Advanced in mind: the ability to have MIMO, all-IP packet switching (non-VoLTE LTE networks can never qualify to be called 4G networks because of this, btw), 20 MHz and higher channel widths, spectral efficiency above a certain level (which put a lower limit on how big your modem's DSP has to be due to coding techniques), forwards and backwards compatibility with future specs, and smooth handover between heterogeneous technologies (i.e., tower to home femtocell and back). WiMAX's original specification (802.16e-2005, which was based on the original 802.16 spec from 2001) met the IMT Advanced requirements.
LTE was not developed from day one to do this, and did not really meet the requirements until LTE-Advanced. The original LTE specification (3GPP Release 8, 2008) fell short of the speed requirement; LTE-A was defined in Release 10 (2011), LTE-A Pro in Releases 13/14 (2016/2017), and additions to LTE-A Pro for 5G (which do not yet meet 5G requirements) were added in Release 15 (2018).
What makes this all interesting is that WiMax could do fixed modems 10 years before LTE-A added them, was "true 4G" (as the ITU-R defines it now, after everyone rushed to muddy the definition with HSPA+ and whatnot) 3 years before LTE was, and is currently the only protocol with any hope of deploying gigabit to fixed users (via 802.16m-2011/802.16-2012, aka WiMax 2 or WiMax-Advanced).
Also, they're trying to sell LTE as 5G; I've already seen ads saying 600/700 MHz support is 5G (it is not, although it is welcome), just like they tried to sell the highest-order HSPA+ (2x2 MIMO with dual cell and the widest channels) as 4G (which would more correctly be described as 3.5G, just as the newest-spec LTE would best be described as 4.5G).
When it worked, it was phenomenal. But it needed a nearly line-of-sight connection, and seemed to have problems in rain and fog.
First it was discovered that all the major companies were working behind closed doors in a group called Densi-Fi, trying to super-speed the spec, or rather neglecting all the issues around it to push forward the time to market. This was discovered by some other IEEE members, and the problem has since been "resolved". The Densi-Fi section in the 802.11ax Wikipedia article has since been deleted, despite many attempts to bring it back. The 802.11ax committees continue to push forward, and correct me if I am wrong, but are there any other IEEE specs that failed to pass the vote on every one of their drafts? Drafts 1.0, 2.0 and 3.0 of 802.11ax all failed to pass, with Draft 4.0 being pushed and forcibly passed while all the comments (~2300 of them) remained the same and unresolved from Draft 3.0. Much like 802.11ac there will be Wave 1 and Wave 2. Wave 1 does not include uplink MU-MIMO, 80+80 MHz channels, and some other things I don't remember off the top of my head.
I am not sure what to make of this, because it reads to me as a giant mess and I don't want to be the guinea pig for this new spec.
It’s also great for the "access point per room" story, if there were a mechanism to have cheaper APs and do handoff.
I'm more of a software guy, don't know that much about hardware and networking, so can any expert on these things give an opinion as to whether this is a bad moment to buy it? Would it be better to wait for 802.11ax/6 to arrive? When can we expect to find hardware (such as mesh kits) reliably supporting the new standard?
Mesh Wifi sucks. It takes Wifi unreliability and latency and compounds it by adding hops. Just bite the bullet and put in multiple APs, all connected to a router via Ethernet.
 Mesh [any sort of wireless] sucks.
As an alternative that doesn't suck, you want multiple hardwired access points. Obviously if you are renting that is tricky, though. Ubiquiti UAP-PROs are pretty good for home use, and a couple at each end of your apartment would be faster (and probably cheaper) than a prosumer mesh setup.
I'd posit that this renaming is part of the ongoing competition between Wi-Fi and LTE, and prompted by the advent of 5G?
And the problem with "long range" is that everyone who doesn't live in a detached suburban house now has massive noise on the 2.4 GHz band from their 20 neighbors so that everyone gets terrible speeds. The weaker penetration/range of 5 GHz solves that by right-sizing everyone's network.
edit: researching the updates in 802.11ax, it looks like Wi-Fi 6 will bring the improvements from 802.11ac to 2.4 GHz as well, so it will actually improve performance for people who need range as well. It also has guard interval improvements for outdoor environments.
Is there any Wifi tech for which that isn't true, sans beam-shaping and other tech that effectively puts stations on different physical networks? How do 5 GHz technologies handle multiple simultaneous broadcasts on the same channel? How will 802.11ax do it? Signal processing?
My impression, based on only a little research, is that it's impossible with any tech. Even cell providers need CDMA, TDMA, etc. Maybe that understanding is out of date?
> Spatial frequency reuse
> Coloring enables devices to differentiate transmissions in their own network from transmissions in neighboring networks.
> Adaptive Power and Sensitivity Thresholds allows dynamically adjusting transmit power and signal detection threshold to increase spatial reuse.
> Without spatial reuse capabilities devices refuse transmitting concurrently to transmissions ongoing in other, neighboring networks. With coloring, a wireless transmission is marked at its very beginning helping surrounding devices to decide if a simultaneous use of the wireless medium is permissible or not. A station is allowed to consider the wireless medium as idle and start a new transmission even if the detected signal level from a neighboring network exceeds legacy signal detection threshold, provided that the transmit power for the new transmission is appropriately decreased.
Still I don't know a solution to concurrent broadcasts on the same network, as the GGP comment seemed to imply.
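For what it's worth, the quoted description suggests decision logic roughly like the sketch below. This is purely illustrative; the threshold names and dBm values are made up for the example, not taken from the spec:

    # Hypothetical BSS-coloring / spatial-reuse decision, per the quoted text
    LEGACY_CCA_DBM = -82     # legacy "medium busy" signal-detection threshold
    OBSS_PD_MAX_DBM = -62    # most permissive threshold for other-color traffic

    def may_transmit(detected_dbm, same_color, planned_tx_dbm, max_tx_dbm):
        if detected_dbm < LEGACY_CCA_DBM:
            return True                # medium looks idle even by legacy rules
        if same_color:
            return False               # our own network: defer as usual
        # Neighboring network (different color): we may still transmit,
        # provided the detected signal is weak enough and we reduce our
        # own transmit power accordingly.
        return detected_dbm <= OBSS_PD_MAX_DBM and planned_tx_dbm < max_tx_dbm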
As someone who lives within reach of 30+ networks, this is a step up.
Single long range base stations might be great if you have a house and distance from your neighbors, but it's terrible for people with denser living arrangements.
Yea they are.