Wi-Fi Alliance introduces Wi-Fi 6 (wi-fi.org)
234 points by okket 13 days ago | 94 comments





Really good whitepaper on WiFi 6 (802.11ax): https://www.cisco.com/c/dam/en/us/products/collateral/wirele... Also: http://www.ni.com/white-paper/53150/en/

Main differences with 802.11ac:

- denser subcarrier (or "tone") spacing: subcarriers are now spaced 78.125 kHz apart instead of 312.5 kHz, so an 80 MHz channel, for example, now allows 980 data subcarriers, up from 234. That's a 4.19× improvement (4× higher density doesn't translate into exactly a 4× improvement because 11ac had a few more "pilot" and "null" subcarriers not used for data.)

- new 1024QAM mode (encodes 10 bits per symbol, up from 8 bits with 256QAM). That's a 1.25× improvement.

- downside: the symbol length had to be increased from 3.2 µs to 12.8 µs, and the guard interval too, from 0.4 µs to 0.8 µs. That's a 3.78× reduction in symbol rate.

So the maximum data rate for a single stream on an 80 MHz channel increased by 4.19 × 1.25 / 3.78 = 1.39× between WiFi 5 and WiFi 6 (you can confirm this with the published max data rates: 802.11ac = 433 Mbit/s, 802.11ax = 600 Mbit/s; and 600/433 = 1.39).
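Plugging those three factors into a quick back-of-the-envelope script reproduces the published rates (a sketch: it assumes the top 5/6 coding rate for both standards and the 0.4 µs short guard interval for 11ac):

```python
# Peak PHY rate for one spatial stream, in Mbit/s: data subcarriers
# x QAM bits x coding rate, divided by the total symbol duration in us.
def phy_rate(data_subcarriers, qam_bits, coding_rate, symbol_us, guard_us):
    return data_subcarriers * qam_bits * coding_rate / (symbol_us + guard_us)

ac = phy_rate(234, 8, 5/6, 3.2, 0.4)    # 802.11ac: 256QAM, 80 MHz
ax = phy_rate(980, 10, 5/6, 12.8, 0.8)  # 802.11ax: 1024QAM, 80 MHz

print(f"{ac:.1f} / {ax:.1f} / {ax/ac:.2f}x")  # 433.3 / 600.5 / 1.39x
```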

But the best feature of WiFi 6 is that different subcarriers can be used concurrently by different users, a scheme dubbed OFDMA (https://en.wikipedia.org/wiki/Orthogonal_frequency-division_...). This means that during the same 12.8 µs timeslot, even on a small channel like 20 MHz, you can have 9 concurrent users, each assigned 26 subcarriers and each transmitting 26 different symbols (234 symbols transmitted concurrently in total). With WiFi 5, by contrast, all subcarriers of the 20 MHz channel have to be used by the same user.
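The 9-user split can be illustrated with a toy allocation (not the real 802.11ax scheduler — actual RU placement follows a fixed tone plan, so the contiguous index ranges here are purely illustrative):

```python
# 20 MHz channel: ~234 data subcarriers carved into 26-tone resource
# units (RUs), one RU per user for the same 12.8 us symbol duration.
RU_TONES = 26
NUM_USERS = 9

allocation = {f"user{i}": list(range(i * RU_TONES, (i + 1) * RU_TONES))
              for i in range(NUM_USERS)}

# Every user transmits 26 symbols in the slot; in WiFi 5 one user
# would have to occupy all 234 subcarriers alone.
concurrent_symbols = sum(len(tones) for tones in allocation.values())
print(concurrent_symbols)  # 234
```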


Due to the increased efficiency, the speed increase is supposed to be much higher. At CES they showed up to 11 Gbit/s: https://www.zdnet.com/article/d-link-asus-tout-802-11ax-wi-f...

That's just marketing; AC would have been 7 Gbit/s if measured in the same ludicrous 8x8 160 MHz setup (and while AC gear in practice tops out at 4x4, it doesn't matter: clients are almost exclusively 2x2, even high-end laptops). Even in a full greenfield ax deployment it'll be a rare sight to see your PHY rate pop above 1 Gbit/s half duplex as a client.
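The ~7 Gbit/s figure for AC checks out with the same kind of arithmetic as upthread (a sketch assuming 468 data subcarriers at 160 MHz, 256QAM at the 5/6 coding rate, and the 0.4 µs short guard interval):

```python
# 802.11ac, one spatial stream at 160 MHz.
bits_per_symbol = 468 * 8 * 5/6   # subcarriers x QAM bits x coding rate
rate_1ss = bits_per_symbol / 3.6  # Mbit/s over a 3.6 us symbol (3.2 + 0.4 GI)
print(rate_1ss)      # ~866.7 Mbit/s per stream
print(rate_1ss * 8)  # ~6933 Mbit/s for the ludicrous 8x8 setup
```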

Like the parent comment said the actual felt speed gains in a real world situation are going to be from OFDMA allowing concurrent use at slower speeds.


I always took these numbers to be like a switch's fabric speed, not the speed an individual client would receive.

MacBook Pros have had 3x3 for a while now (except for the non-touchbar 13” post-2016, which is really more of a MacBook Air internally anyway).

Yep, it's about the one client I ever see be 3x3. They use a specially customized version of the 2014 BCM943602 to do it. Don't think I've ever seen a 4x4 client pop into our logs that wasn't an AP to AP bridge but I know there are a handful of adapters made which support it.

Interestingly Intel gave up on making 3x3 cards a couple of years ago but will sell you a 160 MHz 2x2 ¯\_(ツ)_/¯


Well, this D-Link router shown at CES is "tri-band" (a misnomer; it should be called "tri-channel"), so "11 Gbit/s" is just the sum of the maximum data rate achievable in 3 different channels (one channel in the 2.4 GHz band, plus two channels in the 5 GHz band), where each channel is as wide as possible (40 MHz at 2.4 GHz, and 160 MHz at 5 GHz), and where each channel uses 4 MIMO streams (i.e. 4 antennas transmitting concurrently.) For comparison, the number I quoted in my parent post is just for 1 channel at 80 MHz with 1 MIMO stream.

D-Link can claim "11 Gbit/s" because:

- first channel at 2.4 GHz is 40 MHz wide (468 data subcarriers) where 802.11ax can operate at 286.8 Mbit/s, multiplied by 4 streams = 1147 Mbit/s

- second channel at 5 GHz is 160 MHz wide (1960 data subcarriers) where 802.11ax can operate at 1201.0 Mbit/s, multiplied by 4 streams = 4804 Mbit/s

- third channel also provides 4804 Mbit/s

1147 + 4804 + 4804 = 10755 Mbit/s, which D-Link's marketing team rounds up to "11 Gbit/s." It goes without saying that a typical client WiFi device (phone, laptop) will never reach 11 Gbit/s. For starters, it will only use 1 of the 3 channels (4804 Mbit/s maximum), and most devices are 2×2, thus capable of 2 streams (2402 Mbit/s maximum.) Even if they are 3×3, you'd be lucky in real-world conditions to get about one and a half to two times the bandwidth of one stream.
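Summing the per-channel figures above:

```python
# D-Link's "11 Gbit/s": per-channel max 802.11ax PHY rate x streams.
channels = [
    ("2.4 GHz, 40 MHz",  286.8, 4),  # Mbit/s per stream, stream count
    ("5 GHz, 160 MHz",  1201.0, 4),
    ("5 GHz, 160 MHz",  1201.0, 4),
]
total = sum(rate * streams for _, rate, streams in channels)
print(round(total, 1))  # 10755.2 Mbit/s -> marketed as "11 Gbit/s"
```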


All I want to know is: what maximum file-transfer speed can I expect?

With unlimited pipes from my ISP and a fat 802.11ax router, what can I theoretically expect on speedtest.net?

I feel like your analysis neglected using multiple bands or channels at once, and how routers can already do dual, tri, or quad band and might do more with this technology.

so we talking 3 Gbit/s?


How many other users are in your local area, and their use of the common medium, also matters. So it's hard to really state what boost you can expect in your own home. My read on this new standard is that it's built with conflict reduction in mind.

Great information! I wonder how the subcarrier allotments are orchestrated, particularly under hidden neighbor conditions. Presumably the AP has to orchestrate it based on QoS settings, but the client-OS will need upgrades throughout the stack to signal how much bandwidth should be allotted/reserved at different times.

Sounds complicated and tricky, and not likely to work very well for devices where bandwidth requirements are extremely bursty and unpredictable.


Wi-Fi Alliance is naming the not-yet-complete IEEE 802.11ax standard as Wi-Fi 6, with ac being 5 and n being 4. One can then guess that a is supposed to be 1, b 2, and g 3, but this isn't mentioned anywhere I can see.

As far as I can tell, all of these numbers are new. At least this naming is a good deal clearer than HDMI's confusing version and feature mix or USB's Speed names.

What isn't clear is how much control the Wi-Fi Alliance has over the tech industry and how their branding is used, but it looks like they might be able to compel a lot of companies to adopt this new naming. They've got standards for logos on things like your computer or phone, so we'll see if these start getting adopted by major manufacturers.


> Wi-Fi Alliance is naming the not-yet-complete IEEE 802.11ax standard as Wi-Fi 6, with ac being 5 and n being 4. One can then guess that a is supposed to be 1, b 2, and g 3, but this isn't mentioned anywhere I can see.

Oh, I'm so happy it's a number that seems to correspond to something (as long as the next one is 7 or 8 or some integer slightly larger than 6).

I look at wifi tech so rarely that I've mostly skipped a standard once or twice, but that just means I'm confused when I look at what's being offered. 802.11n? 802.11ac? What's better? If it's not as clear cut, which one came out later and is likely to be backwards compatible with the prior one?

If it was single letter increasing without large gaps, that might have been easier, but from what I remember, b wasn't strictly better than a when they were both first out (from looking at the info now, a has a higher rate, but suffers from obstructions and other interference more).

> At least this naming is a good deal clearer than HDMI's confusing version and feature mix or USB's Speed names.

Yes. As I understand it, USB is particularly bad: there's often confusion over the connectors, cables, and protocols, especially at the USB-C level.


ac is later and presumably better. No idea if there are any trade-offs.

AC only supports 5 GHz, which has significantly worse range and is impacted much more by walls and other solid objects. But if those aren't a problem, it's generally better.

"AC only supports 5 GHz."

That's not an accurate statement. "802.11ac" denotes a set of features, and many of them apply to and benefit the 2.4 GHz band. For example, 256QAM encoding boosts the max data rate on this band by 20% or 33%, for 20 and 40 MHz channels respectively.
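Where the 20%/33% figures come from, as I read the MCS tables (a sketch; the validity detail — that 256QAM 5/6 isn't allowed on 20 MHz channels for typical stream counts — is my assumption about the reasoning):

```python
# Bits per data subcarrier: QAM bits x coding rate.
# 802.11n tops out at 64QAM 5/6 (MCS7); 802.11ac adds 256QAM 3/4
# (MCS8) and 256QAM 5/6 (MCS9), but MCS9 is not valid at 20 MHz for
# typical stream counts, so 20 MHz only gets the MCS8 bump.
base = 6 * 5/6                   # 64QAM 5/6 -> 5.0 bits per tone
gain_20mhz = (8 * 3/4) / base    # best usable at 20 MHz: 256QAM 3/4
gain_40mhz = (8 * 5/6) / base    # best usable at 40 MHz: 256QAM 5/6
print(f"{gain_20mhz - 1:.0%}, {gain_40mhz - 1:.0%}")  # 20%, 33%
```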


Isn't 256QAM on 2.4GHz a proprietary feature?

https://www.cwnp.com/forums/posts?postNum=307358

I always heard that 802.11ac was 5GHz only, and that on dual-band, stations would stick with 802.11n for 2.4GHz



Interesting. How come all the consumer routers I've used have always required 5 GHz to use AC capabilities? Are the rest of the ac features enterprisey? Or have I just not interacted with enough routers to see these?

Those limitations are a good thing: they reduce interference from neighbouring apartments/offices, and even interference between overlapping cells of a mesh network or a base station and its extenders. In practice, this means that the extenders can extend longer, so something like a convention center or work-sharing space can use approximately the same number of base stations it did before.

The only problem is that your phone might not be able to talk to your router through your bathroom wall any more. But that capability was always a sign of wi-fi being too noisy—imagine a light that can manage to penetrate a bathroom wall, and tell me you don’t think it’s maybe a bit too powerful for consumer use ;)


> Those limitations are a good thing: they reduce interference from neighbouring apartments/offices

Sometimes. 5GHz is great for apartments (where you have lots of noise and a tiny area) and offices (where you have lots of noise and a large budget), but for a villa 2.4GHz usually hits the sweet spot between noise and coverage.


> imagine a light that can manage to penetrate a bathroom wall, and tell me you don’t think it’s maybe a bit too powerful for consumer use ;)

Visible light has a wavelength of a few hundred nanometers; wifi is around 10 centimeters. Material penetrability varies enormously between different wavelengths (and between materials). Harm to humans also varies enormously, and in a way very different from penetrability (for example, some wavelengths of UVC are very harmful to human skin but can be almost completely blocked by a thin sheet of clear glass). Equating penetrability with either 'power' or harmfulness is not a useful intuition.

(incidentally, an incandescent light bulb emits on a blackbody spectrum, so some very small proportion of the energy it uses will be emitted at e.g. radio wave frequencies, which will certainly penetrate the bathroom wall)


.

What's the "oops" about?

> Wi-Fi Alliance is naming the not-yet-complete IEEE 802.11ax standard as Wi-Fi 6, with ac being 5 and n being 4. One can then guess that a is supposed to be 1, b 2, and g 3, but this isn't mentioned anywhere I can see.

Most 802.11b and 802.11g devices didn't support 802.11a (due to it requiring 5 GHz), so I'm guessing version 1 is 802.11 and 802.11a isn't part of the version numbers.


I would put 802.11a as 3 together with 802.11g, since they have the same rates (and are basically the same tech in a different band, 802.11g feels like 802.11a ported to 2.4GHz plus some extra compatibility stuff).

It sounds like the Wi-Fi Alliance doesn't particularly care which one is 1, 2, and 3. They've only strongly defined 4 (n), 5 (ac), and 6 (ax). [1] It also sounds like they don't think enough devices are left in the wild of a, b, g, to worry about branding them.

The best description I've heard for 1-3 was to use roughly the consumer hardware adoption curve which was 1 (b), [briefly] 2 (a), 3 (g).

[1] https://www.wi-fi.org/discover-wi-fi


Hm, I would expect 802.11-1997 to be 1. Is that 0? Or does it share a number with b, because they're the same technology? Or is it 1, and a and g share a number because they're the same technology? Or do a and b share a number because they were released simultaneously?

(This is, of course, a Very Important Question.)


I think 802.11-1997 works as 0 in this case. The implication from the Wi-Fi Alliance seems to be that they are counting consumer hardware generations, and there doesn't seem to have been much consumer hardware that was directly 802.11-1997 (it took b to get consumer adoption). Which is why the order I've heard that makes the most sense is 1 (b), 2 (a), 3 (g), because that was roughly the consumer hardware waves, with the brief (a) hardware wave the fun out-of-order one.

a and b were specced at the same time though, so it's really weird to look at a+b+g as anything other than two generations.

That's certainly a fair viewpoint (and reflected together in the spec version 802.11-1999), but the interesting thing about this rebrand is the Wi-Fi Alliance taking a step back, realizing that the individual specs don't matter to consumers and as a consumer brand trying to break away from just using the spec names/versions.

From that perspective A, B, and G were all sold as three separate things to consumers, so it makes sense to count all three.


I think this is why they didn't give numbers to the early versions.

> Or does it share a number with b, because they're the same technology?

802.11b is part of 802.11-1999. 802.11-1997 describes the older Direct-Sequence (DS) and Frequency-Hopping (FH) standards in 2.4 GHz.

802.11a and 802.11b were both ratified on 16 September 1999. 802.11g wasn't ratified until 12 June 2003.

http://grouper.ieee.org/groups/802/11/Reports/802.11_Timelin...


Right, 802.11b extended DSSS used by 802.11-1997 for 1 and 2 Mbps through 5.5 and 11 Mbps rates. No other Wi-Fi standards used DSSS.

When was the last time you saw a WiFi logo? The only time I can remember was as one of those stickers on a HP laptop in 2005.

Not a logo, but lots of cell phone manufacturers like to list which Wi-Fi standard they support because they can then advertise “gigabit” downloads and the like.

Does that make the original 802.11 "Wi-Fi 0"?

FINALLY. Confusing version numbering always annoys me, and wifi's "n", "ac", "g", etc version names (combined with the easy to get wrong "802.11") have been one of the biggest offenders for a while. When a version number is a marketable metric, it should not be confusing.

Other notable offenders that jump to mind: the Xbox range (Xbox, Xbox 360, Xbox One, Xbox One X), and the iPhone SE and XR (since they break the pattern all the rest fit in).


Add the "Pixel" range to confusing version numbering:

    "Chromebook Pixel" (a Chromebook released in 2013, commonly called just "Pixel")
    "Chromebook Pixel" (a Chromebook released in 2015, commonly called "Pixel 2")
    "Pixel" (a phone released in 2016)
    "Pixel 2" (a phone released in 2017)
    "Pixelbook" (a Chromebook released in 2017)
Or even worse, the Moto ranges:

    "Moto G" (a phone released in 2013)
    "Moto G" (a phone released in 2014)
    "Moto G" (a phone released in 2015)
Luckily, they have learned from their horribleness and now officially include the generation in the name, making their new naming scheme one of the best:

    Moto G5 (5th generation of their mid-range phone)
Compare with Nokia:

    Nokia 1
    Nokia 2
    Nokia 2.1
    Nokia 3
    Nokia 3.1
    Nokia 5
    Nokia 5.1
    Nokia 5.1 Plus / Nokia X5
    Nokia 6
    Nokia 6.1
    Nokia 6.1 Plus / Nokia X6
    Nokia 7
    Nokia 7 Plus
    Nokia 8
    Nokia 8 Sirocco
I have no clue what is what and it's confusing enough that I don't want to buy their phones even though one of them is most likely exactly what I want (high-range but not top-range phone that has stock or near-stock Android and receives updates quickly).

Maybe there's someone with a sense of humor there and Wifi 9 will be skipped as a dig at both Microsoft and Apple.

Windows 7, Windows 8, Windows 10

iPhone 7, iPhone 8, iPhone X

Wifi 6, Wifi 7, Wifi 8, Wifi Ten


Well, I think they have already skipped 802.11ad, which initially looked like the next version. Personally, I know of only one router available to casual consumers, made by Netgear [1].

[0]: https://en.wikipedia.org/wiki/IEEE_802.11#Standards_and_amen...

[1]: http://www.za.netgear.com/landings/ad7200/default.aspx


I was looking for an opportunity to say something similar. I didn't realize there was that much prior art!

Good decision on their part. As laughable as some technical branding sometimes seems, it usually serves a decent purpose in helping people upgrade infrastructure.

From Wikipedia:

>Though the nominal data rate is just 37% higher than IEEE 802.11ac, the new amendment is expected to achieve a 4× increase to user throughput due to more efficient spectrum utilization.

>IEEE 802.11ax is due to be publicly released sometime in 2019.[2] Devices were presented at CES 2018 that showed a top speed of 11 Gbit/s.[3]

Sounds good, but of limited practical value while home broadband speeds stay so limited. For businesses this would make a big difference (for example, editing HD video over WiFi).


I suspect that for most WiFi users, the overwhelming need is more reliable and faster connection establishment. I waste vastly more time dealing with flaky connections, reconnection after wake from sleep, and network priority issues than I do waiting in the rare cases where the WiFi (rather than the wired connection) is limiting the data transfer rate.

The timescales involved in making Wifi connections should be ms or less, not human-noticeable (order 10 sec).


THIS.

I've got good LTE signal in my neighborhood, and managed to convince AT&T's retention team to grant me 50GB/mo (whether or not I choose to tether my laptop, none of this "unlimited with tiny cap for hotspot usage" nonsense) ... I digress. I tend to turn my phone's wifi OFF, even when nominally in range of a trusted AP, to ensure a reliable connection. I'll take a steady 4-5 MBps over fragile and on-again/off-again 60 MBps anytime.


I would love to have a multipath vpn on my phone - prefer wifi on connection, but automatically fall back to lte when responses take too long to arrive.

I recently updated my phone (MIUI/Android) and it does fall back to LTE when it can't reach the Internet through Wifi.

Mine does that as well, but it has to re-establish all connections whenever this occurs. If a webpage is in the middle of loading, it restarts, video stream stutters, it's not even close to seamless.

I've found Macs to be vastly better at this than my Linux/windows laptops, for some reason, to the point where I almost never wait for them to connect. Might just be some aggressive workarounds they've implemented, though.

I've seen Connman connect in under a second IIRC, on an embedded system. Proper PCs usually use wpa_supplicant, NetworkManager, and some DHCP client, none of which are very fast.

Connman doesn't seem to handle all possible situations like NetworkManager, though, and its developers only seem to do (or approve patches for) whatever Intel management tells them to do.


It's been posted here multiple times: https://cafbit.com/post/rapid_dhcp_or_how_do/

IME, Macs connect faster when the signal is strong. But on a weaker link, it takes just as long as my Windows and Linux machines.

True, this is mostly at home. In cafes and hotels it can definitely take just as long.

This is just such a sensible change, it's amazing it took everyone this long to come up with it. Figuring out whether something is b, g, n, ac, etc. and then trying to remember which one comes next (ax) is just not going to filter down to the regular consumer like... ever.

I can definitely see perks both in "hey, that's the newer network, it's got a higher version number", and "oh, I guess I do need to buy a new Wi-Fi device, it only supports version 4", etc.


At the same time, some version numbers are meaningless (because Chrome), so I'm ambivalent about the change. Still, this should make it easier for mere mortals to understand.

I don't know about "meaningless" as much as "not something normal people need to care about."

At least there's not a new version of WiFi every 6-8 weeks.

True, but Chrome also de-emphasized the version number. I don't usually remember what version I'm on, and the only place most people might see it is when they manually check for a new version. Wi-Fi 6 is going to be used a lot in branding.

802.11ax is 802.11n's true successor and I hope it will be adopted quickly by everyone. 802.11ac for the 5GHz band was a faux successor, in my opinion, as you couldn't use it in the same scenarios. It came with some higher performance, but with major compromises in reach for a typical home.

Wi-Fi 6/802.11ax should last for a while, so I hope the Wi-Fi Alliance starts focusing on an actual long-range standard that's more of a competitor to LTE but that works in the unlicensed spectrum and for distances of 1km or longer. Then it needs to incentivize smartphone makers or smartphone modem makers to adopt it so that everyone will have it.

This would remove the biggest obstacle towards having a real meshnet.


> Wi-Fi 6/802.11ax should last for a while, so I hope the Wi-Fi Alliance starts focusing on an actual long-range standard that's more of a competitor to LTE but that works in the unlicensed spectrum and for distances of 1km or longer.

These are totally conflicting goals. The reason LTE works at those ranges is that a cell tower puts out tens of watts, so that the size-optimized electronics in a phone can receive the signal at that range. That's tens of times more than the limit in the unlicensed band (1W).

Furthermore, the bane of the unlicensed band is that it's full of uncooperative signals. The hard power limit keeps a lid on that, minimizing the number of uncooperative signals any given receiver/transmitter pair has to contend with. Jacking up the range to a kilometer compounds that problem exponentially.

Physics hates mesh networks. To optimize performance, you need a single network (or at least cooperative networks) using each chunk of spectrum.


5GHz is a godsend for anyone in a dense neighborhood, apartment building, etc. Range isn't the issue for arguably the majority of WiFi users, it's interference.

That's not to say that improvements to 2.4 aren't welcome; they are simply suitable for different applications.


Yes, I don't know much about WiFi standards but I do know that when I updated from an 802.11n router to an 802.11ac one, range increased. I can connect from some rooms where I couldn't before. I live in an apartment building where my phone sees 19 networks right now.

> works in the unlicensed spectrum and for distances of 1km or longer

Looking at the shitshow that is the 2.4 GHz band, what are the mass-market uses of unlicensed 1km range in the 1kbps bandwidth range? IoT?


How is cranking out enough power to reach 1 km going to affect battery life?

WiMax?

WiMax is an LTE alternative designed by an IEEE 802 committee. It is designed for last mile delivery and also meets the requirements to be considered a 4G and 5G protocol.

WiMax was technically superior to LTE, but phone companies (outside of Sprint, in the US at least) chose not to deploy it because it was not based on existing protocols (GSM and CDMA). That isn't really a legitimate reason, though, as "true" LTE requires VoLTE deployment, which phone companies refused to do until forced to; VoLTE-enabled handsets perform a lot better than their pre-VoLTE or VoLTE-disabled siblings purely because they don't need to waste valuable spectrum on 3G connections, allowing that band to be reassigned to a 4G radio on the tower.

On top of the VoLTE debacle, companies invested in the GSM and CDMA monopoly tried to claim WiMax did not perform well at long distances — the same distances that LTE does not work well with today (found frequently in rural areas in the US, or in areas heavily shadowed by hills or tall buildings). However, the adoption of 600/700 MHz to fill in those gaps (plus the forced adoption of VoLTE to improve spectrum usage) has proven that to be false.

LTE in areas that are partly covered by existing bands, with the gaps filled in by 600/700 MHz, has finally caught up to WiMax in real-world testing.

Interestingly, Asia has adopted WiMax heavily but may be switching to LTE in the future for 5G deployment, even though WiMax beat LTE-A to the commercial gigabit deployment milestone, due to these continued misconceptions. Africa's few networks that were WiMAX have switched (or are switching) to LTE, driving up the cost and lowering the reliability of their networks.

The one place WiMax survived in the US was fixed broadband links (this is what WiMax was originally designed for, until it merged with Korea's WiBro spec more than a decade ago), but that finally seems to be getting replaced in favor of LTE-A's fixed profiles.

WiMax will probably beat LTE to working gigabit deployments, although you'll need to live in Asia for this to be relevant to you.

WiMax is not related to WiFi; although the actual underlying technology of all standards are rapidly converging, they are designed for different purposes.


WiMAX may have had technical potential in its standards (I am not equipped to evaluate that), but all the actual deployments of it were just worse than existing HSDPA networks. 10-20 Mbit/s WiMAX in the US got falsely branded by Sprint as "4G", which caused companies like T-Mobile and AT&T to also brand their faster HSDPA networks as "4G", which caused massive confusion in the market (iPhones in the US will still show HSDPA networks as "4G"!! It's kind of insane)

I live in Asia and actually have a WiMAX plan still active (since it's grandfathered in on the only still remaining "really really unlimited" plan) but it always sucked in performance even when I got it, and since then they've refarmed half the spectrum. They sell "WiMAX 2" now, but that's just branding - it's just LTE.


4G's sticking point is speed. Specifically, the ITU-R IMT-Advanced proposal requires several things, most of which are easy to meet today. What wasn't easy was 100 Mbit/s peak speeds for mobile users and gigabit peak speeds for fixed users.

WiMAX in a lot of markets is not 4G, but neither is LTE in a lot of markets. In other words, a lot of markets do not, and seemingly never will, have 4G as defined by IMT-Advanced. I live in a market that is LTE, is sold to me as 4G, and will never meet the 4G requirements.

WiMAX was developed with proposals like IMT-Advanced in mind: the ability to have MIMO, all-IP packet switching (non-VoLTE LTE networks can never qualify as 4G networks due to this, btw), 20 MHz and wider channel widths, spectral efficiency above a certain level (which put a lower limit on how big your modem's DSP has to be, due to the coding techniques), forwards and backwards compatibility with future specs, and smooth handover between heterogeneous technologies (i.e., tower to home femtocell and back). WiMAX's original specification (802.16e-2005, which was based on the original 802.16 spec from 2001) met the IMT-Advanced requirements.

LTE was not developed from day one to do this, and did not really meet the requirements until LTE-Advanced. The original LTE specification (3GPP release 8, 2008) fell short of the speed requirement; LTE-A was defined in release 10 (2011), LTE-A Pro in releases 13/14 (2016/2017), and additions to LTE-A Pro for 5G (which do not yet meet 5G requirements) came in release 15 (2018).

What makes this all interesting is that WiMax could do fixed modems 10 years before LTE-A added them, was "true 4G" (as ITU-R defines it now, after everyone rushed to muddy the definition with HSPA+ and whatnot) 3 years before LTE was, and is currently the only protocol with any hope of deploying gigabit to fixed users (via 802.16m-2011/802.16-2012, aka WiMax 2 or WiMax-Advanced).

Also, they're trying to sell LTE as 5G; I've already seen ads claiming 600/700 MHz support is 5G (it is not, although it is welcome), just like they tried to sell highest-order HSPA+ (2x2 MIMO with dual cell and the widest channels) as 4G (which would more correctly be described as 3.5G, just as newest-spec LTE would be best described as 4.5G).


I had WiMax via Clearwire in both Seattle and Chicago.

When it worked, it was phenomenal. But it needed a nearly line-of-sight connection, and seemed to have problems in rain and fog.


My point was the asked for standard already exists.

802.16d and 802.16e are dead and useless from a modern ISP perspective. WISPs are ripping it all out and upgrading to cambium pmp450, ubnt AC, mimosa, etc.

WiFi 6, or 802.11ax, has been in development for some time and has faced plenty of difficulties and controversy.

First it was discovered that all the major companies were working behind closed doors in a group called Densi-Fi, trying to super-speed the spec — or, more like it, neglecting all the issues around it to pull in the time to market. This was discovered by some other IEEE members, and the problem has since been "resolved". The Densi-Fi section of the 802.11ax Wikipedia article has since been deleted, despite many attempts to bring it back. The 802.11ax committees continue to push forward and, correct me if I am wrong, are there any other IEEE specs that failed to pass in all of their drafts? Drafts 1.0, 2.0, and 3.0 of 802.11ax all failed to pass the vote. Draft 4.0 was pushed and forcibly passed, while all the comments (~2300 of them) remained the same and unresolved as in Draft 3.0. Much like 802.11ac, there will be a Wave 1 and a Wave 2. Wave 1 does not include uplink MU-MIMO, 80+80 MHz channels, and some other things I don't remember off the top of my head.

I am not sure what to make of this. Because it reads to me as a giant pile of mess and I don't want to be the guinea pig for this new spec.


This time, maybe they'll hire some cryptographers too, instead of letting the network engineers design the security features. For crying out loud, they took their sweet 20 years to actually create a protocol with forward secrecy and resistance to offline attacks: WPA3.

It’s kind of a bummer to see 802.11ad being ignored. I use 802.11ad for game streaming in my house, and it’s great.

It’s also great for the “access point per room” story, if there was a mechanism to have cheaper APs and do hand off.


I suspect the numbering will only be used for more or less backwards compatible standards. I doubt they will break compatibility with N anytime soon.

I have a quite long apartment with reception problems in some rooms, and was planning to buy a mesh system (like the TP-Link Deco M9 Plus or similar) to replace my aging range extender (which is stuck on 802.11n now that I have a 802.11ac router).

I'm more of a software guy, don't know that much about hardware and networking, so can any expert on these things give an opinion as to whether this is a bad moment to buy it? Would it be better to wait for 802.11ax/6 to arrive? When can we expect to find hardware (such as mesh kits) reliably supporting the new standard?


802.11ax (or WiFi 6) is going to primarily benefit you at 5 GHz, which is best where you have line-of-sight between the device and the AP. If you're looking into range extenders, you probably don't have line of sight and it's not worth it.

Mesh Wifi sucks.[1] It takes Wifi unreliability and latency and compounds it by adding hops. Just bite the bullet and put in multiple APs, all connected to a router via Ethernet.

[1] Mesh [any sort of wireless] sucks.


> Mesh Wifi sucks.

As an alternative that doesn’t suck, you want multiple hardwired access points. Obviously if you are renting that is tricky. Ubiquiti UAP-PROs are pretty good for home use, and a couple at each end of your apartment would be faster (and probably cheaper) than a prosumer mesh setup.


I have 4 eeros in my house and they are amazing and we never have any issues. I'm so glad I ditched all the netgear/d-link/tp-link/whatever garbage. The hardware is fine but the firmware is always shit. Get a set of eeros and stop worrying about wifi ever again.

Cunning. Why have 5G, when you can have Wi-Fi 6?

I'd posit that this renaming is part of the ongoing competition between Wi-Fi and LTE, and prompted by the advent of 5G?


I like the new numbering system, which lets me quickly deduce the generation of a standard. No more a/b/g/n/a-something.

Can someone explain what this means for consumers? Who benefits the most from this new release?

Ordinary people who go to buy a device can know what “version” a router or laptop/phone supports. It makes comparing two routers much clearer: you can see which one supports newer technology without knowing the difference between .ac and .n.

Looks like penetration and range decrease with each new WiFi generation.

Not really. There's only the 2.4 GHz and 5 GHz split.

And the problem with "long range" is that everyone who doesn't live in a detached suburban house now has massive noise on the 2.4 GHz band from their 20 neighbors so that everyone gets terrible speeds. The weaker penetration/range of 5 GHz solves that by right-sizing everyone's network.

edit: researching the updates in 802.11ax, it looks like Wi-Fi 6 will bring the improvements from 802.11ac to 2.4 GHz as well, so it will actually improve performance for people who need range as well. It also has guard interval improvements for outdoor environments.


The larger problem with the 2.4 GHz ISM band is the lack of non-overlapping channels, combined with only one station being able to transmit at a time per channel (pre-802.11ax). There are only three non-overlapping 20 MHz channels in the 2.4 GHz unlicensed spectrum, while the 5 GHz spectrum has about 21 (although some require active radar avoidance).
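The "three non-overlapping channels" claim is easy to check yourself. A rough sketch (my own, not from the thread): 2.4 GHz channel 1 is centered at 2412 MHz, with each subsequent channel 5 MHz higher (channels 1-11 in the US). Using the classic 22 MHz occupied bandwidth of the legacy 802.11b mask, a greedy scan recovers the familiar 1/6/11 channel plan:

```python
# 2.4 GHz ISM band channel centers (US: channels 1-11).
WIDTH = 22  # MHz occupied bandwidth (legacy 802.11b/DSSS spectral mask)

centers = {ch: 2412 + 5 * (ch - 1) for ch in range(1, 12)}

def overlaps(a, b):
    # Two channels overlap if their centers are closer than the channel width.
    return abs(centers[a] - centers[b]) < WIDTH

# Greedily pick channels that don't overlap anything already chosen.
chosen = []
for ch in sorted(centers):
    if all(not overlaps(ch, c) for c in chosen):
        chosen.append(ch)

print(chosen)  # [1, 6, 11]
```

Channels are only 5 MHz apart, so a 22 MHz-wide signal spills across four neighbors on each side; that geometry, not any regulatory choice, is why only three channels fit.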

> The larger problem with the 2.4 GHz ISM band is ... only one station being able to transmit at a time per channel

Is there any Wifi tech for which that isn't true, sans beam-shaping and other tech that effectively puts stations on different physical networks? How do 5 GHz technologies handle multiple simultaneous broadcasts on the same channel? How will 802.11ax do it? Signal processing?

My impression, based on only a little research, is that it's impossible with any tech. Even cell providers need CDMA, TDMA, etc. Maybe that understanding is out of date?


Maybe this?

https://en.wikipedia.org/wiki/IEEE_802.11ax#Technical_improv...

> Spatial frequency reuse

> Coloring enables devices to differentiate transmissions in their own network from transmissions in neighboring networks.

> Adaptive Power and Sensitivity Thresholds allows dynamically adjusting transmit power and signal detection threshold to increase spatial reuse.

> Without spatial reuse capabilities devices refuse transmitting concurrently to transmissions ongoing in other, neighboring networks. With coloring, a wireless transmission is marked at its very beginning helping surrounding devices to decide if a simultaneous use of the wireless medium is permissible or not. A station is allowed to consider the wireless medium as idle and start a new transmission even if the detected signal level from a neighboring network exceeds legacy signal detection threshold, provided that the transmit power for the new transmission is appropriately decreased.
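The quoted rule can be sketched as a decision function. This is my own simplification of the 802.11ax OBSS/spatial-reuse idea, not the spec text: a station may treat the medium as idle despite a detectable signal from a differently-colored (neighboring) BSS, provided it cuts its own transmit power dB-for-dB against the raised detection threshold. The specific dBm constants are illustrative assumptions:

```python
# Illustrative constants (assumptions, roughly matching common values):
LEGACY_CCA_DBM = -82    # legacy signal-detect threshold
OBSS_PD_MAX_DBM = -62   # most permissive OBSS packet-detect level
TX_PWR_REF_DBM = 21     # reference transmit power

def may_transmit(rx_dbm, same_color, tx_pwr_dbm):
    """Decide whether a station may start a new transmission."""
    if rx_dbm < LEGACY_CCA_DBM:
        return True           # medium idle for everyone, legacy behavior
    if same_color:
        return False          # our own BSS is busy: defer
    # Neighboring BSS: the detect threshold may be raised only by as
    # much as we reduce our own transmit power below the reference.
    obss_pd = min(OBSS_PD_MAX_DBM,
                  LEGACY_CCA_DBM + (TX_PWR_REF_DBM - tx_pwr_dbm))
    return rx_dbm < obss_pd
```

So a station hearing a neighboring network at -70 dBm must defer at full power, but may transmit if it drops its own power enough to push the adjusted threshold above -70 dBm.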


That makes good sense to me, but it's still basically keeping stations on separate networks - it's managing the networks' transmission power carefully so they can run in closer proximity.

Still I don't know a solution to concurrent broadcasts on the same network, as the GGP comment seemed to imply.


Those, along with downlink multi-user MIMO from 802.11ac, allow an AP to send to multiple clients at once using spatial-division multiplexing.

It's not a bad thing. Base stations are cheap and it's better than having everyone's network clog everyone else's.

As someone who lives within reach of 30+ networks, this is a step up.


I'd rather people used multi-base wifi mesh networks to increase their coverage instead of one long range transmitter.

Single long range base stations might be great if you have a house and distance from your neighbors, but it's terrible for people with denser living arrangements.


This is so dumb, no one's even using Wi-Fi 1 through 5 yet.

> no one's even using Wi-Fi 1 through 5 yet.

Yea they are.



