So that table is using distance as a proxy for signal to noise ratio. SNR is what really matters.
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance, you can get a rough idea of what data rate you'll be able to receive at a given distance from the transmitter.
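To make that chain concrete, here's a rough back-of-envelope sketch in Python. The transmit power, noise floor and SNR thresholds are illustrative assumptions (not a real MCS table), and free-space path loss is very optimistic indoors, but it shows the path from distance to SNR to usable data rate:

    import math

    TX_POWER_DBM = 20        # typical AP transmit power (assumption)
    NOISE_FLOOR_DBM = -95    # quiet-channel noise floor (assumption)
    FREQ_MHZ = 5180          # 5 GHz channel 36

    # (minimum SNR in dB, rough rate label) - illustrative thresholds, not a real MCS table
    RATE_TABLE = [(25, "highest rates"), (18, "mid rates"), (10, "low rates"), (4, "lowest rates")]

    def path_loss_db(distance_m):
        # free-space path loss; real indoor loss is worse (walls, bodies, multipath)
        return 20 * math.log10(distance_m) + 20 * math.log10(FREQ_MHZ) - 27.55

    def rate_at(distance_m):
        snr_db = TX_POWER_DBM - path_loss_db(distance_m) - NOISE_FLOOR_DBM
        return next((label for min_snr, label in RATE_TABLE if snr_db >= min_snr), "out of range")

    for d in (1, 10, 50, 200, 1000):
        print(f"{d} m -> {rate_at(d)}")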
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others, or everyone to get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
Do most clients sustain a constant throughput, or do they send in bursts? Because speed does matter a lot if it's bursty (send 100MB to fill a buffer, then wait). The faster you fill whatever buffer, the faster you can let another client use the connection.
Correlated, but obviously bad code can really fuck with neighbors. And each client has an incentive to be greedy so users of that client get a better experience. So you fall back again to QoS for what you care about.
> The faster you fill whatever buffer, the faster you can let another client use the connection.
Basically this. The way we usually put it is that we want clients to "get on and off the channel as quickly as possible". That requires all clients in range of each other to be behaving (respecting the rules) and using fast enough data rates to minimise their consumption of precious air-time.
Under the hood though, it's a very granular frame-by-frame, almost nanosecond-by-nanosecond thing that leads to the overall throughput at a human timescale. To give you a sense, let me try to summarise the factors affecting throughput this way:
- Data Rate: the transmitter can adjust the data rate up or down on a per-frame basis if it wants. For example, a single TCP session on a 2.4GHz channel could in theory see data rates anywhere between 1Mbps and 450Mbps. But in practice most drivers I've seen adjust up or down incrementally. And in a healthy network, they usually hover around the top 25% of the mutually supported data rates (but they also spend very little time at the highest data rate, typically less than 1%). Also, the AP could be using a different data rate to the client, and usually is. The rx and tx directions are effectively separate streams, and the data rate is always chosen solely by the transmitter.
- Block Size: Similar to TCP windowing. Data can be sent in multi-frame 'bursts' before an acknowledgement is required for the transmitter to send more. In the original Wi-Fi, every frame had to be acknowledged. Later standards introduced this idea of block acknowledgements.
- Re-transmits: Whenever acknowledgements are not received, the data has to be resent. Block size will be reduced, possibly to 1, so it will also take longer. Note that re-transmits are expected and very routine in Wi-Fi, whereas in TCP they are usually considered more of an exception (except on the internet). I've observed re-transmit rates of 20% in networks where no user is perceiving any sort of issue at all. So Wi-Fi is very robust to frame loss, up to a point, but even so, re-transmits do end up having a large impact on the aggregate throughput.
- Clear channel wait time: It's no exaggeration to say that transmitters spend most of their time _waiting_ to transmit. And a big chunk of that wait time is just waiting for the medium to be clear - the clear channel assessment. If the client thinks there is a transmission going on, it just has to kill time.
- Other wait times: Even when the channel seems clear, there are various requirements to do nothing before and after transmitting. For example, the inter-frame spacing interval and the random back-off interval. These are just the rules of play. In fact, congestion avoidance on Wi-Fi could be said to be entirely a matter of timing.
Note that this is a simplification and clearly I can't mention everything or cover all the nuances. But, in the way I've framed it here, the clear-channel wait time and the re-transmit rate do basically encapsulate the impact of intangibles I didn't mention, like congestion and noise/interference. The rough sketch below shows how these factors stack up into an effective throughput figure.
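This is a crude back-of-envelope model only. The timing constants (DIFS, average backoff, SIFS+ack) are rough OFDM-era assumptions rather than values from any specific amendment, and it ignores plenty, but it shows why you never measure anything close to the sticker rate:

    # Crude effective-throughput model: one contended burst of aggregated frames, then a block ack.
    # Timing constants are rough OFDM-era assumptions, not values from a specific amendment.
    DIFS_US = 34             # inter-frame space before contending
    AVG_BACKOFF_US = 67.5    # average random backoff (CWmin of 15 slots * 9 us / 2)
    SIFS_PLUS_ACK_US = 16 + 44
    FRAME_BYTES = 1500

    def effective_mbps(data_rate_mbps, frames_per_burst, retransmit_rate, channel_busy_fraction):
        payload_us = frames_per_burst * FRAME_BYTES * 8 / data_rate_mbps
        overhead_us = DIFS_US + AVG_BACKOFF_US + SIFS_PLUS_ACK_US
        # retransmits resend payload; waiting on a busy channel stretches everything out
        airtime_us = (payload_us * (1 + retransmit_rate) + overhead_us) / (1 - channel_busy_fraction)
        return frames_per_burst * FRAME_BYTES * 8 / airtime_us   # bits per microsecond == Mbps

    print(effective_mbps(600, 32, 0.0, 0.0))   # ~479: aggregation, clean channel, no losses
    print(effective_mbps(600, 1, 0.2, 0.5))    # ~32: no aggregation, 20% retransmits, half-busy channel

Not remotely a simulator, but you can see how losing aggregation, adding retransmits and sharing a busy channel stack up multiplicatively.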
TL;DR: Wi-Fi transmissions are extremely lumpy at their native timescale, but they often look a lot smoother than many TCP transmissions at human timescales.
> Correlated, but obviously bad code can really fuck with neighbors.
Also true. Bad code is usually exemplified in Wi-Fi by bad drivers (looking at you Broadcom). These will cause clients to "stick" to bad APs when they should roam, or pick the wrong channel/AP/band in the first place. Intel is generally very good.
> And each client has an incentive to be greedy so users of that client get a better experience.
Greed is good in the sense that clients want to transmit their data as soon and as fast as possible, and we want them to! But they have to respect the rules. Of course there's only a handful of chipset vendors, so they mostly do. But within that, there's still plenty of room for clients and APs to do things that are _sub-optimal_ even if they are Wi-Fi legal, as per the sticky client example I mentioned.
> So you fall back again to QoS for what you care about.
Wi-Fi does indeed have its own implementation of QoS which is of course a timing dance! But I think you're referring to QoS in higher layers like IP. So it's worth mentioning that this WiFi stuff is all happening at layers 1 & 2. All the congestion detection and re-transmissions and so on that may be happening in higher-layer protocols like TCP are happening _in addition_ to what is going on at the WiFi layers.
But this is the point. What your neighbours are doing greatly affects the performance of your network.
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
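To make the ~100x concrete, here's the raw airtime arithmetic (ignoring preambles, acks and contention overhead, so it actually understates the slow client's cost):

    FRAME_BITS = 1500 * 8
    print(FRAME_BITS / 6, "us on air at 6 Mbps")      # 2000 us
    print(FRAME_BITS / 600, "us on air at 600 Mbps")  #   20 us
    # the slow neighbour occupies ~100x the airtime for the same payload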
You might not notice this with only 2 clients. It might be the difference between an 80MBps and a 50MBps download, for example. But performance decays exponentially with the number of clients.
niobe's excellent reply covered it already, but just to be blunt: You usually share the channel with some of your neighbors' networks, so the assessment that only you are using it is usually not correct.
This is also why it's often better if everyone uses lower transmit power (while still retaining coverage), as networks farther away will then see fewer interfering networks.
I would agree with that. G to N was perhaps the most critical move in Wi-Fi because it included MIMO. You can think of this as unwanted signal echoes and reflections being switched from a liability to a benefit. Heck, I _still_ run WiFi-4 networks and they perform very well. WiFi-5 was an incremental upgrade, with many experimental features that are barely used in practice.
802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.
An impressive attempt to summarise Wi-Fi, which is a very deep topic. However I think the executive summary missed the most critical thing about Wi-Fi:
only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
It's a shared medium and it's not even half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.
The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this Achilles heel is the single most defining thing about Wi-Fi performance.
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
Not true with newer standards:
> Orthogonal Frequency-Division Multiple Access (OFDMA) is a multi-user wireless transmission technology that divides a single Wi-Fi or cellular channel into smaller subcarriers called Resource Units (RUs), allowing multiple devices to transmit data simultaneously.
[…]
> Instead of one device occupying the entire channel (as in OFDM), OFDMA allows parallel transmissions. As a result, network congestion decreases significantly.
> In addition, the 802.11ax standard defines the smallest subchannel as a resource unit (RU), which includes at least 26 subcarriers and uniquely identifies a user. The resources of the entire channel are divided into small RUs with fixed sizes. In this mode, user data is carried on each RU. Therefore, on the total time-frequency resources, multiple users may simultaneously send data in each time segment, as shown in the following figure.
OFDMA just makes the channels smaller. Sure, there are now 10 transmitters on channel 5, but there's still only one transmitter at a time on channel 5.1, one on 5.2, ... and each 'channel' has 1/10th the capacity of "channel 5".
Yes, and? If a device only needs 26 tones, that's what will be assigned; if it needs 52 or 106, then that will be assigned:
> RU allocations can happen with a combination of tones. For example – if there are three stations associated, then the AP can assign 106 tones to the first two users and 26 tones to the third user. The AP can also assign 52 tones to the third user. These RU allotment decisions are dynamically made by the AP based on the client’s traffic type and its available amount for transmission. The AP learns the client’s buffer status by using a periodic sounding mechanism.
> In the first scheduling interval, the AP allocates the whole 20 MHz channel—a single, 242-tone RU—to Client 1. And in the third interval, it allocates two 106-tone RUs to Client 2 and Client 3.
Why give one client more than it needs (when another client can also share the transmission time slot)? If it happens to need the entire x MHz channel, it may be given it (all the RU tones).
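Here's a toy scheduler sketch of that idea. It is not any vendor's actual algorithm, and the RU-combination rules on a real 20 MHz channel are more constrained than this; it just illustrates "size the RU roughly to each client's buffered data and fit as many clients as possible into one interval":

    # Toy OFDMA scheduler: split a 20 MHz channel's 242 usable tones into RUs
    # sized roughly by each client's buffered data. Simplified on purpose.
    RU_SIZES = [242, 106, 52, 26]   # common RU sizes, in tones

    def allocate(buffered_bytes, tones_left=242):
        """buffered_bytes: {client: queued bytes}. Returns {client: RU tones} for one interval."""
        total = sum(buffered_bytes.values()) or 1
        allocation = {}
        for client, queued in sorted(buffered_bytes.items(), key=lambda kv: -kv[1]):
            wanted = max(queued / total * 242, 26)
            ru = next((s for s in RU_SIZES if s <= wanted and s <= tones_left), None)
            if ru is None:
                break                      # no tones left; this client waits for the next interval
            allocation[client] = ru
            tones_left -= ru
        return allocation

    print(allocate({"c1": 60000, "c2": 9000, "c3": 1500}))   # {'c1': 106, 'c2': 26, 'c3': 26}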
It isn't even switched on in some early WiFi 7 routers and receivers.
As a general rule of thumb, the best version of WiFi x only comes with WiFi x+1. So for all the problems with OFDMA to be solved and ironed out, it will be WiFi 8. And for all the promises of Ultra-High Reliability, it will have to be WiFi 9.
WiFi is clearly moving closer to 4G and 5G with every version. I just hope that someday it really is good enough when there are many people using it at the same time.
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
This is a common misconception: you and your neighbour can configure the same channel, but you cannot both successfully transmit at the same time on the same channel within range of each other. Nor can you and your own AP successfully transmit at the same time on the same channel.
When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of its time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.
"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, the start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.
I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point holds whenever all transmitters and receivers (let's say 2 APs, each with 1 client) are in audible range of each other. And this is most of the time. Note that "audible range" (where the signal is such that the medium is deemed busy by the adapter) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.
That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
> There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted.
That's not correct. WiFi is "listen before talk." Radios listen to the channel, trying to decode preambles from other networks, before transmitting. In that process, they can detect other signals well below the threshold where they'll consider the medium in use (the CCA threshold). If you have an otherwise clean channel, the noise floor might be -95 dBm. Radios typically can decode the preambles 3-4 dB above the noise floor. Conventionally, the WiFi standards set the CCA threshold at -82 dBm. So the radio can "hear" a lot of signals that won't cause it to trigger collision avoidance. More recent standards allow using a CCA threshold as high as -62 dBm under certain circumstances to facilitate spatial reuse: https://arista.my.site.com/AristaCommunity/s/article/Spatial....
Also, what the Wifi standards do is less aggressive than what radios could do. The CCA thresholds are set to facilitate orderly use of the spectrum--they're not physical limits. To receive a transmission, you just need sufficient signal-to-noise ratio. An adjacent network transmission raises the noise floor, but if your radio is close enough to your AP, you might still have sufficient SNR.
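A little sketch of those thresholds as described above (typical values, not taken from any one standard's tables):

    NOISE_FLOOR_DBM = -95
    PREAMBLE_DETECT_DBM = NOISE_FLOOR_DBM + 4   # preambles decodable ~3-4 dB above the noise floor
    CCA_BUSY_DBM = -82                          # conventional clear-channel-assessment threshold

    def classify(rssi_dbm):
        if rssi_dbm >= CCA_BUSY_DBM:
            return "medium busy -> defer (collision avoidance)"
        if rssi_dbm >= PREAMBLE_DETECT_DBM:
            return "audible (preamble decodable) but below CCA -> may still transmit"
        return "below detection -> just raises the noise floor"

    for rssi in (-60, -85, -93, -100):
        print(rssi, "dBm:", classify(rssi))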
At my inlaws house, they and all the neighbors have Comcast, with routers that don't allow configuration of the channels. And since Comcast doesn't know how to configure their routers properly, all neighbors are sharing the same channels on both 2.4 and 5. It's fine if you are in the room near your own router, but it works poorly on the other side of the house, where I pick up neighbor signals at the same level as the desired one.
Only if the difference in signal power is high (>40 dB). It’s like saying collisions aren’t a problem in situations where no collision actually occurs.
If I’m in the room with one of my APs, my closest neighbor is a hair under 40 dB lower. But I can see a dozen other networks on my street, which means the other signals are strong enough that my phone can decode the packets.
The point is that wireless networks can use not only the channel dimension, but the spatial dimension. That’s the basis of things like MIMO.
Yes, that helps quite a lot in practice because in most places there's limited "frequency-domain" capacity (i.e. free channels) but plenty of "time-domain" capacity (i.e. free air-time). So even if you are sharing a channel with 4 other APs and their users, everybody may subjectively feel the network is fast. When chopping up the time domain into nanoseconds there's just a lot of idle time available, even if clients are pulling down files at 600Mbps.
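As a rough sanity check of that time-domain headroom (the numbers are just illustrative and overheads are ignored, so this is optimistic):

    # If frames go out at 600 Mbps, a client pulling 100 Mbps of actual data
    # only occupies the channel for a small fraction of the time.
    APP_THROUGHPUT_MBPS = 100
    DATA_RATE_MBPS = 600
    print(f"channel occupied ~{APP_THROUGHPUT_MBPS / DATA_RATE_MBPS:.0%} of the time")   # ~17%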
But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.
> But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded.
Yeah, 6 GHz doesn't have DFS channels, which remove a lot of usable channels on 5 GHz. Unfortunately it'll be a while until most devices support 6 GHz.
> Unfortunately it'll be a while until most devices support 6 GHz.
Per this May 2025 Juniper presentation, half of their deployed APs have 6 GHz enabled, and at least 20% (but as much as 50%, depending on the environment) of clients have 6 GHz:
Corporate environments (where client hardware is more standardized) have higher 6 GHz adoption; BYOD environments (universities) have lower adoption.
So I'm not sure what you'd define "a while" as, but it's probably already the majority at most workplaces, and will be for personal stuff within a year or so.
This is clearly a well-timed loss-leading strategic market share grab! Anthropic have blown a lot of user trust in the last couple of months.
But, overall, the current AI pricing is completely unsustainable across all AI companies, except via the exponential growth they are relying on. Dylan Patel did the most insightful analysis of this I've come across: https://youtu.be/mDG_Hx3BSUE?si=nyJu4adwYCH1igbJ
Really feel like the current versions are for sure "good enough". That's not how market capture is gonna function though, and they are gonna keep pushing because the only moat is to stay ahead, so the problem's gonna stay strange. At some point more compute isn't a reasonable answer and optimization is, and my feeling is we are well past that point from a product perspective, but IPOs etc etc.
The only moat is the US trying to buy all the compute hardware in the world for the next two years. Then China, AMD, etc. are just making their own chips.
So I think the current generation of models are arguably all about the same in terms of capability. However, the requirement for exponential growth I mentioned is all about the economics.
AI companies are trying to ride a growth wave where the income curve lags the expense curve by 1-2 years, and at the same time investing 10x their historical income on next year's projected demand.
Everyone is selling their API calls at a loss, because to capture the investment required to scale the business up and the costs down, you need to grow your market now (in relative and absolute terms). And history shows, that in big tech you often have winner-takes-all situations, or, at least a couple of big firms will dominate, and the others will die. That's where market share becomes a key strategic goal.
But to secure that, they also need to be building next year's compute now. And if their anticipated compute needs are 10x this year's, they've got a serious funding problem, one that can only be filled by capital with an appropriate risk appetite. You can only get this high-risk capital when the potential payoff is even more enormous, or when it's a smaller bite of a much bigger pie. Hence MS putting money into OpenAI and so on. But the investment needs are getting so big that we are starting to see some pullback from more conservative sources, but also record deals from others.
Now say an AI company does get the capital they need to grow. Well, they've still got a very serious supply problem. RAM, GPUs, water, electricity etc. Hence why there's a lot of deals and cross-investment going on - everyone is trying to secure resources and lower their overall risk exposure while keeping a foot in every possible door, so they can switch alliances whenever it's expedient, and because collaboration also helps the overall market to grow.
This all explains to me why the industry _needs_ the hype. These companies can't exist without it, because the money they need to sink in, in order to even be around in 18 months, far outstrips all reasonable financial practices. So it's capitalism on steroids or nothing. If you believe the AI story, then to that extent, it's rational.
But note that nowhere in this scenario does it suggest the actual consumers will be getting a consistent product at a consistent price!!!
Don't take it too seriously. It's essentially AI slop. The entire draft mentioned nothing of subnetting under the new addressing schema, so I take it as essentially fraudulent on that point alone. This will naturally expire in a few months.
> However, people and vendors focus far too much on maximum throughput.
But the sticker speed is what sells..