This is incredible. We use Qt 5.12 for an embedded device application, this issue has been a really weird one for us, and this spot-on resolves it! See, browsing HN at work does pay off!
The number of hours wasted by this Qt bug is immeasurable. I'm one of the commenters on the issue from 5 years ago, and people are still being plagued by it.
Qt should've posted a public advisory announcement when this was discovered. Instead it has plagued users for nearly a decade now because very few Qt developers even know that their Qt5 apps are ruining the WiFi connections of their users.
Would be nice if a major distro publicly mitigated this.
I think it's worth taking a step back to look into why addressing this took so long. This isn't some small UI change; it negatively impacts system performance for so many users.
I think the reason it took so long is that it was actually working as designed. The only reason it got fixed is that Qt 6 removed the offending feature completely.
However, the result of those polling requests is that the OS might opt to drop everything else to complete the network scan. Some incomplete workarounds were implemented, mainly revolving around disabling the feature on affected systems: https://codereview.qt-project.org/c/qt/qtbase/+/214029
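For apps you control there is also an app-side mitigation; a minimal sketch, assuming your Qt 5.x build's bearer plugin honors the QT_BEARER_POLL_TIMEOUT environment variable (it shows up in the Qt 5 bearer sources, but verify against your version):

    #include <QCoreApplication>
    #include <QNetworkAccessManager>

    int main(int argc, char *argv[])
    {
        // Must be set before the first QNetworkAccessManager exists;
        // -1 disables the bearer poll timer entirely (assumption: your
        // Qt 5.x bearer plugin honors this variable).
        qputenv("QT_BEARER_POLL_TIMEOUT", "-1");

        QCoreApplication app(argc, argv);
        QNetworkAccessManager nam; // no longer kicks off periodic WiFi scans
        // ... use nam as usual ...
        return app.exec();
    }

Deleting the bearer plugin folder from the Qt install is the other workaround people have reported, at the cost of losing bearer features for every app using that Qt.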
I have a company-issued laptop with some corporate spyware installed. I'm not actually required to use it for development, so I don't use it. But I have to switch it on from time to time or else I get a nice email from IT.
Anyway whenever I switch it on my wifi goes to shit. Apparently it does some SSID scanning every 5 seconds and then keeps sending the scan result to the "mothership". So I switch it on once or twice a week for an hour or so to do its spying thing.
probably something like "hey, it looks like updates haven't been installed on your corporate laptop in the last three months. Please fix that or we're going to ban it from our systems."
My company "allows" USB sticks, but they will encrypt any files on them with a key tied to that machine. Had a tech update a config file right after this was silently rolled out, and poof, the line was down for the day as the stick couldn't be used to reimage systems on the embedded PCs.
That's what I do, although with OPNsense, but that's the easy part. Also, a cheap managed switch works well enough for this purpose.
The main issue I had was that most "consumer" access points don't support multiple SSIDs with separate VLANs. In the end, I went with a Netgear WAX something that can support 4 SSIDs, each with a dedicated VLAN (+ a separate management VLAN). But it's more expensive than "normal" APs with similar performance.
I've had something trigger every device into asking 'who has x.y.z IP, tell [MAC]'. It made the network unusable, and even after rebooting systems it would come back, since as soon as one device asked the question (sent the broken ARP packet), all the other devices decided they too needed to know.
The solution that worked was to flip the circuit breaker for the whole house and reset every network device at once.
I have had 2 external USB-power-passthrough laptop dock/hub things with an Ethernet port. They both cause a packet storm on the network if you unplug the computer and leave the Ethernet and power plugged in. It causes all my crappy Realtek NICs to overheat and flake out. Not exactly the same, but super annoying.
I had the same issue with a USB Ethernet adapter plugged into a powered USB hub: disconnecting the computer would make the network crap out. Kind of defeats the purpose of having a docked setup with wired Ethernet if one can't undock it, or needs to unplug tons of cables each time.
Yep. There’s actually a known issue with the CalDigit USB C Pro dock that causes this. It’s when the laptop comes up from or goes to sleep and it takes down the whole network. Fortunately there’s a firmware fix for it but it took a while to put two and two together.
I think more adapters/docks with an RTL8153 suffer from this issue. E.g., I had a Lenovo USB-C Gen 2 dock that would fairly reliably take down our office network after suspend-wake. Just avoid Realtek NICs like the plague.
This freaking issue! This has plagued humanity for decades now I think. I had an older ThinkPad at work once that did that when it went to sleep. Fun for the whole fami^Wgroup of colleagues sharing the switch. Pause frames must die, leave flow control to a higher layer.
I had an Apple USB 2.0 dongle which would crash OS X when the router (DD-WRT) was rebooted, and sometimes would crash the router when the system was rebooted. It didn't do that with other combinations of OS/router.
Similar, but not exact: Debian boxes (and likely others) with failed storage will continually issue DHCP renews, as they can't write the lease file for the interface. It's a near-DDoS as far as a Windows DHCP server is concerned. Required an ARP table trace to find; Bad_Address is usually the symptom.
Depending on what network gear you're using (I'm now switching to Omada, previously/still on some sites on UniFi, but lots of even 'prosumer' stuff does this) there are specific mitigations available. All of this falls under the heading of "managing broadcast traffic", which is very important even for smaller networks. The three major categories of traffic on a network are unicast, broadcast, and multicast. Unicast is the normal case of one device talking to a single address. Broadcast involves sending a packet to every single possible recipient in the entire broadcast domain (almost always the subnet); ARP fits in here. Multicast is essentially in between: more efficient than broadcast, but can still talk to multiple devices that have signed up to hear it.
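To make the three categories concrete, here's a quick Qt-flavored sketch (addresses and port are made up for illustration):

    #include <QCoreApplication>
    #include <QUdpSocket>
    #include <QHostAddress>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QUdpSocket sock;
        QByteArray payload("hello");

        // Unicast: one specific host on the subnet.
        sock.writeDatagram(payload, QHostAddress("192.168.1.10"), 5000);

        // Broadcast: every host in the broadcast domain has to process this,
        // and on WiFi it goes out at the lowest basic rate.
        sock.writeDatagram(payload, QHostAddress::Broadcast, 5000);

        // Multicast: only hosts that joined this group care about it.
        sock.writeDatagram(payload, QHostAddress("239.255.0.1"), 5000);
        return 0;
    }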
Obviously an actual broadcast storm can take down an entire network, but excessive broadcast traffic on WiFi specifically can also suck up a huge amount of airtime for little bandwidth. Every single device has to drop to the slowest speed and stop what it's doing to listen and make sure no one is left out. Using STP/RSTP with proper values set and LACP for aggregated interfaces can help prevent inadvertent network loops. Many switches also support some kind of port isolation and explicit per-port storm control restricting the max number of packets/second for unicast/broadcast/multicast traffic. WiFi APs can use proxy ARP to cut it down in their domain too. The WAP of course already knows the MACs of every device connected to it by definition, so there isn't generally any reason not to have it answer ARP requests on their behalf and then forward the traffic itself.
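The proxy-ARP trick is simple enough to sketch. This is conceptual C++, not any vendor's firmware; the types and the send routine are hypothetical:

    #include <array>
    #include <cstdint>
    #include <unordered_map>

    using MacAddr = std::array<uint8_t, 6>;

    struct ArpRequest {
        uint32_t targetIp;  // "who has targetIp..."
        MacAddr  senderMac; // "...tell senderMac"
    };

    class AccessPoint {
        // The AP learns this table anyway as clients associate.
        std::unordered_map<uint32_t, MacAddr> clientsByIp;

        void sendArpReply(const MacAddr & /*to*/, const MacAddr & /*answer*/) {
            // Unicast the reply straight back to the requester;
            // the broadcast never has to hit the air.
        }

    public:
        bool handleArp(const ArpRequest &req) {
            auto it = clientsByIp.find(req.targetIp);
            if (it == clientsByIp.end())
                return false; // unknown target: fall back to normal flooding
            sendArpReply(req.senderMac, it->second);
            return true;      // answered on the client's behalf
        }
    };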
For home I wouldn't really bother with storm control. In the rare case something is actually misbehaving in that regard, it's not much work to go fix it properly, and I'd rather do that than mask it with storm control, where it's still happening, just not enough to cause an obvious signal.
For home wireless another trick, assuming you have a higher end model, is setting it to a different network than the wired. If you're not in particular need for slow service at the very edge of the coverage zone you can also raise the minimum data rate and then BUM traffic and wireless management frames won't be as painful even when they do occur.
> In the rare case something is actually misbehaving in that regard, it's not much work to go fix it properly, and I'd rather do that than mask it with storm control, where it's still happening, just not enough to cause an obvious signal.
While I agree with the general idea, I think that there are only so many hours in a day. If the broken device isn't easily fixable, like most IoT junk out there, it's nice to be able to limit the damage.
There are better network-only solutions for unfixable devices when you have that class of gear, such as moving them to their own VLAN, which is good practice even for properly functioning IoT devices.
Storm control is more for a "John brought in a home Netgear (which ignores but doesn't forward spanning tree packets) and created a loop between two wall ports despite spanning tree being enabled on them. Thanks to storm control, Jane is still able to use her IP phone for a meeting while the network admin investigates a monitoring alert about excessive broadcast drops on John's port, which is causing slowdowns in that network" type of use case. It's not meant to be the long-term solution to anything; it's just there for when other long-term solutions fail to kick in and you don't want a full outage until you can fix them.
Opinions, hooboy do I have them ;). I was just asked that exact question on HN a couple of weeks ago in fact, and gave it a shot in a response here [0] which still applies. But basically Ubiquiti has become a toxic dumpster fire of a company, and their product lines (UniFi in particular) have been on a downward trajectory in terms of performance, features and stability for quite a while. I had a certain amount invested in UniFi (I think the final total will end up as a few hundred devices) so it's been a staged switch, with a total change of all routing/gateway/security functions to OPNsense completed first. That bought a lot more runway; it's always been the weakest and most neglected area in the ecosystem while obviously also being pretty critical. Yet the Ubiquiti debacle has served to underline for me how valuable self-hosting is: I've been able to have a nice slow ramp and deal with their implosion precisely because UniFi/UNMS/UISP and all the hardware are fully under my control. So I've been hoping someone would come along, see the potential of the UniFi niche of the networking market, and basically copy it without all the junk. Which seems to describe Omada to a tee.
I'd actually originally (and still at many sites) intended to hold off and wait for WiFi 7 gear, because at that point a bunch of clients (and myself for that matter) will be interested in replacing WAPs anyway which is a very natural point to consider changing manufacturer as well. But a breaking point has come at a few places with a final feature which is PPSK, allowing the system to have many different passwords for an SSID that can be assigned different tags. Basically it allows having many of the benefits of WPA-Enterprise in terms of segmenting different clients onto different VLANs and revoking credentials and the like with more security and less manual work than MAB (MAC bypass) while still looking like a normal PSK scheme, which means the vast universe of brand new stuff which doesn't support 802.1x and never will works with it happily (by the same token none of that is going to play directly with using a secure virtual network or other better systems either sadly). Lower overhead and better compatibility than captive portals for non or semi-interactive devices as well. Someone hacked together a demo showing this could work on UniFi WAPs like four freaking years ago and Ubiquiti never did anything with it in favor of endless bikeshedding GUI changes to add more white space and hide important features and information (yes I'm a touch bitter).
So I'm not in the position of wholeheartedly recommending Omada yet, I don't have years under my belt there and it's relatively speaking fairly new. It has its own warts and rough edges for sure, from the software to the hardware physical design. But it can be self-hosted and the trajectory looks massively better, has already had more meaningful improvement in months than UniFi has had in years, seems to perform much better so far as well.
Of course the Venn diagram of self-hosting, herding lots of hardware with a single pane, full networking features, ecosystem richness and so on is pretty minimal in the overlap. Take away any one or multiple of those and options expand a lot, Aruba InstantOn for example.
And welp, this didn't end up "basically" at all did it, sorry about that. I am bummed by the sheer wasted potential with Ubiquiti. So it goes in tech over and over again though, we've all seen this movie many, many times.
As far as tips, I would suggest if you plan to stay on the managing-your-own-networks route to very strongly consider having the router/gateway stuff be separate and fully open source as I ended up. Doesn't have to be OPNsense, could be VyOS or plain OpenBSD or whatever else you're most comfortable with and depending on how you want to manage stuff and what needs there are for others to take over. But it's very, very pleasant to have the full spectrum of quality PC hardware available, you can get far more power for less, and you're never stuck with a critical aspect. I'd still suggest generally running that on metal rather than virtualizing it in a (semi)production network, but opinions vary there.
I've also soured on UniFi a lot over the last few years. It's crazy. The SMB / MSP market is massively underserved and instead of staying focused and capturing it Unifi is trying to compete in the enterprise market where, IMHO, they aren't going to succeed.
If we try out Omada and feel like it gives us everything Unifi does without all the feature creep then we'll do the same thing as you; EoL sites will move from Unifi to Omada. I'll always take fewer features and stability over continuous new features.
> I would suggest if you plan to stay on the managing-your-own-networks route to very strongly consider having the router/gateway stuff be separate and fully open source as I ended up.
We've used pfSense forever and have been happy with it. However, 2.6.0 has some gateway/dpinger issues where the bug fixes are slotted for 2.7.0 on the CE side, so it feels like the inevitable neglect of the CE version might be starting to kick in there. I tried to explain to them the realities of the market we're in (small businesses) along with real numbers in terms of what kind of money they could extract from businesses like ours and all it got me was an offer to talk to someone in sales and we're not buying into subscriptions (ever).
It would cost us $15k+ / year to switch to pfSense+. The software only subscription is nearly 2x the cost of buying their entry level appliance (assuming a 5 year lifecycle). IMHO, that's an indicator they don't want to support 3rd party hardware and there's a risk they'll eventually drop it.
I wish VyOS had less crazy pricing because their stuff looks nice. They used to have a $600 / year professional subscription which they don't appear to offer anymore. I think the corporate subscription used to be <$4k / year and now it's $6k. I wonder why no one wants to commit to subscriptions. How can I sell anything to a customer when I can't give a predictable price for ongoing maintenance a year into the future?
Our customers will literally go back to using their ISP-supplied modems instead of proper firewalls before they'll pay the prices everyone is asking for. I don't blame them either. They only need a fraction of the functionality being sold, and all of the fancy, expensive features everyone is using to justify sky-high pricing are simply bad value in that context because they're basically unneeded bloat.
No problem, glad to chat about this and commiserate a bit with someone in a similar boat.
>I've also soured on UniFi a lot over the last few years. It's crazy. The SMB / MSP market is massively underserved and instead of staying focused and capturing it Unifi is trying to compete in the enterprise market where, IMHO, they aren't going to succeed.
Yeah, it's such a huge waste. But the CEO is a classic trouble case with near total control (owns majority of the stock and thus effectively owns the board, a few caveats in terms of protection for minority shareholders in public companies don't really stop the trend).
>If we try out Omada and feel like it gives us everything Unifi does without all the feature creep then we'll do the same thing as you; EoL sites will move from Unifi to Omada. I'll always take fewer features and stability over continuous new features.
UniFi hasn't even had "continuous new features" though! Or rather, their "features" have been stuff like reskinning the GUI or building their own entire custom silly speedtest stack, not features like PPSK, or for that matter just having normal DHCP and DNS management! Or any sort of certificate management or customization, or, like, anything of substance. On hardware, they didn't even bother to upgrade their UniFi Security Gateways literally ever, not once! The ones still selling now are the exact same hardware as the 2014 launch, with no updates in forever. They intro new stuff without ever sunsetting older stuff in general, really. Use "EA" as a popularity poll test. The dysfunction goes on and on.
I don't know if Omada will be the answer yet, but it's worth a spin IMO. I'd definitely hate having to go back to either lots of individual management or something cloud-based. I am sticking with UI for PtP/PtMP stuff for the time being though, even if some rot is creeping in there too.
>We've used pfSense forever and have been happy with it. However, 2.6.0 has some gateway/dpinger issues where the bug fixes are slotted for 2.7.0 on the CE side, so it feels like the inevitable neglect of the CE version might be starting to kick in there.
I'm afraid that company has a history of scummy behavior too :(. I think OPNsense is just plain better, but ymmv. FWIW, while it's OSS there is at least one company (Deciso) that does a "Business Edition" at a relatively sane rate (~$150/yr, less for a 3-year) with no hardware reqs (they sell hardware too but I don't suggest getting it) and does business support contracts as well. And Sunny Valley Networks does an interesting little layer 7 inspector that can plug into it. It conversely might not be good enough for your needs either, but it's not crazy. And the free version is still quite solid. As I said in another comment, the docs are pretty decent, so it might be worth giving them a glance and firing up a system in a VM.
At any rate my clients are happy now. And despite rough edges I'm grateful there are a number of essentially decent options available.
> But basically Ubiquiti has become a toxic dumpster fire of a company, and their product lines (UniFi in particular) have been on a downward trajectory in terms of performance, features and stability for quite a while.
I can't remember a time when Ubiquiti wasn't a dumpster fire. They've always seemed like the MongoDB of network hardware. For basic stuff it was okay, but "advanced" features like RADIUS, hardware acceleration, etc. never worked well. Their stuff's worked okay enough for me in the past, but it's really flared up with Sonic's 10G gear.
> But it's very, very pleasant to have the full spectrum of quality PC hardware available, you can get far more power for less, and you're never stuck with a critical aspect.
Gotta disagree here. I went with Ubiquiti because I wanted purpose built network hardware. COTS hardware tends to be more expensive and power hungry. I'm cautiously optimistic about the new Marvell (ex-Cavium) stuff but it sounds like Mikrotik is having a hell of a time with that too.
The big problem is that homelab/SOHO folks (myself included) don't want to pay the upfront cost of supporting or developing/testing modern hardware. At the price point Ubiquiti is targeting I think any entrant will be doomed to failure.
>I can't remember a time when Ubiquiti wasn't a dumpster fire.
4-6 years ago things still looked quite promising: a high-energy, great community and forums (gone now), a basic but workable issue/feature tracker (gone), lots of clearly talented engineers who were responsive and right there (gone), etc. Very good price/performance, quality and so on, particularly compared to other stuff of the time. There was a reason it got a lot of recommendations. That was certainly a while ago now though.
> They've always seemed like the MongoDB of network hardware.
Eh. Though a grim bit of irony is how old a version of MongoDB they've stuck with, amongst other things.
>COTS hardware tends to be more expensive and power hungry
Not enough to matter to me, I guess. For applications like this one can find solid low-power hardware, Xeon-Ds for example, that consume tens of watts at max load and idle plenty low. Or truly embedded stuff at single digits. There are enormous floods of stuff available that is a few years old for dimes or even pennies on the dollar. I replaced all my USG-4Ps with a few-years-old Supermicro Xeon-D front-network-port systems I got for around $350. An extra 20W 24/7/365 is something like $20-30/year; for a device this important that's well worth it to me. And there's no comparison, it's a slaughter. Real systems have actual BMC/IPMI, which is super handy if things go wrong. The performance is just ludicrously higher. It's trivial to have more RAM, the system runs off of mirrored ZFS drives I can count on, it can do HA, etc etc. All the normal tools are available.
I'd even take an old Celeron or Atom system over a USG though. Even without going to the second-hand market, something like Protectli's basic J3060-based FW2B would still be better IMO, and has a max power of 12W, fanless. "Purpose built network hardware" doesn't mean much; hardware VLAN filtering or offload for CRC/LRO/TSO are things a decent NIC can do, or even plenty of built-in stuff.
For COTS stuff you can go cheap or powerful, but I've not seen anything that comes close at the $50 price point that the ER-X (or Mikrotik equivalent) goes for. The Protectli stuff is larger because it's mostly a giant heat sink. Intel's just not at the right point on the price/performance curve for fanless, and it's all way more expensive. The four port ER-X uses, what, a 12W power supply? Give me something more pared down. I don't want SATA ports, an integrated GPU, WiFi, LTE, whatever. The new Cavium (now Marvell) ARM stuff is interesting, but unobtanium stateside. For reference I bought into the ER-X when I was dealing with a couple different locations and I even bought one to toy around with.
As for Ubiquiti, I remember them thumbing their noses at the GPL and introducing security bugs in u-boot or having no idea how to fix the issues with RADIUS. They had no clue how to work around Cavium bugs (or get support from Cavium). I remember a company that shipped products based on a lightly skinned Vyatta using an end-of-life'd version of Debian (note that Debian ended support specifically for MIPS). I remember a company that shipped products (ER-X, ER-L) that ran hot enough to reliably cook themselves. And, yeah, Mongo. It was just a mess top-to-bottom.
> 4-6 years ago things still looked quite promising
This is very true. However if you bypass the Dream Machine and just use some other router or an EdgeRouter it’s ridiculously reliable in the home setting.
There are a heap of nice features in the UDM line, but wow does it need polish and stability.
My wifi is the best I've ever used by a million miles, but the UDMP crashes and has weird behaviour that requires heroics to repair.
After a firmware upgrade my UI switches have been dropping off the network randomly (they don't respond to ping or ssh) and stop switching traffic! I have to hard reset the switch to get traffic flowing again. Turning off a lot of the features as their forum suggested seems to have mostly worked, but I still have issues now and then. I have even downgraded a little too but, eh. I really regret buying them, but I'm not too sure where else to go for switches that are.. not super complex for everyone to manage.
I did the same: originally planned on UniFi, but since I couldn't get any I tried Omada. It's very nice; the only issue I've had is my mesh not connecting, but I solved that by reorienting the target AP and boosting the source AP's radio. I think I have some tuning left though, because my devices don't switch APs as fast as I'd like.
Not ARP storms per se, but I've noticed new Apple devices don't like the router forwarding port 53 to Pi-hole, resulting in a Bonjour storm and crashing Pi-hole instances (losing DNS for the entire network).
I'll try. I didn't have a good experience with my earlier bug reports to Apple, and now I'm completely out of that ecosystem. In fact, I found this issue when my guests had iOS devices.
Anyways here's the regex blacklist I use in PiHole to prevent Apple's reverse DNS spam -
Not sure if it's something similar, but I had an issue where attempting to set up a wifi smart plug locked up the router for a minute (until the smart plug gave up trying to connect). Wired Ethernet still worked, but the router showed 100% CPU usage on its management interface and the 2.4GHz wifi stopped working (didn't check 5GHz). I didn't dig in more because my wife was in a Zoom meeting.
Hah, yeah I did originally think something along those lines, but I wonder if you could actually do it non-comedically (e.g. somehow every node connected to the internet has to all be switched off at the same time and restarted to restore connectivity). Most likely it's already been tried too I guess.
I was thinking more along the lines of a rogue self-replicating packet where every last instance of it had to die before routers and switches etc. would start working again.
I wish I could decisively turn off airplay on macos.
It's the source of so many weird issues.
For example, locked down mac, using wifi at a friend's house and their LG tv shows up as an airplay mirroring device. Why should my machine be discovering that TV without me asking? When I'm on a public network, I'd like to make my machine output-only, not promiscuous in this way.
There was also an issue where a MacBook would randomly lose its onboard sound and somehow default to using a nearby Apple TV as the output device.
> I'd like to make my machine output-only, not promiscuous in this way.
The TV advertises itself on the network, so it's the one being promiscuous. Your machine is still being passive; it just shows you the devices that have advertised themselves.
No idea about the sound thing though, I don't use any Mac stuff :)
macOS is so frustrating at times: there are some cool features that are gated by magic, so not only do they sometimes not work and offer no diagnostic resolution, they also cripple networks.
I recall disabling the `awdl` adapter can stop a lot of this behavior, at the cost of breaking these systems. I used to do this when using Moonlight, as it would cause periodic ping spikes.
My most unusual wifi issue was on a system that said it had a great connection (SNR) and was running at high speed, but would just not pass traffic if it was further than about 1m from a base station.
Turned out to have multiple antennas, and the transmit antenna was broken, so it could receive just fine, but not transmit over anything but short distances.
Sometimes the physical layer is the problem, even if the logical layer says everything is fine.
I'm surprised there isn't a mechanism for the link peer to report the SNR to the sender from its perspective.
I've had the same issue without any antenna troubles - Mac would constantly connect to the 5GHz network and struggle to send any packets out, yet the displayed signal strength was good. It turns out it was able to "hear" the AP just fine, but the AP had trouble hearing back, yet somehow there's no feedback mechanism for it to know.
I love these debugging stories, but it's a total nightmare to deal with these kinds of issues.
At the moment there's this really weird network issue we're having where iPhones are unable to play Netflix on the wifi.
Every other device works fine, but iPhone 7, 8 (2 devices), and SE can't stream Netflix.
I noticed there are other things they can't do; for example, the page for the fast.com speedtest loads but the speedtest can't be performed.
Same with the Google speedtest. The phones also can't access Apple's update server on the wifi.
Other network stuff does work fine: YouTube works, browsing works, etc.
The behaviour is consistent across the iphones and all these things work fine on multiple other devices on the same network.
I can't make sense of it at all.
Called the internet provider and they didn't know either; apparently other people had the same issue, but nothing has changed on their side of things.
Called Apple support and they are putting the blame on the network provider.
Tested one of the devices on a different wifi network and works fine.
AFAIK if you're an internet router the packets look the same no matter what device is being used so I think this must be some Apple software issue. Or maybe my router is cursed.
Apple is actually pretty good about PMTUD as a sender; at least iPhones (but I assume Macs too) will quickly fall back to sending shorter packets if they don't get an ack with a large packet at the beginning of a TCP connection. I'd guess they do it throughout the connection, but I only saw it because the network was broken; pcaps from the iPhone show a working connection, pcaps from Android show a broken connection.
It doesn't help that FreeBSD used to do MSS sensibly (thanks to a patch from Microsoft), and switched to doing it the way everyone else does. There's not a good reason to send your naive MSS on SYN+ACK if it's bigger than the received MSS: it's highly likely that the other end can't successfully send a packet larger than it can receive, but it's not unlikely that something in the middle mangles the SYN to reflect the true MSS but doesn't mangle the SYN+ACK. I could rant all day.
You can also get very good results by reflecting the received MSS minus 8 (assumed PPPoE), 20 (assumed IPIP tunnel), or 28 (assumed PPPoE over IPIP).
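In code form that heuristic is tiny; purely illustrative, not lifted from any real TCP stack (the overhead constants match the ones above):

    #include <algorithm>

    constexpr int kPppoe     = 8;  // assumed PPPoE encapsulation
    constexpr int kIpip      = 20; // assumed IPIP tunnel
    constexpr int kPppoeIpip = 28; // assumed PPPoE over IPIP

    // MSS to advertise on SYN+ACK: never more than the peer said it can
    // receive, minus headroom for whatever tunnel the path might traverse.
    int mssForSynAck(int receivedMss, int localMss, int assumedOverhead)
    {
        return std::min(localMss, receivedMss - assumedOverhead);
    }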
Yeah, not having an option to set the MTU was the reason I had to stop using the Apple AirPort Extreme when we moved to a new apartment, the symptoms were all the same as GP, limiting the MTU fixed it.
For me, it's always DNS until proven otherwise. But the difference of some sites loading, but others not makes me suspect there's a split somewhere and IPv4 vs 6 seems as likely as anything.
I second the "always DNS until proven otherwise" sentiment, but in my case it's often self-inflicted in my obsessive desire to force all devices on my network to route their DNS requests through my PiHole.
It has caused some significant weirdness, and I've had to relax the rules just to get certain things to work at all.
Xfinity had a router that did not work with the Xbox One when it came out. Wired was fine but wireless just did not work. I believe a software update fixed it and I don't recall if it was the router or the Xbox that was ultimately the issue.
Fielded a lot of grumpy calls that Christmas morning.
Why on earth would a paint program need to be interested in WiFi roaming? I can understand that Qt might have network management functions, and even that a paint program might want to check for updates or licensing via the Internet, but it does seem bizarre that they would chain through to something that enumerates through interfaces like that.
It's not; Qt helpfully started the polling automatically whenever a QNetworkAccessManager was instantiated, and that class is needed for any network access via Qt, such as fetching ads for the free version.
I don't think I would have ever figured this out, if it happened to me.
The very first thing I do when I have a problem on wifi is to remove wifi from the equation. Wired Ethernet is so much better, and so far, the problem always disappears.
I had this exact same problem with Citrix Workspace. I tried disabling Wi-Fi scanning but had mixed results with that for some reason. Now I have bought a cheap Wi-Fi extender for maybe $30 and plugged my desktop’s Ethernet jack into it. So far this is working extremely well. I disabled the other Wi-Fi extending features of the extender, so essentially it’s a huge Wi-Fi to Ethernet dongle.
I find it stupid that I have to employ such a janky solution to this.
Citrix Workspace on the Mac does not have this problem, but using an old MacBook Pro for this was undesirable for various reasons.
Turn autoconfig back on only when you restart your PC or disconnect from the network (maybe someone can automate this by checking connectivity without scanning networks, enabling autoconfig, and then turning it back off)
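A rough sketch of automating that on Windows; netsh wlan set autoconfig is the documented toggle, but the interface name "Wi-Fi" is an assumption (check yours with netsh wlan show interfaces):

    #include <cstdlib>

    // Toggle the WLAN AutoConfig scanning behavior for one interface.
    void setWifiAutoconfig(bool enabled)
    {
        std::system(enabled
            ? "netsh wlan set autoconfig enabled=yes interface=\"Wi-Fi\""
            : "netsh wlan set autoconfig enabled=no interface=\"Wi-Fi\"");
    }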
Nice debugging. Do I understand correctly that the registry reads aren't actually the cause of the problem but rather just a signal that a QNetworkAccessManager is active and causing a scan?
If so, is there a better routine to break on in the debugger to see it actually initiating a scan?
You can see a hint in the debugger screenshot. The call is not directly a registry read but goes to the IP Helper API (iphlpapi). There are functions in there that enumerate adapters.
Knowing nothing about this scanning process, I'm just assuming they first enumerated wireless adapters. So you could start with IP Helper and then explore deeper into how you tell the card to scan. There's probably some API for that as well.
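For reference, the IP Helper enumeration side looks roughly like this (GetAdaptersAddresses is the real API; the rest is a minimal sketch). If you want to catch a scan actually being initiated, WlanScan in wlanapi.dll would be my guess for a breakpoint:

    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <cstdio>
    #include <vector>
    #pragma comment(lib, "iphlpapi.lib")

    void listAdapters()
    {
        // First call just reports the required buffer size.
        ULONG size = 0;
        GetAdaptersAddresses(AF_UNSPEC, 0, nullptr, nullptr, &size);

        std::vector<unsigned char> buf(size);
        auto *addrs = reinterpret_cast<IP_ADAPTER_ADDRESSES *>(buf.data());
        if (GetAdaptersAddresses(AF_UNSPEC, 0, nullptr, addrs, &size) == ERROR_SUCCESS)
            for (auto *a = addrs; a; a = a->Next)
                printf("%ls\n", a->FriendlyName); // one line per adapter
    }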
"Highly Reviewed" and it comes with a "leave a 5 star review for a kickback" offer. A card like that is grounds for returning the thing and leaving a 1 star review IMHO.
> First, I purchased a new, highly-reviewed, wifi adapter on Amazon. It didn't resolve the issue. It did, however, come with an offer for a free 64GB flash drive in exchange for a good review. (That was a pretty terrible purchase overall)
I read this as clearly making the connection: that of course the product was highly reviewed, by people who wanted a free flash drive, even though the product was garbage. I don't see a reason to assume OP was oblivious; it's just a slightly humorous paragraph.
Why? Since when does your OS know what you want to do? Maybe I do intend to enumerate the WiFi networks around me every 10 seconds, so why would the OS bother warning me about it? Also, Windows has Event Viewer; you can see what's going on there anyway (consider that the OS warning you want).
Years ago I experienced an issue with a Cyberpower UPS software update that interfered with the system's Bluetooth, causing devices to not be able to connect, inexplicably since the software doesn't use BT for anything.
Only after much online searching did I stumble on another user with the same issue and after uninstalling it and reverting to an earlier version the issue immediately went away. I've since kept a more open mind as to what may be causing network issues.
The main problem is that OSes do a terrible job of reporting WiFi performance. RSSI is a very poor metric by itself, yet it's all that's driving the wireless signal strength indicator.
Most wireless links are constantly in a barely working mode that's only made usable by TCP and various retransmission/error-correction algorithms on top, so a lot of connections that people think are "fine" (especially when the signal strength meter is maxed out) are actually not fine at all. There is no magic and those workarounds don't work for real-time streams such as calls/games/etc, so those "fine" connections will crap out miserably as soon as users jump on a call or game.
I'm not sure if there's a way for the OS to get any more info about the link quality/stability from the network card itself, but a purely software-level workaround could be to just track all TCP sessions (should be easy, as that's already managed by the kernel), average the percentage of retransmissions across all of them (to prevent a single bad server from skewing the result) and factor that into the "signal strength" meter.
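A Linux-only sketch of that idea, reading the kernel's global TCP counters from /proc/net/snmp (the OutSegs/RetransSegs field names are real; folding the result into a signal meter is left out):

    #include <fstream>
    #include <sstream>
    #include <string>

    // Percentage of sent TCP segments that were retransmissions.
    double tcpRetransPercent()
    {
        std::ifstream snmp("/proc/net/snmp");
        std::string line, headers, values;
        while (std::getline(snmp, line))
            if (line.rfind("Tcp:", 0) == 0)
                (headers.empty() ? headers : values) = line;

        std::istringstream h(headers), v(values);
        std::string tag, name;
        long long val, out = 0, retrans = 0;
        h >> tag; v >> tag; // skip the leading "Tcp:" on both rows
        while (h >> name && v >> val) {
            if (name == "OutSegs")     out = val;
            if (name == "RetransSegs") retrans = val;
        }
        return out ? 100.0 * retrans / out : 0.0;
    }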
An easy way to test this yourself is to run a ping towards your default gateway (the IP of your router). You should see a consistently low latency (in the low single-digits range) and no packet loss. If you're seeing losses or significant variation in response times, your connection is bad and you should do something about it (get closer to the AP or install a second one).
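If you'd rather script that than eyeball ping output, here's a crude probe; it assumes the gateway is 192.168.1.1 and has something listening on port 80, so treat it as a stand-in for real ping:

    #include <QTcpSocket>
    #include <QElapsedTimer>
    #include <QDebug>

    void probeGateway()
    {
        for (int i = 0; i < 10; ++i) {
            QTcpSocket sock;
            QElapsedTimer timer;
            timer.start();
            sock.connectToHost("192.168.1.1", 80); // assumed gateway address
            if (sock.waitForConnected(1000))
                qDebug() << "RTT ~" << timer.elapsed() << "ms";
            else
                qDebug() << "timeout/loss";
        }
    }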
Definitely my biggest bugbear with relying on streaming for music. It's the one thing I use Apple hardware for (Apple TV), and I can't even sync music to it anymore, plus no way of controlling buffering has worked so far. So I'm stuck listening to music knowing it might stutter at any moment...
I think I encountered a similar issue on android. The Alibaba app was enumerating the wireless devices periodically and causing my bluetooth headphones to de-sync.
I didn't dig too far into it beyond verifying that uninstalling Alibaba fixed the problem. But it's super obnoxious that userspace apps are given enough of a handle on the wireless stack that they can break things.
I had an issue where sometimes the whole home network would lag for multiple seconds.
I noticed strange activity on all the LEDs of the ISP router/modem. A Wireshark session found that the NoMachine NX player was flooding broadcasts for local network machine discovery, and apparently the router tried to forward all those broadcasts to every connected device on the network. That wouldn't have been so bad, except there was a device with a weak wifi connection that couldn't receive them fast enough: some queue on the router filled up, and the router stopped routing anything until it had delivered those broadcasts to everyone.
It is fixed, but it requires Qt application developers to update their applications to Qt 6, or as a workaround they can disable bearer management in Qt 5.
This reminds of me that I have an Ubuntu machine which connects to the WiFi and crashes the router.
The worst part is that I have no clue what causes it and no clue how I would even get started analyzing why the router crashes...seconds after the Ubuntu machine connects to the network, via WiFi.
I just gave up connecting to that network altogether. It works fine if I use it on other networks. lol
Is it consistent? It's still not good and I'd be curious about the explanation, but at least if it's consistent with no packet loss it shouldn't cause any issues.
If I had to guess, it's because the system is temporarily pausing TDD wifi traffic while it scans the 2.4 and 5.x GHz bands to see what SSIDs are broadcasting.
It is a bit of a trade off since if you want to see every possible available AP, even the shitty ones with signal levels at like -80, you can't be noisy on your own radio at the same time as you scan the band.
Remember it's a half duplex medium.
It does it even more if you hold down option and click the wifi menu bar, to get detailed signal strength/info on the AP you're presently connected to.
Reminds me of the time at the office when the LCD screens would sometimes turn off, at seemingly random locations. It took a long time for people to realise movement caused it, then further time to discover it was when a chair's gas lift would pump as someone stood up.