That Comcast is part of this coalition is hilarious. My apartment building is flooded with xfinitywifi beacons, in addition to each subscriber's private SSID. Why they insist on turning on this functionality even in dense urban environments boggles the mind.
There's also a wallpaper (and even paint) version of this, which isn't terribly expensive if you're living in an apartment building with exterior drywall instead of concrete.
This is pretty much the "civilian" version of the stuff government agencies use to RF-proof undercover sites that can't be fitted with active jamming. It works extremely well.
I have one in the bedroom. I put it up because BT set up one of their metro WiFi towers in line of sight of the window; it also blocks all the other WiFi SSIDs from outside.
I'm actually torn on this. I mean, I like that they are trying to push for WiFi everywhere. But I kind of think they should restrict it to certain businesses (malls, pizza joints, etc.) and not residential. I don't ever find myself without WiFi at home unless the entire freaking neighborhood is out too (common with Comcast).
I think their goal is to get coverage of commercial areas partly through nearby residential subscribers, since lots of them are in mixed-use areas. Even if the pizza joint doesn't have Comcast, the apartment upstairs (or across the street) might. So Comcast has a bigger chance of getting xfinitywifi coverage to places like the pizza joint if they turn on the functionality for residential routers too.
20 or so SSIDs beaconing adds up to a lot of airtime. Consider:
* beacons are sent at a low "common" rate, perhaps 6 Mbps
* beacons still include the preamble and the DIFS before that
* 802.11n/ac get high bandwidth from aggregating multiple frames
So all those trivially small beacons consume little bandwidth but a significant amount of airtime (airtime that represents a lot of potential bandwidth for faster clients with multiple frames to send in one aggregate)
That's true. Most 2.4 GHz networks actually beacon at 1 Mbps, because they're typically configured for compatibility down to 802.11b and beacons are transmitted at the lowest enabled rate.
Here [1] somebody collected measurements of the airtime eaten by beacons under a few configurations. Not a complete disaster, but still somewhat significant. For example, at the place where I am now, I'm receiving 67 beacons per second (all at 1 Mbps), which, according to those calculations, wastes 17% of airtime.
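For reference, here's a rough sketch of that arithmetic in Python (the ~300-byte beacon size and the 802.11b timing constants are my own ballpark assumptions, not numbers taken from [1]):

    # Back-of-envelope airtime cost of beacons on a 2.4GHz network
    # beaconing at 1 Mbps. Beacon size varies with the IEs an AP
    # includes; ~300 bytes is an assumed ballpark.
    BEACON_BYTES = 300
    RATE_BPS = 1_000_000   # lowest 802.11b rate
    PREAMBLE_US = 192      # long PLCP preamble + header (802.11b)
    DIFS_US = 50           # 802.11b DIFS

    payload_us = BEACON_BYTES * 8 * 1e6 / RATE_BPS
    per_beacon_us = DIFS_US + PREAMBLE_US + payload_us

    beacons_per_sec = 67   # what I'm seeing here right now
    busy = beacons_per_sec * per_beacon_us / 1e6
    print(f"{per_beacon_us:.0f} us per beacon, {busy:.0%} of airtime")
    # -> 2642 us per beacon, 18% of airtime; close to the 17% figure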
Counting just the literal airtime of the beacons probably underestimates the effect a bit, because it doesn't account for contention over the remaining airtime, which would show up as increased collisions and small delays that (sorry to be hand-wavy again) can add up. If he ran some application-level tests at the same time (perhaps iperf, perhaps something more sophisticated), I think he'd see a bigger impact on "good-put".
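Something like this would give a crude application-level number (a minimal sketch; it assumes iperf3 is installed and an iperf3 server is already running somewhere reachable, the address below being a placeholder):

    # Measure TCP good-put against an iperf3 server and print Mbit/s.
    import json
    import subprocess

    SERVER = "192.0.2.10"  # placeholder; replace with your iperf3 server

    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "30", "-P", "4", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"good-put: {bps / 1e6:.1f} Mbit/s")

Run it once on a quiet channel and once amid the beacon storm; the difference is the part that raw airtime accounting misses.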
It makes things look more crowded, but unless people are actually using the Comcast WiFi, it isn't adding to congestion. The periodic SSID beacons are an annoyance, but they use negligible bandwidth. That said, don't ever use an ISP-provided access point.
Regarding "inferior": It's fast enough and it doesn't crash (at least for me). That's probably "good enough" for most people to not care if there's a better solution.
"lack of control": The Comcast provided hardware gives me an IP via DHCP and I'm able to disable all those useless firewall features. So which features am I missing and why should I care about them?
I have quite a bit of networking knowledge and also used to work at an ISP for several years. Back then I wouldn't have used any router that didn't run my custom OpenWRT build with QoS settings, etc.
But times have changed: On my old ADSL line QoS made a huge difference due to congestion (and the large buffer sizes). SSH was nearly unusable while uploading a file. My current Comcast line is so fast that I don't experience any issues due to congestion - so how would I benefit from QoS?
Another difference is that in the past consumer-level hardware was simply unreliable: I remember an old Netgear router that crashed whenever there were too many concurrent TCP connections, because its NAT table would overflow. I haven't seen those issues in a long time now.
No - I'm not. But my connection works well enough and I don't notice any delays on either SSH or VoIP connections. So why should I invest time/energy into changing something that doesn't affect me?
I'd be interested to know how much the average HN user cares about this. I certainly don't. I'm not even entirely sure how fast my home internet connection is - it's certainly fast enough for me to stream Netflix etc as well as connect to a VPN for working from home. I don't really care about anything else.
(not that my internet connection is always great - it isn't - but that lies with my internet provider, who I have no choice over, so...)
I think there are a lot of smart people on HN, but that doesn't mean they're all interested in the same things. Tweaking WiFi settings just seems like a waste of time to me, and boring to top it off. My interaction with my AP stopped at setting the SSID and passphrase.
Tweaking QoS settings is largely a waste of time nowadays, but only if your router is running an OS new enough to include modern self-tuning QoS algorithms (and preferably a community-maintained project like OpenWRT, because the commercial vendors screw up their deployment of said algorithms).
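For the curious, what that SQM-style setup boils down to on a generic Linux box is roughly the following (a sketch, not OpenWRT's actual scripts; the interface name and rate are made-up values for a hypothetical 10 Mbit/s uplink):

    # Shape egress to just under the link rate with HTB so the queue
    # forms where we control it, then let fq_codel keep per-flow
    # queues short. OpenWRT's SQM packages wrap this kind of setup.
    import subprocess

    WAN_IF = "eth0"        # assumed WAN interface
    RATE_KBIT = 9500       # ~95% of a hypothetical 10 Mbit/s uplink

    def tc(*args):
        subprocess.run(["tc", *args], check=True)

    tc("qdisc", "replace", "dev", WAN_IF, "root", "handle", "1:",
       "htb", "default", "1")
    tc("class", "add", "dev", WAN_IF, "parent", "1:", "classid", "1:1",
       "htb", "rate", f"{RATE_KBIT}kbit")
    tc("qdisc", "add", "dev", WAN_IF, "parent", "1:1", "fq_codel")

The "self-tuning" part is that fq_codel has no per-site knobs: pick a shaping rate slightly below the link speed and it keeps latency down by itself.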
Even if you aren't going to be hacking on your router much, it still definitely pays to ensure you're using hackable hardware.
Because you get the benefits of those who do hack, simply by installing a recent stable version of OpenWRT. It's no harder than upgrading and configuring the vendor's firmware. If you tie yourself to vendor firmware, you're tying yourself to 5+ year old kernels, with all the associated security, performance, and stability problems, and to a much more restricted feature set.
The commercial vendors do an absolutely horrible job of supporting or maintaining their products and they tend to get a lot wrong with the initial software release. The OpenWRT community does a great job of putting out a solid product that works and has sensible defaults, but you can only get the full benefit of their work if you buy hardware that is open to their hacking.
Slightly off topic, but I was surprised to find that my FiOS-provided modem/router/AP gave me a root shell out of the box, and Actiontec has published the full toolchain to build for it.
With some U-verse subscriptions, you have to use the ISP-provided access point (at least as a modem, not necessarily for wireless), because the TV service runs over IPTV and the U-verse modem is needed to do DNS for the TV (you also can't use an arbitrary alternative DNS server).
In addition to the other answers, many of the ISPs provide their hardware on what is essentially a rental model. The benefit is that if you complain loudly enough they'll replace it on their dime; the detriment is the same as with any rental: they typically overcharge you versus what you'd pay if you bought it yourself. Your mileage will probably vary, but I've certainly saved money in the long run owning my own equipment, even paying for my own replacements after storm damage and the like.
Another consideration is that they will often try to use the fact that you don't use their hardware as an excuse not to go further in troubleshooting, since they "don't support 3rd party hardware."
It's fun when you get the opposite too and they follow their support scripts regardless of your 3rd party hardware and above average knowledge of the subject:
Support: On your hardware, do x.
Me: Did it already. It reported "Y".
Support: Please do it again, following these scripted steps [which don't match my actual hardware]...
That's a cool piece of tech. If it really provides decent attenuation of WiFi without affecting other radios, I think it may still take off in the future. There will come a point when 2.4 GHz WiFi doesn't work at all anymore.
How do you get to this conclusion? The density is probably not going to get much higher than it is already, but the technology is going to continue to improve.
2.4GHz technology has already stopped improving. It's just getting more crowded as "the Internet of Things" trend gets more popular, flooding the 2.4GHz band with even more devices that—owing to their low-power design—often support only 802.11b or g and only at low rates, making them horrifically wasteful of airtime.
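To put rough numbers on "horrifically wasteful", here's a quick sketch (it ignores preambles, ACKs and contention, so it actually understates the gap):

    # Airtime a single 1500-byte frame occupies at different PHY rates.
    FRAME_BYTES = 1500

    for label, mbps in [("802.11b @ 1 Mbps", 1),
                        ("802.11g @ 54 Mbps", 54),
                        ("802.11n @ 300 Mbps", 300)]:
        airtime_us = FRAME_BYTES * 8 / mbps   # bits / (Mbit/s) = us
        print(f"{label}: {airtime_us:.0f} us on air")
    # 12000 us vs ~222 us vs 40 us: one slow IoT gadget can occupy as
    # much airtime per frame as hundreds of frames from a fast client.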
More high-def everything, people moving from cable TV to streaming, WiFi displays, ever-growing software updates, realtime online games... Growing expectations may easily cancel out the technological improvements.
Maybe wallpaper is too much of a pain, but if it came preinstalled in the walls and floors? Why not?