
That Comcast is part of this coalition is hilarious. My apartment building is flooded with xfinitywifi beacons, in addition to each subscriber's private SSID. Why they insist on turning on this functionality even in urban environments boggles the mind.


Buy RF-blocking window film if you are having so many issues. http://www.slt.co/Products/RFShieldingWindowFilm/RFWindowFil...

There's also a wallpaper (and even paint) version of this, which isn't terribly expensive if you are living in an apartment building with exterior drywall instead of concrete.

This is pretty much the "civilian" version of the stuff government agencies use to RF-proof undercover sites that couldn't be fitted with active jamming. It works extremely well.

I have one in the bedroom; I put it on because BT set up one of their metro WiFi towers in line of sight of the window. It also blocks all the other WiFi SSIDs from outside.


But would that not risk breaking your cellular signal?


It didn't affect cell reception for me; you should be able to find a film that only blocks 2.4GHz and above and leaves the lower bands alone.

LTE can use 2.5GHz, but it's not that common; 1800-1900MHz and 800-900MHz are still the most commonly used cellular bands.

I also plugged only a single window. If you check which walls/windows leak the most and plug just those, it should still allow cell reception through.


Any way to tell if a particular window (say at a hotel) is using this film?


I would assume putting a Bluetooth device on one side, then trying to connect from the other.

Or look really closely and hope you can see it.


I'm actually torn on this. I mean, I like that they are trying to push for WiFi everywhere. But I kind of think they should restrict it to certain businesses (malls, pizza joints, etc.) and not residential. I don't ever find myself without WiFi at home unless the entire freaking neighborhood is out too (common with Comcast).


I thought that xfinitywifi was just for Comcast subscribers. You can't call it "wifi everywhere" if it's only usable by certain people.


Excellent point.


I think their goal is to get coverage of commercial areas partly through nearby residential subscribers, since many of them are in mixed-use areas. Even if the pizza joint doesn't have Comcast, the apartment upstairs (or across the street) might. So Comcast has a better chance of getting xfinitywifi coverage to places like the pizza joint if they turn on the functionality for residential routers too.


I would assume that the bandwidth used by those beacons is pretty small and negligible when compared to the total available bandwidth.


20 or so SSIDs beaconing adds up to a lot of airtime. Consider:

  * beacons are sent at a low "common" rate, perhaps 6Mbps
  * beacons still include the preamble, and the DIFS before that
  * 802.11n/ac get high throughput by aggregating multiple frames
So all those trivially small beacons use little bandwidth but a significant amount of airtime (which represents a significant amount of potential bandwidth for faster clients with multiple frames to send in one aggregate).

https://en.wikipedia.org/wiki/Distributed_coordination_funct...
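
To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The frame size, beacon rate, and timing values are typical assumptions, not measurements:

  # Back-of-the-envelope beacon airtime. Assumed typical values:
  # ~300-byte beacon frames, ~10 beacons/sec per SSID (102.4ms interval),
  # 50us DIFS, and a ~20us preamble at the 6Mbps OFDM basic rate.
  def beacon_airtime_us(frame_bytes=300, rate_mbps=6.0,
                        preamble_us=20, difs_us=50):
      payload_us = frame_bytes * 8 / rate_mbps  # bits / (bits per microsecond)
      return difs_us + preamble_us + payload_us

  ssids, beacons_per_sec = 20, 10
  total_us = ssids * beacons_per_sec * beacon_airtime_us()
  print("airtime lost to beacons: %.1f%%" % (total_us / 1e6 * 100))
  # -> roughly 9% of every second, before any user data is sent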


That's true. Most 2.4GHz networks actually beacon at 1Mbps, because they are typically configured for compatibility down to 802.11b and beacons are transmitted at the lowest enabled rate.

Here [1] somebody collected measurements of the airtime eaten by beacons under a few configurations. Not a complete disaster, but still somewhat significant. For example, at the place where I am now, I'm receiving 67 beacons per second (all at 1Mbps), which, according to those calculations, wastes 17% of airtime.

[1] http://wifinigel.blogspot.com/2013/08/its-well-known-rule-of...
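
The same kind of arithmetic roughly reproduces that figure (the beacon size here is a guess at a typical ~280 bytes; 1Mbps DSSS uses the 192us long preamble):

  # 67 beacons/sec at 1Mbps DSSS: 50us DIFS + 192us long preamble + payload
  per_beacon_us = 50 + 192 + 280 * 8 / 1.0  # ~2482us each
  print("%.0f%% of airtime" % (67 * per_beacon_us / 1e6 * 100))  # -> ~17%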


That's a great link / experiment, thanks.

By just counting the literal airtime of the beacons, I think it underestimates the effect a bit, because it doesn't account for contention for the remaining airtime, which would show up as increased collisions and small delays that (sorry to be hand-wavy again) can add up. I think if he ran some application-level tests at the same time (perhaps iperf, perhaps something more sophisticated) he would see a bigger impact on "goodput".
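
For example, something along these lines (using the third-party iperf3 Python wrapper, against an iperf3 server you control; the address below is a placeholder) would give a goodput number to set against the raw airtime math:

  # pip install iperf3; assumes an iperf3 server you run yourself
  import iperf3

  client = iperf3.Client()
  client.server_hostname = '192.168.1.10'  # placeholder LAN test box
  client.port = 5201
  client.duration = 30
  result = client.run()
  if result.error:
      print(result.error)
  else:
      print('goodput: %.1f Mbps' % result.sent_Mbps)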


It just adds to the 2.4GHz channel congestion hell. Pretty much everywhere I've lived in the past 3 years, 2.4GHz has been nearly unusable.


It makes things look more crowded, but unless people are actually using the Comcast WiFi, it isn't adding to congestion. The periodic SSID beacons are an annoyance, but they use negligible bandwidth. That said, don't ever use an ISP-provided access point.


>That said, don't ever use an ISP provided access point.

Because the hardware is often inferior to what you can pick up for ~$200 at a tech store, not to mention the control you give up with an ISP AP.


Regarding "inferior": It's fast enough and it doesn't crash (at least for me). That's probably "good enough" for most people to not care if there's a better solution.

"lack of control": The Comcast provided hardware gives me an IP via DHCP and I'm able to disable all those useless firewall features. So which features am I missing and why should I care about them?


For the average home user, sure, the ISP AP may be fine. But for the average reader of HN, I suspect it would not be.

I can't use an AP that does not allow me to fine-tune QoS and just tinker in general if packets are not flowing in the way that I desire.


I have quite a bit of networking knowledge and also used to work at an ISP for several years. Back then I wouldn't have used any router that didn't run my custom OpenWRT build with QoS settings, etc.

But times have changed: on my old ADSL line, QoS made a huge difference due to congestion (and the large buffer sizes); SSH was nearly unusable while uploading a file. My current Comcast line is so fast that I don't experience any issues due to congestion - so how would I benefit from QoS?

Another difference is that in the past consumer-level hardware was just unreliable: I remember an old Netgear router that just crashed if there were too many concurrent TCP connections, as its NAT table would overflow. I haven't seen those issues in a long time now.


> "My current Comcast line is so fast that I don't experience any issues due to congestion - so how would I benefit from QoS?"

Are you sure you don't have a few hundred milliseconds of bufferbloat in your cable modem, as is the case for almost all modems out there?
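
One quick way to check: compare ping times on an idle line against ping times while the uplink is saturated. A rough sketch, assuming a Unix-style ping and some TCP sink you can upload to (the host/port below are placeholders, not real endpoints):

  import re, socket, subprocess, threading, time

  def avg_rtt_ms(host='8.8.8.8', count=5):
      # parse the avg field of ping's min/avg/max summary line
      out = subprocess.run(['ping', '-c', str(count), host],
                           capture_output=True, text=True).stdout
      m = re.search(r'= [\d.]+/([\d.]+)/', out)
      return float(m.group(1)) if m else None

  def saturate(host, port, seconds=15):
      # blast zeros at a TCP sink to fill the modem's upload buffer
      s = socket.create_connection((host, port))
      chunk = b'\0' * 65536
      end = time.time() + seconds
      while time.time() < end:
          s.sendall(chunk)
      s.close()

  print('idle RTT: %s ms' % avg_rtt_ms())
  t = threading.Thread(target=saturate, args=('192.0.2.1', 9))  # placeholder sink
  t.start()
  time.sleep(2)  # let the buffer fill before measuring again
  print('loaded RTT: %s ms' % avg_rtt_ms())
  t.join()

If the loaded number jumps by hundreds of milliseconds, that latency lands on your SSH and VoIP sessions whenever anything else is uploading.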


No - I'm not. But my connection works well enough and I don't notice any delays on either SSH or VoIP connections. So why should I invest time/energy into changing something that doesn't affect me?


> So why should I invest time/energy into changing something that doesn't affect me?

This argument has a place here, but it makes me cringe every time I see it. I find it a dangerous one if applied to other circumstances.


I'd be interested to know how much the average HN user cares about this. I certainly don't. I'm not even entirely sure how fast my home internet connection is - it's certainly fast enough for me to stream Netflix etc as well as connect to a VPN for working from home. I don't really care about anything else.

(not that my internet connection is always great - it isn't - but that lies with my internet provider, who I have no choice over, so...)


I was going to say this, but you beat me to it.

I think there are a lot of smart people on HN, but that doesn't mean they're all interested in the same things. Tweaking WiFi settings just seems like a waste of time to me, and boring to top it off. My interaction with my AP stopped at setting the SSID and passphrase.


Tweaking QoS settings is largely a waste of time nowadays, but only if your router is running an OS new enough to include modern self-tuning QoS algorithms (and preferably a community-maintained project like OpenWRT, because the commercial vendors screw up their deployment of said algorithms).

Even if you aren't going to be hacking on your router much, it still definitely pays to ensure you're using hackable hardware.
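
For instance, on current OpenWRT the SQM packages (sqm-scripts / luci-app-sqm) wrap those self-tuning algorithms; a minimal /etc/config/sqm along these lines enables cake (the interface name and the kbit/s rates are placeholders for your own link, typically set to ~85-95% of measured line rate):

  # /etc/config/sqm -- sketch only; replace interface and rates
  config queue 'wan'
          option enabled '1'
          option interface 'eth1'
          option download '90000'
          option upload '9000'
          option qdisc 'cake'
          option script 'piece_of_cake.qos'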


> Even if you aren't going to be hacking on your router much, it still definitely pays to ensure you're using hackable hardware.

This seems to contradict itself - how does it pay if you're not going to be hacking?


Because you get the benefits of those who do hack, simply by installing a recent stable version of OpenWRT. It's no harder than upgrading and configuring the vendor's firmware. If you tie yourself to vendor firmware, you're tying yourself to 5+ year old kernels, with all the associated security, performance, and stability problems, and a much more restricted feature set.

The commercial vendors do an absolutely horrible job of supporting or maintaining their products and they tend to get a lot wrong with the initial software release. The OpenWRT community does a great job of putting out a solid product that works and has sensible defaults, but you can only get the full benefit of their work if you buy hardware that is open to their hacking.


My benchmarks of speed at this point are basically:

- fast enough to run Netflix and have a work VPN going at the same time

- fast enough to run Netflix, download something on Steam, and have a work VPN going at the same time

- fast enough to run Netflix, download something on Steam, have Dropbox sync multiple GB to a new computer, and have a work VPN going at the same time


Slightly off topic, but I was surprised to find that my FiOS-provided modem/router/AP gave me a root shell out of the box, and Actiontec has published the full toolchain to build for it.

http://paste.click/QEYrXr (Note the uid/gid)


With some U-verse subscriptions, you have to use the ISP-provided access point (at least as a modem, not necessarily for wireless) because the TV service is delivered over IPTV and the U-verse modem is needed to do DNS for the TV (you also can't use an arbitrary alternative DNS server).


> That said, don't ever use an ISP provided access point.

Why not?


Because they're running software you do not control on what is generally sub-par hardware.

Good, enterprise-grade equipment for home usage is cheap enough now that there's no excuse: it will have no problems and lets you turn all of the knobs.


In addition to the other answers: many ISPs provide their hardware on essentially a rental model. The benefit is that if you complain loudly enough they'll replace it on their dime; the detriment is the same as with any rental, though: they typically overcharge you versus what you would pay if you bought it yourself. Your mileage will probably vary, but I've certainly saved money in the long run owning my own equipment, even paying for my own replacements after storm damage and the like.
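
With assumed round numbers (say ~$10/month rental versus ~$120 for decent gear; these are guesses, not quotes from any ISP), the break-even math is short:

  # Assumed prices, not actual ISP pricing
  rental_per_month = 10.0
  purchase_price = 120.0
  print("break-even: %.0f months" % (purchase_price / rental_per_month))
  # -> about a year; after that it's savings, minus any replacements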


Another consideration is that they will often use the fact that you don't use their hardware as an excuse to stop troubleshooting, since they "don't support 3rd party hardware."


It's fun when you get the opposite too, and they follow their support scripts regardless of your 3rd party hardware and above-average knowledge of the subject:

Support: On your hardware, do x.

Me: Did it already. It reported "Y".

Support: Please do it again, following these scripted steps [which don't match my actual hardware]...

[I pretend to follow along.]

Me: Done. It reported "Y".

Support: Oh, well then...


I'm sad that wifi-blocking wallpaper seems to have never taken off:

https://en.wikipedia.org/wiki/Stealth_wallpaper


That's cool tech. If it really provides decent attenuation of WiFi without affecting other radios, I think it may still take off in the future. There will come a point when 2.4GHz WiFi doesn't work at all anymore.


How do you get to this conclusion? The density is probably not going to get much higher than it is already, but the technology is going to continue to improve.


2.4GHz technology has already stopped improving. It's just getting more crowded as "the Internet of Things" trend gets more popular, flooding the 2.4GHz band with even more devices that—owing to their low-power design—often support only 802.11b or g and only at low rates, making them horrifically wasteful of airtime.


More high-def everything, people moving from cable TV to streaming, WiFi displays, ever-growing software updates, realtime online games... Growing expectations may easily cancel out the technological improvement.

Maybe wallpaper is too much of a PITA, but what if it came preinstalled in the walls and floors? Why not?


Do the wallpapers/paints block cell signals at all?


They can, if they form a Faraday cage.



