AT&T Wireless traffic shaping apparently making some websites unusable (adriano.fyi)
580 points by acaloiar on April 16, 2023 | 289 comments



> I already knew from previous experience that for some reason, AT&T traffic to fast.com is throttled. Why AT&T wants bandwidth to appear lower than reality is a mystery to me, but I digress

This, I think, is because they throttle IP addresses for known video streaming sites. That is one of the only reliable ways an ISP can get the streaming provider to drop the stream to a lower quality by default. Since fast.com is a Netflix IP, and the ISP can't distinguish whether it's video being transferred or a file to measure throughput, the speed test gets caught up in it. Said the other way around: fast.com is great for seeing actual throughput from Netflix, as opposed to some fake throughput from a dedicated speed test site, for the exact same reason.


> Said the other way around: fast.com is great for seeing actual throughput from Netflix, as opposed to some fake throughput from a dedicated speed test site, for the exact same reason.

Yup. This is pretty much the reason Netflix created fast.com. They wanted a speed test service that couldn't be gamed by ISPs. Many ISPs will prioritise traffic to known speed test services (like Ookla's speedtest.net), making their services appear faster than they are under more normal usage.

By placing fast.com on Netflix IPs, ISPs either have to prioritise all Netflix traffic (which they’re very unlikely to do), or accept that fast.com is going to provide a more realistic measurement of their performance.


Now that my ISP bundles a Netflix subscription into their internet plans, access to Netflix and fast.com practically saturates the fiber link, whereas before it was outright blocked. Hooray for no net neutrality, I guess.

Another fun part: when Netflix IPs were blocked by this ISP, it was pretty much impossible to use Netflix, because the only way to get around the block was a VPN, and Netflix itself blocks VPN access.


Your ISP entirely blocked Netflix? That's incredibly shitty.


Heh, just revert back to pirating. That’s what I do whenever things don’t work.


From the above thread it looks as if piracy might actually be a more bandwidth-economical option than streaming services. It spreads the load around a bit more: download at any time other than peak streaming hours, then watch, self-contained, during peak streaming hours.


Plus you can rewatch without downloading again. And use sneakernets for sharing with your friends.


I've always disliked how wasteful streaming is with respect to bandwidth consumption. However, the "efficiency be damned, just make it fasterer" attitude towards bandwidth is, if not the biggest, then definitely among the biggest, drivers behind ever-increasing internet throughput.

Thinking about it, it seems like watching habits could make streaming be more or less "efficient" compared to downloading once and storing it locally (depends on how one chooses to calculate efficiency tbh).

People that have things they regularly re-watch are obviously gonna benefit from having a local copy they can access entirely on their own terms.

It hadn't occurred to me until recently, but the way I watch stuff has me going through a pretty large amount of data, yet it's also exceedingly unlikely that I'll re-watch nearly anything I've seen in the previous 2 years. Whether something is streamed or stored locally, it has to be downloaded the first time it's watched. If it'll take 2 years before I re-watch something, it's a waste of time and money to keep a backlog of multiple disks' worth of video.


With music it's quite different though, I often have the same albums on rotation and it's great to be able to cache them all locally.


Couldn't agree more. I've stopped paying to stream music entirely and now pay to own FLAC files which I store and use on my PC and phone. Almost like regressing back to having an iPod, and I personally love it.


I never gave up my music files, and never will trust service providers enough to do so.


Huh. I typically watch the same thing over and over repeatedly for like a year or more.

Very wasteful if I did much streaming.


Early in the pandemic, I spent a while without wired internet using a wireless hotspot from the library which would not connect to Netflix but any other streaming video service was fine. I forget who the wireless vendor behind the hotspot was—I think it might have been Verizon.


So there's a bit of a weird fact about public libraries, which is that it probably wasn't on the normal internet (Internet1) but was in fact on Internet2 (see [1]).

Internet2 is used by hospitals, among other things, and as such has higher robustness requirements than Internet1. The pandemic created much higher demand for bandwidth from hospitals, forcing Internet2 providers to scramble to keep up in areas. As such, blocking high-bandwidth sites which are clearly lower priority than medical traffic might have actually been a reasonable move.

[1] https://en.wikipedia.org/wiki/Internet2


I've worked at two hospitals. Both of them "had" Internet2. I was excited, as I had not heard about it in ~20 years.

Neither of them actually ran any traffic over it.


Not quite sure I understand - what do hospitals need a ton of bandwidth for? Why would those bandwidth requirements rise significantly during the pandemic? Sure, there were a bunch of people on vents in the MICU, but pretty much every elective procedural service plummeted.


Other than telehealth, we also use the bandwidth for remote desktops/apps and lossless high-resolution radiographic images. While one might expect this traffic to occur over local networks, many hospitals have multiple sites and partnerships that are connected over the internet. So, for example, one might operate the hospital's apps while sitting in the library at the affiliated university.


> So, for example, one might operate the hospital's apps while sitting in the library at the affiliated university.

Remote Desktop and even PACS are not huge bandwidth consumers in this day and age of 4K streaming. At my large academic institution usually only the radiologists use the PACS client directly on workstations - all the other clinicians are using a “zero footprint” viewer over Citrix like everything else.


That "zero footprint" viewer over Citrix is susprisingly resource-intensive, even at 1080p with lossy compression. If everyone used it (say, if every workstation were a Citrix client), it quickly adds up. During peak hours, it positively chugs. Not sure if networking or compute is the bottleneck here. Perhaps you're right: some institutions have not caught up to this day and age of 4K streaming, and their pre-Netflix infrastructure is buckling under the weight of all these Citrix streams.


I'd imagine telehealth played a fairly large role. While crisp video calls are not that demanding individually, 10s to 100s of them at scale per location (and an exponential increase in calls in general) might be more of a problem for that sort of infrastructure.


There are very few medical centers where there are 100s of simultaneous clinic visits. During the pandemic a lot of providers did home telehealth. Even 100s of simultaneous telemedicine visits don’t take much bandwidth compared to HD video streaming. Also from a QoS perspective - these are still terminated on the public Internet.

It might have been difficult relative to the infrastructure at some hospitals - but that is to my point that hospitals in general are not very demanding bandwidth wise and I would be a bit surprised they’re more than a rounding error on the Internet2 in general - especially compared to all the academic research involving large models and just video streaming in general that blew up during the pandemic.

I’m curious if the GP was speaking from some firsthand experience or is just making some conjecture.


Spectrum used to dump all Netflix traffic to a device on their regional network. If you blocked that IP range, the service would perform dramatically better.


> Now that my ISP bundles Netflix subscription into their internet plans, access to netflix and fast.com now practically saturate the fiber link

They may have an ISP-local Netflix cache, from Netflix themselves rather than some home-grown hack, so they can achieve that with some reliability without it costing as much in peering bandwidth as it otherwise would.


Netflix distributes "red boxes" to ISPs which are exactly that: edge content caches.

(PDF): https://openconnect.netflix.com/Open-Connect-Briefing-Paper....


BTW, AT&T has terrible records for dropping customer's traffic and also email without warning or notices to their customers. Why such terrible company is around amazing how corporation don't really survive by merit in our current world.


I'm curious where you live and what provider does this weird Netflix reselling practice.


Streaming services do this all the time in Asia. Instead of paying with credit card, you pay via your carrier/ISP, sometimes at discounted pricing. I paid Netflix, Disney+, Amazon Prime and Zoom this way.



> Hooray for no net neutrality I guess

Surely it would essentially saturate your fibre if there was net neutrality, unless you don't pay for full fibre speeds?


Parent was being facetious.


What is the internet anyway? Are there truth in advertising issues around selling internet access with excessive filtering?


Netflix blocks datacenter VPNs but you can use residential VPNs (which tbf are more expensive) to circumvent this.


Mine did the same and charged for bundles of HBO Max, Netflix, and Disney+. Recently they reverted it, after a bunch of complaints and tonnes of people changing their ISP.


Hmmm, IMHO, traffic for video and everything else ought to be separated into 2 classes. Video obviously takes much, much more bandwidth than other stuff. It would be good to allow non-video, smaller assets that are needed to make a website function to have priority over video files.


It definitely looks like testing sites are prioritized. The fastest download speed that I have gotten is maybe 7 MB (bytes) per second; generally it is 2-5 MB per second. The speed test sites generally report 100 Mb per second download. In general the best I seem to get is about half the speed the speed test sites show.

To me the real speed of the ISP is how fast one can download something one actually wants, not the result of a test. I would prefer to see results for downloads/uploads from YouTube, various CDN networks, and popular sites. I would also like ISPs to have a URL inside their network to test upload and download, so that one can at least isolate which part of the connection might be lagging.

Actually, I just used devtools to snag a 25MB file from fast.com. Curl/wget gives a speed of about 3 or 4 MB per second. That does not really seem to match up with fast.com's reported download speed of 70 Mb/second. 70/8 is 8.75, which is about double. Is fast.com accurate? Is my math wrong?


I personally like https://speed.cloudflare.com since it just looks like you're doing typical CloudFlare traffic. The results viewer is also quite nice.


Very cool! I never knew about this! I really appreciate the latency during upload test. I've bookmarked this and will stop using speedtest.net. Thank you!


My ISP (Telecom) gives me a speed of 250Mbps using fast.com and 40Mbps using Cloudflare's tool.


Your ISP likely has a Netflix OCA or is directly peered with Netflix. That's standard for any decent-sized ISP, since Netflix is either its largest or 2nd largest peer by volume (YouTube being the other).


MB = megabyte

Mb = megabit

1 byte = 8 bits
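
Or, as a quick sketch with the figures quoted upthread (decimal megabytes, rough numbers):

    # Convert between megabits/s and megabytes/s (8 bits per byte).
    def mbps_to_MBps(mbps):
        return mbps / 8

    def MBps_to_mbps(MBps):
        return MBps * 8

    print(mbps_to_MBps(70))   # 8.75 MB/s -- what fast.com's ~70 Mb/s implies
    print(MBps_to_mbps(3.5))  # 28.0 Mb/s -- what curl's ~3.5 MB/s implies
    # So the two measurements genuinely differ by roughly 2x; it isn't just a
    # bits-vs-bytes mix-up.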


For years, AT&T sales reps would refer to MB when they meant Mb. Soon after I started service with them, I called to ask about the promised speed, and the tech insisted they sold Mb/s. He conferenced in a salesperson and was embarrassed to discover that the salesperson talked exclusively in terms of MB/s.


Did they just spell it with the abbreviations? Or did they pronounce 'megabytes' deliberately?


They literally said it out loud. I had spent the last 7 years on a fast university network, and had never paid for internet before that. I honestly believed that they were going to deliver the speeds they claimed.


I wouldn't have believed "megabytes" was an Internet speed, because it never is. It's literally impossible for that to be a signaling rate on any known Layer 2 technology. ISPs use Gibibytes or Tebibytes (which they misname as Gigabytes/Terabytes) for transfer caps, but that's apples and oranges.


There was a time when everyone considered a kilobyte to be 1024 bytes, a megabyte to be 1024 kilobytes, and so forth. The main reason this changed is because disk manufacturers advertised e.g. 1 billion byte hard disks as having “1 GB” of capacity and, to pretend they weren’t ripping us all off, proceeded to claim that the definitions were ambiguous. But even today, if you go out and buy an 8 GB DIMM, you are getting 8 * 1024 * 1024 * 1024 bytes.

(As a historical side note, there was even a weird transitional period where a “1.44 MB 3.5 floppy” held 1.44 * 1000 * 1024 bytes, or what everyone at the time would have called 1440 kilobytes.)

Personally I think it’s unfortunate that standards bodies and even courts have sided with the disk manufacturers. I also find the SI binary prefixes utterly revolting on an aesthetic level, so I’d much rather join whatever parts of the tech industry are still holding out.


> I wouldn't have believed "megabytes" was an Internet speed

Perhaps HNers know better, but among lay people, pretty much no one talks about megabits. For example, people know that an MP3 is about 5MB. They know that they can email attachments up to 20MB (or sometimes 10). But I've literally never heard a lay person talk about bits, or bits per second.

And the fact that ISP's transfer caps are denominated in bytes makes things all the more confusing.


I was trying to ask if there is something more that the speed tests do other than multiply or divide by 8. Is there some other overhead that they add in? If not, then their math or testing seems to be off, or curl/wget behaves differently than what they get via the browser's JavaScript engine. To me it seems that the speed test number is inflated, or at least higher than what one will get for the same URL off of the Netflix or Cloudflare URLs used in their speed tests.
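
As far as I know the browser tests measure the same thing curl does (application-level bytes over wall-clock time), so there shouldn't be any hidden overhead beyond the divide-by-8. A quick sketch to compare like for like, if you want to try it (the URL is a placeholder for whatever file you grabbed via devtools):

    import time, urllib.request

    # Time a plain download and report throughput in both units, so the result
    # can be compared directly with curl/wget and with what fast.com reports.
    url = "https://example.com/25MB.bin"  # placeholder -- substitute your own

    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = time.monotonic() - start

    MBps = len(data) / elapsed / 1e6  # decimal megabytes per second
    print(f"{MBps:.2f} MB/s = {MBps * 8:.2f} Mb/s over {elapsed:.1f}s")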


Couldn't ISPs just sniff the SNI hostname to differentiate fast.com vs actual Netflix video streaming?


The fast.com page kicks off requests to nflxvideo.net domains for the actual speed measurement. And it wouldn't surprise me if actual Netflix video streaming made occasional connections to fast.com purely to make it harder for ISPs to cheat.


You can think of the requests to fast.com as just loading the speed test control scripts and user interface. The actual speed test loads files from the same servers (with the same SNI hostname) used by actual Netflix video streaming. It wouldn't surprise me at all if the fast.com speed test loads real streaming video segments from these servers, the only difference being that it doesn't have the decryption key for these videos.


It would also not surprise me if dns requests for fast.com temporarily elevated bandwidth limits for netflix


It does. You're loading segments of real movies on Netflix that are not encumbered by copyright.

But they're intentionally otherwise indistinguishable from any other real movie on Netflix.


That's surely how I would have done it.


Yes a few ISPs actually do this (not just for Netflix, but for other sites in general).


In the case of Ookla's speedtest, many ISPs host speed test servers to eliminate any variability of the wider internet. That's why it may seem they are prioritizing it. It's not a useless tool either, as it lets you narrow issues down to either your network or your ISP.


On my ISP (Three UK), a DNS lookup to fast.com gives you faster internet for the next minute or so.

Therefore, I have a background bash loop just looking up fast.com every 30 seconds all day long.
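
For anyone curious, a minimal sketch of that sort of loop (in Python rather than bash, purely illustrative; whether it actually lifts a throttle obviously depends on the ISP):

    import socket, time

    # Periodically resolve fast.com so the ISP keeps seeing speed-test DNS lookups.
    while True:
        try:
            socket.getaddrinfo("fast.com", 443)
        except socket.gaierror:
            pass  # ignore transient DNS failures
        time.sleep(30)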


Out of interest, which Three plan (& APN) are you on?

I’m working on using a Smarty (Three UK-owned budget label MVNO, for those unaware) SIM card in a 5G modem as a backup line, and haven’t been super impressed with speeds — so curious about trying your tip.


> Since fast.com is a Netflix IP, and the ISP can't distinguish whether it's video being transferred or a file to measure throughput,

It is really trivial to do basic traffic snooping and see what people are looking at. I'm surprised it isn't more common.

I figured it would be harder, or perform worse, but I easily wrote a little piece of software that filters the TLS ClientHello for arbitrary domains. Maybe 10 years ago hardware wouldn't have been able to do this, but I bet it's no big deal now. So your filter chain just looks like <Netflix IP range> -> <has fast.com ClientHello> -> unthrottle. You don't need to do packet inspection on every packet, just ones that you might be interested in (e.g. Netflix IPs).
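
For the curious, a minimal sketch of the ClientHello step of that chain (raw TLS record bytes in, SNI hostname out; length and error handling mostly omitted):

    # Extract the SNI hostname from a raw TLS ClientHello record, if present.
    def sni_from_client_hello(data):
        if len(data) < 43 or data[0] != 0x16 or data[5] != 0x01:
            return None              # not a TLS handshake record / not a ClientHello
        pos = 43                     # record(5) + handshake(4) + version(2) + random(32)
        pos += 1 + data[pos]         # skip session_id
        pos += 2 + int.from_bytes(data[pos:pos+2], "big")  # skip cipher suites
        pos += 1 + data[pos]         # skip compression methods
        end = pos + 2 + int.from_bytes(data[pos:pos+2], "big")
        pos += 2
        while pos + 4 <= end:        # walk the extensions
            ext_type = int.from_bytes(data[pos:pos+2], "big")
            ext_len = int.from_bytes(data[pos+2:pos+4], "big")
            if ext_type == 0:        # server_name extension
                name_len = int.from_bytes(data[pos+7:pos+9], "big")
                return data[pos+9:pos+9+name_len].decode("ascii", "replace")
            pos += 4 + ext_len
        return None                  # no SNI present (or the client is using ECH)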

It's crazy to me that the many people who care about privacy and censorship in tech haven't pushed ECH (encrypted ClientHello) harder. It's such a gaping hole in web privacy that you can still passively snoop domain names sent in cleartext. It makes DoH/DoT almost pointless.


Fast.com loads from nflxvideo.net for me. They have a /speedtest/ path.

Sure, they could apply some kind of advanced DPI assessment based on packet sizes over time and other crap, but if anyone complains it's much easier to just say "well, does speedtest.net work? What about Google's speedtest? Hmm, strange, must be fast.com being slow then!". If they just check for fast.com in the SNI header, I know I'd be loading the speedtest every time I want to watch Netflix.

ECH is slowly coming along, but it's still perfectly possible to detect these speed tests. It just takes much more time and effort to set up right.

I'm honestly surprised the article shows enabling a VPN actually working to fix the messed up network here. That would be a perfectly valid solution in my opinion; if I notice my ISP is messing with my network like this, there's no way I trust them enough not to use a VPN.


This isn't applicable to fast.com as it simply makes requests to Netflix's CDN from the frontend. Indistinguishable from regular Netflix traffic.


Good point, although you could still do logic like only activating the throttle after the customer visits netflix.com. You can't distinguish the CDN traffic, but you can still tell what website is being viewed.

Incidentally, my speeds on fast.com are always terrible (about 1/8 of what I get elsewhere), despite the fact that I'm fairly confident it is not being throttled. That's because the speed I see is >100 Mbps, which is like 4 Netflix UHD streams. Wouldn't be much point in throttling to that speed, you'd want 10 Mbps max, and less on wireless.


then you might miss embedded clients, right?


People drastically overestimate the security properties of TLS.

The correct mental model is that it’s good enough to convince 1990’s US internet users to type their credit card into a web page. (Where the downside of a breach is that you have to dispute some charges and change your CC#.)

If you need stronger security than that, then many, many caveats start to apply.

For instance, by default, anyone that can reliably man-in-the-middle port 80 on your website can get an acme certificate for your domain from a reputable certificate authority.


If by “people,” you mean “people who rely on HTTPS for security,” then it seems like you’re saying that every hosted software company is getting it wrong. HTTPS is the foundation of the “zero trust” security model as implemented by Google, MS365, Facebook, AWS, etc.

I think it’s more likely you are not considering the full picture of the TLS ecosystem, or are making arbitrary distinctions like “cert transparency logging isn’t actually part of HTTPS” or something.

Consider that Symantec basically did what you suggest (mis-issue some certs) and not only was it detected and mitigated, they lost their CA business entirely over it.


>For instance, by default, anyone that can reliably man-in-the-middle port 80 on your website can get an acme certificate for your domain from a reputable certificate authority.

You are leaving out a huge caveat here - exactly where the MITM is taking place matters a lot. In 99% cases this isn't possible unless the victim server network is effectively compromised.


Is there more on this mitm cert spoofing somewhere I can read up on? Sounds intriguing to say the least


It probably isn't too different from other access methods:

1) Get access to port 80 for > 60 seconds. Point it at your temporary VM.

2) Use any cert authority, and for verification choose the file-based option (with many of them you can choose amongst DNS records, an email to admin@domain, or a specific file location on your site).

3) On your VM, quickly paste the file-specific challenge content into a Django GET route.

4) Run the Django site on port 80.

5) The cert authority verifies you and emails your account the cert, the website owner being none the wiser. You can now use it later to fool future visitors for a deeper attack or for email-related attacks.


This doesn't work unless the attacker happens to be in between your servers and the cert authority. The ISP that's in-between your laptop and the site can't pull this trick.

Also LE actually knows this attack is possible and mitigates it by validating the challenge from multiple sources so the attacker would need to be in the middle of all the LE validators and your servers.

https://portswigger.net/daily-swig/lets-encrypt-deploys-new-...


Yes that's true, but I was just talking about the scenario of having access to port 80 of a server DNS pointed at by some domain.

You might have access through editing a proxy rewrite rule, for example.

In the attack above you use your own SSL provider for a cert (LE not involved) and you overwrite, in a sense, the cert that existed before. You choose a provider that just validates with a file location. It's an attack which already requires compromise.


> The cert authority verifies you, and emails your account the cert, the website author being none the wiser.

It’s not hard to monitor cert transparency logs, which will show this new cert.


AT&T is well know for doing deep packet inspection so that they can properly throttle NetFlix. They are always at the top of the list of Net Neutrality violators that Netflix complains about.


I recently discovered that T-Mobile does this too, but they actually let you disable it on their site. Ostensibly, it's a feature for your benefit (somehow), and they're doing you a favor by enabling it by default. In reality, of course, it's for their own benefit, and they're banking on people not realizing it can be disabled. I suppose giving you the option lets them advertise things like "no throttling" and "4K streaming supported" while still reaping the benefits of throttling/lower-bitrate streaming.


>it's a feature for your benefit (somehow)

They don't count the "shaped/throttled" sites against your data plan limits, so I can see some people liking it.


They do this even with an unlimited plan.


> Ostensibly, it's a feature for your benefit (somehow), and they're doing you a favor by enabling it by default.

Some throttling by default for video actually is sort of for your benefit, at least indirectly, although I don't know if carriers are throttling more than that amount.

A surprisingly large number of people nowadays watch movies on their phones or small tablets. On screens that size, at the distances people typically hold them from their eyes, 4K and usually 1080p won't offer any visible improvement over 720p. On some devices even 720p probably won't offer an improvement over 480p.

It is to everyone's benefit if other people on the network don't stream at a higher resolution than is necessary to reach the limit of what is visible. Since most people aren't going to change any settings from the default, defaulting to low speed streams maximizes benefits.


Speak for yourself. 1080p is noticeably worse if I'm paying attention; 720p is noticeably worse at a glance; and 480p (which is effectively T-Mobile's limit by default, even though they advertise my plan as supporting "4K streaming") looks like hot garbage.


T-Mobile gives me full speed when I set my APN to B2B/IPV4.


Indeed, that's the point of fast.com: to measure the speed from Netflix's servers.


> Since fast.com is a Netflix ip

Technically the website fast.com resolves to an Akamai IP, but the speed test it loads uses a Netflix owned IP.


Back in the early 2000's, I worked for a small company that was hired by the law firm representing one of the big cable companies (let's call them Company A).

Company A was suing another big cable company (Company X) b/c A claimed that X was throttling customers without telling them and advertising what were in effect unthrottled bandwidth numbers.

In order to prove this, the company I worked for went out and hired people to get cable modem subscriptions for Company X, installed desktops that were locked down so the people couldn't use them themselves, and then collected ping and HTTP request data for months.

My job was to take that data and analyze it for throttling patterns.

This was my first job out of grad school and involved lots of:

- data cleaning and organizing using Perl

- analyzing statistics using Excel

- creating charts that could then be shown to the lawyers and in turn could be used in court

I didn't know much about databases or stats packages in Perl which, in hindsight, would have made my life easier.

I still remember being on the call with the lawyers and walking them through the data that proved that Company X was indeed throttling bandwidth at peak usage times. The one lawyer on the conference call said "Wow! This data is amazing. You guys are now basically national experts in this."

I was both:

- flattered that someone would call me a national expert in something

- scared shitless that I would end up being cross examined about how we processed this data

Turns out that the court case never happened and they settled out of court.

The reason I mention this story is that it's FASCINATING to me that this is still occurring today. I guess some things never do change.


A big part of the reason that things didn't change is because that lawsuit and others like it ended in a settlement. If this behaviour and data was established as part of the judicial/public record, it would have been much easier to address.


Businesses, like teenagers, always keep probing the limits of what's allowed or tolerated.

As long as it's an out-of-court settlement for a moderate sum, it's just a cost of doing business, a cost of R&D of sorts, only in the legal area. (Similar with taxes.)


Why pay an expert thousands of dollars to succinctly debug a problem, when you can pay a teenager hundreds of dollars to chaotically debug it?

Your company sounds like they knew the worth of high-end developers better than most big tech companies do (or, in light of all the mass layoffs, did).


Did they wait to sue until after you had your data ready? Or were they gonna sue regardless?


Fun story: I work remotely for an org located in the Bay Area. A few months ago I got a call early one morning from the IT security person. Apparently someone had been trying to RDP into my work laptop, over a thousand attempts per day for the last couple of days.

What changed before those last couple of days? My laptop was frequently dropping the WiFi connection, a lot of the time in the middle of Zoom meetings. We use a router provided by AT&T. My wife complained of the same thing, so I thought I'd ping their customer support.

The customer support person proceeded to tell us they wanted to reconfigure our router to enable "5G". After some bewilderment we realized they were talking about 2.4G vs. 5G WiFi. And they would do it remotely. OK, fine, go ahead, we said, and forgot about it... until I got the call from my office IT security.

Apparently the AT&T support person left our router in passthrough mode. According to their K/B, "Placing a device in passthrough mode will remove firewall protection provided by the AT&T gateway."

I reset my router to default settings, and got myself a 100ft Ethernet cable to fix the issues.


> Placing a device in passthrough mode will remove firewall protection provided by the AT&T gateway

Did... did they seriously turn the router into a DMZ*, without your consent? Where every port of your computer is just open to the internet?

That's scary.

*that's what it was called on my router, the "forward every single port to one device" mode. Not sure if that is the correct term for it.


Passthrough mode turns ~90% of the (DSL modem / ONT) + router + DHCP + NAT + WiFi AP + whatever functionality off, allowing you to add another home router after it which is completely under your control and let the ISP-provided one only do the most necessary, eg. (DSL modem / ONT) parts.


> add another home router after it which is completely under your control and let the ISP-provided one only do the most necessary, eg. (DSL modem / ONT) parts.

Oh absolutely, I used that trick once in order to use a router that I could configure over web interface instead of Over The Cloud With A Spyware Mobile App (aka Internet Of Shit).

Unfortunately my ISP smartened up to this and started cutting the line whenever I tried to use my router. Like, it would work for a while, then they'd shut off service, and it wouldn't work anymore (even if I removed my router) until I called them up and complained.

They would act oblivious like the problem was with my equipment. Every time I asked them why my internet stopped working they would say there was some problem with my hardware. The hardware was brand new and could not have been more fine; they were mistaken. It would also work perfectly when I actually had service; the issue was the ISP kept cutting it because they're petty bastards.

Eventually I gave up, which is probably what they wanted, but it's not like I'm suffering too badly with their equipment, even if I have to use a rooted phone in order to pry away all the spyware permissions from their stupid app.

They are a monopoly so I have no choice.


You need to file an FCC complaint about this, Xfinity has an executive response team for all FCC complaints. Document every time you called them and your equipment details.


I had to do this when they let a new renter cancel my cable internet multiple weeks before my move-out, even after I had told them that the new renter's dates were wrong when they called to confirm. Regular support wanted me to sign a new contract and pay an install fee to reactivate.

It was a really shitty situation that should never have happened, but the executive team was what all customer service should strive to be. A single person, with ample time in their day to fully understand the problem and blow past all the usual roadblocks on the way to a solution. Was still a few hours on the phone, but at least I wasn't treated as though it was my fault.


Honestly, that experience was years ago, and I don't really care enough to raise a fuss about it just today, since their first-party router is "not too terrible".

About Xfinity in particular though, assuming someone else uses them, I did learn just recently that at least Xfinity allegedly offers a special decoder box (or something) that is, I believe, free of charge, and lets you hook up your router directly rather than putting it through theirs as a DMZ, and it's supposed to get them to not cut the line. Some self-install kit or something. You can only get it by asking over phone call.


I bought my own 3rd party cable modem for Xfinity and it works fine.


Well, in my situation/ISP it did happen. Though it was a while ago so I don't remember too many specifics.


Two guesses:

1. You had a modem that wasn't on their approved device list. It needs to be on this list to receive firmware customized to their network parameters. https://www.xfinity.com/support/devices/

2. (the most likely scenario) You were switching the macaddr of your router's WAN port every time you connected. For IPV4 Comcast was always set up to only allow one macaddr to bind to a public address at a time, you needed to trigger a dhcpcd release on the old macaddr before reconnecting or clone the mac of the old device if switching.


I didn't have to file a FCC complaint with Xfinity but I had an issue with Arris/Moto surfboard modems and a miswritten boot file some years ago that took weeks to fix. Basically, you can have your own equipment on a dynamic IP'd business account but they automatically default to calling you the asshole for any problems that get reported.

Business class had launched new speed tiers and it broke upstream bonding on my modem, of course they blamed me. So I bought 2 more modems (same model and a different one) and the same things happened as soon as they got provisioned on the account (prior to this on Comcast's provisioning walled garden they'd show the upstream as bonded, but once activated on the account upstream bonding would turn off and speeds went to shit). By this point I had records and emails with like 4 or 5 different people, a useful thread on dslreports, and a very worthless thread on reddit.

I took all this info and CC'd everyone I spoke with in an email to the CEO of Comcast. Executive response had someone up in Massachusetts call me the next day confirming the issue was with their provisioning boot files, and they credited me for like 3 or 4 months of service. Overall I was pissed it was this difficult to escalate but at least once it was in the right office they handled it. It probably helped that this was a business class (which overpays for better support vs residential script readers).


Wow, I hate Comcast like everyone else, but even they are not that bad. That is just ridiculous. If I couldn’t have my own modem and own router… I dunno, they have a local monopoly, but I’d start looking at 5G connections or something.


At least on BGW-320's, which are the modern routers for ATT fiber deployments, it keeps flows in the NAT table even if you use it in passthrough mode, although it doesn't actually do NAT, so you end up still being limited to 8192 sessions (not that that's typically a problem, but it does mean you can't e.g. run a big web server through your connection, and you can't use 3p fiber modems without some hacks).


I might end up getting another router next time I move because of this.


Yeah. We checked with censys.io; ports 3389 and 8080 were open as publicly available services on the router.


Oh yeah, if you leave RDP open to the internet the scanbots will be all over that like white on rice.

If only we would finish the switch to IPv6 we could pull the rug out from under those bots.


That's why a firewall I own is in-between my ISP provided equipment and my computer.


Yeah, my computer has an application firewall[0] between it and anything that tries to connect to it (LAN or not), so if my router ever opened up my ports, I would know.

Same applies for my other two computers (my MacBook runs Little Snitch on 10.14, so it's a kernel extension that Apple can't bypass). Not the rest of my devices though. Though it's not like anyone could get through fail2ban on my server, or compromise my Android device, or hack any IoT devices since I own none, etc.

[0]: https://safing.io


Aighht... basically, that's kinda like, you know, REALLY bad... Tbh, it'd been way less damaging to your security posture if instead they went like :

"yeah... we're gonna have to go ahead and reconfigure your home to operate in passthrough mode. This is going to require techs onsite so we went ahead and deployed some generic subcontractors. What they have to do there is, well kinda counterintuitive. They will be using brute force techniques to uninstall and didpose of ALL of your doors. So.... there's a bunch of little wireless packet things, like the kind use for 5G, or the other 5G, flying around everywhere. They always bounce off things and they get lost a bunch. Now that home passthrough mode is enabled, the packets can just 'pass through' and get where they wanna be. Sometimes other things, like humans, or ocelots, 'pass through' too. How exciting! After all, at ATT no one gives a ** if all kinds of critters go through your crap whenever we 'fix' something. "

When your ISP screws up your router settings: 1) It's highly likely you won't be aware there's a major security problem with your network. 2) Potential attackers, which include but are not limited to massive numbers of ill-intentioned yet impossible-to-identify individuals, quickly become aware there's a major security problem with your network.

When your ISP performs a literal brute force attack on your domicile, leaving you completely doorless: 1) You KNOW, 100%, that there's a major security problem. Your doors are gone. No doors. The physical equivalent of a router in passthrough mode. 2) Potential attackers are within a relatively limited geographic area, with numerous ways to learn more. They've likely noticed that things in your home life have been a bit weird.


I prefer to treat my ISP-supplied equipment as foreign territory, so their modem is in passthrough mode on purpose. Its only client is my own standalone firewall, since I have its Wi-Fi turned off.

As long as I'm not messing with the network, it's all great, and I know where to start looking if there is odd traffic going on.


Finally, after switching to 5GHz WiFi, was the Zoom problem resolved?

asking because: Your comment reminded me of one article / discussion about Apple Devices causing huge network load , and hence delays, due to some map ping thingy.

[Apple Maps location scan spikes WiFi latency every 60 seconds. 677 points, 172 comments]

https://news.ycombinator.com/item?id=31356730

(I hate the '5G WiFi' name that most ISPs seem to be using everywhere.)


We don't have any Apple products actually. Three Lenovos and a Razer. My wife is still on WiFi most of the time and she hasn't experienced many issues. But my impression is that we always had 5G WiFi enabled. Still not sure what AT&T support actually did, but if there was an issue, it was most likely fixed by the router reset.


Remember how 5G was supposed to support "the metaverse"?[1]

I've been working on a high performance metaverse client. All Rust, all multi-threaded. Designed to max out a gamer PC. Supports Second Life and Open Simulator. Second Life had a reputation for being sluggish. Fixing that.

I'm pulling content from the servers at 200Mb/s, sustained. 400Mb/s in tests, but don't need to go that fast. The servers (AWS front-ended by Akamai caches) can handle that just fine. Gigabit fiber can handle that just fine. The 3D world appears in high detail in seconds. Looks like an AAA game title. Not like low-rez Meta Horizon or Decentraland.

5G can handle that, right? Says so right here in the promotional materials. Verizon: [2] AT&T: [3]

The carriers said "unlimited", right? So they can't complain if you're downloading 100GB per hour.

[1] https://www.forbes.com/sites/michaelgale/2022/05/24/how-5-an...

[2] https://www.ericsson.com/en/blog/2022/4/why-metaverse-needs-...

[3] https://www.verizon.com/about/news/5g-makes-metaverse-real

[4] https://www.xrtoday.com/event-news/5g-networks-crucial-to-me...


Are you insinuating, sir, that a telecommunications company has lied to the public? A lie that increases their own profits?? How dare you!



As I learned the hard way once, when AT&T say “unlimited” they mean up to a certain amount for data. Y’know. Like how Olive Garden offers unlimited breadsticks but then when you get your bill they charge you $40 for each breadstick over the unlimited amount of 2.


This is actually a good analogy. OG will definitely throttle your breadsticks if you eat too many, by taking their sweet time bringing another basket.


Not the way I ask for breadsticks they won’t.


A literal case of there being no such thing as free lunch.


Especially since given the ultimate horrifying finiteness of both the material and time in the universe, the dream of truly unlimited breadsticks will forever be outside of our reach.


Surely that is a joke? They don't actually charge $40 for breadsticks, do they?


Unlimited is a term of art that means "limited".


I had to buy a prepaid cell subscription the other day. I went with the one that stated "Truly Unlimited"* as the headline, clearly to differentiate it from all the other "Not Truly Unlimited" plans.

* Truly Limited for heavy users.


Can we just start calling fraud by its own name?


I'll order one criminality with extra politicians and one large diet-fraud-lite(tm) with extra ice.

Just charge it to my offshore. Cheers


"Un limited." It's Spanish, probably.


Yup. Buffets have been that way forever. No one is actually allowed to eat unlimited.


But I'll eat until they kick me out. That's how I know I got my money's worth.


I'm not sure what you mean. Most buffets won't limit you.


Buffets have inherent limits in the way they do things. It's just transparent to you unless you know what to look for.

Limited buffet hours. Most Indian buffets are lunchtime only, and the restaurant is open, say, 11am-1pm, and then closes until dinner. Indian buffets are often leftovers from last night's dinner, too, so there's a limit to the amount of food, in that they won't cook more.

Size of plates, bowls, glasses. This is an easy one, and you can use it to go on a diet. If you have large plates or bowls, throw them out and get small-capacity dishes. Buffets will make you use a certain size plate, and so each trip you can only pick up so much food, and this helps you notice that you feel full before taking more.

You may also be required to use a clean plate each time you dip into the buffet. This might have various effects on how you regard the food you took originally.


It's still fair to say unlimited for all of those except not cooking more. Much like you can easily have an unlimited 1gbps connection. (Or 'unmetered' is advertised sometimes.)


There is always a limit even if it's not published.


For what purpose? People can only eat so much. The buffet doesn't have to limit you. And if the buffet isn't limiting you then it's valid for them to say "unlimited".


For what purpose? Polynesians.


I'm not sure what you mean, as far as buffet dark patterns go, nothing really unreasonable happens that comes anywhere near the limits of the human stomach.


In theory, yes? But 5G deployment is still catching up to LTE, so it's still going to be dependent on how saturated the bandwidth is. I've found that in some crowded situations (conferences and the like) switching to LTE actually performs better.


I'm interested in more info about your client. Is it open source, is there a website or blog post about it?



I look forward to trying your viewer when it's ready.


A decade-long fight about Net Neutrality in public, and the ISPs just went and did what they were always going to do anyway in the background. No surprise. No news. No fanfare. Just, "Here's your broken internet where every website loads at a different speed. Get Unlimited* access with a 35GB limit before more limits kick in. Some websites are unusable. Enjoy. *Some limits apply. See T&C for details."


But OP is paying for the first 50gb/month to be a "business" unthrottled connection on their unlimited connection!


Net Neutrality was never going to apply to mobile networks under any FCC proposal.


Taking a quick 5 minute look at the packet capture attached to the post, it looks to me like this is likely traffic shaping. While it's always difficult to be 100% sure from just the client side capture, the capture looks relatively clean of errors, and the amount of data in flight doesn't appear to even approach the advertised window. We're getting some merging of segments likely from a segment offload, but I doubt that's throwing off the results.

So there's a good chance there's a shaper letting through about 320kbit/s as it's relatively even throughout the capture.
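
For anyone wanting intuition for what such a shaper does, here is a minimal token-bucket sketch (the 320 kbit/s rate is the estimate above; the burst size is an arbitrary illustrative value, not something observed in the capture):

    import time

    # Release packets at a fixed rate with a small burst allowance.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0       # refill rate in bytes/sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False                     # queue or drop the packet instead

    # ~320 kbit/s with a 16 KB burst: packets are released at an even pace,
    # which matches the flat, even delivery visible in the capture.
    shaper = TokenBucket(rate_bps=320_000, burst_bytes=16_000)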


Could it not also be the problematic traffic getting routed to a Cloudfront node that's overloaded or otherwise misbehaving?

Years ago, I had to troubleshoot a very mysterious issue where sometimes a huge percentage of the PCs in the company I worked for would have trouble loading the company's e-commerce site.

Everyone was going through the same array of 2-3 proxy servers, and the e-commerce site had Akamai caching in front of it. The proxy servers would hold onto a DNS lookup result or a TCP connection for longer than they were supposed to, and so they'd continue routing traffic to the old IP address even after Akamai had rotated a node out of service.


> Could it not also be the problematic traffic getting routed to a Cloudfront node that's overloaded or otherwise misbehaving?

This is plausible. Nothing I saw in the packet capture specifically pointed in this direction, but I don't really have the information I'd want to rule this out totally.

I've seen issues like this when the TCP stack on the server effectively runs out of allocated memory. This can cause the server not to use all of the available tcp window.

The only reason I think this is less likely, is I'd expect a company like AWS to monitor and tune the memory on their CDN, and notice something like a kernel bug here or block an attack exploiting the kernel memory.

Some network providers, especially wireless providers, also deploy TCP acceleration equipment so that network speeds are faster. On a wireless carrier, if you've got a temporarily bad signal and drop some packets, and the carrier can retransmit them instead of going back to the internet, this makes for a faster connection. Crappier implementations of these TCP accelerators do a sort of transparent redirect to the kernel TCP stack. So if the AT&T proxy has run out of kernel memory, it could be providing a slow connection. It wouldn't surprise me for a telecom carrier to miss this and not tune or monitor kernel memory.

If the issue was caused by a proxy, I'd expect more to be broken than a single server, and other customers to also be complaining.

If ATT is using these accelerators, it also means the proxy could be hiding packet loss and delays between the proxy and upstream server or network. Because the client TCP connection is only to the proxy, we only get to see what happened between the client and proxy. So this precluded drawing strong conclusions from the client side capture only.

There is one thing about the packet capture that suggests a shaper to me. And that's how even the traffic is from server to client. While I didn't spend a bunch of time timing out all the packets, it looked a bit like I'd expect to see for packets getting released by a token bucket algorithm. This isn't necessarily safe to conclude, because the client has an offload that is buffering and merging packets using a receive segment offload, which causes us to miss some timing information in the capture.

So based on the above, shaping looks a little bit more likely to me, but I don't have the right information to rule out a broken or misconfigured server. An overloaded or otherwise broken network link, like errors on a link, doesn't appear likely to me at all.


I do see a couple of retransmissions early in that query (lost ACKs?) which could be upstream throttling, but in any event they certainly explain why the connection was slow to get off the ground.

Agreed that segment offload is probably kernel-side coalescing.

From shunting this into Wireshark, you can see that the sequence number grows fairly linearly (often a good indicator of a rate limiter) starting from 1.508s up until the 1.95s mark, but at a rate closer to 25-30 Mbps (1.5MB in 0.45s).

https://i.imgur.com/s2JBrCf.png

edit: this was for iphone-capture.pcap. You see a similarly strong picture of shaping for capture.pcap [0], and indeed that corresponds to the ~320kbps range you've noted.

[0] https://i.imgur.com/g9cN95d.png

edit edit: of course, under normal situations, your bandwidth will get saturated, and steady-state will look fairly linear. If you're using "classic" congestion control like CUBIC or New Reno, you'll see this interspersed with drops (search for "tcp sawtooth"). Under BBR you'd expect to see drops in bandwidth due to latency spikes (per bufferbloat). Neither of those seem to be in play here.
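
If anyone wants to reproduce those numbers without Wireshark, here's a rough scapy sketch that estimates goodput from server-to-client sequence-number growth (the filename and server address are placeholders for the capture and CDN IP from the post; retransmits and sequence wraparound are ignored):

    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap("capture.pcap")          # placeholder filename
    server_ip = "203.0.113.10"                # placeholder CDN address

    flow = [p for p in packets
            if IP in p and TCP in p and p[IP].src == server_ip]
    if flow:
        byte_span = flow[-1][TCP].seq - flow[0][TCP].seq
        duration = float(flow[-1].time - flow[0].time)
        print(f"~{byte_span * 8 / duration / 1e3:.0f} kbit/s over {duration:.2f}s")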


Thanks for the feedback. I was hoping to see more of this sort of analysis from the HN crowd :)

There's one other person from HN doing some analysis with me via email. Maybe something will come of it. You both came to the same conclusion based on the pcap data. They're also analyzing the iphone tether pcap data I just provided in an update.


You're welcome.

I took a very quick look at the iPhone capture. I don't see any notable issue (which is expected since this is "working" behavour), and the main noticeable item in the comparison is an apparent lower latency to the server. This appears to match the speculation I saw in the updates as well.

One thing you might be able to try, if you're able to change the APN on the hotspot, is to see if you're able to connect with a different APN from the consumer side. This is pure speculation, but what some carriers do is use a different APN for their business services, that will route to enterprise only packet gateways. Some carriers may not necessarily disable access to the consumer APNs though that might go through a different P-GW that's closer to you. The iPhone looks like it's using a path about 120ms shorter than the hotspot. And just by chance, going through a different P-GW you might hit different traffic shaping equipment or configuration.

Edit: although be aware that the carrier could also charge different APNs separately. I have no idea what AT&T does.


I considered switching APNs, but didn't want to run afoul of any AT&T rules.

I went ahead and gave it a try anyway, switching from `broadband` to `NXTGENPHONE`. I wasn't able to authenticate with `NXTGENPHONE`, so I switched it back to `broadband`.

After re-authenticating, I'm no longer being throttled. I'm both happy and deeply dissatisfied with this outcome because I think now I'm less likely to get to the bottom of it.

I'll add this to my updates and keep monitoring the situation.

Thanks, I think!


You're welcome. This just reminded me of the IT Crowd... Have you tried turning it off and on again?

By disconnecting and reconnecting while trying the APN switch, you might've just been put on a different P-GW that has a different traffic shaping configuration or current state.


It'll never get old or become poor advice :)


The only way to get to the bottom of this is probably just applying for a relevant job at AT&T, get accepted by whatever means necessary (low asking salary, nepotism, etc), check their network configuration yourself, then immediately submit your resignation once you're satisfied.

I think I read somewhere about someone who did exactly this at a software company. A new employee showed up, immediately fixed a long-standing bug, then submitted their resignation.


While everyone knows that all wireless carriers are universally terrible, I've actually had a fairly uneventful (read: good) experience with T-Mobile over the last decade. I've never observed evidence of traffic shaping or any other shady business (like capturing NXDOMAIN DNS responses and directing you to a sponsored search page). I've also never observed significant performance degradation from congestion-based prioritization after exceeding the allotted 50 GB or whatever of monthly "normal-priority" data usage.


Yeah, except they leak customer data regularly and have offshored tons of customer service to India -- including onboarding. When I tried to sign up, they didn't know what an esim was and wanted my SSN, along with other data that could easily be used to steal my identity. I don't want to spend my days arguing with people, definitely not about the existence of esim; but I did that day. It went all the way up their customer service management chain.

T-Mobile is also completely hit or miss with service. If you are in a location where it works, fine. Good luck if you travel.

If you can tolerate T-Mobile, Google Fi is better in almost every way security wise -- and I am very anti-google services these days. It uses the T-Mobile network.

As for me? ATT has provided the best service of any carrier while traveling, so I will use them. When I need security, I flip on my VPN.


    ATT has provided the best service of any carrier while traveling, so I will use them.
Really? My experience with AT&T while traveling has been pretty awful. In the US, in rural areas, Verizon is better. And outside the US, Google Fi gives you international data roaming as part of the base package. One of the reasons I switched to Google Fi is because it's so much better when traveling.


For coverage it’s Verizon > AT&T > T-Mobile. So maybe it was awful compared to Verizon, but it can be worse. I remember a doctor friend I was hiking a mountain with (La Plata, a 14k ft peak in Colorado) who took a patient call at the very summit.

As a tmo customer I know I’m essentially off-grid in much of the country when I’m outside many metropolitan areas.

But tmo is great for traveling internationally.


I used Google Fi for a while, and while I'd love to use them as a primary carrier again, I can't until they choose to support add-on SIMs for watches. As a Google Fi and YoutubeTV customer, I cannot wait until I no longer have to give AT&T any money at all.


The only thing with Fi is that it is T-Mobile, and you'll always be in a lower priority block of customers compared to people paying for T-Mobile directly, which mean you'll see slower traffic in congested areas at peak times (including e.g. during rush hour traffic).


That's not true; Google Fi has the same priority as postpaid T-Mobile. This is something MVNOs negotiate in their contracts with carriers, not something that's true across the board.

Discount MVNOs increase their margins by buying wholesale deprioritized data while Google Fi has negotiated the no deprioritization.


Is this talked about somewhere? All I see is this reddit post[0] by u/Peterfield53, who looks like a very active user on r/GoogleFi but doesn't seem to be a Googler or otherwise a Google Fi support agent.

0: https://old.reddit.com/r/GoogleFi/comments/ulc1t5/perks_of_f...


https://old.reddit.com/r/NoContract/comments/oaophe/data_pri...

>QCI 6 is applied to all of T-Mobile's postpaid and prepaid plans (except for Essentials) and Google Fi which also has QCI 6 as well. This means if you want the absolute best from T-Mobile, you want to get a plan directly from them. Even their cheap $10 prepaid 1GB Connect plan has priority data.

You can apparently confirm this with a rooted phone: https://coveragecritic.com/2019/09/17/how-to-find-qci-values...

>My Google Fi service had a QCI of 6 during regular data use.


That seems to track with my experience: Google Fi in the US was no worse than T Mobile directly (which my work phone uses).


If you're travelling, I find it hard to beat Fi.

You literally land in a new country, turn your phone back on and you get a "Welcome to [country] - your data rate is the same" message almost anywhere.

Personally - I've flown from Taiwan to Brazil to Amsterdam and then back to the US and I don't have to think about my phone. It just works.

---

Outside of the travel use-case, I would also probably pick something else, but if I know I'm going to be travelling, I'll switch back to Fi.


With eSIM and the Airalo app, international travel is fairly painless. It costs a few bucks and a couple of minutes to set up (which can be done while waiting at the airport to leave) to get a data-only SIM for your destination country. If you're paying for an expensive domestic account for international reasons instead of a cheaper $40/mo plan (e.g. Mint Mobile), it might be worth investigating their plans to see if it would end up saving money, given your travel requirements.


Fi is $40 a month for the more expensive plans, plus $10/GB until the plan hits 10GB, at which point it's free but they might throttle.

And the key thing is I just don't have to think about it. I can't forget to register a new account, I don't have to worry about esoteric sign up requirements for certain countries (ex: Brazil wants a CPF for fucking everything), and I can't get stuck without a connection and then not be able to setup the next step.


I still have my Sprint plan. This is how it works by default. (Sprint + gvoice = google fi; before gfi you could merge your gvoice and sprint accounts which was really cool. Then they cancelled that and started gfi) Since the TMo merger, I suspect gfi is still using the Sprint stuff.


> Since the TMo merger, I suspect gfi is still using the Sprint stuff.

Since before the merger, it used Sprint and T-Mobile, in addition to US Cellular https://techcrunch.com/2018/01/17/googles-project-fi-now-cap... (ctrl-f sprint)


I've been to every state in the contiguous US, most of them many times, and spent most of my time in rural areas / wilderness. Verizon was poor during the time I had it. I forget what triggered switching away, but I didn't last long and had to switch to ATT. I switched to ATT because my Tesla uses them and I noticed it almost always had service in remote areas when I did not. Haven't needed to switch since.

ATT has a great plan for the Americas: South America is all covered, Canada is covered, and many other places. It worked really well for me in Brazil.

That said, when I start traveling to Europe and Asia more, I may switch back to Google Fi.

For domestic use, while traveling / on the road a lot? I would rate as follows:

ATT > Google Fi > T-Mobile > Verizon

Keep in mind, if you are mostly stationary it is better to use the carrier known for good service in your fixed location.


> If you can tolerate T-Mobile, Google Fi is better in almost every way security wise -- and I am very anti-google services these days. It uses the T-Mobile network.

Be careful there, because T-Mobile rot spreads to any downstream MVNO. SIM-jacking is still possible, and data breaches happen above the Google Fi level. There were articles about that IIRC.


> As for me? ATT has provided the best service of any carrier while traveling, so I will use them.

I'm curious about your reasons to say that AT&T provides the best service "while traveling". I'm guessing it would be domestic travel, because AT&T roaming charges are the second highest among the big three (I believe Verizon is actually even more expensive).

I was a customer and an employee of AT&T for a while, and I find T-Mobile to be better in almost every aspect, except perhaps for coverage in very rural areas, which I don't mind. T-Mobile's 5G coverage and speed are also significantly better.


Do you mean international charges? I thought roaming has not been a thing for a decade.

Many ATT plans include usage in all of the western hemisphere, excluding the Caribbean islands. And for countries not included, it is $10 per day ($5 for other lines on your plan) up to 10 days in a billing cycle, and after that it is free until the next billing cycle.

Not the cheapest (I think T-Mobile has international at $5 per day), but not terrible either for a quick jaunt somewhere and not having to worry about SIMs or changing phone number or whatever.


Right, roaming as in international roaming.

I'm on T-Mobile right now, and the plan equivalent to the one I had with AT&T (all inclusive and unlimited data) does include 5GB of international data per month. AT&T was $10, even for employees, so not a great deal for frequent travelers.


Basic international is included on most t-mobile plans.


If I have to pick between T-Mobile and Google, I pick T-Mobile. Google is going to kill Fi and Voice one day, T-Mobile is not. T-Mobile has some customer service, including stores, Google has none. Was able to migrate a physical SIM to an eSIM in a store in ~30 min. You can even move your Google Voice number to a T-Mobile DIGITS number for similar functionality.

(T-Mobile customer for 2+ decades, least terrible option imho)

EDIT: > Google Voice app may be slow but it does work every time

Until it doesn't! Fair critique though. I can also recommend the "Unlisted" iOS app for this purpose.


> You can even move your Google Voice number to a T-Mobile DIGITS number for similar functionality

You had my interest... till I read the reviews of the DIGITS app on the App Store. 1.9 average, most complaints about it not working at all. The Google Voice app may be slow, but it does work every time.


T-Mobile objectively has the best network if you have good reception. They peer locally, you get your own IPv6 address, and their interconnections are excellent (except with AT&T, since AT&T refuses to peer and uses Zayo, which will take you across the US).

AT&T by far has the worst network, even if you have great service. They NAT millions of devices behind a single /64 network, you're usually routed to IXs very far away from you, and they do the most packet manipulation out of all the carriers. Verizon is better, but they can do some wacky stuff if you aren't in New England or California.


I have too, mostly…

But apart from all the data breaches, I was also able to verify T-Mobile doing this (arbitrary blocking of texts containing innocuous URLs) on my plan. Although they seem to have fixed it now. https://news.ycombinator.com/item?id=29744347


Many of their plans downscale video, or at least they used to


On my older plan, they downscale video from some providers to 720, but don’t include that video in my data count.

I can opt out, but then any streaming goes against my high speed data limit.

I think it’s actually a pretty fair trade off.


I've never experienced this. How would this even work for encrypted (HTTPS) streams?

Even if this were technically feasible, it sounds like a massive infrastructure investment with little to no value. T-Mobile would need to have enough compute and network horsepower to DPI all outbound traffic, intercept every video stream, detect if it's over some resolution threshold, and re-encode it at a lower resolution or bit rate, all in real time.


It is in collaboration with certain video providers.

YouTube over mobile for me is downscaled to 720p. In exchange, that doesn’t get counted in my high speed allotment of data.

I can opt out, but then my data can get deprioritized over a certain threshold.


"Downscale" is not quite the right term to use, they just throttle the video CDNs so the streaming platform chooses a lower bandwidth version


Throttle the video CDNs' IPs/ASNs


This is interesting, I don't see how this is possible from a technical sense


Identify (by IP or traffic shape) video traffic, then throttle the stream to the desired bitrate. The viewer's player will usually silently adapt.
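A minimal sketch of what that kind of destination-based throttle can look like on Linux with tc, assuming eth0 is the subscriber-facing interface and 198.51.100.0/24 stands in for a video CDN range (both hypothetical; real carrier gear is purpose-built, but the effect is similar):

    # default traffic goes to an unconstrained class, CDN-bound traffic to a capped one
    $ tc qdisc add dev eth0 root handle 1: htb default 20
    $ tc class add dev eth0 parent 1: classid 1:20 htb rate 1gbit
    $ tc class add dev eth0 parent 1: classid 1:10 htb rate 1500kbit ceil 1500kbit
    # classify by destination IP range only, no payload inspection needed
    $ tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 198.51.100.0/24 flowid 1:10

At roughly 1.5 Mbit/s, an adaptive-bitrate player will typically drop to a 480p/720p rendition on its own.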


Netflix should have its own set of ASNs. After that it's quite simple; routers have been able to throttle for decades.


If it is an HLS or DASH type of video stream, they could just query the playlist for a smaller video size and throttle that connection so that you're only served the smaller encode vs the full frame highest quality. Pretty simple thing to do really
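For illustration, a trimmed, made-up HLS master playlist might look like the following (hypothetical URL and renditions). The player picks the highest BANDWIDTH variant it believes the measured throughput can sustain, so a throttled connection quietly selects a lower one:

    $ curl -s https://video.example.com/title/master.m3u8
    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    1080p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
    720p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=842x480
    480p/index.m3u8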


It's trivial with a simple rate limit.


It’s more due to incompetence than any magnanimity. T-Mobile has a pretty shitty network, with way more local outages and general fuckupitude than the other carriers.

Their focus is speed and density. The other two have much more advanced network management, which as a customer is both good and bad.


When I had AT&T VDSL they had weak peering. Some traffic that should have gone across town would routinely get routed several states away before coming back to town, because they hadn't peered with the local exchanges. Some traffic was ok.

When I switched to an ISP with local peering, ping times between my home and my server downtown dropped to a few milliseconds. It's like being on my LAN.

If this is still an issue for AT&T, traceroute+reverse DNS of each router may indicate it.

This poor peering isn't directly a throughput problem, but it really increases the chance of being routed through a congested link, which you could sometimes guesstimate from ping times and peeringdb.
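If you want to check it yourself, something like the following usually makes the detour obvious (destination hypothetical); the reverse-DNS names of the hops often embed airport or city codes, so a cross-town connection looping through e.g. Chicago or Dallas stands out:

    # one probe per hop; look at the reverse-DNS names of intermediate routers
    $ traceroute -q1 my-server.example.net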


Same issue in Canada: our big ISPs historically refuse to peer freely at our domestic IXs like TORIX. So smaller ISPs and DCs end up peering with them in the USA, and your packets make circuitous cross-border round trips to go a few km.

Some national players do have ports at domestic IXs, but only as backup links or for negotiated (paid?) access.

The hazards of ISPs also being backbone providers.

(It looks like Rogers has gotten better with this behaviour)


As far as I can tell, Bell holds its Canadian customers hostage to force peers to pay them for what should be mutually beneficial peering. It does not participate in _ANY_ of the Canadian internet exchanges. Even Rogers connects to TORIX.

https://bgp.he.net/AS577#_ix


I know Rogers for a while had 'conditional' peering at Torix, but doing some traceroutes now, more routes are going through Torix than I recall in the past. Seems like they've lightened up over the past couple years.

Too bad the indie ISP market has been decimated. Ebox & Distributel --> Bell. Start --> Telus. Oxio --> Cogeco. VMedia --> Quebecor (Videotron). Mostly only Teksavvy left standing

Indeed with Bell & Telus, you could only peer freely on US soil:

https://www.peeringdb.com/net/1550 https://www.peeringdb.com/net/76


Because they believe you are customers.

This is no different than me approaching Verizon or at&t and demanding they peer with me.


As it turns out, I am a Bell retail customer being held hostage by their refusal to engage in seemingly any settlement-free peering with other Canadian networks. I am only their customer because the third-party providers indicated that they were unable to provide service despite Bell DSL service being subject to local-loop unbundling. As the monopoly incumbent that has received significant subsidy to develop their services, I believe they should have a duty to peer at not-for-profit Canadian internet exchanges that meet minimum traffic and peer diversity thresholds.


I agree that the Canadian ISP market has had a rough ride for a long time. Until the mid 2000s, when Cogent and other folks (GTT) rolled into Canada, TELUS, Bell and other companies were charging obscene rates for IP transit.

For years the TELUS and Bell peering PNIs (OC12s back then) would regularly run hot in domestic Canada.

You'll never get them to peer openly, but at best you can get them to offer market rates for paid peer (which is what folks have tried to do in the US).


> You’ll never get them to peer openly

Maybe I’ve got it wrong, but don’t my peeringdb links suggest Bell and Telus do peer openly, but only on US soil?

https://news.ycombinator.com/item?id=35595697


My recollection is that they're still selective in the US. Maybe that's changed.


This has nothing to do with TORIX. This has to do with Bell and Telus seeing you as customers, not peers.


Yep. I'm still on AT&T's ADSL (what used to be known as Uverse) and routinely get routed through Chicago rather than Detroit (I live 30 mi north) for major services like Cloudflare. Of the two choices I have for ISP, AT&T or Comcast, only AT&T has such poor routing. AT&T Fiber apparently has better peering but their supposed expansion hasn't happened in my area yet.


> This is an “unlimited” 100Mbit plan with 50GB for Business Fast Track (prioritized) data. Being that I was far below the 50GB of monthly Fast Track data, my data should have had top priority, so I became suspicious.

And yet ATT will get away with this fraud or theft of selling something it is not delivering because no one in government wants to hold them accountable.


We've been hearing complaints from AT&T users for months now that our website (https://vstream.com) doesn't work for them. I've tried hunting down the reasons & tried getting in touch with AT&T to no avail. This seems like it could be the very thing that's causing the issues for our users.

Now I just need to figure out what to tell our users that are having these issues...


I believe I can confirm that vstream.com is affected by this. It fails to fully load in a reasonable amount of time unless I connect via VPN, Verizon, or my (phone) hotspot.

Update: For what it's worth, the "above the fold" portion of the content loads down to "All Debuts". After that, there's a long blank space with a Loading indicator near the bottom. Eventually it loaded. I didn't keep track of how long, but it was almost certainly longer than 1 minute.


This appears to load fine for me on both AT&T fiber and AT&T cellular. It looks like all content on that site is being loaded from Cloudflare, so I'm curious if those users would be having issues with other Cloudflare protected sites as well?


Ditto, same in a rural area of Wisconsin I'm at a lot, for the past 3 years or so. Like, 3/4 of the Internet is slow through their shaping garbage. What worked for me was using WireGuard to my home network in an urban area (1 gig symmetrical fiber), or using an SSH SOCKS proxy. Problems no more, until they decide to block those.

Support has been awful, best of luck if you go that route. They took us off their unlimited plan without consent, took 6 months to get it back. Also, throttling doesn't actually seem to happen once over their limit on the advertised unlimited* data plan, it's just most of the Internet is awfully slow most of the time going directly through this, ah, ISP, which they claim to be.

* Not actually an unlimited plan


I use a VPN to a cheap server in a lesser-known datacenter. Also works for bypassing government level blocks in middle eastern countries.

But yes, AT&T throttles a wide diversity of content, even on their very highest tier plan (Unlimited Elite, now named Unlimited Premium). This is advertised as never throttling, no matter how much data you use (all other customers are supposed to get throttled first).

Also the throttling happens with mobile hotspot, I have to use VPN on my laptop with many many sites as well, even when I'm inside the "40GB/mo of unthrottled hotspot data".

The speeds instantly go way, way up once I hop on VPN.


But why would AT&T be throttling Cloudfront requests specifically?

That's not traffic shaping, it's just dumb. Traffic shaping is slowing down certain types of traffic, like torrents or video, in order to prioritize regular sites. But this is degrading regular sites, the very traffic that usually benefits from shaping when there's overall network congestion.

I'm not questioning anything the author reports, it all seems extremely well documented.

I am questioning what AT&T is trying to accomplish here though. It makes no sense, unless they're in some kind of negotiation with Cloudfront right now, punishing them until they pay up or something? But that doesn't explain why it would only happen on a data-only plan. I'm mystified.


I think the era of traffic shaping as you describe it is no longer. Most connections are encrypted these days, a good thing. But that also means, me as an ISP in the middle, I only see the layer 4 packet. So I know where a packet is coming from and where it's going (and ttl and some other not really helpful data for this purpose.) I don't even know the port.

So I'm left with doing (dumb) traffic shaping by destination and target.

If I had to guess, I'd say that they incorrectly thought that some specific IP address (range) serves predominantly one type of data. So they throttle by the only data point they have, destination ip, and the collateral damage is everything else hosted on that ip address.


So I had this conversation with one of the traffic shaping vendors many years ago. At the time when encryption still wasn't that common, they didn't sound too worried about the shift to all encrypted connections. Their product was already starting to use a behavioral analysis to shape the traffic most of their customers were interested in.

So to detect bittorrent, they'd build a profile of how bittorrent clients operate, the packet and connection creation patterns used, and then slap a throttle on. Looking at some independent analysis, these products might only detect 50% of the bittorrent traffic, and have a false positive rate, especially for bittorrent users also doing something else. And the ISPs don't care; they get what they need if they clamp 50% of the traffic.

So I'm not disputing that everything encrypted is a good thing, just pointing out that because it's encrypted doesn't necessarily mean the shaping equipment can't figure out enough to throttle bit torrent.

> If I had to guess, I'd say that they incorrectly thought that some specific IP address (range) serves predominantly one type of data. So they throttle by the only data point they have, destination ip, and the collateral damage is everything else hosted on that ip address.

This is plausible. As I recall, the way some of the equipment worked was it would sniff out DNS requests and then mark the returned IP address as belonging to that destination. So if someone sets a rule for example.com, it might accidentally apply to alice.com using the same IP address.
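As a rough illustration of that DNS-sniffing approach (an illustration, not a description of any vendor's actual product), simply watching resolver responses on the wire is enough to learn which IPs to shape, and it also shows why unrelated names sharing an IP get caught:

    # watch resolver answers; the A records in each response are the
    # addresses a shaper would tag for the queried domain
    $ tcpdump -nli eth0 'udp src port 53'

If example.com and alice.com resolve to the same address, a rule keyed on that address ends up applying to both.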

My knowledge on the industry is out of date though.


This is great insight. Thank you for that.

I imagine lots of people have spent, or are spending, lots of money and time trying to figure out the type of data or connection from patterns as you say.

A more nuanced and correct statement would have been to say that it's much harder to do than it used to be, when you could just look at the mime-type or similar to figure out what to throttle.


If you're interested in learning more, deep packet inspection, and specifically "encrypted traffic classification" are fairly mature in industry. Many traffic shapers are using products like Enea's Qosmos ixEngine or home-grown equivalents which can identify thousands of applications and protocols.

Most providers that use traffic shaping don't care about content, and the encrypted traffic classification is enough to make traffic policy decisions.


Wait, why wouldn't they know the port? I'm not aware of any cases where the TCP header itself is encrypted.


99% of consumer traffic is on 443 these days (citation needed)


HTTP3 uses UDP and doesn't have to use the same port, although I presume it does. "A browser first connects to the server with HTTP/2 to discover the service. The server responds with an Alt-Svc header, including the port for HTTP/3, such as Alt-Svc: h3-29=":443""
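You can see that advertisement with a plain header fetch; any large H3-enabled site should show it (cloudflare.com is just a convenient example, and whether a given site advertises h3 can of course change):

    $ curl -sI https://www.cloudflare.com/ | grep -i '^alt-svc'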


This changes with HTTPS DNS records: browsers can make an H2/H3 connection straight away if they get the hints.
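Those hints live in the HTTPS (type 65) record, which you can query directly; newer dig versions understand the type name, and the alpn field in the answer is what lets a browser go straight to H3 (cloudflare.com is just one domain known to publish such a record):

    # older dig versions: use "dig +short cloudflare.com TYPE65"
    $ dig +short cloudflare.com HTTPS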


You are correct. The port is part of the TCP header and my statement above is incorrect!


They are probably targeting video content on CloudFront, and it's easier/cheaper to do by domain or ip vs content inspection (when even possible due to encryption).

I experienced the same thing when building a large-scale video service, but with Comcast. I did extensive testing to isolate it to any direct connection to AWS (CloudFront or other AWS services).


Adding to this, the only effective solution I found for Comcast customers was to use akamai, as they paid the Comcast troll toll. It was night and day.


You've got to love a website that makes text selection invisible by default.

I'd probably double check assumptions on the cloudfront issues. Switching carriers or adding VPN might connect you to a different edge node.

Some web assets being throttled for specific AT&T accounts seems a little too targeted to just be traffic shaping. I'd expect them to throttle traffic for all users, like they do with the speed tests.


>You've got to love a website that makes text selection invisible by default.

First time seeing it made invisible; mostly they just disable it. Terribly annoying practice. I often select text while reading articles.


Same here!

I think it helps me read faster online, but it's probably just a fidget-y habit.


You guys are not alone, I do the same. It is indeed a fidgety habit, but also, I do it as it helps me see text more clearly due to my astigmatism.


Bell in Canada is doing the same with their new 'unlimited' (vs the previous generation 'unlimited') and their cheaper data plans.

Even if you have any amount of 'fast data' left you're throttled to SD speeds for streaming video unless you pay for the higher tier of their plans. Data isn't equal any more. Their higher plans say "HD video is up to 1080p" so I suppose no 1440p/4K/60fps/etc either.

It's honestly fucking infuriating but there's nothing that can be done except use a VPN I guess. I'm on the previous generation so "data is data" and they don't throttle me yet but if I ever change plans it's there.

I'm not totally sure if the other carriers do (or their 'competition' in the form of MVNOs that they themselves own), but they tend to have a habit of copying each other, at least after a little delay.

https://www.bell.ca/Mobility/Cell_phone_plans/Unlimited-plan... (select your local area or just choose Ontario)

bonus fun bs:

every so often when you try to log into the web portal they try to trick you into letting them build a profile off your browsing/usage data so they can make even more money off of you. a popup with 'advertising is a reality in today's world' comes up with the nice attractive 'get this out of my face so I can do things' blue button being 'yes please opt me in'. shady dark pattern!

back when, they used to opt people in by default and make you explicitly opt out... but then the regulators said hey that's illegal. so now they resort to stuff like that to get the numbers back up.

here's some of the blurb if anyone is interested: "Advertising is a reality in today’s world, and people find that they receive ads that are irrelevant to them. With our tailored marketing program, Bell will work to ensure that the offers participants receive when using our services may be more relevant, rather than random marketing ads. In other words, participants won’t see more ads, just more relevant ads."

full text: https://pastebin.com/ESskYEUy it's honestly super gross what they collect and how they try to trick people into agreeing.


> for some reason, AT&T traffic to fast.com is throttled. Why AT&T wants bandwidth to appear lower than reality is a mystery to me

My guess is that Netflix is throttled, and given that fast.com is a Netflix site, fast.com is throttled too.


Netflix intentionally hosts fast.com on the same IPs as their video sites. It is a deliberate way for consumers to see if their ISP is throttling Netflix.


Since almost everyone using fast.com on an AT&T network also uses AT&T DNS, I'm surprised they don't do something funky with that, for example throttling IPs that did a recent netflix.com lookup, but not IPs that did a recent fast.com lookup.


Because if this was a workaround to get better Netflix quality, then there would be 1000 blogspam articles about it. It's too good to be true ("visit fast.com to make Netflix load faster"), and the Internet loves that kind of thing.


I noticed this same issue with my AT&T internet. After some investigation, I found that using a VPN (Virtual Private Network) effectively resolved the problem. It seems that when internet traffic is encrypted and routed through a different server, the traffic shaping technique they employ isn’t able to identify and throttle specific types of data.


When you tunnel/encapsulate your traffic, many variables change at the same time, such as:

- maximum transmission unit (MTU)

- TCP maximum segment size (MSS)

- different DNS responses leading to different edge servers

- TCP reordering, which may now occur on the tunnel layer

- lost packet retransmission, which may now occur on the tunnel layer

- time to live (TTL) hop count on the packet

- IPv4 may be used for some connections that were previously using IPv6, as some VPN services are IPv4-only

- different peering between the VPN server and the edge server

ISP traffic categorization is only one variable.
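The MTU point in particular is easy to check from the client side: send pings with the don't-fragment bit set and see whether a full-size payload still gets through (a rough sketch using Linux ping syntax; 1472 bytes of ICMP payload corresponds to a full 1500-byte Ethernet frame):

    # run this once on the bare connection and once over the VPN;
    # if it fails only over the tunnel, the tunnel path has a smaller MTU
    $ ping -c 3 -M do -s 1472 example.com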


This might, in a very "take it with a pinch of salt" way, actually be good for shifting the incentives of lazy web devs (read: greedy bosses who don't care about technical prowess) toward making slimmer websites. If enough people in the developed world have trouble accessing them, it might at last have the effect that probably orders of magnitude more people having worse issues in developing countries couldn't achieve.


>(read: greedy bosses who don't care about technical prowess)

Oh come on. There are plenty of ICs out there who'll happily do slapdash work, because nobody cares whether you close tickets well as long as you close them well enough. So what if the fix only lasts 2yr before needing to be refactored for performance. Lord knows if the code will even be in use then.


Yeah... I was just anticipating the opposite kind of responses: "we web devs would like to make things right but our bosses don't care and expect it to be made fast even if fat"


AT&T uses different APNs for regular phone plans and data-only hotspot plans. It's very possible that the phone traffic is being routed completely differently than the hotspot traffic, with congestion at a peering point occurring on the hotspot. AT&T tends to be known for having poor/congested peering.

You could try changing the DNS server on your hotspot to a different public resolver like 1.1.1.1 or 8.8.8.8. If CloudFront is using DNS based geolocation to route you to the nearest data center, a different DNS server may get you routed around the issue to a different data center.
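A quick way to check whether the resolver changes which edge you land on, using the CloudFront distribution name that shows up in the DNS output further down the thread:

    $ dig @1.1.1.1 +short dgpcy4fyk1eox.cloudfront.net
    $ dig @8.8.8.8 +short dgpcy4fyk1eox.cloudfront.net
    # compare against whatever the AT&T-assigned resolver returns, then re-run the transfer test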


Use resolvers that support EDNS Client Subnet; otherwise geo-resolvers like Google/CF DNS may cause your traffic to be misrouted.

You can use Quad9's EDNS [0] so your client is properly routed, any privacy concerns aside.

[0] https://www.quad9.net/support/faq/#edns
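If I'm reading the Quad9 docs right, the ECS-enabled service lives on a separate address (9.9.9.11), so you can compare it against the standard resolver directly:

    # 9.9.9.11 appends an EDNS Client Subnet hint; 9.9.9.9 does not
    $ dig @9.9.9.11 +short web-assets.strava.com
    $ dig @9.9.9.9  +short web-assets.strava.com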


Quad9 doesn't work on AT&T, because of poor routing you're sent to Miami or Amsterdam.


Good tip. I just tried with both and wasn't routed to a faster point of presence with either.

I'll add a traceroute from the phone and one from a device connected to the LTE router to the updates section next.

[update] Traceroutes added


It's definitely odd that you're getting routed across the country to an east coast PoP. Ideally in your location you'd be routed to a west or central PoP. Perhaps it's a bad combination of CloudFront getting geolocation wrong for your IP range and poor performance of AT&T's network for that specific source-destination combination.


I found that odd as well, and my iPhone hotspot does get routed through a different list of hosts as you can see in the traceroute.

I suppose it should be noted that the iPhone antenna and router antennas are no more than a meter apart. They're unlikely to be hitting different towers.


What are you using for DNS?


On Chrome-based mobile browsers your traceroutes are getting stuck in a small, non-scrollable iframe-type thing.


So not just traffic shaping but also pretty heavy moderation of discussion on the topic? Interested to see what justification AT&T comes up with for suppressing these posts on their forum and marking them as 'private' without any explanation, to the original poster at least.


This explains some similar behavior I've had with various apps taking absurdly long to load when the internet connection itself feels fine. I assumed it was just iPhones gradually getting shittier.


Now that you mention this, it does match my experience on AT&T with my iPhone - plenty of bars, no DNS issues but frustratingly long delays spent staring at layout templates (because progressive enhancement is long dead) waiting for any actual content to arrive.


> "After switching back and re-authenticating with the broadband APN, I’m no longer being throttled."

If the author is reading this, I recommend putting an update at the head of your article with this. It will be kinder to your readers.


Update #6 has been at the end of the post since I posted that here on HN.


Wireless carriers have been pulling stunts like this over the last several years. For example, T-Mobile blocks all SMS messages with links that have a .xyz TLD. There's no indication to either the sender or the receiver that anything was blocked. I assume this was done to prevent spam and sketchy websites, but personally speaking I've seen way more sketchy .com sites and more spam coming from gmail accounts.


One nit of the author's analysis:

The Strava javascript file used for speed tests is 1.68MB uncompressed. But in a browser, and in most other situations, it should be requested with `Accept-Encoding: ...`. In Chrome, Strava responds with a gzipped response that's 463kB in size.

This doesn't really matter for the CLI speed tests, where it's requested without compression, but it does mean that the speed test performance won't correspond to the actual in-browser performance, and the traffic shaping may not be comparable when the request is made without compression. Adding `--compression=gzip` to the wget command will fix this.

To quickly show the difference:

   $ curl -s 'https://web-assets.strava.com/assets/federated/find-and-invite-friends/827.js'  --compressed -w '%{size_download}\n' -o /dev/null
   462525
   $ curl -s 'https://web-assets.strava.com/assets/federated/find-and-invite-friends/827.js' -w '%{size_download}\n' -o /dev/null
   1759662


Fair nit. Although this is why I chose to focus on transfer rates instead of time.

But consider this. When I load Strava's dashboard, open the Network tab and search for "cloudfront", these are the metrics:

> 374 requests 15.80 MB / 7.07 MB transferred Finish: 4.89 min

This is not a good time, and no amount of compression is going to help the situation.


Oh I absolutely agree! That's some sketchy stuff. Didn't mean to sound like I was discounting the rest of your experience.


Makes me wonder if it's just something as simple as suboptimal routing, and the path it's taking goes through a hop experiencing packet loss or some other issue further upstream. I would think that there's a higher chance of AT&T being incompetent than doing something nefarious.


> Why AT&T wants bandwidth to appear lower than reality is a mystery to me, but I digress

For you, a speed test is a once-in-a-lifetime event.

For a wireless provider, it's something that clogs the shared medium of a radio channel quite a lot (modern speed tests try to push 100 MBytes or more?) and causes a disruption for everyone else on the same channel.

I don't know why ATT throttles that site (if at all; I'm not a customer), but wired and wireless providers are different.


>For Netflix, part of the message is: If you’ve got a bandwidth problem, don’t blame us — blame your ISP.[0]

This coverage from when Netflix launched Fast.com in 2016 alludes to the reason why the results on Fast.com may be slower than on something like Speedtest.net. ISPs like AT&T throttle connections to streaming video providers like Netflix, in order to force a lower quality stream. By offering a speed test from the same infrastructure that delivers your TV show, they are revealing the 'traffic shaping' that is degrading your connection. Sites like Speedtest.net are all excluded from this throttling, so it will always appear that you are getting 'full speed' when testing your connection.

[0] https://variety.com/2016/digital/news/netflix-fast-internet-...


I never thought about this much, but what's interesting is that there's a missed opportunity for collaboration here.

I think most ISPs throttle known streaming sites (netflix, youtube, etc.) by default for mobile plans. Some even advertise it [1]. And really what the ISPs have noticed is that if you throttle the throughput, the streaming services will switch to a lower quality stream (e.g. 720 or even 480 instead of 1080p) to allow for non-lagging streams.

So in some weird way the streaming sites are the ones that provided the tool and the ISPs figured out how to exploit it.

The two could have just worked together on a solution. Here they are pointing the finger at each other. ISPs don't want to have default 1080p streams for 4" device screens (which makes sense,) and streaming services don't want their network traffic throttled (which also makes sense.)

[1] https://www.t-mobile.com/offers/binge-on-streaming-video


> The two could have just worked together on a solution.

We do...? Google Global Cache [0] and Netflix Open Connect [1] are both the result of ISP partnerships wherein we deploy cache nodes in the ISP's datacenter to reduce network load and improve our users' viewing experience.

Whether a specific ISP chooses to partner with us or not (and the breadth of their deployment) depends on their willingness to sign the relevant contracts.

[0] https://support.google.com/interconnect/answer/9058809?hl=en

[1] https://openconnect.netflix.com/en/


I didn't know about that. That's cool.


fast.com was created by Netflix to show if your ISP was throttling Netflix traffic.

A lot of mobile services (at least in the US) offer "unlimited streaming" which is really just bandwidth-limited to offer SD-quality streams and offer "HD" (read: less-limited) streaming as a paid add-on.


"Modernizing its search engine has become an obsession at Google"....

No doubt it can be modernized, but just making Google search as good as it used to be would be helpful. And why not better than it used to be with the traditional interface? That would still be worth doing even if Google adds on a superb AI.

Search is the proverbial goose that laid the golden egg and Google is killing it, and I don't mean "killing" in the contemporary ironic usage.


I think you might have written this comment on the wrong post


Google search can't be as good as it used to, because the web isn't as "clean" (pure? idealistic? honest? legitimate?) as it used to be. People making websites with the goal of appearing higher in Google's results means Google has to work on countermeasures. That sets off a game of cat-and-mouse, and means actual good results are caught in the crossfire.


Is it possible to return Google to pre-SEO-blackhat days? I think the golden egg has been poisoned, even if Google tried its best.


Wrong thread?


I think it is the latency. 130 ms will cause websites with lots of little files across multiple HTTPS servers to load slowly. If the resources were bundled into a single file, it would load faster.

(My corporate network used to have similarly exaggerated latency on a bad old wire.)
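As a rough back-of-envelope, using the 374-request Strava dashboard load mentioned elsewhere in the thread and assuming old-style HTTP/1.1 behaviour with ~6 parallel connections per host and one request per round trip (purely illustrative numbers):

    374 requests / 6 connections ≈ 62 serial round trips
    62 round trips × 130 ms     ≈ 8 s spent on latency alone, before any payload

Bundling, or multiplexing many requests over one connection, is what collapses that round-trip count.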


What you say is true, but 130ms latency is quite normal for even very healthy LTE connections. It's simply a high latency environment.


This shouldn't matter much with H2. "Long fat pipe" is a well-known issue with a bunch of solutions.
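For the "long fat pipe" side of it, the bandwidth-delay product gives a feel for the problem (illustrative numbers, not measurements from the article):

    100 Mbit/s × 0.130 s ≈ 13 Mbit ≈ 1.6 MB that must be in flight to keep the link full

Small initial TCP windows and lots of short transfers never get close to that, which is part of why multiplexed H2/H3 connections or bundling help so much at this RTT.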


Adriano,

In the article you mention "Cloudflare" a few times, but I think you mean "Cloudfront". For example: "web-assets.strava.com resolves to Cloudflare, see the writeup on my site for details"

I'm assuming this was a typo based on the names sounding so similar and both being CDN's. Is that correct?


Get Calyx. Unlimited data, no throttling service via T-Mobile. Grandfathered contract from Clearwire

https://members.calyxinstitute.org/enroll/membership

I've done nearly gigabit symmetrical over 5G with it before and several Terabytes in a month. No issues.


I like the testing and documentation. But I also want to see more test results in more areas over a longer period of time.


While I can't provide more results over a long period of time, yet -- I can provide a pcap file and some details about IPv4/6.

I've added an "updates" section at the bottom of the post.


Excellent! I've bookmarked the page and look forward to updates. Thank you for making this public.


> Strava’s javascript resources clocked in at 1.68MB

That looks like the problem to me. Throttle the fuck out of that shit.


I had this problem with Boost Mobile for phones. Apple Store and various podcast sources seemed ultra-throttled and often wouldn't even want to load at all.

Eventually switched to Mint and problem went away. Maybe a fluke, but I wouldn't be surprised if it was some shaping BS


Just found that Google Mesh Wifi does this at my home: 25Mbit/s and 900ms. Directly with cable, 800/800Mbit and 4ms (full speed).

Deactivated priority for Zoom, and now I have full speed.


Are you near Utah? The fact you are being routed to a cloudfront pop in IAD is kinda broken.

What are you using for your DNS resolver?


Durango, CO is close to Utah, so that's not a surprise.

I've used `1.1.1.1`, `8.8.8.8`, and others during this debacle, which I can't recall.

Here are some query responses that I received during the incident, but I can't tell you which resolvers were in use during the queries.

    $ host web-assets.strava.com
    web-assets.strava.com is an alias for dgpcy4fyk1eox.cloudfront.net.
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.46
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.56
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.10
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.115
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:8000:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:e600:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:c600:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:b600:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:d000:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:6400:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:f000:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:400:17:4613:2840:93a1

    $ host web-assets.strava.com
    web-assets.strava.com is an alias for dgpcy4fyk1eox.cloudfront.net.
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.56
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.10
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.115
    dgpcy4fyk1eox.cloudfront.net has address 99.84.208.46
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:4c00:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:ae00:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:c000:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:7200:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:1000:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:b800:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:9400:17:4613:2840:93a1
    dgpcy4fyk1eox.cloudfront.net has IPv6 address 2600:9000:2199:9e00:17:4613:2840:93a1

The only difference I've seen is order, which is expected.

[edit]

I actually spot some different IPv6 hosts above, after having taken a closer look.


OK, you're being sent to PoPs in northern Virginia; that's a big problem. Lemme reply back with some other commands to figure out which DNS resolver public IPs you are coming from.


> While wget is not my goto for command line HTTP fetching

It is for me. Is there something better I'm not aware of?


Curl tends to be the standard.


It's what I reach for first. `wget` made for a better gif, though :)


wow, 40KB/s is approx the speed I get after I use up my 4G LTE data on Unreal Mobile. I get unlimited 2G but it's also the cheapest plan in the U.S.


What’s the best way to simulate this kind of network?


Throttle all traffic from CloudFront.
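If the goal is just to reproduce the symptoms locally rather than throttle anything for real, netem on a test box gets close (interface name and numbers are placeholders, loosely in the ballpark of the latency and throughput figures discussed in the thread):

    # add latency and cap throughput on the egress of a test interface
    $ tc qdisc add dev eth0 root netem delay 130ms rate 400kbit
    # remove it when done
    $ tc qdisc del dev eth0 root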


The net neutrality battle is ongoing.


How does traffic shaping interact with net neutrality regulations?


Net neutrality was removed under Ajit Pai's time leading the FCC. Did I miss some post-Pai restoration?


There are no net neutrality regulations. Republicans won that battle at the FCC.


What net neutrality regulations?

Yet another Trump triumph that I voted for Biden to fix, that he has shown absolutely no interest in fixing.


Easy.

goto here: https://www.att.com/support

log in.

cancel.

tell them why.

goto https://www.t-mobile.com

port your number and sign up.

and you're done!

You can't legislate good behavior. You can only hit them in the checkbook. Wireless fortunately doesn't have the problem the cable providers do, where local governments have entrenched players into local monopolies. Switching away is the most powerful thing you can do.


>You can't legislate good behavior.

Well, you absolutely can. It's just that the US voter regularly votes in politicians that do absolutely nothing to mandate at least fair behaviour.


I'd be happy to switch to T-Mobile if they could stop leaking their customers' data.


ok, fair point.


This seems like a good idea if t-mobile doesn't have traffic shaping, but do they?


T-Mobile was one of the earliest violators of net-neutrality: https://cyberlaw.stanford.edu/publications/t-mobile-likely-v...


lol as if ATT is going to care if a tiny handful of techies switches because of this.


Wireless providers have never had the same net neutrality situation the landline providers have.


Like a neutrino interacting with regular matter, it would seem.



