Netflix urged by EU to slow streaming to save internet (cnn.com)
76 points by Cantbekhan on March 19, 2020 | 165 comments



This is a non-issue & complete speculation. Netflix already manages stream quality based on network conditions and has likely done so for years. If the government wants to ensure network access they should be talking to the ISPs, IMO.
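
For reference, the basic shape of that client-side logic is something like the sketch below (Python). The bitrate ladder, safety factor and sample handling are made-up illustrative values, not Netflix's actual algorithm, which is far more sophisticated:

    # Minimal sketch of throughput-driven adaptive bitrate selection, the kind of
    # logic an HTTP streaming client runs between segment downloads. Ladder and
    # safety factor are hypothetical example values.
    BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 5800, 15000]

    def pick_bitrate(recent_throughputs_kbps, safety_factor=0.8):
        """Pick the highest rung that fits comfortably under measured throughput."""
        if not recent_throughputs_kbps:
            return BITRATE_LADDER_KBPS[0]          # start conservatively
        estimate = min(recent_throughputs_kbps)    # pessimistic capacity estimate
        budget = estimate * safety_factor
        viable = [r for r in BITRATE_LADDER_KBPS if r <= budget]
        return viable[-1] if viable else BITRATE_LADDER_KBPS[0]

    # Example: throughput samples (kbps) from the last few segment downloads
    print(pick_bitrate([4200, 3900, 4500]))  # -> 3000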


Yup. My regional ISP already has infrastructure in place for this. They cache the most watched Netflix shows for a period of time in metro-level data centers. "Otherwise, Netflix would kill our network," a friend said.

It's probably a formal agreement with Netflix, but I'm speculating.


Netflix has been partnering with ISPs for a long time via Open Connect - at one point, they were giving away the hardware.

https://openconnect.netflix.com/en/


Huh, TIL. The data center isn't in the list of peering partners, but it's probably an appliance like the other user mentioned.


Sounds like they just have the netflix boxes in their data centers.


Why don't the govts just mandate watching less TV, like less social contact? (Being cheeky here :)


That's a long term solution.

Short term, ISPs can't increase bandwidth availability, and Netflix can reduce bandwidth consumption.

And I'm not sure it's a nonissue. The videos in my online classes are pausing every few seconds lately. And watching those in a lower quality isn't an option because the writing needs to be readable.

On the other hand, Zoom works better than EdX, so it may be the EdX servers that are overwhelmed.


But is that caused by contention between netflix traffic and your online class provider's traffic, or is it just a capacity issue with your online class provider's infrastructure?


That's a good question that I had already added to my post before you asked.

I wonder if anyone here knows if the EdX servers are overwhelmed.


EdX's videos are served straight from YouTube, I think


It looks like they're served from edx-video.net.


The long-term solution is IP multicast for live streaming.
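
For anyone who hasn't played with it, joining a multicast group on the receiving side is trivial; a minimal Python sketch is below. The group address and port are arbitrary example values, and the real obstacle is that the ISP and backbone routers have to support IGMP/PIM, not the client code:

    # Minimal IP multicast receiver using only the Python standard library.
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004  # example administratively scoped group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Tell the kernel (and upstream routers, via IGMP) that we want this group.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        payload, sender = sock.recvfrom(65535)
        print(f"{len(payload)} bytes of the shared stream from {sender}")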


That would help for things like Youtube TV, but I'm assuming that users would not be too happy about having to coordinate their binging with their neighbours?


Why can't ISPs reduce Netflix data?


Some countries still have "Net Neutrality" laws preventing just that.


The EU for the most part doesn't. Traffic shaping is still legal; it's a practice that died out mainly because of the difficulty of implementing it these days rather than because of net neutrality.

https://berec.europa.eu/eng/netneutrality/

>ISPs are prohibited from blocking or slowing down of Internet traffic, except where necessary. The exceptions are limited to: traffic management to comply with a legal order, to ensure network integrity and security, and to manage congestion, provided that equivalent categories of traffic are treated equally.


I thought it was clear that "laws" are out of the window everywhere now.


Because it's streamed using HTTPS so they cannot remux or transcode it (not like they have the CPU power to do so anyway).


They could still throttle the TCP stream, and many do. Then the player will likely switch to a lower quality video to adapt to the bandwidth.


How ironic. Maybe CNN should stop autoplaying video that I don't even want?

I'm sure that will help bandwidth.


Can we get that turned off on ESPN too while we are at it please!!!


Lots of comments in here getting it exactly right. Asking Netflix to change will have almost no effect. Netflix already adjusts stream quality based on "internet weather" and already has most of the stream sources really close to the user. And has for years and is pretty good at it now. (Source: I worked there and worked on a lot of this stuff).

But it also makes sense for the CEO to make the political gesture and appease the EU commissioner.


Could it not have a significant impact on the last mile of providers (which is often a shared medium)?

Deprioritizing Netflix would also solve that, but given that this is illegal in several countries, providers there likely lack the infrastructure or know-how to implement such a policy on short notice, even if these laws were temporarily suspended.


Unlikely, because Netflix accounts for traffic on the entire segment and aggregates data. It would be possible that you are the only one on your segment watching Netflix and everyone else is using Zoom or something, but if that were the case then your one stream wouldn't make a difference if it was at lower quality.

In other words, this would only help if a lot of people on your segment are watching Netflix, and if a lot of people on your segment are watching Netflix, then Netflix has a whole bunch of data on exactly how full that segment is and will adjust accordingly.


That makes sense! I didn't consider that Netflix would actively scale down quality in that way (i.e., backing off to achieve less than 1/n proportional bandwidth share), but ISPs probably ask for something like that quite regularly.


Netflix has caches all over the world https://openconnect.netflix.com/en/peering/#locations - don't see how this would make any significant difference to backbones. I'm in India and my Netflix downloads mostly come from a few neighborhoods away - I seriously doubt Europe has it much worse.


It's not only the backbones that are getting congested. In most access technologies, there will be some shared medium between users and these caches, and often that shared medium is close to saturated.


Is this really a problem at all? Doesn't Netflix select a lower quality stream automatically if the player buffer is being read from faster than it's being filled?

It's already in their interest to not have users watching their video "Buffering...". I'm sure that makes users disconnect. If users disconnect they won't generate revenue.

In other words, in cases of congestion on the Internet, don't Netflix and similar services already "slow down streaming"?
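
Roughly, yes: the player-side decision can be as simple as watching buffer occupancy. A toy Python sketch (the thresholds and rung numbers are invented for illustration, not taken from any real player):

    # If the playback buffer drains faster than it fills, step quality down;
    # if it is comfortably full, step it up. Thresholds are example values.
    def adjust_quality(buffer_seconds, current_rung, max_rung,
                       low_water=10.0, high_water=30.0):
        if buffer_seconds < low_water and current_rung > 0:
            return current_rung - 1    # draining: drop quality before rebuffering
        if buffer_seconds > high_water and current_rung < max_rung:
            return current_rung + 1    # plenty buffered: try a higher quality
        return current_rung            # otherwise leave it alone

    print(adjust_quality(buffer_seconds=6.5, current_rung=4, max_rung=5))   # -> 3
    print(adjust_quality(buffer_seconds=45.0, current_rung=4, max_rung=5))  # -> 5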


> I'm sure that makes users disconnect.

I guess I'm the opposite. For me personally this "adaptive" thing without any user control is more likely to make me go away. For example, I stopped paying for Amazon's Anime Strike after the trial period because it kept playing in something like 480p with no way to force better quality.

If it's buffering, fine, I can either try to find out what is wrong with the network or just let it buffer the whole thing.


The whole file will in most cases not fit your playback device's cache, especially considering STB-like devices like Chromecast, Apple TV or smart TVs.

It would also be a big waste of bandwidth every time a user does not finish the entire video and the cache would just be eventually discarded.


Doesn't adaptive stream quality, or any form of responding to congestion, require the link to be saturated first? At least I don't know how it would detect that the link is almost saturated but not quite yet. So every time it detects that the link has been saturated, the damage is already done.

Compared to, say, just capping the quality a certain amount below the current maximum for a while. For example, turning off the 4K stream for the next 2 weeks. I guess people can live with HD for a while.


Yes, but that's how TCP/IP (the Internet) has worked successfully for 40 years.

Basic congestion control uses packet drops as a signal of congestion and slows down. You get packet drops if a queue was filled somewhere, which would be the bottleneck between two stations. That's business as usual.

The best queue to fill is as close as possible to a station, i.e. in the user modem or first hop, if not the application itself. If we begin to get congestion between ISP networks it won't be as pretty but it won't "break the Internet" and it's literally their job to upgrade the connectivity between networks before that happens. Plenty of bandwidth means you don't need QoS.

That being said, there are modern congestion control algorithms that try to be more clever. Your web browser may already be using one of them.
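
For anyone who wants the intuition in code, the classic loss-based loop (additive increase, multiplicative decrease) is tiny; a bare-bones Python sketch is below. Real TCP stacks are far more involved than this:

    # Toy AIMD loop: grow the congestion window until a loss signals a full
    # queue somewhere, then back off.
    def next_cwnd(cwnd_segments, packet_lost):
        if packet_lost:
            return max(1.0, cwnd_segments / 2)   # multiplicative decrease on loss
        return cwnd_segments + 1.0               # additive increase per RTT

    cwnd = 10.0
    for lost in [False, False, False, True, False, False]:
        cwnd = next_cwnd(cwnd, lost)
        print(f"loss={lost!s:5}  cwnd={cwnd:.1f} segments")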


> Basic congestion control uses packet drops as a signal of congestion and slows down. You get packet drops if a queue was filled somewhere, which would be the bottleneck between two stations.

Bufferbloat gets in the way of that. As does packet loss due to non-congestion reasons, e.g. noisy or unstable connections at the physical layer.


Large queues without AQM is indeed problematic but the industry is far more aware of this nowadays than it used to be.

If the EU really wanted to save the Internet they should have done something about that, only it's 20 years too late.


netflix & co are probably using BBR to shovel data to end users, which takes latency into account too, not just packet drops.


> Doesn't adaptive stream quality, or any form of responding to congestion, require the link to be saturated first?

It does not in the case of Netflix. Netflix monitors the performance of entire network segments, and if a segment is starting to get overloaded, it may kick down other streams on the same segment. Netflix uses the info from every one of its clients as a sort of continuous end user test so they get a really good read on every segment's network congestion in near real time.
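
As a purely hypothetical sketch of what that segment-level view could look like (the segment ids, thresholds and field names below are all invented for illustration, not Netflix internals):

    # Aggregate throughput reports from every client on a network segment and
    # flag segments that look close to full so new streams can be capped.
    from collections import defaultdict

    reports = [  # (segment_id, client_id, observed_throughput_mbps)
        ("isp-a/node-17", "c1", 4.1),
        ("isp-a/node-17", "c2", 3.8),
        ("isp-a/node-17", "c3", 2.2),
        ("isp-b/node-03", "c4", 24.0),
    ]

    by_segment = defaultdict(list)
    for segment, _client, mbps in reports:
        by_segment[segment].append(mbps)

    CONGESTED_BELOW_MBPS = 5.0  # invented threshold for the example

    for segment, samples in by_segment.items():
        avg = sum(samples) / len(samples)
        congested = avg < CONGESTED_BELOW_MBPS
        print(f"{segment}: avg {avg:.1f} Mbps, cap new streams: {congested}")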


Interesting and all the more reason why this shouldn't be a problem politicians need to get involved in.


IXP traffic only seems to be up by 10% or so. And any direct peering with ISPs should be up to netflix and that ISP to manage.

https://www.de-cix.net/en/locations/germany/frankfurt/statis... https://stats.ams-ix.net/index.html https://portal.linx.net/lans_flows


Netflix also is deployed directly in ISP's DCs. https://openconnect.netflix.com/en/


That's the direct peering part I mention. It's up to the parties to hash that out bilaterally on a case by case basis, there shouldn't be a need for netflix to throttle things across the board.


Direct peering is probably not quite the same thing as installing a cache in an ISP's racks.


I can’t speak for other EU countries, but Germany has systematically neglected its network infrastructure over the last decade. Even in Berlin, fast, reliable Internet is hard to come by. I often find my LTE turning into EDGE in the blink of an eye.

We should urge leaders (public and private) to do a better job and hold them accountable for bad policy.


The USA paid $4 billion to get FTTH by 2004; it didn't happen. The ISPs still got the money. I'm honestly hoping Starlink shakes the ISPs to the ground.


Support your local municipal fiber efforts. StarLink is cool, but pulling your own fiber financed with muni bonds at super low interest rates is available today. Returns are chasing safe yield, muni bonds are safe yield.

https://muninetworks.org/communitymap

These folks in the northern rural parts of England do it stupid cheap:

https://b4rn.org.uk/

Contacting your local community representation/government about this can be done from the comfort of your home during self-isolation.


There's dark fiber in my home town -- what would it take to run a muni isp and finally give Comcast and co the finger?


https://startyourownisp.com/ to start (it’s wireless last mile centric though). I will return with more links later when I’m back at my desktop and have access to my bookmarks. I ran a small ISP a lifetime ago, and have also helped others set them up.


past-tense -- would you do it again?

There's a company already doing gigabit over wifi to homes, but you run into line-of-sight issues -- I figure with all the dark fiber we would just patch into that


As long as there are people without good internet options and I could break even, yes. I plan on doing this in the Eastern Tennessee mountains, but I am somewhat financially secure and work remotely, YMMV.

Like any bootstrapped venture, make it a side gig until you reach a comfortably sustainable run rate and subscriber base. You’re not going to get rich as a private ISP, but it can be a comfortable small biz.


Not losing money but being a viable alternative to the incumbents would be a huge win for me.

I bet I would need to get some serious networking skills also, no?


The US has a _lot_ more area to cover, though. Other than the "urban canyon" effect Berlin basically has no excuse.


The 'excuse' is that Berlin is a relatively old city with little over-air infrastructure and that the ex-national-monopolist (DTAG) doesn't need to give other ISPs access to its infrastructure (trenches, tunnels, posts, etc). This is in contrast, for example, with how the ex-national-monopolist (Orange) operates in Poland - by law they have to give you access to their infrastructure trenches/tunnels, extremely cheaply (around $25 per km per month).

The only thing that DTAG has to do is allow local loop access for unbundling - and this is why the biggest freedom of choice you see in the German Internet market is that you can choose from 10 different ADSL ISPs. However, since they all use the same shitty old copper, it's effectively zero choice.


> Even in Berlin fast, reliable Internet is hard to come by. I often find my LTE turn into Edge at the blink of an eye.

If you're talking about fast and reliable why are you looking at mobile internet then? Fiber beats a shared medium on a fundamental basis.


Even HD streaming from Netflix requires around 7 megabits of bandwidth. That's not too much. If the internet infrastructure is struggling to handle that much throughput in 2020, it's the fault of ISPs, not Netflix.

Streaming video content has been ubiquitous for years now. A lot of households probably have 3-4 devices streaming HD content for hours during "prime-time." So ISPs should have kept up.
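
The back-of-the-envelope math for a shared last-mile node, using the ~7 Mbps figure above (the household count and node capacity below are assumptions picked just to show the oversubscription, not data from any real ISP):

    # ~7 Mbps per HD stream, 3-4 concurrent streams per household at prime time.
    MBPS_PER_HD_STREAM = 7
    STREAMS_PER_HOUSEHOLD = 4
    HOUSEHOLDS_PER_NODE = 200        # assumption for a shared last-mile segment
    NODE_CAPACITY_MBPS = 2500        # assumption, e.g. a few bonded DOCSIS channels

    peak_demand = MBPS_PER_HD_STREAM * STREAMS_PER_HOUSEHOLD * HOUSEHOLDS_PER_NODE
    print(f"peak demand: {peak_demand} Mbps")                            # 5600 Mbps
    print(f"oversubscribed by: {peak_demand / NODE_CAPACITY_MBPS:.1f}x") # 2.2x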


Let's not forget how much the rate of broadband improvement has slowed lately. In 2000-2010, speeds were doubling every few years with successive ADSL generations, and fiber was just around the corner. The tech is there but the investment is not; from an engineering POV we could easily and cheaply have 10G at home in western cities for everyone.


Rule of thumb: Apparent exponential growth means you've zoomed in too far on the chart you're looking at.

The important change was actually always-on network access, but that's often muddled up with the step to 512 kbps or, more commonly, 1 Mbps DSL service labelled "broadband" - because for most users both happened together.

There wasn't then and isn't now a practical need for "10G at home in western cities for everyone". Getting to 10Mbps allowed almost all the applications you'd actually want, and 100Mbps makes that practical if several people in a household are doing different things - maybe you're in a video conference while the kids watch movies. But continued exponential growth needs a compelling application, and no practical application was found.

Fibre (often muddled together with these higher bandwidth offerings) makes sense for new builds because it's cheaper to produce and has good reliability. But ripping out working copper networks to add fibre is like demolishing an otherwise perfectly good bridge to put in a nicer bridge. It's not impossible but it is a waste of resources.


Well, the mobile side is certainly betting on "if you build the bandwidth, the apps will come".

In fixed broadband the same is still happening, albeit more slowly: we have streaming TV, then HD/4K streaming movies, then games a la PS Live/Stadia, enormous multi-GB console/PC game downloads... new apps are continuously enabled by improving bandwidth & latency.


I moved from 100 Mbit/s to 1 Gbit/s (up/down both times) and haven't really noticed a difference. I'm still the only one using it in my household, mostly over wifi, which effectively caps me at around 300 Mbit/s.

I think you have to consider the quite limited multitasking in bandwidth consumption. Worst case scenario, I'm downloading some big file and streaming video at the same time, but that happens very rarely. Most realistically I'm only using one service at a time.

We also have to consider the asymmetrical nature of service consumption. Netflix et al. aren't even close to saturating a 100 Mbit/s connection with video, nor are most places I download stuff from fast enough that I'm saturating my wifi link.

This will probably age badly, but I think "100 Mbit/s of low-latency bandwidth per person ought to be enough for anyone".


It’s not so simplistic.

It may make perfectly good sense to pull out copper and push in fiber.

It costs money to run and maintain a copper network. Fiber networks are less costly to operate and maintain.

In a rational world copper would be replaced immediately with fiber when total cost of ownership over the lifetime of the asset is less for fiber than copper.


Let's not get hung up on fiber vs copper; the same type of advances make radio and copper carry more BW too. But we are not deploying faster signaling on copper fast enough either, due to lack of investment and the resulting slowdown of copper speedups making it to market.


No they don’t. Technological advances make copper and radio carry more bandwidth OVER SHORTER DISTANCES.

This is why WISP are deploying micro POPs and cellular carriers are deploying small cells.

This is a huge difference compared to fiber where the same infrastructure will scale from 100M to 100G.

Forklift upgrades and truck rolls are expensive. Copper and radio aren’t like fiber where swapping a $9 optic for a $19 optic makes you go from 1G to 10G. All this makes a massive difference in TCO in fiber’s favor over the lifetime of the assets.

Providing fixed line service over copper or radio is silly in the long term as each upgrade requires you to build out fiber closer and closer to the end user bit by bit. Better to do it at once and reap the benefits, both in terms of service and costs.

Not deploying faster signaling in copper networks is also a misstatement. Cable networks are busy splitting nodes, deploying more fiber and upgrading to faster versions of DOCSIS all the time.

Most telcos have already transitioned to fiber and are not building out copper anymore. Here the US is an outlier, as the telcos here are just milking the existing copper assets for revenues until they exit the business.


The important number for videoconferencing is the uplink. That is typically set at an insultingly small fraction of the downlink, so in practice you need to buy the Ultra Hyper Globo Super Duper Speed Pro package to get anything better than a Mondrian painting slideshow.


100Mbps probably is enough for a single family if they can actually get it. The fact that netflix streams are apparently saturating the network implies there is a practical use case for a less underprovisioned backhaul.


I've seen constant improvements during the past decade. The main issue nowadays is that more often than not, unlimited 4G is reasonably fast and reliable, many residential customers don't see the point of a 500Mbps/1G/10G broadband connection. Even though it's not particularly expensive over here.

Yes, some European countries are lagging behind (hello Germany) but that's a political problem.


Ultra-cheap ~unlimited bandwidth doesn't seem like a good idea unless we ever figure out a solution to botnets that doesn't involve the whole internet getting behind Cloudflare.


Bandwidth costs are not what's keeping botnets in check. This would be a really sad reason to retard internet connectivity speeds.


To clarify, I think bandwidth should simply be metered cheaply but also clearly like water and electricity.

One major problem is that people have no idea where their bandwidth goes, the tools and bill don't help them.

Meanwhile it's pretty obvious when someone is tapping your water/electricity and, importantly, you have an incentive to investigate. This doesn't exist with bandwidth.

If your smart toaster is saturating your bandwidth, 99.9% of people will never know. They'll just think their internet is slow for some reason and that's all they can perceive. We have so little insight into our connection that we have all sorts of superstitions when the internet is bad, like restarting the router. I think everyone is being let down and the system seems to exist to let bad actors take advantage of everyone else.


As I've written earlier, what’s the point of rationing something that costs a hundredth of a cent per GB?

You get 10TB of bandwidth for a dollar. Once average usage per subscriber goes over that, then we can talk.

A $50 router will tell you where your bandwidth is going, with graphs, so that's not an issue.

Not that your smart toaster is going to be a problem. Now your gaming console...


I've been impressed that Zoom isn't breaking down. I'm on it all day, and even my kids now have TaeKwonDo three days a week over Zoom.


If the stuff is breaking down already, then it has been improperly engineered from the foundation up. Very few places are in work-from-home mode yet, apart from fairly technologically forward companies.

If the situation continues, I would expect the load on conferencing demand to be at least 10x of what it is now when an average 4-10 person business starts doing WFH.


I'm smelling ISP lobbyists.

They want to abuse the situation by inventing a problem and getting support from politicians with no knowledge and plenty to lose during a humanitarian disaster - all of it to be able to introduce "HD packages" and similar later, so you get to pay more while they get to invest less. This is a great first step towards selling "services" (by holding them hostage) and not being a "pipe" (which is their job).

Think of the children.


Right now would be a fantastic time for these ISPs to start installing Open Connect Appliances. It'll keep most Netflix traffic off of the backbone.

https://openconnect.netflix.com/en/appliances/


It’s not up to the ISPs. Netflix requires a minimum of 5 Gbps of traffic before an ISP qualifies for an appliance.


It's up to the ISPs to apply, isn't it?


Do you have any reason to believe that all those that qualify haven’t applied?


Laziness, lack of awareness, being under the 5 Gbps limit a year ago when they took a look, any number of other reasons?


While true in general, it's not a hard and fast rule. Exceptions get made when it makes sense.


This is in direct conflict with what the (UK) broadband providers have said:

https://www.bbc.co.uk/news/technology-51870732


Of note:

"Netflix said it already adjusts the quality of streams to available network capacity, and uses a special delivery network that keeps its library closer to users as a way of consuming less bandwidth.

"The Commission said that while there has been a sharp increase in internet usage, no outages or adverse affects have so far been reported."

So I'm not sure what the problem here is.

Meanwhile here in the US, my company has been holding large video conferencing sessions (80+ people) for the first time ever, on Google Hangouts Meet. They've been flawless. Sure, that can change, but I have a hard time believing we aren't already near peak "at home" usage.


Are actual technical people reporting issues with internet bandwidth, or are these unjustified fears by politicians?


Depends on the provider. Netflix is pretty good at offering peering or local cache servers that can be installed inside a provider's network.

But most providers don't take Netflix up on that offer and rather focus on hoping that crappy peering is forcing Netflix into paying them for connectivity.


That's not the impression I get from Wikipedia. Do you have a source?

https://en.wikipedia.org/wiki/Technical_details_of_Netflix#O...


Netflix is open to any peering, some ISPs are not: https://techcrunch.com/2014/07/29/netflix-and-att-sign-peeri...


Most ISPs have enough capacity to handle this, investments in capacity usually last many many years. I'm seeing like 10% growth of peak traffic consumption on lock down in my country which is nothing. I'd be surprised if anyone on FTTB experiences problems. I'm not talking about monopolies though that neglect and postpone investment into infrastructure upgrades as much as possible, plenty of their consumers probably already live with significant quality problems and routinely experience "slow internet".


Yes, I'm seeing issues and slowdown.


Same here. I work from home and before this whole thing started, I'd generally get ~250 Mbps down. Over the last few days, since people started WFH and kids were out of school, I'm lucky to get maybe 15 Mbps. At one point yesterday, I was getting 1.2. Also have seen DNS resolution issues off and on for the past couple of days.


None of the residential providers have enough interconnect capacity to actually support what they have been selling if customers start using even 10% peak bandwidth promised by the connections.


Yet another place where the pandemic calls companies on their bullshit.


This used to be far more prominent (or maybe I used to pay more attention), but DSL offers around 2000 advertised their contention ratio: most residential ones were 50:1, but there were some at 20:1.

Most business offers were 10:1, but you could pay for up to 1:1.
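
Those ratios translate into worst-case per-user numbers very directly; a quick sketch (the 8 Mbps line speed is just an example ADSL headline rate from that era, not part of the comment above):

    # Worst-case per-user rate = shared link speed / contention ratio.
    LINE_SPEED_MBPS = 8  # assumed ~2000s ADSL headline speed

    for label, ratio in [("residential", 50), ("residential (premium)", 20),
                         ("business", 10), ("business (dedicated)", 1)]:
        worst_case = LINE_SPEED_MBPS / ratio
        print(f"{label:22} {ratio:>2}:1  worst case = {worst_case:.2f} Mbps per user")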


Well - it depends on what they are using it for. For streaming content & downloads the caches should pick up the slack. Video conferencing is much harder.


If by caches you mean caches in the end-AS, then the only network that has that ability is Akamai, which has a very large number of PoPs located inside eyeball networks.

Cloudflare does not. CloudFront does not. Fastly does not. Google does not. Level3 does not. Even Limelight/Edgecast, which used to have a decent footprint inside Verizon, does not.

That in turn means that all of the content cached by those networks still relies on the eyeball networks having interconnect capacity.


I'm pretty sure I remember being shown a Google cache box in an ISP data center a couple of years ago - and a Netflix one as well - and I think an Apple one - but not sure. I do remember there being a lot of fuss with iOS updates and they got persuaded to put them in a cache.

Maybe I dreamed it.


Netflix does but it is not that big size wise. It downloads current hot content. With COVID-19 a lot of long tail is now being consumed, not enough to push it onto the cache boxes but enough to strain the interconnects. When Altered Carbon 2 is released, it will be on the Netflix box because a good number of people at the edge would be requesting it. On the other hand some independent flick won't be on it because there are only a few devices requesting it at the same time.

Google used to have some search appliances, which they gave to ISPs. It is not the same thing.

Apple used to use Akamai for distribution of content because of racks that Akamai had in lots of edge networks. I do not know if they are still using it but I expect they do. Akamai has an enormous foot print.


I did an EU project where the tenet was that 90% of the content consumed on a mobile network in a day was 1TB total! But yes, the tail is very long.


Waiting for my provider change, and upgrade to 250 Mbps, next week. Right in time for the Disney+ launch over here! If everything goes well, home office will become a lot more entertaining! Binging The Mandalorian while staying muted during conf calls!


Like what? Is it an issue with specific services/providers? Or total internet bandwidth? Anyone know the bandwidth the submarine cables can handle in their current configuration?

What's the real bottleneck here?


I don't think the backbone of the Internet cares about this. ISPs get a box with all of Netflix's content in it, so you probably never go out to the Internet to watch something on Netflix. (YouTube also does this, though obviously they don't cache every video in every local ISP's cache, and I believe that Apple has a system like this as well.)

If there is any slowdown, it is likely due to limitations in the last mile. Netflix isn't sending 4k video streams from California to everyone else in the world individually, assuming everything is working.


> What's the real bottleneck here?

ISPs overselling their last mile bandwidth.


As others have noted, the problems are with specific ISPs and their underdimensioned last mile network and/or backhaul.

A modern submarine cable, like Google's new Dunant transatlantic cable, has 250 Tbps of bandwidth. For reference, the largest Internet Exchange in Europe, DE-CIX, just reported a new traffic record of 9 Tbps.

So, no, anything using fiber isn't a bottleneck. You can fit all of DE-CIX's traffic on a single strand of fiber with off-the-shelf gear.
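
Rough sanity check on the single-strand claim, assuming typical off-the-shelf DWDM numbers (the channel count and per-wavelength rate below are round assumptions, not the spec of any particular system):

    # Capacity of one fiber pair = wavelengths x per-wavelength rate.
    CHANNELS = 96               # assumed C-band DWDM channel count
    GBPS_PER_CHANNEL = 200      # assumed coherent per-wavelength rate
    DE_CIX_PEAK_TBPS = 9        # figure quoted in the comment above

    strand_capacity_tbps = CHANNELS * GBPS_PER_CHANNEL / 1000
    print(f"one fiber pair: ~{strand_capacity_tbps:.1f} Tbps")          # ~19.2 Tbps
    print(f"DE-CIX peak fits: {DE_CIX_PEAK_TBPS <= strand_capacity_tbps}")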


It is usually ISPs oversubscribing their nodes with too many customers. DOCSIS 3.0 and 3.1 are very fast. The video services that work best leverage locality, so that you aren't streaming westward over the ocean but from a local cache much closer to you.


My cable gets pretty slow in the afternoon. From the usual 80 Mb/s down to around 4. Ping is up from 15 to 80.


Fortunately SAPGUI uses up so little bandwidth. Can't Netflix and chill though; they still expect me to do work.


Nice one! On-premise hosting could save you there!


Can we get the millions if not billions back we gave to ISPs to roll out and improve broadband services that they never delivered?


We should give them more money to improve infrastructure.

I am sure 'this' time it will work.


nice try ISP executives!


My ISP, Sonic, said yesterday their evening peak traffic is up 25% since Bay Area counties ordered people to stay at home.

https://twitter.com/dane/status/1240155938193051648/photo/1


Here in the UK there is talk of 2020 being the year society gets its act together and starts making significant changes for the better. I’ve heard this praised as an opportunity, or bemoaned as lefty idealists trying to take over.

One positive outcome we can all agree on though: the end of BT OpenReach’s passive-aggressive “monopoly” couldn't come soon enough. The amount of makework that goes on in UK broadband caused by OpenReach’s emplacement at the heart of it all is ridiculous.

Kelly Communications (a major broadband hookup contractor for OpenReach and Virgin): your days are numbered.

Any mention of Netflix is a deflection.



Where are the calls for cable companies to downgrade their traditional TV channels (set top box) from HD to SD so that they can re-provision some of the bandwidth toward internet?

I don't know enough about the tech to know for sure that this is possible, but it seems likely since it is the same wire and they offer different levels of TV and internet service.

If I'm right and it is possible, why is it that only streaming providers are being asked to sacrifice the quality of their service?


How soon until we find out the EU was nudged to do this by ISPs that can't handle the network load and want to push blame? I mean, everyone knows that Netflix already uses available bandwidth measuring to adjust bitrate/quality.



Netflix solves backbone pressure with Open Connect. Last-mile delivery is still up to the ISP, but that won't "break the internet."


Isn't it automatic? If the network speed is slow, Netflix will switch to a lower bandwidth video stream.


"Urged" by who exactly? Some random politician who thinks the internet is a series of tubes?


Pretty much: Thierry Breton. A political hack with a pressing need to feel relevant in the crisis and nothing useful to do.

The daily briefing at ec.europa.eu was edited after Netflix caved, but yesterday it was evident from it that the person responsible had no clue about how any of it works and was deeply confused by the difference between "the Internet" and mobile networks.

Part of the text freaking urged people to "use Wi-Fi" to relieve Internet congestion. Because as everybody knows, Wi-Fi uses magic and bypasses the Internet.

So we got our 25% reduction in Netflix quality on broadband connections because some moron was told by telcos that mobile networks are strained. While the actually competent people at IXes all over EU are saying there's nothing to worry about, they know how to do their job.

EU in a nutshell for you.


Poor ISPs can't deliver the broadband that we pay for, and now it is the customers' fault. :(


Well, ISPs design their networks for the average bandwidth used (plus a safety margin) and not for the worst case scenario.

Just like the electricity utility companies cannot provide everyone at the same time with the maximum power they are allowed to use.


Sometimes I really wish people got their internet bill and saw their peak vs. off-peak usage as something like megabits per second, just like they do with their power bill in kilowatt-hours. You could even provide the same kind of "neighbors near you used" metrics.

Each measure could be based off of what the ISP has factored in for the "average" usage at a given time. It would be a purely synthetic number, but it would give people a better idea of exactly what they use, when, and how that compares.
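
Something like this would be easy to compute from per-hour byte counters; a sketch in Python (the peak window and sample data are arbitrary, just to show the shape of the bill line):

    # Split per-hour byte counters into peak/off-peak windows and report each
    # as an average Mbps, the way a power bill reports kWh by tariff period.
    hourly_gb = {h: 0.2 for h in range(24)}          # baseline background traffic
    hourly_gb.update({19: 4.5, 20: 6.0, 21: 5.2})    # evening streaming, say

    PEAK_HOURS = range(18, 23)

    def avg_mbps(hours):
        hours = list(hours)
        total_bits = sum(hourly_gb[h] for h in hours) * 8e9
        return total_bits / (len(hours) * 3600) / 1e6

    print(f"peak average:     {avg_mbps(PEAK_HOURS):.2f} Mbps")
    print(f"off-peak average: {avg_mbps(h for h in range(24) if h not in PEAK_HOURS):.2f} Mbps")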


My ISP provides me detailed traffic charts. For example on Monday night I lost link-layer service (as did a neighbour and perhaps my whole street, this isn't exactly the ideal time to go asking everybody in person) and I can see that on the chart, and then I can see the next night when I had a multi-way video conference with friends for several hours.

Apparently in March 2019 I used a total of 106GB of download and 25GB of upload, and this month so far is way below those numbers. This is biased by me already working from home, so hour-long video conference calls aren't a new thing for me.

The ISP's policy (for which they charge significantly more than a typical ISP in my country) is to "never be the bottleneck". If things are too slow it is never (intentionally) their fault. So that's one reason they provide these charts - you can see latency info in them, drop rates, everything.


You can do this yourself with a $50 router.

You’d have to ask your neighbor about their usage to get comparative metrics, tho.


There's a difference between them though. It's rare that people max out their power when it comes to electricity. But with the broadband it's different. Any time you download a file you try to max out your connection. There is utility for an ISP to increase their bandwidth capabilities for customers. There's much less utility for electricity companies to be able to provide more maximum power to each home.

We also know from poorer countries that ISPs can provide better connections.


What "utility" do you think exists for the ISP to increase their bandwidth capabilities?

With an electricity utility, they may be able to turn on their coal generators to meet peak loads - it's a higher cost, but it's something they have available to use when needed. If engineered correctly, it also provides power consistently across all customers. Sure your house never uses enough to matter, but if all of a sudden an auto manufacturer wanted to turn on a whole new production line, they would have planned for that and gotten specific lines run to power it.

There is no equivalent for an ISP. Yes, they could guarantee that each customer gets x mbit/s and provision enough capacity (both to your house, to your neighborhood, to your central office, and finally to their peering connections) to cover all of that, but your internet would cost $300/m whether you use it or not and you'd be complaining about that too.


>What "utility" do you think exists for the ISP to increase their bandwidth capabilities?

In the US? The fact that everybody doesn't hate their guts. A future government could easily screw them over, because the voters don't like those companies.

In other countries? Competition. You offer a better service than other ISPs. This also gives everyone better internet and internet services can offer more things. Much poorer countries than the US and Germany have much better internet for a lower cost. As long as there's a big disparity there customers aren't going to be happy.


This is not correct. IP networks are planned for the 95th percentile peak traffic, not average.
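
For anyone unfamiliar with the rule: take the 5-minute traffic samples, sort them, discard the top 5%, and plan (or bill) for the highest remaining value. A quick sketch with made-up sample data:

    # 95th-percentile calculation over one day of 5-minute samples.
    import random

    random.seed(1)
    samples_mbps = [random.gauss(400, 120) for _ in range(288)]  # synthetic data
    samples_mbps.sort()

    idx = int(len(samples_mbps) * 0.95) - 1          # index of the 95th percentile
    p95 = samples_mbps[idx]
    avg = sum(samples_mbps) / len(samples_mbps)

    print(f"average: {avg:.0f} Mbps, 95th percentile: {p95:.0f} Mbps")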


I wouldn't want to pay for worst case service utilization. On the other hand I definitely think service providers should be more clear on what exactly you're paying for.


Would edge servers alleviate some of the bandwidth issue for this?


That’s what the openconnect service does. It’s an appliance installed directly into the ISP data center.


How much would AV1 help this?


What does "standard definition" mean today? NTSC, 480i, 30FPS? HDTV, 1080p, 30FPS?


"Pornhub urged to slow streaming to prevent breaking the internet"


How about instead of urging people to do something they won't, remove content or only make certain content available at certain times for certain people.


Nobody should be surprised by this, I mean if you lock down hundreds of millions in western countries, what will they do in 2020 but online?

I am fine with throttling these services, hospitals and research should be given top priority, followed by all remote workers (or general traffic if that's hard to distinguish). These massive leisure services should be at the very bottom of priority.


You're missing the subtext: a lot of Europe has crappy infrastructure. Western countries should be able to handle this bandwidth without any issues. Blaming Netflix is indicative that you have crappy IT.


Which countries specifically do you mean?


I fully disagree, no one should be given priority.


There are basic communications principles that pre-date having an Internet. One of them is that emergency traffic let alone health & welfare traffic have precedence over routine traffic on communications systems. There can be voluntary prioritization of traffic to support emergency response now or the full legal power of multiple nation-states can be brought to bear to ensure traffic supporting emergency response gets handled first.

What we have here is a critical vulnerability in that too much is dependent upon one backbone.


VOIP, VPN, video conferencing etc. traffic should be prioritized to enable working from home. It's a crisis. Economic cost of not being productive at home is worth degrading leisure activities (especially if its just 4k->480p).

That said, it's not clear that it's necessary at this point. But if it is? Clinging to net neutrality is silly.


Leisure services should actually have a high priority if you plan to keep people home for some weeks and want them to stay there.

I think traffic from research and hospitals is comparably low for that matter.


No, that's not how you run a working network. Bulk data doesn't need a higher priority in the first place, so giving it a higher priority will only do harm.


Not a higher priority of course. Just keep it going.

Speaking of which... quite a few conferencing solutions are down right now in central Europe.


> Not a higher priority of course. Just keep it going.

Sure, I agree. And if the hospital data is critical, you can already pay an ISP to get guaranteed bandwidth over a VPN between your hospitals. It will cost you dearly but if the best-effort nature of the Internet won't do it for you, you pay.

> Speaking of which... quite a few conferencing solutions are down right now in central Europe.

Isn't that because the backends of those services are overloaded, not because of congestion between networks? That happened to Microsoft Teams, I'm sure it happened to more of them.


Is this discussion missing the important distinction between QoS and bandwidth limiting?

I can certainly agree emergency communications warrant better QoS guarantees than a netflix stream; it's probably not a big deal if your stream drops more packets than usual, but it matters a lot for real-time communication.

if we're talking about throttling connections, OP of this thread has a good point. if you want to keep people cooped up in their houses indefinitely, you gotta throw them a bone of some sort. throttling their netflix stream is going to piss people off, and at the margins, people might just say fuck it and go outside.


The problem is in Europe Google/CloudFlare/Akamai/Netflix/Amazon tend to have DCs in Ireland/Netherlands. And there was obviously little investment in backbone fiber connecting across countries. And even worse for services based in USA.

Now EU politicians are blame-shifting after they blew their budget and have nothing to show.


Google and Netflix, at least, have content distributions boxes inside the ISPs (e.g. https://openconnect.netflix.com/en_gb/appliances/)


That doesn't help that much for YouTube. There's too much variety of content for a cache to reduce bandwidth usage noticeably.


While it's true they have a ton of content, I imagine a lot of content viewing follows some type of power law. The top 10,000 most-watched videos surely make up a significant portion of the total watch time.
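
A toy Zipf model shows why even a modest cache goes a long way. The catalogue size and exponent below are assumptions for illustration only, not YouTube's actual distribution:

    # If popularity is roughly Zipf-distributed, caching only the top N titles
    # still captures a large share of requests.
    CATALOGUE_SIZE = 1_000_000   # assumed catalogue size
    ZIPF_EXPONENT = 1.0          # assumed popularity exponent
    CACHE_TOP_N = 10_000

    def harmonic(n, s):
        return sum(1 / (k ** s) for k in range(1, n + 1))

    hit_rate = harmonic(CACHE_TOP_N, ZIPF_EXPONENT) / harmonic(CATALOGUE_SIZE, ZIPF_EXPONENT)
    print(f"caching the top {CACHE_TOP_N:,} titles serves ~{hit_rate:.0%} of requests")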


If you surf sufficiently exotic videos on YouTube, you know, the ones in the hundreds of views after 8 years, you can pretty much see them getting pulled out of long-term, low-priority storage. I don't get 10 second waits for the streaming to begin on very many other videos.


That turns out to not be true. You might not be watching the top 10 trending videos, but most people are.


The “YouTube algorithm” gives them a lot of control over what “most videos” are. They can nudge people to prefer cached content.


A good chunk of YouTube is cached in the GFEs.


To be more precise, the good chunk of youtube goes to the Google Global Cache, which is a set of physical hardware deployed on an ISP's premises: https://support.google.com/interconnect/answer/9058809?hl=en

The GFE is our frontend proxy that terminates TCP & SSL and does not cache content.


Virality is a major component of YouTube.


You are 100% wrong about Cloudflare. We have servers in:

Amsterdam, Netherlands; Athens, Greece; Barcelona, Spain; Belgrade, Serbia; Berlin, Germany; Brussels, Belgium; Bucharest, Romania; Budapest, Hungary; Chișinău, Moldova; Copenhagen, Denmark; Cork, Ireland; Dublin, Ireland; Düsseldorf, Germany; Edinburgh, United Kingdom; Frankfurt, Germany; Geneva, Switzerland; Gothenburg, Sweden; Hamburg, Germany; Helsinki, Finland; Istanbul, Turkey; Kyiv, Ukraine; Lisbon, Portugal; London, United Kingdom; Luxembourg City, Luxembourg; Madrid, Spain; Manchester, United Kingdom; Marseille, France; Milan, Italy; Moscow, Russia; Munich, Germany; Nicosia, Cyprus; Oslo, Norway; Paris, France; Prague, Czech Republic; Reykjavík, Iceland; Riga, Latvia; Rome, Italy; Saint Petersburg, Russia; Sofia, Bulgaria; Stockholm, Sweden; Tallinn, Estonia; Thessaloniki, Greece; Vienna, Austria; Vilnius, Lithuania; Warsaw, Poland; Zagreb, Croatia; and Zürich, Switzerland.


And for Google, we have CDN PoP in

Amsterdam, Netherlands (3); Budapest, Hungary; Dublin, Ireland; Frankfurt, Germany (3); Groningen, Netherlands; Hamburg, Germany (2); Hamina, Finland; London, England (3); Madrid, Spain (2); Marseille, France; Milan, Italy; Munich, Germany; Paris, France (2); Prague, Czech Republic; Sofia, Bulgaria; St. Ghislain, Belgium; Stockholm, Sweden (2); Warsaw, Poland; Zurich, Switzerland (2)

https://cloud.google.com/cdn/docs/locations


Apologies. I was completely wrong.

Still, Google/Amazon/etc don't have the same level of distribution as you or Akamai. And they are fine. The blame should be on international fiber.


Wait, what?!

What ever gave you the idea that there is a dearth of fiber in Europe or that there are only DCs in Ireland or the Netherlands?


Connectivity and performance issues.

https://en.wikipedia.org/wiki/Google_data_centers

https://aws.amazon.com/about-aws/global-infrastructure/

I was way off with CloudFlare. Netflix might be better by using Akamai and others.


You're also way off with Google. GGC has cache boxen in nearly all large ISPs that are willing to cooperate, and they also openly peer with anyone willing to at large IXes, not only in the metros they have DCs at.

For instance, they peer with us, a nobody (Warsaw Hackerspace, AS204880) over EPIX in Warsaw, announcing most of their prefixes to us there. In addition, they have a GGC node in EPIX.


I can confirm this as another nobody, we have under 100 customer WISP, have peering sessions with Google and Cloudflare, and Netflix is visible on the routeservers anyway.


You're still talking nonsense. Netflix has built their own CDN. This is one of their core competencies. https://openconnect.netflix.com/en_gb/


Your own link shows 5 full AWS regions – with two more on the way – in Europe and that's not even the right link for this conversation since what you really care about for last-mile connectivity is less where the servers are running and more where the CDN edge locations are:

https://aws.amazon.com/cloudfront/features/


I'm always surprised at how someone could write something so incorrect but say it with such conviction that it is true.


Yeah, I'm an idiot. Enough HN for me. Sorry.


Yea, this is simply not the case. All of these providers have servers, fiber, and networks all over the EU.


Ever heard of edge computing? I'm pretty sure these companies have invested in having capacity in ISPs' DCs all over the place in the EU.



