
Maybe if we had kept using BitTorrent for TV distribution we wouldn't have so much global traffic downloading "The Avengers" for the millionth time; you would just download it from the nearest host. (That's also how the internet worked before HTTPS: there used to be way more local caching of content that was being downloaded over and over again.)



As I understand it Netflix has hardware in ISP data centers to do exactly that. https://openconnect.netflix.com/en/


Netflix might have data centers that get closer, but nothing will ever get as far-reaching and close as P2P would. A residential building where everyone is watching the same movie would only have to download the movie over the building <> internet link once. Replicate this system at every level (local network <> building <> city <> country) and you get so much more bandwidth available.
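
Back-of-the-envelope, with invented numbers, for the idealized building-level case:

    # Idealized saving at the building level (numbers made up): every
    # apartment watches the same title, but the uplink carries it once.
    apartments = 200
    title_gb = 5                                  # one stream's worth of data

    transit_today_gb = apartments * title_gb      # 1000 GB over the uplink
    transit_p2p_gb = title_gb                     # 5 GB, then shared locally
    print(transit_today_gb / transit_p2p_gb)      # -> 200.0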

Of course there are other considerations, and we (humans) haven't worked as much on P2P as on centralized systems, so only the future can tell us where the limit is.


I think you are overestimating the capabilities, and potential "savings", of P2P here. For one, OpenConnect already does what you're suggesting at the "city" level since it's sitting in ISP DCs. Second, there are minimal gains in the last mile via P2P, if at all, considering the numerous types of devices streaming. How do you handle TVs that are already memory constrained by the streaming app? Do mobile devices constantly upload and eat up your bandwidth cap? What if you're streaming from a neighbor who turns off their computer? Do you need to refetch data from some distant host? Is that really a better experience? You also run into privacy/security concerns. How do you reconcile hosts that _cannot_ leverage P2P? Do you now need to support a "P2P" mode and legacy/vintage non-P2P mode? This doesn't sound good for the end user.


As mentioned in my previous comment, there are a lot of things that would have to be solved and deployed in order for P2P to be 100% feasible. I didn't expect to receive a list of things to be solved right now!

But you do bring up good points, as the current infrastructure (everywhere) is not set up for P2P. In most modern countries (the US aside), ISP networks are actually pretty good and cheap, and work fine for P2P. Otherwise there are other ways of distributing as well; mesh networks are one.

All the questions you are outlining are definitely solvable though, just as similar questions arose when we built our current centralized infrastructure. The problem is that P2P networks are nowhere near as well funded as centralized infrastructure, leading to fewer people working on actually solving these problems.


> What if you’re streaming from a neighbor who turns off their computer?

The default is to download 100% from a CDN somewhere, and P2P that falls back to that same default can't be a net negative beyond sub-0.1% extra communication overhead. Don't want to use your upload bandwidth? That's fine, you're simply stuck using the same CDN network that's currently overloaded.

So none of the above are actual issues; a device doesn't need to stay connected or have the full movie to be useful for P2P. If I start watching some movie while you're also watching that same movie, you can stream whatever is in your buffer to me, and that's a net gain in terms of backhaul capacity. I don't need to depend on anything from you other than what you've already sent me. I can then download from the service the bits you don't have and stream them to you.
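
To make that concrete, here's a minimal sketch of the hybrid fetch logic I mean (all names invented, not anything an actual service runs): peers are opportunistic and the CDN stays the fallback, so a peer disconnecting never makes things worse than the status quo.

    # Toy hybrid P2P/CDN chunk fetcher. Peers only serve what they have
    # buffered; the CDN always has everything and remains the default path.
    from typing import Dict, List, Optional

    class Peer:
        """Another viewer of the same title; serves only its buffered chunks."""
        def __init__(self, buffered: Dict[int, bytes]):
            self.buffered = buffered

        def fetch(self, index: int) -> Optional[bytes]:
            return self.buffered.get(index)  # None if missing or peer is gone

    class Cdn:
        """The existing CDN/OpenConnect path; has every chunk of the title."""
        def __init__(self, chunks: List[bytes]):
            self.chunks = chunks

        def fetch(self, index: int) -> bytes:
            return self.chunks[index]

    def fetch_chunk(index: int, peers: List[Peer], cdn: Cdn) -> bytes:
        """Try nearby peers' buffers first, fall back to the CDN."""
        for peer in peers:
            data = peer.fetch(index)
            if data is not None:
                return data            # served from a neighbour's buffer
        return cdn.fetch(index)        # default path, same as today

    # Usage: chunk 0 comes from a neighbour, chunk 1 falls back to the CDN.
    cdn = Cdn([b"c0", b"c1"])
    neighbour = Peer({0: b"c0"})
    assert fetch_chunk(0, [neighbour], cdn) == b"c0"
    assert fetch_chunk(1, [neighbour], cdn) == b"c1"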


You're grossly oversimplifying the complexity involved with "streaming" a video, specifically in the context of a service like Disney+ or Netflix. Additionally, how do you "find" that content near you? How do you ensure it is available? How do you ensure it is correct? Who owns it? How do you ensure copyright/DRM/licenses are respected? What do you do when it disappears? The gains are incredibly slim, if they exist at all, relative to the engineering challenges it introduces.

You also confuse "CDN somewhere" with OpenConnect box literally inside your ISP's DC. It is probably faster to get it this way than it would be to P2P it from your neighbor since, at the end of the day, that P2P traffic _must_ go through your ISP and their ISP. It will not, by definition, peer at the local hub. You are _NOT_ on a local network with your building/neighbors/etc. Your communication routes to your ISP and then out to anywhere else, even if that somewhere else is on the other side of your wall.

Even if it was possible. Even if it was slightly faster. Do you really think studio execs are going to be okay with customers hosting/serving their content off their machines? Even _IF_ this was "technically" a good idea, this is a non-starter from the business standpoint.


First, OpenConnect is just a CDN run by Netflix; they could call it the bunny protocol, it's just a name. But they don't have unlimited boxes at every ISP, and in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP's data centers.

Anyway, the CDN knows which connections are on the network because they are the ones connecting to it. Segregating based on large-scale network architecture is a solved problem; if you're confused, read up on how CDNs work. What happens inside each ISP can then be managed either via automation based on ping times etc., or via ISP-specific rules.

In terms of P2P, it's trivial to distribute 99% of the data for a movie while withholding enough that the movie can't actually be played. It's codec-specific, but that's not a problem when you're designing the service. Ensuring the correct users are part of the network is basic authentication at the CDN node; that's what keeps the list of active users.

As to data validation, the basic BitTorrent protocol handles most of what you're concerned about. Clients have long been able to stream movies with minimal buffering by simply prioritizing pieces in playback order. Improving on that baseline is possible since you're running the service rather than just accepting random connections, and you want to be able to switch resolutions on the fly, but that's really not a big deal.
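
For anyone unfamiliar, the validation part works roughly like this sketch of BitTorrent-style piece hashing (not the actual wire protocol): the hash list comes from the trusted side (the metainfo, or here the CDN node), so bytes from an untrusted peer get discarded and re-requested if they don't match.

    # Sketch of BitTorrent-style piece validation. Hashes come from a trusted
    # source; piece bytes can come from any untrusted peer. BitTorrent v1
    # uses SHA-1 per piece.
    import hashlib
    from typing import List

    def piece_ok(piece: bytes, index: int, trusted_hashes: List[bytes]) -> bool:
        """Accept a piece only if it matches the trusted hash for its index."""
        return hashlib.sha1(piece).digest() == trusted_hashes[index]

    # Usage: drop and re-request anything that fails the check.
    hashes = [hashlib.sha1(b"piece-0").digest()]
    assert piece_ok(b"piece-0", 0, hashes)
    assert not piece_ok(b"tampered", 0, hashes)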

PS: And yes, some Netflix content deals would create issues. But that's irrelevant to their own content, and otherwise it's just another point when negotiating licensing, much like allowing content on a CDN in the first place.


> First, OpenConnect is just a CDN run by Netflix, they could call it bunny protocol it’s just a name. But, they don’t have unlimited boxes at every ISP, in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP’s data centers.

They have boxes in a lot of ISPs.

If “P2P” requires you to transit your last mile to your ISPs POP and then back down to another user, and Netflix requires you to transit to the ISP POP and back out again... has P2P gained you much? In most cases downstream throughput is much higher as well, making the in-ISP cache box far better for most.

P2P has its place, but it's hard to argue it's better for video distribution.


Many ISPs have significantly more, and largely unused, bandwidth between users than across the overall network. This is often done for simple redundancy, as you want a minimum of two uplinks if not more. However, it's much simpler to run a wire between two tiny grey buildings in a neighborhood than to run a much longer wire to another section of your core network. Ideally that's just a backup for your backup, but properly configured routers will still use it for local traffic.

Another common case: if you want X bandwidth from A to B, you round up to hardware rated for somewhat more than X. This can result in network topologies that seem very odd on the surface.

PS: I think you're misreading what I am saying; this is not pure P2P, it's very much a hybrid model. Further, Netflix was seriously considering it for a while in 2014, but stuck with a simpler model.


I don't understand your first paragraph. If I torrent a movie, how does the torrent client know any of the things you mentioned either? The answer is that the protocol handles it, along with some tags describing which movie the file represents, the way Radarr and Sonarr do. This would be the same, just on a locked-down streaming client.


> Netflix might have data centers that gets closer, but nothing will ever get as far reaching and close as P2P would. A residential building who are all watching the same movie would only have to download the movie from building <> internet once.

This assumes that residential internet connections act like a larger version of your home LAN.

For all the ISPs that I know about in Australia, all of those connections are terminated within the ISPs network in a central location. This is done using PPPoE/IPoA which is handled generally at the state level.

Even for HFC networks where the local node is a shared medium, you can't get client-client communications.

Perhaps the situation is different for small WISPs, but I can't see it being too different for most people. Happy to learn more if you have examples.


With the NBN, most providers don’t use PPPoE anymore but instead just plain IP.

Every NBN connection goes back to one of 121 Points of Interconnect (PoIs) (which is way too many by the way but was a dumb and misguided decision imposed by the ACCC). At the PoI it gets to the provider through a Network to Network interface (NNI). The provider’s switch that plugs in here is the earliest that a P2P connection could loop back (but only to somebody in the same region, that is, connected to the same PoI), and is also the closest that a Netflix OpenConnect device could be (although it might not be cost effective to rent the rack space there vs. putting it in a data centre in a capital city).

A lot depends on the provider, however, on how they have structured their network. Apparently some do tunnel all their connections back from the PoI to a central location even though they don’t have to.

But for most, I expect the provider's backhaul from the PoIs in a state would all go back to the capital city of that state, which would then be interconnected with the neighbouring states' capital cities (this is what the ISP I use, Aussie Broadband, does at least).


1) Can a cable ISP even route traffic between residential customers without schlepping out to a POP?

2) Where does the upload bandwidth limitation live? Because if it's in the cable modem itself or the nearest upstream connection, this doesn't help much. There is probably much more capacity between the OpenConnect box and the home than between the home and its next-door neighbor.


> 1) Can a cable ISP even route traffic between residential customers without schlepping out to a POP?

Guess it depends on the ISP's implementation, but in my experience, yes, some ISPs let you route traffic between residential customers, granted you've set up your router correctly.

> 2) Where does the upload bandwidth limitation live?

In my experience, the limitation lived further away than the residential building, as we hit our own equipment's limits before we hit any other limits.


Windows 10 and Xbox do P2P aggressively for games and updates.


Large residential buildings (condos and apartments) need a small edge data centre in them.


This might be a more realistic compromise.

Maybe if the industry could agree on an OpenConnect-like standard with some traffic shaping, then Netflix, Hulu, Steam, Xbox, PlayStation, EA, Disney+, etc. could all share the same edge CDN hardware. Then residential buildings, cities, individuals, whoever, could run their own caching hardware that caches based on downstream demand.
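
Very roughly, the "cache based on downstream demand" part could start as an admission rule as simple as this sketch (names and thresholds made up):

    # Toy admission policy for a shared edge cache: only keep objects that
    # enough downstream clients have requested in the current window.
    from collections import Counter

    ADMIT_THRESHOLD = 3          # invented: cache after the 3rd local request
    request_counts = Counter()   # object_id -> requests this window

    def should_cache(object_id: str) -> bool:
        request_counts[object_id] += 1
        return request_counts[object_id] >= ADMIT_THRESHOLD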


That's quite something when a streaming service can go to the ISP with their hardware...


This is common for any company that operates a CDN. It just so happens Netflix is a big enough content deliverer that they run their own network rather than pay to use someone else’s.


I was involved in starting a streaming service in SE Asia and the costs of 3rd-party edge caching (Akamai etc.) were significant. So much so that it made the business model non-viable without workarounds. Eventually we partnered with ISPs and piggybacked on their edge caches.


Pretty common.


You could think about it the opposite way: it's good that Netflix was using so much bandwidth because it encouraged network upgrades. Now, when we need to use this capacity for something else, Netflix can flip a switch and it's available. Excess usage for luxuries like 4K video in good times is a way of preparing for bad times.

Ideally we would treat hospital capacity the same way. In good times, everyone gets a private room.


This is also a reason to eat meat from animals fed on food that people could eat. In the bad times, slaughter the animals and eat both the meat and the feed.

It's changed my perspective on efficiency. There are some forms of inefficiency that can provide a buffer if something goes wrong.


That last line used to be highly internalized by Americans who lived through the Depression, WW1 and WW2, etc.


Yes.

Though Americans were probably the least affected by WW1 and WW2 of all the war participants. Go ask some people in Russia for a better example.


It's not necessarily inefficiency.

Eg flush toilets are a great time saver. But they use a lot of water.

If we had a drought tomorrow, we could switch to compost toilets. They are more hassle to use and maintain, but use basically no water.

Which one is more 'efficient' depends on the relative scarcity of water vs time.


The difference being that networks and Netflix servers don't cost a lot of money or take up space when they're idling, and/or the capacity can be sold off and reclaimed rapidly when needed, whereas hospital capacity does cost money while idle.

I mean I'm all for having excess hospital capacity (and excess capacity for a lot of things), but at the same time who is going to pay for it?

This was the problem before cloud computing; companies had to set up servers and the like for the worst-case peak workload, meaning that usually they'd only be operating at 10-20% of capacity.


Yeah, I think it would have to be used as a way to get people to pay extra. Like, some private rooms are available for an extra charge, but when the crisis hits, they aren't available anymore?


At least in the US, Netflix colocates a box with the most popular content on premises with the ISP, so the last mile could be the bottleneck here.


Yes, it's the same here (Dominican Republic).


It sounds like it's time for some popcorn.


Excuse me, but if the problem is overloading of local networks, popcorning full-quality video instead of streaming it downsampled might undo their efforts at cutting traffic, which might be there for a good reason.


Would you be so kind as to pass some to me? Popcorn, that is.


I would never risk suggesting people use the legal app popcorntime to pirate copyrighted material.


What is the advantage over good ol' fashion torrenting?

I guess I'm just an old timer....


Torrenting is on the way out, from what I (as a casual torrenter) can tell. There is just a lot less material available. Maybe it's better on private trackers, but I haven't been a member of one in 15 years, nor would I know how to access/be invited to one even if I wanted to.


It's on the way out for casual use, but it's still the primary method for enthusiast use. It's the best and sometimes only place to get FLAC/Blu-ray quality, and the only reliable place for console games.


Newsgroups and private indexers are generally the best option these days.


I stopped using torrents maybe 4 years ago, because I discovered Usenet will max out my entire connection not just for the new stuff, but also for 10-year-old movies/series. I tested it on a server in a local datacenter to try remote Plex streaming; it uses the entire gigabit connection from start to finish.

It isn't completely free like torrents though. You'll need to subscribe to a Usenet provider for about $6/month and a newsgroup indexer for $10/year. The roughly $80/year is worth it though. I've got it set up so my series download automatically (Sonarr), and whenever I want to download a movie I usually just look it up myself. I've had that automated in the past (Radarr), but it wasn't really necessary; my download client already renames movies and places them in the right directory anyway.


It's more automatic/streamlined. The app looks like an ordinary streaming website and all the torrent stuff is in the background.


seems to me like there's some concern over the latest edition of popcorntime in its subreddit, so I'd be wary of it for now.


This is called a CDN, and they are ubiquitous, especially for video streaming services.


The greatest advantage of torrents is also its greatest disadvantage.

It’s distributed by nature but it relies on a network of computers (peers) to always be available. And given the asymmetric speed of most consumer grade internet connections, most peers tend to be greedy and disconnect from the mesh after the download finishes.

This is where private torrent sites stepped in and encouraged sharing ratios. Of course, most of the “peers” ended up being dedicated servers with symmetric gigabit lines rented from some host in a foreign country.

So in the end, the patterns shifted back to data centers and “dumb terminals”.


You missed Netflix's compression. Netflix delivers content with way less data at similar quality.


Except residential upload speeds are usually only 1/10th of the corresponding download speed.


Doesn't really matter though; it still means (slightly oversimplified) that a peer can connect to 10 hosts and not have to download from the origin server at all. Also, this simply isn't true in a lot of European countries where upload and download speeds are equal, but that's a different thing.


The math still does not work out.

On a 1:10 asymmetric connection, users would need to remain available for seeding 10 times longer than it takes to download (and presumably watch) the film to have a net positive impact on the system.

That's not exactly realistic, considering that a lot of traffic goes to mobile devices where this would outright kill battery life.
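
Back-of-the-envelope (numbers invented but typical):

    # On a 1:10 line, giving back as much as you took means seeding ~10x
    # longer than the download itself.
    movie_gib = 4                    # data for one viewing, illustrative
    down_mbps, up_mbps = 100, 10     # 1:10 asymmetric residential line

    movie_mbit = movie_gib * 8 * 1024
    hours_to_download = movie_mbit / down_mbps / 3600   # ~0.09 h
    hours_to_upload_once = movie_mbit / up_mbps / 3600  # ~0.91 h
    print(hours_to_upload_once / hours_to_download)     # -> 10.0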


And smart TVs, which lack the computational power/storage/memory to serve traffic.


Can you explain what you mean by that last statement? It does not sound true to me. As far as I can tell (in my country) almost all consumer offers are asymmetric: ADSL, obviously, but also fiber. Symmetric offers are very much for professional access, and typically more expensive.


In Sweden, fiber (which is quite common) is pretty much always symmetric. 100/100 is common, many people have 1000/1000 and there is residential symmetric 10 Gb/s to be had. Asymmetric ADSL and Cable is definitely not uncommon though.


If you’re downloading ten segments from ten different uploaders it can all balance out I think.

Bittorrent can be damn quick. Haven’t used it in a while but it was certainly fast on well seeded items.


BitTorrent would eat all of the available bandwidth if the content is popular enough that there are always new leechers available.

With enough bandwidth, BitTorrent would be less efficient than a single TCP stream between the source and destination because of the protocol overhead.


Yeah, but my residential connection is 1000/100.


Maybe 10 years ago? I don't know anyone with a connection that badly asymmetrical.


My connection is 1000/50 – theoretically.

On a good day, that works out to about 100/2, thanks to my provider vastly overselling their shitty overcongested network as "fiber to the home". (It's just cable.)


So you've got a "gigabit connection" (in marketing terms), yet worst case your upload speed is like that of ADSL. That's unacceptable.


That's not the bottleneck. The bottleneck is the IP interconnect capacity between the edge network and its peers. If you only have a 4x100 Gbit/s IP sink to Level3 in SEA and your other sink is 100G to NTT, no amount of traffic engineering in your edge network is going to give you more than 500 Gbit/s of exit, and that exit is only available if NTT has the entire 100 Gbit/s uncongested and Level3 has 400 Gbit/s uncongested.

In eyeball networks the bottleneck is not upload. It is traffic being brought into the edge network from upstreams (Netflix/photos/etc.).


This is an accurate explanation. Eyeball networks are inbound-heavy and have plenty of outbound capacity (it's also why they don't care about outbound DDoS so much...). However, it's important to remember that they can easily congest their internal network infra on the inbound side.


That's the standard ratio for Spectrum consumer internet. The ratio actually gets worse as you increase the download speed; they cap out at 35 Mbps upload even with 940 Mbps download.


My Comcast gigabit internet is capped at 35Mbit upload as well.


It's pretty common if your last mile technology is old, e.g. cable internet coax, or copper phone lines for ADSL.

Most Comcast plans in my area come with 5Mbit/s upload, even if the download speed is 15x to 25x that.


Most FTTH connections in Italy are 1000/100 or 1000/300 Mbit/s. I guess most people wouldn't even notice the difference in upload between 100 and 1000 Mbit/s.


I have the most expensive plan available in my area, and my speeds are 24 Mbps down / 0.768 Mbps up.


My parents connection in Australia is 100/3.


My connection in Germany is 50/5.


Is this cable? Mine is fiber and is 50/50.


It's ADSL. Cable is allegedly a bit faster in my area.


230 Mbps down / 5 Mbps up. Xfinity near Denver.


1000/35 for me. Yay DOCSIS.


why is docsis so slow/asymmetric?


DOCSIS itself is fine now, but on a cable network, supporting legacy systems takes up most of the upload spectrum for any cable ISP.


You can configure cable to use more of its channels for upstream. They just don't because that would hurt downstream and force them to run more cables to make up the difference.

There's absolutely no reason you can't have symmetric cable if cable companies wanted to pay for the infrastructure.


It's not a matter of "running more cables", and not just a matter of cost. They have to complete their IPTV migration, which is a massive project, and has been in progress for years now. They also have to upgrade hardware at every single node, which is a multi-year project. With that also comes brand new noise mitigation problems.

Yes, you can configure it to use more channels for upstream, but only if you cut off television to all of those paying customers, likely losing a lot of subscribers, breaking carrier agreement contracts, losing the advertising dollars, and going out of business before you even get there. You also have to swap out all of their cable boxes, and leave whoever purchased their own box hanging.

As far as I know, no ISP in North America is currently using any DOCSIS 3.1 features for upload, and they're barely taking advantage of the features for download. Moving to OFDMA is the big barrier to opening upload speeds. The protocol supports it on paper, but turning paper into massive co-existing television and ISP infrastructure is not trivial.


> Maybe if we had kept using bittorrent for TV distribution we wouldn't have so much global traffic downloading

I don't think BitTorrent can compare to the efficiency of traditional TV broadcast, whether cable or over-the-air. Instead of a packet going to a single destination, it's sent once and everyone just listens in.


Wasn’t Netflix originally p2p in some manner? I can’t remember the exact details (and I may be wrong), but I could have sworn their original architecture was distributed in one way or the other.


You might be thinking about Spotify, which had a torrent-like technology originally. Nowadays I think it's fairly limited.


Thanks that might be right!


Also Voddler I guess?


Netflix was never p2p, but BBC iPlayer was when it first launched.


Edge caching should provide similar benefits no?



