Of course there are other considerations, and we (humans) haven't worked on P2P as much as on centralized systems, so only the future can tell us where the limit is.
But you do bring up good points, as the current infrastructure (everywhere) is not set up for P2P. In most modern countries (sans the US), ISP networks are actually pretty good and cheap, and work fine for P2P. Otherwise there are other ways of distributing as well; mesh networks are one.
All these questions you are outlining are definitely solvable though, just like when these questions arose as we built our current centralized infrastructure. The problem is that P2P networks are not nearly as well funded as centralized infrastructure, leading to fewer people working on actually solving these problems.
The default is to download 100% from a CDN somewhere, and a P2P system that defaults to the same behavior can’t be a net negative outside of sub-0.1% extra communication overhead. Don’t want to use your upload bandwidth? That’s fine; you’re simply stuck using the same CDN network that’s currently overloaded.
So none of the above are actual issues; a device doesn’t need to stay connected or have the full movie to be useful for P2P. If I start watching some movie while you’re also watching that same movie, you can stream whatever is in your buffer to me, and that’s a net gain in terms of back haul capacity. I don’t need to depend on anything from you other than what you already sent to me. I can then download from the service bits you don’t have and stream them to you.
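A minimal sketch of that hybrid model: a viewer asks peers for each piece first and falls back to the CDN only for pieces no peer has buffered yet. All names here are hypothetical, for illustration only.

```python
def fetch_plan(wanted, peer_buffers):
    """Return (piece, source) pairs: 'peer' if any peer buffered it, else 'cdn'."""
    plan = []
    for piece in wanted:
        if any(piece in buf for buf in peer_buffers):
            plan.append((piece, "peer"))
        else:
            plan.append((piece, "cdn"))
    return plan

# You joined while I'm ahead of you: my buffer holds pieces 0-9,
# you need 0-14, so only 10-14 must come off the CDN back haul.
plan = fetch_plan(range(15), [set(range(10))])
cdn_pieces = [p for p, src in plan if src == "cdn"]
```

The point of the sketch is just that every piece served from a peer buffer is one fewer piece crossing the back haul.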
You also confuse "CDN somewhere" with OpenConnect box literally inside your ISP's DC. It is probably faster to get it this way than it would be to P2P it from your neighbor since, at the end of the day, that P2P traffic _must_ go through your ISP and their ISP. It will not, by definition, peer at the local hub. You are _NOT_ on a local network with your building/neighbors/etc. Your communication routes to your ISP and then out to anywhere else, even if that somewhere else is on the other side of your wall.
Even if it was possible. Even if it was slightly faster. Do you really think studio execs are going to be okay with customers hosting/serving their content off their machines? Even _IF_ this was "technically" a good idea, this is a non-starter from the business standpoint.
Anyway, the CDN knows the connections that are on the network because they are what are connecting to it. Segregating based on large-scale network architecture is a solved problem; if you’re confused, read up on how CDNs work. What happens inside each ISP can then be managed either via automation based on ping times etc., or via ISP-specific rules.
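As a rough illustration of that grouping step (all names and thresholds here are made up): the CDN node already knows which connections belong to which ISP, and within an ISP peers can be bucketed by measured round-trip time.

```python
def group_peers(peers):
    """peers: list of (peer_id, isp, rtt_ms). Returns {(isp, bucket): [ids]}."""
    groups = {}
    for peer_id, isp, rtt_ms in peers:
        # Arbitrary cutoff; a real service would tune this per network.
        bucket = "near" if rtt_ms < 20 else "far"
        groups.setdefault((isp, bucket), []).append(peer_id)
    return groups

groups = group_peers([("a", "comcast", 5), ("b", "comcast", 90), ("c", "att", 12)])
```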
In terms of P2P, it’s trivial to distribute 99% of the data for a movie while withholding enough that the movie can’t actually be played. It’s codec specific, but that’s not a problem when you’re designing the service. Ensuring the correct users are part of the network is just the basic authentication at the CDN node; that’s what keeps the list of active users.
As to data validation, the basic BitTorrent protocol handles most of what you’re concerned about. Clients have long been able to stream movies with minimal buffering by simply prioritizing traffic. Improving on that baseline is possible, since you’re running the service rather than just accepting random connections, and you want to be able to switch resolutions on the fly, but that’s really not a big deal.
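The "prioritizing traffic" idea can be sketched like this: instead of BitTorrent's usual rarest-first order, a streaming client requests the missing pieces closest to the playback head first. This is an illustration, not a real client; the function name and window size are invented.

```python
def next_requests(have, playhead, total_pieces, window=4):
    """Missing pieces in playback order: in-window pieces first, then the rest."""
    missing = [p for p in range(playhead, total_pieces) if p not in have]
    urgent = [p for p in missing if p < playhead + window]   # needed soon
    later = [p for p in missing if p >= playhead + window]   # prefetch
    return urgent + later

# Playing piece 3 of 10, already holding pieces 3, 4 and 7:
order = next_requests({3, 4, 7}, playhead=3, total_pieces=10)
```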
PS: And yes, some Netflix content deals would create issues. But that’s irrelevant to their own content, and it’s just another point when negotiating licensing, much like allowing content on a CDN in the first place.
They have boxes in a lot of ISPs.
If “P2P” requires you to transit your last mile to your ISPs POP and then back down to another user, and Netflix requires you to transit to the ISP POP and back out again... has P2P gained you much? In most cases downstream throughput is much higher as well, making the in-ISP cache box far better for most.
P2P has its place but it’s hard to argue its better for video distribution.
Another common case is that if you want X bandwidth from A to B, you round up to hardware rated for some number more than X. This can result in network topologies that seem very odd on the surface.
PS: I think you’re misreading what I am saying; this is not pure P2P, it’s very much a hybrid model. Further, Netflix was seriously considering it for a while in 2014, but stuck with a simpler model.
This assumes that residential internet connections act like a larger version of your home LAN.
For all the ISPs that I know about in Australia, all of those connections are terminated within the ISP's network in a central location. This is done using PPPoE/IPoA, which is generally handled at the state level.
Even for HFC networks where the local node is a shared medium, you can't get client-client communications.
Perhaps the situation is different for small WISPs, but I can't see it being too different for most people. Happy to learn more if you have examples.
Every NBN connection goes back to one of 121 Points of Interconnect (PoIs) (which is way too many by the way but was a dumb and misguided decision imposed by the ACCC). At the PoI it gets to the provider through a Network to Network interface (NNI). The provider’s switch that plugs in here is the earliest that a P2P connection could loop back (but only to somebody in the same region, that is, connected to the same PoI), and is also the closest that a Netflix OpenConnect device could be (although it might not be cost effective to rent the rack space there vs. putting it in a data centre in a capital city).
A lot depends on the provider, however, on how they have structured their network. Apparently some do tunnel all their connections back from the PoI to a central location even though they don’t have to.
But for most, I expect the provider’s backhaul to the PoIs in a state would all go back to the capital city of that state, which would then be interconnected to the neighbouring states’ capital cities (this is what the ISP I use, Aussie Broadband, does at least).
2) Where does the upload bandwidth limitation live? Because if it's in the cable modem itself or the nearest upstream connection, this doesn't help much. There is probably much more capacity between the OpenConnect and the home than between the home and its next-door neighbor.
Guess it depends on the ISP's implementation, but in my experience, yes, some ISPs let you route traffic between residential customers, granted you've set up your router correctly.
> 2) Where does the upload bandwidth limitation live?
In my experience, the limitation lived further away than the residential building, as we hit our own equipment's limits before we hit any other limits.
Maybe if the industry can agree on an OpenConnect standard with some traffic shaping. Then Netflix, Hulu, Steam, Xbox, PlayStation, EA, Disney+, etc could all share the same edge CDN hardware. Then residential buildings, cities, individuals, whoever, could run their own caching hardware that would cache based on downstream demand.
Ideally we would treat hospital capacity the same way. In good times, everyone gets a private room.
It's changed my perspective on efficiency. There are some forms of inefficiency that can provide a buffer if something goes wrong.
Though Americans were probably least affected by WW1 and WW2 of most war participants. Go ask some people in Russia for a better example.
Eg flush toilets are a great time saver. But they use a lot of water.
If we had a drought tomorrow, we could switch to compost toilets. They are more hassle to use and maintain, but use basically no water.
Which one is more 'efficient' depends on the relative scarcity of water vs time.
I mean I'm all for having excess hospital capacity (and excess capacity for a lot of things), but at the same time who is going to pay for it?
This has been the problem before cloud computing; companies had to set up servers and the like for the worst case peak capacity workload, meaning that usually they'd only be operating at 10-20% of capacity.
I guess I'm just an old timer....
It isn't completely free like torrents, though. You'll need to subscribe to a usenet provider for like $6/month and a newsgroup indexer for $10/year. The $80/year is worth it though. I've got it set up so my series download automatically (Sonarr), and whenever I want to download a movie I usually just look it up myself. I've had that automated in the past too (Radarr), but it wasn't really necessary. My download client already renames movies and places them in the right directory anyway.
It’s distributed by nature but it relies on a network of computers (peers) to always be available. And given the asymmetric speed of most consumer grade internet connections, most peers tend to be greedy and disconnect from the mesh after the download finishes.
This is where private torrent sites stepped in and encouraged sharing ratios. Of course, most of the “peers” ended up being dedicated servers with symmetric gigabit lines rented from some host in a foreign country.
So in the end, the pattern shifted back to data centers and “dumb terminals”.
On a 1:10 asymmetric connection, users would need to remain available for seeding 10 times longer than it takes to download (and presumably watch) the film to have a net positive impact on the system.
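The arithmetic behind that claim: to give back one full copy of the film, a peer must seed for size/upload_rate, which on a 1:10 link is ten times the download time. The numbers below are just an example.

```python
size_gb = 4.0                              # example film size
down_mbps, up_mbps = 100.0, 10.0           # a typical 1:10 asymmetric plan

# 1 GB = 8000 megabits (decimal); divide by rate for seconds, then by 3600.
download_hours = size_gb * 8000 / down_mbps / 3600
seed_hours = size_gb * 8000 / up_mbps / 3600
ratio = seed_hours / download_hours        # how much longer seeding takes
```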
That's not exactly realistic, considering that a lot of traffic goes to mobile devices where this would outright kill battery life.
Bittorrent can be damn quick. Haven’t used it in a while but it was certainly fast on well seeded items.
With enough bandwidth, BitTorrent would be less efficient than a single TCP stream between the source and destination because of the protocol overhead.
On a good day, that works out to about 100/2, thanks to my provider vastly overselling their shitty overcongested network as "fiber to the home". (It's just cable.)
In eyeball networks the bottleneck is not upload. It is traffic being brought into the edge network from upstreams (netflix/photos/etc)
Most Comcast plans in my area come with 5Mbit/s upload, even if the download speed is 15x to 25x that.
There's absolutely no reason you can't have symmetric cable if cable companies wanted to pay for the infrastructure.
Yes, you can configure it to use more channels for upstream, but only if you cut off television to all of those paying customers, likely losing a lot of subscribers, breaking carrier agreement contracts, losing the advertising dollars, and going out of business before you even get there. You also have to swap out all of their cable boxes and leave whoever purchased their own box hanging.
As far as I know, no ISP in North America is currently using any DOCSIS 3.1 features for upload, and they're barely taking advantage of the features for download. Moving to OFDMA is the big barrier to opening upload speeds. The protocol supports it on paper, but turning paper into massive co-existing television and ISP infrastructure is not trivial.
I don't think BitTorrent can compare to the efficiency of traditional TV broadcast, whether cable or over-the-air. Instead of a packet being sent to a single destination, it's sent once, and everyone just listens in.
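Back-of-the-envelope numbers make the gap obvious: unicast sends one copy of the stream per viewer, broadcast sends one copy regardless of audience size. The figures below are illustrative only.

```python
stream_mbps = 5.0                          # one HD stream
viewers = 1_000_000

unicast_mbps = stream_mbps * viewers       # one copy per destination
broadcast_mbps = stream_mbps               # everyone listens to the same signal
savings_factor = unicast_mbps / broadcast_mbps
```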