Most importantly, the audience should expect and accept a delay.
YouTube goes to great lengths to make videos play instantly, including running huge local caches near large traffic exchange points. The audience is conditioned to see a clip start in under a second, faster still for popular clips.
Anything that runs off distributed (not replicated) storage, let alone peer-to-peer links, will inevitably be slower to start. It may take 10 seconds, or 30 seconds, for a clip to begin playing.
This alone can kill such solutions for the mass audience, hungry for instant gratification.
I find my peer-to-peer video streams higher quality and more reliable than my paid streaming services. Seriously, they've gotten quite good, and a unique feature is sensible buffering. The peer-to-peer stream will buffer through the whole video so you can seed it. The paid streaming service buffers only the next 30 seconds or so to save on their bandwidth, and if you regularly run past that buffer you get an awful experience. With the peer-to-peer option you might buffer initially for 30-45 seconds, but then never buffer again nor drop quality. There is an advantage to frontloading more buffering time.
> The peer-to-peer stream will buffer through the whole video so you can seed it. The paid streaming service buffers only the next 30 seconds or so to save on their bandwidth
I don't think you're seeing how these systems actually work in practice, or why they operate this way. They don't quite work as you describe, and they work the way they do for the benefit of the user's playback experience more than anything else.
Most adaptive video implementations use HLS/DASH and serve up each video in 2-5 second chunks. (Usually closer to 2.) This is by design, and a very good thing for the vast majority of users and use-cases. You aren't being mindlessly spoonfed chunks of the video you're trying to watch -- your playback experience is also being heavily measured, and the next chunk you receive is determined by this. If your bandwidth suddenly drops, or you've been watching a higher-quality version of the video than you can support, within a couple of seconds you'll be served a slightly lower-resolution version. If it goes back up again, same thing, it'll be corrected very quickly.
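To make that concrete, here's a rough sketch of the control loop I'm describing. This is not any particular player's code; the rendition shape, the function name, and the 70% safety margin are all made up for illustration:

    // Per-chunk adaptive bitrate selection, in the rough shape described above:
    // measure how fast the last segment downloaded, then pick the rendition
    // for the next segment based on that measurement.

    interface Rendition {
      height: number;      // e.g. 360, 720, 1080
      bitrateBps: number;  // average bitrate of this variant
      url: string;         // variant playlist / segment base URL (hypothetical)
    }

    function pickNextRendition(
      renditions: Rendition[],        // sorted ascending by bitrate
      lastChunkBytes: number,
      lastChunkDownloadMs: number,
      safetyFactor = 0.7              // only budget ~70% of measured throughput
    ): Rendition {
      const throughputBps = (lastChunkBytes * 8) / (lastChunkDownloadMs / 1000);
      const budget = throughputBps * safetyFactor;
      // Highest rendition whose bitrate fits the budget, else the lowest one.
      let chosen = renditions[0];
      for (const r of renditions) {
        if (r.bitrateBps <= budget) chosen = r;
      }
      return chosen;
    }

Because a new decision is made every couple of seconds, one bad estimate only costs you a chunk or two before the stream corrects itself.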
If you're watching a video on your iPhone and walk from your fast wifi, to the 3G outside, to your garage with almost no service, and back into your house again within a single minute, the video will have adapted to each of these situations quickly and fairly seamlessly.
This also solves another problem -- letting the user have near-instant playback at relatively high quality without having to worry much about whether your one-second bandwidth test was as accurate as you'd hoped. The scenario you describe, where buffering for longer at the beginning improves performance overall, isn't a big concern here. If you serve a version a bit higher quality than the client can sustain, it'll be kicked down in a couple of seconds. (By the way, the 2 seconds preceding every Netflix stream, when you see their logo and hear the tone that accompanies it, that's when Netflix tries to very quickly measure and determine which quality level to start you out with.)
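The startup measurement works on the same principle. This sketch is purely illustrative (it is not how Netflix actually does it, and probeUrl is a made-up parameter); it reuses the Rendition type and pickNextRendition from the sketch above:

    // Startup probe: fetch one small, low-bitrate object, time it, and use
    // that single measurement to choose the first rendition. The steady-state
    // loop above corrects the choice within a chunk or two if the guess is off.

    async function probeStartupRendition(
      probeUrl: string,               // a small segment or test object (hypothetical)
      renditions: Rendition[]         // same shape as in the previous sketch
    ): Promise<Rendition> {
      const start = performance.now();
      const resp = await fetch(probeUrl);
      const body = await resp.arrayBuffer();
      const elapsedMs = performance.now() - start;
      return pickNextRendition(renditions, body.byteLength, elapsedMs);
    }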
If you have blazing-fast, highly stable internet, that's great! But that's not most users. Even in this case, most services should still be giving you far more than 30 seconds' worth of chunks, provided the system determines this is safe/stable for your connection. If you're seeing a shorter buffer in web browsers, it's likely the cache readahead limits in Chrome/Firefox. (You can modify them!)
I can see how one might think having one big video file buffered would result in lower performance for peers. But in the ideal scenario, peers aren't being served byte ranges of a single big video file that you have buffered locally with your fast internet -- they're being served the next chunk(s) that make sense based on the client's current playback experience.
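That difference is easy to show. Torrent-style delivery requests pieces roughly rarest-first across the whole file; streaming p2p requests the next few segments past the playhead, at whatever rendition the adaptive logic just picked. Again, a sketch with invented names, not any real client's scheduler:

    // Request the next few segments after the playhead, in playback order,
    // at the rendition chosen by pickNextRendition() above.

    interface ChunkRequest {
      index: number;        // segment index in the playlist
      rendition: Rendition;
    }

    function scheduleChunks(
      playheadIndex: number,   // segment currently being played
      bufferedUpTo: number,    // highest contiguous segment already downloaded
      rendition: Rendition,    // from pickNextRendition()
      lookahead = 5            // how many segments ahead to request
    ): ChunkRequest[] {
      const requests: ChunkRequest[] = [];
      const firstNeeded = Math.max(playheadIndex, bufferedUpTo + 1);
      for (let i = firstNeeded; i < firstNeeded + lookahead; i++) {
        requests.push({ index: i, rendition });
      }
      return requests;
    }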
I will admit that shoehorning adaptive video standards into p2p has been a more difficult challenge than letting WebTorrent handle the full file the way torrents have worked for years, but it's getting there, and it's the proper way to go.