
Avoid FIFA World Cup CDN Issues - shacharz
https://blog.peer5.com/avoid-cdn-issues-at-peak/
======
isostatic
What amazes me is the lack of concern on latency for web streaming.

We use the internet (and IP in general) to stream video. At high bitrates
(200 Mbit+) we aim for sub-100ms end to end; for compressed services we're
happy with 500ms, maybe up to a second if it's something like Sydney to London
over the internet.

I was in a control room a couple of weeks ago watching some football. There
were two displays, one end was the feed from the stadium, one was the feed
from the web streaming service.

There were cheers and then groans from the live end of the room. Nearly a
minute later, on the web feed, someone started running up the field to score.
Of course I knew at that point that it wouldn't be a goal, as not only did the
people watching the live stream tell me, but Twitter was abuzz.

1 minute end to end delivery latency is shocking for this type of program.
Heck 10 seconds is bad enough.

~~~
_trampeltier
For over 20 years now, the German TV channel "RTL" has been about 15 seconds
behind all other TV channels on live events like Formula 1. But I have no
idea why that is.

~~~
mv4
There's no technical reason for it to be that high. Perhaps I should reach out
to them.

------
isostatic
The BBC is doing UHD, with HLG HDR, for the World Cup, with the top stream at
36 Mbit/second [0]

There are a limited number of "spaces" available [1] -- I think it's up to
100 Gbit of output.
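A quick back-of-the-envelope check of that ceiling, taking the 36 Mbit/s and 100 Gbit figures above at face value (a sketch, not the BBC's actual provisioning):

```python
# Rough capacity estimate: how many concurrent top-bitrate UHD streams
# fit in a 100 Gbit/s output ceiling? (Figures from the comment above.)
output_capacity_bps = 100e9   # ~100 Gbit/s total output
top_stream_bps = 36e6         # 36 Mbit/s top UHD stream

max_concurrent_viewers = int(output_capacity_bps / top_stream_bps)
print(max_concurrent_viewers)  # roughly 2777 simultaneous top-quality viewers
```

That's a tiny fraction of a World Cup audience, which is presumably why the "spaces" are limited.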

Unlike with the FA Cup (where the BBC did a UHD trial), the World Cup will
have a lot of games during the week, when people will be watching from the
office (although probably not in UHD). This will mean far higher loads on the
distribution.

Fortunately England's only 3 games are either at the weekend or at 7PM. The
second half of the tournament will really stress the UK internet though, with
both World Cup and Wimbledon on during the working week.

[0] https://www.bbc.co.uk/rd/blog/2018-05-uhd_hdr_world_cup_2018
[1] http://www.bbc.co.uk/mediacentre/latestnews/2018/uhd-vr-world-cup

~~~
helipad
"Only" – A subtle dig ;)

------
lossolo
Some time ago I started building P2P live adaptive (DASH) video streaming
using WebRTC with a distributed rate-control mechanism, and I'm planning to
open source it. Basically, using it you could build your own P2P globally
distributed live adaptive video streaming CDN (or run it on a single server).
Adding a new supporting server (for additional bandwidth) is as easy as
spawning a VM and launching a binary. It has a distributed signaling server
with geo/region-based peer distribution, full real-time statistics on the
health of the whole network, and analytics, and the network automatically
adapts to bandwidth shortage (if for some reason it can't sustain itself) by
switching to lower-bandwidth versions of the stream. It's very easy to use:
you set up one config file, launch two binaries, and add one JS file to your
site's source, and you're ready to go.
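The down-switch behavior described here (dropping to a lower-bandwidth rendition when the network can't sustain itself) could be sketched roughly like this; every name and number below is hypothetical, not the actual project's code:

```python
# Minimal sketch of a bandwidth down-switch: if measured throughput can't
# sustain the current rendition (plus some headroom), drop to the highest
# rendition that still fits. Names and numbers are illustrative only.

RENDITIONS_BPS = [6_000_000, 3_000_000, 1_500_000, 800_000]  # high -> low
HEADROOM = 1.2  # require 20% spare capacity to avoid rebuffering

def pick_rendition(measured_throughput_bps):
    for bitrate in RENDITIONS_BPS:
        if measured_throughput_bps >= bitrate * HEADROOM:
            return bitrate
    return RENDITIONS_BPS[-1]  # always fall back to the lowest rendition

print(pick_rendition(8_000_000))  # 6000000
print(pick_rendition(2_000_000))  # 1500000
```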

Anyone interested in this?

~~~
fasteddy760
Yes! How do I keep updated with progress? Do you have a project hosted
somewhere?

------
sergiotapia
Life pro tip: just use an Acestream and have zero connectivity issues.

https://www.reddit.com/r/soccerstreams/comments/82ac7j/acestreams_you_what_are_they_and_do_i_use_them/

My brother bought the MMA ticket to watch McGregor vs Mayweather and it was a
horrible experience.

I booted up my laptop and ran an Acestream: boom, crystal-clear high-definition
image with zero network issues.

~~~
untog
Of course, those streams are not legal.

~~~
oh_sigh
Is it illegal to be the streamee, the streamer, or both?

~~~
llao
It is P2P so you would be both.

------
lephty
Is there anything that can be done from the user's perspective, beyond
bandwidth, to improve your streaming experience?

~~~
inglor
Hey, I work at Peer5 (another Peer5 employee wrote this article). We typically
measure video user experience in several ways:

- The amount of rebuffering the user is getting (basically, the less the user
sees the loading wheel, the better).
- The bitrate the user is getting (are users seeing the video in the highest
possible quality?).
- Whether or not there are any media errors.
- The amount of time the video takes to load after a seek.
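As a rough illustration, those four metrics could be computed from a session log like this (field names are hypothetical, not Peer5's actual API):

```python
# Sketch of the playback-experience metrics listed above, computed from a
# hypothetical session log. All names here are illustrative.

def session_metrics(play_time_s, stall_time_s, bits_delivered, media_errors,
                    seek_load_times_s):
    return {
        # share of wall-clock time spent showing the loading wheel
        "rebuffer_ratio": stall_time_s / (play_time_s + stall_time_s),
        # average delivered bitrate over actual playback time
        "avg_bitrate_bps": bits_delivered / play_time_s,
        "media_errors": media_errors,
        # average time to resume playback after a seek
        "avg_seek_load_s": sum(seek_load_times_s) / len(seek_load_times_s),
    }

m = session_metrics(play_time_s=600, stall_time_s=12,
                    bits_delivered=1_800_000_000, media_errors=0,
                    seek_load_times_s=[0.8, 1.2])
print(m["rebuffer_ratio"])   # ~0.0196 (12s stalled out of 612s)
print(m["avg_bitrate_bps"])  # 3000000.0
```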

From the user's perspective, you don't really have control over these things;
it's up to the broadcaster to set up a good service.

We've found (unsurprisingly) that services that are either paid or are
national broadcasters offer a better user experience than ones that are free
and easily found online, so my recommendation would be to spend a few dollars
and get a solid provider.

(Also, the bandwidth you get now doesn't mean much when the network is very
congested, so it's worth checking how fast your network is during big peaks
and considering a different ISP, or a streaming provider that utilizes P2P.)

------
CapacitorSet
The P2P approach is interesting. Do you have data on how much bandwidth is
saved at peak usage?

It might be especially interesting when many users share the same connection,
effectively achieving broadcast: the CDN pushes the data to one client, which
then broadcasts it to the local network.

------
petepete
I watched the UCL final via YouTube this year and the Bet365 webapp kept
telling me about goals half a minute before they happened on the screen. To be
honest, if you're isolated the delay doesn't _really_ matter.

------
cflat
Don’t use WebRTC for live video. It doesn’t scale with CDNs. Use DASH/HLS.

~~~
whadar
Peer5 uses WebRTC data channels and runs on top of existing DASH/HLS streams.

~~~
cflat
It still doesn't scale efficiently. HTTP-based streaming is the only reliable
way to get economies of scale. The CPU cost per WebRTC socket is high compared
to a cache hit for a static resource. Not to mention that WebRTC is far more
CPU-intensive client-side than HLS/DASH, which benefits from kernel/hardware
offloading.

------
btown
Software for scaling up live-streaming CDN points of presence (POPs) is a
pretty crazy domain. For on-demand video, you can think of a CDN as a cache,
getting known-ahead-of-time chunks. But what about for live streaming? It's
not feasible to stream frame-by-frame directly from your encoding backend to
all the viewers of the World Cup, over something like RTMP - you'd want to use
a CDN. So typically, you distribute meaty (multi-second) HLS segments as
individual video files, or collections of files, to your CDN; once available,
they then need to be requested by browsers/mobile clients as a whole segment,
over HTTP(S). Works well with existing CDN infrastructure (provided they can
handle the write volume and have big enough inbound pipes)... but the huge
issue is that the length of the segment plus round-trips is a lower bound on
effective latency. And when interactivity is required, multi-second delays can
be horrible.
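A back-of-the-envelope sketch of that lower bound, with illustrative numbers (players commonly hold a few full segments of buffer behind live):

```python
# Rough glass-to-glass latency floor for segmented HLS delivery, per the
# lower bound described above. Numbers are illustrative defaults.

def hls_latency_floor(segment_s=6, buffered_segments=3, rtt_s=0.1):
    # Each segment is only advertised in the playlist once fully written,
    # and the player typically buffers several whole segments behind live,
    # so the effective latency is at least segments-buffered * duration.
    return segment_s * buffered_segments + rtt_s

print(hls_latency_floor())             # 18.1 seconds behind live
print(hls_latency_floor(segment_s=2))  # 6.1 -- shorter segments help, a bit
```

Shrinking segments trades latency against request overhead and encoder efficiency, which is why the chunked-transfer approaches below attack the problem differently.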

https://www.wowza.com/blog/hls-latency-sucks-but-heres-how-to-fix-it is a
great writeup. Another overview of the problem, and a proposed solution, is in
this excellent article by Twitter here:

https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef

> In HLS live streaming, for instance, the succession of media frames arriving
> from the broadcaster is normally aggregated into TS segments that are each a
> few seconds long. Only when a segment is complete can a URL for the segment
> be added to a live media playlist. The latency issue is that by the time a
> segment is completed, the first frame in the segment is as old as the
> segment duration... By using chunked transfer coding, on the other hand, the
> client can request the yet-to-be completed segment and begin receiving the
> segment’s frames as soon as the server receives them from the broadcaster.
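A toy comparison of the two delivery modes in that quote, with illustrative numbers:

```python
# Illustrative comparison of how old a segment's first frame is when the
# client can start consuming it, with and without chunked transfer coding
# (the LHLS idea quoted above). Numbers are made up for the example.

segment_s = 4.0  # segment duration
chunk_s = 0.5    # frames flushed to the client in small chunks

# Classic HLS: the segment must be complete before its URL is published,
# so its first frame is already a full segment-duration old.
classic_first_frame_age = segment_s

# Chunked transfer: the client requests the in-progress segment and
# receives frames as the server gets them, so the first frame is only
# about one chunk old.
chunked_first_frame_age = chunk_s

print(classic_first_frame_age / chunked_first_frame_age)  # 8.0x improvement
```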

And Twitch's followup challenge:

https://blog.twitch.tv/twitch-invites-you-to-take-on-the-icme-2018-grand-challenge-2b3824d3537b

> This Grand Challenge is to call for signal-processing/machine-learning
> algorithms that can effectively estimate download bandwidth based on the
> noisy samples of chunked-based download throughput.
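A common baseline for that estimation problem is a simple exponentially weighted moving average over the noisy per-chunk samples; a sketch only, since the whole point of the challenge is that naive smoothing like this handles bursty chunk throughput poorly:

```python
# Baseline bandwidth estimator: EWMA over noisy per-chunk throughput
# samples. alpha trades responsiveness against noise rejection.

def ewma_bandwidth(samples_bps, alpha=0.25):
    estimate = samples_bps[0]
    for sample in samples_bps[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

# Noisy samples around ~5 Mbit/s with one outlier burst; note how the
# single 20 Mbit/s spike drags the estimate well above the true rate.
samples = [5e6, 4.8e6, 5.2e6, 20e6, 5.1e6, 4.9e6]
print(round(ewma_bandwidth(samples) / 1e6, 2))  # smoothed Mbit/s estimate
```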

(IMO) If you're thinking that this is all rather silly, and that live video
streaming is not something that should be done over HTTP in the first place...
there are a lot of reasons why this is the case. All the CDN POPs are
optimized for HTTP GET requests rather than stateful sessions, and Apple's
smiting of Flash removed a lot of incentive for innovation on RTMP servers.
The ironic thing is that Internet connectivity is fast/reliable enough
nowadays that RTMP might have been able to escape its association with
"buffering" spinners, and would provide a much lower-latency experience.
Hopefully there's better standardization in the future as live video becomes
more mainstream.

