We use the internet (and IP in general) to stream video. At high bitrates (200 Mbit/s+) we aim for sub-100 ms end to end; for compressed services we're happy with 500 ms, maybe up to a second if it's something like Sydney to London over the internet.
I was in a control room a couple of weeks ago watching some football. There were two displays: one showed the feed from the stadium, the other the feed from the web streaming service.
There were cheers and then groans from the live end of the room. Nearly a minute later, someone on the web end started running up the field to score. Of course I knew at that point that it wouldn't be a goal, as not only had the people watching the live stream told me, but Twitter was abuzz.
One minute of end-to-end delivery latency is shocking for this type of program. Heck, 10 seconds is bad enough.
There are two distinct things called "latency" here:
1. network latency, measured in milliseconds, which affects stream quality and stability;
2. the delay (lag) between real-time capture and what the end user is seeing, usually measured in seconds. A stream needs to be ingested, transcoded, and sent from distribution servers to edge servers in each target region, with each step adding to the delay.
Minimizing the lag is very hard, because stripping out all the buffers (to reduce the delay) makes the stream very sensitive to network conditions (which reduces quality). With most commercial CDN providers you will get 5-10 seconds. It can be reduced to 2-3 seconds if you know what you're doing.
Edit: in case anyone is interested, in the second scenario, where we achieved a 2-3 second broadcast lag versus real time, the stream source (ingestion) was in the US and the viewers were in mainland China. Network latency was over 600 ms. Wasn't easy!
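As a rough sketch of where those seconds go, here is a hypothetical per-stage delay budget for an HLS-style pipeline. Every number below is an illustrative assumption, not a measurement from any real service:

```python
# Hypothetical per-stage delay budget for an HLS-style pipeline.
# All values are illustrative assumptions, not real measurements.
STAGES_MS = {
    "capture_and_encode": 500,
    "ingest_to_origin": 200,
    "transcode_ladder": 1500,
    "segment_accumulation": 2000,   # a 2 s segment must complete before publishing
    "cdn_edge_propagation": 300,
    "player_buffer": 4000,          # players often hold a couple of segments
}

def glass_to_glass_ms(stages):
    """Sum the per-stage delays into an end-to-end (glass-to-glass) figure."""
    return sum(stages.values())

total = glass_to_glass_ms(STAGES_MS)
print(f"{total / 1000:.1f} s glass-to-glass")  # 8.5 s with these numbers
```

Shrinking the total mostly means attacking the two biggest entries, segment accumulation and the player buffer, which is exactly what makes the stream fragile under bad network conditions.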
I know why there's a delay; I'm just amazed that people aren't concerned about it. The BBC used to offer multicast sources of live TV, which is a far more sensible solution: far more bandwidth-efficient, and it allows end-to-end transmission in the satellite range (or even less).
Wowza did a talk at Demuxed last year about how to do "3 second latency end to end at scale", which I found amusing given that TV people have been doing sub-millisecond latency at scale for nearly 100 years. So at least some people in the industry recognize the problem (which mainly matters for sports events).
Twitch has recently implemented LHLS (it looks like a "Periscope-style" implementation), and I was seeing 1.2 s glass-to-glass.
Maybe I just need to get away from the public streaming services that use HLS and switch to UDP streams and sub-1 s buffer sizes.
No matter what anybody says, the Internet just isn't well suited for massive live streaming events. That's what television is for!
It amused me when we were looking at latency for a program over a ropey bit of connectivity that we were using ARQ on. We were discussing whether we could push the latency up from 2 seconds to 6 seconds (it kept dropping out for 2 or 3 seconds at a time), since it was sport. Then we realised there was a good 30-40 seconds of delay downstream before it even left for the CDN!
I still don't understand half of what Streampunk are trying to do with their NMOS grain workflows, but they are talking about sub-frame HTTP units.
This is not an approach that supports line-synced timing and may not be appropriate for live sports action that requires extremely low latency.
However, for many current SDI workflows that can tolerate a small delay, this approach is sufficient.
I think I prefer this approach to the SMPTE 2110 approach, to be honest, especially given the timing windows that 2110 requires (it doesn't lend itself well to a COTS virtualised environment when your packets have to be emitted at a specific microsecond).
But I digress, this is all very off topic
There's a real opportunity for a sports-oriented OTT company to compete on latency, a DVR that actually works, and expansive rights (I never have to guess if I have access to any sporting event).
If you're watching a sports game, I can see the appeal of a second stream (perhaps curated) with easy-to-access stats on that game, or a different angle from the one the director thinks you want, or whatever.
I don't see the appeal of a second stream in drama, but in things like sport, yes.
It would be annoying if there were multiple devices nearby on different delays, but for me in my living room, I don't care if it's 30 seconds or 3 minutes. I've had results spoiled by Twitter feeds a few times, but it's not the end of the world.
In some cases international viewers may see the "live" footage before local ones.
How is this helpful in either of these cases? A sniper needs eyes on, a drone operator probably has live video from the drone. I don't see how a TV delay would have any effect.
There are a limited number of "spaces" available; I think it's up to 100 Gbit/s of output.
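To put that output figure in perspective, here's some back-of-the-envelope arithmetic; the per-stream bitrates are assumptions for illustration, not the broadcaster's actual encoding ladder:

```python
# Rough capacity arithmetic for a 100 Gbit/s output cap.
# Per-stream bitrates below are illustrative assumptions.
CAPACITY_BPS = 100e9  # 100 Gbit/s

def concurrent_viewers(capacity_bps, stream_bps):
    """How many unicast viewers fit in the given output capacity."""
    return int(capacity_bps // stream_bps)

print(concurrent_viewers(CAPACITY_BPS, 5e6))    # 20000 viewers at 5 Mbit/s HD
print(concurrent_viewers(CAPACITY_BPS, 25e6))   # 4000 viewers at 25 Mbit/s UHD
```

Which is exactly why a unicast UHD trial has "spaces": every extra viewer costs the full stream bitrate again, unlike multicast or broadcast.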
Unlike the FA Cup (where the BBC did a UHD trial), the World Cup will have a lot of games during the week, when people will be watching from the office (although probably not in UHD). This will mean far higher loads on the distribution.
Fortunately, England's three games are all either at the weekend or at 7 PM. The second half of the tournament will really stress the UK internet, though, with both the World Cup and Wimbledon on during the working week.
Anyone interested in this?
My brother bought the PPV ticket to watch McGregor vs. Mayweather and it was a horrible experience.
I booted up my laptop and ran an AceStream: boom, crystal-clear high-definition image with zero network issues.
Here's hoping sports get the memo re: usability. Music piracy plummeted after Spotify arrived on the scene.
If some random dude with pr0n popups can make it work but Fox/NBC/ESPN/etc can't, tough shit.
- The amount of rebuffering the user gets (basically, the less the user sees the loading wheel, the better).
- The bitrate the user gets (are users seeing the video at the highest possible quality?)
- Whether or not there are any media errors.
- The amount of time the video takes to load after a seek.
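A minimal sketch of how the first of those metrics might be computed: a rebuffering ratio derived from a hypothetical list of player state events (the event names and structure are assumptions, not any real player's API):

```python
# Sketch of a rebuffering-ratio metric over hypothetical player events.
def rebuffer_ratio(events):
    """Fraction of the session spent showing the loading wheel.

    events: list of (timestamp_s, state) tuples, state is "playing"
    or "buffering". The session ends at the last event's timestamp.
    """
    stalled = 0.0
    for (t0, state), (t1, _) in zip(events, events[1:]):
        if state == "buffering":
            stalled += t1 - t0
    total = events[-1][0] - events[0][0]
    return stalled / total if total else 0.0

# A 120 s session with one 3 s stall at the one-minute mark:
session = [(0, "playing"), (60, "buffering"), (63, "playing"), (120, "playing")]
print(f"{rebuffer_ratio(session):.1%}")  # 2.5%
```

Real QoE pipelines aggregate this per session and per CDN/region, but the core calculation is this simple.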
From the user-experience side, you don't really have control over these things; it's up to the broadcaster to set up a good service.
We've found (unsurprisingly) that services that are either paid or run by national broadcasters offer a better user experience than ones that are free and easily found online, so my recommendation would be to spend a few dollars and get a solid provider.
(Also, the bandwidth you get now doesn't mean much when the network is very congested, so it's worth checking how fast your connection is during big peaks and considering a different ISP, or a streaming provider that utilizes P2P.)
It might be especially interesting when many users share the same connection, effectively achieving broadcast: the CDN pushes the data to one client, which then broadcasts it to the local network.
Caching with WebRTC is very hard - even for a single second - since every connection is stateful.
https://www.wowza.com/blog/hls-latency-sucks-but-heres-how-t... is a great writeup. Another overview of the problem, and a proposed solution, is in this excellent article by Twitter here:
> In HLS live streaming, for instance, the succession of media frames arriving from the broadcaster is normally aggregated into TS segments that are each a few seconds long. Only when a segment is complete can a URL for the segment be added to a live media playlist. The latency issue is that by the time a segment is completed, the first frame in the segment is as old as the segment duration... By using chunked transfer coding, on the other hand, the client can request the yet-to-be completed segment and begin receiving the segment’s frames as soon as the server receives them from the broadcaster.
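The quoted mechanism can be illustrated with a toy model: segment-based delivery holds a frame back until its whole segment completes, while chunked transfer forwards each frame as it arrives. The frame rate and segment length below are assumptions:

```python
# Toy model of segmented vs. chunked (LHLS-style) delivery latency.
# Segment length and frame rate are illustrative assumptions.
SEGMENT_S = 2.0
FRAME_INTERVAL_S = 1 / 30                               # 30 fps
FRAMES_PER_SEGMENT = int(SEGMENT_S / FRAME_INTERVAL_S)  # 60

def delivery_delay(frame_index, chunked):
    """Seconds between a frame arriving at the packager and being sendable."""
    if chunked:
        return 0.0  # forwarded immediately (network time ignored)
    # Segmented: the frame waits until the last frame of its segment arrives.
    segment = frame_index // FRAMES_PER_SEGMENT
    last_frame = (segment + 1) * FRAMES_PER_SEGMENT - 1
    return (last_frame - frame_index) * FRAME_INTERVAL_S

# The first frame of a segment waits almost the full segment duration:
print(round(delivery_delay(0, chunked=False), 2))  # 1.97
print(delivery_delay(0, chunked=True))             # 0.0
```

This is exactly the "first frame in the segment is as old as the segment duration" cost the article describes, and why chunked transfer removes it.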
And Twitch's followup challenge:
> This Grand Challenge is to call for signal-processing/machine-learning algorithms that can effectively estimate download bandwidth based on the noisy samples of chunked-based download throughput.
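One naive baseline for that challenge is simply smoothing the noisy per-chunk throughput samples with an exponentially weighted moving average. To be clear, this is a classic textbook filter sketched here for illustration, not Twitch's actual solution:

```python
# Baseline bandwidth estimator: EWMA over noisy per-chunk throughput samples.
def ewma_throughput(samples_bps, alpha=0.2):
    """Smooth noisy chunk-download throughput samples.

    alpha: weight of the newest sample; smaller = smoother, slower to react.
    """
    estimate = None
    for s in samples_bps:
        estimate = s if estimate is None else alpha * s + (1 - alpha) * estimate
    return estimate

# Noisy samples bouncing around a true ~4 Mbit/s link:
samples = [3.2e6, 5.1e6, 2.8e6, 4.6e6, 3.9e6, 4.2e6]
print(f"{ewma_throughput(samples) / 1e6:.2f} Mbit/s")  # 3.81 Mbit/s
```

The hard part the challenge is actually after is that chunked downloads make individual samples far noisier than whole-segment downloads, so a simple filter like this under- or over-estimates badly; hence the call for better algorithms.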
(IMO) If you're thinking this is all rather silly, and that live video streaming is not something that should be done over HTTP in the first place... there are a lot of reasons why it ended up this way. All the CDN POPs are optimized for HTTP GET requests rather than stateful sessions, and Apple's smiting of Flash removed a lot of the incentive for innovation on RTMP servers. The ironic thing is that internet connectivity is fast and reliable enough nowadays that RTMP might have been able to escape its association with "buffering" spinners, and would provide a much lower-latency experience. Hopefully there's better standardization in the future as live video becomes more mainstream.