Interesting post! The author mentions that the first step was to "receive this input and to generate HLS output to a known folder". I have a couple questions about this step:
1: Did they re-encode the input? Or just repackage it?
2: I'm unfamiliar with EvoStream, but if it is ingesting RTMP and outputting HLS, why did Globo need to bother generating an HLS manifest? Couldn't they just use the one EvoStream created?
I'm glad you liked it. I'm not the author of the blog post, but I was a member of the team.
1: We just repackaged it.
2: The problem is that EvoStream stores the manifests and chunks locally, and we needed high availability. That's why we used external data storage. We have had up to 30 simultaneous streams, with 7 bitrates each and 2 hours of DVR.
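To make the "build the playlist on our side" idea concrete, here is a minimal sketch of rendering an HLS media playlist from segment metadata kept in external storage, trimmed to a DVR window. Function and field names are illustrative, not Globo's actual code:

```python
# Hypothetical sketch: build an HLS media playlist server-side from
# segment metadata, keeping only segments inside the DVR window.

def build_media_playlist(segments, dvr_window_seconds=7200):
    """Render an HLS media playlist covering only the DVR window.

    `segments` is a list of (uri, duration_seconds) tuples, oldest first.
    """
    # Walk backwards from the live edge, keeping segments until the
    # DVR window (e.g. 2 hours) is full.
    window, total = [], 0.0
    for uri, duration in reversed(segments):
        if total + duration > dvr_window_seconds:
            break
        window.append((uri, duration))
        total += duration
    window.reverse()

    # TARGETDURATION must be at least the longest segment, rounded up.
    target = max(int(d + 0.999) for _, d in window)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in window:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"
```

Because the playlist is rendered from storage on every request, the DVR window is just a server-side parameter rather than something baked into the packager.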
Just adding one extra piece of information: building the playlist on our side enabled us to define the DVR window (how far back one can seek in the video). Since we built both the server and client side, it was also possible to add custom tags to the stream, and we did. tl;dr: it was possible to control adaptive streaming from the server side.
When our CDN was very crowded, one tag on the HLS playlist was able to direct users to a lower quality, preventing all users from fighting for the same bandwidth and avoiding rebuffering events.
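One simple way to picture this server-side steering is rewriting the master playlist under congestion so clients only see variants at or below a bandwidth cap. This is an illustrative sketch under that assumption, not Globo's actual mechanism or tag:

```python
# Illustrative sketch of server-side bitrate steering: when the CDN is
# congested, serve a master playlist with the high-bitrate variants
# removed, so every player's ABR logic settles on a lower rendition.
import re

def cap_master_playlist(master, max_bandwidth):
    """Drop variant streams whose BANDWIDTH exceeds max_bandwidth (bits/s)."""
    out, skip_next_uri = [], False
    for line in master.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            if m and int(m.group(1)) > max_bandwidth:
                skip_next_uri = True  # drop this variant and its URI line
                continue
        elif skip_next_uri and line and not line.startswith("#"):
            skip_next_uri = False
            continue
        out.append(line)
    return "\n".join(out) + "\n"
```

Since HLS clients re-fetch playlists continuously during live playback, a change like this takes effect within a few segment durations without touching the players.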
Honestly - one of the most informative, well written, extremely well referenced articles I've read literally in years.
This article will teach me more than I've learned in a long time. Thank you very much for your contributions.
It is rewarding working at globo.com with such skilled engineers and great solutions, in almost every area. Congrats to Leandro, Juarez, and everybody involved.
It feels like they reinvented the wheel on a lot of things (especially on the monitoring side). I also wish there was some more detail about the caching/endpoint side of things. We rely heavily on Akamai's services for their edge servers, caching, and site acceleration. I don't see anything like that here, but maybe they're just not mentioning it?
On the monitoring side we only implemented the dashboard (pulling data from all kinds of sources); for the rest we relied on Logstash, Graphite, Elasticsearch...
The architecture on top of what was described exists almost purely to provide caching capabilities, i.e., a bunch of edge servers caching content for end users.
Please keep in mind that FIFA14 was just one event; we still broadcast many other events and channels simultaneously (private and public).
As for why we chose HLS over RTMP: in our experiments HLS proved easier to scale than RTMP (maybe because the latter is stateful, the current players don't have an optimized adaptive bitrate algorithm, and it's easier to scale HTTP than RTMP).
I disagree (maybe with a little context you will understand):
* We were only allowed to broadcast to Brazil.
* We already had all the servers and the needed bandwidth; why pay for something you already have?
Working in the TV industry myself, the inevitable question of DRM springs to mind. Were these streams encrypted as well somewhere along the described process?
This documentary is very old and pre-dates the internet.
There's no denying the extent of Globo's influence. But, compared to the 80's, it's merely a shadow of its former self. Probably because of the internet. Also, there are other communication vehicles fighting for eyeballs now.
Politics aside, its IT arm, globo.com, is pretty modern and open.
> There's no denying the extent of Globo's influence. But, compared to the 80's, it's merely a shadow of its former self.
The ire lingers, though. Sometimes Globo shows up to report on something and they essentially get kicked out of the area (by the people near the event). This may be a combination of how it tends to report on certain subjects and leftover distrust from the past decades.