My bullet point high-level thoughts on the Super Bowl stream:
- Numbers aren’t out yet, but I would expect an AMA (average minute audience) number of under 9 million (2023 was 7 million AMA).
- While some were debating online whether the Paramount+ stream was in 4K, it wasn’t. Their stream was 1080p 60fps true PQ HDR, had no color-space conversions, and was encoded at six bitrates maxing out at 12 Mbps.
- Depending on the device and platform, I saw latency ranging from 2 to 60 seconds behind the TV broadcast feed.
- HDR on Paramount+ looked great across all devices I streamed on, and the upscaled 4K/HDR/5.1 stream on YouTube TV looked great as well.
- Some users, including myself, did experience errors or stream crashes, which started before kickoff and were largely mitigated before they could impact a larger portion of users. I put the impact at 1% or less of the total streams delivered.
- One media outlet reported, “The streaming platform seems to be struggling with all the Super Bowl traffic,” which was inaccurate. Paramount was prepared for the viewership they received, and there was no problem with the volume of users.
The more CDN providers there are, the more choices you have for less money. If you are using Vultr as a hosting provider, it's much easier to enable/disable the CDN from the control panel.
>Or, the site owner could manipulate the coordination JavaScript to report the downloaded file as invalid/corrupt/tampered with, and then just not re-request the file (and there's nothing you can do to stop them). So in that case, the site owner is managing to not pay for the service (assuming you don't bill for downloads that were reported as invalid).
This fraud is possible. We collect logs from three sources:
- the Balancer (belongs to Farba)
- the Peer
- the User
We can analyze every single case of suspicious behavior.
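A reconciliation pass over those three logs can flag the exact scenario in the quote. A minimal sketch; the field names ("size", "checksum", "bytes_sent", "valid") are invented for illustration, not Farba's actual log schema:

```python
# Sketch of cross-checking one download event against all three log
# sources. Field names here are assumptions for the example.
def is_suspicious(balancer_log, peer_log, user_report):
    """True when Balancer and Peer agree the correct file was fully
    served, yet the user-side JavaScript reported it invalid."""
    delivered_ok = (peer_log["bytes_sent"] == balancer_log["size"]
                    and peer_log["checksum"] == balancer_log["checksum"])
    return delivered_ok and not user_report["valid"]

balancer = {"size": 1024, "checksum": "abc"}
peer = {"bytes_sent": 1024, "checksum": "abc"}
print(is_suspicious(balancer, peer, {"valid": False}))  # → True: investigate
print(is_suspicious(balancer, peer, {"valid": True}))   # → False: normal
```

Two matching server-side records against a user-side "invalid" report is exactly the pattern a site owner gaming the billing would produce repeatedly.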
>now your battery-starved mobile phone is validating cryptographic signatures
With a native app, it costs nothing. The main power consumer is actually the phone screen.
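For a sense of scale: validating a cached chunk is typically one hash over the bytes (a real client might additionally verify a signature over a manifest of hashes, which adds a single cheap public-key operation). A stdlib-only sketch that times a SHA-256 integrity check of a 4 MiB chunk:

```python
import hashlib
import time

# Illustrative only: integrity check of a cached chunk by hash. The
# "expected" value would come from a signed manifest in a real system.
chunk = b"\x00" * (4 * 1024 * 1024)           # 4 MiB of cached content
expected = hashlib.sha256(chunk).hexdigest()

start = time.perf_counter()
ok = hashlib.sha256(chunk).hexdigest() == expected
elapsed_ms = (time.perf_counter() - start) * 1000
print(ok, f"{elapsed_ms:.1f} ms")             # time for one 4 MiB check
```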
>who's responsible for illegal content hosted on servers
This is a very important question. We are going to run AI models against all content we host on Peers. Right now, we don't accept third parties and "fake" all Peers ourselves. We want to ask lawyers what the best way to handle this case is.
IPFS is very slow. It's like cold storage: not good for a web cache at all. Though Cloudflare provides gateways for IPFS, and again, we have a CDN layer here. It would be nice to see IPFS performance numbers for comparison.
>a CDN isn't just about bandwidth, it is about _disk_
Disk performance is the weakest part of the Peer; it's the one component you can't guarantee any QoS for on a cheap server. The idea is to keep the popular cache in memory on the Peers. Everything else we will serve from a limited number of regular servers (belonging to Farba) with a high-performance disk subsystem.
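Keeping the popular cache in memory is essentially an LRU keyed by URL and bounded by bytes. A minimal sketch, not the actual Peer code; the byte-bounded capacity and pure-LRU eviction are assumptions:

```python
from collections import OrderedDict

class PeerCache:
    """Minimal in-memory LRU cache for a Peer's hot objects (sketch)."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()  # url -> body bytes

    def get(self, url):
        if url not in self.items:
            return None  # miss: fall back to Farba's disk-backed servers
        self.items.move_to_end(url)  # mark as recently used
        return self.items[url]

    def put(self, url, body):
        if url in self.items:
            self.used -= len(self.items.pop(url))
        self.items[url] = body
        self.used += len(body)
        while self.used > self.capacity:  # evict least recently used
            _, evicted = self.items.popitem(last=False)
            self.used -= len(evicted)

cache = PeerCache(capacity_bytes=10)
cache.put("/a", b"12345")
cache.put("/b", b"12345")
cache.get("/a")          # touch /a so /b becomes the LRU entry
cache.put("/c", b"123")  # overflows capacity, evicts /b
print(sorted(cache.items))  # → ['/a', '/c']
```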
>Cache management (eviction, invalidation, etc) is done where?
Yes. You can invalidate either a single URL or a folder with a wildcard.
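The single-URL vs. wildcard distinction can be as simple as glob matching against the cached keys. An illustrative sketch only; the real invalidation API is not shown in this thread:

```python
from fnmatch import fnmatch

def invalidate(cache, pattern):
    """Drop every cached URL matching an exact URL or a wildcard pattern."""
    victims = [url for url in cache if fnmatch(url, pattern)]
    for url in victims:
        del cache[url]
    return victims

cache = {"/img/a.png": b"...", "/img/b.png": b"...", "/css/site.css": b"..."}
invalidate(cache, "/img/*")   # folder invalidation via wildcard
print(sorted(cache))          # → ['/css/site.css']
```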
>you are actually doubling your origin traffic just to push to the cache after serving the original client request
This is something we can proxy so that we do only one request per file. The current implementation was done for simplicity and reliability.
>but I don't see how this can work. Are they going to rewrite the object in a way that doesn't require the host certificate?
You can easily check this. Please open the network console on our demo page and see how it works:
- we have two different certificates (one for the main site, one for the Peer)
Your example servers have 16-32 GB of memory, which is basically nothing. If you ever ramp to real-world traffic, the peer will be constantly evicting objects (if you use a normal LRU). Even if you backstop that with a mid-tier belonging to Farba to protect the origin, what kind of cache hit rate are you expecting at the peer?
> we just need to add more Peers and evenly distribute the workload.
That doesn't make sense! If you add more peers just to keep more files in memory, each peer is serving less and less data. There's a fixed, finite amount of usage that the system will receive, and the more peers you add, the more that usage gets subdivided among them. That means each peer makes less money, which makes the system unprofitable. Moreover, you're spending more money to distribute the file to more peers (well, you're not; you're offloading that to the customer right now), which means the more peers there are, the more expensive it gets for you or the customer.
The problem you have is that handling more files means cutting peer profits without adding any benefit for yourself or the peers. Storage isn't factored into pricing, which means that the more you accommodate availability (an important part of being a CDN!), the worse the economics end up being.
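The subdivision point is easy to make concrete with invented numbers: hold total demand and the price fixed, vary only the peer count, and per-peer revenue falls linearly.

```python
# Toy model of the subdivision argument; every number is invented.
def per_peer_payout(total_gb, price_per_gb, peer_share, peers):
    """Monthly payout per peer when fixed demand is split evenly."""
    return total_gb * price_per_gb * peer_share / peers

TOTAL_GB = 10_000   # fixed monthly demand across the whole system
PRICE = 0.01        # USD per GB the customer pays (assumed)
SHARE = 0.7         # fraction of revenue paid out to peers (assumed)

for peers in (10, 50, 100):
    payout = per_peer_payout(TOTAL_GB, PRICE, SHARE, peers)
    print(peers, f"${payout:.2f}/month per peer")  # 7.00, 1.40, 0.70
```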
> Disk performance is the weakest part of the Peer; it's the one component you can't guarantee any QoS for on a cheap server. The idea is to keep the popular cache in memory on the Peers. Everything else we will serve from a limited number of regular servers (belonging to Farba) with a high-performance disk subsystem.
I use a CDN to serve about 50GB of files stored on my origin, which turn over every week or so (a new 50GB every week). What you're saying means that _at best_ I need at least 3-4 peers in order to keep one copy of all of my files available; that's using your own numbers for a LeaseWeb or OneProvider peer. And if I'm competing with other customers, my content is getting evicted, resulting in more hits to my origin. If I'm actually using your system as a CDN where peers should be physically close to my users, that means I'm filling up RAM on lots and lots of peers just so that enough peers close to my users can serve the files.
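The "3-4 peers" figure is straightforward arithmetic. A sketch with assumed numbers: 16 GB peers (the low end of the examples above) with 80% of RAM usable for cache, which is an assumption, not a figure from this thread:

```python
import math

# Back-of-envelope for the 50 GB working set; the usable fraction of
# peer RAM is an assumed number for illustration.
content_gb = 50
peer_ram_gb = 16          # low end of the example servers
usable_fraction = 0.8     # OS, runtime, and buffers take the rest

peers_needed = math.ceil(content_gb / (peer_ram_gb * usable_fraction))
print(peers_needed)       # → 4 peers for a single in-memory copy
```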
For unpopular content, we will deploy a limited number of servers (belonging to Farba) with a performant disk subsystem. Web cache storage is a separate option in every CDN: if you want your files always in the "hot" cache, you have to pay extra.