Netflix now supports TLS 1.3 (netflixtechblog.com)
308 points by caution on April 21, 2020 | 137 comments



I noticed they didn't explicitly mention why they feel the need to ensure authentication+confidentiality+integrity for their streams, given that the data they're dealing with is films and TV shows, rather than, say, payment details.

As I understand it, they use HTTPS to prevent spying and data-mining by unscrupulous ISPs. It doesn't affect their DRM at all, which would work just as well over plain HTTP.


>I noticed they didn't explicitly mention why they feel the need to ensure authentication+confidentiality+integrity for their streams, given that the data they're dealing with is films and TV shows, rather than, say, payment details.

Because secure should be the default where we can do it.

If I'm watching a show, I don't necessarily want every single router along the way knowing what show I'm watching. Either because of advertising profiles, or just because I don't want people knowing that I'm binge-watching a specific show, or, to go to an extreme, because what I'm watching isn't legal where I live.

If I'm watching a documentary or news show, I don't want the possibility that someone could modify my stream and change parts of it. Misinformation is a big deal, and if we have a simple solution to get rid of whole attack vectors (especially before they become common), why not use it?

And TLS gets rid of other attacks as well. Token stealing is a lot harder if EVERYTHING is TLS-encrypted vs if only some things are. It can also secure against vulnerabilities in the software that reads the streams: they no longer have to harden their systems as much against potentially hostile data, because they can sit behind well-tested TLS termination that only allows verified data through to the "backend".

Everything should be encrypted in transit, zero exceptions. Just because you don't think it's important to hide doesn't mean that others don't. Just because you think your parsing is secure doesn't mean that another layer isn't useful for defense in depth. We have the technology. It's free, cheap to run, and doesn't hurt the user experience in any significant way, so why not use it?


> If I'm watching a show, I don't necessarily want every single router along the way knowing what show I'm watching.

Unless Netflix has also started padding their compressed video segments to a deterministic size, TLS doesn't really help with the "I don't want people to know what video I'm watching" problem, as the encrypted media segments are first compressed using a variable bitrate encoding that causes a unique pattern in their resulting chunk sizes; and so, with a little database of signatures--apparently even just at the TCP level, without caring at all about the HTTP and TLS layers... which surprised even me, and I'm quite cynical about this stuff--you can pretty rapidly fingerprint what movie someone is watching from a passive traffic dump.

https://dl.acm.org/doi/10.1145/3029806.3029821

> Identifying HTTPS-Protected Netflix Videos in Real-Time (2017)

> After more than a year of research and development, Netflix recently upgraded their infrastructure to provide HTTPS encryption of video streams in order to protect the privacy of their viewers. Despite this upgrade, we demonstrate that it is possible to accurately identify Netflix videos from passive traffic capture in real-time with very limited hardware requirements. Specifically, we developed a system that can report the Netflix video being delivered by a TCP connection using only the information provided by TCP/IP headers.

> To support our analysis, we created a fingerprint database comprised of 42,027 Netflix videos. Given this collection of fingerprints, we show that our system can differentiate between videos with greater than 99.99% accuracy. Moreover, when tested against 200 random 20-minute video streams, our system identified 99.5% of the videos with the majority of the identifications occurring less than two and a half minutes into the video stream.

Note that this exact same kind of attack has been applied successfully to other places where a pattern of sizes can differentiate usage, including: Google Maps tiles (which are either variably-compressed fixed-dimension images or variably-dense fixed-region vectors), web pages (which are different sizes already but also link to images and other resources that are random sizes, which helps create the fingerprint), and non-personalized type-ahead search queries (where the result for each letter brings up some unknown set of titles and URLs; as you type each character the sequence of sizes for your intermediate results can expose the query, potentially even without needing any model of the likely keywords).

Maps: https://ioactive.com/ssl-traffic-analysis-on-google-maps/

Search: https://eprint.iacr.org/2014/959.pdf

(edit: I just realized it might be interesting for me to note that I am the lead developer of a new VPN protocol called Orchid and have "attempting to solve as many of these issues as generically and yet with as little required overhead as I can" on my todo list; to be very clear: I don't handle any of this stuff yet at all, but should get to it in the next few months... while the company behind Orchid that is paying me to work on the protocol talks a lot about itself already, I'm really intending to do my own "launch" of the product once I solve some of the stability issues and finish my traffic analysis mitigations.)


Yup. One thing I'm surprised hasn't happened already, but which maybe has a better chance in QUIC than in TLS 1.3 itself, is making use of TLS 1.3 padding to exactly fill the MTU on typical 1500-byte (or so) links.

The TLS 1.3 on-wire format is weird because it needs to pass rusted-in-place TLS 1.2-only middleboxes. One of the convenient side effects is that you can add one or more bytes of padding for "free". Any 1499-byte packet can be transformed into a 1500-byte packet just by adding one byte of padding, whereas many naive formats couldn't do that.

So this means you could "right size" packets just before sending, so that you still send the same number of packets but they're always full, reducing the ability to distinguish their contents by length.
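
As a sketch of that right-sizing (a toy, not a real TLS record layer): per RFC 8446, the protected plaintext of a TLS 1.3 record is the content, then one content-type byte, then zero padding, so filling a record out to a target size is just appending zeros. In Python:

    def pad_inner_plaintext(content: bytes, content_type: int, target_len: int) -> bytes:
        # TLSInnerPlaintext = content || ContentType (1 byte) || zero padding;
        # receivers strip trailing zeros, so padding is "free" at any granularity
        body = content + bytes([content_type])
        if len(body) > target_len:
            raise ValueError("content too large for target record size")
        return body + b"\x00" * (target_len - len(body))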

Signal does something more sophisticated to defeat this sort of analysis for its GIF support. The local client does two or more overlapping GETs. So maybe it needs a 14230-byte GIF, but it fetches 8430 and 9401 bytes of data, then throws some of it away to reassemble the 14230 bytes needed; an adversary is left with too many possibilities. But it's probably a bit much to expect Netflix subscribers to pay extra for bandwidth just to defeat an adversary who wants to know how much of Tiger King they've seen.


Netflix's approach of doing TLS in the (FreeBSD) kernel may make it easier to do this. In most stacks, the TLS segmentation is too far away from the TCP segmentation to make this practical, especially since the MTU/MSS can change during a connection.


That's really interesting. If I'm understanding this correctly, it's basically another form of compression leaking information about the underlying data. Would they need to pad everything to a deterministic size, or would they just need to change some early parameters during the encoding process to throw off the rest of the chain?

But in the end it still doesn't mean TLS isn't worth it for all the other benefits (not that you implied that).


The way static "streaming" video works is that you divide the video into a number of small segments--maybe every 10 seconds or every 2 seconds or something--and then encode each of those segments at various quality levels and store all of those files on HTTP servers. There is then a "manifest" file that lets the client learn what qualities are available and what their URLs will be for various timestamps. The client then just starts downloading chunks. If it is finding itself with free time, it starts downloading higher quality (larger) chunks in the future. If it isn't downloading them fast enough, it downloads lower quality (smaller) chunks in the future.
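
To make the client logic just described concrete, here's a toy sketch; the manifest shape and the buffer thresholds are invented for illustration:

    def pick_next_chunk(manifest, timestamp, buffer_seconds):
        # manifest[timestamp] maps quality level -> chunk URL (hypothetical shape)
        qualities = sorted(manifest[timestamp])        # e.g. [240, 480, 720, 1080]
        if buffer_seconds > 30:                        # plenty buffered: go higher quality
            return manifest[timestamp][qualities[-1]]
        if buffer_seconds < 5:                         # nearly stalled: drop quality
            return manifest[timestamp][qualities[0]]
        return manifest[timestamp][qualities[len(qualities) // 2]]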

The issue then is that no matter what you do with the compression algorithms, the user is downloading these chunks and you can see the pattern of packets in one direction setting up requests and the packets in the other direction replying with the chunks and you can fingerprint what movie someone is watching. It isn't some kind of encoding issue, as the data is all encrypted: it is the entire concept of taking fixed length segments of a movie that will compress to some non-deterministic size. If you take the first two minutes of every Star Wars movie, divide each up into 10 second segments, and then compress those segments, the sequence of sizes of the segments will be pretty unique.

What is so great about this particular paper is that you don't even really need to analyze the TLS layer and try to pay close attention to really figure out the request/responses: they just fingerprint the TCP flow and that's sufficient, which in retrospect doesn't surprise me as what you are really looking for is some kind of rate of requests to responses for the chunks over the course of those first few minutes of watching the video, and don't really need to know for sure where the boundaries are: you have a long enough sequence and a small enough catalog (there aren't tens of millions or billions of videos on Netflix) to get a really strong fingerprint using just the relative rates.
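
In toy form the matching is nothing fancier than this (real classifiers are statistical and tolerate noise rather than demanding exact equality):

    def identify(observed_sizes, fingerprints):
        # fingerprints: {title: [segment sizes in bytes]}; observed_sizes is the
        # sequence of response sizes recovered from a passive traffic capture
        n = len(observed_sizes)
        for title, sizes in fingerprints.items():
            for offset in range(len(sizes) - n + 1):
                if sizes[offset:offset + n] == observed_sizes:
                    return title
        return None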

To fix this you really need to either inject extra random traffic (such as extra packets to the server that break up the request rates) that adds so much noise that you can't figure out the signal "in time"--if it takes longer to fingerprint a movie than the length of a typical movie, that's "good enough"--or you need to destroy signal (which is a better description of what we do if the video segments are all the same size: at that point all movies are by definition the same sequence over and over again; if you then pad the length of every movie to the same 4 hour runtime and force the user to download all of the padded black video frames, you essentially 100% solve the problem ;P).
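
A cheap middle ground for the "destroy signal" option, short of padding every movie to the same four-hour runtime, is quantizing segment sizes so that many titles share the same size sequence; a sketch, with an invented bucket size:

    def padded_size(n, bucket=256 * 1024):
        # round every segment up to the next multiple of `bucket` bytes,
        # collapsing many distinct sizes into a few; the bucket size is a
        # bandwidth-vs-privacy trade-off
        return ((n + bucket - 1) // bucket) * bucket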


Doesn't this inspection of the manifest have a fairly limited scope in how it can be exploited though? I know the attacker will be able to see what someone is watching, but only for non-novel information, right? You can't use that technique to guess anything new, and you'd have to have already seen the video yourself?


What I am talking about here is definitely focused only on "fingerprint known content", as that is the goal of the "figure out which Netflix movie I am watching" use case (as well as the "what model am I watching on myFreeCams/Chaturbate" use case, which might feel worse ;P... it is also the "what video am I watching on PornHub" use case, but that's inherently harder as there are so many many videos and they are often super short: much harder than the moderate catalog of long movies and tv shows on Netflix or the comparatively scant number of available live feeds on camsites).

That said, as noted by someone else on this thread, there has been some work done figuring out what people are saying by analyzing encrypted speech packets; but I imagine that kind of technique would be almost impossible to pull off with these segments on the order of multiple seconds long, and including the video in the stream would seem to make that a total non-starter.


Well one of the things that paper does is they just go fetch the index.

Every Netflix client that is watching say 1080p Rick & Morty s1e2 fetches an index of the bits that make up that video, and then fetches the relevant chunks of data.

So if you just fetch that index, no need to "watch" the video, you know OK 1390450 bytes is the 1st chunk of data making up that video, and then 2046917 bytes is the 2nd chunk. When you see that somebody fetched 1390450 bytes of encrypted data from Netflix, they're probably watching s1e2 of Rick & Morty in 1080p. When they fetch 2046917 bytes of encrypted data that confirms it.

So it's "non-novel" but they don't need to (relatively expensively) actually watch the videos they want to match. It's likely any sophisticated adversary has an index of all the common videos from sites like Netflix, the Play Store, Disney+ and so on.

There is definitely more a client could do to make this harder, but there'd be a sharp trade-off where to make it impossible for passive eavesdroppers to know what you saw costs both you and the "broadcaster" a lot of bandwidth, and bandwidth ain't free.

I'd like to see Netflix at least gently stroll in that direction, maybe use some TLS padding to make it a bit harder to do this trick, but I don't expect them to set off at a sprint and ruin their profitability in pursuit of a non-goal.


> there'd be a sharp trade-off where to make it impossible for passive eavesdroppers to know what you saw costs both you and the "broadcaster" a lot of bandwidth, and bandwidth ain't free

Off the top of my head: if you took a live streaming approach along the lines of live TV, or indeed Google Stadia/GeForce Now/OnLive, then you could presumably make the traffic patterns invariant over what was being watched. You'd need to ensure the bitrate didn't drop during low-detail scenes, I suppose, similar to what you mentioned re. Skype below.

That approach would of course be far more intensive on server resources, but I'm sure a similar level of privacy could be achieved without such inefficiency.


VOIP compression of speech too, at least in the past: http://www.cs.unc.edu/~amw/resources/hooktonfoniks.pdf

edit - said Skype initially but not sure it was specifically Skype


You can defeat this for VoIP by using a codec in CBR (Constant Bitrate) mode. You should look for this feature in secure audio software.

Unfortunately for video CBR is generally thought to be too expensive to be acceptable. 50kbps CBR Opus wastes 22Mbytes of data in an hour if you're silent compared to a good VBR codec. A small price to pay for improved security. But a CBR Netflix encoding might move several gigabytes per hour of extra data compared to a visually similar VBR encoding, and that's going to really hurt both Netflix and quota restricted customers.
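
As a sanity check on that Opus figure, assuming a good VBR codec sends close to nothing during silence:

    bits_per_second = 50_000
    seconds_per_hour = 3_600
    wasted_bytes = bits_per_second * seconds_per_hour // 8   # 22,500,000, i.e. ~22 MB/hour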


Well, your first attack is only theoretically possible; it's basically impossible in a big network.

For a single user it's possible. For a network with gigabytes of data, it's basically impossible; it's not just a "tcpdump" and compare against a fingerprint.

Btw, it's also outdated.


Why would the size of the network possibly matter?... You do know that you can just filter the packets by IP address, right? ;P


Yeah, with a targeted attack. But I meant big-scale monitoring of which person watches what.

Also, EME basically solves this on the audio and video side (maybe not the best solution though).


I think you overestimate how complex this is to purpose-build once you have the basics of DPI in place.

Also, you don't need a precise measurement. Just measuring a random 10k users in a 1M-user network daily will give you a more precise viewership measurement than anything Netflix publishes.


Replying to self… I'm wondering if the recent per-country daily "Top 10" that Netflix is now exposing in its UI (now scraped by https://flixpatrol.com/) is not a response to ISPs already building this? Like a way to kill two birds with one stone: reduce the value of private measurements, and improve UX at the same time.


How does EME change anything?


EME encodes the video content, i.e. without EME you can basically extract the video via tcpdump; with EME you can't, since everything, even metadata is encrypted.

The attack in the PDF basically targets the old technology Netflix used, not DASH+EME inside a video element (MPEG-CENC), i.e. Widevine. This is kinda funny, because browser DRM is highly controversial, but in this case it serves as additional privacy, since video metadata and content get encrypted no matter which transport layer is used.


> EME encodes the video content

EME encrypts the video content, yes, but I don't see that this changes things. Netflix uses HTTPS (TLS) encryption over the top of the EME encryption, as this thread's title states.

> even metadata is encrypted.

I don't believe so. If you're delivering EME-encrypted blobs over insecure HTTP, an ISP will be able to see which blobs you are requesting, simply by their URL.

Aside: I recall reading that Netflix's CDN servers ('OCAs') store EME-encrypted blobs, so only the HTTPS encryption burdens the delivery server's CPU. Unsurprising, of course.

> without EME you can basically extract the video via tcpdump

Six years ago, sure. Today, no, as the stream is sent over HTTPS.

> in this case it serves as additional privacy, since video metadata and content get encrypted no matter which transport layer is used.

Perhaps unencrypted streams are still used to support legacy devices, but Netflix are committed to maximal use of HTTPS.

I don't see how EME's additional layer of encryption changes anything as far as privacy and unscrupulous ISPs are concerned.


I think the point being implicitly made, in the case of streaming, is that it helps fight against ISPs that are categorising and throttling/charging for channels of bandwidth.


Also to prevent traffic shaping: ISPs throttle traffic that contains netflix.com in the SNI. ESNI (encrypted SNI) comes with TLS 1.3.


ESNI was dropped from the TLS 1.3 spec. It is currently a draft protocol.

Almost nothing supports ESNI yet. Chrome does not have it yet. Firefox does, but it is very difficult to enable: there is a config flag, but it does nothing on its own unless you also enable DNS over HTTPS in Firefox.

OpenSSL has no support for ESNI yet either.

ESNI also never tells the user whether it is working, making downgrades fairly easy.

ESNI is a long way from being deployed, let alone useful.


They're Netflix, though, so they've got a lot of use cases where they control both ends of the connection (e.g. they've got a native app for the end user). If it's important to them to have ESNI support, they presumably could do it for those use cases.


CloudFlare has ESNI enabled (and so do websites hosted by it, like medium.com) so it's not as if no one is doing it.


If no clients support it, and you can't tell as a user if it is working, then effectively no one is using it.


Firefox supports it, although currently only behind a config option.

Users can test their ESNI support online here: https://www.cloudflare.com/ssl/encrypted-sni/


It is enabled and working in _my_ firefox.

But, I encourage you to try to get it working in _your_ firefox, where that site says you are in fact using ESNI. It is trickier than just enabling a flag.

If you think this means people are using it, sure. Firefox has ~9% of market share, and a very small percentage (maybe 0.1%) of those users have correctly configured this, and when those users visit a page hosted by Cloudflare they may use ESNI. That is in fact more than zero users. It is a very very small number today, but hopefully it will grow in the future.

And while that page you link tests ESNI, I cannot verify that any other domain supports ESNI. I am unaware of any tooling that lets me test that today.


> Firefox supports it

Only if you use their built-in DoH resolver. The public keys for ESNI are distributed by DNS records and as I understand it there's a bunch of work to be done for Firefox to retrieve these records from classic DNS servers. That work has been classified P5 (we won't do it, but might accept patches)[0].

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1500289


Firefox supports it and Chrome has plans to support it in the future. I agree that it's not much right now but it's a start.


Ah, the good ol' radar detector reflector.

There was a skit I saw a couple decades ago where a person was showing off his radar detector, then in order to combat that, the police had developed a radar detector reflector, so then he had made a radar detector reflector protector or something like that, then they made a protector detector, and so on. It was a couple minutes of explaining his best efforts to counter the police counters to his counters and on and on. But, you know, funny.


You're probably thinking of the movie "The Big Hit". It's trace busters: https://m.youtube.com/watch?v=Iw3G80bplTg

The trace busta busta busta.


Networking noob here. Can't the ISP see Netflix's ASN/commonly used IPs and shape that way?


Yes. The case where it's much more useful is traffic going in/out of major CDNs to shared IP addresses. For Netflix or someone else running their own CDN the gains are much smaller.


So ISPs could still impose traffic shaping on Netflix if they wanted to, I take it? I can't see how Netflix could prevent this. Ultimately, their servers ('OCAs') have easily detectable IP addresses, right?


Fast.com probably helps quite a bit. If ISPs throttle Netflix's traffic, their fast.com measurements will look bad and customers will complain/sue. (AFAIK there's no way to distinguish between fast.com tests and actual Netflix video consumption, since the former's traffic patterns are identical(?) to a Netflix video streaming client's.)

Creating fast.com always seemed like a pretty brilliant move by Netflix to me.


I think that was the whole point of fast.com, right? Speeds to known 'speedtest' servers were manipulated by ISPs, and this was bad for Netflix's business because customers would ostensibly have good speeds yet have slow Netflix, so fast.com was created as a way to measure the 'real' speed to Netflix's servers.


Yeah, that's what I was saying (or at least trying to say).


Good point. Smaller players can't make the same move though.


Both the replies here miss the fact that the relationship between Netflix and ISPs is often pretty cooperative. Specifically, Netflix will provide ISPs with devices to do caching. The ISPs need to install and run these devices, configure the routes to send traffic to them, etc etc. Details: https://openconnect.netflix.com/en/. Because of that (and various other things), it's very easy for ISPs to attribute traffic to Netflix.


IIRC Netflix mostly uses AWS these days, so checking IPs will just match anything AWS-hosted.


For the web site and UI, yes. But the actual video traffic comes from a private CDN: https://openconnect.netflix.com


To give you an answer focused on the crypto itself, because the way they explained the TLS properties doesn't make this clear:

1. Integrity is redundant with authentication, so really you could say they're ensuring confidentiality + authentication. You can't authenticate a thing without implicitly obtaining assurance of integrity. It's a strictly stronger property.

2. Confidentiality is (usually) insecure and unreliable without authentication. Without authentication you have no PKI for a key exchange to symmetric encryption, so you can't even do TLS in the first place. And if you don't have a carefully applied MAC or a native AEAD mode, your symmetric mode isn't that secure either.

So really what you're asking reduces to the question of why they need the most sophisticated TLS scheme for encrypting their streams. If they want the most secure TLS scheme for confidentiality, TLS 1.3 is the way to do it. They explained one particular facet of why this is the case, re: perfect forward secrecy.
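
To make the MAC/AEAD point concrete, a minimal sketch using AES-GCM from the Python cryptography package (an illustration of authenticated encryption generally, not Netflix's actual stack): because the ciphertext is authenticated, any tampering is detected before a single byte of plaintext is released, which is integrity for free.

    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    nonce = b"\x00" * 12                      # demo only: never reuse a nonce per key
    ct = aead.encrypt(nonce, b"video chunk", b"associated data")
    tampered = bytes([ct[0] ^ 1]) + ct[1:]    # flip one bit anywhere
    try:
        aead.decrypt(nonce, tampered, b"associated data")
    except InvalidTag:
        print("tampering detected; no plaintext released")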


ISPs compete with them. They need to think of Comcast wanting to degrade the experience of watching a show that is also available on Peacock.


> I noticed they didn't explicitly mention why they feel the need to ensure authentication+confidentiality+integrity for their streams, given that the data they're dealing with is films and TV shows, rather than, say, payment details.

I can't speak for all of Netflix's motivations, but one of the biggest for OTT platforms is preventing MitM of your streams by middle-proxies - including ISPs.

That MitM prevention encompasses data mining/user privacy, ad injection/replacement, and maintaining control over stream quality (Quality of Experience).

There used to be [and still is, in some places] a lot of transcode-to-lower-bitrate-and-cache-inside-our-network behaviour, especially amongst mobile carriers. Reduced stream quality would reflect on Netflix, not the ISP who might be doing this transparently. A user switching up to 1080p or expecting 4K HDR (as appropriate) wants to get that, and Netflix [or others] want to deliver that experience as intended.

The reason we see a LOT less of this now is due to how easy (and thus, widespread) TLS became.


ISPs can still datamine which shows each user is watching simply based on the bursts of data as shows have fast moving or slow moving scenes.

There are only a few tens of thousands of shows on Netflix, so it ought to be easy to identify.


Has anyone demonstrated that this attack is actually practical? It's super interesting.

One way to defeat it, that is already happening to some extent (though not for this reason), is buffering. Maintain a large enough local buffer, and fill it at a constant bitrate, rather than runrate, and you lose this attack vector, no?
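
A sketch of what "fill at a constant bitrate" could look like client-side; everything here (fetch_chunk, the rate, the read size) is hypothetical, and a real implementation would account for transfer time rather than just sleeping:

    import time

    def constant_rate_fill(fetch_chunk, buffer, rate_bytes_per_s, read_bytes=65536):
        # issue fixed-size reads on a fixed cadence, so the on-the-wire
        # pattern tracks the chosen rate, not the video's variable bitrate
        while True:
            buffer.append(fetch_chunk(read_bytes))
            time.sleep(read_bytes / rate_bytes_per_s)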


The attack is known from VoIP systems: https://www.researchgate.net/publication/321888273_Classific...

Adapting the research to other non-CBR streams doesn't sound far-fetched.


VoIP is a completely different use case though because it needs to be real-time. A latency of even 100ms is noticeable. By contrast, Netflix is streaming pre-recorded content, so any latency is not perceived at all so long as it is smaller than the buffer size. E.g. I could have a 5 minute buffer and suffer a 4 minute Internet outage and never so much as notice, whereas that would obliterate any VoIP use case.


I agree it's not exactly the same. I would also love to inhabit the parallel universe where Netflix hands out a 4-minute buffer, because in this one 20 seconds seems to be on the high end.

However, I strongly suspect that Netflix's video and audio are sent in different streams. Occasionally video for some titles is missing but audio gets through, confusing our daughter to no end. So while you can't infer individual syllables from the audio stream (as you would from VoIP), the audio streams should have varying enough size and chunking characteristics to allow identifying them.


> I would also love to inhabit the parallel universe where Netflix hands out a 4-minute buffer, because in this one 20 seconds seems to be on the high end.

Presumably it depends on the device, but I vaguely recall testing this out on a Chromecast, and I found Netflix to buffer up for several minutes. Some other streaming providers, such as Rakuten, only buffered around 15 seconds.

> I strongly suspect that Netflix's video and audio are sent in different streams

I think this might be right. They presumably want to support lots of different combinations of audio codecs and video codecs, so keeping them separate would make sense. Pure guesswork on my part though.


This is why someone needs to do the research on it. Even a 20 second buffer, filled at a constant bitrate, may suffice to deter the vast majority of potential bitrate signatures that could be used to identify the stream. The status quo may already be sufficient defense.

Interesting data point on the separate AV streams. I can't say I've noticed that myself.


In browsers at least, DRM doesn't work if your full page doesn't use HTTPS.

The encrypted media extensions (EME) are only available on secure contexts, which means the page needs to be HTTPS. Segments are typically fetched with standard XHR/fetch, which means they need to use HTTPS on secure origins as well. You really don't want to have to deal with mixed content.

Unrelated to DRM, Apple's ATS on iOS kinda requires developers to only use HTTPS origins in their apps.

At that point, you might as well just do HTTPS everywhere.


> encrypted media extensions (EME) are only available on secure contexts, which means the page needs to be HTTPS

I didn't know that, but you're right. [0] There was a time Netflix didn't use HTTPS. [1] I guess they must not have supported HTML5 at that time, and must have been using Silverlight for in-browser viewing.

[0] https://www.w3.org/TR/encrypted-media/#privacy-secureorigin

[1] https://arstechnica.com/information-technology/2015/04/it-wa...


To prevent another Max Headroom incident?

Joking aside, Netflix's web page runs on a vast variety of devices, from phones to smart TVs, which don't have the same security profiles. You don't want someone to be able to inject a packet into your Netflix stream to pwn your TV.


> It doesn't affect their DRM at all, which would work just as well over plain HTTP.

Does it work at all though? It does prevent paying customers from getting higher quality streams. But aren't all the shows still widely available in torrents at whatever quality you want?


>given that the data they're dealing with is films and TV shows, rather than, say, payment details.

I definitely have had to enter payment details on Netflix.com before


We're talking about why they encrypt their media streams.


I'm not super familiar with how the TLS handshake and sessions work; is it straightforward to mix and match like that? E.g. tell the client that this particular webpage (like one with payment info) requires TLS 1.3, but less important pages (like a movie) can fall back to TLS 1.0 or SSL or whatever?


https://tools.ietf.org/html/bcp188

Pervasive Monitoring Is an Attack


from the summary:

> From the field test, we are confident that TLS 1.3 provides us a better streaming experience.

so they actually measured an improvement in performance metrics.


That's a performance comparison against TLS 1.2. It doesn't speak to why they're concerned about authentication+confidentiality+integrity for audio-video streams in the first place.


Are you hinting at something here, some threat maybe, that you think Netflix is responding to?

Or do you just not think that more security is a good thing generally and should be done for its own sake?


It's possible to distribute large binary blobs over unencrypted HTTP, and to provide security using hash checks. I believe apt no longer does this due to a flaw in their implementation that would not have been an issue if they'd used HTTPS. I'm unsure if Steam still uses insecure HTTP, but they certainly used to. [0] [1]

I'm all for HTTPS everywhere, but for Netflix and YouTube, that means encrypting exabytes of data. That's going to cost them quite a bit. Seems reasonable to go into more detail than "it's good practice in general".

> Are you hinting at something here

No, as I put in my first comment, I think we know the specific reasons (untrustworthy ISPs), it's just interesting they didn't mention them.

[0] https://whydoesaptnotusehttps.com/

[1] https://developer.valvesoftware.com/wiki/SteamPipe#Server_ad...


Given that hardware support for encryption/decryption is omnipresent, is it really that much more expensive to stream content securely? I'd like to see the numbers, but I just don't think it's actually a meaningful increase in cost. Network traffic itself tends to be the dominating factor in cost, with encryption just upping your electric bill slightly.

It seems like the benefits of transmitting your streaming securely (e.g. defeating interloping ISPs) more than exceed the costs, hence why they're doing it.


It took a significant engineering effort at Netflix.

When people whine about the overheads of HTTPS, they're generally talking nonsense, as the overheads are typically marginal. Video streaming providers are the exception.

https://arstechnica.com/information-technology/2015/04/it-wa...


How are you distributing the hashes?


Either over HTTPS or by signing the manifest files using a well-known public key, and ensuring that stale manifests are never used (otherwise it's possible for an attacker to cause installation of old, vulnerable software).
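
A minimal sketch of the simpler variant, with the bulk bytes over plain HTTP and the expected hash arriving over an authenticated channel (the URL and the hash source are assumptions):

    import hashlib
    import urllib.request

    def fetch_verified(http_url, expected_sha256_hex):
        # expected_sha256_hex must come over HTTPS or from a signed,
        # freshness-checked manifest, per the above
        data = urllib.request.urlopen(http_url).read()
        if hashlib.sha256(data).hexdigest() != expected_sha256_hex:
            raise ValueError("hash mismatch: possible tampering")
        return data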

apt used to be 100% HTTP based. I believe they've changed course recently though. Unsure about Steam.

Steam were clear that the reason they adopted insecure HTTP (previously they used a proprietary protocol) was to enable web caching, by ISPs and other organisations. I can imagine that being very beneficial at a commercial LAN party, for instance.

Related reading: https://wiki.debian.org/SecureApt


Not that I'm agreeing that they don't need HTTPS, but they could do an HTTPS connection to fetch the (relatively small) hashes. On the native applications they could bundle their public key and do everything via HTTP, signing the hashes.


At my $CORP I religiously encrypt all my traffic where possible, because the network team does traffic shaping which breaks things at times.


“Defense in depth”. If you can protect an attack vector you should. Makes everything more difficult for attackers.


I don't think this is a new feature?

I run an out-of-band network tap and have seen only TLS 1.3 traffic for some time now.


TLS 1.3 has been out for a while now, and always promised performance improvements (for example, see this Cloudflare article from Sep 2016: https://blog.cloudflare.com/introducing-tls-1-3/). I wonder why Netflix is only getting to it now? Even if some clients didn't support it yet, seems like they could still introduce support on the server side?


Just guessing: it could be related to the fact that they have been reported to use an unusual setup (kernel TLS on BSD). https://2019.eurobsdcon.org/slides/Kernel%20TLS%20and%20TLS%...


I wonder if this is also done in-kernel on their edge machines. I know that they used to do it in the kernel with kTLS on FreeBSD, which they themselves help develop.


While I really like that they're moving to fully secure TLS 1.3, I was more impressed by their A/B testing. I find it fascinating that they can segment their traffic by device AND then deliver separate streams of content. It's something I've wanted to do at many companies I've worked at, but we never had the data to begin with in order to do that. Always another fire to put out...


It’s interesting to see daily variation in play delay; is this just related to less network congestion during certain hours?


You have a different audience mix around the world (and even within each country) at different times of the day. At certain hours it's more mobile, at other hours it's more desktop; more US or more Asia, etc. (unclear what data is shown on the graph though)


Yes, specifically queuing delay. It is unlikely that any significant proportion of packets are being retransmitted, but both servers and network infrastructure will have slightly longer queues.


It is interesting to see the experiment results, with 7%-8% improvements in media rebuffers from just moving from 1-RTT to 0-RTT. I wonder if they ever considered having only one layer of encryption instead of two (TLS and DRM). Would that save more CPU and hence avoid media rebuffers a lot more?


DRM is required by the people they license video from. I imagine that they wouldn't do it if they didn't have to. (They might still do it to their own content, but I don't see what they gain by doing it for someone else's.)


IIRC, Amazon Prime actually uses less obnoxious DRM on their own content than for 3rd party stuff.

At least, when I had a computer configuration that they said was unsupported for HD due to DRM reasons, I was still able to get Amazon's own videos in 1080p


By now I have only heard about (and experienced) the downsides of DRM. Are there any good arguments for it / case studies that show any reduction in piracy?



But those two layers are at different levels: one is for the content, the other for the HTTP protocol.


Can someone explain why this thread is full of people not caring about security? This article even goes over how TLS 1.3 is a perf improvement

Have the anti-privacy crowd come out in droves now that we have a public desire for contact tracing & there's a desire to scapegoat why netflix et al have reduced stream quality due to increased load?

& HTTP is not an option. I for one enjoy my ISP not being able to inject ads into my streams


You're probably running into the contrarian dynamic: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... Your comment is a great example of the second wave: objections to the objections, getting upvoted to the top of the thread. (Edit: it's no longer at the top, see explanation below.)

Comment threads don't represent the community view. They self-select for commenters who object to something. This is particularly true early on, since objecting is usually a reflex reaction that leads to quick comments. Reflective comments, the better kind, take longer to appear—because reflection is a slower cognitive process and because they take longer to write. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

The third wave is people looking around and not seeing the second-wave description of the thread really matching the comments (we got one of those too: https://news.ycombinator.com/item?id=22935245). That's because the original objections have since been supplemented by more interesting comments that were slower to roll in. It may also be because the janitors a.k.a. moderators have been active in the thread, sweeping up and so on.

Edit: Speaking of janitorial work, I've downweighted this subthread now, which is one of the things we do. Check out the effect: at least as of this moment, if you scroll back to the top, notice how much more enticing and curious the conversation is than if this were the first thing you were seeing. It's as if the thread can breathe.


Neat, I've noticed this pattern in the past, but haven't ended up in this spot myself

What's the better way to play out this dynamic? Since there were three comments I didn't think it'd be optimal to spam by replying to all three individually. Maybe picking one to reply to would be better. I don't think downvoting is the right play, but ignoring may be

Maybe responding with agreement to the article, without objection to the objection


Great question! It's better to reply, because then it at least won't get upvoted to the top of the thread, accruing mass and blocking fresh conversation. I think you're correct that it suffices to pick one to reply to, perhaps the most prominent one, or the one where the reply will be most interesting. I don't see any problem with downvoting something like https://news.ycombinator.com/item?id=22934379 as well.

The trouble with an indignant anti-crap comment is that it can't help but repeat the crap it's objecting to, because that's what it's about. Negation is a form of repetition. Crap subthreads tend eventually to get moderated by users, but that doesn't happen with their anti-crap counterparts. Usually those get upvoted to the top, since people love defending the unfairly criticized and we all respond to indignation. But then it sits there, choking out new discussion with metafumes.

The best way to counter bad comments is to post a good one—for example, a reflective comment about the specifics of the article, or a curious comment about something unexpected in it. This isn't always an option, though, because one doesn't always have something like that to say. I think responding to reflexive negativity with a pleasantly-worded objection can still be helpful—just not as a top-level comment.


I'm imagining some kind of heterarchical mode of comment threading where you can refer to n parent comments for your reply. Could that ever work for semi-mainstream discussion formats? The interface could perhaps work like hashtags, but instead of generic topics, you tag your desired parent comments. Similar to a multi-quote on a non-threaded (chronological only) forum topic.


I don't really see many people on this post arguing against encryption entirely... and the article isn't vs HTTP, it is about an upgrade to a new version of TLS; so it kind of feels like you are just cherrypicking a few comments in order to hit at a rather fascinating strawman agenda / conspiracy theory: is there even a single comment here--other than yours ;P--poking at Netflix over reduced stream quality due to the pandemic?


Those comments have now been downvoted compared to an hour ago. I want to extend good faith that those comments aren't trolling, & so I personally don't think they should be downvoted.

https://news.ycombinator.com/item?id=22933774

https://news.ycombinator.com/item?id=22934040

https://news.ycombinator.com/item?id=22934379

Maybe these comments appear first because their authors are less likely to've read through the article. Otherwise it'd have to be something to do with the demographic of people watching /newest. But it was posted 4 hours before I made my initial post, so I don't know.


Yes: I did find those comments; I am even downthread of one of those comments myself, talking about the fascinating limitations of encryption: I simply do not consider three or four comments to be a thread "full of" such comments (and my mental model of this thread, as well as my own involvement, is from well before your comment, which seems to have now hijacked the first position on this post as people I guess like upvoting complaints about other commenters).


> why netflix et al have reduced stream quality due to increased load?

This is not quite true - it's less about _Netflix's_ load and more about load on the _access networks_ (end-user ISPs).

They are seeing longer peaks, increased congestion (fewer folks are leaving their homes), and thus the reduction in default streaming quality by many streaming platforms was a move to limit congestion on these last-mile networks (and some IXs) where possible.

The EU & India anticipated the most congestion here, which is why you likely saw these announcements start with the EU before becoming global, as other ISPs/IXs/govts began to think about the same.

Disclosure: I work for Google in this space, but don't represent them here.


Who cares about security in the real world? Have you tried googling "should I use SSL"?

The top answer is a stackoverflow answer saying no way. "Putting SSL everywhere is only a way to get a warm fuzzy feeling on security that is no good. It [SSL] is usually only used as a distraction allowing administrators to disregard actual security issues."

https://stackoverflow.com/questions/2177159/should-all-sites...


I wonder if accepted answers with negative upvotes still display at the top on SO... Only one way to find out.


We don't allow our mail to be seized and read without a warrant; why should our other data be susceptible?

Netflix already had a secure system with their original business model of mailing you the content.

Finally, allowing advertisers to profile us based on our browsing data is one step before allowing the police to do so as well. Maybe it's the same step.


But what about interdiction? What if the CIA swaps the director's cut DVD for the theatrical version and you never learn Deckard was a replicant?


Good point. I think there's always a certain element of trust here--the law is more important than the implementation sometimes.


Agreed. I've been blocking HTTP (on port 80) across my network, and I reroute all DNS traffic to a Pi-hole configured to use DoT. At this point in time there is really no reason to let anyone see plaintext traffic.

A few things are whitelisted but the handful of things that break aren't necessary anyway.


Genuine question: Do you not have issues with browsers and other security products and OCSP? At work they blocked all internet access, and tools that forced use of CRLs and OCSP lost their minds. DigiCert and other CAs seem to use port 80 for this. Firefox requires the OCSP option to be disabled for me for GitHub to work without pauses.


OCSP has to be port 80 because of the chicken and egg problem.

Specifically, if you use HTTPS for OCSP then now you need OCSP for the certificate for the HTTPS connection you're doing to get OCSP, so you open another HTTPS connection, and you need OCSP for that certificate, and then you open another HTTPS connection and...

But most browsers don't do OCSP by default because realistically you need fail-open behaviour, and if you have fail-open behaviour there's no meaningful security benefit, so why bother?


OCSP is broken by design. You should turn it off for server applications, and maybe even disable it entirely when compiling OpenSSL, if possible.

OCSP means querying random CA web servers when opening an SSL connection to verify whether the certificate in use was revoked. It's broken by design because servers often simply don't have internet access.


OCSP with server-side stapling and must-staple marked certificates should work fine for most use cases.


This is what happens when people do random shit in the name of security. It's security theatre, not security engineering.

Microsoft Windows, for example, allows administrators to re-publish external CRLs internally, but this only works for the built-in crypto API. Non-native applications generally won't pick this up, including most command-line tools. Blocking outbound port 80 is a mistake.

A colleague said it best: Certificate private keys are like your password. Tell no one, ever. The public key is like your passport. It's safe to hand to a border officer. CRLs are the lists of passport numbers used by known terrorists wanted by Interpol. Preventing the border officer seeing that list just lets the terrorists through the airport. This is never good.

Similarly, most firewalls ship with "drop" as the default rule, which means that a small error in the firewall rules will just end up with 20-120 second timeouts throughout the network. If instead they "rejected" flows with an explicit deny response, this failure mode would be instant and easier to diagnose.

Just yesterday I was trying to diagnose a packet loss issue on a network where some security troll blocked all ICMP between all endpoints. The last time "ping" was used as an attack vector was in the last century, but apparently it's important to prevent any hope of diagnosing entire categories of network issues. Unsurprisingly, their network has atrocious performance, random periods of high packet loss, and asymmetric routing issues. This will not get resolved until the idiot in the security team quits or retires.

PS: I suspect their issue is large UDP packets getting dropped on some paths due to MTU issues. To diagnose this without ICMP in a complex network is basically impossible. The automatic fix is to enable Path MTU Discovery... which requires ICMP!

Every large environment I've been to that has these security trolls in charge, with nobody to rein them in, was a shitshow of broken systems performing catastrophically badly because they spend 99% of the time waiting for timeouts.

PPS: Why firewalls don't simply have a default, auto-updating rule for "Enable CRL & OCSP outbound" for all known public CAs is beyond me. This is an extremely common requirement, to the point of being practically mandatory for any medium-large enterprise. If your firewall doesn't have this feature, and about a dozen similar ones, you're getting ripped off. You're probably paying some decent fraction of a million dollars for a bag of tools, not a complete solution.


How do you manage the cleartext SNI over 443?


> netflix et al have reduced stream quality due to increased load

OT, but why am I still paying full price for less-than-full quality? Same with Amazon - why am I paying for two-day Prime shipping even though it's impossible to get that level of service right now? /rant


Because that's life during a national emergency. If you don't want to pay it then stop paying; NF and Amazon will let that happen. I assume Amazon might even prorate your refund.


I'm fairly certain that neither Amazon nor Netflix promise you anything more than best effort on their streaming quality or delivery time.

You can always cancel your contract with them if you don't like it.


Sure, but if I'm signed up for a 4K plan and Netflix has entirely disabled 4K, they cannot possibly provide me what I'm paying for.

Note: I'm not even sure they have disabled 4K, it's just an example.


Seems a bit like when a congested campus has a "space not guaranteed" paid parking program, and you opted for a tier that includes VIP spaces. The VIP spaces near your building are currently being restriped so you can't use them. Should you automatically get bumped to the non-VIP tier since it no longer brings you value? No, you stay on that tier because theoretically it offers you the VIP spaces at other buildings (like the 4K streams in places that aren't collaborating with Netflix on decongestion strategies).


Because a black swan event has occurred, life is anything but normal right now, and people are generally concerned about things that are far more important.


I understand that, but it's trivial for them to temporarily reduce the amount they're charging me, since I'm not getting what I'm paying for. I'd see it as them doing their part to help people in this crisis. But alas, I digress, as it was just an opportunistic rant, and I feel better for getting it out.


You have the option to opt into an SD-only plan, don't you?


Of course, but that's comparing apples to oranges, so not really relevant.


Reduces the number of simultaneous streams from 2 to 1 as well.


I'm as confused as you are.

My guess is some people have a greater hatred for DRM, which makes Netflix "bad", and so anything Netflix does security-wise that isn't removing DRM becomes irrelevant.

But those comments seem to be mostly downvoted now anyways, so they don't seem to be representative.


Any change can and should be questioned as to if it actually improves things.


Is there a good website listing all the sites that don't use HTTPS or use really old TLS versions? I'm not seeing one.

As an example, the NVIDIA driver download pages switch to http:// for the actual downloads although the rest of the site is HTTPS...

It's interesting to see which companies have difficulty keeping up.


Keep in mind such a list wouldn't necessarily be accurate, especially since proxies like Cloudflare can accept 1.0 and 1.1 connections and convert them to 1.2+ between the proxy and the server.


Oh sure, you'd need to do some research. But for sites that openly use http or old TLS, it would be obvious. I wonder what the top 1M would look like.
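
For spot-checking a single site, something like this works with Python's standard ssl module (note the sibling comment's caveat: you only learn what the front-end proxy negotiates with you, not what happens behind it):

    import socket
    import ssl

    def negotiated_tls_version(host):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()   # e.g. 'TLSv1.3'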


This is cool, and I'm glad to see that A/B testing confirms that TLS 1.3 is actually faster at startup in the real world. That said, I find most annoying Netflix perf problems are in the middle of a stream, not the start. I wonder how much they're getting hit by TCP congestion control troubles, and if QUIC/HTTP3 might help.


If TCP congestion control is an issue for Netflix, it would be addressable on the server side, and I'm guessing they would address it in the FreeBSD kernel. Looking through the congestion control directory, there's not a lot of changes coming from Netflix; although, the TCP_STATS one [1] looks interesting, and some of the others indicate they're probably doing some testing (bug fixes and code cleanup).

Switching from TCP to TCP over UDP in order to run congestion control (and other things, such as segmentation) in userspace instead of kernel space makes more sense if you don't control the kernel; but Netflix controls the kernel on the content appliances. Netflix has also put a lot of effort into moving more of the processing into the kernel to avoid copying data and context switches between the kernel and userspace; moving congestion control into userspace would most likely have a large negative impact on their serving bandwidth per appliance.

[1] https://reviews.freebsd.org/D20655


The elephant in the room is YouTube, which supported only TLS 1.0 when I last checked. My OpenSSL is configured to fail connections below 1.3, but evidently Firefox bypasses that with its own TLS.

If they support TLS at all, why not 1.3? Do they have some dodgy hack that avoids a performance cost present in 1.3 but not in 1.0? Or is it just sloth?


That's not the case now. I see a TLSv1.3 connection to the googlevideo.com server using Firefox, and a QUIC connection in Chrome.


[flagged]


"Don't be snarky."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


How's it not 4K?

You need to use Windows 10 Creators Update or newer and the Edge browser (most likely the old one, not the newer Chromium-based one). You need an Intel 7th-generation CPU (i3, i5, or i7) or an NVIDIA GPU.[1][2]

[1]: https://help.netflix.com/en/node/23931

[2]: https://nvidia.custhelp.com/app/answers/detail/a_id/4583/~/4...


Despite having all of those requirements, I still can't get 4K streams on my PC. All I've managed to find are rumours from unofficial sources that there's some issue with HDCP 2.2 when using DisplayPort with certain monitors. Either way it's extremely convoluted. In fact it's vastly easier just to pirate the 4K version. I don't feel bad about doing this because I already pay for Netflix.


> How's it not 4K?

Many people report Netflix's 4k is very heavily compressed compared to 4k blu-ray, making it high quality in name only.

I wouldn't know, because Netflix won't serve 4k content to any of my devices.


> Many people report Netflix's 4k is very heavily compressed compared to 4k blu-ray, making it high quality in name only.

This is quite true, and it is to be expected from a streaming service that wants to deliver 4K video over most Internet connections. They stream 4K at around 15 Mbps to my LG TV; meanwhile a blu-ray's video stream would typically go above 50 Mbps.

There is definitely also a cost component to take into account when you have over 160M paying accounts to deliver content to.


Every time I read a requirements list like this I remind myself that the pirated version works without hassle.


For those not on Windows and without a very specific hardware setup, 4K and even 1080p is unattainable. I understand the annoyance the person you replied to feels.


> the Edge browser (most likely the old one and not the newer Chromium based).

Chromium-Edge also supports 4k (and higher-bitrate 1080p) streaming because Microsoft added the same DRM used in normal Edge to Chromium-Edge.


I was under the impression you actually can via exactly one browser: Microsoft Edge.


Netflix's UWP client also supports 4k streaming, alongside 5.1 audio at a slightly-higher bitrate (128kbps IIRC?), but it has the same HDCP and Intel/NVIDIA requirements as far as I know.


I am curious why I would care about what they do with their streams, as long as I get adequate playback quality.



