
Yes, encoding needs to be done only once.

Generally it takes 2-3x the original duration of a video to encode a source into 1080p, so I am not sure why they take a full day, unless they do each bitrate serially, which I think is not as hard to parallelize as it is to parallelize a single bitrate by chunking.
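A rough sketch of what I mean by parallelizing across bitrates (assuming ffmpeg/libx264 and a made-up ladder; not claiming this is how Netflix does it):

    # Sketch: one ffmpeg process per target bitrate, run in parallel.
    # The bitrate ladder and file names are hypothetical.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    BITRATES_KBPS = [235, 750, 1750, 3000, 5800]  # made-up ladder

    def encode(kbps):
        out = f"out_{kbps}k.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", "source.mov",
             "-c:v", "libx264", "-b:v", f"{kbps}k", out],
            check=True)
        return out

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(encode, BITRATES_KBPS)))

Chunking a single bitrate is the harder part: you also have to split on safe boundaries and stitch the chunks back together without artifacts.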

Yes, I believe serving is a lot harder, but serving is almost a solved problem since people have been dealing with it for a long time.




> I am not sure why they take a full day, unless they do each bitrate serially, which I think is not as hard to parallelize as it is to parallelize a single bitrate by chunking.

I think the talk I know this from is https://www.youtube.com/watch?v=tQrsz3BrfwU - they chunk not only for the encode but also for QC (and QC validation of the resulting transcoded asset).

If memory serves, the talk also discussed the long transcode time: their transcoder (EyeIO at the time, and I have not heard differently since) is optimised for efficient packing over encoding speed.


For x264 that is true, but HEVC, which is also mentioned, is much slower. For a 4K source, transcoding can take more than a second per frame. For a normal movie this quickly results in encoding times of more than a day.

Another problem is that you have to encode the movie for each codec profile times the number of different bitrates per profile. The article mentions four profiles (VC1, H.264/AVC Baseline, H.264/AVC Main and HEVC) and bitrates ranging from 100 kbps to 16 Mbps. Assuming 20 different bitrates per profile, you already get 4 * 20 = 80 encoded copies per source. But of course this can be addressed with parallelism.
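Back-of-the-envelope for both numbers, using the rough figures from above (one second per 4K HEVC frame, 20 bitrates per profile; both are assumptions, not measured data):

    # Rough arithmetic only; the inputs are the assumptions stated above.
    runtime_min = 120          # a typical feature-length movie
    fps = 24
    sec_per_frame = 1.0        # slow 4K HEVC encode

    frames = runtime_min * 60 * fps               # 172,800 frames
    print(frames * sec_per_frame / 3600)          # 48.0 hours for a single copy

    profiles = 4               # VC1, AVC Baseline, AVC Main, HEVC
    bitrates_per_profile = 20  # assumed
    print(profiles * bitrates_per_profile)        # 80 encoded copies per source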


Are there any codecs that can output multiple versions of an input at the same time? It seems like a lot of the encoding process (like motion estimation) is the same every time, so why redo it for every output instead of reusing it?


That would be interesting to know. A lot of transcoders can make multiple passes over the source, so being able to reuse the metadata generated in the analysis pass for subsequent passes at different output qualities might help speed up the process. I dunno, not my forte, just thinking out loud.
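The closest existing mechanism I can think of is classic two-pass rate control: run the analysis pass once, then point several second passes at the same stats file. Whether a shared first pass actually makes good decisions at every bitrate is exactly the open question (see the reply below); this is just a hypothetical sketch with ffmpeg/libx264:

    # Hypothetical: one analysis pass, multiple second passes reusing its stats.
    import subprocess

    SRC = "source.mov"
    LOG = "shared_stats"   # prefix for the x264 first-pass log file

    # Pass 1: analysis only, output discarded.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", "3000k",
         "-pass", "1", "-passlogfile", LOG, "-an", "-f", "null", "/dev/null"],
        check=True)

    # Pass 2 at several bitrates, all reading the same stats file.
    for kbps in (1500, 3000, 6000):
        subprocess.run(
            ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", f"{kbps}k",
             "-pass", "2", "-passlogfile", LOG, f"out_{kbps}k.mp4"],
            check=True)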


It's not worth it, because every single decision ends up depending on your output targets anyway.

(You can't afford accurate motion estimation at low bitrates because you can't fit the accurate info in your budget anyway. Except for when you can.)


Well, don't they get re-encoded once in a while? I am pretty sure the x264 encoder now is significantly better than the one from 3-4 years ago. Same goes for HEVC.


I doubt that they do. If a stream is so old that significantly better encoders have come to the market in the meantime, then probably very few people are watching it anymore.


I would be shocked if they didn't roll their catalog. Maybe not the whole thing every time, but pulling their sources and re-encoding the complete suite when a new bitrate/codec combination comes online seems like a sensible use of resources.


I'm pretty confident people stream all sorts of old content on Netflix. Why would the longtail not be a thing for Netflix?



