
Scalable Video Technology for AV1 Encoder - andruby
https://github.com/OpenVisualCloud/SVT-AV1
======
mmcclure
Digging in a little, based on similarities in the readme and the mailing list
being the same, it appears to be related to Intel[1]. Looks to be an extension
of Intel's Visual Cloud Computing efforts[2].

Edit: Feeling dumb, but confirmed, Intel is also in the license[3] :)

[1]: [https://github.com/intel/SVT-HEVC](https://github.com/intel/SVT-HEVC)

[2]: [https://www.intel.com/content/www/us/en/cloud-computing/visu...](https://www.intel.com/content/www/us/en/cloud-computing/visual-cloud.html)

[3]: [https://github.com/OpenVisualCloud/SVT-AV1/blob/master/LICEN...](https://github.com/OpenVisualCloud/SVT-AV1/blob/master/LICENSE.md)

------
dragontamer
[https://www.phoronix.com/scan.php?page=news_item&px=Intel-Op...](https://www.phoronix.com/scan.php?page=news_item&px=Intel-Open-Source-SVT-AV1)

This is an Intel project, showing off their work on AV1 encoding.

------
BlackLotus89
I love it when reposts get traction
[https://news.ycombinator.com/item?id=19072647](https://news.ycombinator.com/item?id=19072647)

Anyway, can't wait to test this. Right now my library is still mainly h264.
I wanted to get everything to vp9 a while back, but it was too slow; then I tried
hevc, which was faster, but not really satisfactory either. Hope this will
get down to vp9 encoding speed so that it's at least feasible...

~~~
andruby
I was disappointed that your submission hadn’t caught traction, so I reposted
it with “Intel releases ...” in the title, because for me that was the
newsworthy bit. I was interested in what the community had to say about this
Intel code being optimized for Intel CPUs.

It seems the mods changed the title again though, and reading the comments
people seem to be surprised that this is Intel’s.

~~~
BlackLotus89
Yeah, it's sad that this happens from time to time, but man, I'm glad that
people are actually talking about it now, because I'm psyched :)

I was waiting for a good encoder to show up and this seems like a step in the
right direction

------
svnpenn
those are some steep requirements - 48 GB RAM?

~~~
nothanksmydude
For 112 cores, that's only about 0.43 gigs per core

~~~
CyberDildonics
That is still way too much. A 4k 8-bit RGB frame is about 25 MB, and many
frames could be operated on at once, but I doubt the equivalent of around 2000
uncompressed 4k frames (fewer with 10-bit color) need to be in memory
all at once.
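The arithmetic behind those figures can be checked with a quick sketch (assuming a 3840x2160 "4k" frame at 3 bytes per pixel for 8-bit RGB; a real encoder would typically hold YUV 4:2:0 planes instead, which are smaller):

```python
# Back-of-envelope check of the frame-size figures above.
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 3  # 8-bit RGB, as in the comment above

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
frame_mb = frame_bytes / 1e6  # ~24.9 MB, matching "about 25 MB"

ram_bytes = 48e9  # the 48 GB requirement
frames_in_ram = ram_bytes / frame_bytes  # ~1900 uncompressed 4k frames

print(f"{frame_mb:.1f} MB per frame, ~{frames_in_ram:.0f} frames in 48 GB")
```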

~~~
brigade
Scaling video encode to 112 CPU cores is hard. I haven't looked too closely at
this encoder, but the normal method to scale that high is to encode entire
segments in parallel. (YouTube in particular supposedly encodes each segment
single-threaded, which is why libvpx has terrible scaling.) Which effectively
means encoding up to 112 independent 4k streams.

Each stream could need:

\- one source frame

\- additional source frames for reordering (3-7 is pretty normal)

\- additional source frames for rate control (x264's default is 40)

\- recon for the frame being encoded

\- reference frames (IIRC AV1 allows up to 8 to be stored)

Plus MVs, modes, maybe subpel caches, etc.

That's easily 50-60 frames per stream. Times maybe 112 streams for 6000
frames. Easily tunable of course, especially with even a little intra-segment
parallelism.
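The budget in the list above can be totaled up in a short sketch (hypothetical numbers chosen from the ranges given; frames assumed stored as 8-bit YUV 4:2:0 at 1.5 bytes per pixel rather than the RGB figure used earlier in the thread):

```python
# Rough per-stream memory budget for segment-parallel encoding,
# following the item list above. All counts are illustrative.
WIDTH, HEIGHT = 3840, 2160
frame_bytes = int(WIDTH * HEIGHT * 1.5)  # 8-bit YUV 4:2:0, ~12.4 MB

frames_per_stream = (
    1      # source frame currently being encoded
    + 7    # reorder buffer (3-7 typical; upper end used here)
    + 40   # rate-control lookahead (x264's default)
    + 1    # recon of the frame being encoded
    + 8    # AV1 reference frame slots
)  # = 57, inside the "50-60 frames per stream" range

streams = 112  # one segment per core
total_frames = frames_per_stream * streams  # ~6400, near the "6000" estimate
total_gb = total_frames * frame_bytes / 1e9

print(f"{total_frames} frames, ~{total_gb:.0f} GB of raw frame buffers")
```

This ignores MVs, mode buffers, and subpel caches, so it is a lower bound on what full segment parallelism would cost.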

~~~
CyberDildonics
I understand how an encoder could eat up so much memory and justify it in some
way, but I can't buy that it's a necessity or even acceptable in the long run
(maybe this is stated to be in the prototype stage).

From what I've seen, AV1 breaks frames/segments up into a partition tree and
brute-forces the leaves to find the transformation that looks best with the
smallest size. An oversimplification, obviously, but with everything that
encoders are doing, I still think it is naive to design them with such a
simplistic view of concurrency that they have to be treated as a hundred small
files for a hundred CPU cores.

