
Rav1e: An experimental AV1 video encoder, designed to be fast and safe - adamnemecek
https://github.com/xiph/rav1e
======
kristofferR
What does "safest" mean in this context?

~~~
jabl
Presumably that it's implemented in Rust, and thus does not suffer from many
of the usual bugs that C code bases suffer from?

There seems to be some asm code as well, which obviously does not enjoy the
safety advantages of Rust.

~~~
arez
I always wondered whether this really checks out in practice: is Rust code
actually safer, or did the bugs just shift into different kinds? Has anyone
written about that already?

~~~
masklinn
It's probably not quite equivalent, but I believe Federico et al. found and
fixed a number of issues when they converted librsvg to Rust. You may want to
check the archives of their blogs (IIRC it's split between Federico's own and
the librsvg one) for specifics.

But "safe" Rust at least intrinsically protects against use-after-free, double
free, dangling pointers, null-pointer dereferences, out-of-bounds accesses, …
You may still have logic bugs of course (though the richer type system's
expressivity also allows better static encoding of application & domain
logic), but these baseline memory-safety issues can only arise in specific,
tagged `unsafe` blocks rather than throughout the application.
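
To make that concrete, here's a minimal Rust sketch (not from rav1e; the names are made up for illustration) of what "safe" means in practice: an out-of-bounds access is a checked error rather than undefined behavior, and a use-after-move is rejected at compile time:

```rust
fn main() {
    let v = vec![1u8, 2, 3];
    // Checked access: indexing past the end yields None, never undefined behavior.
    assert_eq!(v.get(10), None);
    assert_eq!(v.get(1), Some(&2));

    let owned = String::from("frame");
    let moved = owned; // ownership transferred here
    // println!("{}", owned); // would not compile: use of moved value `owned`
    assert_eq!(moved, "frame");
}
```

The use-after-free class of bugs is ruled out before the program ever runs, which is the part C and C++ compilers can't enforce for you.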

~~~
brian-armstrong
All of the pointer issues you mentioned have already been solved in C++ by
unique_ptr.

~~~
pjmlp
Good, now try to get developers to actually use it in their own code and in
all the third-party libraries they link to.

Ah, and not to pass it around by address or reference instead of actually
moving ownership.

C++17 is a great improvement, but for it to work out in this context,
developers need to actually write C++ instead of "C with a C++ compiler".

~~~
brian-armstrong
This is a silly argument. When evaluating whether to use a tool, you should
consider how /you/ would use the tool, not how someone else does.

~~~
pjmlp
Yeah, it kind of works out in the ideal world where one works alone, writing
100% of the source code.

------
skolemtotem
Will AV1, or even VP9 for that matter, ever be suitable for realtime encoding,
or is that just not their target market?

~~~
matt4077
Yes, of course. Anything else would be DOA.

On a general note: an extremely inaccurate narrative regarding AV1 and speed
seems to be taking hold. I can't understand why it isn't better understood
that a reference implementation is about correctness only, completely ignoring
performance considerations. Not in the usual "we'll now try to make it faster"
sense, but as in "this is never meant to be used in production, and its
performance is in no way indicative of the performance optimised encoders will
see".

As but one example: media encoding is pretty close to being "embarrassingly
parallel" in principle, making the first three orders of magnitude easy wins
for a straightforward GPU implementation.

~~~
Jasper_
> As but one example: media encoding is pretty close to being "embarrassingly
> parallel" in principle

Which part? 90% of what you're doing is context or inter-frame dependent.
Video encoders that live on graphics cards today use dedicated ASIC hardware.

~~~
clouddrover
You can divide the video into chunks and encode the chunks in parallel. This
is what Netflix does:

[https://medium.com/netflix-techblog/high-quality-video-encoding-at-scale-d159db052746](https://medium.com/netflix-techblog/high-quality-video-encoding-at-scale-d159db052746)

[https://medium.com/netflix-techblog/dynamic-optimizer-a-perceptual-video-encoding-optimization-framework-e19f1e3a277f](https://medium.com/netflix-techblog/dynamic-optimizer-a-perceptual-video-encoding-optimization-framework-e19f1e3a277f)

Works well when you're doing video at the scale of Netflix, but not
necessarily much help to the individual user who just wants to encode a video.
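
As an illustrative sketch of the chunked approach (the `encode_chunk` function and the frame data here are hypothetical stand-ins, not any real encoder API): because each chunk is encoded independently, the threads need no shared state at all:

```rust
use std::thread;

// Hypothetical per-chunk encoder: any function that compresses a slice of
// frames independently of all other chunks would fit this shape.
fn encode_chunk(frames: &[u32]) -> Vec<u8> {
    // Stand-in for real encoding work: one output byte per input frame.
    frames.iter().map(|f| (f % 251) as u8).collect()
}

fn main() {
    // Pretend these are 120 video frames, split into 4 chunks of 30.
    let frames: Vec<u32> = (0..120).collect();
    let chunks: Vec<Vec<u32>> = frames.chunks(30).map(|c| c.to_vec()).collect();

    // One thread per chunk; chunks are self-contained, so this parallelises
    // trivially (this is the "embarrassingly parallel" part).
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|c| thread::spawn(move || encode_chunk(&c)))
        .collect();

    let encoded: Vec<Vec<u8>> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(encoded.len(), 4);
    assert_eq!(encoded.iter().map(|c| c.len()).sum::<usize>(), 120);
}
```

The trade-off is that chunk boundaries forfeit inter-frame prediction across them, which is why this wins at Netflix scale but does little for a single short clip.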

~~~
Ace17
> You can divide the video into chunks and encode the chunks in parallel.

What about live encoding?

~~~
clouddrover
You can split the encoding across 32 cores:

[https://bitmovin.com/constantly-evolving-video-landscape-display-ibc-2017/](https://bitmovin.com/constantly-evolving-video-landscape-display-ibc-2017/)

[https://bitmovin.com/bitmovin-supports-av1-encoding-vod-live-joins-alliance-open-media/](https://bitmovin.com/bitmovin-supports-av1-encoding-vod-live-joins-alliance-open-media/)

------
zakk
> ~5 fps encoding @ 480p

How does this compare with the reference encoder?

~~~
masklinn
You'd need to run them on the same machine to get a proper comparison, but
[https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=5601](https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=5601)
has some runs. One of the users downthread ("entac") provides both libaom and
libx264 numbers: 63 fps for libx264 and 0.0924 fps for libaom (r9028).

Also, this currently delegates some work to libaom.

~~~
derf_
_> Also, this currently delegates some work to libaom._

Currently just for the transforms and to initialize the probabilities for the
entropy coder.

------
sargun
Just curious, what's the memory footprint of the encoder in real life?

Do different video encoders, for the same codec and input, produce different
outputs, or is the algorithm specified in a way that every encoder produces
identical results for a given input, no matter what?

~~~
wolf550e
For almost all compression algorithms (both lossless and lossy), only the
decompression is specified. A compressor can do whatever it wants as long as
it produces a bitstream that a compliant decompressor can decode.

For example, you can make a video encoder that produces a compliant video
stream in which every frame is a keyframe and every macroblock is
independently, fully encoded, thus reducing AV1 (or H.265, etc.) to MJPEG. As
long as the result is decodable by a compliant decoder, your compressor is
compliant. It might even be somewhat useful (e.g. when the output needs to be
zero latency, or is intended to be edited).
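
A toy illustration of "only the decoder is specified" (a made-up run-length format here, nothing to do with AV1): two very different compressors are both compliant, because the single specified decoder handles either stream:

```rust
// Toy format: the *decoder* is the spec -- a stream of (count, byte) pairs.
fn decode(stream: &[(u8, u8)]) -> Vec<u8> {
    stream
        .iter()
        .flat_map(|&(n, b)| std::iter::repeat(b).take(n as usize))
        .collect()
}

// Compressor A: trivial, one pair per byte -- the "every frame is a keyframe"
// strategy. Wasteful, but perfectly compliant.
fn encode_naive(data: &[u8]) -> Vec<(u8, u8)> {
    data.iter().map(|&b| (1, b)).collect()
}

// Compressor B: actual run-length coding of repeated bytes.
fn encode_rle(data: &[u8]) -> Vec<(u8, u8)> {
    let mut out: Vec<(u8, u8)> = Vec::new();
    for &b in data {
        match out.last_mut() {
            Some((n, last)) if *last == b && *n < u8::MAX => *n += 1,
            _ => out.push((1, b)),
        }
    }
    out
}

fn main() {
    let data = b"aaaabbbcc";
    let a = encode_naive(data);
    let b = encode_rle(data);
    assert_ne!(a, b);                      // different bitstreams...
    assert_eq!(decode(&a), data.to_vec()); // ...both decode to the input,
    assert_eq!(decode(&b), data.to_vec()); // so both compressors are compliant.
}
```

Swap in smarter encoders over the years and old decoders keep working, which is exactly how mature formats keep improving without a spec change.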

~~~
tzahola
And this is how people keep making substantial improvements to ancient formats
like JPEG or MP3.

E.g. Guetzli from Google:
[https://github.com/google/guetzli/blob/master/README.md](https://github.com/google/guetzli/blob/master/README.md)

