The memory access pattern they wanted is technically safe, but can't be expressed directly in safe Rust. That wasn't a blocker, though, and the Rust language didn't need to be made more complex to handle such a case. Instead, the "missing feature" could be added with a bit of `unsafe`.
The big thing here is that instead of implementing the actual encoding with unsafe code, and spreading the risk of unsafety all over the complex parts of the codebase, all of the dangerous parts were contained in a minimal, easy-to-verify abstraction layer. All the other code on top of that remained safe Rust, with the same safety guarantees.
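To make the pattern concrete, here is a minimal sketch (essentially a hand-rolled `split_at_mut`, not rav1e's actual tiling code) of how a single small `unsafe` block can hand out non-overlapping mutable views while everything built on top stays in safe Rust:

```rust
use std::slice;

/// Split a frame buffer into two non-overlapping mutable tile views.
/// The `unsafe` block is confined to this one function; callers only see a safe API.
pub fn split_tiles_mut(frame: &mut [u8], mid: usize) -> (&mut [u8], &mut [u8]) {
    assert!(mid <= frame.len(), "split point out of bounds");
    let ptr = frame.as_mut_ptr();
    let len = frame.len();
    // SAFETY: the ranges [0, mid) and [mid, len) never overlap and both stay
    // within the original allocation, so the two `&mut` slices cannot alias.
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
```

A reviewer only needs to audit this one function; the rest of the encoder can use the returned slices without ever touching `unsafe`.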
The negative spin would be: how to spend a lot of effort solving a problem you would never have had if you hadn't used Rust.
Yes, it makes for great blog posts, but it doesn't necessarily make his programs better.
I'm of two minds here. On the one hand, the safety is nice, and it might or might not find a few bugs in advance.
But in a real project, do you really have time to go through all this effort every time you need to do something legitimate that the Rust designers didn't quite consider?
If he had written it in C, he might have had enough time to work on performance and actually get a performance gain.
I still haven't made up my mind whether the trade-off is worth it.
I can't totally disagree; I sometimes had this feeling during the implementation.
Unlike in other languages, you can't always choose your trade-offs: memory safety is in practice almost non-negotiable in Rust (you can use unsafe code locally, but you won't implement the whole application in unsafe), so everything else has to adapt. For example, in another language, I probably would have kept slice+stride to avoid many refactors.
In the end, I am quite happy with the tiling structures API, but a lot of work (and [boilerplate](https://github.com/xiph/rav1e/tree/f1c43dbdc52016f67ecf33383...)) was necessary to implement them.
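For context, the slice+stride representation mentioned above looks roughly like this (illustrative names, not rav1e's actual API): a plane is a flat buffer plus a row stride, and the pixel at (x, y) lives at `y * stride + x`:

```rust
/// Illustrative sketch of a slice+stride view over a plane (hypothetical names).
pub struct PlaneSlice<'a> {
    data: &'a [u8],
    stride: usize,
}

impl<'a> PlaneSlice<'a> {
    /// Read the pixel at column `x`, row `y`.
    pub fn pixel(&self, x: usize, y: usize) -> u8 {
        self.data[y * self.stride + x]
    }
}
```

This works fine for shared, read-only access, but handing several tiles simultaneous `&mut` views into the same underlying slice is exactly what the borrow checker refuses, hence the dedicated tiling structures.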
That should be rare, because if your case is common, someone has already "safetified" it (put it behind a safe interface) in a crate that you can pull in and use.
Secondly, the time you spend "safetifying" your unsafe code is just a trade-off against the time you'd otherwise spend debugging safety issues when that unsafe code gets used later. Heck, Rust was born partly to let people declare how their "unsafe" code can be used in order to parallelize, because keeping all the unsafe contracts in your head (or even in documentation) is not scalable.
The proof of this pudding is whether later developments are sped up or slowed by this work.
The progress in speed and quality that rav1e makes versus AV1-SVT (which explicitly uses C to attract a wider developer audience) might be interesting to compare, though of course the level of resources that Intel/Netflix/Mozilla and the open source community put into each will be a complicating factor.
No, you don't have that problem in Rust either. Rust has C-like raw pointers, so if you're OK with them, you can just use them as if you were writing a C program.
The effort here was not forced by Rust; it was the author's choice, to get a guarantee that the code is safe. And I'd say it's relatively low effort given that it's reusable and will stay safe even after code changes. In C you'd instead do a meticulous analysis, debug, and add a comment "// careful, don't break this!"
This would not be practical. Raw pointers in Rust are (probably on purpose) far less convenient to use: no `+` or `-` operators, and no indexing an "array" with `[]`.
Moreover, you would lose all the benefits provided by slices (iterators, etc.). And locally converting raw pointers to references just to call these methods could lead to undefined behavior (aliasing of mutable references is forbidden even in unsafe code, see https://stackoverflow.com/questions/54633474/is-aliasing-of-...).
IMO, using C-like pointers throughout a whole Rust application would make no sense (it would be worse than just writing a C application).
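For a taste of what that would look like, here is a small illustrative sketch of walking a row through a raw pointer: every access needs `.add()` and an `unsafe` dereference, with none of the slice conveniences:

```rust
/// Sum one row of pixels through a raw pointer (illustrative only).
/// No `ptr + i`, no `ptr[i]`: each access goes through `.add()` and `unsafe`.
fn sum_row(ptr: *const u8, len: usize) -> u32 {
    let mut sum = 0u32;
    for i in 0..len {
        // SAFETY: the caller must guarantee that `ptr..ptr + len` is valid and readable.
        sum += unsafe { *ptr.add(i) } as u32;
    }
    sum
}

fn main() {
    let row = [1u8, 2, 3, 4];
    assert_eq!(sum_row(row.as_ptr(), row.len()), 10);
}
```

With a slice, the same thing is a bounds-checked one-liner using `iter().map(...).sum()`, and no `unsafe` at all.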
Here's a picture of a development board. This is half of a two-board sandwich that supported ten video processors. The CL4000 was the first member of the family and was used for the DirecTV rollout. It was not capable of MPEG-2, and for a few months, DirecTV was actually MPEG-1.
Otherwise, profiling would obviously need to be done on the sequential parts to figure out which of them are amenable to parallelisation, and whether they take up enough of the total time to be worth the effort.
I just updated the article after similar comments on reddit: https://blog.rom1v.com/2019/04/implementing-tile-encoding-in...
The fun part was when I dropped the error minimization in the partition choice function and set it to choose them randomly.
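To make that experiment concrete, here is a toy sketch (hypothetical names and types, not rav1e's actual partition code) of the difference between keeping the lowest-cost candidate and picking one arbitrarily:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Partition { None, Horz, Vert, Split }

/// Normal behaviour: keep the candidate partition with the lowest cost
/// (distortion plus rate in a real encoder).
fn choose_best(candidates: &[(Partition, f64)]) -> Partition {
    candidates
        .iter()
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|&(p, _)| p)
        .unwrap()
}

/// The "fun" variant: ignore the costs entirely and pick an arbitrary candidate.
fn choose_arbitrary(candidates: &[(Partition, f64)], seed: usize) -> Partition {
    candidates[seed % candidates.len()].0
}

fn main() {
    let candidates = [(Partition::None, 3.2), (Partition::Split, 1.1), (Partition::Horz, 2.5)];
    assert_eq!(choose_best(&candidates), Partition::Split);
    let _whatever = choose_arbitrary(&candidates, 42);
}
```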