In short, verification asks the question: "does it do what we designed it to do?" For instance, if you have a reference encoder of some kind that is a verbatim translation of a specification, and a high-performance encoder that is designed to squeeze every last drop out of the system, you can verify the high-performance version by comparing the output to the reference; if they're the same, then you know that the high-performance version, at least on some level, works as designed.
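As a toy sketch of that kind of differential testing (Python, entirely made up for illustration; `reference_encode` and `fast_encode` are a naive and an "optimized" run-length encoder, not anything from a real codec):

    import os
    from itertools import groupby

    def reference_encode(data: bytes) -> bytes:
        # The "spec" version: naive run-length encoding, written for clarity.
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

    def fast_encode(data: bytes) -> bytes:
        # The "optimized" version: same output format, different code path.
        out = bytearray()
        for value, group in groupby(data):
            run = len(list(group))
            while run > 255:
                out += bytes([255, value])
                run -= 255
            out += bytes([run, value])
        return bytes(out)

    # Verification: identical inputs must produce bit-exact identical outputs.
    for trial in range(1000):
        frame = os.urandom(64)
        assert fast_encode(frame) == reference_encode(frame), f"mismatch on trial {trial}"
    print("fast encoder matches the reference on all inputs tried")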
If verification compares the implementation against the design, then validation asks the question: "is the design what it needs to be?" The story that xiphmont told here is a validation issue: the implementation is perfectly correct, but the design isn't what he'd hoped for! Although verification often takes more time than validation, in my experience validation takes quite a lot more creativity.
Thanks to Monty for writing this up, and providing such an illustrative example of the difference! It's a useful distinction to have, especially when looking at errors that escaped testing and made it to production.
Breaking the continuum into pieces breaks your symmetries. For example, on a finite rectangular grid, a sphere is not rotationally invariant. Even if your maths is valid in the continuum, for the implementation you still need to prove that it is valid (i.e., that it fulfills all the desired properties) on a discrete grid.
I have zero experience with video compression; however, the author states that the issues get less severe when the video is rotated by 90°. I guess the compression algorithm is designed to work irrespective of orientation. Therefore, my humble guess is that they failed to carry the desired symmetry of the continuum over into a compression algorithm that acts on discrete spatial coordinates. Unfortunately, there's no way to recover all the symmetries of the continuum on a discrete grid. You always get some kind of artifacts, and you can only work to reduce them for the cases you are considering.
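A quick illustration of that last point (my own Python demo with numpy/scipy, nothing from the article): rotations by multiples of 90° merely permute the grid points and are exactly invertible, while any other angle forces interpolation and loses information.

    import numpy as np
    from scipy.ndimage import rotate  # arbitrary-angle rotation via interpolation

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))

    # Four 90-degree rotations just permute grid points and reconstruct
    # the original exactly: the grid keeps that symmetry.
    assert np.array_equal(np.rot90(img, k=4), img)

    # A 45-degree rotation lands grid points between grid points, so the
    # values must be interpolated; rotating back does not round-trip.
    back = rotate(rotate(img, 45, reshape=False), -45, reshape=False)
    print("max round-trip error:", np.abs(back - img).max())  # far from zero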
It's a really fun project to hack on!
It's hard to explain why development is not always easy, but I think this provides a pretty sobering example.
Is it realistic that there could be an industry-consensus next-generation video codec (like Opus for audio) with Daala as its basis?
Xiph took a preemptive approach and refuted all accusations and patent attacks on Opus as soon as they surfaced, which helped its acceptance in the IETF. They plan the same tactic for Daala, which hopefully will kill all the FUD. That's in contrast with VPx, where Google waited for lawsuits instead of defending it right away and refuting any false accusations.
However, IETF acceptance doesn't mean that the likes of Apple will actually start using it. But making it mandatory for the <video> tag would be a big victory, since those who refuse to support it would be under pressure to follow the standard.
It was CLOSED WONTFIX.
> #2: Working groups can't force a browser to implement something. Are all the browser vendors willing to implement it? That's what matters.
> #3 Microsoft doesn't believe HTML5 needs to specify a mandatory to implement codec. The market is quite capable of ensuring that popular formats are supported on the web. Opus did not exist when ISSUE-7 was closed. Who knows what formats might be more popular than this by the time HTML5 gets to Recommendation, or after that. It's also not clear that there will be widespread adoption of Opus for the purposes that <audio> is currently used.
This does not mean Microsoft is for or against supporting Opus with <audio>, just that the spec doesn't need to say anything on this.
That rationale is invalid, since in other instances the standard does enforce things (WebRTC, for example). And of course MS was brought up as the main antagonist. Some people apparently just hate it when the Web has interoperability.
Even if this new video codec beats the alternatives on several dimensions, it'll be a struggle to get adoption, for the familiar reasons of entrenched patent owners and business interests.
I can't really bring myself to blame corporations for following short-term private gain, but all the nerds who ignored licensing issues in favour of short-term technical superiority deserve some blame for this local minimum we're trapped in.
Why not? They should be blamed for it, especially when they sabotage interoperability for such gain. That's what irritates me most about these companies.
> the nerds who ignored licensing issues in favour of short term technical superiority deserve some blame for this local minimum we're trapped in.
That's why free codecs should be ahead of the competition by a big margin. Daala is not just next generation, it's next-next generation, which increases its chances of beating patent-encumbered competitors and actually gaining traction. No matter what backroom deals are proposed, if someone can save tons of money on licensing and get much higher quality, it's a no-brainer to do it.
Perhaps that sort of logic is rolled into the rate-distortion metrics of Daala? But if so, I suspect this bug would not be accounted for.
The better approach, I think, is some form of "temporal RDO", where you spend more bits on places that will be good predictors of future frames. We discussed the idea in a Theora update in 2010. x264 implemented it even before then (under the name "mbtree"). It is not yet implemented in Daala, as we do not have two-pass rate control or lookahead yet. It's possible to do one-pass, no-lookahead versions by estimating future prediction efficiency based on the past, but you want one with full lookahead to compare against, to make sure those estimates are doing the right thing.
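For the curious, a toy Python sketch of that mbtree-style propagation (my own simplification for illustration: it assumes each block is predicted from the co-located block of the previous frame, whereas real implementations follow motion vectors and distribute the credit accordingly):

    import numpy as np

    def propagate_qp_offsets(intra_cost, inter_cost, strength=2.0):
        # intra_cost[t, b]: estimated cost of coding block b of frame t on its own.
        # inter_cost[t, b]: estimated cost when predicted from frame t-1.
        num_frames = intra_cost.shape[0]
        propagated = np.zeros_like(intra_cost, dtype=float)

        # Walk the lookahead backwards: cost that frame t+1 avoids through
        # prediction is credited back to the blocks of frame t it predicts from.
        for t in range(num_frames - 2, -1, -1):
            # Fraction of each future block's information inherited from its
            # reference rather than coded as new (intra) bits.
            reuse = 1.0 - inter_cost[t + 1] / np.maximum(intra_cost[t + 1], 1e-9)
            reuse = np.clip(reuse, 0.0, 1.0)
            propagated[t] = (intra_cost[t + 1] + propagated[t + 1]) * reuse

        # The more future cost a block inherits, the lower its quantizer
        # (i.e. the more bits it gets now).
        return -strength * np.log2(1.0 + propagated / np.maximum(intra_cost, 1e-9))

    # A static, well-predicted region (block 0) earns a strong negative QP
    # offset; a region nothing predicts from (block 3) earns none.
    intra = np.full((5, 4), 100.0)
    inter = np.tile([10.0, 60.0, 100.0, 100.0], (5, 1))
    print(propagate_qp_offsets(intra, inter)[0])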
I am not sure temporal RDO would save us here, because the issues described only occur in areas of the frame that are being updated. It still might have in this case (these things are relative), but we could construct cases where it would fail to do so.