Algorithms are not simply math, in the same way mechanical engineering is not simply math.
The patents are the result of engineering to create very precise and complicated tools.
And if it were simply math, then there would be no way to work around those patents, which plenty of open source codecs have done in various directions.
Unfortunately, most open source codecs are not as good at compression, mostly since the patented ones have a massively bigger pool of engineering behind them from dozens of companies banding together their work to develop best of breed solutions.
And eventually the patents expire, and the world gets these codecs free to use.
> And if it were simply math, then there would be no way to work around those patents, which plenty of open source codecs have done in various directions.
What makes you say that? Math usually leaves many ways of calculating the same thing.
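A trivial toy illustration of the point (my own, nothing to do with any actual codec): two structurally different computations that provably produce the same result. If one route were somehow encumbered, the other still gets you the identical answer.

    # Two mathematically different routes to the same value x**n.
    def pow_naive(x, n):
        # Repeated multiplication: n multiplies.
        result = 1
        for _ in range(n):
            result *= x
        return result

    def pow_fast(x, n):
        # Square-and-multiply: about log2(n) squarings instead.
        result = 1
        while n > 0:
            if n & 1:
                result *= x
            x *= x
            n >>= 1
        return result

    assert pow_naive(3, 13) == pow_fast(3, 13) == 1594323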
> dozens of companies banding together
You could probably get a bigger group if there wasn't a patent vs. no patent conflict.
> And eventually the patents expire, and the world gets these codecs free to use.
How many of these techniques wouldn't have been reinvented a dozen times over in less time, if the original patenter never existed?
>How many of these techniques wouldn't have been reinvented a dozen times over in less time, if the original patenter never existed?
In the exact same details? None. There is an extreme number of parameters and details in each of these codecs.
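To make "extreme number of parameters" concrete, here is a deliberately toy sketch (every knob and count below is invented, loosely flavored like H.264/H.265-era designs, not taken from any spec):

    # Hypothetical per-block knobs a toy codec might expose; real specs
    # have far more of them, and the choices interact non-trivially.
    block_sizes     = [4, 8, 16, 32, 64]      # invented
    transforms      = ["DCT", "DST", "skip"]  # invented
    intra_modes     = list(range(35))         # invented count
    quantizer_steps = list(range(52))         # invented count
    entropy_coders  = ["arith", "vlc"]        # invented

    combos = (len(block_sizes) * len(transforms) * len(intra_modes)
              * len(quantizer_steps) * len(entropy_coders))
    print(combos)  # 54600 combinations per block, in even this toy model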
How many times have you or a small band of people invented a world class video compression codec? They take a huge number of people and a lot of funding to create. That funding, in many cases, comes from places like Fraunhofer, which has invented many such codecs before anyone else, and they do so because they know they will get a revenue stream to pay for the work.
If these things were easy to make, there would be constant new ones, each improving the last one. This is not the case - they take years and the effort of a lot of people to make new ones that are better enough to slowly replace old ones.
A simple way to see it - if these were so easy that people would invent a dozen if only patents didn't stop them, then people would invent them anyway and publish them for fun. I am unaware of ANY world class video codec that some hacker invented in their basement, and the only ones close to best of breed were funded to the tune of the prices above by huge companies that want to use their leverage to move markets, for their own gain.
These things are in no way trivial to create.
> You could probably get a bigger group if there wasn't a patent vs. no patent conflict.
Removing a funding channel means there would be less, not more, incentive to pay tens of millions to billions to develop one. If you pay for the R&D and anyone else can simply take the work without paying for it, then fewer companies will work on it.
> In the exact same details? None. There is an extreme number of parameters and details in each of these codecs.
The patent doesn't cover the extreme number of parameters and details, or someone would just change a couple and have a nice patent-free codec. Far more general techniques get snapped up by a single owner.
> If these things were easy to make, there would be constant new ones, each improving the last one. This is not the case - they take years and the effort of a lot of people to make new ones that are better enough to slowly replace old ones.
That's because it takes so long to get implemented, and it's a big cost to re-encode.
If it was just a matter of adding support to a couple programs and sending out an update, you'd see incremental improvements all the time.
We're largely still stuck on jpg and png, for crying out loud, and those use thoroughly obsolete methods.
> Removing a funding channel means there would be less, not more, incentive to pay tens of millions to billions to develop one. If you pay for the R&D and anyone else can simply take the work without paying for it, then fewer companies will work on it.
There would be less total incentive, but there's still the enormous incentive from everyone working with video to get better compression/quality, and streamlining cooperation would be a big deal.
If these things were as trivial as you think, then why doesn't open source simply patent all the good new ideas now?
Even if these easy-to-create ideas you think exist required some previous work, at least these new, simple patents would free things up eventually.
But this doesn't happen. Ask yourself why.
Heck, spend some time and try to make a better method yourself, and patent it. Then you can hold hostage the for-fee patents that need your brilliant idea, and change the playing field.
Or maybe you're mistaken on how hard and non-trivial the needed components are?
And, if they're so non-trivial that open source, or Google, or whoever, wants them to be free, then the ideas are worth patents.
>That's because it takes so long to get implemented,
You think it takes longer to implement a compression codec than to create it? That's not true at this scale (I've worked on several high end compression codecs and have designed and implemented probably a few dozen compression codecs in my career).
Creating one is by far the hardest part - trying tons and tons of ideas (coding them all up for testing), looking for new ideas, working on how various ideas interact, etc.
By the time the spec is solidified, probably a few hundred throwaway versions have been made. Implementation is trivial, then optimization takes some time.
But neither comes anywhere near the effort it takes to design one at this scale.
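For a flavor of what "trying tons of ideas" means, here is a minimal made-up experiment harness (toy signal, toy quantizer, zlib standing in for a real entropy coder - none of it from an actual codec): each candidate idea gets coded up, run over test data, and judged on rate vs. distortion.

    # Toy harness: judge two quantization "ideas" on rate vs. distortion.
    import zlib

    samples = [int(100 * (i % 17) / 17) for i in range(4096)]  # fake test signal

    def encode(signal, step):
        # Idea under test: uniform quantization with step size `step`,
        # then a generic entropy stage (zlib stands in for a real coder).
        quantized = bytes(s // step for s in signal)
        return zlib.compress(quantized, 9), step

    def decode(payload):
        data, step = payload
        return [q * step for q in zlib.decompress(data)]

    for step in (2, 8):  # two candidate parameter choices
        payload = encode(samples, step)
        recon = decode(payload)
        bits = 8 * len(payload[0])
        mse = sum((a - b) ** 2 for a, b in zip(samples, recon)) / len(samples)
        print(f"step={step}: {bits} bits, MSE={mse:.1f}")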
> There would be less total incentive, but there's still the enormous incentive from everyone working with video to get better compression/quality, and streamlining cooperation would be a big deal.
If that were true, you'd see all those same actors improving x264 compression, or MP3, or H.265, and other compression codecs, but they are not. Improvements, despite being something ANY person can make and push to open libs, are rare. Almost all modern compression schemes only define the decoding part, allowing improvements on the encoding side to be continually worked on (see the toy sketch at the end of this comment). The technical skill to do so is non-trivial, and a tiny, tiny fraction of a percent of "everyone working with video" has any idea how to improve it.
This is how x264 kept improving and improving over and over for H.264 compression - there was pretty much one person, Dark Shikari, who worked on it for years and years to improve it.
If you want to see how terribly complex this work is, read his (internet archive) blog on the work. It was fun at the time watching one person make x264 the premier H.264 compressor.
But pretty much no one besides him made significant improvements to it, despite it being something anyone can work on.
So no, the vast majority of people working with video would be useless at making anything better.
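Here's that toy sketch (a made-up run-length scheme, not any real standard) of why "the spec only defines decoding" matters: two encoders, one fixed decoder, and the smarter encoder wins without touching the spec.

    # The "spec" only defines decoding: expand (count, value) pairs.
    def decode(pairs):
        out = []
        for count, value in pairs:
            out.extend([value] * count)
        return out

    # Encoder v1: trivially valid output, terrible compression.
    def encode_v1(signal):
        return [(1, v) for v in signal]

    # Encoder v2: merges runs - better output, same decoder, no spec change.
    def encode_v2(signal):
        pairs = []
        for v in signal:
            if pairs and pairs[-1][1] == v:
                pairs[-1] = (pairs[-1][0] + 1, v)
            else:
                pairs.append((1, v))
        return pairs

    signal = [0, 0, 0, 7, 7, 0, 0, 0, 0]
    assert decode(encode_v1(signal)) == decode(encode_v2(signal)) == signal
    print(len(encode_v1(signal)), len(encode_v2(signal)))  # 9 pairs vs 3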
> If these things were as trivial as you think, then why doesn't open source simply patent all the good new ideas now?[...]But this doesn't happen. Ask yourself why.
Because they didn't find the idea first. I'm not sure what your argument is here?
Lots of patents do go to open source projects. Is "this" a world where open source finds everything first? If so I'm not sure how you got the idea that I was arguing that.
> And, if they're so non-trivial that open source, or Google, or whoever, wants them to be free, then the ideas are worth patents.
The fact that a patent could be worth money doesn't imply that it should be granted. If I could convince the government to grant me any patent I wanted, I would go patent every word in the dictionary.
> You think it takes longer to implement a compression codec than to create it?
Most codecs aren't made from scratch. If you got rid of the network effects, there definitely would be "constant new ones, each improving the last one".
In a world where implementations have to be nailed down and then take years and years to roll out, creating a codec takes a very long time.
If implementation was as easy as normal software, you'd see organizations that put out a new codec every couple months, featuring the latest ideas and tweaks.
> stuff about compression
I'm talking about the people working on h.265 and AV1. They're already working on codecs, so I know they can work on codecs. If a bunch of people are useless at working on compression, well that sucks but it's a sideline to my argument.
I don't know about "simply", but algorithms are mathematical. I know so because everyone who matters in the field of algorithms says so (e.g. Knuth's "every algorithm is as mathematical as anything could be.")
Knuth adds: "An algorithm is an abstract concept unrelated to physical laws of the universe." so "in the same way mechanical engineering is not simply math" doesn't hold, or have even much bearing on the truth of the statement "Algorithms are not simply math."
> And if it were simply math, then there would be no way to work around those patents
How does this follow? It seems to be an implication out of nowhere instead of an argument.
> most open source codecs are ... since
Another assertion presented as an implication. Following the same line of reasoning, one could state: "Open source is not as good at producing kernels, mostly since proprietary vendors have a massively bigger pool of engineering" - which by now has been shown to be false. In other words: a post hoc fallacy.
> Algorithms are not simply math, in the same way mechanical engineering is not simply math.
Algorithms are _literally_ mathematical objects. Cf. the Church-Turing thesis.
> And if it were simply math, then there would be no way to work around those patents, which plenty of open source codecs have done in various directions.
Show me an H.264 codec that works around the patents in the MPEG LA pool. Also, software patents are orthogonal to the question of "open source" (which is about copyright). That there is very often no way to work around the patents in an otherwise independent implementation is precisely the problem with software patents.
> Unfortunately, most open source codecs are not as good at compression, mostly since the patented ones have a massively bigger pool of engineering behind them from dozens of companies banding together their work to develop best of breed solutions.
Show me the numbers. Because the numbers I know (for example the chart in the blogpost itself) show that x265 is the best codec (or encoder, if it's not a decoder too) for H.265, itself being "open source" and better than proprietary alternatives.
> And eventually the patents expire, and the world gets these codecs free to use.
Even after patents expire, copyright lasts roughly another 90 years.
>Algorithms are _literally_ mathematical objects. Cf. the Church-Turing thesis.
The Church-Turing thesis is about computability, and has precisely zero to do with whether QuickSort or x264 is math.
Not a single machine on the planet is an actual Turing machine, so in practice the Church-Turing thesis applies to zero actual computers on Earth. They don't fit the definition. Throwing out fancy-sounding phrases does not help your argument.
As a simple example, theoretically the Halting problem is undecidable on a Turing machine. But on any finite machine the Halting problem is decidable. All physical machines are finite. Thus every physical machine is not a Turing Machine.
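If you want it concrete, here's a toy sketch (my own invention, not a real machine model): with finitely many possible states, a deterministic machine either halts or revisits a state, and a revisited state means it loops forever - so you can decide halting just by watching for repeats.

    # Decide halting for a machine with finite state: if the full machine
    # state ever repeats, execution is deterministic, so it loops forever.
    def halts(step, initial_state):
        seen = set()
        state = initial_state
        while state is not None:          # None means "halted"
            if state in seen:
                return False              # state repeated -> infinite loop
            seen.add(state)
            state = step(state)
        return True

    # Toy program over a 4-bit counter: halts when the counter hits 0.
    def step_halting(c):
        return None if c == 0 else (c - 1) % 16

    # Toy program that cycles forever through the 16 values.
    def step_looping(c):
        return (c + 1) % 16

    print(halts(step_halting, 9))   # True
    print(halts(step_looping, 9))   # False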
You might as well argue that reality is based on physics, and physics is based on math, so nothing can be patented. But reality, and especially law, works differently than that.
The fact remains that algorithms under certain conditions can be patented, precisely because this is engineering, because some algorithms require massive investment to develop, and because it's worthwhile to give companies an incentive to invest in the R&D.
>> And eventually the patents expire, and the world gets these codecs free to use.
> Even after patents expire, copyright lasts roughly another 90 years.
I didn't say the source code was suddenly free to use. I said the codec, as in the algorithm (which is what can be patented - code cannot), is free once the patent expires.
Conflating the two is not useful - they're orthogonal. I can write code for a non-patented algorithm, and you cannot just take it, because that code is automatically copyrighted.
But we were discussing patents.
>Show me the numbers....
You keep using "codec" to mean a particular implementation of a codec. It does not mean that. H.265 is a codec; x265 is an implementation of the codec. MP3 is a codec; LAME is an implementation. Codecs can be patented, but not copyrighted. Implementations can be copyrighted, but not patented, though they can use patented ideas.
So yes, commercial codecs generally are better than open source. H.264, H.265 were/are both better than any codec released around the same time. MP3 was the same. JPEG was the same.
So you're agreeing with me - H265 is the best codec.