This burden has become far too great, if this is the cost necessary to achieve innovation.
Too many people think the carrot of the temporary monopoly was the point of the patent office.
In other words, we know that the community of ideas gets better with more sharing and building off of other people's ideas. As a society we decided to make laws to encourage this sharing. As a result, technological progress was immense.
Sadly, the current situation seems to strongly suggest that we may need to find better ways to encourage sharing (Open Source has been great for much of software).
Both patents and copyright are failed experiments. They weren't meant to 'benefit creators' or 'guarantee an income', and they cannot take that role in a healthy society.
"...the copyright section includes thirteen "shall provide" and just the one measly "shall endeavour." And if you add in the enforcement section you get another thirty eight "shall provide" and just a single "shall endeavour" buried in a footnote unrelated to the key points in the document.
So, for those of you playing along at home, the message being sent by the TPP is pretty damn clear: when it comes to ratcheting up copyright and setting the ground rules for enforcement everything is required and every country must take part. Yet, when it comes to protecting the rights of the public and making sure copyright is more balanced to take into account the public... well, then it's optional."
Copyright has failed in an altogether different manner. The trouble with copyright is what has been erected to enforce it: laws against circumventing DRM that encourage monopolization of copyright-adjacent markets, absurd statutory damages, internet censorship, and easily abused takedown schemes. And the term is far too long.
But if "copyright" is only the ability of an author to sue copyright infringers in court for actual damages, it's basically harmless. If you don't like proprietary software then you can excise it from your life by simply not using it, and actually doing that is continually becoming more practical as free software improves.
The trouble with software patents is that you can't do that. There is no option to build your own system because independent creation is not a defense. And that failure is inherent in the nature of what a patent is. You can't fix it without eliminating software patents entirely.
So nobody really needs to use other people's copyright works. Unlike patented ideas, you can always create your own instead. Of course, sometimes it's expensive and feels like a waste of effort reproducing what's already been done, but it's still not a restriction on doing your own thing the way patents are.
The way I see property ownership is that it's like a store of human labor. When you do work, you might get money from it, you might get a physical object from it (I made a table, now I own that table), or you might get a copyright work from it. All of these things can hold the value of the work you did and can be used to trade for other things. I don't think it's unreasonable to imagine that in a fair world, the people who did that work can keep the stored value they created as long as the market is willing to trade things for it. In the case of computer software, some value is lost whenever somebody copies your work, since you're not able to sell it to them anymore. Value is also lost if the demand falls (obsolete software), but that's the risk of investing your labor in something with an uncertain lifetime, just as the value of a table is lost if fashions change and people want steel tables instead of wood.
Some copyright work has value and that's why people want to copy it, so they can gain that value for themselves at the expense of the copyright holder. But wouldn't it be more fair to make your own, instead of siphoning value out of other people's work without compensating them?
On a single part of a software product, a company can enjoy trade-secret protection, copyright, and patent protection... and then they multiply that force many fold with burdensome, one-sided 'license agreements'.
This is possible because the bar for disclosure in patents has basically been put at zero. If the patent teaches the invention in question to a single hypothetical omniscient expert in the art, then it meets the disclosure bar. In many fields patents are obfuscated beyond belief and it is pointless to read them: you'll learn nothing.
Simultaneously, parties keep their source code secret, distributing only binaries, which are obfuscated forms of the source that effectively hide their design, and they use the force of the anti-circumvention provisions of copyright law to threaten people who would reverse engineer the software (perhaps to extract some information about the 99.9% of non-patented ideas in it). And yet they enjoy long-lasting copyright protection, the same as other works which are made transparent through their publication.
The patent system on its own is unbalanced, especially for software goods, which have much lower resource requirements for their exploration... but when coupled with the other schemes that were hardly envisioned to overlap, the result creates incredible costs, stifles innovation, and is generally bad for society.
We allow for (temporary) ownership of property because it's supposed to encourage various kinds of benefits to society. If it doesn't - and it's starting to become clear that, say, inequality in the US is becoming too high - then we need to change things.
I am guessing that a lot of us "see what they did there".
The others are laws that a nation creates as a result of its own specific circumstances. Patents are one such example. These exist solely to spur the sharing of ideas by encouraging inventors to document and share them in detail so that they can later be used by society once the patent expires. A temporary monopoly is granted on an idea to encourage this sharing. This way, ideas aren't lost to eternity simply because the inventor didn't have the means to fabricate them. This concept is purely a construct of western nations and culture. It's unlikely an isolated society would arrive at a similar construct naturally.
If it's so good for the industry, the patent holders could just choose to make it free. Everyone else could choose to :gasp: pay.
I actually agree with your point. I just want to make it clear what that means. When it comes to patents, to me a compression algorithm is the closest thing in software to a mechanical device. At that level you could patent equivalent ideas as hardware. Are we saying no more patents on mechanical inventions or hardware?
For a long time I've thought of the patent office letting crummy patents through as a separate issue from having patents at all. It also never made sense to separate out software patents from anything else.
I remember very clearly when Ogg Theora was being developed the difficulty they had in choosing a technique that would work and was not already patented. It's not like they were looking up algorithms in a big book and saying "I wonder if that one is patented". They were coming up with techniques independently and then having to search to see if it was patented.
At what point is something obvious to a person skilled in the art? What should be patentable? Should you be able to get a patent across a whole field of techniques because you managed to implement one example of that technique?
The overall approach might be obvious to someone skilled in the art, but the devil is in the details. If someone can patent the overall approach because they have implemented an example of that approach, then it shuts down everybody else. If you have a company that goes around buying up (or making strategic partnerships with companies that own) patents that cover all conceivable approaches, then they can completely lock down any new developments for a couple of decades.
This is the reality of codec development right now. Is this what we want? Is it good for the industry and society in general?
Imagine as a programmer being told, "No matter what idea you have, it is already patented. You are not allowed to program without paying someone a fee. If they decide not to sell to you, then you can't program at all". That's the world of a codec developer. It's something that I personally do not want.
Is it better to patent something and tell the whole world how to do it? Or is it better to just keep it secret? It seems like some refinements to when you can sue might be in order; specifically, I might outlaw suing if you don't market an implementation of your own. That radically changes the value of a lot of property, though.
As a programmer, I am firmly against software patents. They are a burden only. How many programmers troll patents thinking, "I'll look for useful techniques so that I can license them in my program"? Even if you wanted to, you couldn't because there are millions of them. And if you start looking, you may get sued for wilful infringement if you forget about something you saw and start using it.
That's aside from any arguments about the patentability of ideas rather than inventions. I would be 100% in favour of software patents if only the implementation were covered (i.e. the source code). If I used a different implementation for the same idea, then the patent wouldn't hold. Of course such a patent would be useless in a world where software is covered by copyright.
That's quite the false dichotomy.
Where did this 'patents are good because they make inventions public' idea suddenly come from?
I've seen it bandied around a lot lately from people who are uncomfortable with directly supporting patents.
No. It's not better. Don't be ridiculous.
> Or is it better to just keep it secret?
How do you imagine anyone will do that?
Imagine I come up with a new compression algorithm that achieves 1:4 compression on 80% of compressed files and normal 1:2 on the rest.
In a patent system I can:
1) Use it privately and not tell anyone. Keeping it secret.
2) Patent it and license it to other people. I risk being sued out of existence by existing patent holders and trolls.
In a non-patent system I can:
1) Use it privately and not tell anyone. Keeping it secret.
2) Share and sell it as a black box implementation. People will immediately reverse engineer the compression method.
How are privacy and secrets an issue here?
In both cases (1) is the best choice if you don't want your competitors to get access to your algorithm.
In the patent case it's easier for third parties to find the implementation details without doing any work themselves. It's also significantly more risky.
In the non-patent case, people have to actually work to reverse engineer the implementation, sure, but then they're free to use it. There's also no risk in selling and distributing the product.
So, let's see here, things which are better, since both paths lead to the algorithm being made freely available in the end:
1 - Do research, at the risk that you'll get sued into oblivion the moment you publish & sell. Even if you don't immediately get sued, you have zero temporary competitive advantage because you just told every competitor what you're doing.
2 - Do research and have temporary competitive advantage once you release it?
The only people who win in the patent way are the lawyers.
The 'secrets are bad' argument is a straw man; first you set up the straw man (but then we would never get to know the secret details!!?!!), then you punch it a few times (but sharing knowledge is good! How will the global body of knowledge grow if everything is just secrets??).
It's just daft.
So yes, "It seems like some refinements to when you can sue might be in order"; indeed; ie. never.
It's not about privacy, it's about the competitive advantage of inventing something better.
Why do insanely profitable companies get patents?
I've always felt that if we allow software patents at all (and I'm not sure even that is a good idea), they should be much more limited, say 5 years: long enough that commercial costs can be recouped and some profit made, but short enough that the greater society can still benefit.
Both the scale and scope of what is being approved regarding software patents in this country are ridiculous compared to the natural rate of change and innovation... 20 years ago the average computer would have a lot of trouble trying to display a 1080p video stream. Today just about everyone has something in their pocket that can handle this. We can't limit software expression and bind it for 20 years at a time, for ideas that take a fraction of that time for multiple people to come up with and implement.
I think it would be a big mistake to admit that yours is the same, because if this doesn't carry you've effectively said you are infringing.
Oh, that's right. The whole point is they don't want to.
His complaint against H.265 will likely be fixed, and then it too will catch on, just as H.264 did, despite these same types of complaints against it.
Because audio and movie pirates don't care what codecs they use, and VLC has supported pretty much everything you throw at it, patents be damned.
What do you think most early iPod adopters filled the disks with? Surely no legally bought music, lol.
>What do you think most early iPod adopters filled the disks with?
The first iPods shipped with decoders for several patented codecs, including mp3.
This supports my claim that the codecs were not prohibitively expensive: it was still possible to make gadgets that included royalty payments to the codec patent holders.
And as is often the case with patents, the outcome is more invention. All those workarounds are also innovation.
Either way, having another effort competing to make a great format is not a problem. Here's hoping it goes well!
One of the things that made Opus a success was the contributions of others. We certainly don't have a monopoly on good ideas. We'll take pieces of Daala and stick them in Thor and pieces of Thor and stick them in Daala, and figure out what works best. Some of that experimentation has already begun:
Because none of us have a financial incentive to get our patents into the standard, we're happy to pick whatever technology works best, as long as we end up with a great codec at the end. Hopefully NETVC can replicate the success of Opus this way.
You all have done a great job in the past and I'm always on the edge of my seat to see what incredible things you will do. Keep up the rockin' work!
Your articles, such as this one, http://people.xiph.org/~xiphmont/demo/daala/demo1.shtml, are the best codec introduction I could find online, but I still couldn't follow, say, the deblocking part.
There doesn't seem to be a book covering this area.
And an unrelated question: what will be the name of the merged codec? I hope it won't remain NetVC, as that name is awful.
Beyond that the design is rather conservative and different from Daala's "start over from scratch" methodology. It doesn't look like the two would be poised to merge directly a-la Opus at this point, but rather to be mutual test beds and borrow and exchange unencumbered ideas that work well.
 https://www.ietf.org/proceedings/93/slides/slides-93-netvc-5... (PDF)
So similarly, there are now two projects, Daala from Xiph and Thor from Cisco, which are both preliminary but being worked on in the same group, so hopefully the best parts can be taken from each to produce a new, better open codec.
It looks like the last patent on MP3 audio decoding expires next month.
You are right in that there are many other encumbered technologies that have patents expiring soon. MPEG-1 and MPEG-2 video, MP3 and AC3 audio, and several container formats are included. Notably, this is almost all of the technologies required to make a DVD.
MPEG-LA has been padding their patent portfolio by dumping in all these patents on little-used features. Until this year, they still had some good patents, such as the ones on motion estimation, which is a hard problem and is needed to make compression work. But those have now expired. What's left looks like it can be avoided as unnecessary for Internet use.
Because the interpolation operation is a full morph of a mesh (which GPUs can do easily), you can interpolate as much as you want. Ultra slow motion is possible. You can also up the output frame rate and eliminate strobing during pans.
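For intuition, here's a minimal sketch of the general idea in Python, assuming OpenCV and numpy are available. It is not Framefree's actual mesh morph: it approximates the same "warp partway between two frames" trick with a dense optical-flow field, and the Farneback parameters are just the usual example values.

```python
import cv2
import numpy as np

def interpolate(frame_a, frame_b, t):
    """Synthesize an in-between frame at time t in (0, 1) by warping each
    source frame part of the way along a dense motion field and cross-fading.
    A mesh-based morph works on the same principle, but moves mesh vertices
    instead of a per-pixel flow field."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp: sample frame_a a fraction t back along the flow, and
    # frame_b a fraction (1 - t) forward. This is a common approximation.
    map_a_x = (xs - t * flow[..., 0]).astype(np.float32)
    map_a_y = (ys - t * flow[..., 1]).astype(np.float32)
    map_b_x = (xs + (1 - t) * flow[..., 0]).astype(np.float32)
    map_b_y = (ys + (1 - t) * flow[..., 1]).astype(np.float32)
    warped_a = cv2.remap(frame_a, map_a_x, map_a_y, cv2.INTER_LINEAR)
    warped_b = cv2.remap(frame_b, map_b_x, map_b_y, cv2.INTER_LINEAR)
    return cv2.addWeighted(warped_a, 1 - t, warped_b, t, 0)

# e.g. nine intermediate frames for 10x slow motion between two frames:
# slow_mo = [interpolate(a, b, i / 10.0) for i in range(1, 10)]
```

Stepping t in arbitrarily small increments is what gives the "interpolate as much as you want" property; a real mesh morph just does the same thing with far better handling of occlusions and object boundaries.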
Kerner Optical was spun off as a separate company, then went bust. The technology was sold off, but nobody could figure out how to market it. The delamination phase turned out to be useful for 2D to 3D conversion, which was popular for a while. But Framefree as a compression system never went anywhere after Kerner tanked. Nobody is doing much with it at the moment, and it could probably be picked up cheaply. At one point, there was a browser plug-in for playback and an authoring program, but they're gone. I'm not sure who to contact; the "framefree.us" domain is dead, and the "framefree.com" domain is dead too. Here's its last readable version:  The remnants of the technology seem to be controlled by "Neotopy LLC" in San Francisco, which is Tom Randoph's operation.
 https://hopa.memberclicks.net/assets/documents/2007_FFV_Comp... (Open with OpenOffice Impress; it's a slide show.)
I think this is a great effort, and if you'll recall, Google went and attempted to do the same thing with VP8, but found that people could file patents faster than they could release code. I would certainly support a 'restraint of trade' argument, and a novelty argument which implies (although I know it's impossible to currently litigate this way) that if someone else (skilled in the art) could come up with the same answer (invention) given the requirements, then the idea isn't really novel; it is simply "how someone skilled in the art would do it." I've watched as the courts stayed away from that theory, probably because it could easily be abused.
 Conspiracy theory or not, the MPEG-LA guys kept popping up additional patent threats once the VP8 code was released.
Anyway, I'm not a lawyer, and none of this is legal advice or patent advice. Just my thoughts (or perhaps frustrations) on how hard it is to generate deliberately patent free technology. That difficulty suggests to me a way in which patent law could be improved.
But you don't need an expert for novelty. Either you can show prior art or you can't. I'll grant that there may be some edge cases where the prior art needs some nuanced interpretation from an expert witness.
Let's say someone asks you to make a mud pie and put bits of lavastone in it. You make your mud pie and then you patent a "system and method for creating a mud pie with lava stones."
Perhaps there is no prior art because nobody asked for a mud pie with lava stones, perhaps there is no prior art because others who made mud pies with lava stones didn't see anything useful about it. But someone, somewhere, filed a patent. And the patent office grants it.
The question I pose is how to come up with a defense that anyone skilled in the art of making a mud pie would make one with lava stones in just that way? And yes, I know all the legal arguments for why it doesn't work like that, so my point is: how do we fix the patent system so that utility patents aren't granted on methods, or combinations of methods, that would likely be independently arrived at by anyone skilled in the art?
How do we fix it so that Cisco, writing their patent free video codec in the open, doesn't get "scooped" by someone taking their project, projecting out a month or a year in advance of what it is going to need to work, and then throwing together a provisional that pre-dates the open source project getting there, thus depriving the people working on Cisco's efforts their ability to ship without hindrance?
 Really, just dirt and water.
This is begging the question (in the original sense of the phrase). The process is usually not somebody saying "make me a mudpie with lava stones". It usually starts with "how do I make a more attractive mudpie?" There are countless ways of doing so. You could use marbles, leaves, different mud, different levels of consistency... But maybe using lava stones gives you the most bang for the buck. So then you are really filing a patent on "method and system for increasing mudpie attractiveness".
The infamous Amazon one-click patent can similarly be viewed that way. The patent is not really solving the problem of "how do we enable purchases with one-click?" (the solution to which is blindingly obvious) but of "how do we get people to buy more things on our online store?" Now, the path from there to "one-click buying" may also be obvious, no doubt, but it's not as obvious as the path from "how do you build one-click?" to "here's how" simply because the solution space is so much bigger.
In your case, I think you argue that there is nothing special about lava stones, and that in fact dirt/mud contains many stones, probably including tiny lava stones, whose only real difference from the added lava stone bits is size.
So, adding stones is not very novel. There's also not much difference between a lava stone and a non-lava stone; if I can put in a non-lava stone, I can probably just as easily put in a lava stone. Is it not obvious that if I can put a quartz into a mud pie, I could also put a lava stone?
I guess the general strategy is to find the more general pattern and then show that the patent is just a specific instance of a larger, known pattern.
Software can now be called proprietary?
pro·pri·e·tar·y : of or relating to an owner or ownership.
So what is this guy saying? That now anything with a company behind it is proprietary? Linux has got LMI, so I guess that's proprietary, and Firefox has got Mozilla. LibreOffice... by this guy's twisted version of reality, what is not proprietary?
VP8 was published as an informational RFC under the IETF, but not as part of a standards track, see "Not All RFCs are Standards": https://tools.ietf.org/html/rfc1796
It also took a full year after Google bought the company behind VP8 to actually release the code. Someone from Firefox actually wrote an open letter to Google basically asking WTF was going on.
I don't have any first or even second hand knowledge of the current situation, but I suspect that Google has continued to ... not collaborate as much as everyone else would like.
(Caveat emptor: the above is based off of memory of events that took place a few years ago.)
Theora has also suffered somewhat from not being developed through an open process: On2 code-dropped one of the older proprietary formats that they were uninterested in supporting anymore, and the Xiph side (which previously had just done audio) picked it up, formally specified it, and radically improved the implementation (of both the encoder and decoder) while retaining compatibility.
Codecs thrive on network effects-- you might be happy to take some efficiency loss for improved licensing terms and patent assurance, but there are plenty of others who don't care (or at least won't care until the lawsuit arrives on their desk). It's not good enough to be very good in one dimension; to be wildly successful a codec needs to be great in many dimensions. A decade ago you could argue that Theora was in that space (and I did!), less so today.
... and wild success is itself the ultimate performance objective, not for the sake of the egos of the developers but because only through ubiquity do codecs stop being an issue that people trying to deliver great works have to worry about. 99.9% of the time an application developer is not thinking at all about what his TCP stack is doing, who made it, or what its settings are-- ultimately that kind of effortlessness needs to move up the stack in order for the world to move forward.
That's an odd choice of phrase; it's unfortunate that a press release chooses to disparage alternatives without explanation.
Here are some definitions of "proprietary" as used by members of the FOSS community when talking about standards:
>"Proprietary" as in "sole proprietor" is appropriate for a project with zero governance, launched by Google after some incubation closed-source, dominated by Googlers.
https://news.ycombinator.com/item?id=9395992 (pcwalton, Mozilla employee and Rust core developer)
>In a competitive multi-vendor ecosystem like the Web, public-facing protocols that are introduced and controlled by a single vendor are proprietary, regardless of whether you can look at the source code. NaCl and Pepper, for example, are proprietary, even though they have open-source implementations.
Yet when applied to Google's products, this is suddenly viewed in a different manner? Even if the maintainer does not accept patches, you can still fork it, so no freedom is lost. And it's ok for other people to make a proprietary fork, but not ok for the author to make a proprietary fork? That sounds like hypocrisy to me.
Forking standards is completely different to forking a codebase. It should be obvious why.
They don't even need to be able to win. An existing "legit" patent holder might choose to simply throw lawyers at it as a tactic to delay or defeat a potential competitor. In that case, it comes down to a cost/benefit analysis for the would-be litigator.
Also, any companies contributing to the NETVC standard are required to declare IPR, which is not the case for MPEG standards.
I'm not sure what this means. A royalty-free video codec basically has a bullseye painted on it from the perspective of existing rightsholders. The only reason such entities might exercise restraint is because the cost/benefit analysis doesn't support litigation. Even if they don't think a competitor codec is a threat at the outset, there's nothing stopping an attacker from just sitting on the sidelines and waiting until the threat profile (and the depth of infringers' pockets) becomes clearer. I.e. the "submarine patent" model, except it could even be a known patent in this case.
I think Daala's development process (and IPR disclosure policies at the IETF) reduce the risk substantially. However, this is not automatically true of any RF codec.
We'll see what comes out of it!
I don't really care about the compression ratios achieved, or speed of compression/decompression.
Something like motion JPEG would be good, if it was actually a proper standard (AFAICT it isn't).
It's a cool idea; it just doesn't work in practice, especially since a lot of these video streams are transmitted over UDP.
1) Take a stream of images, calculate the differences between subsequent frames, and then JPEG-encode the first image and the subsequent differences. The decoder then does the opposite. Once you lose a frame you're done, so it's better to actually encode a full frame once in a while (do I and B frames like normal codecs do).
2) Just push a set of full JPEG images as individual frames. This is the most common usage of MJPEG these days, as it's what you get from IP surveillance cameras and the like. This is actually reasonably standardized, as it's basically HTTP multipart where each of the parts is just a new JPEG. If you point an HTML img tag at an HTTP GET endpoint like that, most browsers will display a video stream. (Rough sketches of both approaches follow below.)
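Neither approach needs anything exotic. Here's a toy sketch of option (1) in Python, assuming Pillow and numpy are available; the 128 offset, the keyframe interval, and the function name are illustrative choices, not part of any standard:

```python
import io
import numpy as np
from PIL import Image

def encode_diff_stream(frames, keyframe_interval=30, quality=75):
    """Toy difference-frame encoder: JPEG the first frame (and periodic
    keyframes), then JPEG each frame-to-frame difference, offset by 128 so
    it fits in unsigned 8 bits. A real encoder would diff against the
    *decoded* previous frame so lossy errors don't accumulate."""
    packets, prev = [], None
    for i, frame in enumerate(frames):          # frames: list of PIL Images
        arr = np.asarray(frame, dtype=np.int16)
        if prev is None or i % keyframe_interval == 0:
            payload, kind = frame, "key"
        else:
            diff = np.clip(arr - prev + 128, 0, 255).astype(np.uint8)
            payload, kind = Image.fromarray(diff), "delta"
        buf = io.BytesIO()
        payload.save(buf, format="JPEG", quality=quality)
        packets.append((kind, buf.getvalue()))
        prev = arr
    return packets
```

And option (2) is roughly what a few lines of Python's standard library can serve as a multipart/x-mixed-replace stream; the port, boundary string, and "test.jpg" frame source below are placeholders:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def jpeg_frames():
    # Placeholder frame source: in practice, read fresh JPEGs from a camera.
    with open("test.jpg", "rb") as f:
        frame = f.read()
    while True:
        yield frame

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each multipart part replaces the previous one, so a browser showing
        # <img src="http://localhost:8080/"> displays a live video stream.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        for jpeg in jpeg_frames():
            self.wfile.write(b"--frame\r\n")
            self.wfile.write(b"Content-Type: image/jpeg\r\n")
            self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(jpeg))
            self.wfile.write(jpeg + b"\r\n")
            time.sleep(1 / 25)  # ~25 fps pacing

if __name__ == "__main__":
    HTTPServer(("", 8080), MJPEGHandler).serve_forever()
```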
NETVC participation is open to anyone though, so it is possible more players will show up.
What makes a codec "next generation"? I assume, broadly, it involves trading off yet more computation for a tighter encode? (As a nearly embarrassingly parallel problem, video coding continues to get faster with more silicon even if serial performance is stagnant.) What kind of gains can we expect from the "next generation"?
All honest questions, BTW. Links welcome, though something focused on this question and not just a laundry list of features with no reference to the past would be preferred.
> Our performance target is roughly a generation beyond current 'next-generation' codecs like VP9 and HEVC, making Daala a next-next-generation effort.
> The next-generation VP9 and HEVC codecs are the latest incremental refinements of a basic codec design that dates back 25 years to h.261. This conservative, linear development strategy evolving a proven design has yielded reliable improvement with relatively low risk, but the law of diminishing returns is setting in. Recent performance increases are coming at exponentially increasing computational cost.
> Daala tries for a larger leap forward— by first leaping sideways— to a new codec design and numerous novel coding techniques. In addition to the technical freedom of starting fresh, this new design consciously avoids most of the patent thicket surrounding mainstream block-DCT-based codecs. At its very core, for example, Daala is based on lapped transforms, not the traditional DCT.
Codecs are only efficient up to a certain image size, and then stop working because all the details are too large-scale for them. HEVC works much better than H.264 on 4K. Besides that, there's higher bit depth pixels, 3D, that kind of stuff.
Also there's usually so many mistakes and compromises in any standard that you can always find something to fix in the next one.
That objection makes no sense. It just implies that, at worst, parallelization may cost some encoding efficiency. In general, we are quite often willing to pay that encoding-efficiency cost with gusto, given the speedup we can obtain. For instance, http://compression.ca/pbzip2/.
If you have that much need for a speedup, you probably have multiple video streams going (like you're Youtube or a livestream broadcaster). In that case, it's better to do one video per CPU, and now you really are parallel.
Also, you can get up to 4x parallel through slice-threads safely on one video, or 16x through x264's frame-threads if you don't care about your target bitrate. I wouldn't consider that embarrassingly parallel until it's up to 1024x or so, but maybe you do.
But that doesn't happen - when you're encoding, the DCT isn't actually run on the image but on the output of previous compression steps (prediction), which is based on the previously encoded blocks. So there's a dependency on every pixel of the image to the upper left of you.
And when you're decoding, it just never ends up being worth it to read through the whole bitstream first so that you have a whole frame of motion vectors to process at once. The whole data locality thing.
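To make the encode-side dependency concrete, here's a toy single-plane (grayscale) sketch in Python, assuming numpy and scipy are available. It uses only a DC predictor and a flat quantizer; real codecs have many prediction modes, entropy coding, and loop filters, and the function name, block size, and q value are purely illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

B = 8  # block size; assumes the frame dimensions are multiples of 8

def encode_intra(frame, q=16):
    """Raster-order intra coding in miniature: each block is predicted from
    pixels that have already been *reconstructed* (above and to the left),
    and the DCT runs on the residual, not on the raw image. That is the
    serial dependency described above."""
    frame = np.asarray(frame, dtype=np.float64)
    recon = np.zeros_like(frame)
    for y in range(0, frame.shape[0], B):
        for x in range(0, frame.shape[1], B):
            neighbours = []
            if x > 0:
                neighbours.append(recon[y:y + B, x - 1])   # column to the left
            if y > 0:
                neighbours.append(recon[y - 1, x:x + B])   # row above
            pred = np.mean(np.concatenate(neighbours)) if neighbours else 128.0
            residual = frame[y:y + B, x:x + B] - pred
            coeffs = np.round(dctn(residual, norm="ortho") / q)   # quantize
            recon[y:y + B, x:x + B] = pred + idctn(coeffs * q, norm="ortho")
    return recon
```

Because `pred` for each block reads from `recon`, block (bx, by) can't be transformed until its left and upper neighbours are fully reconstructed, which is why you can't just hand every block's DCT to a different core.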
For another codec to be next gen, it should have the same or better compression than HEVC: about half as many bits to encode the same video as H.264 required.
I'd be happy to see VP9 actually get deployed on any significant scale before putting another new codec into the mix.
I want open-source to subsidize a small team of engineers to create a completely open standard where no single entity owns it and everyone is free to branch / fork it.
VP9 is still about 9x slower than x264, but yields the same quality at half the bitrate. You can set VP9 to run a lot faster, but you'll lose some of the bitrate advantages. Still, VP9 is practical for a lot of applications, such as Youtube.
If you're more of a visual person, you can take a look at some images here, compressed to 60KB: https://people.xiph.org/~tdaede/pcs2015_vp9_vs_x264/0.25/
Certainly libvpx still has a lot of optimization and tuning work left to do. But there's only so much x264 can do with a 15 year old bitstream format.
1. https://blogs.gnome.org/rbultje/2014/02/22/the-worlds-fastes... n.b. x264 comparisons were taken with `--preset veryslow` which understates x264's potential performance by an order of magnitude. From the same link: "it can be fast, and it can beat x264, but it can’t do both at the same time."
It's purely an implementation issue - you don't get software as good as x264 just by paying for it.
Has anyone tested this or has more information on the performance/quality vs other codecs?
Summary is that Thor is performing at a slightly better level than Daala and worse than VP9 or H265. But it's also missing a lot of features right now, and the encoder is only tuned for PSNR.
Perhaps if you stopped worrying about name collisions with any existing project ever, you might find naming things a little easier.