I agree with the author of this article that FFmpeg will eventually win out, if only for little things like this.
This kind of behavior quickly kills the will to contribute.
In both cases you can send a patch (i.e., copy/paste) instead. And in the first case you often do provide a patch, either because it's easier or because the project uses something other than Git or a Git-like VCS.
That being said, not everyone uses GitHub ;-)
It might be interesting to see how effectively communities with benevolent dictators perform compared to those that operate by consensus/voting (if such a thing could even be measured).
The quality of the code base also left something to be desired. It was fast, and it generally worked, but they certainly didn't subscribe to the camp that believes in treating compiler warnings as errors. I remember using Valgrind, discovering that some memory was being used uninitialized, and submitting a patch. The patch was rejected with a comment along the lines of "not needed". I still don't know whether that was correct, but I do know that, being a static initializer, my patch wasn't going to slow anything down.
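For readers who don't write C daily, the kind of patch being described costs nothing at runtime. A minimal sketch with made-up names, not the actual FFmpeg code:

    /* Hypothetical stand-in for the structure in question. */
    struct codec_state {
        int frame_count;
        unsigned flags;
    };

    /* "struct codec_state st;" on the stack would leave the fields
       holding garbage until written. The initializer below zeroes
       them at compile time, so nothing gets slower at runtime. */
    static struct codec_state st = { 0 };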
We now run all the code through Valgrind; a single memory leak will cause a complaint. There is also static analysis, plus various other checks.
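As a concrete illustration, this is roughly how such a check can be wired up (the test binary name is hypothetical, not FFmpeg's actual test harness):

    # --error-exitcode makes Valgrind return a failing exit status
    # whenever it detects an error, and with --leak-check=full,
    # definite leaks count as errors, so CI fails on a single leak.
    valgrind --leak-check=full --error-exitcode=1 ./run_tests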
And BTW, you can't treat warnings as errors. Just try to fix the warnings we have and you might understand why it isn't easily possible (some fixes require API changes, some warnings are compiler false positives, some are deprecation warnings that can simply be silenced, etc.). Of course, you might be able to fix some easily, and those patches will likely be accepted (assuming they are correct).
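For example, silencing a deprecation warning locally looks like this; a self-contained sketch where old_decode() is a made-up stand-in for a deprecated API with no replacement yet:

    #include <stdio.h>

    /* Stand-in for a deprecated API that still has to be called. */
    __attribute__((deprecated))
    static void old_decode(void) { puts("decoding"); }

    int main(void) {
        /* Silence only this warning, only at this call site,
           rather than pretending the deprecation doesn't exist. */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
        old_decode();
    #pragma GCC diagnostic pop
        return 0;
    }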
You are absolutely right that not all warnings are accurate, and that not all code should be changed to satisfy every compiler. But I'm arguing that there is a long-term benefit to choosing a single compiler and making the modifications necessary to let that compiler run without warnings, even when the behaviour is provably correct without them.
On top of that, sometimes warnings come from parts of the code you don't control, like a scanner generated by flex.
I think it's better to have a strong policy of not introducing warnings and of fixing them whenever a particular compiler is found to emit them; putting -Werror in the default Makefile can quickly inundate the project's mailing list/IRC channel with "doesn't compile" complaints from people with exotic compilers.
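A sketch of what that policy can look like in a GNU Makefile (not taken from any particular project):

    # Keep the default build warning-clean but tolerant.
    CFLAGS += -Wall -Wextra

    # -Werror is opt-in (make WERROR=1), so an exotic compiler's
    # false positives don't break the build for everyone.
    ifeq ($(WERROR),1)
    CFLAGS += -Werror
    endif

    # Generated code (e.g., a flex scanner) isn't ours to fix;
    # silence only its known warnings, only for that object file.
    lex.yy.o: CFLAGS += -Wno-unused-function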
I think SQLite has a good attitude: http://www.sqlite.org/faq.html#q17
"Some people say that we should eliminate all warnings because benign warnings mask real warnings that might arise in future changes. This is true enough. But in reply, the developers observe that all warnings have already been fixed in the compilers used for SQLite development (various versions of GCC). Compiler warnings only arise from compilers that the developers do not use on a daily basis (Ex: MSVC)."
The code wasn't merely copying an uninitialized variable; it was branching on the random data that happened to be in the buffer returned by malloc(). This confused Valgrind but was thought to be safe due to some later check. To me this was fragile code and unsafe practice. To the decision makers, it was a good way to save a couple of cycles and a few bytes. The culture (at least at the time) held that efficiency trumped clarity and maintainability. This is a blessing and a curse.
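A made-up reduction of that pattern, which Valgrind reports as "Conditional jump or move depends on uninitialised value(s)":

    #include <stdlib.h>

    int is_ready(void) {
        int *buf = malloc(16 * sizeof *buf);
        if (!buf)
            return 0;
        /* buf[0] was never written, so this branch depends on
           whatever bytes malloc() happened to return: exactly
           what Valgrind flags, even if a later check makes the
           result "safe" in practice. Using calloc() or an explicit
           initializer costs a few cycles but is well-defined. */
        int ready = (buf[0] != 0);
        free(buf);
        return ready;
    }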
The article states that Debian (and Ubuntu?) use libav (and yes, the BS about FFmpeg being deprecated is obviously a douchebag move), so does anyone know what the other distros are defaulting to: Gentoo, Fedora, OpenSUSE, etc.?
This should be deployed with any fresh FFmpeg install.
The filtering code is still unstable due to what's explained in the blog post...
To be honest, the biggest problem I had was that searches returned too many out-of-date code snippets and samples on other sites that didn't match the exact version I was using, so they wouldn't work for me.
Other than that, once I got it working, it was awesome.
It's the 'meaning' thing that makes your volunteers give up their own time and do free work. Open source projects will probably have similar loyalty/emotional investments.
Confusing a bright, backlit monitor with a piece of paper is some pretty nasty 1990s logic.
I'm not sure about the original poster's claim that the site isn't accessible. I haven't tried applying a stylesheet to it from my browser, and I don't use anything to enhance the contrast of text in web pages, since my vision isn't impaired in that way. There's an argument to be made, though, that modern websites lean on coloring, layers, and a bunch of other tricks to achieve their layouts and looks to such an extent that if you want to see the content as presented, or even find it readable or usable at all, you'll have a hard time applying your own styles in the browser.
Believing that, just because the web can be used in a wide variety of ways beyond black type on white paper, every one of those departures is virtuous is pretty mid-2000s Web 2.0 logic. Contrast is useful, and its proven track record for rendering text is no mark of antediluvian shame, irrelevance, or datedness.
You seem to have read something in my comment that wasn't there.
FWIW, I discovered the Firefox plug-in Tranquility recently. It's sort of a lightweight Readability. Although it has far fewer tweaks available, it compensates by using far fewer resources. Readability was pretty much unusable on my low-spec laptop.
iReader for Chrome is a good option. I tend to read most articles through it. Other browsers have their own alternatives.
Do you have any suggestions for the CSS?