Here's the fix if anyone is interested: https://github.com/FFmpeg/FFmpeg/commit/00c73c475e3d2d7049ee...
Just a suggestion: Jumpshare Plus link should either be at the top or renamed "Pricing" because you don't see it directly and the usual ctrl+f of pricing gives nothing.
Plus your pricing is nothing to be ashamed of :)
By the way, we're more about quick sharing than syncing. We will be overhauling our homepage to make that clearer. Here's the app if you're using a Mac (Windows app is coming soon): https://itunes.apple.com/us/app/jumpshare/id889922906
You can learn more from the discussion here: https://www.producthunt.com/tech/jumpshare-for-mac
And doing that with Dropbox means that, after it's been shared, it's also synced, mirrored, and backed up for you.
Where does that leave Jumpshare?
Promoting six-line pull requests on a fringe news website.
But honestly this type of stuff should be done by the project before any release.
I think the main highlights are:
- The API/ABI break (implied by the major bump)
- The many improvements in the native AAC encoder making it the recommended one (libaacplus and libvo-aacenc are removed)
- A ton of filters were added
- Many ASM optimizations that weren't mentioned in the Changelog (it will take a while to compile highlights of those; I don't remember them all)
Hopefully a proper news post will go up soon. Sorry again.
On a project I was on recently, we started hitting the per-region concurrent-transcode limits on Amazon's Elastic Transcoder. 
Instead of sharding over pipelines or accounts, we set up a pipeline with FFmpeg + Lambda functions, and it performed fantastically (within the free tier, even).
It was incredibly simple to write the functions, and it has given that project a lot more freedom, with the caveat that any single task you undertake has to complete within the timeout window (currently 5 minutes). Having said that, it's also straightforward to split the process into steps and chain multiple Lambda jobs to make the flow more of a pipeline.
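A minimal sketch of such a function, assuming an ffmpeg binary is bundled in the deployment package; the bucket name, codecs, and preset here are illustrative, not what that project actually used:

```python
import subprocess

def build_ffmpeg_cmd(src, dst):
    # Codec/preset choices are illustrative; tune for your own quality/speed needs.
    return ["./ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "aac", dst]

def handler(event, context):
    import boto3  # available in the Lambda Python runtime
    s3 = boto3.client("s3")

    # Triggered by an S3 upload event.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    src, dst = "/tmp/input", "/tmp/output.mp4"  # /tmp is the only writable disk
    s3.download_file(bucket, key, src)
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True)
    s3.upload_file(dst, "my-transcoded-bucket", key + ".mp4")  # placeholder bucket
```

Download, transcode, and upload all have to finish inside the invocation timeout, which is why this only works for small inputs.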
In my experience, every limit is immediately relaxed when requested; number of VPCs (I see people do horrible things to work around this all the time! Just ask!), EC2s / region, SES limits (need to send 10 million emails / day? No problem!), API Gateways / account, total ASGs... I believe all of these are there to keep you from shooting yourself in the foot through automation gone wrong or inexperience.
I've seen some crazy complicated architectures where just sending an email or picking up the phone solves the thing within an hour.
Also, to prevent you from racking up huge bills in case an API key is compromised and the attacker is able to spin up tons of instances for a botnet or something on your dime.
We had other reasons to move and it did end up working well for that project and others, but I can obviously only say that with hindsight.
They need to write that in big bold letters.
For $2.60 an hour, you get 4x Kepler GPUs that can handle ~4x realtime 1080p encodes each (120fps per GPU), or 16x realtime 1080p encodes total (480fps). To convert this into rather odd units, that works out to ~1.37Tpix/$ (1920x1080 x 480fps x 3600s / $2.6). Put this on reserved instances and that number is pushed up to ~2.2Tpix/$.
According to , ffmpeg + x264 performance on the most cost effective instances (c3.xlarge) was 20s for a 30s, 960x540 video, or roughly 0.5Mpix at 45fps. That's 83Gpix for $0.21, or 0.399Tpix/$ at spot prices or 0.672Tpix/$ at reserved prices.
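Spelling out that arithmetic (prices and frame rates as quoted above):

```python
# GPU route: $2.60/hr, 16 concurrent 1080p encodes = 480fps aggregate
gpu_pix_per_dollar = 1920 * 1080 * 480 * 3600 / 2.60
print(round(gpu_pix_per_dollar / 1e12, 2))       # -> 1.38 Tpix/$ (~1.37 above)

# CPU route: c3.xlarge, 960x540 (~0.5Mpix) at ~45fps effective, $0.21/hr spot
cpu_pix_per_hour = 960 * 540 * 45 * 3600
print(round(cpu_pix_per_hour / 1e9, 1))          # -> 84.0 Gpix per hour
print(round(cpu_pix_per_hour / 0.21 / 1e12, 3))  # -> 0.4 Tpix/$
```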
Depending on how much you care about your compression quality (NVENC isn't quite as good as x264 veryslow but it's definitely usable, particularly with its "two pass" preset), it might be worth a good look at the GPU encoders.
I work on a project where ET isn't flexible enough (MPEG-DASH), and wondered whether Lambda would make for a good alternative to EC2 + SQS + Scaling Groups.
* The maximum timeout window for a function invocation is 300 seconds
* The maximum available temp disk space per instance is 500MB
* Memory is maxed at 1.5GB
In the function invocation time window you need to:
* retrieve the file (to memory or disk)
* transcode the file (outputting to memory or disk)
* upload the file (as the disk is not persistent)
This along with the following facts make it infeasible:
* transfers from S3 are fast, but non-S3 sources are likely to be much slower.
* assuming you mean large as in GB: you have nowhere to put the files (disk too small, memory too small).
* transfers to S3 are fast, but uploading the transcoded video to a non-S3 destination will likely be much slower.
Hope that helps.
You can process several GBs in the 5-minute window by piping your S3 download stream through your transformation steps and then directly into an S3 upload stream. Nothing ever persists to disk, so your only worry, if anything, is managing your stream buffers so you don't run out of memory.
As long as any single step of your pipeline doesn't exceed the time limit, you can make really nifty pipelines for large file processing by using the S3 upload as "temp space" then an S3 event to automatically trigger the next step of your pipeline.
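The pattern looks roughly like this, with zlib standing in for the real transcode step and in-memory buffers standing in for the S3 streams (the actual boto3 wiring is left out):

```python
import io
import zlib

CHUNK = 64 * 1024  # read in 64KB chunks so memory use stays bounded

def stream_transform(source, sink):
    """Pipe source -> transform -> sink without ever holding the whole file."""
    transform = zlib.compressobj()  # stand-in for the transcode step
    while True:
        chunk = source.read(CHUNK)
        if not chunk:
            break
        sink.write(transform.compress(chunk))
    sink.write(transform.flush())

# Local demo; on Lambda, source would be s3.get_object()["Body"] and
# sink a multipart-upload writer.
data = b"frame data " * 100_000  # ~1MB of fake input
dst = io.BytesIO()
stream_transform(io.BytesIO(data), dst)
```

The same chunked loop works no matter how large the input is, because at most one chunk plus the transformer's internal state is in memory at a time.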
How does doing all this in-house compare price wise (say, per minute), compared to using elastic transcoder?
Edit: The ultimate lowest cost I can find is $0.0125-0.015
The key point was missed: we were dealing with very short, small videos.
If you are dealing with longer or large videos, it's simply not feasible on Lambda.
As for costs, unfortunately I can't retrieve them, as that project was mid last year and I've since moved on to other clients. I'm sure they could be worked out with a few short tests, though.
So, any reason for the casual graphics developer to install libav?
edit: Oh I see that VLC at some point switched to libav. That was likely a deciding factor when I last did my nominal research into ffmpeg vs libav:
True, but that was biased and unfair. Some developers leveraged their Debian influence to get Debian to switch from ffmpeg to libav, but the technical merits were debatable. In the end, they came back to ffmpeg.
This is mostly a political issue. Software-wise AFAIK ffmpeg has been integrating many changes from libav but the opposite is not true, making IMHO ffmpeg the right choice.
Good article (2012) with in-depth history: http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html
More recent (2015) short take on the matter, seems pretty biased though: https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav
Wikipedia entry: https://en.wikipedia.org/wiki/Libav#Fork_from_FFmpeg
The GitHub link is on the mpv wiki. mpv is a descendant of MPlayer and mplayer2 (the latter being mostly dead). IMHO mpv is the best media player for any OS (lightweight, snappy, reads everything, better options and CLI than mplayer*, etc.).
Let us not forget the reasons for libav. The ffmpeg development process was having a lot of problems due to very controversial decisions that its lead dev was taking. The libav fork has resulted in a restructuring of the ffmpeg development workflow. In this regard, libav is about as important as egcs was to gcc.
I have seen this claim frequently, but have never seen an actual list of such (and that link doesn't supply one). I get they didn't like the guy, but what were the terrible things he was supposed to have done?
Perhaps the most important difference is that the egcs fork announcement was very diplomatically worded, intended to put an end to any bad feelings on either side, and recognized that the FSF was completely within its rights to be conservative when it came to developing gcc. Another difference is that egcs really took off and eventually became the official gcc; libav doesn't look like it's doing the same.
The same people who use free software for moral and ethical reasons would also choose libav.
I've read summaries of the libav fork, but I don't recall anyone raising issues of morality.
But the majority of the project – the people owning the servers and technology, doing most of the coding, etc. – those were the ones renaming to libav.
Seeing as you're the maintainer of QuasselDroid: how would you like it if a group of contributors wanted to take the project in a different direction than you, forked it, called their fork QuasselDroid, and then said your branch was immoral, as you have throughout this page? I doubt you would enjoy that, and if you owned the QuasselDroid trademark, I'm sure you would use it too.
Just in our case the people maintaining the (now dead) original repo decided to give up maintainership to me. (And so we merged everything back).
Also, in your example, I would have no issue.
If another group decided to fork and improve the project, and have more development going on than me, I’d end up just contributing to their project.
This is open source and open development, the very concept is that anyone can and will fork, and may even become the canonical version.
And why should I have any more say on this than the other contributors?
This is open development, the very idea is that people are replaced all the time.
In contrast, projects such as Python have a "benevolent dictator" model where one person has the final say about the direction of development. There is nothing unethical about a BDFL model in open source; it's just a choice that a community can make.
You seem to be deliberately confusing yourself about the distinction between forking, which is always allowed, and representing your fork as the original project, which is never allowed. If you are still confused, think about it this way: would you want someone to attach a bunch of malware to your project and redistribute it under its original name, as if it were your version? You can't prevent this without trademark law.
In the case of libav...as an admitted casual, I'm thankful that ffmpeg exists, even if its API confuses me...I'm grateful enough to think that the status quo is just fine, whether I can rationalize it or not. However, I do find it admirable that some people (ostensibly) wanted to make what they think were forward-thinking changes, including doing the kind of cleanup that is generally under-appreciated and under-prioritized in all software.
So if they're promising a transparent, interoperable interface...sure, I'll give it a try, and it will be for "moral" reasons in the sense of moral support. I've done the same with MariaDB (over MySQL) and haven't regretted it.
For example, check out this graph of Google search interest in ffmpeg versus libav.
The entire world can be wrong, but it's rare.
* ffmpeg has merged most of what libav has done.
* Most distributions are using ffmpeg, including Debian now.
* ffmpeg has more contributor activity.
- Common Encryption (CENC) MP4 encoding and decoding support.
- New filters: extrastereo, OCR, alimiter, stereowiden, stereotools, rubberband, tremolo, agate, chromakey, maskedmerge, displace, selectivecolor, zscale, shuffleframes, vibrato, realtime, compensationdelay, acompressor, apulsator, sidechaingate, aemphasis, virtual binaural acoustics, showspectrumpic, afftfilt, convolution, swaprect, and others.
- New decoding: DXV, Screenpresso SPV1, ADPCM PSX, SDX2 DPCM, innoHeim/Rsupport Screen Capture Codec, ADPCM AICA, XMA1 & XMA2, and Cineform HD.
- New muxing: Chromaprint fingerprinting, WVE demuxer, Interplay ACM, and IVR demuxer.
- Dynamic volume control for ffplay.
- Native AAC encoder improvements.
- Zero-copy Intel QSV transcoding.
- Microsoft DXVA2-accelerated VP9 decoding on Windows.
- VA-API VP9 hardware acceleration.
- Automatic bitstream filtering.
This was introduced in 2.8 and does the same thing as chromakey, except in RGB rather than YUV.
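For reference, a typical chromakey use: keying a green screen out of one input and overlaying it on another. File names are made up, and the similarity/blend values (0.1/0.2) are just starting points to tune:

```python
import shlex

# [1:v] = the green-screen input; chromakey args are color:similarity:blend
filter_graph = "[1:v]chromakey=green:0.1:0.2[keyed];[0:v][keyed]overlay[out]"
cmd = ["ffmpeg", "-i", "background.mp4", "-i", "greenscreen.mp4",
       "-filter_complex", filter_graph, "-map", "[out]", "composited.mp4"]
print(shlex.join(cmd))
```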
"In the British National Corpus, for example, most examples of quite some are "quite some time", others are "quite some distance". If you replace "quite some" with "a considerable", the meaning should be clear.
If the sentence does not make sense when you do that, it's likely that "quite some" is not being used properly."
More recently I have been downloading programming-framework tutorials (Android development, Django, Angular, etc.) from YouTube to my Plex media server. I then go back with ffmpeg and re-encode the videos to play back 50% faster. So now I can blast through tutorials on my TV while I eat lunch (I work from home mostly).
Edit: The release mentioned hardware acceleration improvements. I never knew ffmpeg even supported any HW accel: https://trac.ffmpeg.org/wiki/HWAccelIntro
Usually the [ and ] keys.
Natively, neither Plex nor Roku allows videos to be sped up, so they have to be re-encoded at a different speed.
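A sketch of that re-encode at a 1.5x target: setpts compresses the video timestamps while atempo speeds up the audio without a pitch shift (atempo accepts 0.5–2.0 per instance, so 1.5 fits in one). File names are illustrative:

```python
import shlex

SPEED = 1.5  # 50% faster playback

def speedup_cmd(src, dst, speed=SPEED):
    # Divide video PTS by the speed factor; multiply audio tempo by it.
    graph = f"[0:v]setpts=PTS/{speed}[v];[0:a]atempo={speed}[a]"
    return ["ffmpeg", "-i", src, "-filter_complex", graph,
            "-map", "[v]", "-map", "[a]", dst]

print(shlex.join(speedup_cmd("tutorial.mp4", "tutorial_fast.mp4")))
```

For factors above 2.0 you would chain atempo instances (e.g. `atempo=2.0,atempo=1.25` for 2.5x).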
See also: https://trac.ffmpeg.org/wiki/Encode/AAC
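Per that wiki, the native encoder is selected with `-c:a aac` and needs no external library as of 3.0; a minimal invocation (file names and bitrate illustrative):

```python
import shlex

def aac_encode_cmd(src, dst, bitrate="192k"):
    # Copy the video stream untouched; re-encode audio with the native AAC encoder.
    return ["ffmpeg", "-i", src, "-c:v", "copy",
            "-c:a", "aac", "-b:a", bitrate, dst]

print(shlex.join(aac_encode_cmd("input.mkv", "output.mkv")))
# -> ffmpeg -i input.mkv -c:v copy -c:a aac -b:a 192k output.mkv
```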