FFmpeg 3.0 released (ffmpeg.org)
454 points by vivagn on Feb 15, 2016 | 91 comments

At Jumpshare, we use FFmpeg for screen recording. We noticed that the previous version of FFmpeg was not DPI-aware, so we went ahead and fixed it. Now FFmpeg shows the correct mouse location on HiDPI screens. Unfortunately, it seems FFmpeg 3.0 does not ship with this fix. Nevertheless, we're happy to contribute to this open source project.

Here's the fix if anyone is interested: https://github.com/FFmpeg/FFmpeg/commit/00c73c475e3d2d7049ee...

You can backport it easily, and it will be present in 3.0.1. Ask on IRC or the dev ML.

This is the first time I've heard of your service, and it seems like a copy of Dropbox but with way more features (judging by the interface).

Just a suggestion: the Jumpshare Plus link should either be at the top or renamed "Pricing", because you don't see it directly and the usual Ctrl+F for "pricing" finds nothing. Plus, your pricing is nothing to be ashamed of :)

Hi, thank you for the feedback and suggestions. We will make sure to include the pricing in the new homepage we're working on. :)

By the way, we're more about quick sharing than syncing. We will be overhauling our homepage to make that clearer. Here's the app if you're using a Mac (Windows app is coming soon): https://itunes.apple.com/us/app/jumpshare/id889922906

Syncing (and its related attributes of backup and redundancy) is more useful, and Dropbox facilitates both that and sharing. So what's your angle? :)

At its core, Dropbox is about "syncing", but at Jumpshare the core is "quick sharing". This makes all the difference. We're able to build our product around the quick-sharing aspect; for example, our sharing happens in real time so you don't have to wait for uploads to finish first. And we offer a slew of built-in tools (capturing screenshots, annotations, recording screencasts, etc.) and features to supercharge your sharing.

You can learn more from the discussion here: https://www.producthunt.com/tech/jumpshare-for-mac

Okay, but what happens when Dropbox makes a minor upgrade to their sharing, making it also "quick" or real-time sharing? Already, when uploading a file to Dropbox, it can be shared and partially downloaded before it's finished uploading. Do that with an image or video that was just recorded, or that is being recorded, and that seems like real-time sharing to me.

And doing that with Dropbox means that, after it's been shared, it's also synced, mirrored, and backed up for you.

Where does that leave Jumpshare?

> Where does that leave Jumpshare?

Promoting six-line pull requests on a fringe news website.

This screams for proper release notes. The official ones are pretty light (http://git.videolan.org/gitweb.cgi/ffmpeg.git/?p=ffmpeg.git;... ), and refer to the Changelog (http://git.videolan.org/gitweb.cgi/ffmpeg.git/?p=ffmpeg.git;... ) which is quite terse. Phoronix did some reformatting of the changelog, it's a bit easier to read:


But honestly this type of stuff should be done by the project before any release.

Yeah, sorry about that. It was indeed done in a hurry.

I think the main highlights are:

- The API/ABI break (implied by the major bump)

- The many improvements in the native AAC encoder making it the recommended one (libaacplus and libvo-aacenc are removed)

- A ton of filters were added

- Many ASM optimizations that weren't mentioned in the Changelog (it will take a while to compile highlights of those; I don't remember them all)

Hopefully a proper news post will be published soon. Sorry again.

@imaginenore's comment below (https://news.ycombinator.com/item?id=11103063) has a detailed list of the 29 (!) new filters.

Wait, are you @majorsheep? If so, your illustrations are really good :) http://www.king-sheep.com/star-wars-the-force-awakens-fan-ar...

Nope. I can't take credit for the illustrations, but I might have to get in touch with Nathan since there is some nattaylor out there who likes to put my gmail address for all of their online services.

Any plan to support QuickSync?

Looking at the release notes, I see several mentions of QSV support.

I think earlier versions had an x264 module supporting QSV but which was never included in the official windows build. I wonder if that changed.

Thanks to all the FFmpeg contributors! Fantastic piece of software.

On a project I was on recently we started hitting the per-region concurrent transcode limits on Amazon's Elastic Transcoder. [1]

Instead of sharding over pipelines or accounts we set up a pipeline with FFMPEG + Lambda functions and it performed fantastically (within the free tier even).

It was incredibly simple to write the functions, and it has given that project a lot more freedom; with the caveat that any single task you undertake has to complete within the timeout window (currently 5 minutes). Having said that, it's also straightforward to split the process into steps and have multiple Lambda jobs to make the flow more of a pipeline.

[1] http://docs.aws.amazon.com/elastictranscoder/latest/develope...

Did you try simply asking AWS to raise the limit? It even suggests so on your linked page.

In my experience, every limit is immediately relaxed when requested; number of VPCs (I see people do horrible things to work around this all the time! Just ask!), EC2s / region, SES limits (need to send 10 million emails / day? No problem!), API Gateways / account, total ASGs... I believe all of these are there to keep you from shooting yourself in the foot through automation gone wrong or inexperience.

I've seen some crazy complicated architectures, where just sending an email or lifting up the phone solves the thing within an hour.

I've been surprised/impressed by their quick and painless limit increases. It makes sense to have low default limits so people don't accidentally spin up a thousand instances or send a million emails. It seems the limits are mostly there to protect you from a bug or test in early development costing you a bunch of money.

>I believe all of these are there to keep you from shooting yourself in the foot through automation gone wrong or inexperience.

Also to prevent you from racking up huge bills in case an api key is compromised, and the attacker is able to spin up tons of instances for a botnet or something on your dime.

No! Thank you for pointing that out; I've always taken "limit" to mean hard but I shall no longer.

We had other reasons to move and it did end up working well for that project and others, but I can obviously only say that with hindsight.

They need to write that in big bold letters.

>number of VPCs (I see people do horrible things to work around this all the time! Just ask!)

I'm intrigued.

There are some limits they can't/don't want to increase.

I'm yet to find a limit that they won't adjust - what ones are you referring to?

Reads/sec on a Kinesis shard is capped at 5 and can't be adjusted.

S3 buckets are one I've seen. But there's not really a reason to have 100 s3 buckets, much less more than that.

Yes, that's one that you may not be able to change. From memory, they put that one in place to prevent the equivalent of bucket name squatting (since every bucket has a corresponding public domain name).

An interesting thing to look into might be using the (apparently Kepler) NVENC capabilities present on EC2 G2 instances.

For $2.60 an hour, you get 4x Kepler GPUs that can handle ~4x realtime 1080p encodes each (120fps per GPU), or 16x realtime 1080p encodes total (480fps). To convert this into rather odd units, that works out to ~1.37Tpix/$ (1920x1080 x 480fps x 3600s / $2.6). Put this on reserved instances and that number is pushed up to ~2.2Tpix/$.

According to [1], ffmpeg + x264 performance on the most cost effective instances (c3.xlarge) was 20s for a 30s, 960x540 video, or roughly 0.5Mpix at 45fps. That's 83Gpix for $0.21, or 0.399Tpix/$ at spot prices or 0.672Tpix/$ at reserved prices.

Depending on how much you care about your compression quality (NVENC isn't quite as good as x264 veryslow but it's definitely usable, particularly with its "two pass" preset), it might be worth a good look at the GPU encoders.

[1]: https://github.com/sportarchive/CloudTranscode/blob/master/b...

How did the costs compare with this setup?

I work on a project where ET isn't flexible enough (MPEG-DASH), and wondered whether Lambda would make for a good alternative to EC2 + SQS + Scaling Groups.

EC2 + SQS + ffmpeg is 1/5 the cost of Elastic Transcoder for me, and that's not even using all the EC2 capacity.

Could you share your experience with FFMPEG + Lambda? I ran into trouble with this when dealing with large files, especially when some of the files were being pulled from non-S3 sources. Also, what EC2 instance types were you using?

Sure: you likely don't want to be dealing with large files on Lambda. Why?

* The maximum timeout window for a function invocation is 300 seconds

* The maximum available temp disk space per instance is 500MB

* Memory is maxed at 1.5GB

In the function invocation time window you need to:

* retrieve the file (to memory or disk)

* transcode the file (outputting to memory or disk)

* upload the file (as the disk is not persistent)

This, along with the following facts, makes it infeasible:

* transfers from S3 are fast, but non-S3 sources are likely to be much slower.

* assuming you mean large as in GB – you have nowhere to put the files (disk too small, memory too small).

* transfers to S3 are fast, but uploading the transcoded video to a non-S3 destination will likely be much slower.

Hope that helps.

I don't know much about video transcoding, but if FFMPEG can utilize streams it's easy to work around the lambda size constraints.

You can process several GBs in the 5 minute window by piping your S3 download stream through your transformation steps then directly into an S3 upload stream. Nothing ever persists to disk, so your only worry if anything is managing your stream buffers so you don't run out of memory.

As long as any single step of your pipeline doesn't exceed the time limit, you can make really nifty pipelines for large file processing by using the S3 upload as "temp space" then an S3 event to automatically trigger the next step of your pipeline.
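As a rough sketch of that pipeline (bucket and file names here are invented, and the aws CLI stands in for SDK streams): the S3 download stream can be piped straight through ffmpeg into an S3 upload stream, so nothing touches Lambda's small temp disk.

```shell
# Stream from S3, transcode, and stream back to S3 without touching local disk.
# MPEG-TS is used as the output container because its muxer never needs to
# seek backwards, so it can write to a pipe.
aws s3 cp s3://example-input-bucket/source.mp4 - \
  | ffmpeg -i pipe:0 -c:v libx264 -preset veryfast -c:a aac -f mpegts pipe:1 \
  | aws s3 cp - s3://example-output-bucket/result.ts
```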

ffmpeg can utilize streams, in both input and output. The trouble comes from different codecs and containers, especially on output. Some formats aren't append-only—the prime example being MP4 + h.264—and so ffmpeg needs to be able to write to a seekable output device, ruling out streaming output in those cases.
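One caveat worth adding: for MP4 specifically there is a partial workaround via fragmented output (file names below are placeholders):

```shell
# Default MP4 muxing seeks back to write the moov index, so pipe output fails.
# Fragmented MP4 (empty_moov plus a fragment per keyframe) is written strictly
# in order and therefore works on a non-seekable pipe:
ffmpeg -i input.mkv -c:v libx264 -c:a aac \
    -movflags frag_keyframe+empty_moov -f mp4 pipe:1 > output.mp4
```

The resulting file is a fragmented MP4, which some older players handle less gracefully than a regular one.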

Wow... I would truly appreciate if you could share a bit more about your specific setup. I found myself working on a new project yesterday that I was really really excited about, until I saw the costs to transcode video.

How does doing all this in-house compare price-wise (say, per minute) with using Elastic Transcoder?

Edit: The ultimate lowest cost I can find is $0.0125-0.015

I wish I could edit my parent comment but alas.

The key point was missed: we were dealing with very short, small videos.

If you are dealing with longer or large videos, it's simply not feasible on Lambda.

As for costings, unfortunately I cannot retrieve them as this project was mid last year and I've since moved on to other clients. They can be calculated though with a few short tests I'm sure.

Just want to add my voice to everyone hoping for a writeup. I'm especially interested in the cost comparison.

I know this has been a constant question (along the lines of "Should I go Python 2.x or 3.x?")... but I feel the need to ask it again on the occasion of a major point release for ffmpeg: how are things, pragmatically speaking, in terms of libav vs ffmpeg? I had thought that libav was the new way a few years ago and have more or less been using it on OS X... but now I see that Debian recently switched back to ffmpeg [1]. What are the use cases for sticking with libav these days? I'm almost sure I started using libav because it was promoted as a concerted effort to create a better API. But by some accounts, ffmpeg has been incorporating libav's changes... and I honestly don't use libav or ffmpeg enough, directly, to really benefit from a better API. And installing both, I believe, has led to a few subtle errors when using libraries that wrap around either.

So, any reason for the casual graphics developer to install libav?

[1] https://lwn.net/Articles/650816/

edit: Oh I see that VLC at some point switched to libav. That was likely a deciding factor when I last did my nominal research into ffmpeg vs libav:


> libav [...] promoted as a concerted effort to create a better API

True, but that was biased and unfair. Some developers leveraged their Debian influence to get Debian to switch from ffmpeg to libav, but the technical merits were debatable. In the end, they came back to ffmpeg.

This is mostly a political issue. Software-wise AFAIK ffmpeg has been integrating many changes from libav but the opposite is not true, making IMHO ffmpeg the right choice.

Good article (2012) with in-depth history: http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html

More recent (2015) short take on the matter, seems pretty biased though: https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav

Wikipedia entry: https://en.wikipedia.org/wiki/Libav#Fork_from_FFmpeg

The github link is on the mpv wiki. mpv is a descendant of mplayer and mplayer2 (the latter being mostly dead). IMHO mpv is the best media player for any OS (lightweight, snappy, reads everything, better options and CLI than mplayer*, etc.).

> This is mostly a political issue.

Let us not forget the reasons for libav. The ffmpeg development process was having a lot of problems due to very controversial decisions that its lead dev was taking. The libav fork has resulted in a restructuring of the ffmpeg development workflow. In this regard, libav is about as important as egcs was to gcc.

Further reading:


> very controversial decisions that its lead dev was taking

I have seen this claim frequently, but have never seen an actual list of such (and that link doesn't supply one). I get they didn't like the guy, but what were the terrible things he was supposed to have done?

It's interesting to note the parallel, but there are a few differences between egcs vs gcc and libav vs ffmpeg.

Perhaps the most important is that the egcs fork announcement [1] was very diplomatically worded, intended to put an end to any bad feelings on either side, and recognized that the FSF was completely within its rights to be conservative when it came to developing gcc. Another difference is that egcs really took off and eventually became the official gcc; libav doesn't look like it's doing the same.

[1] http://gcc.gnu.org/news/announcement.html

Software-wise, ffmpeg is the more feature complete solution, obviously. But if you want a morally and ethically okay solution, with a cleaner codebase (but also NIH syndrome), libav might be the better solution.

The same people who use free software for moral and ethical reasons would also choose libav.

"Morally and ethically okay" -- what do you mean by that? and why doesn't ffmpeg meet the same standard, in your opinion?

I've read summaries of the libav fork, but I don't recall anyone raising issues of morality.

The way the ffmpeg maintainer behaved – as a malevolent dictator – in contrast to the more open development approach of libav, is a pretty big issue, don't you think?

IMHO the hostile takeover of the ffmpeg project by the libav guys (Fabrice Bellard had to wield the trademark to force them to rename the fork) and the intense FUD campaign were much bigger issues.

It wasn't a takeover – the takeover was when the trademark was used to force everyone to fork.

But the majority of the project – the people owning the servers and technology, doing most of the coding, etc. – those were the ones who renamed to libav.

You're mad that the owner of a trademark told others they can't use the trademark?

I'm mad that a person who bought a trademark for a project then decided to act against the interests of the majority of the project's participants.

Let me get this straight: you're mad because Fabrice Bellard, the person who started ffmpeg, asserted his trademark on the libav folks because their fork initially used the name ffmpeg?

Seeing as you're the maintainer of QuasselDroid: how would you like it if a group of contributors wanted to take the project in a different direction than you, so they forked it, called their fork QuasselDroid, and then said your branch is immoral, like you have throughout this page? I doubt you would enjoy that, and if you owned the QuasselDroid trademark I'm sure you would use it too.

Actually, with quasseldroid we had the same situation as with ffmpeg/libav – but I’m the one who forked it.

Just in our case the people maintaining the (now dead) original repo decided to give up maintainership to me. (And so we merged everything back).

Also, in your example, I would have no issue.

If another group decided to fork and improve the project, and have more development going on than me, I’d end up just contributing to their project.

This is open source and open development, the very concept is that anyone can and will fork, and may even become the canonical version.

With all due respect, you are not answering the question that the parent poster asked. If someone created a hostile fork of QuasselDroid, and made decisions that you disagreed with, I doubt you would be OK with them using the same name for the project. The right to fork is fundamental in open source, but there is no right to present someone else's work as your own, or to confuse the general public about which version of a software package they are downloading. People should be able to decide for themselves which software to download, not be fooled by someone passing off something different as the same thing. That's why trademarks exist. Enforcing trademarks is not bad or wrong.

The fork would only be "hostile" because I disagreed with it.

And why should I have any more say on this than the other contributors?

This is open development, the very idea is that people are replaced all the time.

Trademarks can be held by an organization, not just by one person. This is how Apache software works, for example. In that case, there are bylaws in place to ensure that the interests of different people are represented, decisions can be made fairly, and toxic people can be prevented from killing the project.

In contrast, projects such as Python have a "benevolent dictator" model where one person has the final say about the direction of development. There is nothing unethical about a BDFL model in open source; it's just a choice that a community can make.

You seem to be deliberately confusing yourself about the distinction between forking, which is always allowed, and representing your fork as the original project, which is never allowed. If you are still confused, think about it this way: would you want someone to attach a bunch of malware to your project and redistribute it under its original name, as if it were your version? You can't prevent this without trademark law.

If you're going to fork, you have to accept that you bear exactly the same burden to keep up to date on security patches, etc., at the very least. As a number of parties found, libav wasn't doing that, and regardless of any moral or ethical argument (for which I've mostly seen accusations and no actual evidence... I'm largely taking it on face value that there were issues), security trumps pretty much everything.

Sorry, what's wrong with using free software for moral and ethical reasons? I often do so because I don't feel like paying for, nor stealing, commercial software. However, being so dependent on OSS has made me appreciate it and want to support it in what ways I can – call it a moral imperative. Besides contributing bug reports and patches, I sometimes like using new libraries (or edge versions of existing software) if the creator, working freely, is trying to move the ball forward... having users who can provide feedback is a sort of moral support.

In the case of libav...as an admitted casual, I'm thankful that ffmpeg exists, even if its API confuses me...I'm grateful enough to think that the status quo is just fine, whether I can rationalize it or not. However, I do find it admirable that some people (ostensibly) wanted to make what they think were forward-thinking changes, including doing the kind of cleanup that is generally under-appreciated and under-prioritized in all software.

So if they're promising a transparent, interoperable interface...sure, I'll give it a try, and it will be for "moral" reasons in the sense of moral support. I've done the same with MariaDB (over MySQL) and haven't regretted it.

What's wrong with it is that FFmpeg and Libav are on equal footing in that regard; so using that argument in favor of one over the other is... nonsensical.

Nothing is wrong with that, but in a situation where one malevolent dictator acted against the will of every single other member of the development team, and forced them to fork, it’s hard to argue that his version is the moral one.

That's quite a claim, can you elaborate on that a bit? I remember reading some of the controversy but I don't remember the ffmpeg guys to be very bad.

mpv has a nice (and opinionated, which is good) guide for the things that matter to them: https://github.com/mpv-player/mpv/wiki/FFmpeg-versus-Libav

That's a nice one:

- Libav – Pretends FFmpeg doesn't exist, though sometimes merges individual patches.

- FFmpeg – Pretends Libav doesn't exist, but merges absolutely everything it does. Sometimes with consequences; for example there are now 2 ProRes decoders, and 3 ProRes encoders.

When in doubt about which open source project to go with, I typically refer to Google trends.

For example, check out this graph between ffmpeg and libav in terms of Google searches. [0]

The entire world can be wrong, but it's rare.

[0] https://www.google.com/trends/explore#q=libav%2C%20ffmpeg

It really does seem like the original ffmpeg is the way to go these days.

Pragmatically speaking, ffmpeg is a clear winner. See this article for reasons why: https://wiki.debian.org/Debate/libav-provider/ffmpeg

In summary:

* ffmpeg has merged most of what libav has done.

* Most distributions are using ffmpeg, including Debian now.

* ffmpeg has more contributor activity.

Among new things:

- Common Encryption (CENC) MP4 encoding and decoding support.

- New filters: extrastereo, OCR, alimiter, stereowiden, stereotools, rubberband, tremolo, agate, chromakey, maskedmerge, displace, selectivecolor, zscale, shuffleframes, vibrato, realtime, compensationdelay, acompressor, apulsator, sidechaingate, aemphasis, virtual binaural acoustics, showspectrumpic, afftfilt, convolution, swaprect, and others.

- New decoding: DXV, Screenpresso SPV1, ADPCM PSX, SDX2 DPCM, innoHeim/Rsupport Screen Capture Codec, ADPCM AICA, XMA1 & XMA2, and Cineform HD.

- New muxing and demuxing: Chromaprint fingerprinting, WVE demuxer, Interplay ACM, and IVR demuxer.

- Dynamic volume control for ffplay.

- Native AAC encoder improvements.

- Zero-copy Intel QSV transcoding.

- Microsoft DXVA2-accelerated VP9 decoding on Windows.

- VA-API VP9 hardware acceleration.

- Automatic bitstream filtering.

Is Intel QSV available on Mac/Linux yet by any chance?

There's a patchset for this feature being discussed on the mailing list:


FFmpeg on Linux supports QSV either through the h264_qsv encoder or through some soon-to-be-merged va-api changes. On Mac I think you need to use the VideoToolbox API to access the GPU codec, and there is support for this in FFmpeg as well, but I haven't used it myself.
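For reference, a transcode through the h264_qsv decoder and encoder looks roughly like this (this assumes an ffmpeg build with --enable-libmfx and a supported Intel GPU; file names are placeholders):

```shell
# Decode and re-encode H.264 on the Intel GPU via Quick Sync
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
    -c:v h264_qsv -b:v 4M output.mp4
```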

Chromakey was introduced in 2.8.x.

It's in the official changelog as a new feature: http://git.videolan.org/gitweb.cgi/ffmpeg.git/?p=ffmpeg.git;...

Apologies, you're correct. I was thinking of the colorkey filter:


This was introduced in 2.8 and does the same thing as chromakey, except in RGB rather than YUV.
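To make the comparison concrete, both filters take the same color/similarity/blend parameters and differ only in the colorspace they match in; a sketch with invented file names:

```shell
# Key out green in YUV space and composite over a background clip;
# swapping "chromakey" for "colorkey" does the same match in RGB space.
ffmpeg -i greenscreen.mp4 -i background.mp4 -filter_complex \
    "[0:v]chromakey=green:0.10:0.08[keyed];[1:v][keyed]overlay[out]" \
    -map "[out]" composited.mp4
```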

I use ffmpeg for housekeeping stuff like converting videos from one format to the other, and cutting clips - mostly from the command line. Can some advanced users share if there is anything to look forward to with this release? Better performance? Some convenience features? Thank you in advance
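For context, the two housekeeping tasks mentioned are one-liners (file names are placeholders):

```shell
# Convert between containers/codecs using ffmpeg's defaults
ffmpeg -i input.avi output.mp4

# Cut a 30-second clip starting at 1:00 without re-encoding
# (-c copy snaps the cut points to the nearest keyframes)
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy clip.mp4
```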

From the list given by @imaginenore the major one for me is CineformHD support. We work on a lot of VR stuff and there are quite some GoPro users out there that generate material in this codec. Not having to transcode to an intermediate is nice. Also hardware acceleration is always good to have.

FYI, the phrase "quite some users" is not uncommon among (continental european?) non-native speakers of English, but it's not correct.

"In the British National Corpus, for example, most examples of quite some are "quite some time", others are "quite some distance". If you replace "quite some" with "a considerable", the meaning should be clear. If the sentence does not make sense when you do that, it's likely that "quite some" is not being used properly."

From http://forum.wordreference.com/threads/quite-some.1011589/

Thanks theoh. Dutchie here. Always nice to learn something new about a foreign language.

I love FFmpeg. I first used it to help with uploading 700 audio files to YouTube years ago. Of course YouTube is video only, so I used ffmpeg to re-encode the audio with an image slideshow as video and then uploaded the "videos" using some web scraping with Perl.

More recently I have been downloading programming framework tutorials (Android development, Django, Angular, etc.) from YouTube to my Plex media server. I then go back with ffmpeg and re-encode the vids to play back 50% faster. So now I can blast through tutorials on my TV while I eat lunch (I work from home mostly).

Edit: The release mentioned hardware acceleration improvements. I never knew ffmpeg even supported any HW accel: https://trac.ffmpeg.org/wiki/HWAccelIntro
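The 50%-faster re-encode described above is typically done with the setpts video filter and the atempo audio filter; a sketch, with invented file names:

```shell
# Compress video timestamps by 1.5x and speed the audio up to match
# (atempo accepts factors between 0.5 and 2.0, so 1.5 is fine)
ffmpeg -i tutorial.mp4 \
    -filter_complex "[0:v]setpts=PTS/1.5[v];[0:a]atempo=1.5[a]" \
    -map "[v]" -map "[a]" tutorial_fast.mp4
```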

mplayer and mpv and vlc and most native players can speed videos up on the fly, tempo-shifting or pitch shifting the audio based on preferences.

Usually the [ and ] keys.

Yes, those are players. But I don't want to sit in front of my computer or cell phone while on lunch break, I want to sit on the couch in front of a TV.

Natively, neither Plex nor Roku allows videos to be sped up, so they have to be re-encoded at a different speed.

If ffplay supported hardware decoding, it'd be the perfect player; you could not make a more minimal one. It does not, and it doesn't seem to be high on the priority list – rather in last place, perhaps.


Have you tried mpv?

Awesome! Hoping for quick updates to the OpenBSD and FreeBSD ports.

Thanks for all the work on FFmpeg!

Is the built-in aac encoder better or as good as fdk-aac? I've been using fdk-aac because it gives lower bitrates and better/same sound.

Official docs say that it's competitive at 128kbps, but this[0] listening test from Kamendo2 (who's very experienced in ABX listening tests of lossy codecs) suggests fdk-aac still has the edge, as well as handling VBR and the HE-AAC / HE-AACv2 profiles properly.

[0]: https://hydrogenaud.io/index.php/topic,111085.0.html

libfdk has historically always been a bit better, and it supports VBR properly.

See also: https://trac.ffmpeg.org/wiki/Encode/AAC

That's why I've been using it. Is the improved built-in encoder on par now?

I haven't done or seen any tests, but I suppose if you require VBR and/or HE-AAC support, go for libfdk, otherwise for bitrates ~128k or higher, use the internal AAC encoder.
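Concretely, the two options being compared look like this on the command line (libfdk_aac requires a build configured with --enable-libfdk-aac; file names are placeholders):

```shell
# Native AAC encoder (always available), CBR around 128 kbps
ffmpeg -i input.wav -c:a aac -b:a 128k output.m4a

# libfdk_aac in VBR mode (quality levels 1-5, higher is better)
ffmpeg -i input.wav -c:a libfdk_aac -vbr 4 output.m4a
```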

On a related note, how do those options compare to Vorbis and Opus, technically and legally? Is there a compelling reason to use AAC over those choices?

The last version was named Feynman. 3.0 was released on Feb 15th, the anniversary of Feynman's death.


Why the heck do they make it so hard to figure out what's in the release?

There will be a proper news release on the homepage soon, I suppose.
