Hacker News
FFmpeg 4.0 released (ffmpeg.org)
462 points by frakturfreund 10 months ago | 61 comments

If you're reading this, FFmpeg developers, please accept my thanks for your work. You have become a "Category Killer"[1] in command-line video tomfoolery.

[1] http://www.catb.org/esr/writings/homesteading/cathedral-baza...

Not just command line. libav* is used in pretty much everything that deals with arbitrary AV formats/codecs.

What is the current relationship between Libav and FFmpeg? I could never figure out which one I should support when there was a falling out, and things might even have changed since then.

When I say libav*, I and most people mean libavformat, libavcodec, libswscale, etc.; the C libraries that form the basis of the command line tool and are widely used elsewhere.

libav (no wildcard) is a fork of FFmpeg that is broadly focused on reducing bloat and cleaning up the API. As such, libav tends to add features more slowly, while FFmpeg generally follows all of libav’s new features and bug fixes as well as its own [1]. Debian was on libav for a while but went back to FFmpeg in 2015 [2].

[1] http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html

[2] http://news.softpedia.com/news/Debian-Moves-to-FFmpeg-and-Dr...

> When I say libav*, I and most people mean libavformat, libavcodec, libswscale, etc.; the C libraries that form the basis of the command line tool and are widely used elsewhere.

Oh, so those things were called "libav" even before the fork, and were perhaps the origin of the fork's name?

yes. some people (including me) would argue that they chose that name in part to cause deliberate confusion.

Lots of hardware acceleration:

- Intel QSV-accelerated MJPEG encoding

- NVIDIA NVDEC-accelerated H.264, HEVC, MJPEG, MPEG-1/2/4, VC1, VP8/9 hwaccel decoding

- Intel QSV-accelerated overlay filter

- OpenCL overlay filter

- VAAPI MJPEG and VP8 decoding

- AMD AMF H.264 and HEVC encoders

- VideoToolbox HEVC encoder and hwaccel

- VAAPI-accelerated ProcAmp (color balance), denoise and sharpness filters

>NVIDIA NVDEC-accelerated H.264, HEVC, MJPEG, MPEG-1/2/4, VC1, VP8/9 hwaccel decoding

Confused, as I've been using ffmpeg for HEVC NVDEC already...

This is a release and the Changelog is reporting changes relative to the last release (3.4 series). HEVC NVDEC was added to the tree in Nov '17 and 3.4 was branched off in Oct.

I've been using it for encoding. I did have to download the Nvidia CUDA SDK to patch it in, though.

Does QSV work on Linux without an X server running?

Does anyone follow https://libav.org development? I was under the impression they merged back with FFmpeg when Michael Niedermayer resigned as leader. Now I see they still make their own releases. So the merge ultimately did not happen?

EDIT: Just found that FFmpeg merges (almost) all libav.org changes: https://github.com/FFmpeg/FFmpeg/blob/master/doc/libav-merge...

Love ffmpeg!

I was trying to do something the other day and couldn’t figure it out, if anyone has any ideas.

The end goal is to provide a set of video files, with time stamps for each, splicing them into one file while removing parts I don’t want.

That is straightforward enough, as long as you’re willing to re-encode the whole file. Otherwise, it seems like ffmpeg is restricted to making cuts at key frames.

It’s rare for the key frame to be placed at the exact spot I would want to make a cut, so the section of the video around the cut would need to be re-encoded. Ideally that would be the only part that is re-encoded - everything else would be a straight stream copy from key frame to key frame.

I believe this is called ‘smart rendering’, and the pages I could find in the past said ffmpeg isn’t really suited for it, or it’s very difficult.

Does anyone know if that has changed recently, or has anyone found a way to do it?
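Not smart rendering, but the closest cheap approximation I know of: stream-copy each wanted range (cuts snap to the nearest key frame) and join the pieces with the concat demuxer. A hedged sketch that only builds the ffmpeg argv lists and the concat list file contents - file names and timestamps are placeholders:

```python
# Sketch: keyframe-snapped lossless cuts joined with ffmpeg's concat demuxer.
# Run the returned commands with subprocess.run(); requires ffmpeg on PATH.

def copy_cut(src, start, end, dst):
    """Stream-copy one range; with -c copy the cut snaps to a preceding key frame."""
    return ["ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", src, "-c", "copy", dst]

def concat_list(parts):
    """Contents of the list file consumed by the concat demuxer."""
    return "".join(f"file '{p}'\n" for p in parts)

def concat_cmd(list_file, dst):
    """Join the parts without re-encoding."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", dst]

print(copy_cut("input.mp4", 0, 12.5, "part0.mp4"))
print(concat_list(["part0.mp4", "part1.mp4"]))
```

True smart rendering would additionally re-encode just the group of pictures around each cut with matching codec settings and splice it in; ffmpeg has no built-in mode for that, as far as I know.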

Depending on the container format, you may not need to re-encode anything. .mp4 supports "edit lists". You can create a .mp4 file that starts at the latest key frame <= the starting timestamp of interest, onward through the ending timestamp of interest. And has an edit list that tells the player to skip the unwanted prefix. You can have arbitrarily many of these in one file. I do this as part of a larger program (security camera NVR), although directly writing the .mp4 rather than instructing ffmpeg to do so.

Afraid I don't know how to do what you want with the ffmpeg commandline tool, though, either by partial re-encoding or by edit lists.

mkv can do something similar with chapter lists

Yes, this is possible, depending on the codec and container. I have done similar operations with h264+mp4.

It's good to be able to edit video without losing quality.

Are you sure you need sub-keyframe precision? In h264+aac+mp4, for example, if it's not keyframe aligned, the result is usually a stalled video frame for a split second, but since the audio continues smoothly, it's not that noticeable.

If you know the exact codec settings that were used to encode the video, you can create new pieces to be fit losslessly together. Otherwise, it is more difficult.

Contact me on twitter at @downpoured and I can describe more.

I hope Ubuntu gets better at updating FFmpeg by promoting it out of the "universe" category of unsupported packages. Or, second-best option, stops shipping it.

Just this week there was an update showing that they had a nearly year-long window of vulnerability due to an out-of-date version[1].

A media-format Christmas tree like this has a lot of vulnerabilities and exposes the user to them fairly directly through media files.

[1] https://bugs.launchpad.net/ubuntu/+source/ffmpeg/+bug/169778...

Seems like a good reason to keep it out of the base installation. Besides the patent minefield that comes with media players, of course.

FFmpeg has been an amazing tool. I don't know if this is helpful, but using statically linked builds has been a big time saver for me. Patent issues can make it tough to get a feature-complete install. The ones below have worked amazingly well.


Awesome, initial AV1 support!

lol I read that as "AVI support" and thought you were being sarcastic :)

That’s why AV1 is a terrible name; they should come up with something else while it’s still possible to change it.

You won't see .av1 file extensions, it will still be in a .mkv or .webm container.

If you write it lowercase the 1 stands out over the i. av1

Or it looks like avl, depending on the font...

This is what has me excited, too (although unfortunately we're still looking at a likely 6-8 years before relevant targets will support it too...).

Depends on what you classify as relevant targets. All the big hardware companies have been on board since the beginning and probably already have prototypes of fixed-function decoders. Chances are we'll have consumer hardware with such decoders sometime next year.

If you actually go on the AV1 spec issue tracker, there are issues (both closed and open) from people at Nvidia, ARM's hardware team, Google and Netflix.

> Removed the ffserver program

Lots of good times with ffserver, although thankfully https://github.com/arut/nginx-rtmp-module seems to meet the same use cases and execs ffmpeg under the hood.

Unfortunately it seems that Roman Arutyunyan has not been able (or willing) to keep up development of nginx-rtmp-module. Thankfully, Mr. Sergey Dryabzhinsky has a fork [0] that has added a lot of nice new features (EXT-X-PROGRAM-DATE-TIME!) and some bug fixes.

[0] https://github.com/sergey-dryabzhinsky/nginx-rtmp-module

Has anyone ever written an ffmpeg script that could break a video apart into interesting cuts?

Someone posted a brilliant script in one of these ffmpeg posts but I can't find it for the life of me. I used it to create "trailers" of my media collection.

Do you have subtitles for those videos?

I wrote a script that cuts out clips of every sentence spoken, and builds them into example sentences to learn Chinese.


Brilliant. Would you mind sharing it?

There's actually several scripts: burn the subtitles into the movie as hard-subs, extend the subtitles by 1 second, make clips of each subtitle, make headings, and combine the clips with the headings.

These are my rough notes I made at the time (you could skip the Pingtype steps if you're not trying to make bilingual language learning material).
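The "make clips of each subtitle" and "extend by 1 second" steps can be sketched without the full pipeline: parse the .srt timestamps and emit one stream-copy command per cue. This is an assumption about how such a script might look, not the author's actual code; file names and the padding value are placeholders:

```python
import re

# Matches .srt (and .vtt-style) timestamp lines: HH:MM:SS,mmm --> HH:MM:SS,mmm
CUE = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+) --> (\d+):(\d+):(\d+)[,.](\d+)")

def parse_srt(text):
    """Yield (start_seconds, end_seconds) for each cue in the subtitle file."""
    for m in CUE.finditer(text):
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        yield (h1 * 3600 + m1 * 60 + s1 + ms1 / 1000,
               h2 * 3600 + m2 * 60 + s2 + ms2 / 1000)

def clip_cmd(src, start, end, dst, pad=1.0):
    """ffmpeg argv for one clip, extended by `pad` seconds as described above."""
    return ["ffmpeg", "-ss", str(start), "-to", str(end + pad),
            "-i", src, "-c", "copy", dst]

sample = "1\n00:00:01,000 --> 00:00:03,500\nexample line\n"
print(list(parse_srt(sample)))
```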


Wow, that's great.

Here's my attempt at building something for language learning since my listening skills trail so far behind my reading skills: https://www.danneu.com/slow-spanish/

It parses this painstakingly created file: https://github.com/danneu/slow-spanish/blob/a455da3a230632c2...

Unfortunately it's really hard to generate the source material (timestamping a transcript).

So my idea was to upload some slow-speaking audio to Youtube and let it autogen its .srt subtitle files. The subtitles don't come out perfectly, but it's the timestamp data I'm after since the goal is a UI that makes it easy to replay and scrub around spoken audio.

Using YouTube to generate the timestamps is a really good idea!

I'm manually recording timestamps while I read/listen to the Bible, verse by verse. Every time I click pause in Pingtype's Media Viewer, it logs the time. It's painstaking, but I'm trying to study each verse while I read anyway, so it's good to let me pause regularly.

There's a lot of LRC data for songs that are used in KTV/Karaoke. You just need to find a good data source for Spanish. In my opinion, listening to music and singing along in church helped my Chinese much more than textbooks. I still lack confidence speaking, but my listening improved a lot when my regular playlist became majority-Chinese (I listen to iTunes all day).

> native aptX and aptX HD encoder and decoder

Sounds great. Is there any benefit for Linux computers that don’t otherwise support aptX? Also, I am wondering how it is possible to include the aptX codec, since its license terms conflict with the GPL?

> The encoder was reverse engineered from binary library and from EP0398973B1 patent (long expired). The decoder was simply deduced from the encoder.


> Aptx support for linux with FFMpeg and bluez-alsa

https://github.com/Samt43/BluetoothAPTXForLinux https://github.com/Arkq/bluez-alsa/issues/92

Thanks for the links. But wow, this work is simply incredible for a one-man job.

The aptX reference implementation is not GPL.

But FFmpeg has a clean-room implementation, based on the (expired) EP0398973B1 patent and on reverse engineering the binary library.

There will be if pulseaudio starts using it to encode. The decoder and encoder in the codebase are both LGPL licensed.

Thank you, ffmpeg contributors. I want to let you know the famous Xzibit entrances video (https://youtu.be/2dkN0YIBjEM) was made in no small part thanks to ffmpeg.

Hopefully this means the imminent packaging of mpv 0.28 with Vulkan support.

Looks like there has been a lot[1] of discussion about it, but nothing decided yet?

1: https://github.com/mpv-player/mpv/issues/5571

That issue is about supporting Vulkan on macOS via MoltenVK, but mpv already supports Vulkan on Windows and Linux.

The problem that I think the parent post is referring to is that mpv 0.28.0, which introduced Vulkan support, also introduced a hard dependency on FFmpeg APIs that haven't been released until now (4.0). Linux distros prefer to use stable versions of packages, so most of them have been packaging FFmpeg 3.x and mpv 0.27.0. They can only upgrade to mpv 0.28.0 (with Vulkan support) now that FFmpeg 4.0 has been released.

It's a great pity ffserver has been removed.

Why is that? I could not find an explanation, though I rarely use ffserver.

Found the reason, sigh: "After thorough deliberation, we're announcing that we're about to drop the ffserver program from the project starting with the next release. ffserver has been a problematic program to maintain due to its use of internal APIs, which complicated the recent cleanups to the libavformat library, and block further cleanups and improvements which are desired by API users and will be easier to maintain. Furthermore the program has been hard for users to deploy and run due to reliability issues, lack of knowledgable people to help and confusing configuration file syntax. Current users and members of the community are invited to write a replacement program to fill the same niche that ffserver did using the new APIs and to contact us so we may point users to test and contribute to its development."

Quick Question,

For a personal project, I would like to generate videos to visualize the evolution of our git repository.

Is ffmpeg the best approach to programmatically create videos? What is the state of Java, Python, or Go bindings for such a use case?

Or should I use OpenGL for this particular use?

I'm new to this, so any help and guidance would be great for me to get started.
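One route that avoids bindings entirely: generate frames in your program and pipe raw RGB into ffmpeg over stdin via the rawvideo demuxer. A minimal sketch, assuming ffmpeg with libx264 is on PATH; the frame content (a moving white bar) is just placeholder imagery:

```python
import subprocess

W, H, FPS = 320, 240, 25

def frame(i):
    """One raw RGB24 frame: a vertical white bar on black that moves each frame."""
    x = i % W
    row = bytearray(3 * W)                 # one black scanline
    row[3 * x:3 * x + 3] = b"\xff\xff\xff" # paint the bar pixel white
    return bytes(row) * H                  # repeat the scanline H times

def encode(path, nframes):
    """Pipe raw frames into ffmpeg and encode them to H.264."""
    cmd = ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
           "-s", f"{W}x{H}", "-r", str(FPS), "-i", "-",
           "-c:v", "libx264", "-pix_fmt", "yuv420p", path]
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for i in range(nframes):
        p.stdin.write(frame(i))
    p.stdin.close()
    p.wait()

# encode("repo-history.mp4", 250)  # ~10 seconds at 25 fps
```

The same pattern works from Go or Java by writing to the child process's stdin, so the choice of language matters less than it might seem.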


If you want to try out Go, it has an amazing GIF library built in.

Here is a nice excerpt out of a tutorial exercise from the book The Go Programming Language: http://www.informit.com/articles/article.aspx?p=2453564&seqN...

Have you considered using Gource?


As an example, here's a video covering 22 years of the evolution of Python:


Thank you for pointing me to Gource, but I wanted to understand, as a general approach, whether it would be better to build this via libffmpeg or OpenGL.

I'm keen on building something and extending it to other use cases, like embedding photographs, milestones, and other major events involving our business unit.

Nice to see a stable release. mpv was already requiring >3.4 (which meant git master) but many other programs did not compile with ffmpeg master...

> support LibreSSL (via libtls)

Wow, libtls! Nice.

What does the "entropy video filter" do?

Generates a histogram of pixel values in a frame and then:

- in normal mode, calculates a (weighted) measure of the variance in pixel values.

- in diff mode, calculates a (weighted) measure of the variance in differences of pixel count between two neighbouring values (if 800 pixels have value 112 and 1400 pixels have value 113, then the (absolute) difference is 600).
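For intuition, the normal-mode measure is essentially Shannon entropy over that histogram. A back-of-the-envelope Python sketch (not the filter's exact weighting):

```python
import math

def shannon_entropy(hist):
    """Entropy in bits of a pixel-value histogram given as a list of counts."""
    total = sum(hist)
    return -sum(c / total * math.log2(c / total) for c in hist if c)

# A uniform 256-bin histogram gives the maximal entropy of 8 bits;
# a frame where every pixel has the same value gives 0 bits.
print(shannon_entropy([10] * 256))
print(shannon_entropy([2560]))
```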

Thank you! Quality info.

Couldn't find a PPA or a docker image for it, would I need to install it from source?

I would really like to test AV1 with it.

The canonical way is just to grab the static build from the home page. You don't need a full OS container image to run a single binary.

Yes. PPA and Docker are great but compiling isn't too difficult and worth remembering how to do it.

The initial compile isn’t bad but the second you want to render some font on a video with ffmpeg on Ubuntu, good luck with that
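For context, text rendering goes through the drawtext filter, which requires FFmpeg configured with --enable-libfreetype and a font file available at runtime (this is the usual pain point on a minimal build). A sketch of the invocation, built as an argv list; the font path and text are placeholders:

```python
def drawtext_cmd(src, dst, text, fontfile):
    """ffmpeg argv that overlays `text` at the top-left using drawtext."""
    vf = (f"drawtext=fontfile={fontfile}:text='{text}':"
          "x=10:y=10:fontsize=24:fontcolor=white")
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst]

print(drawtext_cmd("in.mp4", "out.mp4", "hello",
                   "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"))
```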

Are the fonts compiled in?
