libav (no wildcard) is a fork of FFmpeg that is broadly focused on reducing bloat and cleaning up the API. As such, libav tends to add features more slowly, while FFmpeg generally picks up all of libav's new features and bug fixes in addition to its own. Debian was on libav for a while but went back to FFmpeg in 2015.
Oh, so were those libraries called "libav" even before the fork, and perhaps the origin of the fork's name?
- Intel QSV-accelerated MJPEG encoding
- NVIDIA NVDEC-accelerated H.264, HEVC, MJPEG, MPEG-1/2/4, VC1, VP8/9 hwaccel decoding
- Intel QSV-accelerated overlay filter
- OpenCL overlay filter
- VAAPI MJPEG and VP8 decoding
- AMD AMF H.264 and HEVC encoders
- VideoToolbox HEVC encoder and hwaccel
- VAAPI-accelerated ProcAmp (color balance), denoise and sharpness filters
Confused, as I've been using ffmpeg for HEVC NVDEC already...
EDIT: Just found that FFmpeg merges (almost) all libav.org changes: https://github.com/FFmpeg/FFmpeg/blob/master/doc/libav-merge...
I was trying to do something the other day and couldn’t figure it out, if anyone has any ideas.
The end goal is to provide a set of video files, with time stamps for each, splicing them into one file while removing parts I don’t want.
That is straightforward enough, as long as you’re willing to re-encode the whole file. Otherwise, it seems like ffmpeg is restricted to make cuts at key frames.
It’s rare for the key frame to be placed at the exact spot I would want to make a cut, so the section of the video around the cut would need to be re-encoded. Ideally that would be the only part that is re-encoded - everything else would be a straight stream copy from key frame to key frame.
I believe this is called ‘smart rendering’, and the pages I could find in the past said ffmpeg isn’t really suited for it, or it’s very difficult.
Does anyone know if that has changed recently, or have found a way to do it?
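For reference, here is a rough sketch of the trade-off being described; the file names and timestamps are made up:

```shell
# Lossless cut: stream copy (-c copy) avoids re-encoding, but when
# seeking, the cut point effectively snaps to a key frame boundary,
# so the cut is fast and lossless but not frame-accurate.
ffmpeg -ss 00:01:30 -i input.mp4 -t 75 -c copy cut_fast.mp4

# Frame-accurate cut: decode and re-encode the selection, which is
# exact but slow and loses a generation of quality.
ffmpeg -ss 00:01:30 -i input.mp4 -t 75 -c:v libx264 -c:a aac cut_exact.mp4
```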
Afraid I don't know how to do what you want with the ffmpeg command-line tool, though - either by partial re-encoding or by edit lists.
It's good to be able to edit video without losing quality.
Are you sure you need sub-keyframe precision? In h264+aac+mp4, for example, if it's not keyframe aligned, the result is usually a stalled video frame for a split second, but since the audio continues smoothly, it's not that noticeable.
If you know the exact codec settings that were used to encode the video, you can create new pieces to be fit losslessly together. Otherwise, it is more difficult.
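Assuming you can produce boundary segments re-encoded with matching settings, one way to splice the pieces together without touching the stream-copied middle is the concat demuxer (the file names here are hypothetical):

```shell
# parts.txt lists the segments in playback order:
#   file 'head_reencoded.mp4'
#   file 'middle_streamcopy.mp4'
#   file 'tail_reencoded.mp4'
ffmpeg -f concat -safe 0 -i parts.txt -c copy spliced.mp4
```

This only works cleanly when all the segments share the same codec, resolution, and timebase, which is exactly why the re-encoded edge pieces need to match the original's settings.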
Contact me on twitter at @downpoured and I can describe more.
Just this week there was an update showing that they had nearly a year-long window of vulnerability due to an out-of-date version.
A media-format Christmas tree like this has a lot of vulnerabilities and exposes the user to them fairly directly through media files.
If you actually go on the AV1 spec issue tracker, there are issues (both closed and open) from people at Nvidia, ARM's hardware team, Google and Netflix.
Lots of good times with ffserver, although thankfully https://github.com/arut/nginx-rtmp-module seems to meet the same use cases and can exec ffmpeg under the hood.
Someone posted a brilliant script in one of these ffmpeg posts but I can't find it for the life of me. I used it to create "trailers" of my media collection.
I wrote a script that cuts out clips of every sentence spoken, and builds them into example sentences to learn Chinese.
These are my rough notes I made at the time (you could skip the Pingtype steps if you're not trying to make bilingual language learning material).
Here's my attempt at building something for language learning since my listening skills trail so far behind my reading skills: https://www.danneu.com/slow-spanish/
It parses this painstakingly created file: https://github.com/danneu/slow-spanish/blob/a455da3a230632c2...
Unfortunately it's really hard to generate the source material (timestamping a transcript).
So my idea was to upload some slow-speaking audio to Youtube and let it autogen its .srt subtitle files. The subtitles don't come out perfectly, but it's the timestamp data I'm after since the goal is a UI that makes it easy to replay and scrub around spoken audio.
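If you go that route, pulling the timestamps back out of the generated .srt file is the easy part; here is a minimal sketch, assuming standard SubRip formatting (the sample text is made up):

```python
import re

# Matches SRT cue timing lines like "00:00:01,500 --> 00:00:04,200".
TIMING = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def srt_timestamps(srt_text):
    """Return a list of (start, end) pairs in seconds, one per cue."""
    cues = []
    for match in TIMING.finditer(srt_text):
        g = match.groups()
        cues.append((to_seconds(*g[:4]), to_seconds(*g[4:])))
    return cues

sample = """1
00:00:01,500 --> 00:00:04,200
Hola, ¿cómo estás?

2
00:00:04,900 --> 00:00:07,000
Muy bien, gracias.
"""
print(srt_timestamps(sample))  # [(1.5, 4.2), (4.9, 7.0)]
```

With the (start, end) pairs in hand, driving the replay/scrub UI is just seeking the audio element to `start` and pausing at `end`.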
I'm manually recording timestamps while I read/listen to the Bible, verse by verse. Every time I click pause in Pingtype's Media Viewer, it logs the time. It's painstaking, but I'm trying to study each verse while I read anyway, so it's good to let me pause regularly.
There's a lot of LRC data for songs that are used in KTV/Karaoke. You just need to find a good data source for Spanish. In my opinion, listening to music and singing along in church helped my Chinese much more than textbooks. I still lack confidence speaking, but my listening improved a lot when my regular playlist became majority-Chinese (I listen to iTunes all day).
Sounds great. Does this mean anything for Linux computers that don’t support aptX? Also, I am wondering how it is possible to include the aptX codec, since its license terms conflict with the GPL?
> Aptx support for linux with FFMpeg and bluez-alsa
But FFmpeg has a clean-room implementation, based on the (expired) EP0398973B1 patent and on reverse-engineering the binary library.
The problem that I think the parent post is referring to is that mpv 0.28.0, which introduced Vulkan support, also introduced a hard dependency on FFmpeg APIs that haven't been released until now (4.0). Linux distros prefer to use stable versions of packages, so most of them have been packaging FFmpeg 3.x and mpv 0.27.0. They can only upgrade to mpv 0.28.0 (with Vulkan support) now that FFmpeg 4.0 has been released.
For a personal project, I would like to generate videos to visualize the evolution of our git repository.
Is ffmpeg the best approach to programmatically create videos?
What is the state of Java, Python, or Go bindings for such a use case?
Or should I use OpenGL for this particular use?
I'm new to this, so any help and guidance would be great for me to get started.
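ffmpeg works well for this kind of thing if you render the frames yourself and pipe them in as raw video; a minimal sketch of the frame side, where the pulsing-gray pattern is just a placeholder for your real repo visualization:

```python
WIDTH, HEIGHT, FPS = 320, 240, 30

def make_frame(t):
    """Render one rgb24 frame (WIDTH * HEIGHT * 3 bytes) for time t seconds."""
    shade = int(t * 10) % 256          # placeholder animation: pulsing gray
    return bytes([shade, shade, shade]) * (WIDTH * HEIGHT)

frame = make_frame(1.0)
print(len(frame))  # 230400 bytes = 320 * 240 * 3

# To encode, write each frame to ffmpeg's stdin (via subprocess), e.g.:
#   ffmpeg -f rawvideo -pix_fmt rgb24 -s 320x240 -r 30 -i - out.mp4
```

The nice part of this pattern is that the drawing code stays in whatever language you like; ffmpeg only ever sees a stream of raw bytes.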
Here is a nice excerpt out of a tutorial exercise from the book The Go Programming Language:
As an example, here's a video covering 22 years of the evolution of Python:
I'm keen on building something, and extending to other use cases like embedding photographs, milestones and other major events involving our business unit.
> support LibreSSL (via libtls)
Wow, libtls! Nice.
in normal mode, calculates a (weighted) measure of the variance in pixel values.
in diff mode, calculates a (weighted) measure of the variance in differences of pixel count between two neighbouring values (if 800 pixels have value 112 and 1400 pixels have value 113, then the (abs) difference is 600)
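As I read the diff-mode description, the per-pair quantity is just the absolute difference of neighbouring histogram counts; here is a tiny sketch of that interpretation (my reading of the description, not the filter's actual source):

```python
def neighbour_count_diffs(histogram):
    """histogram maps pixel value -> pixel count; return the absolute
    difference in counts for each pair of neighbouring values present."""
    values = sorted(histogram)
    return [abs(histogram[b] - histogram[a])
            for a, b in zip(values, values[1:])]

# The example from the description: 800 pixels at value 112,
# 1400 pixels at value 113 -> abs difference 600.
print(neighbour_count_diffs({112: 800, 113: 1400}))  # [600]
```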
I would really like to test AV1 with it.