Isn't that an inefficient way to use ffmpeg to extract a frame every n seconds?
You'll read the first 10 seconds (d/n) times, for example.
Probably better to grab the frames and then rename them according to the time afterwards (because ffmpeg doesn't yet support timestamps in output filename patterns).
ffmpeg -i video -vf fps=1/10 output_%04d.jpg
OUTPUT_DIR=.               # destination directory for the renamed frames
j=10                       # first frame lands at t = 10 s with fps=1/10
for i in output_*.jpg; do  # %04d padding above keeps the glob in numeric order
  t=$(printf '%s/frame_%02d:%02d:%02d.jpg' "$OUTPUT_DIR" $((j/3600)) $(((j/60)%60)) $((j%60)))
  mv -f "$i" "$t"
  j=$((j+10))
done
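The HH:MM:SS arithmetic in that rename can be sanity-checked in isolation with a made-up offset (3725 s should come out as 01:02:05):

```shell
# Hypothetical offset purely for checking the seconds -> HH:MM:SS conversion.
j=3725
name=$(printf 'frame_%02d:%02d:%02d.jpg' $((j/3600)) $(((j/60)%60)) $((j%60)))
echo "$name"   # frame_01:02:05.jpg
```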
Although for big files, `-fps` turns out to be slow. https://github.com/fluent-ffmpeg/node-fluent-ffmpeg/issues/4... is a much better solution: multiple `-ss` and `-i` options on the same file. For the 14 GB 4K video I tested, `-fps 1/462` was on track to take ~4 h, `-vf fps="fps=1/60"` took about 60 minutes, and the multiple-`-ss` variant took ~35 s.
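A rough sketch of that multiple `-ss`/`-i` approach (the filename and timestamps here are made up, and the script only echoes the command instead of running it). Placing `-ss` before each `-i` makes ffmpeg keyframe-seek in the input rather than decode everything up to that point, which is where the speedup comes from:

```shell
video=input.mp4                    # hypothetical input file
args=()
n=0
for t in 60 120 180; do            # seconds at which to grab a frame
  args+=(-ss "$t" -i "$video")     # fast input-side seek, one open per timestamp
done
for t in 60 120 180; do
  args+=(-map "$n:v" -frames:v 1 "frame_${t}s.jpg")  # one output frame per input
  n=$((n+1))
done
echo ffmpeg "${args[@]}"           # inspect the generated command line
```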