@chapter Muxers
@c man begin MUXERS
Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.
When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.
You can disable all the muxers with the configure option
@code{--disable-muxers} and selectively enable / disable single muxers
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.
The option @code{-muxers} of the ff* tools will display the list of
enabled muxers. Use @code{-formats} to view a combined list of
enabled demuxers and muxers.
A description of some of the currently available muxers follows.
@anchor{raw muxers}
@section Raw muxers
This section covers raw muxers. They accept a single stream matching
the designated codec. They do not store timestamps or metadata. The
recognized extension is the same as the muxer name unless indicated
otherwise.
It comprises the following muxers. The media type and the extensions
used to automatically select the muxer from the output file extension
are also shown.
@table @samp
@item ac3 @emph{audio}
Dolby Digital, also known as AC-3.
@item adx @emph{audio}
CRI Middleware ADX audio.
This muxer will write out the total sample count near the start of the
first packet when the output is seekable and the count can be stored
in 32 bits.
@item aptx @emph{audio}
aptX (Audio Processing Technology for Bluetooth)
@item aptx_hd @emph{audio} (aptxhd)
aptX HD (Audio Processing Technology for Bluetooth) audio
@item avs2 @emph{video} (avs, avs2)
AVS2-P2 (Audio Video Standard - Second generation - Part 2) /
IEEE 1857.4 video
@item avs3 @emph{video} (avs3)
AVS3-P2 (Audio Video Standard - Third generation - Part 2) /
IEEE 1857.10 video
@item cavsvideo @emph{video} (cavs)
Chinese AVS (Audio Video Standard - First generation)
@item codec2raw @emph{audio}
Codec 2 audio.
No extension is registered so format name has to be supplied e.g. with
the @command{ffmpeg} CLI tool @code{-f codec2raw}.
@item data @emph{any}
Generic data muxer.
This muxer accepts a single stream with any codec of any type. The
input stream has to be selected using the @code{-map} option with the
@command{ffmpeg} CLI tool.
No extension is registered so format name has to be supplied e.g. with
the @command{ffmpeg} CLI tool @code{-f data}.
@item dfpwm @emph{audio} (dfpwm)
Raw DFPWM1a (Dynamic Filter Pulse Width Modulation) audio muxer.
@item dirac @emph{video} (drc, vc2)
BBC Dirac video.
The Dirac Pro codec is a subset and is standardized as SMPTE VC-2.
@item dnxhd @emph{video} (dnxhd, dnxhr)
Avid DNxHD video.
It is standardized as SMPTE VC-3. Accepts DNxHR streams.
@item dts @emph{audio}
DTS Coherent Acoustics (DCA) audio
@item eac3 @emph{audio}
Dolby Digital Plus, also known as Enhanced AC-3
@item evc @emph{video} (evc)
MPEG-5 Essential Video Coding (EVC) / MPEG-5 Part 1 EVC video
@item g722 @emph{audio}
ITU-T G.722 audio
@item g723_1 @emph{audio} (tco, rco)
ITU-T G.723.1 audio
@item g726 @emph{audio}
ITU-T G.726 big-endian ("left-justified") audio.
No extension is registered so format name has to be supplied e.g. with
the @command{ffmpeg} CLI tool @code{-f g726}.
@item g726le @emph{audio}
ITU-T G.726 little-endian ("right-justified") audio.
No extension is registered so format name has to be supplied e.g. with
the @command{ffmpeg} CLI tool @code{-f g726le}.
@item gsm @emph{audio}
Global System for Mobile Communications audio
@item h261 @emph{video}
ITU-T H.261 video
@item h263 @emph{video}
ITU-T H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 video
@item h264 @emph{video} (h264, 264)
ITU-T H.264 / MPEG-4 Part 10 AVC video. Bitstream shall be converted
to Annex B syntax if it's in length-prefixed mode.
@item hevc @emph{video} (hevc, h265, 265)
ITU-T H.265 / MPEG-H Part 2 HEVC video. Bitstream shall be converted
to Annex B syntax if it's in length-prefixed mode.
@item m4v @emph{video}
MPEG-4 Part 2 video
@item mjpeg @emph{video} (mjpg, mjpeg)
Motion JPEG video
@item mlp @emph{audio}
Meridian Lossless Packing, also known as Packed PCM
@item mp2 @emph{audio} (mp2, m2a, mpa)
MPEG-1 Audio Layer II audio
@item mpeg1video @emph{video} (mpg, mpeg, m1v)
MPEG-1 Part 2 video.
@item mpeg2video @emph{video} (m2v)
ITU-T H.262 / MPEG-2 Part 2 video
@item obu @emph{video}
AV1 low overhead Open Bitstream Units muxer.
Temporal delimiter OBUs will be inserted in all temporal units of the
stream.
@item rawvideo @emph{video} (yuv, rgb)
Raw uncompressed video.
@item sbc @emph{audio} (sbc, msbc)
Bluetooth SIG low-complexity subband codec audio
@item truehd @emph{audio} (thd)
Dolby TrueHD audio
@item vc1 @emph{video}
SMPTE 421M / VC-1 video
@end table
@subsection Examples
@itemize
@item
Store raw video frames with the @samp{rawvideo} muxer using @command{ffmpeg}:
@example
ffmpeg -f lavfi -i testsrc -t 10 -s hd1080 testsrc.rgb
@end example
Since the rawvideo muxer does not store the information related to size
and format, this information must be provided when demuxing the file:
@example
ffplay -video_size 1920x1080 -pixel_format rgb24 -f rawvideo testsrc.rgb
@end example
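@item
Extract a raw H.264 Annex B elementary stream from an MP4 file with
@command{ffmpeg} (a sketch; when stream-copying from a length-prefixed
source, current @command{ffmpeg} versions insert the conversion to
Annex B automatically):
@example
ffmpeg -i INPUT.mp4 -map 0:v:0 -c:v copy -f h264 output.h264
@end example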
@end itemize
@section Raw PCM muxers
This section covers raw PCM (Pulse-Code Modulation) audio muxers.
They accept a single stream matching the designated codec. They do not
store timestamps or metadata. The recognized extension is the same as
the muxer name.
It comprises the following muxers. The optional additional extension
used to automatically select the muxer from the output extension is
also shown in parentheses.
@table @samp
@item alaw (al)
PCM A-law
@item f32be
PCM 32-bit floating-point big-endian
@item f32le
PCM 32-bit floating-point little-endian
@item f64be
PCM 64-bit floating-point big-endian
@item f64le
PCM 64-bit floating-point little-endian
@item mulaw (ul)
PCM mu-law
@item s16be
PCM signed 16-bit big-endian
@item s16le
PCM signed 16-bit little-endian
@item s24be
PCM signed 24-bit big-endian
@item s24le
PCM signed 24-bit little-endian
@item s32be
PCM signed 32-bit big-endian
@item s32le
PCM signed 32-bit little-endian
@item s8 (sb)
PCM signed 8-bit
@item u16be
PCM unsigned 16-bit big-endian
@item u16le
PCM unsigned 16-bit little-endian
@item u24be
PCM unsigned 24-bit big-endian
@item u24le
PCM unsigned 24-bit little-endian
@item u32be
PCM unsigned 32-bit big-endian
@item u32le
PCM unsigned 32-bit little-endian
@item u8 (ub)
PCM unsigned 8-bit
@item vidc
PCM Archimedes VIDC
@end table
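@subsection Examples
@itemize
@item
Store the audio stream of an input as raw signed 16-bit little-endian
PCM with @command{ffmpeg}:
@example
ffmpeg -i INPUT -f s16le output.raw
@end example
Since the raw PCM muxers store no parameters, the sample format, sample
rate and channel count must be provided again when reading the file
back, e.g. assuming the input was 44100 Hz stereo:
@example
ffmpeg -f s16le -ar 44100 -ac 2 -i output.raw output.wav
@end example
@end itemize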
@section MPEG-1/MPEG-2 program stream muxers
This section covers formats belonging to the MPEG-1 and MPEG-2 Systems
family.
The MPEG-1 Systems format (also known as ISO/IEC 11172-1 or MPEG-1
program stream) has been adopted as the format of the media tracks
stored on VCD (Video Compact Disc).
The MPEG-2 Systems standard (also known as ISO/IEC 13818-1) covers
two container formats, one known as transport stream and one known as
program stream; only the latter is covered here.
The MPEG-2 program stream format (also known as VOB due to the
corresponding file extension) is an extension of the MPEG-1 program
stream: in addition to supporting different codecs for the audio and
video streams, it also stores subtitles and navigation metadata.
The MPEG-2 program stream has been adopted for storing media streams
on SVCD and DVD storage devices.
This section comprises the following muxers.
@table @samp
@item mpeg (mpg,mpeg)
MPEG-1 Systems / MPEG-1 program stream muxer.
@item vcd
MPEG-1 Systems / MPEG-1 program stream (VCD) muxer.
This muxer can be used to generate tracks in the format accepted by
the VCD (Video Compact Disc) storage devices.
It is the same as the @samp{mpeg} muxer with a few differences.
@item vob
MPEG-2 program stream (VOB) muxer.
@item dvd
MPEG-2 program stream (DVD VOB) muxer.
This muxer can be used to generate tracks in the format accepted by
the DVD (Digital Versatile Disc) storage devices.
This is the same as the @samp{vob} muxer with a few differences.
@item svcd (vob)
MPEG-2 program stream (SVCD VOB) muxer.
This muxer can be used to generate tracks in the format accepted by
the SVCD (Super Video Compact Disc) storage devices.
This is the same as the @samp{vob} muxer with a few differences.
@end table
@subsection Options
@table @option
@item muxrate @var{rate}
Set user-defined mux rate expressed as a number of bits/s. If not
specified, the automatically computed mux rate is employed. Default
value is @code{0}.
@item preload @var{delay}
Set initial demux-decode delay in microseconds. Default value is
@code{500000}.
@end table
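@subsection Example
Use @command{ffmpeg} with the @option{-target} option to produce a
DVD-compliant MPEG-2 program stream; @samp{pal-dvd} selects the
@samp{dvd} muxer together with suitable codec and rate parameters (a
sketch, shown here for a PAL target):
@example
ffmpeg -i INPUT -target pal-dvd output.vob
@end example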
@section MOV/MPEG-4/ISOBMFF muxers
This section covers formats belonging to the QuickTime / MOV family,
including the MPEG-4 Part 14 (MP4) format. These formats share a
common structure, standardized as the ISO base media file format
(ISOBMFF).
The MOV format was originally developed for use with Apple QuickTime.
It was later used as the basis for the MPEG-4 Part 1 (later Part 14)
format, also known as ISO/IEC 14496-1. That format was then
generalized into ISOBMFF, also named MPEG-4 Part 12 format, ISO/IEC
14496-12, or ISO/IEC 15444-12.
It comprises the following muxers.
@table @samp
@item 3gp
Third Generation Partnership Project (3GPP) format for 3G UMTS
multimedia services
@item 3g2
Third Generation Partnership Project 2 (3GP2 or 3GPP2) format for 3G
CDMA2000 multimedia services, similar to @samp{3gp} with extensions
and limitations
@item f4v
Adobe Flash Video format
@item ipod
MPEG-4 audio file format, as MOV/MP4 but limited to containing only
audio streams, typically played on Apple iPod devices
@item ismv
Microsoft IIS (Internet Information Services) Smooth Streaming
Audio/Video (ISMV or ISMA) format. This is based on MPEG-4 Part 14
format with a few incompatible variants, used to stream media files
for the Microsoft IIS server.
@item mov
QuickTime player format identified by the @code{.mov} extension
@item mp4
MP4 or MPEG-4 Part 14 format
@item psp
PlayStation Portable MP4/MPEG-4 Part 14 format variant. This is based
on MPEG-4 Part 14 format with a few incompatible variants, used to
play files on PlayStation devices.
@end table
@subsection Fragmentation
The @samp{mov}, @samp{mp4}, and @samp{ismv} muxers support
fragmentation. Normally, a MOV/MP4 file has all the metadata about all
packets stored in one location.
This data is usually written at the end of the file, but it can be
moved to the start for better playback by adding @code{+faststart} to
@code{-movflags}, or by using the @command{qt-faststart} tool.
A fragmented file consists of a number of fragments, where packets and
metadata about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the writing
is interrupted (while a normal MOV/MP4 is undecodable if it is not
properly finished), and it requires less memory when writing very long
files (since writing normal MOV/MP4 files stores info about every
single packet in memory until the file is closed). The downside is
that it is less compatible with other applications.
Fragmentation is enabled by setting one of the options that define
how to cut the file into fragments:
@table @option
@item frag_duration
@item frag_size
@item min_frag_duration
@item movflags +frag_keyframe
@item movflags +frag_custom
@end table
If more than one condition is specified, fragments are cut when one of
the specified conditions is fulfilled. The exception to this is the
option @option{min_frag_duration}, which has to be fulfilled for any
of the other conditions to apply.
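For example, a fragmented MP4 file cut at video keyframes can be
written with a command along these lines (a minimal sketch; the codec
choices are illustrative):
@example
ffmpeg -i INPUT -c:v libx264 -c:a aac -movflags +frag_keyframe fragmented.mp4
@end example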
@subsection Options
@table @option
@item brand @var{brand_string}
Override major brand.
@item empty_hdlr_name @var{bool}
Enable to skip writing the name inside a @code{hdlr} box.
Default is @code{false}.
@item encryption_key @var{key}
set the media encryption key in hexadecimal format
@item encryption_kid @var{kid}
set the media encryption key identifier in hexadecimal format
@item encryption_scheme @var{scheme}
configure the encryption scheme, allowed values are @samp{none}, and
@samp{cenc-aes-ctr}
@item frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item frag_interleave @var{number}
Interleave samples within fragments (max number of consecutive
samples, lower is tighter interleaving, but with more overhead). It is
set to @code{0} by default.
@item frag_size @var{size}
create fragments that contain up to @var{size} bytes of payload data
@item iods_audio_profile @var{profile}
specify iods number for the audio profile atom (from -1 to 255),
default is @code{-1}
@item iods_video_profile @var{profile}
specify iods number for the video profile atom (from -1 to 255),
default is @code{-1}
@item ism_lookahead @var{num_entries}
specify number of lookahead entries for ISM files (from 0 to 255),
default is @code{0}
@item min_frag_duration @var{duration}
do not create fragments that are shorter than @var{duration} microseconds long
@item moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item mov_gamma @var{gamma}
specify gamma value for the gama atom (as a decimal number from 0 to
10), default is @code{0.0}; must be set together with the
@samp{write_gama} flag in @option{movflags}
@item movflags @var{flags}
Set various muxing switches. The following flags can be used:
@table @samp
@item cmaf
write CMAF (Common Media Application Format) compatible fragmented
MP4 output
@item dash
write DASH (Dynamic Adaptive Streaming over HTTP) compatible fragmented
MP4 output
@item default_base_moof
Similarly to the @samp{omit_tfhd_offset} flag, this flag avoids
writing the absolute base_data_offset field in tfhd atoms, but does so
by using the new default-base-is-moof flag instead. This flag is new
from 14496-12:2012. This may make the fragments easier to parse in
certain circumstances (avoiding basing track fragment location
calculations on the implicit end of the previous track fragment).
@item delay_moov
delay writing the initial moov until the first fragment is cut, or
until the first fragment flush
@item disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this
option set, only the QuickTime chapter track will be written. Nero
chapters can cause failures when the file is reprocessed with certain
tagging programs, like mp3Tag 2.61a and iTunes 11.3; most likely other
versions are affected as well.
@item faststart
Run a second pass moving the index (moov atom) to the beginning of the
file. This operation can take a while, and will not work in various
situations such as fragmented output, thus it is not enabled by
default.
@item frag_custom
Allow the caller to manually choose when to cut fragments, by calling
@code{av_write_frame(ctx, NULL)} to write a fragment with the packets
written so far. (This is only useful with other applications
integrating libavformat, not from @command{ffmpeg}.)
@item frag_discont
signal that the next fragment is discontinuous from earlier ones
@item frag_every_frame
fragment at every frame
@item frag_keyframe
start a new fragment at each video keyframe
@item global_sidx
write a global sidx index at the start of the file
@item isml
create a live smooth streaming feed (for pushing to a publishing point)
@item negative_cts_offsets
Enables utilization of version 1 of the CTTS box, in which the CTS offsets can
be negative. This enables the initial sample to have DTS/CTS of zero, and
reduces the need for edit lists for some cases such as video tracks with
B-frames. Additionally, it eases conformance with the DASH-IF
interoperability guidelines.
This option is implicitly set when writing @samp{ismv} (Smooth
Streaming) files.
@item omit_tfhd_offset
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
@item prefer_icc
If writing a colr atom, prioritise the use of the ICC profile if it
exists in the stream packet side data.
@item rtphint
add RTP hinting tracks to the output file
@item separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one
moof/mdat pair for each track, making it easier to separate tracks.
@item skip_sidx
Skip writing of the sidx atom. When the bitrate overhead due to the
sidx atom is high, this option can be used in cases where the sidx
atom is not mandatory. When the @samp{global_sidx} flag is enabled,
this option is ignored.
@item skip_trailer
skip writing the mfra/tfra/mfro trailer for fragmented files
@item use_metadata_tags
use mdta atom for metadata
@item write_colr
write colr atom even if the color info is unspecified. This flag is
experimental, may be renamed or changed, do not use from scripts.
@item write_gama
write deprecated gama atom
@item hybrid_fragmented
Write the output file as a fragmented file while muxing, for
recoverability. This allows the intermediate file to be read while it
is being written (in particular, if the writing process is aborted
uncleanly). When writing is finished, the file is converted to a
regular, non-fragmented file, which is more compatible and allows
easier and quicker seeking.
If writing is aborted, the intermediate file can manually be remuxed
to obtain a regular, non-fragmented file of what had been written into
the unfinished file. See the example in the Examples subsection below.
@end table
@item movie_timescale @var{scale}
Set the timescale written in the movie header box (@code{mvhd}).
Range is 1 to INT_MAX. Default is @code{1000}.
@item rtpflags @var{flags}
Add RTP hinting tracks to the output file.
The following flags can be used:
@table @samp
@item h264_mode0
use mode 0 for H.264 in RTP
@item latm
use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC
@item rfc2190
use RFC 2190 packetization instead of RFC 4629 for H.263
@item send_bye
send RTCP BYE packets when finishing
@item skip_rtcp
do not send RTCP sender reports
@end table
@item skip_iods @var{bool}
skip writing iods atom (default value is @code{true})
@item use_editlist @var{bool}
use edit list (default value is @code{auto})
@item use_stream_ids_as_track_ids @var{bool}
use stream ids as track ids (default value is @code{false})
@item video_track_timescale @var{scale}
Set the timescale used for video tracks. Range is @code{0} to INT_MAX. If
set to @code{0}, the timescale is automatically set based on the
native stream time base. Default is @code{0}.
@item write_btrt @var{bool}
Force or disable writing bitrate box inside stsd box of a track. The
box contains decoding buffer size (in bytes), maximum bitrate and
average bitrate for the track. The box will be skipped if none of
these values can be computed. Default is @code{-1} or @code{auto},
which will write the box only in MP4 mode.
@item write_prft @var{option}
Write producer time reference box (PRFT) with a specified time source for the
NTP field in the PRFT box. Set value as @samp{wallclock} to specify timesource
as wallclock time and @samp{pts} to specify timesource as input packets' PTS
values.
Setting value to @samp{pts} is applicable only for a live encoding use case,
where PTS values are set as wallclock time at the source. For example, an
encoding use case with decklink capture source where @option{video_pts} and
@option{audio_pts} are set to @samp{abs_wallclock}.
@item write_tmcd @var{bool}
Specify @code{on} to force writing a timecode track, @code{off} to disable it
and @code{auto} to write a timecode track only for mov and mp4 output (default).
@end table
@subsection Examples
@itemize
@item
Push Smooth Streaming content in real time to a publishing point on
IIS with the @samp{ismv} muxer using @command{ffmpeg}:
@example
ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
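@item
Remux an existing MP4 file, moving the index to the beginning for
faster playback start, with @command{ffmpeg} (a sketch using stream
copy):
@example
ffmpeg -i INPUT.mp4 -c copy -movflags +faststart output.mp4
@end example
@item
Write a hybrid fragmented/non-fragmented MP4 file that remains readable
if writing is aborted, as described for the @samp{hybrid_fragmented}
flag above (a sketch; the codec choices are illustrative):
@example
ffmpeg -i INPUT -c:v libx264 -c:a aac -movflags +hybrid_fragmented output.mp4
@end example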
@end itemize
@anchor{a64}
@section a64
A64 Commodore 64 video muxer.
This muxer accepts a single @code{a64_multi} or @code{a64_multi5}
codec video stream.
@section ac4
Raw AC-4 audio muxer.
This muxer accepts a single @code{ac4} audio stream.
@subsection Options
@table @option
@item write_crc @var{bool}
when enabled, write a CRC checksum for each packet to the output,
default is @code{false}
@end table
@anchor{adts}
@section adts
Audio Data Transport Stream muxer.
It accepts a single AAC stream.
@subsection Options
@table @option
@item write_id3v2 @var{bool}
Enable to write ID3v2.4 tags at the start of the stream. Default is
disabled.
@item write_apetag @var{bool}
Enable to write APE tags at the end of the stream. Default is
disabled.
@item write_mpeg2 @var{bool}
Enable to set MPEG version bit in the ADTS frame header to 1 which
indicates MPEG-2. Default is 0, which indicates MPEG-4.
@end table
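@subsection Example
Use @command{ffmpeg} to encode an input to AAC in an ADTS stream (a
sketch; the native @samp{aac} encoder is assumed):
@example
ffmpeg -i INPUT -c:a aac -f adts output.aac
@end example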
@anchor{aea}
@section aea
MD STUDIO audio muxer.
This muxer accepts a single ATRAC1 audio stream with either one or two channels
and a sample rate of 44100Hz.
As AEA supports storing the track title, this muxer will also write
the title from the stream's metadata to the container.
@anchor{aiff}
@section aiff
Audio Interchange File Format muxer.
@subsection Options
@table @option
@item write_id3v2 @var{bool}
Enable ID3v2 tag writing when set to 1. Default is 0 (disabled).
@item id3v2_version @var{version}
Select the ID3v2 version to write. Currently only versions 3 and 4
(i.e. ID3v2.3 and ID3v2.4) are supported. The default is version 4.
@end table
@anchor{alp}
@section alp
High Voltage Software's Lego Racers game audio muxer.
It accepts a single ADPCM_IMA_ALP stream with no more than 2 channels
and a sample rate not greater than 44100 Hz.
Extensions: @code{tun}, @code{pcm}
@subsection Options
@table @option
@item type @var{type}
Set file type.
@var{type} accepts the following values:
@table @samp
@item tun
Set file type as music. Must have a sample rate of 22050 Hz.
@item pcm
Set file type as sfx.
@item auto
Set file type according to the output file extension: @code{.pcm}
results in type @code{pcm}, otherwise type @code{tun} is set.
@emph{(default)}
@end table
@end table
@section amr
3GPP AMR (Adaptive Multi-Rate) audio muxer.
It accepts a single audio stream containing an AMR NB stream.
@section amv
AMV (Actions Media Video) format muxer.
@section apm
Ubisoft Rayman 2 APM audio muxer.
It accepts a single ADPCM IMA APM audio stream.
@section apng
Animated Portable Network Graphics muxer.
It accepts a single APNG video stream.
@subsection Options
@table @option
@item final_delay @var{delay}
Force a delay expressed in seconds after the last frame of each
repetition. Default value is @code{0.0}.
@item plays @var{repetitions}
specify how many times to play the content, @code{0} causes an
infinite loop, with @code{1} there is no loop
@end table
@subsection Examples
@itemize
@item
Use @command{ffmpeg} to generate an APNG output with 2 repetitions,
and with a delay of half a second after the first repetition:
@example
ffmpeg -i INPUT -final_delay 0.5 -plays 2 out.apng
@end example
@end itemize
@section argo_asf
Argonaut Games ASF audio muxer.
It accepts a single ADPCM audio stream.
@subsection Options
@table @option
@item version_major @var{version}
override file major version, specified as an integer, default value is
@code{2}
@item version_minor @var{version}
override file minor version, specified as an integer, default value is
@code{1}
@item name @var{name}
Embed file name into file, if not specified use the output file
name. The name is truncated to 8 characters.
@end table
@section argo_cvg
Argonaut Games CVG audio muxer.
It accepts a single one-channel ADPCM 22050Hz audio stream.
The @option{loop} and @option{reverb} options set the corresponding
flags in the header which can be later retrieved to process the audio
stream accordingly.
@subsection Options
@table @option
@item skip_rate_check @var{bool}
skip sample rate check (default is @code{false})
@item loop @var{bool}
set loop flag (default is @code{false})
@item reverb @var{bool}
set reverb flag (default is @code{true})
@end table
@anchor{asf}
@section asf, asf_stream
Advanced / Active Systems (or Streaming) Format muxer.
The @samp{asf_stream} variant should be selected for streaming.
Note that Windows Media Audio (wma) and Windows Media Video (wmv) use this
muxer too.
@subsection Options
@table @option
@item packet_size @var{size}
Set the muxer packet size as a number of bytes. By tuning this setting
you may reduce data fragmentation or muxer overhead depending on your
source. Default value is @code{3200}, minimum is @code{100}, maximum
is @code{64Ki}.
@end table
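@subsection Example
Use @command{ffmpeg} to encode an input to Windows Media Video and
Audio in ASF (a sketch; the @samp{wmv2} and @samp{wmav2} encoders are
one possible choice):
@example
ffmpeg -i INPUT -c:v wmv2 -c:a wmav2 output.wmv
@end example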
@section ass
ASS/SSA (SubStation Alpha) subtitles muxer.
It accepts a single ASS subtitles stream.
@subsection Options
@table @option
@item ignore_readorder @var{bool}
When enabled, write dialogue events immediately, even if they are out
of order; otherwise they are cached until the expected time event is
found. Default is @code{false}.
@end table
@section ast
AST (Audio Stream) muxer.
This format is used to play audio on some Nintendo Wii games.
It accepts a single audio stream.
The @option{loopstart} and @option{loopend} options can be used to
define a section of the file to loop for players honoring such
options.
@subsection Options
@table @option
@item loopstart @var{start}
Specify the loop start position expressed in milliseconds, from
@code{-1} to @code{INT_MAX}. If set to @code{-1} (the default), no
loop is specified and the @option{loopend} value is ignored.
@item loopend @var{end}
Specify the loop end position expressed in milliseconds, from @code{0}
to @code{INT_MAX}, default is @code{0}. If set to @code{0}, the total
stream duration is assumed.
@end table
@section au
SUN AU audio muxer.
It accepts a single audio stream.
@anchor{avi}
@section avi
Audio Video Interleaved muxer.
AVI is a proprietary format developed by Microsoft, and later formally specified
through the Open DML specification.
Because of differences in player implementations, it might be required to set
some options to make sure that the generated output can be correctly played by
the target player.
@subsection Options
@table @option
@item flipped_raw_rgb @var{bool}
If set to @code{true}, store positive height for raw RGB bitmaps, which
indicates bitmap is stored bottom-up. Note that this option does not flip the
bitmap which has to be done manually beforehand, e.g. by using the @samp{vflip}
filter. Default is @code{false} and indicates bitmap is stored top down.
@item reserve_index_space @var{size}
Reserve the specified amount of bytes for the OpenDML master index of each
stream within the file header. By default additional master indexes are
embedded within the data packets if there is no space left in the first master
index and are linked together as a chain of indexes. This index structure can
cause problems for some use cases, e.g. third-party software strictly relying
on the OpenDML index specification or when file seeking is slow. Reserving
enough index space in the file header avoids these problems.
The required index space depends on the output file size and should be about 16
bytes per gigabyte. When this option is omitted or set to zero the necessary
index space is guessed.
Default value is @code{0}.
@item write_channel_mask @var{bool}
Write the channel layout mask into the audio stream header.
This option is enabled by default. Disabling the channel mask can be useful in
specific scenarios, e.g. when merging multiple audio streams into one for
compatibility with software that only supports a single audio stream in AVI
(see @ref{amerge,,the "amerge" section in the ffmpeg-filters manual,ffmpeg-filters}).
@end table
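@subsection Example
Use @command{ffmpeg} to remux an input into AVI while reserving OpenDML
master index space in the header; 1024 bytes is an illustrative amount,
enough for an output of roughly 64 gigabytes at the rate of about 16
bytes per gigabyte mentioned above:
@example
ffmpeg -i INPUT -c copy -reserve_index_space 1024 output.avi
@end example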
@section avif
AV1 (Alliance for Open Media Video codec 1) image format muxer.
This muxer stores images encoded using the AV1 codec.
It accepts one or two video streams. In case two video streams are
provided, the second one shall contain a single plane storing the
alpha mask.
In case more than one image is provided, the generated output is
considered an animated AVIF and the number of loops can be specified
with the @option{loop} option.
This is based on the specification by the Alliance for Open Media,
available at @url{https://aomediacodec.github.io/av1-avif}.
@subsection Options
@table @option
@item loop @var{count}
number of times to loop an animated AVIF, @code{0} specifies an
infinite loop, default is @code{0}
@item movie_timescale @var{timescale}
Set the timescale written in the movie header box (@code{mvhd}).
Range is 1 to INT_MAX. Default is @code{1000}.
@end table
@section avm2
ShockWave Flash (SWF) / ActionScript Virtual Machine 2 (AVM2) format muxer.
It accepts one audio stream, one video stream, or both.
@section bit
G.729 (.bit) file format muxer.
It accepts a single G.729 audio stream.
@section caf
Apple CAF (Core Audio Format) muxer.
It accepts a single audio stream.
@section codec2
Codec2 audio muxer.
It accepts a single codec2 audio stream.
@anchor{chromaprint}
@section chromaprint
Chromaprint fingerprinter muxer.
To enable compilation of this muxer you need to configure FFmpeg with
@code{--enable-chromaprint}.
This muxer feeds audio data to the Chromaprint library, which
generates a fingerprint for the provided audio data. See:
@url{https://acoustid.org/chromaprint}
It takes a single signed native-endian 16-bit raw audio stream of at
most 2 channels.
@subsection Options
@table @option
@item algorithm @var{version}
Select version of algorithm to fingerprint with. Range is @code{0} to
@code{4}. Version @code{3} enables silence detection. Default is @code{1}.
@item fp_format @var{format}
Format to output the fingerprint as. Accepts the following options:
@table @samp
@item base64
Base64 compressed fingerprint @emph{(default)}
@item compressed
Binary compressed fingerprint
@item raw
Binary raw fingerprint
@end table
@item silence_threshold @var{threshold}
Threshold for detecting silence. Range is from @code{-1} to
@code{32767}, where @code{-1} disables silence detection. Silence
detection can only be used with version @code{3} of the algorithm.
Silence detection must be disabled for use with the AcoustID
service. Default is @code{-1}.
@end table
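@subsection Example
Use @command{ffmpeg} to print a base64 fingerprint of the input audio
to stdout (a sketch; this requires a build configured with
@code{--enable-chromaprint}):
@example
ffmpeg -i INPUT -ac 2 -f chromaprint -fp_format base64 -
@end example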
@anchor{crc}
@section crc
CRC (Cyclic Redundancy Check) muxer.
This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.
See also the @ref{framecrc} muxer.
@subsection Examples
@itemize
@item
Use @command{ffmpeg} to compute the CRC of the input, and store it in
the file @file{out.crc}:
@example
ffmpeg -i INPUT -f crc out.crc
@end example
@item
Use @command{ffmpeg} to print the CRC to stdout with the command:
@example
ffmpeg -i INPUT -f crc -
@end example
@item
You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example, to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example
@end itemize
@anchor{dash}
@section dash
Dynamic Adaptive Streaming over HTTP (DASH) muxer.
This muxer creates segments and manifest files according to the
MPEG-DASH standard ISO/IEC 23009-1:2014 and following standard
updates.
For more information see:
@itemize @bullet
@item
ISO DASH Specification: @url{http://standards.iso.org/ittf/PubliclyAvailableStandards/c065274_ISO_IEC_23009-1_2014.zip}
@item
WebM DASH Specification: @url{https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification}
@end itemize
This muxer creates an MPD (Media Presentation Description) manifest
file and segment files for each stream. Segment files are placed in
the same directory as the MPD manifest file.
The segment filename might contain pre-defined identifiers used in the
manifest @code{SegmentTemplate} section as defined in section
5.3.9.4.4 of the standard.
Available identifiers are @code{$RepresentationID$}, @code{$Number$},
@code{$Bandwidth$}, and @code{$Time$}. In addition to the standard
identifiers, an ffmpeg-specific @code{$ext$} identifier is also
supported. When specified, @command{ffmpeg} will replace @code{$ext$}
in the file name with the muxing format's extension, such as
@code{mp4}, @code{webm}, etc.
@subsection Options
@table @option
@item adaptation_sets @var{adaptation_sets}
Assign streams to adaptation sets, specified in the MPD manifest
@code{AdaptationSets} section.
An adaptation set contains a set of one or more streams accessed as a
single subset, e.g. corresponding to the same content encoded at
different sizes selectable by the user depending on the available
bandwidth, or to audio streams in different languages.
Each adaptation set is specified with the syntax:
@example
id=@var{index},streams=@var{streams}
@end example
where @var{index} must be a numerical index, and @var{streams} is a
sequence of @code{,}-separated stream indices. Multiple adaptation
sets can be specified, separated by spaces.
To map all video (or audio) streams to an adaptation set, @code{v} (or
@code{a}) can be used as stream identifier instead of IDs.
When no assignment is defined, this defaults to an adaptation set for
each stream.
The following optional fields can also be specified:
@table @option
@item descriptor
Define the descriptor as defined by ISO/IEC 23009-1:2014/Amd.2:2015.
For example:
@example
<SupplementalProperty schemeIdUri=\"urn:mpeg:dash:srd:2014\" value=\"0,0,0,1,1,2,2\"/>
@end example
The descriptor string should be a self-closing XML tag.
@item frag_duration
Override the global fragment duration specified with the
@option{frag_duration} option.
@item frag_type
Override the global fragment type specified with the
@option{frag_type} option.
@item seg_duration
Override the global segment duration specified with the
@option{seg_duration} option.
@item trick_id
Mark an adaptation set as containing streams meant to be used for
Trick Mode for the referenced adaptation set.
@end table
A few examples of possible values for the @option{adaptation_sets}
option follow:
@example
id=0,seg_duration=2,frag_duration=1,frag_type=duration,streams=v id=1,seg_duration=2,frag_type=none,streams=a
@end example
@example
id=0,seg_duration=2,frag_type=none,streams=0 id=1,seg_duration=10,frag_type=none,trick_id=0,streams=1
@end example
@item dash_segment_type @var{type}
Set DASH segment files type.
Possible values:
@table @samp
@item auto
The dash segment files format will be selected based on the stream
codec. This is the default mode.
@item mp4
the dash segment files will be in ISOBMFF/MP4 format
@item webm
the dash segment files will be in WebM format
@end table
@item extra_window_size @var{size}
Set the maximum number of segments kept outside of the manifest before
removing from disk.
@item format_options @var{options_list}
Set container format (mp4/webm) options using a @code{:}-separated list of
key=value parameters. Values containing @code{:} special characters must be
escaped.
@item frag_duration @var{duration}
Set the length in seconds of fragments within segments, fractional
value can also be set.
@item frag_type @var{type}
Set the type of interval for fragmentation.
Possible values:
@table @samp
@item auto
set one fragment per segment
@item every_frame
fragment at every frame
@item duration
fragment at specific time intervals
@item pframes
fragment at keyframes and following P-Frame reordering (Video only,
experimental)
@end table
@item global_sidx @var{bool}
Write global @code{SIDX} atom. Applicable only for single file, mp4
output, non-streaming mode.
@item hls_master_name @var{file_name}
HLS master playlist name. Default is @file{master.m3u8}.
@item hls_playlist @var{bool}
Generate HLS playlist files. The master playlist is generated with
filename specified by the @option{hls_master_name} option. One media
playlist file is generated for each stream with filenames
@file{media_0.m3u8}, @file{media_1.m3u8}, etc.
@item http_opts @var{http_opts}
Specify a list of @code{:}-separated key=value options to pass to the
underlying HTTP protocol. Applicable only for HTTP output.
@item http_persistent @var{bool}
Use persistent HTTP connections. Applicable only for HTTP output.
@item http_user_agent @var{user_agent}
Override User-Agent field in HTTP header. Applicable only for HTTP
output.
@item ignore_io_errors @var{bool}
Ignore IO errors during open and write. Useful for long-duration runs
with network output. This is disabled by default.
@item index_correction @var{bool}
Enable or disable segment index correction logic. Applicable only when
@option{use_template} is enabled and @option{use_timeline} is
disabled. This is disabled by default.
When enabled, the logic monitors the flow of segment indexes. If a
stream's segment index value is not at the expected real time
position, then the logic corrects that index value.
Typically this logic is needed in live streaming use cases. Network
bandwidth fluctuations are common during long streaming runs. Each
fluctuation can cause the segment indexes to fall behind the expected
real time position.
@item init_seg_name @var{init_name}
DASH-templated name to use for the initialization segment. Default is
@code{init-stream$RepresentationID$.$ext$}. @code{$ext$} is replaced
with the file name extension specific for the segment format.
@item ldash @var{bool}
Enable Low-latency Dash by constraining the presence and values of
some elements. This is disabled by default.
@item lhls @var{bool}
Enable Low-latency HLS (LHLS). Add @code{#EXT-X-PREFETCH} tag with
current segment's URI. hls.js player folks are trying to standardize
an open LHLS spec. The draft spec is available at
@url{https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md}.
This option tries to comply with the above open spec. It enables
@option{streaming} and @option{hls_playlist} options automatically.
This is an experimental feature.
Note: This is not Apple's version of LHLS. See
@url{https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis}
@item master_m3u8_publish_rate @var{segment_intervals_count}
Publish the master playlist repeatedly, after every specified number
of segment intervals.
@item max_playback_rate @var{rate}
Set the maximum playback rate indicated as appropriate for the
purposes of automatically adjusting playback latency and buffer
occupancy during normal playback by clients.
@item media_seg_name @var{segment_name}
DASH-templated name to use for the media segments. Default is
@code{chunk-stream$RepresentationID$-$Number%05d$.$ext$}. @code{$ext$}
is replaced with the file name extension specific for the segment
format.
@item method @var{method}
Use the given HTTP method to create output files. Generally set to @code{PUT}
or @code{POST}.
@item min_playback_rate @var{rate}
Set the minimum playback rate indicated as appropriate for the
purposes of automatically adjusting playback latency and buffer
occupancy during normal playback by clients.
@item mpd_profile @var{flags}
Set one or more MPD manifest profiles.
Possible values:
@table @samp
@item dash
MPEG-DASH ISO Base media file format live profile
@item dvb_dash
DVB-DASH profile
@end table
Default value is @code{dash}.
@item remove_at_exit @var{bool}
Enable or disable removal of all segments when finished. This is
disabled by default.
@item seg_duration @var{duration}
Set the segment length in seconds (fractional value can be set). The
value is treated as average segment duration when the
@option{use_template} option is enabled and the @option{use_timeline}
option is disabled and as minimum segment duration for all the other
use cases.
Default value is @code{5}.
@item single_file @var{bool}
Enable or disable storing all segments in one file, accessed using
byte ranges. This is disabled by default.
The name of the single file can be specified with the
@option{single_file_name} option; if not specified, the basename of
the manifest file with the output format extension is used.
@item single_file_name @var{file_name}
DASH-templated name to use for the manifest @code{baseURL}
element. Implies that the @option{single_file} option is set to
@code{true}. In the template, @code{$ext$} is replaced with the file
name extension specific for the segment format.
@item streaming @var{bool}
Enable or disable chunk streaming mode of output. In chunk streaming
mode, each frame will be a @code{moof} fragment which forms a
chunk. This is disabled by default.
@item target_latency @var{target_latency}
Set an intended target latency in seconds for serving (fractional
value can be set). Applicable only when the @option{streaming} and
@option{write_prft} options are enabled. This is an informative field
that clients can use to measure the latency of the service.
@item timeout @var{timeout}
Set timeout for socket I/O operations expressed in seconds (fractional
value can be set). Applicable only for HTTP output.
@item update_period @var{period}
Set the MPD update period, for dynamic content. The unit is
second. If set to @code{0}, the period is automatically computed.
Default value is @code{0}.
@item use_template @var{bool}
Enable or disable use of @code{SegmentTemplate} instead of
@code{SegmentList} in the manifest. This is enabled by default.
@item use_timeline @var{bool}
Enable or disable use of @code{SegmentTimeline} within the
@code{SegmentTemplate} manifest section. This is enabled by default.
@item utc_timing_url @var{url}
URL of the page that will return the UTC timestamp in ISO
format, for example @code{https://time.akamai.com/?iso}
@item window_size @var{size}
Set the maximum number of segments kept in the manifest, discard the
oldest one. This is useful for live streaming.
If the value is @code{0}, all segments are kept in the
manifest. Default value is @code{0}.
@item write_prft @var{write_prft}
Write Producer Reference Time elements on supported streams. This also
enables writing prft boxes in the underlying muxer. Applicable only
when the @option{utc_timing_url} option is set. It is set to
@code{auto} by default, in which case the muxer will attempt to enable
it only in modes that require it.
@end table
@subsection Example
Generate a DASH output reading from an input source in realtime using
@command{ffmpeg}.
Two multimedia streams are generated from the input file, both
containing a video stream encoded through @samp{libx264}, and an audio
stream encoded with @samp{libfdk_aac}. The first multimedia stream
contains video with a bitrate of 800k and audio at the default rate,
the second with video scaled to 320x170 pixels at 300k and audio
resampled at 22050 Hz.
The @option{window_size} option keeps only the latest 5 segments with
the default duration of 5 seconds.
@example
ffmpeg -re -i <input> -map 0 -map 0 -c:a libfdk_aac -c:v libx264 \
-b:v:0 800k -profile:v:0 main \
-b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline -ar:a:1 22050 \
-bf 1 -keyint_min 120 -g 120 -sc_threshold 0 -b_strategy 0 \
-use_timeline 1 -use_template 1 -window_size 5 \
-adaptation_sets "id=0,streams=v id=1,streams=a" \
-f dash /path/to/out.mpd
@end example
@section daud
D-Cinema audio muxer.
It accepts a single 6-channel audio stream sampled at 96000 Hz and
encoded with the @samp{pcm_s24daud} codec.
@subsection Example
Use @command{ffmpeg} to mux input audio to a @samp{5.1} channel layout
resampled at 96000Hz:
@example
ffmpeg -i INPUT -af aresample=96000,pan=5.1 slow.302
@end example
For ffmpeg versions before 7.0 you might have to use the @samp{asetnsamples}
filter to limit the muxed packet size, because this format does not support
muxing packets larger than 65535 bytes (3640 samples). For newer ffmpeg
versions audio is automatically packetized to 36000 byte (2000 sample) packets.
@section dv
DV (Digital Video) muxer.
It accepts exactly one @samp{dvvideo} video stream and at most two
@samp{pcm_s16} audio streams. Further constraints are imposed by the
properties of the video, which must correspond to a supported DV
profile, and by the framerate.
@subsection Example
Use @command{ffmpeg} to convert the input:
@example
ffmpeg -i INPUT -s:v 720x480 -pix_fmt yuv411p -r 29.97 -ac 2 -ar 48000 -y out.dv
@end example
@section ffmetadata
FFmpeg metadata muxer.
This muxer writes the streams metadata in the @samp{ffmetadata}
format.
See @ref{metadata,,the Metadata chapter,ffmpeg-formats} for
information about the format.
@subsection Example
Use @command{ffmpeg} to extract metadata from an input file to a @file{metadata.ffmeta}
file in @samp{ffmetadata} format:
@example
ffmpeg -i INPUT -f ffmetadata metadata.ffmeta
@end example
@anchor{fifo}
@section fifo
FIFO (First-In First-Out) muxer.
The @samp{fifo} pseudo-muxer allows the separation of encoding and
muxing by using a first-in-first-out queue and running the actual muxer
in a separate thread.
This is especially useful in combination with
the @ref{tee} muxer and can be used to send data to several
destinations with different reliability/writing speed/latency.
The target muxer is either selected from the output name or specified
through the @option{fifo_format} option.
The behavior of the @samp{fifo} muxer if the queue fills up or if the
output fails (e.g. if a packet cannot be written to the output) is
selectable:
@itemize @bullet
@item
Output can be transparently restarted with configurable delay between
retries based on real time or time of the processed stream.
@item
Encoding can be blocked during temporary failure, or continue transparently
dropping packets in case the FIFO queue fills up.
@end itemize
API users should be aware that callback functions
(@code{interrupt_callback}, @code{io_open} and @code{io_close}) used
within its @code{AVFormatContext} must be thread-safe.
@subsection Options
@table @option
@item attempt_recovery @var{bool}
If failure occurs, attempt to recover the output. This is especially
useful when used with network output, since it makes it possible to
restart streaming transparently. By default this option is set to
@code{false}.
@item drop_pkts_on_overflow @var{bool}
If set to @code{true}, in case the fifo queue fills up, packets will
be dropped rather than blocking the encoder. This makes it possible to
continue streaming without delaying the input, at the cost of omitting
part of the stream. By default this option is set to @code{false}, so in
such cases the encoder will be blocked until the muxer processes some
of the packets and none of them is lost.
@item fifo_format @var{format_name}
Specify the format name. Useful if it cannot be guessed from the
output name suffix.
@item format_opts @var{options}
Specify format options for the underlying muxer. Muxer options can be
specified as a list of @var{key}=@var{value} pairs separated by ':'.
@item max_recovery_attempts @var{count}
Set maximum number of successive unsuccessful recovery attempts after
which the output fails permanently. By default this option is set to
@code{0} (unlimited).
@item queue_size @var{size}
Specify size of the queue as a number of packets. Default value is
@code{60}.
@item recover_any_error @var{bool}
If set to @code{true}, recovery will be attempted regardless of type
of the error causing the failure. By default this option is set to
@code{false} and in case of certain (usually permanent) errors the
recovery is not attempted even when the @option{attempt_recovery}
option is set to @code{true}.
@item recovery_wait_streamtime @var{bool}
If set to @code{false}, the real time is used when waiting for the
recovery attempt (i.e. the recovery will be attempted after the time
specified by the @option{recovery_wait_time} option).
If set to @code{true}, the time of the processed stream is taken into
account instead (i.e. the recovery will be attempted after discarding
the packets corresponding to the @option{recovery_wait_time} option).
By default this option is set to @code{false}.
@item recovery_wait_time @var{duration}
Specify waiting time in seconds before the next recovery attempt after
previous unsuccessful recovery attempt. Default value is @code{5}.
@item restart_with_keyframe @var{bool}
Specify whether to wait for the keyframe after recovering from
queue overflow or failure. This option is set to @code{false} by default.
@item timeshift @var{duration}
Buffer packets for the specified duration and delay writing the
output. Note that the value of the @option{queue_size} option must be
big enough to store the packets for the timeshift. At the end of the
input the fifo buffer is flushed at realtime speed.
@end table
@subsection Example
Use @command{ffmpeg} to stream to an RTMP server, continue processing
the stream at real-time rate even in case of temporary failure
(network outage) and attempt to recover streaming every second
indefinitely:
@example
ffmpeg -re -i ... -c:v libx264 -c:a aac -f fifo -fifo_format flv \
-drop_pkts_on_overflow 1 -attempt_recovery 1 -recovery_wait_time 1 \
-map 0:v -map 0:a rtmp://example.com/live/stream_name
@end example
@section film_cpk
Sega film (.cpk) muxer.
This format was used as an internal format by several Sega games.
For more information regarding the Sega film file format, visit
@url{http://wiki.multimedia.cx/index.php?title=Sega_FILM}.
It accepts at most one @samp{cinepak} or raw video stream, and at
most one audio stream.
@section filmstrip
Adobe Filmstrip muxer.
This format is used by several Adobe tools to store a generated filmstrip export. It
accepts a single raw video stream.
@section fits
Flexible Image Transport System (FITS) muxer.
This image format is used to store astronomical data.
For more information regarding the format, visit
@url{https://fits.gsfc.nasa.gov}.
@section flac
Raw FLAC audio muxer.
This muxer accepts exactly one FLAC audio stream. Additionally, it is possible to add
images with disposition @samp{attached_pic}.
@subsection Options
@table @option
@item write_header @var{bool}
write the file header if set to @code{true}, default is @code{true}
@end table
@subsection Example
Use @command{ffmpeg} to store the audio stream from an input file,
together with several pictures used with @samp{attached_pic}
disposition:
@example
ffmpeg -i INPUT -i pic1.png -i pic2.jpg -map 0:a -map 1 -map 2 -disposition:v attached_pic OUTPUT
@end example
@section flv
Adobe Flash Video Format muxer.
@subsection Options
@table @option
@item flvflags @var{flags}
Possible values:
@table @samp
@item aac_seq_header_detect
Place AAC sequence header based on audio stream data.
@item no_sequence_end
Disable sequence end tag.
@item no_metadata
Disable metadata tag.
@item no_duration_filesize
Disable duration and filesize in metadata when they are equal to zero
at the end of stream. (Used for non-seekable live streams.)
@item add_keyframe_index
Used to facilitate seeking; particularly for HTTP pseudo streaming.
@end table
@end table
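@subsection Example
Use @command{ffmpeg} to stream an input to an RTMP server (a sketch;
the server URL and codec choices are illustrative):
@example
ffmpeg -re -i INPUT -c:v libx264 -c:a aac -f flv rtmp://example.com/live/stream_name
@end example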
@anchor{framecrc}
@section framecrc
Per-packet CRC (Cyclic Redundancy Check) testing format.
This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
@end example
@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.
@subsection Examples
For example to compute the CRC of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.crc}:
@example
ffmpeg -i INPUT -f framecrc out.crc
@end example
To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framecrc -
@end example
With @command{ffmpeg}, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example
See also the @ref{crc} muxer.
@anchor{framehash}
@section framehash
Per-packet hash testing format.
This muxer computes and prints a cryptographic hash for each audio
and video packet. This can be used for packet-by-packet equality
checks without having to individually do a binary comparison on each.
By default audio frames are converted to signed 16-bit raw audio and
video frames to raw video before computing the hash, but the output
of explicit conversions to other codecs can also be used. It uses the
SHA-256 cryptographic hash function by default, but supports several
other algorithms.
The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{hash}
@end example
@var{hash} is a hexadecimal number representing the computed hash
for the packet.
@table @option
@item hash @var{algorithm}
Use the cryptographic hash function specified by the string @var{algorithm}.
Supported values include @code{MD5}, @code{murmur3}, @code{RIPEMD128},
@code{RIPEMD160}, @code{RIPEMD256}, @code{RIPEMD320}, @code{SHA160},
@code{SHA224}, @code{SHA256} (default), @code{SHA512/224}, @code{SHA512/256},
@code{SHA384}, @code{SHA512}, @code{CRC32} and @code{adler32}.
@end table
@subsection Examples
To compute the SHA-256 hash of the audio and video frames in @file{INPUT},
converted to raw audio and video packets, and store it in the file
@file{out.sha256}:
@example
ffmpeg -i INPUT -f framehash out.sha256
@end example
To print the information to stdout, using the MD5 hash function, use
the command:
@example
ffmpeg -i INPUT -f framehash -hash md5 -
@end example
See also the @ref{hash} muxer.
@anchor{framemd5}
@section framemd5
Per-packet MD5 testing format.
This is a variant of the @ref{framehash} muxer. Unlike that muxer,
it defaults to using the MD5 hash function.
@subsection Examples
To compute the MD5 hash of the audio and video frames in @file{INPUT},
converted to raw audio and video packets, and store it in the file
@file{out.md5}:
@example
ffmpeg -i INPUT -f framemd5 out.md5
@end example
To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framemd5 -
@end example
See also the @ref{framehash} and @ref{md5} muxers.
@anchor{gif}
@section gif
Animated GIF muxer.
Note that the GIF format has a very large time base: the delay between two frames can
therefore not be smaller than one centisecond.
@subsection Options
@table @option
@item loop @var{bool}
Set the number of times to loop the output. Use @code{-1} for no loop, @code{0}
for looping indefinitely (default).
@item final_delay @var{delay}
Force the delay (expressed in centiseconds) after the last frame. Each frame
ends with a delay until the next frame. The default is @code{-1}, which is a
special value to tell the muxer to re-use the previous delay. In case of a
loop, you might want to customize this value to mark a pause for instance.
@end table
@subsection Example
Encode a gif looping 10 times, with a 5 second delay between
the loops:
@example
ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
@end example
Note: if you wish to extract the frames into separate GIF files, you need to
force the @ref{image2} muxer:
@example
ffmpeg -i INPUT -c:v gif -f image2 "out%d.gif"
@end example
@section gxf
General eXchange Format (GXF) muxer.
GXF was developed by Grass Valley Group, then standardized by SMPTE as SMPTE
360M and was extended in SMPTE RDD 14-2007 to include high-definition video
resolutions.
It accepts at most one video stream with codec @samp{mjpeg}, or
@samp{mpeg1video}, or @samp{mpeg2video}, or @samp{dvvideo} with resolution
@samp{512x480} or @samp{608x576}, and several audio streams with rate 48000Hz
and codec @samp{pcm16_le}.
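For example, a command along the following lines should work, assuming the input
resolution and frame rate are supported by the format (the codec and sample rate
choices shown are only illustrative):
@example
ffmpeg -i INPUT -c:v mpeg2video -c:a pcm_s16le -ar 48000 out.gxf
@end example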
@anchor{hash}
@section hash
Hash testing format.
This muxer computes and prints a cryptographic hash of all the input
audio and video frames. This can be used for equality checks without
having to do a complete binary comparison.
By default audio frames are converted to signed 16-bit raw audio and
video frames to raw video before computing the hash, but the output
of explicit conversions to other codecs can also be used. Timestamps
are ignored. It uses the SHA-256 cryptographic hash function by default,
but supports several other algorithms.
The output of the muxer consists of a single line of the form:
@var{algo}=@var{hash}, where @var{algo} is a short string representing
the hash function used, and @var{hash} is a hexadecimal number
representing the computed hash.
@table @option
@item hash @var{algorithm}
Use the cryptographic hash function specified by the string @var{algorithm}.
Supported values include @code{MD5}, @code{murmur3}, @code{RIPEMD128},
@code{RIPEMD160}, @code{RIPEMD256}, @code{RIPEMD320}, @code{SHA160},
@code{SHA224}, @code{SHA256} (default), @code{SHA512/224}, @code{SHA512/256},
@code{SHA384}, @code{SHA512}, @code{CRC32} and @code{adler32}.
@end table
@subsection Examples
To compute the SHA-256 hash of the input converted to raw audio and
video, and store it in the file @file{out.sha256}:
@example
ffmpeg -i INPUT -f hash out.sha256
@end example
To print an MD5 hash to stdout use the command:
@example
ffmpeg -i INPUT -f hash -hash md5 -
@end example
See also the @ref{framehash} muxer.
@anchor{hds}
@section hds
HTTP Dynamic Streaming (HDS) muxer.
HTTP dynamic streaming, or HDS, is an adaptive bitrate streaming method
developed by Adobe. HDS delivers MP4 video content over HTTP connections. HDS
can be used for on-demand streaming or live streaming.
This muxer creates an .f4m (Adobe Flash Media Manifest File) manifest, an .abst
(Adobe Bootstrap File) for each stream, and segment files in a directory
specified as the output.
These need to be accessed by an HDS player through HTTPS for it to be able to
perform playback of the generated stream.
@subsection Options
@table @option
@item extra_window_size @var{int}
number of fragments kept outside of the manifest before removing from disk
@item min_frag_duration @var{microseconds}
minimum fragment duration (in microseconds), default value is 1 second
(@code{10000000})
@item remove_at_exit @var{bool}
remove all fragments when finished when set to @code{true}
@item window_size @var{int}
number of fragments kept in the manifest, if set to a value different from
@code{0}. By default all segments are kept in the output directory.
@end table
@subsection Example
Use @command{ffmpeg} to generate HDS files in the @file{output.hds} directory at
real-time rate:
@example
ffmpeg -re -i INPUT -f hds -b:v 200k output.hds
@end example
@anchor{hls}
@section hls
Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming (HLS) specification.
It creates a playlist file, and one or more segment files. The output filename
specifies the playlist filename.
By default, the muxer creates a file for each segment produced. These files
have the same name as the playlist, followed by a sequential number and a
.ts extension.
Make sure to require a closed GOP when encoding and to set the GOP
size to fit your segment time constraint.
For example, to convert an input file with @command{ffmpeg}:
@example
ffmpeg -i in.mkv -c:v h264 -flags +cgop -g 30 -hls_time 1 out.m3u8
@end example
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{out0.ts}, @file{out1.ts}, @file{out2.ts}, etc.
See also the @ref{segment} muxer, which provides a more generic and
flexible implementation of a segmenter, and can be used to perform HLS
segmentation.
@subsection Options
@table @option
@item hls_init_time @var{duration}
Set the initial target segment length. Default value is @var{0}.
@var{duration} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
Segment will be cut on the next key frame after this time has passed on the
first m3u8 list. After the initial playlist is filled, @command{ffmpeg} will cut
segments at duration equal to @option{hls_time}.
@item hls_time @var{duration}
Set the target segment length. Default value is 2.
@var{duration} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
Segment will be cut on the next key frame after this time has passed.
@item hls_list_size @var{size}
Set the maximum number of playlist entries. If set to 0 the list file
will contain all the segments. Default value is 5.
@item hls_delete_threshold @var{size}
Set the number of unreferenced segments to keep on disk before @code{hls_flags delete_segments}
deletes them. Increase this to allow slow clients to keep downloading segments which
were recently referenced in the playlist. Default value is 1, meaning segments older than
@option{hls_list_size+1} will be deleted.
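For example, a sketch keeping three unreferenced segments on disk before deletion
(the input name is illustrative):
@example
ffmpeg -i in.mkv -f hls -hls_flags delete_segments -hls_delete_threshold 3 out.m3u8
@end example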
@item hls_start_number_source @var{source}
Start the playlist sequence number (@code{#EXT-X-MEDIA-SEQUENCE}) according to the specified source.
Unless @option{hls_flags single_file} is set, it also specifies source of starting sequence numbers of
segment and subtitle filenames. In any case, if @option{hls_flags append_list}
is set and read playlist sequence number is greater than the specified start sequence number,
then that value will be used as start value.
It accepts the following values:
@table @option
@item generic (default)
Set the start numbers according to the @option{start_number} option value.
@item epoch
Set the start number as the seconds since epoch (1970-01-01 00:00:00).
@item epoch_us
Set the start number as the microseconds since epoch (1970-01-01 00:00:00).
@item datetime
Set the start number based on the current date/time as YYYYmmddHHMMSS. e.g. 20161231235759.
@end table
@item start_number @var{number}
Start the playlist sequence number (@code{#EXT-X-MEDIA-SEQUENCE}) from the specified @var{number}
when @option{hls_start_number_source} value is @var{generic}. (This is the default case.)
Unless @option{hls_flags single_file} is set, it also specifies starting sequence numbers of segment and subtitle filenames.
Default value is 0.
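For example, a sketch starting the playlist sequence number and segment numbering
at 100 (the input name is illustrative):
@example
ffmpeg -i in.mkv -f hls -start_number 100 out.m3u8
@end example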
@item hls_allow_cache @var{bool}
Explicitly set whether the client MAY (1) or MUST NOT (0) cache media segments.
@item hls_base_url @var{baseurl}
Append @var{baseurl} to every entry in the playlist.
Useful to generate playlists with absolute paths.
Note that the playlist sequence number must be unique for each segment
and it is not to be confused with the segment filename sequence number
which can be cyclic, for example if the @option{wrap} option is
specified.
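For example, a sketch using a placeholder base URL:
@example
ffmpeg -i in.mkv -f hls -hls_base_url 'http://example.com/segments/' out.m3u8
@end example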
@item hls_segment_filename @var{filename}
Set the segment filename. Unless the @option{hls_flags} option is set with
@samp{single_file}, @var{filename} is used as a string format with the
segment number appended.
For example:
@example
ffmpeg -i in.nut -hls_segment_filename 'file%03d.ts' out.m3u8
@end example
will produce the playlist, @file{out.m3u8}, and segment files:
@file{file000.ts}, @file{file001.ts}, @file{file002.ts}, etc.
@var{filename} may contain a full path or relative path specification,
but only the file name part without any path will be contained in the m3u8 segment list.
Should a relative path be specified, the path of the created segment
files will be relative to the current working directory.
When @option{strftime_mkdir} is set, the whole expanded value of @var{filename} will be written into the m3u8 segment list.
When @option{var_stream_map} is set with two or more variant streams, the
@var{filename} pattern must contain the string "%v", and this string will be
expanded to the position of variant stream index in the generated segment file
names.
For example:
@example
ffmpeg -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
-hls_segment_filename 'file_%v_%03d.ts' out_%v.m3u8
@end example
will produce the playlists and the segment file sets:
@file{file_0_000.ts}, @file{file_0_001.ts}, @file{file_0_002.ts}, etc. and
@file{file_1_000.ts}, @file{file_1_001.ts}, @file{file_1_002.ts}, etc.
The string "%v" may be present in the filename or in the last directory name
containing the file, but only in one of them. (Additionally, %v may appear multiple times in the last
sub-directory or filename.) If the string %v is present in the directory name, then
sub-directories are created after expanding the directory name pattern. This
enables creation of segments corresponding to different variant streams in
subdirectories.
For example:
@example
ffmpeg -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
-hls_segment_filename 'vs%v/file_%03d.ts' vs%v/out.m3u8
@end example
will produce the playlists and the segment file sets:
@file{vs0/file_000.ts}, @file{vs0/file_001.ts}, @file{vs0/file_002.ts}, etc. and
@file{vs1/file_000.ts}, @file{vs1/file_001.ts}, @file{vs1/file_002.ts}, etc.
@item strftime @var{bool}
Use @code{strftime()} on @var{filename} to expand the segment filename with
localtime. The segment number is also available in this mode, but to use it,
you need to set @samp{second_level_segment_index} in @option{hls_flags}, in which
case %%d will be the specifier.
For example:
@example
ffmpeg -i in.nut -strftime 1 -hls_segment_filename 'file-%Y%m%d-%s.ts' out.m3u8
@end example
will produce the playlist, @file{out.m3u8}, and segment files:
@file{file-20160215-1455569023.ts}, @file{file-20160215-1455569024.ts}, etc.
Note: On some systems/environments, the @code{%s} specifier is not
available. See @code{strftime()} documentation.
For example:
@example
ffmpeg -i in.nut -strftime 1 -hls_flags second_level_segment_index -hls_segment_filename 'file-%Y%m%d-%%04d.ts' out.m3u8
@end example
will produce the playlist, @file{out.m3u8}, and segment files:
@file{file-20160215-0001.ts}, @file{file-20160215-0002.ts}, etc.
@item strftime_mkdir @var{bool}
Used together with @option{strftime}, it will create all subdirectories which
are present in the expanded values of option @option{hls_segment_filename}.
For example:
@example
ffmpeg -i in.nut -strftime 1 -strftime_mkdir 1 -hls_segment_filename '%Y%m%d/file-%Y%m%d-%s.ts' out.m3u8
@end example
will create a directory @file{20160215} (if it does not exist), and then
produce the playlist, @file{out.m3u8}, and segment files:
@file{20160215/file-20160215-1455569023.ts},
@file{20160215/file-20160215-1455569024.ts}, etc.
For example:
@example
ffmpeg -i in.nut -strftime 1 -strftime_mkdir 1 -hls_segment_filename '%Y/%m/%d/file-%Y%m%d-%s.ts' out.m3u8
@end example
will create a directory hierarchy @file{2016/02/15} (if any of them do not
exist), and then produce the playlist, @file{out.m3u8}, and segment files:
@file{2016/02/15/file-20160215-1455569023.ts},
@file{2016/02/15/file-20160215-1455569024.ts}, etc.
@item hls_segment_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing @code{:} special characters must be
escaped.
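For example, a sketch passing two mpegts muxer options (see the mpegts section
below) through to the generated segments; the values are only illustrative:
@example
ffmpeg -i in.mkv -f hls -hls_segment_options muxrate=2000000:pes_payload_size=1024 out.m3u8
@end example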
@item hls_key_info_file @var{key_info_file}
Use the information in @var{key_info_file} for segment encryption. The first
line of @var{key_info_file} specifies the key URI written to the playlist. The
key URL is used to access the encryption key during playback. The second line
specifies the path to the key file used to obtain the key during the encryption
process. The key file is read as a single packed array of 16 octets in binary
format. The optional third line specifies the initialization vector (IV) as a
hexadecimal string to be used instead of the segment sequence number (default)
for encryption. Changes to @var{key_info_file} will result in segment
encryption with the new key/IV and an entry in the playlist for the new key
URI/IV if @option{hls_flags periodic_rekey} is enabled.
Key info file format:
@example
@var{key URI}
@var{key file path}
@var{IV} (optional)
@end example
Example key URIs:
@example
http://server/file.key
/path/to/file.key
file.key
@end example
Example key file paths:
@example
file.key
/path/to/file.key
@end example
Example IV:
@example
0123456789ABCDEF0123456789ABCDEF
@end example
Key info file example:
@example
http://server/file.key
/path/to/file.key
0123456789ABCDEF0123456789ABCDEF
@end example
Example shell script:
@example
#!/bin/sh
BASE_URL=$@{1:-'.'@}
openssl rand 16 > file.key
echo $BASE_URL/file.key > file.keyinfo
echo file.key >> file.keyinfo
echo $(openssl rand -hex 16) >> file.keyinfo
ffmpeg -f lavfi -re -i testsrc -c:v h264 -hls_flags delete_segments \
-hls_key_info_file file.keyinfo out.m3u8
@end example
@item hls_enc @var{bool}
Enable (1) or disable (0) the AES128 encryption.
When enabled every segment generated is encrypted and the encryption key
is saved as @var{playlist name}.key.
@item hls_enc_key @var{key}
Specify a 16-octet key to encrypt the segments, by default it is randomly
generated.
@item hls_enc_key_url @var{keyurl}
If set, @var{keyurl} is prepended instead of @var{baseurl} to the key filename
in the playlist.
@item hls_enc_iv @var{iv}
Specify the 16-octet initialization vector for every segment instead of the
autogenerated ones.
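For example, a sketch enabling encryption with an autogenerated key and IV
(the input name is illustrative):
@example
ffmpeg -i in.mkv -f hls -hls_enc 1 out.m3u8
@end example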
@item hls_segment_type @var{type}
Possible values:
@table @samp
@item mpegts
Output segment files in MPEG-2 Transport Stream format. This is
compatible with all HLS versions.
@item fmp4
Output segment files in fragmented MP4 format, similar to MPEG-DASH.
fmp4 files may be used in HLS version 7 and above.
@end table
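For example, a sketch producing fragmented MP4 segments (the input name is
illustrative):
@example
ffmpeg -i in.mkv -f hls -hls_segment_type fmp4 out.m3u8
@end example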
@item hls_fmp4_init_filename @var{filename}
Set the filename of the initialization file for the fragmented MP4 segments; the default filename is @file{init.mp4}.
When @option{strftime} is enabled, @var{filename} is expanded with the local time.
For example:
@example
ffmpeg -i in.nut -hls_segment_type fmp4 -strftime 1 -hls_fmp4_init_filename "%s_init.mp4" out.m3u8
@end example
will produce an init file named like @file{1602678741_init.mp4}.
@item hls_fmp4_init_resend @var{bool}
Resend the init file after every m3u8 file refresh; default is @var{0}.
When @option{var_stream_map} is set with two or more variant streams, the
@var{filename} pattern must contain the string "%v"; this string specifies
the position of the variant stream index in the generated init file names.
The string "%v" may be present in the filename or in the last directory name
containing the file. If the string is present in the directory name, then
sub-directories are created after expanding the directory name pattern. This
enables creation of init files corresponding to different variant streams in
subdirectories.
@item hls_flags @var{flags}
Possible values:
@table @samp
@item single_file
If this flag is set, the muxer will store all segments in a single MPEG-TS
file, and will use byte ranges in the playlist. HLS playlists generated
this way will have version number 4.
For example:
@example
ffmpeg -i in.nut -hls_flags single_file out.m3u8
@end example
will produce the playlist, @file{out.m3u8}, and a single segment file,
@file{out.ts}.
@item delete_segments
Segment files removed from the playlist are deleted after a period of time
equal to the duration of the segment plus the duration of the playlist.
@item append_list
Append new segments into the end of old segment list,
and remove the @code{#EXT-X-ENDLIST} from the old segment list.
@item round_durations
Round the duration info in the playlist file segment info to integer
values, instead of using floating point.
If no other features requiring higher HLS versions are used,
this will allow @command{ffmpeg} to output an HLS version 2 m3u8.
@item discont_start
Add the @code{#EXT-X-DISCONTINUITY} tag to the playlist, before the
first segment's information.
@item omit_endlist
Do not append the @code{EXT-X-ENDLIST} tag at the end of the playlist.
@item periodic_rekey
The file specified by @code{hls_key_info_file} will be checked periodically to
detect updates to the encryption info. Be sure to replace this file atomically,
including the file containing the AES encryption key.
@item independent_segments
Add the @code{#EXT-X-INDEPENDENT-SEGMENTS} tag to playlists that have video segments,
when all the segments of that playlist are guaranteed to start with a key frame.
@item iframes_only
Add the @code{#EXT-X-I-FRAMES-ONLY} tag to playlists that have video segments
and can play only I-frames in the @code{#EXT-X-BYTERANGE} mode.
@item split_by_time
Allow segments to start on frames other than key frames. This improves
behavior on some players when the time between key frames is inconsistent,
but may make things worse on others, and can cause some oddities during
seeking. This flag should be used with the @option{hls_time} option.
@item program_date_time
Generate @code{EXT-X-PROGRAM-DATE-TIME} tags.
@item second_level_segment_index
Make it possible to use segment indexes as %%d in the
@option{hls_segment_filename} option expression besides date/time values when
@option{strftime} option is on. To get fixed-width numbers with leading zeroes,
the %%0xd format is available, where x is the required width.
@item second_level_segment_size
Make it possible to use segment sizes (counted in bytes) as %%s in
@option{hls_segment_filename} option expression besides date/time values when
strftime is on. To get fixed-width numbers with leading zeroes, the %%0xs format
is available, where x is the required width.
@item second_level_segment_duration
Make it possible to use segment duration (calculated in microseconds) as %%t in
@option{hls_segment_filename} option expression besides date/time values when
strftime is on. To get fixed-width numbers with leading zeroes, the %%0xt format
is available, where x is the required width.
For example:
@example
ffmpeg -i sample.mpeg \
-f hls -hls_time 3 -hls_list_size 5 \
-hls_flags second_level_segment_index+second_level_segment_size+second_level_segment_duration \
-strftime 1 -strftime_mkdir 1 -hls_segment_filename "segment_%Y%m%d%H%M%S_%%04d_%%08s_%%013t.ts" stream.m3u8
@end example
will produce segments like this:
@file{segment_20170102194334_0003_00122200_0000003000000.ts}, @file{segment_20170102194334_0004_00120072_0000003000000.ts} etc.
@item temp_file
Write segment data to @file{filename.tmp} and rename to filename only once the
segment is complete.
A webserver serving up segments can be configured to reject requests to *.tmp to
prevent access to in-progress segments before they have been added to the m3u8
playlist.
This flag also affects how m3u8 playlist files are created. If this flag is set,
all playlist files will be written into a temporary file and renamed after they
are complete, similarly to how segments are handled. But playlists using the
@code{file} protocol and with an @option{hls_playlist_type} other than @samp{vod}
are always written into a temporary file regardless of this flag.
Master playlist files specified with @option{master_pl_name}, if any, using the
@code{file} protocol are always written into a temporary file regardless of this
flag, if the @option{master_pl_publish_rate} value is other than zero.
@end table
@item hls_playlist_type @var{type}
If type is @samp{event}, emit @code{#EXT-X-PLAYLIST-TYPE:EVENT} in the m3u8
header. This forces @option{hls_list_size} to 0; the playlist can only be
appended to.
If type is @samp{vod}, emit @code{#EXT-X-PLAYLIST-TYPE:VOD} in the m3u8
header. This forces @option{hls_list_size} to 0; the playlist must not change.
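For example, a sketch generating an event-style playlist from a live input,
assuming the input codecs can be stream copied:
@example
ffmpeg -re -i in.ts -c copy -f hls -hls_playlist_type event live.m3u8
@end example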
@item method @var{method}
Use the given HTTP method to create the hls files.
For example:
@example
ffmpeg -re -i in.ts -f hls -method PUT http://example.com/live/out.m3u8
@end example
will upload all the mpegts segment files to the HTTP server using the HTTP PUT
method, and update the m3u8 files with the same method every time they are
refreshed. Note that the HTTP server must support the given method for uploading
files.
@item http_user_agent @var{agent}
Override User-Agent field in HTTP header. Applicable only for HTTP output.
@item var_stream_map @var{stream_map}
Specify a map string defining how to group the audio, video and subtitle streams
into different variant streams. The variant stream groups are separated by
space.
The expected string format is "a:0,v:0 a:1,v:1 ...". Here a:, v:, s: are
the keys to specify audio, video and subtitle streams respectively.
Allowed values are 0 to 9 (the limit is based only on practical usage).
When there are two or more variant streams, the output filename pattern must
contain the string "%v": this string specifies the position of variant stream
index in the output media playlist filenames. The string "%v" may be present in
the filename or in the last directory name containing the file. If the string is
present in the directory name, then sub-directories are created after expanding
the directory name pattern. This enables creation of variant streams in
subdirectories.
A few examples follow.
@itemize
@item
Create two hls variant streams. The first variant stream will contain a video
stream with bitrate 1000k and an audio stream with bitrate 64k, and the second
variant stream will contain a video stream with bitrate 256k and an audio stream
with bitrate 32k. Here, two media playlists with file names @file{out_0.m3u8} and
@file{out_1.m3u8} will be created.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
http://example.com/live/out_%v.m3u8
@end example
@item
If you want meaningful text instead of indexes in the resulting names, you
may specify names for each or some of the variants. The following example will
create two hls variant streams as in the previous one, but here the two media
playlists will be named @file{out_my_hd.m3u8} and @file{out_my_sd.m3u8}.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0,name:my_hd v:1,a:1,name:my_sd" \
http://example.com/live/out_%v.m3u8
@end example
@item
Create three hls variant streams. The first variant stream will be a video only
stream with video bitrate 1000k, the second variant stream will be an audio only
stream with bitrate 64k and the third variant stream will be a video only stream
with bitrate 256k. Here, three media playlists with file names @file{out_0.m3u8},
@file{out_1.m3u8} and @file{out_2.m3u8} will be created.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k \
-map 0:v -map 0:a -map 0:v -f hls -var_stream_map "v:0 a:0 v:1" \
http://example.com/live/out_%v.m3u8
@end example
@item
Create the variant streams in subdirectories. Here, the first media playlist is
created at @file{http://example.com/live/vs_0/out.m3u8} and the second one at
@file{http://example.com/live/vs_1/out.m3u8}.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
http://example.com/live/vs_%v/out.m3u8
@end example
@item
Create two audio only and two video only variant streams. In addition to the
@code{#EXT-X-STREAM-INF} tag for each variant stream in the master playlist, the
@code{#EXT-X-MEDIA} tag is also added for the two audio only variant streams and
they are mapped to the two video only variant streams with audio group names
'aud_low' and 'aud_high'.
By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k -b:v:1 3000k \
-map 0:a -map 0:a -map 0:v -map 0:v -f hls \
-var_stream_map "a:0,agroup:aud_low a:1,agroup:aud_high v:0,agroup:aud_low v:1,agroup:aud_high" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
@item
Create two audio only and one video only variant streams. In addition to the
@code{#EXT-X-STREAM-INF} tag for each variant stream in the master playlist, the
@code{#EXT-X-MEDIA} tag is also added for the two audio only variant streams and
they are mapped to the one video only variant stream with audio group name
'aud_low'; the DEFAULT attribute of each audio entry is set to NO or YES as specified.
By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
-map 0:a -map 0:a -map 0:v -f hls \
-var_stream_map "a:0,agroup:aud_low,default:yes a:1,agroup:aud_low v:0,agroup:aud_low" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
@item
Create two audio only and one video only variant streams. In addition to the
@code{#EXT-X-STREAM-INF} tag for each variant stream in the master playlist, the
@code{#EXT-X-MEDIA} tag is also added for the two audio only variant streams and
they are mapped to the one video only variant stream with audio group name
'aud_low'; the DEFAULT attribute of each audio entry is set to NO or YES as
specified, the language of one audio stream is set to ENG and of the other to
CHN. By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
-map 0:a -map 0:a -map 0:v -f hls \
-var_stream_map "a:0,agroup:aud_low,default:yes,language:ENG a:1,agroup:aud_low,language:CHN v:0,agroup:aud_low" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
@item
Create a single variant stream. Add the @code{#EXT-X-MEDIA} tag with
@code{TYPE=SUBTITLES} in the master playlist with webvtt subtitle group name
'subtitle'. Make sure the input file has at least one text subtitle stream.
@example
ffmpeg -y -i input_with_subtitle.mkv \
-b:v:0 5250k -c:v h264 -pix_fmt yuv420p -profile:v main -level 4.1 \
-b:a:0 256k \
-c:s webvtt -c:a mp2 -ar 48000 -ac 2 -map 0:v -map 0:a:0 -map 0:s:0 \
-f hls -var_stream_map "v:0,a:0,s:0,sgroup:subtitle" \
-master_pl_name master.m3u8 -t 300 -hls_time 10 -hls_init_time 4 -hls_list_size \
10 -master_pl_publish_rate 10 -hls_flags \
delete_segments+discont_start+split_by_time ./tmp/video.m3u8
@end example
@end itemize
@item cc_stream_map @var{cc_stream_map}
Map string which specifies different closed captions groups and their
attributes. The closed captions stream groups are separated by space.
The expected string format is
"ccgroup:<group name>,instreamid:<INSTREAM-ID>,language:<language code> ...".
'ccgroup' and 'instreamid' are mandatory attributes. 'language' is an optional
attribute.
The closed captions groups configured using this option are mapped to different
variant streams by providing the same 'ccgroup' name in the
@option{var_stream_map} string.
For example:
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-a53cc:0 1 -a53cc:1 1 \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls \
-cc_stream_map "ccgroup:cc,instreamid:CC1,language:en ccgroup:cc,instreamid:CC2,language:sp" \
-var_stream_map "v:0,a:0,ccgroup:cc v:1,a:1,ccgroup:cc" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
will add two @code{#EXT-X-MEDIA} tags with @code{TYPE=CLOSED-CAPTIONS} in the
master playlist for the INSTREAM-IDs 'CC1' and 'CC2'. Also, it will add
@code{CLOSED-CAPTIONS} attribute with group name 'cc' for the two output variant
streams.
If @option{var_stream_map} is not set, then the first available ccgroup in
@option{cc_stream_map} is mapped to the output variant stream.
For example:
@example
ffmpeg -re -i in.ts -b:v 1000k -b:a 64k -a53cc 1 -f hls \
-cc_stream_map "ccgroup:cc,instreamid:CC1,language:en" \
-master_pl_name master.m3u8 \
http://example.com/live/out.m3u8
@end example
this will add @code{#EXT-X-MEDIA} tag with @code{TYPE=CLOSED-CAPTIONS} in the
master playlist with group name 'cc', language 'en' (English) and INSTREAM-ID
'CC1'. Also, it will add @code{CLOSED-CAPTIONS} attribute with group name 'cc'
for the output variant stream.
@item master_pl_name @var{name}
Create HLS master playlist with the given name.
For example:
@example
ffmpeg -re -i in.ts -f hls -master_pl_name master.m3u8 http://example.com/live/out.m3u8
@end example
creates an HLS master playlist with name @file{master.m3u8} which is published
at @url{http://example.com/live/}.
@item master_pl_publish_rate @var{count}
Publish the master playlist repeatedly, after every specified number of segment intervals.
For example:
@example
ffmpeg -re -i in.ts -f hls -master_pl_name master.m3u8 \
-hls_time 2 -master_pl_publish_rate 30 http://example.com/live/out.m3u8
@end example
creates an HLS master playlist with name @file{master.m3u8} and keeps
republishing it after every 30 segments, i.e. after every 60s.
@item http_persistent @var{bool}
Use persistent HTTP connections. Applicable only for HTTP output.
@item timeout @var{timeout}
Set timeout for socket I/O operations. Applicable only for HTTP output.
@item ignore_io_errors @var{bool}
Ignore IO errors during open, write and delete. Useful for long-duration runs with network output.
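For example, a sketch combining this option with the HTTP-related options above
(the URL is a placeholder):
@example
ffmpeg -re -i in.ts -c copy -f hls -method PUT -http_persistent 1 -timeout 5 \
  -ignore_io_errors 1 http://example.com/live/out.m3u8
@end example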
@item headers @var{headers}
Set custom HTTP headers; these can override the built-in default headers. Applicable only for HTTP output.
@end table
@section iamf
Immersive Audio Model and Formats (IAMF) muxer.
IAMF is used to provide immersive audio content for presentation on a wide range
of devices in both streaming and offline applications. These applications
include internet audio streaming, multicasting/broadcasting services, file
download, gaming, communication, virtual and augmented reality, and others. In
these applications, audio may be played back on a wide range of devices, e.g.,
headphones, mobile phones, tablets, TVs, sound bars, home theater systems, and
big screens.
This format was designed and promoted by the Alliance for Open Media.
For more information about this format, see @url{https://aomedia.org/iamf/}.
@anchor{ico}
@section ico
ICO file muxer.
Microsoft's icon file format (ICO) has some strict limitations that should be noted:
@itemize
@item
Size cannot exceed 256 pixels in any dimension
@item
Only BMP and PNG images can be stored
@item
If a BMP image is used, it must be one of the following pixel formats:
@example
BMP Bit Depth FFmpeg Pixel Format
1bit pal8
4bit pal8
8bit pal8
16bit rgb555le
24bit bgr24
32bit bgra
@end example
@item
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header
@item
If a PNG image is used, it must use the rgba pixel format
@end itemize
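For example, a sketch creating a 256x256 PNG-based icon from a single input image
(the input name is illustrative):
@example
ffmpeg -i input.png -vf scale=256:256 -c:v png -pix_fmt rgba -frames:v 1 favicon.ico
@end example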
@section ilbc
Internet Low Bitrate Codec (iLBC) raw muxer.
It accepts a single @samp{ilbc} audio stream.
@anchor{image2}
@section image2, image2pipe
Image file muxer.
The @samp{image2} muxer writes video frames to image files.
The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string "%d" or "%0@var{N}d", this string
specifies the position of the characters representing a numbering in
the filenames. If the form "%0@var{N}d" is used, the string
representing the number in each filename is 0-padded to @var{N}
digits. The literal character '%' can be specified in the pattern with
the string "%%".
If the pattern contains "%d" or "%0@var{N}d", the first filename of
the file list specified will contain the number 1, all the following
numbers will be sequential.
The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.
For example the pattern "img-%03d.bmp" will specify a sequence of
filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
@file{img-010.bmp}, etc.
The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.
The @samp{image2pipe} muxer accepts the same options as the @samp{image2} muxer,
but ignores the pattern verification and expansion, as it is supposed to write
to the command output rather than to an actual stored file.
@subsection Options
@table @option
@item frame_pts @var{bool}
If set to 1, expand the filename with the packet PTS (presentation time stamp).
Default value is 0.
@item start_number @var{count}
Start the sequence from the specified number. Default value is 1.
@item update @var{bool}
If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
@item strftime @var{bool}
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@item atomic_writing @var{bool}
Write output to a temporary file, which is renamed to target filename once
writing is completed. Default is disabled.
@item protocol_opts @var{options_list}
Set protocol options as a :-separated list of key=value parameters. Values
containing the @code{:} special character must be escaped.
@end table
@subsection Examples
@itemize
@item
Use @command{ffmpeg} for creating a sequence of files @file{img-001.jpeg},
@file{img-002.jpeg}, ..., taking one image every second from the input video:
@example
ffmpeg -i in.avi -vsync cfr -r 1 -f image2 'img-%03d.jpeg'
@end example
Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
ffmpeg -i in.avi -vsync cfr -r 1 'img-%03d.jpeg'
@end example
Note also that the pattern must not necessarily contain "%d" or
"%0@var{N}d", for example to create a single image file
@file{img.jpeg} from the start of the input video you can employ the command:
@example
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example
@item
The @option{strftime} option allows you to expand the filename with
date and time information. Check the documentation of
the @code{strftime()} function for the syntax.
To generate image files from the @code{strftime()} "%Y-%m-%d_%H-%M-%S" pattern,
the following @command{ffmpeg} command can be used:
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
@end example
@item
Set the file name with current frame's PTS:
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -copyts -f image2 -frame_pts true %d.jpg
@end example
@item
Publish contents of your desktop directly to a WebDAV server every second:
@example
ffmpeg -f x11grab -framerate 1 -i :0.0 -q:v 6 -update 1 -protocol_opts method=PUT http://example.com/desktop.jpg
@end example
@end itemize
@section ircam
Berkeley / IRCAM / CARL Sound Filesystem (BICSF) format muxer.
The Berkeley/IRCAM/CARL Sound Format, developed in the 1980s, is a result of the
merging of several different earlier sound file formats and systems including
the csound system developed by Dr Gareth Loy at the Computer Audio Research Lab
(CARL) at UC San Diego, the IRCAM sound file system developed by Rob Gross and
Dan Timis at the Institut de Recherche et Coordination Acoustique / Musique in
Paris and the Berkeley Fast Filesystem.
It was developed initially as part of the Berkeley/IRCAM/CARL Sound Filesystem,
a suite of programs designed to implement a filesystem for audio applications
running under Berkeley UNIX. It was particularly popular in academic music
research centres, and was used a number of times in the creation of early
computer-generated compositions.
This muxer accepts a single audio stream containing PCM data.
@section ivf
On2 IVF muxer.
IVF was developed by On2 Technologies (formerly known as Duck
Corporation), to store internally developed codecs.
This muxer accepts a single @samp{vp8}, @samp{vp9}, or @samp{av1}
video stream.
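For example, a sketch writing a VP9 stream to IVF, assuming the libvpx-vp9
encoder is available:
@example
ffmpeg -i INPUT -an -c:v libvpx-vp9 out.ivf
@end example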
@section jacosub
JACOsub subtitle format muxer.
This muxer accepts a single @samp{jacosub} subtitles stream.
For more information about the format, see
@url{http://unicorn.us.com/jacosub/jscripts.html}.
@section kvag
Simon & Schuster Interactive VAG muxer.
This custom VAG container is used by some Simon & Schuster Interactive
games such as "Real War", and "Real War: Rogue States".
This muxer accepts a single @samp{adpcm_ima_ssi} audio stream.
@section lc3
Bluetooth SIG Low Complexity Communication Codec audio (LC3), or
ETSI TS 103 634 Low Complexity Communication Codec plus (LC3plus).
This muxer accepts a single @samp{lc3} audio stream.
@section lrc
LRC lyrics file format muxer.
LRC (short for LyRiCs) is a computer file format that synchronizes
song lyrics with an audio file, such as MP3, Vorbis, or MIDI.
This muxer accepts a single @samp{subrip} or @samp{text} subtitles stream.
@subsection Metadata
The following metadata tags are converted to the format corresponding
metadata:
@table @option
@item title
@item album
@item artist
@item author
@item creator
@item encoder
@item encoder_version
@end table
If @samp{encoder_version} is not explicitly set, it is automatically
set to the libavformat version.
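For example, a sketch converting a SubRip file (the input name is illustrative)
while setting some of the tags listed above:
@example
ffmpeg -i lyrics.srt -metadata title="Some Title" -metadata artist="Some Artist" out.lrc
@end example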
@section matroska
Matroska container muxer.
This muxer implements the matroska and webm container specs.
@subsection Metadata
The recognized metadata settings in this muxer are:
@table @option
@item title
Set title name provided to a single track. This gets mapped to
the FileDescription element for a stream written as attachment.
@item language
Specify the language of the track in the Matroska languages form.
The language can be either the 3 letters bibliographic ISO-639-2 (ISO
639-2/B) form (like "fre" for French), or a language code mixed with a
country code for specialities in languages (like "fre-ca" for Canadian
French).
@item stereo_mode
Set stereo 3D video layout of two views in a single video track.
The following values are recognized:
@table @samp
@item mono
video is not stereo
@item left_right
Both views are arranged side by side, Left-eye view is on the left
@item bottom_top
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
@item top_bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
@item checkerboard_rl
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
@item checkerboard_lr
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
@item row_interleaved_rl
Each view is constituted by a row based interleaving, Right-eye view is first row
@item row_interleaved_lr
Each view is constituted by a row based interleaving, Left-eye view is first row
@item col_interleaved_rl
Both views are arranged in a column based interleaving manner, Right-eye view is first column
@item col_interleaved_lr
Both views are arranged in a column based interleaving manner, Left-eye view is first column
@item anaglyph_cyan_red
All frames are in anaglyph format viewable through red-cyan filters
@item right_left
Both views are arranged side by side, Right-eye view is on the left
@item anaglyph_green_magenta
All frames are in anaglyph format viewable through green-magenta filters
@item block_lr
Both eyes laced in one Block, Left-eye view is first
@item block_rl
Both eyes laced in one Block, Right-eye view is first
@end table
@end table
For example a 3D WebM clip can be created using the following command line:
@example
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
@subsection Options
@table @option
@item reserve_index_space @var{size}
By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
to leave for the index at the beginning of the file. However for some use cases
-- e.g. streaming where seeking is possible but slow -- it is useful to put the
index at the beginning of the file.
If this option is set to a non-zero value, the muxer will reserve @var{size} bytes
of space in the file header and then try to write the cues there when the muxing
finishes. If the reserved space does not suffice, no Cues will be written, the
file will be finalized and writing the trailer will return an error.
A safe size for most use cases should be about 50kB per hour of video.
Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
@item cues_to_front @var{bool}
If set, the muxer will write the index at the beginning of the file
by shifting the main data if necessary. This can be combined with
reserve_index_space in which case the data is only shifted if
the initially reserved space turns out to be insufficient.
This option is ignored if the output is unseekable.
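For example, a sketch reserving space for the cues at the start of the file during
a stream copy; the reserved size shown is only illustrative and should be scaled to
the expected duration as described above:
@example
ffmpeg -i INPUT -c copy -reserve_index_space 200000 -cues_to_front 1 out.mkv
@end example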
@item cluster_size_limit @var{size}
Store at most the provided amount of bytes in a cluster.
If not specified, the limit is set automatically to a sensible
hardcoded fixed value.
@item cluster_time_limit @var{duration}
Store at most the provided number of milliseconds in a cluster.
If not specified, the limit is set automatically to a sensible
hardcoded fixed value.
@item dash @var{bool}
Create a WebM file conforming to WebM DASH specification. By default
it is set to @code{false}.
@item dash_track_number @var{index}
Track number for the DASH stream. By default it is set to @code{1}.
@item live @var{bool}
Write files assuming it is a live stream. By default it is set to
@code{false}.
@item allow_raw_vfw @var{bool}
Allow raw VFW mode. By default it is set to @code{false}.
@item flipped_raw_rgb @var{bool}
If set to @code{true}, store a positive height for raw RGB bitmaps, which indicates
the bitmap is stored bottom-up. Note that this option does not flip the bitmap,
which has to be done manually beforehand, e.g. by using the @samp{vflip} filter.
Default is @code{false}, which indicates the bitmap is stored top-down.
@item write_crc32 @var{bool}
Write a CRC32 element inside every Level 1 element. By default it is
set to @code{true}. This option is ignored for WebM.
@item default_mode @var{mode}
Control how the FlagDefault of the output tracks will be set.
It influences which tracks players should play by default. The default mode
is @samp{passthrough}.
@table @samp
@item infer
Every track with disposition default will have the FlagDefault set.
Additionally, for each type of track (audio, video or subtitle), if no track
with disposition default of this type exists, then the first track of this type
will be marked as default (if existing). This ensures that the default flag
is set in a sensible way even if the input originated from containers that
lack the concept of default tracks.
@item infer_no_subs
This mode is the same as infer except that if no subtitle track with
disposition default exists, no subtitle track will be marked as default.
@item passthrough
In this mode the FlagDefault is set if and only if the AV_DISPOSITION_DEFAULT
flag is set in the disposition of the corresponding stream.
@end table
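For example, a sketch using the @samp{infer_no_subs} mode during a stream copy:
@example
ffmpeg -i INPUT -c copy -default_mode infer_no_subs out.mkv
@end example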
@end table
@anchor{md5}
@section md5
MD5 testing format.
This is a variant of the @ref{hash} muxer. Unlike that muxer, it
defaults to using the MD5 hash function.
See also the @ref{hash} and @ref{framemd5} muxers.
@subsection Examples
@itemize
@item
To compute the MD5 hash of the input converted to raw
audio and video, and store it in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f md5 out.md5
@end example
@item
To print the MD5 hash to stdout:
@example
ffmpeg -i INPUT -f md5 -
@end example
@end itemize
@section microdvd
MicroDVD subtitle format muxer.
This muxer accepts a single @samp{microdvd} subtitles stream.
@section mmf
Synthetic music Mobile Application Format (SMAF) format muxer.
SMAF is a music data format specified by Yamaha for portable
electronic devices, such as mobile phones and personal digital
assistants.
This muxer accepts a single @samp{adpcm_yamaha} audio stream.
@section mp3
The MP3 muxer writes a raw MP3 stream with the following optional features:
@itemize @bullet
@item
An ID3v2 metadata header at the beginning (enabled by default). Versions 2.3 and
2.4 are supported, the @code{id3v2_version} private option controls which one is
used (3 or 4). Setting @code{id3v2_version} to 0 disables the ID3v2 header
completely.
The muxer supports writing attached pictures (APIC frames) to the ID3v2 header.
The pictures are supplied to the muxer in form of a video stream with a single
packet. There can be any number of those streams, each will correspond to a
single APIC frame. The stream metadata tags @var{title} and @var{comment} map
to APIC @var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.
Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.
@item
A Xing/LAME frame right after the ID3v2 header (if present). It is enabled by
default, but will be written only if the output is seekable. The
@code{write_xing} private option can be used to disable it. The frame contains
various information that may be useful to the decoder, like the audio duration
or encoder delay.
@item
A legacy ID3v1 tag at the end of the file (disabled by default). It may be
enabled with the @code{write_id3v1} private option, but as its capabilities are
very limited, its usage is not recommended.
@end itemize
Examples:
Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example
To attach a picture to an mp3 file select both the audio and the picture stream
with @code{map}:
@example
ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1 \
-metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
@end example
Write a "clean" MP3 without any extra features:
@example
ffmpeg -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
@end example
@section mpegts
MPEG transport stream muxer.
This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is @samp{FFmpeg} and the default for
@code{service_name} is @samp{Service01}.
@subsection Options
The muxer options are:
@table @option
@item mpegts_transport_stream_id @var{integer}
Set the @samp{transport_stream_id}. This identifies a transponder in DVB.
Default is @code{0x0001}.
@item mpegts_original_network_id @var{integer}
Set the @samp{original_network_id}. This is the unique identifier of a
network in DVB. Its main use is in the unique identification of a service
through the path @samp{Original_Network_ID, Transport_Stream_ID}. Default
is @code{0x0001}.
@item mpegts_service_id @var{integer}
Set the @samp{service_id}, also known as program in DVB. Default is
@code{0x0001}.
@item mpegts_service_type @var{integer}
Set the program @samp{service_type}. Default is @code{digital_tv}.
Accepts the following options:
@table @samp
@item hex_value
Any hexadecimal value between @code{0x01} and @code{0xff} as defined in
ETSI 300 468.
@item digital_tv
Digital TV service.
@item digital_radio
Digital Radio service.
@item teletext
Teletext service.
@item advanced_codec_digital_radio
Advanced Codec Digital Radio service.
@item mpeg2_digital_hdtv
MPEG2 Digital HDTV service.
@item advanced_codec_digital_sdtv
Advanced Codec Digital SDTV service.
@item advanced_codec_digital_hdtv
Advanced Codec Digital HDTV service.
@end table
@item mpegts_pmt_start_pid @var{integer}
Set the first PID for PMTs. Default is @code{0x1000}, minimum is @code{0x0020},
maximum is @code{0x1ffa}. This option has no effect in m2ts mode where the PMT
PID is fixed to @code{0x0100}.
@item mpegts_start_pid @var{integer}
Set the first PID for elementary streams. Default is @code{0x0100}, minimum is
@code{0x0020}, maximum is @code{0x1ffa}. This option has no effect in m2ts mode
where the elementary stream PIDs are fixed.
@item mpegts_m2ts_mode @var{boolean}
Enable m2ts mode if set to @code{1}. Default value is @code{-1} which
disables m2ts mode.
@item muxrate @var{integer}
Set a constant muxrate. Default is VBR.
@item pes_payload_size @var{integer}
Set minimum PES packet payload in bytes. Default is @code{2930}.
@item mpegts_flags @var{flags}
Set mpegts flags. Accepts the following options:
@table @samp
@item resend_headers
Reemit PAT/PMT before writing the next packet.
@item latm
Use LATM packetization for AAC.
@item pat_pmt_at_frames
Reemit PAT and PMT at each video frame.
@item system_b
Conform to System B (DVB) instead of System A (ATSC).
@item initial_discontinuity
Mark the initial packet of each stream as discontinuity.
@item nit
Emit NIT table.
@item omit_rai
Disable writing of random access indicator.
@end table
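For example, a sketch combining two of the flags above during a stream copy:
@example
ffmpeg -i INPUT -c copy -f mpegts -mpegts_flags resend_headers+pat_pmt_at_frames out.ts
@end example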
@item mpegts_copyts @var{boolean}
Preserve original timestamps, if value is set to @code{1}. Default value
is @code{-1}, which results in shifting timestamps so that they start from 0.
@item omit_video_pes_length @var{boolean}
Omit the PES packet length for video packets. Default is @code{1} (true).
@item pcr_period @var{integer}
Override the default PCR retransmission time in milliseconds. Default is
@code{-1} which means that the PCR interval will be determined automatically:
20 ms is used for CBR streams; for VBR streams, the highest multiple of the frame
duration which is less than 100 ms is used.
@item pat_period @var{duration}
Maximum time in seconds between PAT/PMT tables. Default is @code{0.1}.
@item sdt_period @var{duration}
Maximum time in seconds between SDT tables. Default is @code{0.5}.
@item nit_period @var{duration}
Maximum time in seconds between NIT tables. Default is @code{0.5}.
@item tables_version @var{integer}
Set the PAT, PMT, SDT and NIT version (default @code{0}, valid values are from 0 to 31, inclusive).
This option allows updating the stream structure so that a standard consumer may
detect the change. To do so, reopen the output @code{AVFormatContext} (in case of API
usage) or restart the @command{ffmpeg} instance, cyclically changing the
@option{tables_version} value:
@example
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
ffmpeg -i source3.ts -codec copy -f mpegts -tables_version 31 udp://1.1.1.1:1111
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
@end example
@end table
@subsection Example
@example
ffmpeg -i file.mpg -c copy \
-mpegts_original_network_id 0x1122 \
-mpegts_transport_stream_id 0x3344 \
-mpegts_service_id 0x5566 \
-mpegts_pmt_start_pid 0x1500 \
-mpegts_start_pid 0x150 \
-metadata service_provider="Some provider" \
-metadata service_name="Some Channel" \
out.ts
@end example
@section mxf, mxf_d10, mxf_opatom
MXF muxer.
@subsection Options
The muxer options are:
@table @option
@item store_user_comments @var{bool}
Set whether user comments should be stored if available, or never.
IRT D-10 does not allow user comments. The default is thus to write them for
mxf and mxf_opatom but not for mxf_d10.
@end table
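For example, a sketch disabling user comments; the codec and sample rate choices
are only illustrative:
@example
ffmpeg -i INPUT -c:v mpeg2video -c:a pcm_s16le -ar 48000 -store_user_comments 0 out.mxf
@end example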
@section null
Null muxer.
This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.
For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
ffmpeg -benchmark -i INPUT -f null out.null
@end example
Note that the above command does not read or write the @file{out.null}
file, but specifying the output file is required by the @command{ffmpeg}
syntax.
Alternatively you can write the command as:
@example
ffmpeg -benchmark -i INPUT -f null -
@end example
@section nut
@table @option
@item -syncpoints @var{flags}
Change the syncpoint usage in nut:
@table @option
@item @var{default} use the normal low-overhead seeking aids.
@item @var{none} do not use the syncpoints at all, reducing the overhead but making the stream non-seekable;
Use of this option is not recommended, as the resulting files are very damage
sensitive and seeking is not possible. Also, in general the overhead from
syncpoints is negligible. Note that @code{-write_index 0} can be used to disable
all growing data tables, allowing muxing of endless streams with limited memory
and without these disadvantages.
@item @var{timestamped} extend the syncpoint with a wallclock field.
@end table
The @var{none} and @var{timestamped} flags are experimental.
@item -write_index @var{bool}
Write index at the end, the default is to write an index.
@end table
@example
ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
@end example
@section ogg
Ogg container muxer.
@table @option
@item -page_duration @var{duration}
Preferred page duration, in microseconds. The muxer will attempt to create
pages that are approximately @var{duration} microseconds long. This allows the
user to compromise between seek granularity and container overhead. The default
is 1 second. A value of 0 will fill all segments, making pages as large as
possible. A value of 1 will effectively use 1 packet-per-page in most
situations, giving a small seek granularity at the cost of additional container
overhead.
@item -serial_offset @var{value}
Serial value from which to set the streams' serial numbers.
Setting it to different and sufficiently large values ensures that the produced
ogg files can be safely chained.
@end table
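For example, a sketch lowering the page duration to improve seek granularity at
the cost of some container overhead, assuming the libvorbis encoder is available:
@example
ffmpeg -i INPUT -c:a libvorbis -page_duration 100000 out.ogg
@end example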
@anchor{rcwtenc}
@section rcwt
RCWT (Raw Captions With Time) is a format native to ccextractor, a commonly
used open source tool for processing 608/708 Closed Captions (CC) sources.
It can be used to archive the original extracted CC bitstream and to produce
a source file for later processing or conversion. The format allows
for interoperability between ccextractor and FFmpeg, is simple to parse,
and can be used to create a backup of the CC presentation.
This muxer implements the specification as of March 2024, which has
been stable and unchanged since April 2014.
This muxer's output differs in some details from the way that ccextractor muxes RCWT.
No compatibility issues when processing the output with ccextractor
have been observed as a result of this so far, but mileage may vary
and outputs will not be a bit-exact match.
A free specification of RCWT can be found here:
@url{https://github.com/CCExtractor/ccextractor/blob/master/docs/BINARY_FILE_FORMAT.TXT}
@subsection Examples
@itemize
@item
Extract Closed Captions to RCWT using lavfi:
@example
ffmpeg -f lavfi -i "movie=INPUT.mkv[out+subcc]" -map 0:s:0 -c:s copy -f rcwt CC.rcwt.bin
@end example
@end itemize
@anchor{segment}
@section segment, stream_segment, ssegment
Basic stream segmenter.
This muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion
similar to @ref{image2}, or by using a @code{strftime} template if
the @option{strftime} option is enabled.
@code{stream_segment} is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
@code{ssegment} is a shorter alias for @code{stream_segment}.
Every segment starts with a keyframe of the selected reference stream,
which is set through the @option{reference_stream} option.
Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the key frame found next after the specified start
time.
The segment muxer works best with a single constant frame rate video.
Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option. The entry filenames in the segment
list are set by default to the basename of the corresponding segment
files.
See also the @ref{hls} muxer, which provides a more specific
implementation for HLS segmentation.
@subsection Options
The segment muxer supports the following options:
@table @option
@item increment_tc @var{1|0}
if set to @code{1}, increment the timecode between each segment.
If this is selected, the input needs to have
a timecode in the first video stream. Default value is
@code{0}.
@item reference_stream @var{specifier}
Set the reference stream, as specified by the string @var{specifier}.
If @var{specifier} is set to @code{auto}, the reference is chosen
automatically. Otherwise it must be a stream specifier (see the ``Stream
specifiers'' chapter in the ffmpeg manual) which specifies the
reference stream. The default value is @code{auto}.
@item segment_format @var{format}
Override the inner container format; by default it is guessed from the filename
extension.
@item segment_format_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing the @code{:} special character must be
escaped.
@item segment_list @var{name}
Also generate a list file named @var{name}. If not specified, no list
file is generated.
@item segment_list_flags @var{flags}
Set flags affecting the segment list generation.
It currently supports the following flags:
@table @samp
@item cache
Allow caching (only affects M3U8 list files).
@item live
Allow live-friendly file generation.
@end table
@item segment_list_size @var{size}
Update the list file so that it contains at most @var{size}
segments. If 0 the list file will contain all the segments. Default
value is 0.
@item segment_list_entry_prefix @var{prefix}
Prepend @var{prefix} to each entry. Useful to generate absolute paths.
By default no prefix is applied.
@item segment_list_type @var{type}
Select the listing format.
The following values are recognized:
@table @samp
@item flat
Generate a flat list for the created segments, one segment per line.
@item csv, ext
Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
@example
@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
@end example
@var{segment_filename} is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC4180) is applied if required.
@var{segment_start_time} and @var{segment_end_time} specify
the segment start and end time expressed in seconds.
A list file with the suffix @code{".csv"} or @code{".ext"} will
auto-select this format.
@samp{ext} is deprecated in favor of @samp{csv}.
@item ffconcat
Generate an ffconcat file for the created segments. The resulting file
can be read using the FFmpeg @ref{concat} demuxer.
A list file with the suffix @code{".ffcat"} or @code{".ffconcat"} will
auto-select this format.
@item m3u8
Generate an extended M3U8 file, version 3, compliant with
@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming}.
A list file with the suffix @code{".m3u8"} will auto-select this format.
@end table
If not specified the type is guessed from the list file name suffix.
@item segment_time @var{time}
Set segment duration to @var{time}, the value must be a duration
specification. Default value is "2". See also the
@option{segment_times} option.
Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.
@item min_seg_duration @var{time}
Set minimum segment duration to @var{time}, the value must be a duration
specification. This prevents the muxer from ending segments at a duration below
this value. Only effective with @code{segment_time}. Default value is "0".
@item segment_atclocktime @var{1|0}
If set to "1" split at regular clock time intervals starting from 00:00
o'clock. The @var{time} value specified in @option{segment_time} is
used for setting the length of the splitting interval.
For example with @option{segment_time} set to "900" this makes it possible
to create files at 12:00 o'clock, 12:15, 12:30, etc.
Default value is "0".
@item segment_clocktime_offset @var{duration}
Delay the segment splitting times with the specified duration when using
@option{segment_atclocktime}.
For example with @option{segment_time} set to "900" and
@option{segment_clocktime_offset} set to "300" this makes it possible to
create files at 12:05, 12:20, 12:35, etc.
Default value is "0".
@item segment_clocktime_wrap_duration @var{duration}
Force the segmenter to only start a new segment if a packet reaches the muxer
within the specified duration after the segmenting clock time. This way you
can make the segmenter more resilient to backward local time jumps, such as
leap seconds or transition to standard time from daylight savings time.
Default is the maximum possible duration which means starting a new segment
regardless of the elapsed time since the last clock time.
@item segment_time_delta @var{delta}
Specify the tolerance used when selecting the start time for a
segment, expressed as a duration specification. Default value is "0".
When delta is specified a key-frame will start a new segment if its
PTS satisfies the relation:
@example
PTS >= start_time - time_delta
@end example
This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.
In particular, it may be used in combination with the @command{ffmpeg} option
@option{force_key_frames}. The key frame times specified by
@option{force_key_frames} may not be set accurately because of rounding
issues, with the consequence that a key frame time may end up set just
before the specified time. For constant frame rate videos a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
the specified time and the time set by @option{force_key_frames}.
@item segment_times @var{times}
Specify a list of split points. @var{times} contains a list of comma
separated duration specifications, in increasing order. See also
the @option{segment_time} option.
@item segment_frames @var{frames}
Specify a list of split video frame numbers. @var{frames} contains a
list of comma separated integer numbers, in increasing order.
This option specifies to start a new segment whenever a reference
stream key frame is found and the sequential number (starting from 0)
of the frame is greater than or equal to the next value in the list.
@item segment_wrap @var{limit}
Wrap around segment index once it reaches @var{limit}.
@item segment_start_number @var{number}
Set the sequence number of the first segment. Defaults to @code{0}.
@item strftime @var{1|0}
Use the @code{strftime} function to define the name of the new
segments to write. If this is selected, the output segment name must
contain a @code{strftime} function template. Default value is
@code{0}.
@item break_non_keyframes @var{1|0}
If enabled, allow segments to start on frames other than keyframes. This
improves behavior on some players when the time between keyframes is
inconsistent, but may make things worse on others, and can cause some oddities
during seeking. Defaults to @code{0}.
@item reset_timestamps @var{1|0}
Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
of the generated segments. May not work with some combinations of
muxers/codecs. It is set to @code{0} by default.
@item initial_offset @var{offset}
Specify timestamp offset to apply to the output packet timestamps. The
argument must be a time duration specification, and defaults to 0.
@item write_empty_segments @var{1|0}
If enabled, write an empty segment if there are no packets during the period a
segment would usually span. Otherwise, the segment will be filled with the next
packet written. Defaults to @code{0}.
@end table
Make sure to require a closed GOP when encoding and to set the GOP
size to fit your segment time constraint.
@subsection Examples
@itemize
@item
Re-encode the content of file @file{in.mkv} with closed GOPs and split it
into a list of segments @file{out000.nut}, @file{out001.nut}, etc., writing
the list of generated segments to @file{out.list}:
@example
ffmpeg -i in.mkv -codec hevc -flags +cgop -g 60 -map 0 -f segment -segment_list out.list out%03d.nut
@end example
@item
Segment input and set output format options for the output segments:
@example
ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
@end example
@item
Segment the input file according to the split points specified by the
@var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example
@item
Use the @command{ffmpeg} @option{force_key_frames}
option to force key frames in the input at the specified locations, together
with the segment option @option{segment_time_delta} to account for
possible rounding performed when setting key frame times.
@example
ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
@end example
In order to force key frames on the input file, transcoding is
required.
@item
Segment the input file by splitting the input file according to the
frame numbers sequence specified with the @option{segment_frames} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
@end example
@item
Convert the @file{in.mkv} to TS segments using the @code{libx264}
and @code{aac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a aac -f ssegment -segment_list out.list out%03d.ts
@end example
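@item
Read the input at its native rate and split it at wall-clock boundaries every
15 minutes, naming each segment after its start time; a sketch combining
@option{segment_atclocktime} with @option{strftime} (the output pattern is
illustrative):
@example
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_atclocktime 1 \
-segment_time 900 -strftime 1 out-%Y%m%d-%H%M%S.mkv
@end example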
@item
Segment the input file, and create an M3U8 live playlist (can be used
as live HLS source):
@example
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_time 10 out%03d.mkv
@end example
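@item
As above, but keep only a rolling window of recent segments on disk by limiting
the playlist size and wrapping the segment index so that older files are
overwritten; a sketch with illustrative sizes:
@example
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_list_size 6 -segment_wrap 12 -segment_time 10 out%03d.mkv
@end example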
@end itemize
@section smoothstreaming
The Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with a conventional web server.
@subsection Options
This muxer supports the following options:
@table @option
@item window_size
Specify the number of fragments kept in the manifest. Default 0 (keep all).
@item extra_window_size
Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.
@item lookahead_count
Specify the number of lookahead fragments. Default 2.
@item min_frag_duration
Specify the minimum fragment duration (in microseconds). Default 5000000.
@item remove_at_exit
Specify whether to remove all fragments when finished. Default 0 (do not remove).
@end table
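@subsection Example
A sketch that publishes a live Smooth Streaming output into the directory
@file{/var/www/stream}; the path and encoder choices are illustrative:
@example
ffmpeg -re -i INPUT -c:v libx264 -c:a aac -f smoothstreaming \
-window_size 10 -extra_window_size 5 /var/www/stream
@end example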
@anchor{streamhash}
@section streamhash
Per stream hash testing format.
This muxer computes and prints a cryptographic hash of all the input frames,
on a per-stream basis. This can be used for equality checks without having
to do a complete binary comparison.
By default audio frames are converted to signed 16-bit raw audio and
video frames to raw video before computing the hash, but the output
of explicit conversions to other codecs can also be used. Timestamps
are ignored. It uses the SHA-256 cryptographic hash function by default,
but supports several other algorithms.
The output of the muxer consists of one line per stream of the form:
@var{streamindex},@var{streamtype},@var{algo}=@var{hash}, where
@var{streamindex} is the index of the mapped stream, @var{streamtype} is a
single character indicating the type of stream, @var{algo} is a short string
representing the hash function used, and @var{hash} is a hexadecimal number
representing the computed hash.
@table @option
@item hash @var{algorithm}
Use the cryptographic hash function specified by the string @var{algorithm}.
Supported values include @code{MD5}, @code{murmur3}, @code{RIPEMD128},
@code{RIPEMD160}, @code{RIPEMD256}, @code{RIPEMD320}, @code{SHA160},
@code{SHA224}, @code{SHA256} (default), @code{SHA512/224}, @code{SHA512/256},
@code{SHA384}, @code{SHA512}, @code{CRC32} and @code{adler32}.
@end table
@subsection Examples
To compute the SHA-256 hash of the input converted to raw audio and
video, and store it in the file @file{out.sha256}:
@example
ffmpeg -i INPUT -f streamhash out.sha256
@end example
To print an MD5 hash to stdout use the command:
@example
ffmpeg -i INPUT -f streamhash -hash md5 -
@end example
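Any of the algorithms listed above can be selected in the same way; for
example, to write per-stream CRC32 checksums to @file{out.crc} (a sketch):
@example
ffmpeg -i INPUT -f streamhash -hash crc32 out.crc
@end example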
See also the @ref{hash} and @ref{framehash} muxers.
@anchor{tee}
@section tee
The tee muxer can be used to write the same data to several outputs, such as files or streams.
It can be used, for example, to stream a video over a network and save it to disk at the same time.
It is different from specifying several outputs to the @command{ffmpeg}
command-line tool. With the tee muxer, the audio and video data will be encoded only once.
With conventional multiple outputs, multiple encoding operations in parallel are initiated,
which can be a very expensive process. The tee muxer is not useful when using the libavformat API
directly because it is then possible to feed the same packets to several muxers directly.
Since the tee muxer does not represent any particular output format, ffmpeg cannot auto-select
output streams. So all streams intended for output must be specified using @code{-map}. See
the examples below.
Some encoders may need different options depending on the output format;
this auto-detection cannot work with the tee muxer, so the options need to be
specified explicitly. The main example is the @option{global_header} flag.
The slave outputs are specified in the file name given to the muxer,
separated by '|'. If any of the slave names contains the '|' separator,
leading or trailing spaces or any special character, those must be
escaped (see @ref{quoting_and_escaping,,the "Quoting and escaping"
section in the ffmpeg-utils(1) manual,ffmpeg-utils}).
@subsection Options
@table @option
@item use_fifo @var{bool}
If set to 1, slave outputs will be processed in separate threads using the @ref{fifo}
muxer. This makes it possible to compensate for different speed/latency/reliability of
the outputs and to set up transparent recovery. By default this feature is turned off.
See the examples below.
@item fifo_options
Options to pass to fifo pseudo-muxer instances. See @ref{fifo}.
@end table
Muxer options can be specified for each slave by prepending them as a list of
@var{key}=@var{value} pairs separated by ':', between square brackets. If
the options values contain a special character or the ':' separator, they
must be escaped; note that this is a second level escaping.
The following special options are also recognized:
@table @option
@item f
Specify the format name. Required if it cannot be guessed from the
output URL.
@item bsfs[/@var{spec}]
Specify a list of bitstream filters to apply to the specified
output.
It is possible to specify to which streams a given bitstream filter
applies, by appending a stream specifier to the option separated by
@code{/}. @var{spec} must be a stream specifier (see @ref{Format
stream specifiers}).
If no stream specifier is given, the bitstream filters will be
applied to all streams in the output. This will cause that output operation
to fail if the output contains streams to which the bitstream filter cannot
be applied, e.g. @code{h264_mp4toannexb} being applied to an output containing an audio stream.
Options for a bitstream filter must be specified in the form of @code{opt=value}.
Several bitstream filters can be specified, separated by ",".
@item use_fifo @var{bool}
This allows overriding the tee muxer @option{use_fifo} option for an individual slave muxer.
@item fifo_options
This allows overriding the tee muxer @option{fifo_options} for an individual slave muxer.
See @ref{fifo}.
@item select
Select the streams that should be mapped to the slave output,
specified by a stream specifier. If not specified, this defaults to
all the mapped streams. This will cause that output operation to fail
if the output format does not accept all mapped streams.
You may use multiple stream specifiers separated by commas (@code{,}) e.g.: @code{a:0,v}
@item onfail
Specify behaviour on output failure. This can be set to either @code{abort} (which is
the default) or @code{ignore}. @code{abort} will cause the whole process to fail in case of
a failure on this slave output. @code{ignore} will ignore failure on this output, so other
outputs will continue without being affected.
@end table
@subsection Examples
@itemize
@item
Encode something and both archive it in a Matroska file and stream it
as MPEG-TS over UDP:
@example
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a \
"archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
@end example
@item
As above, but continue streaming even if output to the local file fails
(for example if the local drive fills up):
@example
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a \
"[onfail=ignore]archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
@end example
@item
Use @command{ffmpeg} to encode the input, and send the output
to three different destinations. The @code{dump_extra} bitstream
filter is used to add extradata information to all the output video
keyframes packets, as requested by the MPEG-TS format. The select
option is applied to @file{out.aac} in order to make it contain only
audio packets.
@example
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac \
-f tee "[bsfs/v=dump_extra=freq=keyframe]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac"
@end example
@item
As above, but select only stream @code{a:1} for the audio output. Note
that a second level escaping must be performed, as ":" is a special
character used to separate options.
@example
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac \
-f tee "[bsfs/v=dump_extra=freq=keyframe]out.ts|[movflags=+faststart]out.mp4|[select=\'a:1\']out.aac"
@end example
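@item
Let each slave output run in its own thread through the @ref{fifo} muxer, so
that a slow or failing network destination does not stall the local recording.
A sketch based on the earlier archive/streaming example:
@example
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -use_fifo 1 -map 0:v -map 0:a \
"[onfail=ignore]archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
@end example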
@end itemize
@section webm_chunk
WebM Live Chunk Muxer.
This muxer writes out WebM headers and chunks as separate files which can be
consumed by clients that support WebM Live streams via DASH.
@subsection Options
This muxer supports the following options:
@table @option
@item chunk_start_index
Index of the first chunk (defaults to 0).
@item header
Filename of the header where the initialization data will be written.
@item audio_chunk_duration
Duration of each audio chunk in milliseconds (defaults to 5000).
@end table
@subsection Example
@example
ffmpeg -f v4l2 -i /dev/video0 \
-f alsa -i hw:0 \
-map 0:0 \
-c:v libvpx-vp9 \
-s 640x360 -keyint_min 30 -g 30 \
-f webm_chunk \
-header webm_live_video_360.hdr \
-chunk_start_index 1 \
webm_live_video_360_%d.chk \
-map 1:0 \
-c:a libvorbis \
-b:a 128k \
-f webm_chunk \
-header webm_live_audio_128.hdr \
-chunk_start_index 1 \
-audio_chunk_duration 1000 \
webm_live_audio_128_%d.chk
@end example
@section webm_dash_manifest
WebM DASH Manifest muxer.
This muxer implements the WebM DASH Manifest specification to generate the DASH
manifest XML. It also supports manifest generation for DASH live streams.
For more information see:
@itemize @bullet
@item
WebM DASH Specification: @url{https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification}
@item
ISO DASH Specification: @url{http://standards.iso.org/ittf/PubliclyAvailableStandards/c065274_ISO_IEC_23009-1_2014.zip}
@end itemize
@subsection Options
This muxer supports the following options:
@table @option
@item adaptation_sets
This option has the following syntax: "id=x,streams=a,b,c id=y,streams=d,e" where x and y are the
unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding
audio and video streams. Any number of adaptation sets can be added using this option.
@item live
Set this to 1 to create a live stream DASH Manifest. Default: 0.
@item chunk_start_index
Start index of the first chunk. This will go in the @samp{startNumber} attribute
of the @samp{SegmentTemplate} element in the manifest. Default: 0.
@item chunk_duration_ms
Duration of each chunk in milliseconds. This will go in the @samp{duration}
attribute of the @samp{SegmentTemplate} element in the manifest. Default: 1000.
@item utc_timing_url
URL of the page that will return the UTC timestamp in ISO format. This will go
in the @samp{value} attribute of the @samp{UTCTiming} element in the manifest.
Default: None.
@item time_shift_buffer_depth
Smallest time (in seconds) shifting buffer for which any Representation is
guaranteed to be available. This will go in the @samp{timeShiftBufferDepth}
attribute of the @samp{MPD} element. Default: 60.
@item minimum_update_period
Minimum update period (in seconds) of the manifest. This will go in the
@samp{minimumUpdatePeriod} attribute of the @samp{MPD} element. Default: 0.
@end table
@subsection Example
@example
ffmpeg -f webm_dash_manifest -i video1.webm \
-f webm_dash_manifest -i video2.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-map 0 -map 1 -map 2 -map 3 \
-c copy \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
manifest.xml
@end example
@c man end MUXERS