Intel MediaSDK and oneVPL expect continuous allocation for data[i],
however there are mandatory padding bytes between data[i] and data[i+1]
when calling av_frame_get_buffer. This patch removes all extra padding
bytes.
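For illustration, one way to get such a layout is to fill the plane
pointers with align = 1 (a sketch, not the code from this patch):

    #include <libavutil/buffer.h>
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>

    /* sketch: one continuous buffer, planes laid out back-to-back */
    static int alloc_contiguous(AVFrame *frame)
    {
        int size = av_image_get_buffer_size(frame->format, frame->width,
                                            frame->height, 1);
        if (size < 0)
            return size;
        frame->buf[0] = av_buffer_alloc(size);
        if (!frame->buf[0])
            return AVERROR(ENOMEM);
        /* align = 1: data[i + 1] starts exactly where data[i] ends */
        return av_image_fill_arrays(frame->data, frame->linesize,
                                    frame->buf[0]->data, frame->format,
                                    frame->width, frame->height, 1);
    }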
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The data copy is unnecessary for packed formats when frame width and
height are aligned.
For example:
$ ffmpeg -f lavfi -i testsrc=size=1920x1088 -vf "format=yuyv422" -c:v hevc_qsv -f null -
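(1920 and 1088 are both multiples of 16, so no padding is required.)
The guard is roughly of the following shape; the 16-pixel alignment is
illustrative, the real requirement comes from the mfx surface:

    #include <libavutil/frame.h>
    #include <libavutil/macros.h>

    /* sketch: the copy can be skipped when no padding would be added */
    static int can_skip_copy(const AVFrame *frame)
    {
        return frame->width  == FFALIGN(frame->width,  16) &&
               frame->height == FFALIGN(frame->height, 16);
    }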
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The mfx implementation based on D3D11 is expected for an internal
session on Windows, however sometimes this implementation is not
supported [1]. A fallback to the default mfx implementation is added in
this patch.
[1] https://github.com/intel/cartwheel-ffmpeg/issues/246
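The fallback amounts to roughly the following, using the classic
MFXInit dispatcher entry point (a sketch, not the patch itself):

    #include <mfxvideo.h>

    static mfxStatus init_session(mfxSession *session)
    {
        mfxVersion ver = { { 0, 1 } };  /* {Minor, Major}: API 1.0 */
        mfxStatus  sts = MFXInit(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11,
                                 &ver, session);
        if (sts != MFX_ERR_NONE)
            /* D3D11-based implementation unsupported: fall back */
            sts = MFXInit(MFX_IMPL_HARDWARE_ANY, &ver, session);
        return sts;
    }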
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
mix->timestamps is expressed relative to the source timebase, which is
possibly a different timescale from `base_pts`. We can't mix-and-match
here. The only reason this worked in my previous testing was because I
was testing on a source file which had an exactly matching timebase.
Fix it by always using the exact PTS as tagged on the AVFrame.
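In other words, a source-timebase value can only be compared with
base_pts after rescaling, e.g. (the timebase parameters here are
placeholders):

    #include <libavutil/mathematics.h>

    /* values in different timebases must be rescaled before mixing */
    static int64_t to_base_tb(int64_t ts, AVRational src_tb,
                              AVRational base_tb)
    {
        return av_rescale_q(ts, src_tb, base_tb);
    }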
Fixes decoding packets containing split temporal units, as generated for example
by the av1_frame_split bsf.
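For instance (file names are placeholders):

$ ffmpeg -i input.ivf -c:v copy -bsf:v av1_frame_split output.ivf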
Signed-off-by: James Almer <jamrial@gmail.com>
When no drawing is to be performed, tpad can work fine with
hardware frames, so advertise this in the query_formats
callback and ensure the drawing context is never initialised
when just cloning frames.
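Roughly the shape of the change (a sketch; needs_drawing() stands in
for the actual start/stop mode checks):

    static int query_formats(AVFilterContext *ctx)
    {
        TPadContext *s = ctx->priv;

        /* cloning only: any format will do, including hardware frames */
        if (!needs_drawing(s))
            return ff_set_common_formats(ctx,
                                         ff_all_formats(AVMEDIA_TYPE_VIDEO));
        /* drawing requires a format FFDrawContext can write to */
        return ff_set_common_formats(ctx, ff_draw_supported_pixel_formats(0));
    }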
Reviewed-by: Thilo Borgmann <thilo.borgmann@mail.de>
Reviewed-by: Niklas Haas <git@haasn.dev>
It currently emulates the long-removed
avcodec_decode_audio4/avcodec_decode_video2 APIs, which obfuscates the
actual decoding flow. Restructure the decoding calls so that they
naturally follow the new avcodec_send_packet()/avcodec_receive_frame()
design.
This is not only significantly easier to read, but also shorter.
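That is, decoding now follows the canonical pattern (a minimal sketch;
process_frame() stands in for the real consumer):

    #include <libavcodec/avcodec.h>

    static int decode_packet(AVCodecContext *dec_ctx, const AVPacket *pkt,
                             AVFrame *frame)
    {
        int ret = avcodec_send_packet(dec_ctx, pkt); /* NULL pkt flushes */
        if (ret < 0)
            return ret;

        while (1) {
            ret = avcodec_receive_frame(dec_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;   /* need more input / fully drained */
            if (ret < 0)
                return ret; /* actual decoding error */
            process_frame(frame);
            av_frame_unref(frame);
        }
    }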
It is currently handled in the same loop as audio and video, but this
obscures the actual flow, because only one iteration is ever performed
for subtitles.
Also, avoid a pointless packet reference.
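Subtitles keep the one-shot API, so a single call per packet suffices
(a sketch; handle_subtitle() is a placeholder):

    #include <libavcodec/avcodec.h>

    static int decode_subtitle(AVCodecContext *dec_ctx, AVPacket *pkt)
    {
        AVSubtitle sub;
        int got_sub = 0;
        int ret = avcodec_decode_subtitle2(dec_ctx, &sub, &got_sub, pkt);
        if (ret >= 0 && got_sub) {
            handle_subtitle(&sub);
            avsubtitle_free(&sub);
        }
        return ret;
    }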
process_input_packet() contains two non-interacting pieces of nontrivial
size and complexity - decoding and streamcopy. Separating them makes the
code easier to read.
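After the split, the function reduces to a dispatch along these lines
(the helper names are hypothetical):

    static int process_input_packet(InputStream *ist, AVPacket *pkt)
    {
        if (ist->decoding_needed)
            return decode_packet(ist, pkt);    /* hypothetical helper */
        return streamcopy_packet(ist, pkt);    /* hypothetical helper */
    }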
New placement requires fewer explicit conditions and is easier to
understand.
The logic should be exactly equivalent, since this is the only place
where eof_reached is set for decoding.
Passing ist=NULL is currently used to identify stream types that do not
decode into AVFrames, i.e. subtitles. That is highly non-obvious -
always pass a non-NULL InputStream and just check the type explicitly.
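i.e. replace the ist == NULL convention with an explicit check, e.g.:

    /* explicit and self-documenting, unlike passing ist = NULL */
    static int decodes_to_frames(const InputStream *ist)
    {
        return ist->st->codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE;
    }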
It tracks whether the decoder for this stream ever produced any frames
and its only use is for checking whether a filter input ever received a
frame - those that did not are prioritized by the scheduler.
This is awkward and unnecessarily complicated - checking whether the
filtergraph input format is valid works just as well and does not
require maintaining an extra variable.
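The replacement check is simply (a sketch; the format field name is an
assumption about the InputFilter layout):

    /* an input that never received a frame still has its format unset */
    static int ifilter_saw_input(const InputFilter *ifilter)
    {
        return ifilter->format >= 0;  /* negative means nothing arrived */
    }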
In case no decoder is available, dec_open() called from ist_use() will
fail with 'Decoding requested, but no decoder found', so this check is
redundant.
When a filtergraph input receives EOF but never saw any input frames, we
use the fallback parameters. Currently an attempt to actually configure
the filtergraph will happen elsewhere, but there is no reason to
postpone this.
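Sketch of the resulting flow (helper names assumed):

    /* EOF arrived and no frame was ever seen: take the fallback
     * parameters and configure the graph right away */
    if (ifilter->format < 0) {
        ret = ifilter_use_fallback_params(ifilter); /* assumed helper */
        if (ret < 0)
            return ret;
        ret = configure_filtergraph(fg);
    }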
With complex filtergraphs it can happen that the filtergraph is
unconfigured because a filter other than the one we just got EOF on
is missing parameters.
Make sure that the fallback parameters for a given input are only used
when that input is unconfigured.
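i.e. the fallback is now guarded per input (a sketch, names assumed):

    /* use the fallback only when this input lacks parameters, not
     * merely because the graph as a whole is unconfigured */
    if (ifilter->format < 0)
        ret = ifilter_use_fallback_params(ifilter); /* assumed helper */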