FFmpeg multithreading methods
==============================================

FFmpeg provides two methods for multithreading codecs.

Slice threading decodes multiple parts of a frame at the same time, using
the AVCodecContext execute() and execute2() callbacks.

Frame threading decodes multiple frames at the same time.
It accepts N future frames and delays decoded pictures by N-1 frames.
The later frames are decoded in separate threads while the user is
displaying the current one.
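
For illustration, a slice-threaded decoder typically wraps its per-slice work in a
callback and hands it to execute2(); the sketch below assumes a hypothetical
MyContext private context and my_decode_slice() helper:

    /* Worker callback: decode one independent slice of the current frame.
     * jobnr selects the slice, threadnr identifies the worker thread. */
    static int decode_slice_job(AVCodecContext *avctx, void *arg,
                                int jobnr, int threadnr)
    {
        MyContext *s = avctx->priv_data;
        return my_decode_slice(s, jobnr);   /* hypothetical per-slice decoder */
    }

    /* In the frame decoding function, run all slice jobs in parallel
     * (nb_slices is however many independent slices the frame contains): */
    avctx->execute2(avctx, decode_slice_job, NULL, NULL, nb_slices);
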
Restrictions on clients
==============================================

Slice threading -
* The client's draw_horiz_band() must be thread-safe according to the comment
  in avcodec.h.

Frame threading -
* Restrictions with slice threading also apply.
* Custom get_buffer2() and get_format() callbacks must be thread-safe.
* There is one frame of delay added for every thread beyond the first one.
  Clients must be able to handle this; the pkt_dts and pts fields in
  AVFrame will work as usual.
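
As an example of the second point, a custom get_buffer2() callback stays safe under
frame threading as long as any state it shares is protected. The sketch below is a
minimal client-side callback (the buffers_handed_out counter is hypothetical) that
delegates the actual allocation to avcodec_default_get_buffer2():

    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
    static int buffers_handed_out;  /* hypothetical client-side bookkeeping */

    static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        int ret = avcodec_default_get_buffer2(avctx, frame, flags);

        /* Frame threads may call this concurrently, so guard shared state. */
        pthread_mutex_lock(&buf_lock);
        if (ret >= 0)
            buffers_handed_out++;
        pthread_mutex_unlock(&buf_lock);

        return ret;
    }
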
Restrictions on codec implementations
==============================================

Slice threading -
 None except that there must be something worth executing in parallel.

Frame threading -
* Codecs can only accept entire pictures per packet.
* Codecs similar to ffv1, whose streams don't reset across frames,
  will not work because their bitstreams cannot be decoded in parallel.
* The contents of buffers must not be read before ff_thread_await_progress()
  has been called on them. reget_buffer() and buffer age optimizations no longer work.
* The contents of buffers must not be written to after ff_thread_report_progress()
  has been called on them. This includes draw_edges().
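
Concretely, the last two rules are usually enforced with a per-picture progress value
(e.g. the last finished macroblock row). The sketch below assumes a hypothetical
MyRefPicture that embeds the ThreadFrame it was allocated with:

    /* Hypothetical helper: block until the thread decoding the reference
     * picture has reported progress at least up to mb_row.  Only after this
     * may the reference's pixels or motion vectors up to that row be read. */
    static void wait_for_reference(MyRefPicture *ref, int mb_row)
    {
        ff_thread_await_progress(&ref->tf, mb_row, 0);
    }
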
Porting codecs to frame threading
==============================================

Find all context variables that are needed by the next frame. Move all
code changing them, as well as code calling get_buffer(), up to before
the decode process starts. Call ff_thread_finish_setup() afterwards. If
some code can't be moved, have update_thread_context() run it in the next
thread.
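
A rough shape for this, with a hypothetical MyContext carrying the state the next
frame depends on, might be:

    /* Runs in the destination frame thread before it starts decoding: copy
     * whatever state the next frame needs from the previous thread's context. */
    static int my_update_thread_context(AVCodecContext *dst,
                                        const AVCodecContext *src)
    {
        MyContext       *d = dst->priv_data;
        const MyContext *s = src->priv_data;

        d->last_frame_type = s->last_frame_type;   /* hypothetical fields */
        d->qscale          = s->qscale;
        return 0;
    }

    /* In the decode callback, once header parsing, buffer allocation and all
     * other inter-frame state updates are done: */
    ff_thread_finish_setup(avctx);
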
Add AV_CODEC_CAP_FRAME_THREADS to the codec capabilities. There will be very little
speed gain at this point but it should work.

If there are inter-frame dependencies, i.e. the codec calls
ff_thread_report/await_progress(), set FF_CODEC_CAP_ALLOCATE_PROGRESS in
FFCodec.caps_internal and use ff_thread_get_buffer() to allocate frames.
Otherwise decode directly into the user-supplied frames.
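
Putting the capability flags from the previous two paragraphs together, the FFCodec
entry of a frame-threaded decoder might look roughly like this (every my_* name and
AV_CODEC_ID_MYCODEC are hypothetical, and the exact set of fields depends on the
decoder):

    const FFCodec ff_mycodec_decoder = {
        .p.name         = "mycodec",
        .p.type         = AVMEDIA_TYPE_VIDEO,
        .p.id           = AV_CODEC_ID_MYCODEC,
        .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
        .caps_internal  = FF_CODEC_CAP_ALLOCATE_PROGRESS,
        .priv_data_size = sizeof(MyContext),
        .init           = my_decode_init,
        .close          = my_decode_close,
        FF_CODEC_DECODE_CB(my_decode_frame),
        UPDATE_THREAD_CONTEXT(my_update_thread_context),
    };

The frame to decode into would then be allocated along these lines:

    /* Ask the frame-threading layer for a buffer for the current picture;
     * AV_GET_BUFFER_FLAG_REF signals it may later be used as a reference. */
    ret = ff_thread_get_buffer(avctx, frame, AV_GET_BUFFER_FLAG_REF);
    if (ret < 0)
        return ret;
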
Call ff_thread_report_progress() after some part of the current picture has decoded.
A good place to put this is where draw_horiz_band() is called - add a call to it if
there is none, as it is useful in itself and trivial to implement once progress is
being reported this way. Note that draw_edges() needs to be called before reporting progress.

Before accessing a reference frame or its MVs, call ff_thread_await_progress().
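
A per-row pattern for the reporting side, again with hypothetical names, could look
like this; the matching consumer side is the ff_thread_await_progress() call sketched
in the restrictions section above:

    /* Row mb_row of the current picture is fully reconstructed (including any
     * edge extension), so other frame threads may read up to and including it. */
    ff_thread_report_progress(&s->cur_pic.tf, mb_row, 0);

    /* ... and once the entire picture has been decoded: */
    ff_thread_report_progress(&s->cur_pic.tf, INT_MAX, 0);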