Currently the parameter of the skin-tone-enhancement filter is forced
to zero, even though the user could set a different value.
So create a new property named "skin-tone-enhancement-level"
to accept a user-defined parameter value.
At the same time, skin-tone-enhancement is marked as deprecated.
When skin-tone-enhancement-level is set, skin-tone-enhancement
will be ignored.
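An illustrative pipeline sketch (the value 3 is only an example; the
accepted range is driver-dependent):
gst-launch-1.0 videotestsrc \
! vaapipostproc skin-tone-enhancement-level=3 \
! vaapisink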
|
|
Add crop-left, crop-right, crop-top and crop-bottom
properties to vaapipostproc.
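An illustrative pipeline sketch using the new properties (the values
are only examples):
gst-launch-1.0 videotestsrc \
! vaapipostproc crop-left=10 crop-top=20 \
! vaapisink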
|
|
Advertise to upstream that vaapipostproc can handle
crop meta.
When used in conjunction with the videocrop plugin,
videocrop will only do an in-place transform on the
crop meta when vaapipostproc advertises the ability to
handle it. This allows vaapipostproc to apply the crop
meta on the output buffer using vaapi acceleration.
Without this advertisement, the videocrop plugin will
crop the output buffer directly via software methods,
which is not what we desire.
vaapipostproc will not apply the crop meta if downstream
advertises crop meta handling; vaapipostproc will just
forward the crop meta to downstream. If crop meta is
not advertised by downstream, then vaapipostproc will
apply the crop meta.
Examples:
1. vaapipostproc will forward crop meta to vaapisink
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! vaapisink
2. vaapipostproc will do the cropping
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! identity drop-allocation=1 \
! vaapisink
|
|
Now that vaapipostproc can handle video-direction, it
should also handle the image-orientation event from upstream if
the video-direction property is set to auto.
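An illustrative sketch relying on the upstream image-orientation tag
(the input file is a placeholder and must carry orientation metadata):
gst-launch-1.0 filesrc location=rotated.mp4 \
! decodebin \
! vaapipostproc video-direction=auto \
! vaapisink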
|
|
Adds VPP mirroring support to vaapipostproc via the
video-direction property. Valid values are identity,
horiz or vert. Default is identity (no mirror).
Closes #89
v2: Use GstVideoOrientationMethod enum
v3: Don't warn for VA_MIRROR_NONE.
Use GST_TYPE_VIDEO_ORIENTATION_METHOD type.
v4: Query VAAPI caps when setting mirror value
instead of during per-frame processing.
v5: Return TRUE in warning cases when setting mirror value.
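An illustrative horizontal-mirror sketch:
gst-launch-1.0 videotestsrc \
! vaapipostproc video-direction=horiz \
! vaapisink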
|
|
In case the sink caps and src caps are the same, and no filtering parameter is set,
pass-through mode is enabled.
If a new filtering parameter is set during playback, a reconfiguration is triggered
so that the pass-through mode is updated accordingly.
In addition, the filter is updated during reconfiguration, if needed.
https://bugzilla.gnome.org/show_bug.cgi?id=751876
|
|
Add a mutex to postproc to protect concurrent access to data members.
Previously set_caps() could release the allowed_srcpad_caps while
transform_caps was in the middle of using it.
Signed-off-by: Scott D Phillips <scott.d.phillips@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=766940
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=720376
|
|
Added the 'skin-tone-enhancement' property to vaapipostproc.
https://bugzilla.gnome.org/show_bug.cgi?id=744088
|
|
The vaapipostproc has a proxy flag to know if the buffer pool is
already active. But this fails in some situations where the buffer
pool needs to be renegotiated.
This patch removes that flag so the renegotiation is done whenever it
is required.
https://bugzilla.gnome.org/show_bug.cgi?id=745535
|
|
|
|
Add new "scale-method" property to expose the scaling mode to use during
video processing. Note that this is only a hint, and the actual behaviour
may differ from implementation (VA driver) to implementation.
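An illustrative sketch (the "hq" nick for the high-quality mode is an
assumption; check the property's enum values on the installed element):
gst-launch-1.0 videotestsrc \
! vaapipostproc width=1280 height=720 scale-method=hq \
! vaapisink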
|
|
Use pooled GstVaapiVideoMeta information, i.e. always allocate it at
video buffer allocation time. Also optimize the copy of additional
metadata into the resulting video buffer: only copy the video cropping
info and the source surface proxy.
https://bugzilla.gnome.org/show_bug.cgi?id=720311
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
[fixed proxy leak, fixed double free on error, optimized meta copy]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
https://bugzilla.gnome.org/show_bug.cgi?id=720311
[used new infrastructure through base decide_allocation() impl]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Use the new gst_caps_has_vaapi_surface() helper function to detect
whether the sink pad caps contain native VA surfaces, as opposed to
raw video caps.
Also rename is_raw_yuv to get_va_surfaces to make the variable more
explicit, as we just want a way to differentiate raw video caps from
VA surfaces.
|
|
|
|
Add support for hue, saturation, brightness and contrast adjustments.
Also fix the local caps info copy to match the actually expected caps
subtype of interest.
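An illustrative sketch (property names follow this commit; the numeric
values and their ranges are only assumptions):
gst-launch-1.0 videotestsrc \
! vaapipostproc brightness=0.1 contrast=1.2 saturation=1.0 hue=10.0 \
! vaapisink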
https://bugzilla.gnome.org/show_bug.cgi?id=720376
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Drop the execute bit for gstvaapiuploader.c and gstvaapipostproc.[ch]
files.
|
|
Factor out propose_allocation() hooks, creation of video buffer pool
for the sink pad, conversion from raw YUV buffers to VA surface backed
buffers. Update vaapidecode, vaapiencode and vaapipostproc to cope
with the new GstVaapiPluginBase abilities.
|
|
|
|
Introduce a new GstVaapiPluginBase object that will contain all common
data structures and perform all common tasks. First step is to have a
single place to hold VA displays.
While we are at it, also make sure to store and subsequently release
the appropriate debug category for the subclasses.
|
|
Fix advanced deinterlacing modes with VPP to track only up to 2 past
reference buffers. This used to be 3 past reference buffers but this
doesn't fit with the existing decode pipeline that only has 4 extra
scratch surfaces.
Also optimize references tracking to be only enabled when needed, i.e.
when advanced deinterlacing mode is used. This means that we don't
need to track past references for basic bob or weave deinterlacing.
|
|
|
|
Credit original authors on a per-file basis as we cannot expect people
to know all country-specific rules, or bother browsing through the git
history.
|
|
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Add initial support for advanced deinterlacing. The history buffer
size is arbitrarily set to 3 references for now.
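An illustrative sketch forcing an advanced method (the deinterlace-mode
and deinterlace-method property names and their nicks are assumptions):
gst-launch-1.0 videotestsrc \
! vaapipostproc deinterlace-mode=interlaced deinterlace-method=motion-adaptive \
! vaapisink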
|
|
Add initial support for basic scaling with size specified through the
"width" and "height" properties. If either user-provided dimension is
zero and "force-aspect-ratio" is set to true (the default), then the
other dimension is scaled to preserve the aspect ratio.
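For example, an illustrative sketch that sets only the width and lets
force-aspect-ratio (default true) compute the height:
gst-launch-1.0 videotestsrc \
! vaapipostproc width=640 \
! vaapisink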
|
|
If VPP is available, we always try to implicitly convert the source
buffer to the "native" surface format for the underlying accelerator.
This means that no optimization is performed yet to propagate raw YUV
buffers to the downstream element as-is when VPP is available, i.e. it
will always cause a color conversion.
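For instance, in a sketch like the following (GStreamer 1.x raw caps
syntax assumed), the raw I420 buffers are implicitly converted to the
accelerator's native surface format by vaapipostproc:
gst-launch-1.0 videotestsrc \
! video/x-raw,format=I420 \
! vaapipostproc \
! vaapisink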
|
|
Even if we only support deinterlacing for now, use flags to specify
which filters are to be applied to each frame we receive in transform().
This is preparatory work for integrating new filters.
|
|
Allow video processing from raw YUV buffers coming from the sink pad,
while still producing a VA surface for the downstream elements.
|
|
Rewrite the vaapipostproc plug-in element so that it derives from
GstBaseTransform, thus simplifying the caps negotiation process.
|
|
Add basic deinterlacing support, i.e. bob-deinterlacing whereby only
the selected field from the input surface is kept for the target surface.
Setting the gst_vaapi_filter_set_deinterlacing() method argument to
GST_VAAPI_DEINTERLACE_METHOD_NONE disables deinterlacing.
Also move GstVaapiDeinterlaceMethod definition from vaapipostproc plug-in
to libgstvaapi core library.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Move GstVaapiVideoBuffer from core libgstvaapi decoding library to the
actual plugin elements. That's only useful there.
|
|
Forward declaring enums is not allowed by the C standard and aborts
compilation if the header file is included in a C++ project.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Declaring a function as const enables better optimization of calls to
the function.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
|
|
Add vaapipostproc element for video postprocessing. So far, only basic
bob deinterlacing is implemented. Interlaced mode is automatically
detected based on sink caps ("interlaced" field).
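An illustrative playback sketch with the new element (the file path is
a placeholder; interlaced input is assumed so that bob deinterlacing
kicks in):
gst-launch-1.0 filesrc location=interlaced.ts \
! decodebin \
! vaapipostproc \
! vaapisink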
|