author | Tim-Philipp Müller <tim@centricular.com> | 2018-03-13 19:28:33 +0000
committer | Tim-Philipp Müller <tim@centricular.com> | 2018-03-13 19:28:36 +0000
commit | dc4a6d07ad86170f6939705d647e58035a02452e (patch)
tree | f48864b16a896d00631f84c6f6d7efcffb5830b8
parent | 2df75442d0c3296e2b67b1d7aef5a89c95c30c41 (diff)
Release 1.13.91
-rw-r--r-- | ChangeLog | 51
-rw-r--r-- | NEWS | 924
-rw-r--r-- | RELEASE | 2
-rw-r--r-- | configure.ac | 12
-rw-r--r-- | gst-rtsp-server.doap | 10
-rw-r--r-- | meson.build | 2
6 files changed, 958 insertions, 43 deletions
@@ -1,7 +1,58 @@ +=== release 1.13.91 === + +2018-03-13 19:28:33 +0000 Tim-Philipp Müller <tim@centricular.com> + + * NEWS: + * RELEASE: + * configure.ac: + * gst-rtsp-server.doap: + * meson.build: + Release 1.13.91 + +2018-03-13 13:30:41 +0000 Tim-Philipp Müller <tim@centricular.com> + + * gst/rtsp-server/Makefile.am: + * gst/rtsp-server/meson.build: + * gst/rtsp-server/rtsp-address-pool.h: + * gst/rtsp-server/rtsp-auth.h: + * gst/rtsp-server/rtsp-client.h: + * gst/rtsp-server/rtsp-context.h: + * gst/rtsp-server/rtsp-media-factory-uri.h: + * gst/rtsp-server/rtsp-media-factory.h: + * gst/rtsp-server/rtsp-media.h: + * gst/rtsp-server/rtsp-mount-points.h: + * gst/rtsp-server/rtsp-onvif-client.h: + * gst/rtsp-server/rtsp-onvif-media-factory.h: + * gst/rtsp-server/rtsp-onvif-media.h: + * gst/rtsp-server/rtsp-onvif-server.h: + * gst/rtsp-server/rtsp-params.h: + * gst/rtsp-server/rtsp-permissions.h: + * gst/rtsp-server/rtsp-sdp.h: + * gst/rtsp-server/rtsp-server-prelude.h: + * gst/rtsp-server/rtsp-server.h: + * gst/rtsp-server/rtsp-session-media.h: + * gst/rtsp-server/rtsp-session-pool.h: + * gst/rtsp-server/rtsp-session.h: + * gst/rtsp-server/rtsp-stream-transport.h: + * gst/rtsp-server/rtsp-stream.h: + * gst/rtsp-server/rtsp-thread-pool.h: + * gst/rtsp-server/rtsp-token.h: + rtsp-server: GST_EXPORT -> GST_RTSP_SERVER_API + We need different export decorators for the different libs. + For now no actual change though, just rename before the release, + and add prelude headers to define the new decorator to GST_EXPORT. + +2018-03-07 12:20:05 +0200 Sebastian Dröge <sebastian@centricular.com> + + * gst/rtsp-server/rtsp-onvif-media-factory.c: + rtsp-onvif-media-factory: Document that backchannel pipelines must end with async=false sinks + https://bugzilla.gnome.org/show_bug.cgi?id=794143 + === release 1.13.90 === 2018-03-03 22:49:34 +0000 Tim-Philipp Müller <tim@centricular.com> + * ChangeLog: * NEWS: * RELEASE: * configure.ac: @@ -7,13 +7,13 @@ GStreamer 1.14.0 has not been released yet. It is scheduled for release in early March 2018. There are unstable pre-releases available for testing and development -purposes. The latest pre-release is version 1.13.90 (rc1) and was -released on 03 March 2018. +purposes. The latest pre-release is version 1.13.91 (rc2) and was +released on 12 March 2018. See https://gstreamer.freedesktop.org/releases/1.14/ for the latest version of this document. -_Last updated: Saturday 03 March 2018, 16:30 UTC (log)_ +_Last updated: Monday 12 March 2018, 18:00 UTC (log)_ Introduction @@ -28,103 +28,957 @@ other improvements. Highlights -- this section will be completed shortly +- WebRTC support: real-time audio/video streaming to and from web + browsers + +- Experimental support for the next-gen royalty-free AV1 video codec + +- Video4Linux: encoding support, stable element names and faster + device probing + +- Support for the Secure Reliable Transport (SRT) video streaming + protocol + +- RTP Forward Error Correction (FEC) support (ULPFEC) + +- RTSP 2.0 support in rtspsrc and gst-rtsp-server + +- ONVIF audio backchannel support in gst-rtsp-server and rtspsrc + +- playbin3 gapless playback and pre-buffering support + +- tee, our stream splitter/duplication element, now does allocation + query aggregation which is important for efficient data handling and + zero-copy + +- QuickTime muxer has a new prefill recording mode that allows file + import in Adobe Premiere and FinalCut Pro while the file is still + being written. 
+ +- rtpjitterbuffer fast-start mode and timestamp offset adjustment + smoothing + +- souphttpsrc connection sharing, which allows for connection reuse, + cookie sharing, etc. + +- nvdec: new plugin for hardware-accelerated video decoding using the + NVIDIA NVDEC API + +- Adaptive DASH trick play support + +- ipcpipeline: new plugin that allows splitting a pipeline across + multiple processes + +- Major gobject-introspection annotation improvements for large parts + of the library API Major new features and changes -Noteworthy new API +WebRTC support + +There is now basic support for WebRTC in GStreamer in the form of a new +webrtcbin element and a webrtc support library. This allows you to build +applications that set up connections with and stream to and from other +WebRTC peers, whilst leveraging all of the usual GStreamer features such +as hardware-accelerated encoding and decoding, OpenGL integration, +zero-copy and embedded platform support. And it's easy to build and +integrate into your application too! + +WebRTC enables real-time communication of audio, video and data with web +browsers and native apps, and it is supported or about to be supported by +recent versions of all major browsers and operating systems. -- this section will be filled in shortly +GStreamer's new WebRTC implementation uses libnice for Interactive +Connectivity Establishment (ICE) to figure out the best way to +communicate with other peers, punch holes into firewalls, and traverse +NATs. + +The implementation is not complete, but all the basics are there, and +the code sticks fairly close to the PeerConnection API. Where +functionality is missing it should be fairly obvious where it needs to +go. + +For more details, background and example code, check out Nirbheek's blog +post _GStreamer has grown a WebRTC implementation_, as well as Matthew's +_GStreamer WebRTC_ talk from last year's GStreamer Conference in Prague. New Elements -- this section will be filled in shortly +- webrtcbin handles the transport aspects of webrtc connections (see + WebRTC section above for more details) + +- New srtsink and srtsrc elements for the Secure Reliable Transport + (SRT) video streaming protocol, which aims to be easy to use whilst + striking a new balance between reliability and latency for low + latency video streaming use cases. More details about SRT and the + implementation in GStreamer in Olivier's blog post _SRT in + GStreamer_. + +- av1enc and av1dec elements providing experimental support for the + next-generation royalty-free AV1 video codec, alongside Matroska + support for it. + +- hlssink2 is a rewrite of the existing hlssink element, but unlike + its predecessor hlssink2 takes elementary streams as input and + handles the muxing to MPEG-TS internally. It also leverages + splitmuxsink internally to do the splitting. This allows more + control over the chunk splitting and sizing process and relies less + on the co-operation of an upstream muxer. Unlike the old + hlssink, it also works with pre-encoded streams and does not require + close interaction with an upstream encoder element. + +- audiolatency is a new element for measuring audio latency end-to-end + and is useful to measure roundtrip latency including both the + GStreamer-internal latency as well as latency added by external + components or circuits.
+ +- fakevideosink is basically a null sink for video data and very + similar to fakesink, only that it will answer allocation queries and + will advertise support for various video-specific things such as + GstVideoMeta, GstVideoCropMeta and GstVideoOverlayCompositionMeta + like a normal video sink would. This is useful for throughput + testing and testing the zero-copy path when creating a new pipeline. + +- ipcpipeline: new plugin that allows the splitting of a pipeline into + multiple processes. Usually a GStreamer pipeline runs in a single + process and parallelism is achieved by distributing workloads using + multiple threads. This means that all elements in the pipeline have + access to all the other elements' memory space however, including + that of any libraries used. For security reasons one might therefore + want to put sensitive parts of a pipeline such as DRM and decryption + handling into a separate process to isolate it from the rest of the + pipeline. This can now be achieved with the new ipcpipeline plugin. + Check out George's blog post _ipcpipeline: Splitting a GStreamer + pipeline into multiple processes_ or his lightning talk from last + year's GStreamer Conference in Prague for all the gory details. + + +- proxysink and proxysrc are new elements to pass data from one + pipeline to another within the same process, very similar to the + existing inter elements, but not limited to raw audio and video + data. These new proxy elements are very special in how they work + under the hood, which makes them extremely powerful, but also + dangerous if not used with care. The reason for this is that it's + not just data that's passed from sink to src, but these elements + basically establish a two-way wormhole that passes through queries + and events in both directions, which means caps negotiation and + allocation query driven zero-copy can work through this wormhole. + There are scheduling considerations as well: proxysink forwards + everything into the proxysrc pipeline directly from the proxysink + streaming thread. There is a queue element inside proxysrc to + decouple the source thread from the sink thread, but that queue is + not unlimited, so it is entirely possible that the proxysink + pipeline thread gets stuck in the proxysrc pipeline, e.g. when that + pipeline is paused or stops consuming data for some other reason. + This means that one should always shut down the proxysrc + pipeline before shutting down the proxysink pipeline, for example. + Or at least take care when shutting down pipelines. Usually this is + not a problem though, especially not in live pipelines. For more + information see Nirbheek's blog post _Decoupling GStreamer + Pipelines_, and also check out the new ipcpipeline plugin for + sending data from one process to another process (see above). + +- lcms is a new LCMS-based ICC color profile correction element + +- openmptdec is a new OpenMPT-based decoder for module music formats, + such as S3M, MOD, XM, IT. It is built on top of a new + GstNonstreamAudioDecoder base class which aims to unify handling of + files which do not operate on a streaming model. The wildmidi plugin + has also been revived and is also implemented on top of this new + base class. + +- The curl plugin has gained a new curlhttpsrc element, which is + useful for testing HTTP protocol version 2.0 amongst other things. -New element features and additions Noteworthy new API -- this section will be filled in shortly +- GstPromise provides future/promise-like functionality.
This is used + in the GStreamer WebRTC implementation. + + +- GstReferenceTimestampMeta is a new meta that allows you to attach + additional reference timestamps to a buffer. These timestamps don't + have to relate to the pipeline clock in any way. Examples of this + could be an NTP timestamp when the media was captured, a frame + counter on the capture side or the (local) UNIX timestamp when the + media was captured. The decklink elements make use of this. + + +- GstVideoRegionOfInterestMeta: it's now possible to attach generic + free-form element-specific parameters to a region of interest meta, + for example to tell a downstream encoder to use certain codec + parameters for a certain region. + + +- gst_bus_get_pollfd can be used to obtain a file descriptor for the + bus that can be poll()-ed on for new messages. This is useful for + integration with non-GLib event loops. + + +- gst_get_main_executable_path() can be used by wrapper plugins that + need to find things in the directory where the application + executable is located. In the same vein, + GST_PLUGIN_DEPENDENCY_FLAG_PATHS_ARE_RELATIVE_TO_EXE can be used to + signal that plugin dependency paths are relative to the main + executable. + +- pad templates can be told about the GType of the pad subclass via the + newly-added GstPadTemplate API or the + gst_element_class_add_static_pad_template_with_gtype() convenience + function. gst-inspect-1.0 will use this information to print pad + properties. + + +- new convenience functions to iterate over element pads without using + the GstIterator API: gst_element_foreach_pad(), + gst_element_foreach_src_pad(), and gst_element_foreach_sink_pad(). + + +- GstBaseSrc and appsrc have gained support for buffer lists: + GstBaseSrc subclasses can use gst_base_src_submit_buffer_list(), and + applications can use gst_app_src_push_buffer_list() to push a buffer + list into appsrc. + + +- The GstHarness unit test harness has a couple of new convenience + functions to retrieve all pending data in the harness in the form of a + single chunk of memory. + + +- GstAudioStreamAlign is a new helper object for audio elements that + handles discontinuity detection and sample alignment. It will align + samples after the previous buffer's samples, but keep track of the + divergence between buffer timestamps and sample position (jitter). + If it exceeds a configurable threshold the alignment will be reset. + This simply factors out code that was duplicated in a number of + elements into a common helper API. + + +- The GstVideoEncoder base class implements Quality of Service (QoS) + now. This is disabled by default and must be opted in by setting the + "qos" property, which will make the base class gather statistics + about the real-time performance of the pipeline from downstream + elements (usually sinks that sync to the pipeline clock). Subclasses + can then make use of this by checking whether input frames are already + late using gst_video_encoder_get_max_encode_time(). If late, they + can just drop them and skip encoding in the hope that the pipeline + will catch up. + + +- The GstVideoOverlay interface gained a few helper functions for + installing and handling a "render-rectangle" property on elements + that implement this interface, so that this functionality can also + be used from the command line for testing and debugging purposes. + The property wasn't added to the interface itself as that would + require all implementors to provide it which would not be + backwards-compatible.
+ + +- A new base class, GstNonstreamAudioDecoder for non-stream audio + decoders was added to gst-plugins-bad. This base-class is meant to + be used for audio decoders that require the whole stream to be + loaded first before decoding can start. Examples of this are module + formats (MOD/S3M/XM/IT/etc), C64 SID tunes, video console music + files (GYM/VGM/etc), MIDI files and others. The new openmptdec + element is based on this. + + +- Full list of API new in 1.14: +- GStreamer core API new in 1.14 +- GStreamer base library API new in 1.14 +- gst-plugins-base libraries API new in 1.14 +- gst-plugins-bad: no list, mostly GstWebRTC library and new + non-stream audio decoder base class. + +New RTP features and improvements + +- rtpulpfecenc and rtpulpfecdec are new elements that implement + Generic Forward Error Correction (FEC) using Uneven Level Protection + (ULP) as described in RFC 5109. This can be used to protect against + certain types of (non-bursty) packet loss, and important packets + such as those containing codec configuration data or key frames can + be protected with higher redundancy. Equally, packets that are not + particularly important can be given low priority or not be protected + at all. If packets are lost, the receiver can then hopefully restore + the lost packet(s) from the surrounding packets which were received. + This is an alternative to, or rather complementary to, dealing with + packet loss using _retransmission (rtx)_. GStreamer has had + retransmission support for a long time, but Forward Error Correction + allows for different trade-offs: The advantage of Forward Error + Correction is that it doesn't add latency, whereas retransmission + requires at least one more roundtrip to request and hopefully + receive lost packets; Forward Error Correction increases the + required bandwidth however, even in situations where there is no + packet loss at all, so one will typically want to fine-tune the + overhead and mechanisms used based on the characteristics of the + link at the time. + +- New _Redundant Audio Data (RED)_ encoders and decoders for RTP as + per RFC 2198 are also provided (rtpredenc and rtpreddec), mostly for + chrome webrtc compatibility, as chrome will wrap ULPFEC-protected + streams in RED packets, and such streams need to be wrapped and + unwrapped in order to use ULPFEC with chrome. + + +- a few new buffer flags for FEC support: + GST_BUFFER_FLAG_NON_DROPPABLE can be used to mark important buffers, + e.g. to flag RTP packets carrying keyframes or codec setup data for + RTP Forward Error Correction purposes, or to prevent still video + frames from being dropped by elements due to QoS. There already is a + GST_BUFFER_FLAG_DROPPABLE. GST_RTP_BUFFER_FLAG_REDUNDANT is used to + signal internally that a packet represents a redundant RTP packet + and used in rtpstorage to hold back the packet and use it only for + recovery from packet loss. Further work is still needed in + payloaders to make use of these. + +- rtpbin now has an option for increasing timestamp offsets gradually: + Instant large changes to the internal ts_offset may cause timestamps + to move backwards and also cause visible glitches in media playback. + The new "max-ts-offset-adjustment" and "max-ts-offset" properties + let the application control the rate to apply changes to ts_offset. + There have also been some EOS/BYE handling improvements in rtpbin. 
+ +- rtpjitterbuffer has a new fast start mode: in many scenarios the + jitter buffer will have to wait for the full configured latency + before it can start outputting packets. The reason for that is that + it often can't know what the sequence number of the first expected + RTP packet is, so it can't know whether a packet earlier than the + earliest packet received will still arrive in future. This behaviour + can now be bypassed by setting the "faststart-min-packets" property + to the number of consecutive packets needed to start, and the jitter + buffer will start outputting packets as soon as it has N consecutive + packets queued internally. This is particularly useful to get a + first video frame decoded and rendered as quickly as possible. + +- rtpL8pay and rtpL8depay provide RTP payloading and depayloading for + 8-bit raw audio. + +New element features + +- playbin3 has gained support for gapless playback via the + "about-to-finish" signal where users can set the uri for the next + item to play. For non-live streams this will be emitted as soon as + the first uri has finished downloading, so with sufficiently large + buffers it is now possible to pre-buffer the next item well ahead of + time (unlike playbin where there would not be a lot of time between + "about-to-finish" emission and the end of the stream). If the stream + format of the next stream is the same as that of the previous + stream, the data will be concatenated via the concat element. + Whether this will result in true gaplessness depends on the + container format and codecs used; there might still be codec-related + gaps between streams with some codecs. + +- tee now does allocation query aggregation, which is important for + zero-copy and efficient data handling, especially for video. Those + who want to drop allocation queries on purpose can use the identity + element's new "drop-allocation" property for that instead. + +- audioconvert now has a "mix-matrix" property, which obsoletes the + audiomixmatrix element. There's also mix matrix support in the audio + conversion and channel mixing API. + +- x264enc: new "insert-vui" property to disable VUI (Video Usability + Information) parameter insertion into the stream, which allows + creation of streams that are compatible with certain legacy hardware + decoders that will refuse to decode in certain combinations of + resolution and VUI parameters; the max. allowed number of B-frames + was also increased from 4 to 16. + +- dvdlpcmdec has gained support for Blu-Ray audio LPCM. + +- appsrc has gained support for buffer lists (see above) and also seen + some other performance improvements. + +- flvmux has been ported to the GstAggregator base class which means + it can work in defined-latency mode with live input sources and + continue streaming if one of the inputs stops producing data. + +- jpegenc has gained a "snapshot" property just like pngenc to make it + easier to just output a single encoded frame. + +- jpegdec will now handle interlaced MJPEG streams properly and also + handle frames without an End of Image marker better. + +- v4l2: There are now video encoders for VP8, VP9, MPEG4, and H263. + The v4l2 video decoder handles dynamic resolution changes, and the + video4linux device provider now does much faster device probing. The + plugin also no longer uses the libv4l2 library by default, as it has + prevented a lot of interesting use cases like CREATE_BUFS, DMABuf, + usage of TRY_FMT.
As the libv4l2 library is totally inactive and not + really maintained, we decided to disable it. This might affect a + small number of cheap/old webcams with custom vendor formats for + which we do not provide conversion in GStreamer. It is possible to + re-enable support for libv4l2 at run-time however, by setting the + environment variable GST_V4L2_USE_LIBV4L2=1. + +- rtspsrc now has support for RTSP protocol version 2.0 as well as + ONVIF audio backchannels (see below for more details). It also + sports a new "accept-certificate" signal for "manually" checking a + TLS certificate for validity. It now also prints RTSP/SDP messages + to the GStreamer debug log instead of stdout. + +- shout2send now uses non-blocking I/O and has a configurable network + operations timeout. + +- splitmuxsink has gained a "split-now" action signal and new + "alignment-threshold" and "use-robust-muxing" properties. If robust + muxing is enabled, it will check and set the muxer's reserved space + properties if present. This is primarily for use with mp4mux's + robust muxing mode. + +- qtmux has a new _prefill recording mode_ which sets up a moov header + with the correct sample positions beforehand, which then allows + software like Adobe Premiere and FinalCut Pro to import the files + while they are still being written to. This only works with constant + framerate I-frame only streams, and for now only support for ProRes + video and raw audio is implemented but adding new codecs is just a + matter of defining appropriate maximum frame sizes. + +- qtmux also supports writing of svmi atoms with stereoscopic video + information now. Trak timescales can be configured on a per-stream + basis using the "trak-timescale" property on the sink pads. Various + new formats can be muxed: MPEG layer 1 and 2, AC3 and Opus, as well + as PNG and VP9. + +- souphttpsrc now does connection sharing by default, shares its + SoupSession with other elements in the same pipeline via a + GstContext if possible (session-wide settings are all the defaults). + This allows for connection reuse, cookie sharing, etc. Applications + can also force a context to be used. In other news, HTTP headers + received from the server are posted as element messages on the bus + now for easier diagnostics, and it's also possible now to use other + types of proxy servers such as SOCKS4 or SOCKS5 proxies, support for + which is implemented directly in gio. Before only HTTP proxies were + allowed. + +- qtmux, mp4mux and matroskamux will now refuse caps changes of input + streams at runtime. This isn't really supported with these + containers (or would have to be implemented differently with a + considerable effort) and doesn't produce valid and spec-compliant + files that will play everywhere. So if you can't guarantee that the + input caps won't change, use a container format that does support on + the fly caps changes for a stream such as MPEG-TS or use + splitmuxsink which can start a new file when the caps change. What + would happen before is that e.g. rtph264depay or rtph265depay would + simply send new SPS/PPS inband even for AVC format, which would then + get muxed into the container as if nothing changed. Some decoders + will handle this just fine, but that's often more by luck than by + design. In any case, it's not right, so we disallow it now. + +- matroskamux has Table of Contents (TOC) support now (chapters etc.) + and matroskademux TOC support has been improved.
matroskademux has + also seen seeking improvements searching for the right cluster and + position. + +- videocrop now uses GstVideoCropMeta if downstream supports it, which + means cropping can be handled more efficiently without any copying. + +- compositor now has support for _crossfade blending_, which can be + used via the new "crossfade-ratio" property on the sink pads. + +- The avwait element has a new "end-timecode" property and posts + "avwait-status" element messages now whenever avwait starts or stops + passing through data (e.g. because target-timecode and end-timecode + respectively have been reached). + + +- h264parse and h265parse will try harder to make upstream output the + same caps as downstream requires or prefers, thus avoiding + unnecessary conversion. The parsers also expose chroma format and + bit depth in the caps now. + +- The dtls elements no longer rely on or require the application to + run a GLib main loop that iterates the default main context + (GStreamer plugins should never rely on the application running a + GLib main loop). + +- openh264enc now allows changing the encoding bitrate dynamically at + runtime + +- nvdec is a new plugin for hardware-accelerated video decoding using + the NVIDIA NVDEC API (which replaces the old VDPAU API which is no + longer supported by NVIDIA) + +- The NVIDIA NVENC hardware-accelerated video encoders now support + dynamic bitrate and preset reconfiguration and support the I420 + 4:2:0 video format. It's also possible to configure the gop size via + the new "gop-size" property. + +- The MPEG-TS muxer and demuxer (tsmux, tsdemux) now have support for + JPEG2000 + +- openjpegdec and jpeg2000parse support 2-component images now (gray + with alpha), and jpeg2000parse has gained limited support for + conversion between JPEG2000 stream-formats (JP2, J2C, JPC) and also + extracts more details such as colorimetry, interlace-mode, + field-order, multiview-mode and chroma siting. + +- The decklink plugin for Blackmagic capture and playback cards has + seen numerous improvements: + +- decklinkaudiosrc and decklinkvideosrc now put hardware reference + timestamps on buffers in the form of GstReferenceTimestampMetas. + This can be useful to know on multi-channel cards which frames from + different channels were captured at the same time. + +- decklinkvideosink has gained support for Decklink hardware keying + with two new properties ("keyer-mode" and "keyer-level") to control + the built-in hardware keyer of Decklink cards. + +- decklinkaudiosink has been re-implemented around GstBaseSink instead + of the GstAudioBaseSink base class, since the Decklink APIs don't + fit very well with the GstAudioBaseSink APIs, which used to cause + various problems due to inaccuracies in the clock calculations. + Problems were audio drop-outs and A/V sync going wrong after + pausing/seeking. + +- support for more than 16 devices, without any artificial limit + +- work continued on the msdk plugin for Intel's Media SDK which + enables hardware-accelerated video encoding and decoding on Intel + graphics hardware on Windows or Linux. More tuning options were + added, and more pixel formats and video codecs are supported now. + The encoder now also handles force-key-unit events and can insert + frame-packing SEIs for side-by-side and top-bottom stereoscopic 3D + video.
+ +- dashdemux can now do adaptive trick play of certain types of DASH + streams, meaning it can do fast-forward/fast-rewind of normal (non-I + frame only) streams even at high speeds without saturating network + bandwidth or exceeding decoder capabilities. It will keep statistics + and skip keyframes or fragments as needed. See Sebastian's blog post + _DASH trick-mode playback in GStreamer_ for more details. It also + supports webvtt subtitle streams now and has seen improvements when + seeking in live streams. + + +- kmssink has seen lots of fixes and improvements in this cycle, + including: + +- Raspberry Pi (vc4) and Xilinx DRM driver support + +- new "render-rectangle" property that can be used from the command + line as well as "display-width" and "display-height", and + "can-scale" properties + +- GstVideoCropMeta support Plugin and library moves -- this section will be filled in shortly +MPEG-1 audio (mp1, mp2, mp3) decoders and encoders moved to -good + +Following the expiration of the last remaining mp3 patents in most +jurisdictions, and the termination of the mp3 licensing program, as well +as the decision by certain distros to officially start shipping full mp3 +decoding and encoding support, these plugins should now no longer be +problematic for most distributors and have therefore been moved from +-ugly and -bad to gst-plugins-good. Distributors can still disable these +plugins if desired. + +In particular these are: + +- mpg123audiodec: an mp1/mp2/mp3 audio decoder using libmpg123 +- lamemp3enc: an mp3 encoder using LAME +- twolamemp2enc: an mp2 encoder using TwoLAME + +GstAggregator moved from -bad to core + +GstAggregator has been moved from gst-plugins-bad to the base library in +GStreamer and is now stable API. + +GstAggregator is a new base class for mixers and muxers that have to +handle multiple input pads and aggregate streams into one output stream. +It improves upon the existing GstCollectPads API in that it is a proper +base class which was also designed with live streaming in mind. +GstAggregator subclasses will operate in a mode with defined latency if +any of the inputs are live streams. This ensures that the pipeline won't +stall if any of the inputs stop producing data, and that the configured +maximum latency is never exceeded. + +GstAudioAggregator, audiomixer and audiointerleave moved from -bad to -base + +GstAudioAggregator is a new base class for raw audio mixers and muxers +and is based on GstAggregator (see above). It provides defined-latency +mixing of raw audio inputs and ensures that the pipeline won't stall +even if one of the input streams stops producing data. + +As part of the move to stabilise the API there were some last-minute API +changes and clean-ups, but those should mostly affect internal elements. + +It is used by the audiomixer element, which is a replacement for +'adder', which did not handle live inputs very well and did not align +input streams according to running time. audiomixer should behave much +better in that respect and generally behave as one would expect in +most scenarios. + +Similarly, audiointerleave replaces the 'interleave' element which did +not handle live inputs or non-aligned inputs very robustly. + +GstAudioAggregator and its subclasses have gained support for input +format conversion, which does not include sample rate conversion though +as that would add additional latency. Furthermore, GAP events are now +handled correctly.
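As a rough illustration of the defined-latency mixing described above, here is a
minimal sketch (not part of the original release notes; the element, property and
API names used are the standard upstream GStreamer ones, and error handling is
omitted) that mixes a live and a non-live test source with audiomixer:

    /* Minimal sketch: mix a live and a non-live test source with audiomixer.
     * audiomixer aligns its inputs by running time and keeps producing
     * output in defined-latency mode even if one input stalls. */
    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GstBus *bus;
      GstMessage *msg;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "audiotestsrc is-live=true wave=sine ! audiomixer name=mix ! "
          "audioconvert ! autoaudiosink "
          "audiotestsrc wave=ticks ! mix.", NULL);

      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      /* Run until an error or end-of-stream message arrives on the bus. */
      bus = gst_element_get_bus (pipeline);
      msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

      if (msg != NULL)
        gst_message_unref (msg);
      gst_object_unref (bus);
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }

The same pipeline description can also be tried directly with gst-launch-1.0.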
+ +We hope to move the video equivalents (GstVideoAggregator and +compositor) to -base in the next cycle, i.e. for 1.16. + +GStreamer OpenGL integration library and plugin moved from -bad to -base + +The GStreamer OpenGL integration library and opengl plugin have moved +from gst-plugins-bad to -base and are now part of the stable API canon. +Not all OpenGL elements have been moved; a few had to be left behind in +gst-plugins-bad in the new openglmixers plugin, because they depend on +the GstVideoAggregator base class which we were not able to move in this +cycle. We hope to reunite these elements with the rest of their family +for 1.16 though. + +This is quite a milestone, thanks to everyone who worked to make this +happen! + +Qt QML and GTK plugins moved from -bad to -good + +The Qt QML-based qmlgl plugin has moved to -good and provides a +qmlglsink video sink element as well as a qmlglsrc element. qmlglsink +renders video into a QQuickItem, and qmlglsrc captures a window from a +QML view and feeds it as video into a pipeline for further processing. +Both elements leverage GStreamer's OpenGL integration. In addition to +the move to -good the following features were added: + +- A proxy object is now used for thread-safe access to the QML widget + which prevents crashes in corner case scenarios: QML can destroy the + video widget at any time, so without this we might be left with a + dangling pointer. + +- EGL is now supported with the X11 backend, which works e.g. on + Freescale imx6 + +The GTK+ plugin has also moved from -bad to -good. It includes gtksink +and gtkglsink which both render video into a GtkWidget. gtksink uses +Cairo for rendering the video, which will work everywhere in all +scenarios but involves an extra memory copy, whereas gtkglsink fully +leverages GStreamer's OpenGL integration, but might not work properly in +all scenarios, e.g. where the OpenGL driver does not properly support +multiple sharing contexts in different threads; on Linux Nouveau is +known to be broken in this respect, whilst NVIDIA's proprietary drivers +and most other drivers generally work fine, and the experience with +Intel's driver seems to be fixed; some proprietary embedded Linux +drivers don't work; macOS works). + +GstPhysMemoryAllocator interface moved from -bad to -base + +GstPhysMemoryAllocator is a marker interface for allocators with +physical address backed memory. Plugin removals -- this section will be filled in shortly - - -Miscellaneous API additions +- the sunaudio plugin was removed, since it couldn't ever have been + built or used with GStreamer 1.0, but no one even noticed in all + these years. -- this section will be filled in shortly +- the schroedinger-based Dirac encoder/decoder plugin has been + removed, as there is no longer any upstream or anyone else + maintaining it. Seeing that it's quite a fringe codec it seemed best + to simply remove it. -GstPlayer +API removals -- this section will be filled in shortly +- some MPEG video parser API in the API unstable codecutils library in + gst-plugins-bad was removed after having been deprecated for 5 + years. 
Miscellaneous changes -- this section will be filled in shortly +- The video support library has gained support for a few new pixel + formats: +- NV16_10LE32: 10-bit variant of NV16, packed into 32bit words (plus 2 + bits padding) +- NV12_10LE32: 10-bit variant of NV12, packed into 32bit words (plus 2 + bits padding) +- GRAY10_LE32: 10-bit grayscale, packed in 32bit words (plus 2 bits + padding) + +- decodebin, playbin and GstDiscoverer have seen stability + improvements in corner cases such as shutdown while still starting + up or shutdown in error cases (hat tip to the oss-fuzz project). + +- floating reference handling was inconsistent and has been cleaned up + across the board, including annotations. This solves various + long-standing memory leaks in language bindings, which e.g. often + caused elements and pads to be leaked. + +- major gobject-introspection annotation improvements for large parts + of the library API, including nullability of return types and + function parameters, correct types (e.g. strings vs. filenames), + ownership transfer, array length parameters, etc. This allows + bigger parts of the GStreamer API to be used safely from dynamic + language bindings (e.g. Python, Javascript) and allows static + bindings (e.g. C#, Rust, Vala) to autogenerate more API bindings + without manual intervention. OpenGL integration -- this section will be filled in shortly +- The GStreamer OpenGL integration library has moved to + gst-plugins-base and is now part of our stable API. + +- new MESA3D GBM BACKEND. On devices with working libdrm support, it + is possible to use Mesa3D's GBM library to set up an EGL context + directly on top of KMS. This makes it possible to use the GStreamer + OpenGL elements without a windowing system if a libdrm- and + Mesa3D-supported GPU is present. + +- Prefer the Wayland display over X11: as most Wayland compositors support + XWayland, the X11 backend would previously get selected even when + running on Wayland. + +- gldownload can export dmabufs now, and glupload will advertise + dmabuf as a caps feature. Tracing framework and debugging improvements -- this section will be filled in shortly +- NEW MEMORY RINGBUFFER BASED DEBUG LOGGER, useful for long-running + applications or to retrieve diagnostics when encountering an error. + The GStreamer debug logging system provides in-depth debug logging + about what is going on inside a pipeline. When enabled, debug logs + are usually written into a file, printed to the terminal, or handed + off to a log handler installed by the application. However, at + higher debug levels the volume of debug output quickly becomes + unmanageable, which poses a problem in disk-space or bandwidth + restricted environments or with long-running pipelines where a + problem might only manifest itself after multiple days. In those + situations, developers are usually only interested in the most + recent debug log output. The new in-memory ringbuffer logger makes + this easy: just install it with gst_debug_add_ring_buffer_logger() + and retrieve logs with gst_debug_ring_buffer_logger_get_logs() when + needed. It is possible to limit the memory usage per thread and set + a timeout to determine how long messages are kept around. It was + always possible to implement this in the application with a custom + log handler of course; this just provides the functionality as part + of GStreamer. + + +- fakevideosink is a null sink for video data that advertises + video-specific metas and behaves like a video sink. See above for + more details.
+ +- gst_util_dump_buffer() prints the content of a buffer to stdout. + +- gst_pad_link_get_name() and gst_state_change_get_name() print pad + link return values and state change transition values as strings. + +- The LATENCY TRACER has seen a few improvements: trace records now + contain timestamps which is useful to plot things over time, and + downstream synchronisation time is now excluded from the measured + values. + +- MiniObject refcount tracing and logging was not entirely + thread-safe; there were duplicates or missing entries at times. This + has now been made reliable. + +- The netsim element, which can be used to simulate network jitter, + packet reordering and packet loss, received new features and + improvements: it can now also simulate network congestion using a + token bucket algorithm. This can be enabled via the "max-kbps" + property. Packet reordering can be disabled now via the + "allow-reordering" property: Reordering of packets is not very + common in networks, and the delay functions will always introduce + reordering if delay > packet-spacing, so by setting + "allow-reordering" to FALSE you guarantee that the packets are in + order, while at the same time introducing delay/jitter to them. By + using the new "delay-distribution" property the user can control how + the delay applied to delayed packets is distributed: This is either + the uniform distribution (as before) or the normal distribution; in + addition there is also the gamma distribution which simulates the + delay on wifi networks better. Tools -- this section will be filled in shortly +- gst-inspect-1.0 now prints pad properties for elements that have pad + subclasses with special properties, such as compositor or + audiomixer. This only works for elements that use the newly-added + GstPadTemplate API or the + gst_element_class_add_static_pad_template_with_gtype() convenience + function to tell GStreamer about the special pad subclass. + +- gst-launch-1.0 now generates a GStreamer pipeline diagram (.dot + file) whenever SIGHUP is sent to it on Linux/*nix systems. + +- gst-discoverer-1.0 can now analyse live streams such as rtsp:// URIs GStreamer RTSP server -- this section will be filled in shortly +- Initial support for RTSP protocol version + 2.0 was added, which is to the best of our + knowledge the first RTSP 2.0 implementation ever! + +- ONVIF audio backchannel support. This is an extension specified by + ONVIF that allows RTSP clients (e.g. a control room operator) to + send audio back to the RTSP server (e.g. an IP camera). + Theoretically this could have been done also by using the RECORD + method of the RTSP protocol, but ONVIF chose not to do that, so the + backchannel is set up alongside the other streams. Format + negotiation needs to be done out of band, if needed. Use the new + ONVIF-specific subclasses GstRTSPOnvifServer and + GstRTSPOnvifMediaFactory to enable this functionality. + + +- The internal server streaming pipeline is now dynamically + reconfigured on PLAY based on the transports needed. This means that + the server no longer adds the pipeline plumbing for all possible + transports from the start, but only as needed. This + improves performance and reduces the memory footprint. + +- rtspclientsink has gained an "accept-certificate" signal for + manually checking a TLS certificate for validity.
+ +- Fix keep-alive/timeout issue for certain clients using TCP + interleave as transport who don't do keep-alive via some other + method such as periodic RTSP OPTION requests. We now put netaddress + metas on the packets from the TCP interleaved stream, so we can map + RTCP packets to the right stream in the server and can handle them + properly. + +- Language bindings improvements: in general there were quite a few + improvements in the gobject-introspection annotations, but we also + extended the permissions API which was not usable from bindings + before. + +- Fix corner case issue where the wrong mount point was found when + there were multiple mount points with a common prefix. GStreamer VAAPI -- this section will be filled in shortly +- this section will be filled in shortly {FIXME!} GStreamer Editing Services and NLE -- this section will be filled in shortly +- this section will be filled in shortly {FIXME!} GStreamer validate -- this section will be filled in shortly +- this section will be filled in shortly {FIXME!} GStreamer Python Bindings -- this section will be filled in shortly +- this section will be filled in shortly {FIXME!} Build and Dependencies -- this section will be filled in shortly +- the new WebRTC support in gst-plugins-bad depends on the GStreamer + elements that ship as part of libnice, and libnice version 0.1.14 is + required. The dtls and srtp plugins are also required. + +- gst-plugins-bad no longer depends on the libschroedinger Dirac codec + library. + +- The srtp plugin can now also be built against libsrtp2. + +- some plugins and libraries have moved between modules, see the + _Plugin and_ _library moves_ section above, and their respective + dependencies have moved with them of course, e.g. the GStreamer + OpenGL integration support library and plugin is now in + gst-plugins-base, and mpg123, LAME and twoLAME based audio decoder + and encoder plugins are now in gst-plugins-good. + +- Unify static and dynamic plugin interface and remove the plugin-specific + static build option: Static and dynamic plugins now have the same + interface. The standard --enable-static/--enable-shared toggle is + sufficient. This allows building static and shared plugins from the + same object files, instead of having to build everything twice. + +- The default plugin entry point has changed. This will only affect + plugins that are recompiled against new GStreamer headers. Binary + plugins using the old entry point will continue to work. However, + plugins that are recompiled must have matching plugin names in + GST_PLUGIN_DEFINE and filenames, as the plugin entry point for + shared plugins is now deduced from the plugin filename. This means + you can no longer have a plugin called foo living in a file called + libfoobar.so or such; the plugin filename needs to match. This might + cause problems with some external third party plugin modules when + they get rebuilt against GStreamer 1.14. + + +Note to packagers and distributors + +A number of libraries, APIs and plugins moved between modules and/or +libraries in different modules between version 1.12.x and 1.14.x, see +the _Plugin and_ _library moves_ section above. Some APIs have seen +minor ABI changes in the course of moving them into the stable APIs +section.
+ +This means that you should try to ensure that all major GStreamer +modules are synced to the same major version (1.12 or 1.13/1.14) and can +only be upgraded in lockstep, so that your users never end up with a mix +of major versions on their system at the same time, as this may cause +breakages. + +Also, plugins compiled against >= 1.14 headers will not load with +GStreamer <= 1.12 owing to a new plugin entry point (but plugin binaries +built against older GStreamer versions will continue to load with newer +versions of GStreamer of course). + +There is also a small structure size related ABI breakage introduced in +the gst-plugins-bad codecparsers library between version 1.13.90 and +1.13.91. This should "only" affect gstreamer-vaapi, so anyone who ships +the release candidates is advised to upgrade those two modules at the +same time. Platform-specific improvements Android -- this section will be filled in shortly +- ahcsrc (Android camera source) does autofocus now macOS and iOS -- this section will be filled in shortly +- this section will be filled in shortly {FIXME!} Windows -- this section will be filled in shortly +- The GStreamer wasapi plugin was rewritten and should not only be + usable now, but in top shape and suitable for low-latency use cases. + The Windows Audio Session API (WASAPI) is Microsoft's most modern + method for talking with audio devices, and now that the wasapi + plugin is up to scratch it is preferred over the directsound plugin. + The ranks of the wasapisink and wasapisrc elements have been updated + to reflect this. Further improvements include: + +- support for more than 2 channels + +- a new "low-latency" property to enable low-latency operation (which + should always be safe to enable) + +- support for the AudioClient3 API which is only available on Windows + 10: in wasapisink this will be used automatically if available; in + wasapisrc it will have to be enabled explicitly via the + "use-audioclient3" property, as capturing audio with low latency and + without glitches seems to require setting the realtime priority of + the entire pipeline to "critical", which cannot be done from inside + the element, but has to be done in the application. + +- set realtime thread priority to avoid glitches + +- allow opening devices in exclusive mode, which provides much lower + latency compared to shared mode where WASAPI's engine period is + 10ms. This can be activated via the "exclusive" property. + +- There are now GstDeviceProvider implementations for the wasapi and + directsound plugins, so it's now possible to discover both audio + sources and audio sinks on Windows via the GstDeviceMonitor API + +- debug log timestamps are now higher granularity owing to + g_get_monotonic_time() now being used as fallback in + gst_utils_get_timestamp(). Before that, there would sometimes be + 10-20 lines of debug log output sporting the same timestamp. Contributors @@ -184,9 +1038,7 @@ suggestions or helped testing. Bugs fixed in 1.14 -- this section will be filled in shortly - -More than 704 bugs have been fixed during the development of 1.14. +More than 800 bugs have been fixed during the development of 1.14. This list does not include issues that have been cherry-picked into the stable 1.12 branch and fixed there as well, all fixes that ended up in @@ -211,7 +1063,8 @@ the git 1.14 branch, which is a stable branch. 
Known Issues -- The webrtcdsp element is currently not shipped as part of the +- The webrtcdsp element (which is unrelated to the newly-landed + GStreamer webrtc support) is currently not shipped as part of the Windows binary packages due to a build system issue. @@ -230,6 +1083,7 @@ several 1.15 pre-releases and the new 1.16 stable release in September. ------------------------------------------------------------------------ -_These release notes have been prepared by Tim-Philipp Müller._ +_These release notes have been prepared by Tim-Philipp Müller with_ +_contributions from Sebastian Dröge._ _License: CC BY-SA 4.0_ @@ -1,4 +1,4 @@ -This is GStreamer gst-rtsp-server 1.13.90. +This is GStreamer gst-rtsp-server 1.13.91. The GStreamer team is pleased to announce the first release candidate for the upcoming stable 1.14 release series. diff --git a/configure.ac b/configure.ac index 0fc1f01..4ef18c1 100644 --- a/configure.ac +++ b/configure.ac @@ -2,7 +2,7 @@ AC_PREREQ(2.69) dnl initialize autoconf dnl when going to/from release please set the nano (fourth number) right ! dnl releases only do Wall, cvs and prerelease does Werror too -AC_INIT([GStreamer RTSP Server Library], [1.13.90], +AC_INIT([GStreamer RTSP Server Library], [1.13.91], [http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer], [gst-rtsp-server]) AG_GST_INIT @@ -53,13 +53,13 @@ dnl 1.2.5 => 205 dnl 1.10.9 (who knows) => 1009 dnl dnl sets GST_LT_LDFLAGS -AS_LIBTOOL(GST, 1390, 0, 1390) +AS_LIBTOOL(GST, 1391, 0, 1391) dnl *** required versions of GStreamer stuff *** -GST_REQ=1.13.90 -GSTPB_REQ=1.13.90 -GSTPG_REQ=1.13.90 -GSTPD_REQ=1.13.90 +GST_REQ=1.13.91 +GSTPB_REQ=1.13.91 +GSTPG_REQ=1.13.91 +GSTPD_REQ=1.13.91 dnl *** autotools stuff **** diff --git a/gst-rtsp-server.doap b/gst-rtsp-server.doap index efb94e6..3cdb864 100644 --- a/gst-rtsp-server.doap +++ b/gst-rtsp-server.doap @@ -32,6 +32,16 @@ RTSP server library based on GStreamer <release> <Version> + <revision>1.13.91</revision> + <branch>master</branch> + <name></name> + <created>2018-03-13</created> + <file-release rdf:resource="https://gstreamer.freedesktop.org/src/gst-rtsp-server/gst-rtsp-server-1.13.91.tar.xz" /> + </Version> + </release> + + <release> + <Version> <revision>1.13.90</revision> <branch>master</branch> <name></name> diff --git a/meson.build b/meson.build index 200c7a8..9643163 100644 --- a/meson.build +++ b/meson.build @@ -1,5 +1,5 @@ project('gst-rtsp-server', 'c', - version : '1.13.90', + version : '1.13.91', meson_version : '>= 0.33.0', default_options : ['warning_level=1', 'buildtype=debugoptimized']) |