Bufferpool
----------
This document details a possible design for how buffers can be allocated
and managed in pools.
Bufferpools should increase performance by reducing allocation overhead and
by making zero-copy memory transfer easier to implement.
Current Situation
-----------------
- elements can choose to implement a pool of buffers. These pools can hold
buffers for both source and sink pads.
- elements can provide buffers to upstream elements when the upstream element
requests a buffer with gst_pad_alloc_buffer() (see the sketch after this
list).
- The buffer pool can preallocate a certain number of buffers to avoid
runtime allocation. pad_alloc_buffer() is allowed to block the upstream
element until buffers are recycled in the pool.
- the pad_alloc_buffer function call can be passed downstream to the sink
that will actually perform the allocation. A fallback option exists to use
a default memory bufferpool when there is no alloc_buffer function installed
on a pad.
- Upstream renegotiation is performed by making the pad_alloc_buffer function
return a buffer with new caps.
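For reference, an upstream element in this current model obtains an output
buffer roughly as follows (a sketch only: error handling is omitted and the
srcpad, size and caps variables depend on the element):
  GstBuffer *outbuf = NULL;
  GstFlowReturn ret;

  /* ask downstream (or the default pool) for a suitably sized buffer;
   * this call may block until a buffer is recycled downstream */
  ret = gst_pad_alloc_buffer (srcpad, GST_BUFFER_OFFSET_NONE, size, caps,
      &outbuf);
  if (ret != GST_FLOW_OK)
    return ret;

  /* if the returned buffer carries new caps, upstream renegotiation is
   * needed before filling and pushing the buffer */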
Problems
--------
- There is currently no helper base class to implement efficient buffer pools
meaning that each element has to implement its own version.
- There is no negotiation between elements about their buffer requirements.
Upstream elements that decide to use pad_alloc_buffer() can find that the
buffer they received is not appropriate at all. The most common problem
is that the buffers don't have the right alignment or have insufficient
padding.
- There is no negotiation of minimum and maximum numbers of preallocated
buffers. To avoid deadlocks, this means that buffer pool implementations
should be able to allocate an unlimited number of buffers and are never
allowed to block in pad_alloc_buffer().
Requirements
------------
- maintain and reuse a list of buffers in a reusable base GstBufferPool
object
- negotiate allocation configuration between source and sink pad.
- support a minimum and maximum number of buffers, with the option of
preallocating buffers.
- alignment and padding support
- arbitrary extra options
- integrate with dynamic caps renegotiation
- dynamically change bufferpool configuration based on pipeline changes.
- allow the application to control buffer allocation
GstBufferPool
-------------
The bufferpool object manages a list of buffers with the same properties such
as size, padding and alignment.
The bufferpool has two states: active and inactive. In the inactive
state, the bufferpool can be configured with the required allocation
preferences. In the active state, buffers can be retrieved from and
returned to the pool.
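A minimal sketch of this lifecycle, assuming the configuration carries the
size, minimum/maximum number of buffers, alignment and padding described
under Requirements:
  GstBufferPoolConfig config;

  /* the pool must be inactive while it is being (re)configured */
  gst_buffer_pool_set_active (pool, FALSE);

  gst_buffer_pool_get_config (pool, &config);
  /* ... adjust size, min/max buffers, alignment and padding here ... */
  gst_buffer_pool_set_config (pool, &config);

  /* activating the pool preallocates the minimum number of buffers */
  gst_buffer_pool_set_active (pool, TRUE);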
The default implementation of the bufferpool is able to allocate buffers
from main memory with arbitrary alignment and padding/prefix.
Custom implementations of the bufferpool can override the allocation and
free algorithms of the buffers from the pool. This should allow for
different allocation strategies such as using shared memory or hardware
mapped memory.
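For illustration, such a custom pool could be a GstBufferPool subclass that
plugs in its own allocation and free routines; the vfunc names below are
only assumptions made for this sketch:
  static void
  my_shm_buffer_pool_class_init (MyShmBufferPoolClass *klass)
  {
    GstBufferPoolClass *pool_class = (GstBufferPoolClass *) klass;

    /* hand out buffers backed by shared memory instead of main memory;
     * the vfunc names are assumptions, not a fixed API */
    pool_class->alloc_buffer = my_shm_buffer_pool_alloc;
    pool_class->free_buffer = my_shm_buffer_pool_free;
  }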
The bufferpool object is also used to perform the negotiation of configuration
between elements.
GstPad
------
A GstPad can query a new bufferpool from a peer element with the BUFFERPOOL
query.
The returned bufferpool object can then be configured with the desired
parameters of the buffers it should provide.
When the bufferpool is configured, it must be pushed downstream with the
BUFFERPOOL event. This informs a pad and its peer pad that a bufferpool
should be used for allocation (on source pads) and that the bufferpool is
used by the upstream element (on sinkpads).
Negotiating pool and config
---------------------------
Since upstream needs to allocate buffers from a buffer pool, it should first
negotiate a buffer pool with the downstream element. We propose a simple
scheme where a sink can propose a bufferpool and some configuration and where
the source can choose to use this allocator or use its own.
The algorithm for doing this is roughly like this:
  /* srcpad knows media type and size of buffers and is ready to
   * prepare an output buffer but has no pool yet */

  /* first get the pool from the downstream peer */
  res = gst_pad_query_bufferpool (srcpad, &pool);

  if (pool != NULL) {
    GstBufferPoolConfig config;

    /* deactivate the pool so that we can reconfigure it */
    gst_buffer_pool_set_active (pool, FALSE);

    do {
      /* get the current config */
      gst_buffer_pool_get_config (pool, &config);

      /* check and modify the config to match our requirements */
      if (!tweak_config (&config)) {
        /* we can't tweak the config any more, exit and fail */
        gst_object_unref (pool);
        pool = NULL;
        break;
      }
    }
    /* try to update the config until the pool accepts it */
    while (!gst_buffer_pool_set_config (pool, &config));

    if (pool != NULL) {
      /* we managed to update the config, all is fine now;
       * set the pool to active to make it allocate things */
      gst_buffer_pool_set_active (pool, TRUE);
    }
  }
  if (pool == NULL) {
    /* still no pool, we create one ourselves with our ideal config */
    pool = gst_buffer_pool_new (...);
  }

  /* now set the pool on this pad and the peer pad */
  gst_pad_push_event (pad, gst_event_new_bufferpool (pool));
Negotiation is the same for both push and pull mode. In the case of pull
mode scheduling, the srcpad will perform the negotiation of the pool
when it receives the first pull request.
Allocating from pool
--------------------
Buffers are allocated from the pool of a pad:
  res = gst_buffer_pool_acquire_buffer (pool, &buffer, &params);
Convenience functions to automatically get the pool from a pad can be
provided:
  res = gst_pad_acquire_buffer (pad, &buffer, &params);
Buffers are refcounted in the usual way. When the refcount of the buffer
reaches 0, the buffer is automatically returned to the pool. This is achieved
by setting and reffing the pool as a new buffer member.
Since all the buffers allocated from the pool keep a reference to the pool,
when nothing else is holding a refcount to the pool, it will be finalized
when all the buffers from the pool are unreffed. By setting the pool to
the inactive state we can drain all buffers from the pool.
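Putting this together, a typical producer path looks roughly like this
(a sketch of the proposed API; params is the acquire parameter structure
used in the snippets above):
  GstBuffer *buffer;
  GstFlowReturn res;

  /* take a buffer from the negotiated pool; depending on the params this
   * may block until a buffer is recycled */
  res = gst_buffer_pool_acquire_buffer (pool, &buffer, &params);
  if (res != GST_FLOW_OK)
    return res;

  /* ... fill in the buffer data ... */

  /* pushing passes ownership downstream; when the last reference is
   * dropped there, the buffer automatically returns to the pool */
  res = gst_pad_push (srcpad, buffer);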
Renegotiation
-------------
Renegotiation of the bufferpool might need to be performed when the
configuration of the pool changes. Changes can be in the buffer size (because
of a caps change), alignment or number of buffers.
* downstream
When the upstream element wants to negotiate a new format, it might need
to renegotiate a new bufferpool configuration with the downstream element.
This can, for example, happen when the buffer size changes.
We can not just reconfigure the existing bufferpool because there might
still be outstanding buffers from the pool in the pipeline. Therefore we
need to create a new bufferpool for the new configuration while we let the
old pool drain.
Implementations can choose to reuse the same bufferpool object and wait for
the drain to finish before reconfiguring the pool.
The element that wants to renegotiate a new bufferpool uses exactly the same
algorithm as when it first started.
* upstream
When a downstream element wants to negotiate a new format, it will send a
RECONFIGURE event upstream. This instructs upstream to renegotiate both
the format and the bufferpool when needed.
A pipeline reconfiguration happens when elements are added to or removed
from the pipeline, or when the topology of the pipeline changes. Pipeline
reconfiguration also triggers possible renegotiation of the bufferpool and
caps.
A RECONFIGURE event tags each pad it travels on as needing reconfiguration.
The next buffer allocation will then require the renegotiation or
reconfiguration of a pool.
If downstream has specified a RENEGOTIATE flag, upstream must be prepared to
receive NOT_NEGOTIATED results when allocating buffers, which instructs it
to start caps and bufferpool renegotiation. When using this flag, upstream
can react more quickly to downstream format or size changes.
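An upstream element using this flag would handle the allocation result
roughly as follows (a sketch; renegotiate_pool() is a hypothetical helper
standing for the negotiation algorithm shown earlier):
  res = gst_buffer_pool_acquire_buffer (pool, &buffer, &params);
  if (res == GST_FLOW_NOT_NEGOTIATED) {
    /* downstream was reconfigured: renegotiate caps and bufferpool as
     * described in "Negotiating pool and config", then retry */
    if (!renegotiate_pool (srcpad, &pool))
      return GST_FLOW_NOT_NEGOTIATED;
    res = gst_buffer_pool_acquire_buffer (pool, &buffer, &params);
  }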
Shutting down
-------------
In push mode, a source pad is responsible for setting the pool to the
inactive state when streaming stops. The inactive state will unblock any pending
allocations so that the element can shut down.
In pull mode, the sink element should set the pool to the inactive state when
shutting down so that the peer _get_range() function can unblock.
In the inactive state, all the buffers that are returned to the pool will
automatically be freed by the pool and new allocations will fail.
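In code, the shutdown path is only a matter of deactivating the pool
(sketch):
  /* in the element's unlock/stop path: pending acquires return with an
   * error and buffers flowing back into the pool are freed */
  gst_buffer_pool_set_active (pool, FALSE);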
Use cases
---------
1) videotestsrc ! xvimagesink
Before videotestsrc can output a buffer, it needs to negotiate caps and
a bufferpool with the downstream peer pad.
First it will negotiate a suitable format with downstream according to the
normal rules.
Then it calls gst_pad_query_bufferpool(), which triggers the BUFFERPOOL query
and returns the bufferpool provided by xvimagesink. This bufferpool is
currently in the inactive state and thus has no buffers allocated.
videotestsrc gets the configuration of the bufferpool object. This
configuration lists the desired configuration of the xvimagesink, which can
have specific alignment and/or min/max amount of buffers.
videotestsrc adjusts this configuration: it will likely set the minimum
number of buffers to 1 and the size of the desired buffers. It then updates
the bufferpool with the new configuration.
When the configuration is successfully updated, videotestsrc pushes the
bufferpool downstream with the BUFFERPOOL event.
It then sets the bufferpool to the active state. This preallocates
the buffers in the pool (if needed). This operation can fail when there
is not enough memory available. Since the bufferpool is provided by
xvimagesink, it will allocate buffers backed by an XvImage and pointing
to shared memory with the X server.
If the bufferpool is successfully activated, videotestsrc can acquire a
buffer from the pool, set the caps on it, fill in the data and push it
out to xvimagesink.
xvimagesink can know that the buffer originated from its pool by following
the pool member. It might need to get the parent buffer first in case of
subbuffers.
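A sketch of that check, assuming a pool member on the buffer as described
earlier and that mysink->pool holds the pool the sink provided (the
subbuffer case is only hinted at):
  /* for subbuffers, first walk up to the parent buffer (not shown) */
  if (buffer->pool == mysink->pool) {
    /* the buffer is backed by one of our XvImages, no copy is needed */
  } else {
    /* a foreign buffer: copy its data into one of our own buffers */
  }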
When shutting down, videotestsrc will set the pool to the inactive state.
This will cause further allocations to fail and currently allocated buffers
to be freed as they are returned. videotestsrc will then unref the pool and
stop streaming.
2) videotestsrc ! queue ! myvideosink
In this second use case we have a videosink that can at most allocate
3 video buffers.
Again videotestsrc will have to negotiate a bufferpool with the peer
element. For this it will perform gst_pad_query_bufferpool(), which the
queue will proxy to its downstream peer element.
The bufferpool returned from myvideosink will have max_buffers set to 3.
queue and videotestsrc can operate with this upper limit because neither of
these elements requires more than that number of buffers for temporary
storage.
The bufferpool of myvideosink will then be configured with the size of the
buffers for the negotiated format and according to the padding and alignment
rules. When videotestsrc sets the pool to active, the 3 video
buffers will be preallocated in the pool.
The pool is then announced to downstream elements with the BUFFERPOOL
event. The queue will proxy the BUFFERPOOL event to its srcpad, which
finally configures the pool all the way to the sink.
videotestsrc acquires a buffer from the configured pool on its srcpad and
pushes this into the queue. When the videotestsrc has acquired and pushed
3 frames, the next call to gst_buffer_pool_acquire_buffer() will block
(assuming the GST_BUFFER_POOL_FLAG_WAIT is specified).
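A sketch of such a blocking acquire; the layout of the params structure is
an assumption, only the GST_BUFFER_POOL_FLAG_WAIT flag is taken from this
document:
  GstBufferPoolParams params = { 0, };

  /* block in acquire until one of the 3 buffers is recycled */
  params.flags = GST_BUFFER_POOL_FLAG_WAIT;
  res = gst_buffer_pool_acquire_buffer (pool, &buffer, &params);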
When the queue has pushed out a buffer and the sink has rendered it, the
refcount of the buffer reaches 0 and the buffer is recycled in the pool.
This will wake up the videotestsrc that was blocked waiting for more
buffers and make it produce the next buffer.
In this setup, there are at most 3 buffers active in the pipeline and
the videotestsrc is rate limited by the rate at which buffers are recycled
in the bufferpool.
When shutting down, videotestsrc will first set the bufferpool on the srcpad
to inactive. This causes any pending (blocked) acquire to return with a
WRONG_STATE result and causes the streaming thread to pause.
3) .. ! myvideodecoder ! queue ! fakesink
In this case, the myvideodecoder requires buffers to be aligned to 128
bytes and padded with 4096 bytes. The pipeline starts out with the
decoder linked to a fakesink but we will then dynamically change the
sink to one that can provide a bufferpool.
When myvideodecoder negotiates the buffer size with the downstream element
fakesink, it will receive a NULL bufferpool because fakesink does not
provide one. It will then select its own custom bufferpool to start the
data transfer.
At some point we block the queue srcpad, unlink the queue from the
fakesink, link a new sink, set the new sink to the PLAYING state and send
the right newsegment event to the sink. Linking the new sink would
automatically send a RECONFIGURE event upstream and, through queue, inform
myvideodecoder that it should renegotiate its bufferpool because downstream
has been reconfigured.
Before pushing the next buffer, myvideodecoder would renegotiate a new
bufferpool. To do this, it performs the usual bufferpool negotiation
algorithm. If it can obtain and configure a new bufferpool from downstream,
it sets its own (old) pool to inactive and unrefs it. This will eventually
drain and unref the old bufferpool.
The new bufferpool is set as the new bufferpool for the srcpad and sinkpad
of the queue and set to the active state.
4) .. ! myvideodecoder ! queue ! myvideosink
myvideodecoder has negotiated a bufferpool with the downstream myvideosink
to handle buffers of size 320x240. It has now detected a change in the
video format and needs to renegotiate to a resolution of 640x480. This
requires it to negotiate a new bufferpool with a larger buffer size.
When myvideodecoder needs to get the bigger buffer, it starts the
negotiation of a new bufferpool. It queries a bufferpool from downstream,
reconfigures it with the new configuration (which includes the bigger buffer
size), it sets the bufferpool to active and pushes the bufferpool downstream.
This automatically sets the old pool to the inactive state and unrefs it,
which causes the remaining buffers of the old format to drain.
It then uses the new bufferpool for allocating new buffers of the new
dimension.
If at some point, the decoder wants to switch to a lower resolution again,
it can choose to use the current pool (which has buffers that are larger
than the required size) or it can choose to renegotiate a new bufferpool.
5) .. ! myvideodecoder ! videoscale ! myvideosink
myvideosink is providing a bufferpool for upstream elements and wants to
change the resolution.
myvideosink sends a RECONFIGURE event upstream to notify upstream that a
new format is desirable. Upstream elements try to negotiate a new format
and bufferpool before pushing out a new buffer. The old bufferpools are
drained in the regular way.