THE GOAL
--------
What we are trying to achieve:

- patching of the CVS checkout using our patch files, which are themselves
  stored in our CVS
- passing of:
    make
    make distcheck
    a non-srcdir build (ie, mkdir build; cd build; ../configure; make)

THE SETUP
---------
There is a "mirror" root CVS module that contains "ffmpeg".
This directory contains a vendor-branch checkout of upstream FFmpeg CVS
from a given day.

On HEAD, the following things have been committed on top of this:
* patches/, which is a directory with a set of patches, and a series file
  listing the order, as generated by quilt
* .pc/, which is a tree of files that quilt uses to keep track of its state.
  It contains a list of applied patches, and one directory per patch,
  containing a tree of hardlinked files that were added to the patchset, and
  a .pc file listing all files that are part of the patchset.
* the result of having all these patches applied (ie, quilt push -a) to the
  ffmpeg tree.

Both the patched ffmpeg code and the .pc directory need to be committed to
CVS so that the state of quilt stays in sync with the source.
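
For illustration, a checkout of HEAD then looks roughly like this (the patch
names here are made up):

  patches/
    series                      (patch names, one per line, in apply order)
    01_build.patch
    02_gst-integration.patch
  .pc/
    applied-patches             (quilt's list of currently applied patches)
    01_build.patch/             (per-patch quilt state, as described above)
    02_gst-integration.patch/
  libavcodec/ libavformat/ ...  (the patched ffmpeg tree itself)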

THE WAY
-------

- If you want to hack on our copy of the FFmpeg code, there are some basic
  rules you need to respect:
  - you need to use quilt.  If you don't use quilt, you can't hack on it.
  - we separate patches based on the functionality they patch, and on whether
    or not we want to send the changes upstream.  Make sure you work in the
    right patch.  Use "quilt applied" to check which patches are applied.
  - before starting to hack, run cvs diff.  There should be NO diffs, and
    NO files listed with a question mark.  If there are, somebody before you
    probably made a mistake.  To manage the state correctly, it is vital that
    none of the files are unknown to CVS.
  - if you want to add a file to a patchset, you need to:
    - be in the right patchset
    - quilt add (file)
    - cvs add .pc/(patchsetname)/(file)
    - cvs commit .pc/(patchsetname) (to update the state of quilt in cvs)
    - edit the file
    - quilt refresh
    - quilt push -a (This one is IMPORTANT, otherwise you'll have a huge diff)
    - cvs commit
  - if you want to add a patchset, you need to:
    - go over the procedure with thomas to check it's correct
    - decide where in the stack to put it.  ask for help if you don't know.
    - go there in the patch stack (use quilt pop/push)
    - quilt new (patchsetname).patch (don't forget .patch !)
    - quilt add (files)
    - cvs add the whole .pc/(patchsetname) tree
    - cvs commit .pc/(patchsetname)
    - quilt refresh
    - quilt push -a
    - cvs commit
    - cvs diff (to check if any of the files are unknown to CVS; if they are,
      you need to add them to CVS)

THE PLUGIN
----------
Some notes on how ffmpeg wrapping inside GStreamer currently works:
* gstffmpeg{dec,enc,demux,mux}.c are wrappers for specific element types from
    their ffmpeg counterpart. If you want to wrap a new type of element in
    ffmpeg (e.g. the URLProtocol things), then you'd need to write a new
    wrapper file.

* gstffmpegcolorspace.c is a wrapper for one specific function in ffmpeg:
    colorspace conversion. This works differently from the previously mentioned
    ones, and we'll come to that in the next item. If you want to wrap one
    specific function, then that, too, belongs in a new wrapper file.

* the important difference between all those is that the colorspace wrapper
    defines only one element, so there is a 1<->1 mapping. This makes for a
    fairly basic element implementation. gstffmpegcolorspace.c, therefore,
    doesn't differ much from other colorspace elements. The ffmpeg element
    types, however, define a whole *list* of elements (in GStreamer, each
    decoder etc. needs to be its own element). We use two tricks to keep the
    code simple: codec mapping and dynamic type creation.

* ffmpeg uses CODEC_ID_* enumerations for its codecs. GStreamer uses caps,
    which consist of a mimetype and a defined set of properties. In ffmpeg,
    these properties live in an AVCodecContext struct, which contains anything
    that could configure any codec (which makes it rather messy, but oh well).
    To convert from one to the other, we use codec mapping, which is done in
    gstffmpegcodecmap.[ch]. This is the most important file in the whole
    ffmpeg wrapping process! It contains functions to go from a codec type
    (video or audio - used as the output format for decoding or the input
    format for encoding), a codec id (to identify each format) or a format id
    (a string identifying a file format - usually the file format extension)
    to a GstCaps, and the other way around.
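
    For illustration, here is a heavily simplified sketch of what such a
    mapping could look like.  This is not the actual gstffmpegcodecmap.c
    code: the real functions also take an AVCodecContext so that properties
    such as width, height or sample rate end up in the caps, and they cover
    far more codecs and formats.

      #include <gst/gst.h>
      #include <avcodec.h>   /* ffmpeg; exact include path depends on the setup */

      /* Sketch only: map an ffmpeg codec id to GStreamer caps.
       * Returns NULL for codecs we don't know how to describe. */
      static GstCaps *
      sketch_codecid_to_caps (enum CodecID codec_id)
      {
        switch (codec_id) {
          case CODEC_ID_MPEG4:
            return gst_caps_new_simple ("video/mpeg",
                "mpegversion", G_TYPE_INT, 4,
                "systemstream", G_TYPE_BOOLEAN, FALSE, NULL);
          case CODEC_ID_MP3:
            return gst_caps_new_simple ("audio/mpeg",
                "mpegversion", G_TYPE_INT, 1,
                "layer", G_TYPE_INT, 3, NULL);
          default:
            return NULL;
        }
      }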

* to define multiple elements in one source file (which all behave similarly),
    we dynamically create a type for each element and let all of them operate
    on the same struct (GstFFMpegDec, GstFFMpegEnc, ...). The functions in
    gstffmpeg{dec,enc,demux,mux}.c called gst_ffmpeg*_register() do this.
    The magic is as follows: for each codec or format, ffmpeg has a single
    AVCodec or AV{Input,Output}Format, and these are packed together in a list
    of supported codecs/formats. We simply walk through that list and, for
    each entry, check whether gstffmpegcodecmap.c knows about it. If it does,
    we get the GstCaps for each pad template that belongs to it and register
    a type for all of those together. We also store this in a caching struct,
    which is later used by the base_init() function to fill in information
    about this specific codec in the class struct of this element (pad
    templates and codec/format information). Since the actual codec
    information is the only thing that really makes each codec/format
    different (they all behave the same through the ffmpeg API), we don't
    really need to do anything else that is codec-specific, so all other
    functions are rather simple.
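
    Schematically, the registration loop looks like the sketch below
    (building on the codec-map sketch above).  This is not the actual
    gstffmpegdec.c code: the sketch_* helpers and the caps cache are made up,
    and the real code does a lot more bookkeeping (pad templates, the caching
    struct described above, and so on).  It assumes the CVS-era ffmpeg API,
    where first_avcodec is the head of a linked list of all compiled-in
    codecs.

      /* Sketch only: create and register one GStreamer element type per
       * ffmpeg decoder that the codec map knows how to describe.
       * GstFFMpegDec / GstFFMpegDecClass are the instance/class structs
       * from gstffmpegdec.c. */
      static GHashTable *caps_cache;  /* type name -> GstCaps, read in base_init() */

      static void sketch_base_init (GstFFMpegDecClass * klass);     /* defined elsewhere */
      static void sketch_class_init (GstFFMpegDecClass * klass);    /* defined elsewhere */
      static void sketch_instance_init (GstFFMpegDec * dec);        /* defined elsewhere */

      static gboolean
      sketch_register_decoders (GstPlugin * plugin)
      {
        AVCodec *codec;

        caps_cache = g_hash_table_new (g_str_hash, g_str_equal);

        for (codec = first_avcodec; codec != NULL; codec = codec->next) {
          GTypeInfo typeinfo = {
            sizeof (GstFFMpegDecClass),
            (GBaseInitFunc) sketch_base_init, NULL,
            (GClassInitFunc) sketch_class_init, NULL, NULL,
            sizeof (GstFFMpegDec), 0,
            (GInstanceInitFunc) sketch_instance_init,
          };
          GstCaps *caps;
          gchar *type_name;
          GType type;

          if (!codec->decode)
            continue;

          /* skip codecs that the codec map doesn't know about */
          caps = sketch_codecid_to_caps (codec->id);
          if (caps == NULL)
            continue;

          /* one dynamically created type per codec, e.g. "ffdec_mpeg4" */
          type_name = g_strdup_printf ("ffdec_%s", codec->name);

          /* cache the caps so that base_init() can build the pad
           * templates for this particular codec later on */
          g_hash_table_insert (caps_cache, g_strdup (type_name), caps);

          type = g_type_register_static (GST_TYPE_ELEMENT, type_name,
              &typeinfo, 0);
          gst_element_register (plugin, type_name, GST_RANK_MARGINAL, type);
          g_free (type_name);
        }
        return TRUE;
      }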

* one particular thing that needs mention is how gstffmpeg{mux,demux}.c and
    gstffmpegprotocol.c interoperate. ffmpeg uses URLProtocols for data input
    and output. Now, of course, we want to use the *GStreamer* way of doing
    input and output (filesrc, ...) rather than the ffmpeg way. Therefore, we
    wrap up a GstPad as a URLProtocol and register this with ffmpeg. This is
    what gstffmpegprotocol.c does. The URL is called gstreamer://%p, where %p
    is the address of a GstPad. gstffmpeg{mux,demux}.c then open a file called
    gstreamer://%p, with %p being their source/sink pad, respectively. This
    way, we use GStreamer for data input/output through the ffmpeg API. It's
    rather ugly, but it has worked quite well so far.
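
    In code, the protocol glue looks roughly like the sketch below.  This is
    a simplified, hypothetical version of gstffmpegprotocol.c: the struct and
    function names are made up, url_write and url_seek are left out, and the
    read path assumes GStreamer's pull-based gst_pad_pull_range() works on
    the wrapped pad.  The URLProtocol layout and register_protocol() follow
    the CVS-era ffmpeg API.

      #include <stdio.h>
      #include <string.h>
      #include <gst/gst.h>
      #include <avformat.h>   /* URLProtocol / URLContext (CVS-era ffmpeg) */

      /* Sketch only: wrap a GstPad as an ffmpeg URLProtocol. */
      typedef struct {
        GstPad *pad;       /* the pad smuggled in through the URL string */
        guint64 offset;    /* current read position */
      } SketchProtocolData;

      static int
      sketch_open (URLContext * h, const char *filename, int flags)
      {
        SketchProtocolData *data;
        void *ptr = NULL;

        /* the caller built the URL with g_strdup_printf ("gstreamer://%p", pad) */
        if (sscanf (filename, "gstreamer://%p", &ptr) != 1 || ptr == NULL)
          return -1;

        data = g_new0 (SketchProtocolData, 1);
        data->pad = GST_PAD (ptr);
        h->priv_data = data;
        return 0;
      }

      static int
      sketch_read (URLContext * h, unsigned char *buf, int size)
      {
        SketchProtocolData *data = h->priv_data;
        GstBuffer *buffer = NULL;

        /* pull the requested bytes from upstream through the wrapped pad */
        if (gst_pad_pull_range (data->pad, data->offset, size, &buffer) != GST_FLOW_OK)
          return -1;
        if ((int) GST_BUFFER_SIZE (buffer) < size)
          size = GST_BUFFER_SIZE (buffer);
        memcpy (buf, GST_BUFFER_DATA (buffer), size);
        data->offset += size;
        gst_buffer_unref (buffer);
        return size;
      }

      static int
      sketch_close (URLContext * h)
      {
        g_free (h->priv_data);
        return 0;
      }

      /* fields: name, url_open, url_read, url_write, url_seek, url_close */
      static URLProtocol sketch_protocol = {
        "gstreamer", sketch_open, sketch_read, NULL, NULL, sketch_close,
      };

      /* at plugin init time: register_protocol (&sketch_protocol); */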

* there are lots of things that still need doing. See the TODO file for more
    information.