This is cairo's performance test suite.
One of the simplest ways to run the performance suite is:

    make perf

which will give a report of the speed of each individual test. See
more details on other options for running the suite below.

Running the cairo performance suite
-----------------------------------
The performance suite is composed of two types of tests, micro- and
macro-benchmarks. The micro-benchmarks are a series of hand-written,
short, synthetic tests that measure the speed of doing a simple
operation such as painting a surface or showing glyphs. These aim to
give very good feedback on whether a performance-related patch is
successful without causing any performance degradations elsewhere. The
second type of benchmark consists of replaying a cairo-trace from a
large application during typical usage. These aim to give an overall
feel as to whether cairo is faster for everyday use.
Running the micro-benchmarks
----------------------------
The micro-benchmarks are compiled into a single executable called
cairo-perf-micro, which is what "make perf" executes. Some
examples of running it:

    # Report on all tests with default number of iterations:
    ./cairo-perf-micro

    # Report on 100 iterations of all gradient tests:
    ./cairo-perf-micro -i 100 gradient

    # Generate raw results for 10 iterations into cairo.perf
    ./cairo-perf-micro -r -i 10 > cairo.perf

    # Append 10 more iterations of the paint test
    ./cairo-perf-micro -r -i 10 paint >> cairo.perf

Raw results aren't useful for reading directly, but are quite useful
when using cairo-perf-diff to compare separate runs (see more
below). The advantage of using the raw mode is that test runs can be
generated incrementally and appended to existing reports.
Running the macro-benchmarks
----------------------------
The macro-benchmarks are run by a single program called
cairo-perf-trace, which is also executed by "make perf".
cairo-perf-trace loops over the series of traces stored beneath
cairo-traces/. cairo-perf-trace produces the same output and takes the
same arguments as cairo-perf-micro. Some examples of running it:

    # Report on all tests with default number of iterations:
    ./cairo-perf-trace

    # Report on 100 iterations of all firefox tests:
    ./cairo-perf-trace -i 100 firefox

    # Generate raw results for 10 iterations into cairo.perf
    ./cairo-perf-trace -r -i 10 > cairo.perf

    # Append 10 more iterations of the poppler tests
    ./cairo-perf-trace -r -i 10 poppler >> cairo.perf

Generating comparisons of separate runs
---------------------------------------
It's often useful to generate a chart showing the comparison of two
separate runs of the cairo performance suite, (for example, after
applying a patch intended to improve cairo's performance). The
cairo-perf-diff script can be used to compare two report files
generated by cairo-perf.
Again, by way of example:

    # Show performance changes from cairo-orig.perf to cairo-patched.perf
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked, (cairo-perf without -r).
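
As a concrete example, a complete before-and-after comparison might
look like the following, (the report file names here are only
illustrative):

    # Record a baseline before applying the patch
    ./cairo-perf-micro -r -i 10 > cairo-orig.perf

    # ... apply the patch and rebuild cairo ...

    # Record results with the patch applied
    ./cairo-perf-micro -r -i 10 > cairo-patched.perf

    # Compare the two runs
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf
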
Finally, in its most powerful mode, cairo-perf-diff accepts two git
revisions and will do all the work of checking each revision out,
building it, running cairo-perf for each revision, and finally
generating the report. Obviously, this mode only works if you are
using cairo within a git repository, (and not from a tar file). Using
this mode is as simple as passing the git revisions to be compared to
cairo-perf-diff:

    # Compare cairo 1.2.6 to cairo 1.4.0
    ./cairo-perf-diff 1.2.6 1.4.0

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD~1 HEAD

As a convenience, this common desire to measure a single commit is
supported by passing a single revision to cairo-perf-diff, in which
case it will compare it to the immediately preceding commit. So for
example:

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD

    # Measure the impact of an arbitrary commit by SHA-1
    ./cairo-perf-diff aa883123d2af90

Also, when passing git revisions to cairo-perf-diff like this, it will
automatically cache results and re-use them rather than re-running
cairo-perf over and over on the same versions. This means that if you
ask for a report that you've generated in the past, cairo-perf-diff
should return it immediately.
Now, sometimes it is desirable to generate more iterations rather than
re-using cached results. In this case, the -f flag can be used to
force cairo-perf-diff to generate additional results beyond what has
already been cached:

    # Measure the impact of the latest commit (force more measurement)
    ./cairo-perf-diff -f

And finally, the -f mode is most useful in conjunction with the --
option to cairo-perf-diff, which allows you to pass options to the
underlying cairo-perf runs. This allows you to restrict the additional
test runs to a limited subset of the tests.
For example, a frequently used trick is to first generate a chart with
a very small number of iterations for all tests:

    ./cairo-perf-diff HEAD

Then, if any of the results look suspicious, (say there's a slowdown
reported in the text tests, but you think the text test shouldn't be
affected), you can force more iterations to be tested for only those
tests:

    ./cairo-perf-diff -f HEAD -- text

Generating comparisons of different backends
--------------------------------------------
Another question that is often asked is, "how does the speed of one
backend compare to another?". cairo-perf-compare-backends reads files
generated by cairo-perf and produces a comparison of the backends for
every test.
Again, by way of example:

    # Show relative performance of the backends
    ./cairo-perf-compare-backends cairo.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked, (cairo-perf without -r).

Creating a new performance test
-------------------------------
This is where we could use everybody's help. If you have encountered a
sequence of cairo operations that are slower than you would like, then
please provide a performance test. Writing a test is very simple: it
requires only a small C file with a couple of functions, one of which
exercises the cairo calls of interest.
Here is the basic structure of a performance test file:

    /* Copyright © 2006 Kind Cairo User
     *
     * ... Licensing information here ...
     * Please copy the MIT blurb as in other tests
     */

    #include "cairo-perf.h"

    static cairo_perf_ticks_t
    do_my_new_test (cairo_t *cr, int width, int height)
    {
        cairo_perf_timer_start ();

        /* Make the cairo calls to be measured */

        cairo_perf_timer_stop ();

        return cairo_perf_timer_elapsed ();
    }

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
        /* First do any setup for which the execution time should not
         * be measured. For example, this might include loading
         * images from disk, creating patterns, etc. */

        /* Then launch the actual performance testing. */
        cairo_perf_run (perf, "my_new_test", do_my_new_test);

        /* Finally, perform any cleanup from the setup above. */
    }

That's really all there is to writing a new test. The first function
above is the one that does the real work and returns a timing
number. The second function is the one that will be called by the
performance test rig (see below for how to accomplish that), and
allows for multiple performance cases to be written in one file,
(simply call cairo_perf_run once for each case, passing the
appropriate callback function to each).
We go through this dance of indirectly calling your own function
through cairo_perf_run so that cairo_perf_run can call your function
many times and measure statistical properties over the many runs.
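
For instance, a file containing two cases might end with a wrapper
like the following, (a sketch only: do_my_other_test stands in for a
hypothetical second callback written in the same style as
do_my_new_test above):

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
        /* One cairo_perf_run call per case; the test rig invokes each
         * callback many times and gathers timing statistics. */
        cairo_perf_run (perf, "my_new_test", do_my_new_test);
        cairo_perf_run (perf, "my_other_test", do_my_other_test);
    }
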
Finally, to fully integrate your new test case you just need to add
your new test to three different lists. (TODO: We should set this up
better so that the lists are maintained automatically---computed from
the list of files in cairo/perf, for example). Here's what needs to be
added:

1. Makefile.am: Add the new file name to the cairo_perf_SOURCES list

2. cairo-perf.h: Add a new CAIRO_PERF_DECL line with the name of your
   function, (my_new_test in the example above)

3. cairo-perf.c: Add a new row to the list at the end of the file. A
   typical entry would look like:

       { my_new_test, 16, 64 }

   The last two numbers are a minimum and a maximum image size at
   which your test should be exercised. If these values are the same,
   then only that size will be used. If they are different, then
   intermediate sizes will be used by doubling. So in the example
   above, three tests would be performed at sizes of 16x16, 32x32 and
   64x64.

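For reference, the declaration and list entry for the example above
might look like this, (a sketch: check the existing entries in
cairo-perf.h and cairo-perf.c for the exact form used by your version
of cairo):

    /* In cairo-perf.h, alongside the other test declarations: */
    CAIRO_PERF_DECL (my_new_test);

    /* In cairo-perf.c, appended to the list of tests: */
    { my_new_test, 16, 64 },
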
How to record new traces
------------------------
Using cairo-trace you can record the exact sequence of graphic operations
made by an application and replay them later. These traces can then be
used by cairo-perf-trace to benchmark the various backends and patches.
To record a trace:

    $ cairo-trace --no-mark-dirty --no-callers $APPLICATION [$ARGV]

--no-mark-dirty is useful for applications that are paranoid about
surfaces being modified by external plugins outside of their control;
the prime example here is Firefox.

--no-callers disables the symbolic caller lookup and so speeds up
tracing (dramatically for large C++ programs) and similarly speeds up
the replay, as the files are much smaller.

The output file will be called $APPLICATION.$PID.trace; the actual
path written to will be displayed on the terminal.
Alternatively you can use:

    $ cairo-trace --profile $APPLICATION [$ARGV]

which automatically passes --no-mark-dirty and --no-callers and compresses
the resultant trace using LZMA. To use the trace with cairo-perf-trace you
will first need to decompress it.
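
For example, assuming the compressed trace was written with a .lzma
suffix next to the usual name, (an assumption; check the file that
cairo-trace actually produced), it can be decompressed with a standard
LZMA tool such as unlzma or xz:

    $ unlzma $APPLICATION.$PID.trace.lzma
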
Then to use cairo-perf-trace:

    $ ./cairo-perf-trace $APPLICATION.$PID.trace

Alternatively you can put the trace into perf/cairo-traces, or set
CAIRO_TRACE_DIR to point to your trace directory, and the trace will be
included in the performance tests.
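
For example, (the directory name here is illustrative):

    $ CAIRO_TRACE_DIR=$HOME/my-traces ./cairo-perf-trace
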
If you record an interesting trace, please consider sharing it by compressing
it, LZMA preferred, and posting a link to cairo@cairographics.org, or by
uploading it to git.cairographics.org/cairo-traces.
How to run cairo-perf-diff on WINDOWS
-------------------------------------
This section explains the specifics of running cairo-perf-diff under
win32 platforms. It assumes that you have installed a UNIX-like shell
environment such as MSYS (distributed as part of MinGW).

1. From your Mingw32 window, be sure to have all of your MSVC
   environment variables set up for proper compilation using 'make'

2. Add the %GitBaseDir%/Git/bin path to your environment, replacing
   %GitBaseDir% with whatever directory your Git version is installed
   to.

3. Comment out the "UNSET CDPATH" line in the git-sh-setup script
   (located inside the ...Git/bin directory) by putting a "#" at the
   beginning of the line.

You should be ready to go!
From your mingw32 window, go to your cairo/perf directory and run the
cairo-perf-diff script with the right arguments.
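
For example, to measure the impact of the most recent commit, as
described earlier:

    ./cairo-perf-diff HEAD
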
Thanks for your contributions and have fun with cairo!