= EzBench =

This repo contains a collection of tools to benchmark graphics-related
patch series.

== Core.sh ==

WARNING: This tool can be used directly from the CLI, but it is recommended
that you use ezbench in conjunction with ezbenchd, which together support
testing the kernel and are much more robust to errors.

This tool is responsible for collecting the data and generating logs that will
be used by another tool to generate a visual report.

To operate, this script requires a git repo, the ability to compile and
deploy a commit, and benchmarks to run. To simplify usage, a profile should
be written for each repo you want to test. The profile allows ezbench to
check which version is currently deployed, to compile and install a new
version, and to set default parameters so you do not have to type very long
command lines every time a set of benchmarks needs to be run.

By default, the logs are output to logs/<date of the run>/ and are stored
mostly as csv files. The main report is found under the name results and
needs to be read with "less -r" to get the colours out! The list of commits
tested is found under the name commit_list. Comprehensive documentation of
the file structure will be written soon.
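
For example, to inspect a finished run (the folder name is the date of the
run, so adapt the paths below):

    ls logs/                                 # list all the runs
    less -r logs/<date of the run>/results   # main report, with colours
    cat logs/<date of the run>/commit_list   # commits that were tested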

You may specify whatever name you want by adding -N <name> to the command
line. This is very useful when testing kernel-related changes, as a reboot
into the new kernel is needed for every commit.
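
As a hypothetical illustration (the report name and benchmark are made up,
the flags are the same ones used in the examples below), the following
collects its results under logs/my-kernel-series/ instead of a per-date
folder:

    ./core.sh -P mesa -N my-kernel-series -b glxgears:window -r 3 HEAD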

=== Dependencies ===

 - A recent-enough version of bash
 - make
 - awk
 - all the other typical Unix command-line utilities

=== Configuration ===

The test configuration file is named user_parameters.sh. A sample file called
user_parameters.sh.sample comes with the repo and is a good basis for your
first configuration file.

You will need to adjust this file to give the location of the base directory of
all the benchmark folders and repositories for the provided profiles.
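
A simple way to get started is to copy the sample file and then edit it:

    cp user_parameters.sh.sample user_parameters.sh
    $EDITOR user_parameters.sh   # point the base directory at your setup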

Another important note about core.sh is that it is highly modular and
hook-based. Have a look at profiles.d/$profile/conf.d/README for the
documentation about the different hooks.
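
For instance, assuming you use the mesa profile shipped with the repo, its
hook documentation can be read with:

    less profiles.d/mesa/conf.d/README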

=== Examples ===

==== Testing every patchset of a series ====

The following command tests all the GLB27:Egypt cases except the ones
containing the word cpu. It runs each selected benchmark 5 times on the
10 commits leading up to HEAD~~.

    ./core.sh -p ~/repos/mesa -B cpu -b GLB27:Egypt -r 5 -n 10 -H HEAD~~

The following command runs the synmark:Gl21Batch2 benchmark (note the $ at
the end, which indicates that we do not want the :cpu variant). It runs the
benchmark 3 times on 3 commits, in this order: HEAD~5, HEAD~2, HEAD~10.

    ./core.sh -p ~/repos/mesa -b synmark:Gl21Batch2$ -r 3 HEAD~5 HEAD~2 HEAD~10

To use the mesa profile, which has the advantage of checking that the deployment
was successful, you may achieve the same result by running:

    ./core.sh -P mesa -b synmark:Gl21Batch2$ -r 3 HEAD~5 HEAD~2 HEAD~10

==== Retrospectives ====

Here is an example of how to generate a retrospective. The interesting part
is the call to utils/get_commit_list.py, which generates the list of commits
to test:

    ./core.sh -p ~/repos/mesa -B cpu -b GLB27:Egypt:offscreen \
                 -b GLB27:Trex:offscreen -b GLB30:Manhattan:offscreen \
                 -b GLB30:Trex:offscreen -b unigine:heaven:1080p:fullscreen \
                 -b unigine:valley:1080p:fullscreen -r 3 \
                 -m "./recompile-release.sh" \
                 `utils/get_commit_list.py -p ~/repos/mesa -s 2014-12-01 -i "1 week"`

== ezbench ==

This tool is meant to make core.sh easy to use and to support testing
performance across reboots.

It allows creating a new performance report, scheduling benchmark runs,
changing the number of execution rounds on the fly, and starting, pausing,
or halting the execution of the report.

This tool uses core.sh as a backend to check that the commit SHA1s and the
tests exist, so you can be sure that the work can be executed when the time
comes.

=== Dependencies ===

 - python3
 - numpy

=== Examples ===

==== Creating a report ====

The ezbench command allows you to create a new performance report. To create
a performance report named 'mesa-tracking-pub-benchmarks', using the core.sh
profile 'mesa', you need to run the following command:

    ./ezbench -p mesa mesa-tracking-pub-benchmarks

==== Adding benchmark runs ====

Adding 2 rounds of the benchmark GLB27:Egypt:offscreen for the commit HEAD
to the report mesa-tracking-pub-benchmarks can be done using the following
command:

    ./ezbench -r 2 -b GLB27:Egypt:offscreen -c HEAD mesa-tracking-pub-benchmarks

A retrospective can be made in the same fashion as with core.sh, except
that it also works across reboots, which is useful when testing kernels:

    ./ezbench -r 3 -b GLB27:Egypt:offscreen -b GLB27:Trex:offscreen \
              -b GLB30:Manhattan:offscreen -b GLB30:Trex:offscreen \
              -b unigine:heaven:1080p:fullscreen -b unigine:valley:1080p:fullscreen \
              -c "`utils/get_commit_list.py -p ~/repos/mesa -s 2014-12-01 -i "1 week"`" \
              mesa-tracking-pub-benchmarks

==== Checking the status of a report ====

You can check the status of the 'mesa-tracking-pub-benchmarks' report by calling
the following command:

    ./ezbench mesa-tracking-pub-benchmarks status

==== Changing the execution status of the report ====

When creating a report, its default state is "initial", which means that
nothing will happen until the state is changed. To change the state, run the
following command:

    ./ezbench mesa-tracking-pub-benchmarks (run|pause|abort)

 - The "run" state says that the report is ready to be run by ezbenchd.py.

 - The "pause" and "abort" states indicate that ezbenchd.py should not be
 executing any benchmarks from this report. The difference between the "pause"
 and "abort" states is mostly for humans, to convey the actual intent.

==== Collecting data without ezbenchd.py ====

If you are not using ezbenchd.py, you may simply run the following command to
start collecting data:

    ./ezbench mesa-tracking-pub-benchmarks start

This command will automatically change the state of the report to "run".

== utils/ezbenchd.py ==

TODO

== stats/gen_report.py ==

WARNING: This tool is deprecated; compare_reports.py is now the preferred
way, even if its single-report mode is not as advanced as gen_report.py's.

The goal of this tool is to read the reports from ezbench and make them
presentable to engineers and managers.

Commits can be renamed by having a file named 'commit_labels' in the logs
folder. The format is one label per line: the short SHA1 first, a space,
and then the label. Here is an example:
    bb19f2c 2014-12-01

If you want to generate date labels for commits, you can use the tool
utils/gen_date_labels.py to generate the 'commit_labels' file. Example:
    utils/gen_date_labels.py -p ~/repos/mesa logs/seekreet_stuff/

It is also possible to add notes to the HTML report by adding a file called
'notes' in the report folder. Every line of the notes file will be added to
an unordered list. It is possible to use HTML inside the file.
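
As a hypothetical illustration, a 'notes' file could look like this (each
line becomes one item of the list, and HTML is allowed):

    All runs were performed on an otherwise-idle machine.
    The <b>second</b> run was done after a firmware update.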

=== Dependencies ===

 - python3
 - matplotlib
 - scipy
 - mako
 - an internet connection to read the report

=== Example ===

This command will create an HTML report named
logs/public_benchmarks_trend_broadwell/index.html. Nothing more, nothing less.

    ./stats/gen_report.py logs/public_benchmarks_trend_broadwell/


== utils/perf_bisect.py ==

WARNING: The introduction of smart ezbench made this tool obsolete.

The perf_bisect.py tool allows bisecting performance issues. It is quite trivial
to use, so just check out the example.

=== Examples ===

The following command bisects a performance difference between the commits
HEAD~100 and HEAD. The -p, -b, -r and -m arguments are the same as for
core.sh.

    utils/perf_bisect.py -p ~/repos/mesa -b glxgears:window -r 1 HEAD~100 HEAD