<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<link href="wayland.css" rel="stylesheet" type="text/css">
<script type="text/javascript" src="generated-toc.js"></script>
<title>Wayland Testing</title>
</head>
<body>
<h1><a href="/"><img src="wayland.png" alt="Wayland logo"></a></h1>
<div id="generated-toc" class="generate_from_h2"></div>
<h2>Wayland Unit Test Suite</h2>
<p>
The Wayland unit test suite is part of the Wayland source tree, found under
the <i>tests</i> directory. It leverages automake's
<a href="http://www.gnu.org/software/automake/manual/html_node/Serial-Test-Harness.html">Serial Test Harness</a> to
specify, compile, and execute the tests through <i>make check</i>.
</p>
<p>
It is recommended that all Wayland developers create unit tests for their
code. All unit tests should pass before submitting patches upstream.
The Wayland maintainer(s) may refuse patches that are not accompanied
by unit tests or that break existing ones.
</p>
<h3>Compiling and Running</h3>
<pre>
$ make check
</pre>
<p>Setting <code>NO_ASSERT_LEAK_CHECK=1</code> in the environment disables the test runner's simple memory leak checks.</p>
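<p>For example, to run the suite with the leak checks turned off:</p>
<pre>
$ NO_ASSERT_LEAK_CHECK=1 make check
</pre>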
<h3>Writing Tests</h3>
<p>
The test runner (tests/test-runner.h,c) predefines the <i>main</i> function, does simple
memory leak checking for each test, and defines various convenience macros to simplify
writing unit tests. When running a unit test group, the test runner executes each
test case in a separate forked subprocess.
</p>
<p>
The <i>TEST(&lt;test name&gt;)</i> macro defines a test case that "passes" on a
<b>normal</b> exit status of the test (i.e. exit code 0). An abnormal exit status causes the test
to "fail". Tests defined with this macro are auto-registered with the test runner.
</p>
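<p>
For instance, a minimal test case using the <i>TEST</i> macro might look like the
following sketch (the test name is hypothetical):
</p>
<pre>
#include &lt;assert.h&gt;

#include "test-runner.h"

/* Passes if the body returns normally (exit status 0); a failing
 * assert() aborts the forked test process, which the runner
 * reports as a failure. */
TEST(sanity_check)
{
	int sum = 2 + 2;

	assert(sum == 4);
}
</pre>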
<p>
The <i>FAIL_TEST(&lt;test name&gt;)</i> macro defines a test case that "passes" on an
<b>abnormal</b> exit status of the test. A normal exit status causes the test to
"fail". Tests defined with this macro are auto-registered with the test runner.
</p>
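<p>
Conversely, a <i>FAIL_TEST</i> case passes only if it exits abnormally. Continuing
the sketch above (the name is again hypothetical):
</p>
<pre>
/* Passes only because the assert() below aborts the process;
 * if the body returned normally, the runner would report a failure. */
FAIL_TEST(sanity_check_must_abort)
{
	assert(0);
}
</pre>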
<h2>Weston Unit Test Suite</h2>
<p>
The Weston unit test suite is part of the Weston source tree, found under
the <i>tests</i> directory. It leverages automake's
<a href="http://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness.html">Parallel Test Harness</a> to
specify, compile, and execute the tests through <i>make check</i>.
</p>
<p>
It is recommended that all Weston developers create unit tests for their
code. All unit tests should pass before submitting patches upstream.
The Weston maintainer(s) may refuse patches that are not accompanied
by unit tests or that break existing ones.
</p>
<h3>Compiling and Running</h3>
<p>To run with the default headless backend:</p>
<pre>
$ make check
</pre>
<p>To run with a different backend, e.g. the X11 backend:</p>
<pre>
$ make check BACKEND=x11-backend.so
</pre>
<h3>Running Individual Tests</h3>
<p>
The <code>check</code> target runs all tests registered
in the <code>TESTS</code> variable in Makefile.am. Tests with
a <code>.test</code> extension can be run directly, e.g.:</p>
<pre>
$ ./vertex-clip.test
test "float_difference_same":exit status 0, pass.
...
16 tests, 16 pass, 0 skip, 0 fail
</pre>
<p>Tests suffixed with <code>.la</code> and <code>.weston</code> are run
using the <code>tests/weston-tests-env</code> shell script, which can also be
used to run individual tests by hand:</p>
<pre>
$ abs_builddir=$PWD tests/weston-tests-env button.weston
</pre>
<h3>Test Results</h3>
<p>
The <code>*.test</code> tests print their results to stdout. The other
tests write into a <code>logs/</code> subdirectory, generating one log for the
server and another for the test itself.
</p>
<p>
Example output:
</p>
<pre>
test-client: got global pointer 100 100
test-client: got keyboard keymap
test-client: got surface enter output 0x202d600
test-client: got keyboard modifiers 0 0 0 0
test-client: got pointer enter 0 0, surface 0x202d500
test-client: got pointer motion 50 50
test-client: got global pointer 150 150
test-client: got pointer button 272 1
test-client: got pointer button 272 0
test "simple_button_test":exit status 0, pass.
1 tests, 1 pass, 0 skip, 0 fail
</pre>
<h3>Writing Tests</h3>
<p><b>Compositor Tests</b></p>
<p>
These tests are written by leveraging Weston's module feature. See the <i>surface-test</i> for
an example of how this is done.
</p>
<p>
In general, you define a <i>module_init(...)</i> function which registers an idle callback
to your test function. The test function is responsible for ensuring the compositor
exits at the end of your test. For example:
</p>
<pre>
static void
test_foo(void *data)
{
	struct weston_compositor *compositor = data;

	/* Test stuff...
	   assert(...);
	   assert(...);
	   ...
	*/

	wl_display_terminate(compositor->wl_display);
}

WL_EXPORT int
module_init(struct weston_compositor *compositor,
	    int *argc, char *argv[], const char *config_file)
{
	struct wl_event_loop *loop;

	loop = wl_display_get_event_loop(compositor->wl_display);
	wl_event_loop_add_idle(loop, test_foo, compositor);

	return 0;
}
</pre>
<p><b>Client Tests</b></p>
<p>
Similar to the Wayland unit test suite, the test runner (tests/weston-test-runner.h,c)
predefines the <i>main</i> function and various convenience macros to simplify writing
unit tests. When running a unit test group, the test runner executes each test case
in a separate forked subprocess. Unlike the Wayland unit test suite, this runner
does not have a simple memory leak checker.
</p>
<p>
The <i>TEST(&lt;test name&gt;)</i> macro defines a test case that "passes" on a
<b>normal</b> exit status of the test (i.e. exit code 0). An abnormal exit status causes the test
to "fail". Tests defined with this macro are auto-registered with the test runner.
<p>
<p>
The <i>FAIL_TEST(&lt;test name&gt;)</i> macro defines a test case that "passes" on an
<b>abnormal</b> exit status of the test. A normal exit status causes the test to
"fail". Tests defined with this macro are auto-registered with the test runner.
</p>
<p>
Client tests have access to the <i>wl_test</i> interface, which defines a small API
for interacting with the compositor (e.g. emitting input events, querying surface data, etc.).
There is also a client helper API (tests/weston-test-client-helper.h,c) that
simplifies client setup and wl_test interface usage. Probably the most effective
way to learn how to leverage these APIs is to study some of the existing tests
(e.g. tests/button-test.c).
</p>
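<p>
As a rough sketch of a client test: the helper calls <code>client_create()</code> and
<code>client_roundtrip()</code> below are assumed to behave as they do in the existing
tests (e.g. tests/button-test.c); check weston-test-client-helper.h for the exact
signatures, and the test name is hypothetical.
</p>
<pre>
#include &lt;assert.h&gt;

#include "weston-test-client-helper.h"

TEST(client_setup_sketch)
{
	struct client *client;

	/* Create a test client with a mapped surface at 100,100
	 * sized 125x125, as the existing tests do. */
	client = client_create(100, 100, 125, 125);
	assert(client);

	/* Wait for the compositor to process our requests before
	 * checking any state exposed through the wl_test interface. */
	client_roundtrip(client);
}
</pre>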
</body>
</html>