Writing tests for the X.Org Integration Test (XIT) suite

== googletest ==
You should be familiar with googletest
http://code.google.com/p/googletest/wiki/Documentation

This document will not detail googletest-specific issues; it focuses on
those parts that are specific to XIT.

== Directory layout ==
Tests are divided by general category:
tests/input .... tests for specific input driver features or bugs
tests/video .... tests for specific video driver features or bugs
tests/server ... tests for server features or bugs
tests/lib ...... tests for library features or bugs

Each directory contains a number of binaries that group the tests further,
e.g. input/evdev is a set of evdev-related tests. Tests should go into existing
binaries where sensible, but tests for new logical features should add new
binaries. A test binary may have multiple source files.

Ideally, each feature or behaviour group can be tested by running one
binary.

== Writing tests ==
Generally, when writing a new test you should check whether similar tests
already exist. Copy and rename such a test, then add the test-specific code. Do
look up the class definitions of the various helper classes, especially the
ones in the tests/common directory.

A test name should stay constant so bugs, patches and commits can refer to
it. Pick a descriptive name for the test. Ideally, a test name describes
roughly what the test does and is specific enough that name collisions are
unlikely.

Each test should start with XORG_TESTCASE(), containing a human-readable
description of the test and the steps it takes. This description is printed
when a test fails and should be precise enough to explain what the test
attempts to verify.

The central class to know is XITServer. It's a wrapper around
xorg::testing::XServer with some automated features. You should always use
XITServer to create an X Server in your test, never xorg-gtest's XServer
class. The XITServer initialises itself on DISPLAY=:133, with config and log
files named after the test.

The second class to know is the XOrgConfig class. It provides simple hooks
for writing a config file with some of the options automated.

Thus, your basic test could look like this:

TEST(SomeTest, TestForFeature) {
    XORG_TESTCASE("Create simple input device section.\n"
                  "Start server.\n"
                  "Run tests against the server.\n");

    XOrgConfig config;
    // Add evdev input device section
    config.AddInputSection("evdev", "some device name",
                           "Option \"Device\" \"/dev/input/event0\");
    // Add dummy video section
    config.AddDefaultScreenWithDriver();

    // Start a server based on that config
    XITServer server;
    server.Start(config);

    ::Display *dpy = XOpenDisplay(server.GetDisplayString().c_str());

    // now run your tests

    // clean up
    server.Terminate();

    // A test should remove its files on success but leave them on failure
    config.RemoveConfig();
    server.RemoveLogFile();
}

This test will show up as
  SomeTest.TestForFeature
in the test logs, and use log and config files named the same way.

The above is a simple structure for starting a server and testing against
it. In most cases, several tests want the same configuration and moving the
common parts out into shared code is useful.

class MyFeatureTest : public XITServerTest {
public:
   virtual void SetUpConfigAndLog() {
       config.AddDefaultScreenWithDriver();
       config.AddInputSection("evdev", "some device name", ...
   }
};

SetUpConfigAndLog() will be called during the setup of the test, before the
server is started. This is the hook to change your configuration file. For
most tests, you do not need to override any other calls.
Your test case based on that class can now look like this:

TEST_F(MyFeatureTest, TestForFeature) {
    XORG_TESTCASE( .... )
    ::Display *dpy = Display();

    // now run your tests against the display
}

Of course, you can just use Display() directly instead of dpy. XITServerTest
will write the config file, start the server, and remove the config and log
files if the test succeeds. A new server is started for every TEST_F you
have written, and there is no state dependency between tests.
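
To illustrate the per-test isolation, here is a minimal sketch (the test
names are hypothetical): both cases use the same fixture, yet each one runs
against its own freshly started server.

TEST_F(MyFeatureTest, FirstFeatureCheck) {
    XORG_TESTCASE("Runs against a freshly started server.\n");

    // Whatever happens here (grabs, created windows, device state) is gone
    // once this test finishes; the server is shut down afterwards.
    XFlush(Display());
}

TEST_F(MyFeatureTest, SecondFeatureCheck) {
    XORG_TESTCASE("Gets its own server and config file, independent of the "
                  "test above.\n");

    XFlush(Display());
}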

For general feature tests, you should _always_ derive your feature tests
from a base class. Do not directly base your tests on any class in
tests/common. If necessary, write an empty class so that test naming stays
consistent even if the shared classes in tests/common change. So even if you
don't need any actual special code, base your test on a named empty class:

class MyFeatureTest : public XITServerTest {};

Both of the tests above used a specific input device. But we need to ensure
that device actually exists, so it's best to create it ourselves:

class MyFeatureTestWithDevice : public XITServerTest,
                                public DeviceInterface {
public:
    virtual void SetUp() {
        SetDevice("mice/PIXART-USB-OPTICAL-MOUSE-HWHEEL.desc");
        XITServerTest::SetUp(); // always call the parent's SetUp
    }

    // set up config and log as above
};

This class creates a uinput device based on
recordings/mice/PIXART-USB-OPTICAL-MOUSE-HWHEEL.desc. By the time our TEST_F
is called, that uinput device is available for use:

TEST_F(MyFeatureTestWithDevice, TestForFeature) {
    XORG_TESTCASE( .... )

    XSelectInput(Display(), DefaultRootWindow(Display()), PointerMotionMask);
    XSync(Display(), False);

    dev->PlayOne(EV_REL, REL_X, -1, true); /* true to send an EV_SYN after the event */

    ASSERT_TRUE(xorg::testing::XServer::WaitForEventOfType(Display(), MotionNotify, -1, -1));

    XEvent ev;
    XNextEvent(Display(), &ev);
    // ev is our motion event
}

So, what does this do? The test registers for pointer motion events on the
root window, plays one relative x-axis motion event on the device, and then
waits for the motion event to arrive.

Note: the X protocol is asynchronous and Xlib buffers generously. You should
_always_ call XSync() before triggering anything in the server. If not, your
event selection may still be in the local buffer when the events are
generated and you'll never see the events.
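
As a reminder, the ordering matters; this is the same pattern used in the
example above, condensed to the three lines that must stay in this order:

    XSelectInput(Display(), DefaultRootWindow(Display()), PointerMotionMask);
    XSync(Display(), False);               // push the event selection to the server first
    dev->PlayOne(EV_REL, REL_X, -1, true); // only then generate device events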

If you have more than one event, you can replay a recorded sequence with:
   dev->Play(RECORDINGS_DIR "mice/somefile.events");
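
A sketch of how such a replay could look inside a full test; the recording
file name is a placeholder, and Play() is the xorg-gtest evemu device call
for replaying a whole recording (as opposed to PlayOne() for single events):

TEST_F(MyFeatureTestWithDevice, ReplayRecordedSequence) {
    XORG_TESTCASE("Select for pointer motion events.\n"
                  "Replay a recorded event sequence on the device.\n"
                  "Verify that motion events arrive.\n");

    XSelectInput(Display(), DefaultRootWindow(Display()), PointerMotionMask);
    XSync(Display(), False);

    // Placeholder file name; use one of the recordings shipped with XIT
    dev->Play(RECORDINGS_DIR "mice/somefile.events");

    ASSERT_TRUE(xorg::testing::XServer::WaitForEventOfType(Display(), MotionNotify, -1, -1));

    // Drain the remaining motion events generated by the recording
    XEvent ev;
    while (XPending(Display()))
        XNextEvent(Display(), &ev);
}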

Finally, if you're writing tests that require XI2, use the
XITServerInputTest as parent class. This class will automatically register
for XI2 on startup, and lets you override that call.

class MyFeatureTestXI2 : public XITServerInputTest {
public:
    virtual int RegisterXI2(int major, int minor) {
        return XITServerInputTest::RegisterXI2(2, 2); // Force XI2.2 for this test
    }
};

In your TEST_F, you now have xi2_opcode available.

TEST_F(MyFeatureTestXI2, TestForFeature) {
    XORG_TESTCASE( .... )

    ... select for events, play, etc.

    ASSERT_TRUE(xorg::testing::XServer::WaitForEventOfType(Display(), GenericEvent, xi2_opcode, XI_ButtonPress));
}
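
Once WaitForEventOfType() returns, the actual XI2 event data can be pulled
out of the event's cookie. The following fragment, meant to sit inside the
TEST_F after the assertion above, is a sketch using plain Xlib/XI2 calls;
the field checks are illustrative only:

    XEvent ev;
    XNextEvent(Display(), &ev);
    ASSERT_EQ(ev.xcookie.type, GenericEvent);
    ASSERT_EQ(ev.xcookie.extension, xi2_opcode);

    ASSERT_TRUE(XGetEventData(Display(), &ev.xcookie));
    XIDeviceEvent *device_event =
        reinterpret_cast<XIDeviceEvent*>(ev.xcookie.data);
    ASSERT_EQ(device_event->evtype, XI_ButtonPress);
    // device_event->deviceid, device_event->detail, etc. are available here
    XFreeEventData(Display(), &ev.xcookie);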

That's the gist of writing tests. There are several helpers and other
functions that make writing tests easier. Read other tests to get a feel for
what those calls are.

== Debugging tests ==
xorg-gtest supports environment variables that help with debugging the
server.

Failing tests usually leave their config and log files around for
inspection. By default, this directory is /tmp (unless changed at configure
time with --with-logpath) and each test uses a naming convention that
includes TestCase.TestName.

Starting a server with the same configuration as a failed test is thus:
  Xorg -config /tmp/EvdevDriverFloatingSlaveTest.FloatingDevice

If tests fail because the server doesn't appear to start, you may set
  export XORG_GTEST_CHILD_STDOUT=1
  sudo -E ./testname --gtest_filter="*blah*" # note the -E to sudo

If this variable is set, stdout from the server will go to stdout of your
test. It will be interleaved with the test output but nonetheless that's a
good way to identify failing server starts.

If you need to gdb the server to set breakpoints before a test starts, set
  export XORG_GTEST_XSERVER_SIGSTOP=1

The server will be sent a SIGSTOP signal after it starts, waiting for you to
attach gdb. Once you continue the server, the test proceeds.

== Using the bug registry ==
Most server versions will fail at least some tests; tests may have been
committed before a fix for a given failure was upstream. It is hard to keep
track of which tests fail, which is what the bug registry addresses.

Often, what really matters is whether any test outcomes changed after a fix
in the server. To use the bug registry for this task, run the following
commands. On the __original__ server, run:
   # Run the grab tests, printing to a JUnit test xml file
   ./test/server/grab --gtest_output="xml:grab.xml"
   # Create a registry based on the test results
   xit-bug-registry create grab.xml > grab-results.xml
   # fix server bug
   # Re-run grab tests on new server
   ./test/server/grab --gtest_output="xml:grab.xml"
   # Compare previous results with new results
   xit-bug-registry verify grab.xml < grab-results.xml


The output lists the test names and the expected versus actual outcome, plus
a status code that makes unexpected results easy to grep for.

Note that test failures in particular need to be treated with caution. An
unrelated fix may alter the outcome of an already-failing test (e.g. the
server now crashes as opposed to returning incorrect values). It is not
enough to simply check that all tests have the same outcome.