Writing tests for the X.Org Integration Test (XIT) suite

== googletest ==

You should be familiar with googletest:
  http://code.google.com/p/googletest/wiki/Documentation

This document will not detail issues specific to googletest itself, it will
focus on those parts that are specific to XIT.

== Directory layout ==

Tests are divided by general category:

  tests/input .... tests for specific input driver features or bugs
  tests/video .... tests for specific video driver features or bugs
  tests/server ... tests for server features or bugs
  tests/lib ...... tests for library features or bugs

Each directory has a number of binaries that group the tests further, e.g.
input/input is a set of input-related tests. Tests should go into existing
binaries where sensible, but tests for new logical features should add new
binaries. A test binary may have multiple source files. Ideally, each
feature or behaviour group can be tested by running one binary.

== Writing tests ==

Generally, when writing a new test you should check if there are similar
tests. Copy and rename such a test, then add the test-specific code. Do look
up the class definitions of the various helper classes, especially the ones
in the tests/common directory.

A test name should stay constant so bugs, patches and commits can refer to
those test names. Pick a descriptive name for the test. Ideally, a test name
describes approximately what the test does and is specific enough that a
name collision is unlikely.

Each test should start with XORG_TESTCASE(), containing a human-readable
description of the test and the steps it takes. This description is printed
when a test fails and should be precise enough to explain what the test
attempts to verify.

The central class to know is XITServer. It is a wrapper around
xorg::testing::XServer with some automated features. You should always use
XITServer to create an X Server in your test, never xorg-gtest's XServer
class. The XITServer initialises itself on DISPLAY=:133, with config and log
file names named after the test name.

The second class to know is XOrgConfig. It provides simple hooks for writing
a config file with some of the options automated.

Most tests that use any kind of input will look something like this:

  class MyFeatureTest : public XITServerInputTest,
                        public DeviceInputTestInterface {
  public:
      void SetUp() override {
          AddDevice(xorg::testing::inputtest::DeviceType::POINTER);
          XITServerInputTest::SetUp();
      }

      void SetUpConfigAndLog() override {
          config.AddDefaultScreenWithDriver();
          config.AddInputSection(XORG_INPUTTEST_DRIVER, "some device name",
                                 "Option \"CorePointer\" \"on\"\n" +
                                 Dev(0).GetOptions());
          config.WriteConfig();
      }

      void StartServer() override {
          XITServerInputTest::StartServer();
          WaitForDevices();
      }
  };

SetUpConfigAndLog() will be called during the setup of the test, before the
server is started. This is the hook to change your configuration file. For
most tests, you do not need to override any other calls.
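For a test that does not need input devices at all, the override can stay
minimal. A sketch, assuming a plain XITServerTest provides the same config
member and SetUpConfigAndLog() hook as XITServerInputTest (the class name
here is made up):

  class MyServerFeatureTest : public XITServerTest {
  public:
      void SetUpConfigAndLog() override {
          /* assumption: XITServerTest exposes the same config member as
             XITServerInputTest; only a default screen section is written */
          config.AddDefaultScreenWithDriver();
          config.WriteConfig();
      }
  };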
Your test case, based on MyFeatureTest, can now look like this:

  TEST_F(MyFeatureTest, TestForFeature) {
      XORG_TESTCASE( .... );

      ::Display *dpy = Display();

      // now run your tests against the display
      XSelectInput(dpy, DefaultRootWindow(Display()), PointerMotionMask);
      XSync(dpy, False);

      Dev(0).RelMotion(1, 0);

      ASSERT_TRUE(xorg::testing::XServer::WaitForEventOfType(Display(),
                                                             MotionNotify,
                                                             -1, -1));
      XEvent ev;
      XNextEvent(dpy, &ev);
      // ev is our motion event
  }

The above test registers for pointer motion events on the root window, sends
one relative x axis motion event on the device, and then waits for the
motion event to appear.

Most low-level details are handled by XITServerTest. It will write the
config file, start the server, and remove the config and log files if the
test succeeds. A new server is started for every TEST_F you have written,
and there is no state dependency between tests.

For general feature tests, you should _always_ use a base class to derive
your feature tests from. Do not directly base your tests on any class in
tests/common. If necessary, write an empty class so the naming stays
consistent if the shared classes in tests/common change. So even if you
don't need any actual special code, base your test on a named empty class:

  class MyFeatureTest : public XITServerTest {};

Note: the X protocol is asynchronous and Xlib buffers generously. You should
_always_ call XSync() before triggering anything in the server. Otherwise,
your event selection may still be sitting in the local buffer when the
events are generated and you will never see the events.

That's the gist of writing tests. There are several helpers and other
functions that make writing tests easier. Read other tests to get a feel for
what those calls are.
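As a concrete illustration of the XORG_TESTCASE() description mentioned
above, the example test could start like this. The wording of the
description is made up, and it is assumed here that the macro takes the
description as a single string:

  TEST_F(MyFeatureTest, TestForFeature) {
      XORG_TESTCASE("Select for pointer motion events on the root window.\n"
                    "Generate one relative x axis motion event on the device.\n"
                    "Expect a MotionNotify event to be delivered.\n");

      /* test body as in the example above */
  }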
== Debugging tests ==

xorg-gtest supports a number of environment variables that help with
debugging the server; two of them are described below.

Failing tests usually leave their config and log files around for
inspection. By default, this directory is /tmp (unless changed at configure
time with --with-logpath) and each test uses a naming convention that
includes TestCase.TestName. Starting the same configuration as a failed test
is thus

  Xorg -config /tmp/EvdevDriverFloatingSlaveTest.FloatingDevice

If tests fail because the server doesn't appear to start, you may set

  export XORG_GTEST_CHILD_STDOUT=1
  sudo -E ./testname --gtest_filter="*blah*" # note the -E to sudo

If this variable is set, stdout from the server will go to stdout of your
test. It will be interleaved with the test output, but nonetheless that's a
good way to identify failing server starts.

If you need to gdb the server to set breakpoints before a test starts, set

  export XORG_GTEST_XSERVER_SIGSTOP=1

The server will be sent a SIGSTOP signal after startup and waits for you to
attach gdb. It can then be foregrounded and the test continues.

== Using the bug registry ==

Most server versions will fail at least some tests; tests may have been
committed before a fix for a given failure was upstream. It's hard to keep
track of which tests fail, which is what the bug registry addresses. Often,
what really matters is whether any test outcomes changed after a fix in the
server. To use the bug registry for this task, run the following commands.

On the __original__ server, run

  # Run all tests. Use -k to ensure all tests are run, even after some have
  # failed.
  make -k check
  # Results end up in $top_builddir/results/latest

  # Create a registry based on that.
  xit-bug-registry create results/latest/*.xml > before.xml

  # fix bug

  # Re-run on new code
  make -k check

  # Compare previous results with new results
  xit-bug-registry create results/latest/*.xml > after.xml
  xit-bug-registry compare before.xml after.xml

The output will print the test names and the expected vs. actual outcome,
plus a status code to grep for unexpected results.

This will take a while since it runs all tests. The shortcut is:

  # Re-run only the server test
  ./server --gtest_output="xml:after.xml"
  xit-bug-registry -f before.xml verify after.xml

Note that test failures in particular need to be treated with caution. An
unrelated fix may alter the outcome of an already failing test (e.g. the
server now crashes as opposed to returning incorrect values). It is not
enough to simply check that all tests have the same outcome.