|
platform where the os separator is not '\'.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
|
|
Valgrind testing is useful, but really should be done as a separate
exercise from the usual regression testing, as it takes way too long.
Rather than including it by default in all.tests and making people
exclude it with the -x valgrind option (or by using quick.tests), it
makes sense to explicitly request valgrind testing with --valgrind.
To perform valgrind testing:
$ piglit-run.py --valgrind <options> tests/quick.tests results/vg-1
The ExecTest class now handles Valgrind wrapping natively, rather than
relying on the tests/valgrind-test/valgrind-test shell script wrapper.
This provides a huge benefit: we can leverage the interpretResult()
function to make it work properly for any subclass of ExecTest. The
old shell script only worked for PlainExecTest (piglit) and GleanTest.
Another small benefit is that you can now use --valgrind with any test
profile (such as quick.tests). Also, you can use all.tests without
having to remember to specify "-x valgrind".
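The wrapping itself is conceptually simple. A minimal sketch of the
approach (the helper name and exact flags are illustrative, not
piglit's actual code):

import subprocess

def run_with_optional_valgrind(command, interpret_result, valgrind=False):
    # Hypothetical helper mirroring what ExecTest.run now does.
    if valgrind:
        # --error-exitcode surfaces memory errors in the exit status.
        command = ['valgrind', '--quiet', '--error-exitcode=1'] + command
    proc = subprocess.Popen(command, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    # Each ExecTest subclass parses its own output (interpretResult).
    result = interpret_result(out)
    if valgrind and proc.returncode != 0:
        # Under valgrind, a test passes only if it is valgrind-clean.
        result = 'fail'
    return result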
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
GPUs like to hang, especially when barraged with lots of mean Piglit
tests. Usually this results in the poor developer having to figure out
what test hung, blacklist it via -x, and start the whole test run over.
This can waste a huge amount of time, especially when many tests hang.
This patch adds the ability to resume a Piglit run where you left off.
The workflow is:
$ piglit-run.py -t foo tests/quick.tests results/foobar-1
<interrupt the test run somehow>
$ piglit-run.py -r -x bad-test results/foobar-1
To accomplish this, piglit-run.py now stores the test profile
(quick.tests) and -t/-x options in the JSON results file so it can tell
what you were originally running. When run with the --resume option, it
re-reads the results file to obtain this information (repairing broken
JSON if necessary), rewrites the existing results, and runs any
remaining tests.
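A sketch of the resume logic (the key names and the repair strategy
here are assumptions for illustration, not necessarily piglit's exact
schema):

import json

def load_interrupted_run(path):
    with open(path) as f:
        text = f.read()
    # An interrupted run typically stops mid-dictionary; appending
    # closing braces until the document parses is a crude repair.
    for _ in range(64):
        try:
            results = json.loads(text)
            break
        except ValueError:
            text += '}'
    else:
        raise ValueError('unrecoverable results file: %s' % path)
    options = results['options']      # profile plus -t/-x filters
    finished = set(results['tests'])  # tests that already ran
    return options, finished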
WARNING:
Results files produced after this commit are incompatible with older
piglit-summary-html.py (due to the extra "option" section).
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
When resuming an interrupted piglit run, we'll want to output both
existing and new results into the same 'tests' section. Since
TestProfile.run only handles newly run tests, it can't open/close the
JSON dictionary.
So, move it to the caller in piglit-run.py.
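Roughly, the call site becomes (writer method names are hypothetical):

def write_all_results(json_writer, profile, env, old_results=None):
    # The caller owns the enclosing 'tests' dictionary, so resumed
    # (old) and newly run tests land in the same section.
    json_writer.open_dict('tests')
    for name, result in (old_results or {}).items():
        json_writer.write_dict_item(name, result)
    profile.run(env, json_writer)  # writes only newly run tests
    json_writer.close_dict()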
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
list[:0] = [item] is a strange way to add an item to a list.
1. It forces every existing element to be shifted over, which can be
inefficient.
2. It produces a strange ordering:
>>> x = [1, 2, 3]
>>> x[:0] = [4]
>>> x
[4, 1, 2, 3]
...whereas list.append(item) produces [1, 2, 3, 4].
3. Most importantly, list.append is easier to understand at a glance.
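For comparison, the same session using append:
>>> x = [1, 2, 3]
>>> x.append(4)
>>> x
[1, 2, 3, 4]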
Reported-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
piglit-run.py takes the directory to store the results in as a command
line argument, yet always chdir's to piglit_dir before accessing it.
When running piglit from outside of piglit_dir, this is surprising to the
user and will fail when piglit is packaged in a distribution and the user
can't write to piglit_dir.
This came up while packaging piglit for Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=773021
where all piglit code is placed under /usr/lib64/piglit/ and symbolic
links to the piglit-* scripts are created in /usr/bin/.
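The fix, in sketch form (names are illustrative): resolve
user-supplied paths against the user's own working directory before
any chdir happens.

import os

def resolve_results_dir(path, piglit_dir):
    # Anchor the user's argument to their original CWD *before*
    # changing into piglit_dir.
    results_dir = os.path.abspath(path)
    os.chdir(piglit_dir)
    return results_dir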
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Signed-off-by: Scott Tsai <scottt.tw@gmail.com>
|
|
The main purpose of this patch is to make piglit independent of the
current working directory, so it is possible to package piglit as a RPM
package (with binaries symlinked to /usr/bin, most of the files in
read-only /usr/lib*/piglit directory, and results somewhere else).
So it is now possible to run
$ piglit-run tests/quick-driver.tests /tmp/piglit
and then with this command
$ piglit-summary-html --overwrite /tmp/piglit/results /tmp/piglit/main
generate a report in /tmp/piglit/results/index.html et al.
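The usual technique for this kind of CWD independence looks roughly
like this (a sketch, not piglit's exact code):

import os
import sys

# Locate the installation directory from the script itself rather
# than from the current working directory; realpath() follows the
# /usr/bin symlink back into /usr/lib*/piglit/.
piglit_dir = os.path.dirname(os.path.realpath(sys.argv[0]))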
Signed-off-by: Matěj Cepl <mcepl@redhat.com>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
Prior to JSON-ification, piglit wrote each test's results in a single
call, and SyncFileWriter ensured that these were done sequentially.
Now, the JSON writer internally handles locking and concurrency, so
SyncFileWriter is unnecessary. Furthermore, outputting a single test's
results now takes multiple write calls, so SyncFileWriter wouldn't actually
guard against concurrency issues anyway.
This also removes an fsync() call on each write, fixing a major
performance regression on machines without SSDs. Prior to the JSON
work, since each test mapped to a single write call, we were doing one
fsync() per test case. With JSON, we started doing many more fsyncs.
But none of them are actually necessary, so just scrap them all.
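In sketch form, the writer now looks something like this (simplified;
method and attribute names are illustrative):

import json
import threading

class JSONWriter:
    def __init__(self, f):
        self.f = f
        self.lock = threading.Lock()

    def write_dict_item(self, key, value):
        # One lock acquisition spans all the writes for one test, so
        # output from concurrent tests cannot interleave -- and there
        # is no fsync(); the OS flushes in due course.
        with self.lock:
            self.f.write('%s: ' % json.dumps(key))
            json.dump(value, self.f, indent=4)
            self.f.write(',\n')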
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39737
Cc: Chad Versace <chad@chad-versace.us>
Cc: Ian Romanick <idr@freedesktop.org>
Cc: Dave Airlie <airlied@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Chad Versace <chad@chad-versace.us>
Tested-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
-c bool, --concurrent=bool Enable/disable concurrent test runs. Valid
option values are: 0, 1, on, off. (default: on)
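A sketch of the value parsing implied by that help text:

def parse_concurrent(value):
    # Map the documented spellings onto a bool; anything else is a
    # usage error.
    if value in ('1', 'on'):
        return True
    if value in ('0', 'off'):
        return False
    raise ValueError("--concurrent expects 0, 1, on or off; got %r" % value)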
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chad Versace <chad@chad-versace.us>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
When a test run is interrupted, perhaps by a system crash, we often want
the test results. To accomplish this, Piglit must write each test result
to the result file as the test completes.
If the test run is interrupted, the result file will be corrupt. This is
corrected in a subsequent commit.
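Conceptually (a sketch; the writer object is hypothetical):

def run_one(test, json_writer):
    # Record the result the moment the test finishes, so a crash
    # loses at most the test currently executing.
    result = test.run()
    json_writer.write_dict_item(test.name, result)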
CC: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
The results file produced by piglit-run.py contains a serialized
TestrunResult, and the serialization format was horridly homebrew. This
commit replaces that insanity with json.
Benefits:
- Net loss of 113 lines of code (ignoring comments and empty lines).
- By using the json module in the Python standard library, serializing
and unserializing is nearly as simple as `json.dump(object, file)`
and `json.load(file)` (see the sketch below).
- By using a format that is easy to manipulate, it is now simple to
extend Piglit to allow users to embed custom data into the results
file.
As a side effect, the summary file is no longer needed, so it is no longer
produced.
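In sketch form, the whole (de)serialization path, assuming the
object's state is plain dicts/lists/strings:

import json

def save_results(testrun_result, f):
    # The stock encoder handles plain built-in types directly.
    json.dump(testrun_result.__dict__, f, indent=4)

def load_results(f):
    return json.load(f)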
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
Add SyncFileWriter class to synchronize writes to the 'main' results
file from multiple threads. This helps to ensure that writes to this
file are not intermingled.
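A minimal sketch of the class:

import threading

class SyncFileWriter:
    # Funnel every write through one lock so that lines written by
    # different test threads never interleave in the main file.
    def __init__(self, filename):
        self.file = open(filename, 'w')
        self.lock = threading.Lock()

    def write(self, text):
        with self.lock:
            self.file.write(text)
            self.file.flush()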
|
|
This is a blacklist to complement the existing -t|--tests whitelist.
It works similarly (it accepts a regular expression and can be
specified multiple times).
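In sketch form, the filter test (names are illustrative):

import re

def is_excluded(test_path, exclude_patterns):
    # -x may be given multiple times; skip a test when any of the
    # patterns matches its path.
    return any(re.search(p, test_path) for p in exclude_patterns)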
|
|
The results files can get rather huge when tests fail, because tests like
glean/blendFunc output a lot of debugging data, all of which we record.
Now, we generate an additional .../summary file, in which the info string
is simply truncated. Pretty stupid, but it should give enough info to get
an idea of the rough kind of failure.
Add a new option for piglit-summary-html.py to choose between full and
abbreviated info when both are present.
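The truncation itself is trivial; a sketch (the 4096-byte cutoff is an
illustrative guess, not the actual figure):

def abbreviate_info(info, limit=4096):
    # Keep only the head of the info string for the summary file.
    if len(info) <= limit:
        return info
    return info[:limit] + '...'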
|
|
The idea is to allow accompanying data (like screenshots) in the future.
|
|
Fix calls to exit() to use sys.exit().
The builtin 'exit' is just an informational string in older Python
interpreters, not a function; we want sys.exit instead.
Signed-off-by: Nicolai Haehnle <nhaehnle@gmail.com>
|