Some of this material is covered elsewhere; the best documentation is probably in the ISPRAS wiki, which is still not available: http://ispras.linuxfoundation.org/index.php/About_Distribution_Checker. There have been other blog/LDN articles about getting started with testing, and Distribution Checker itself includes documentation. The workflow is relatively simple once you have Distribution Checker, and is also diagrammed within dist-checker itself (sorry, image uploads don't work and our wiki doesn't seem to allow inline external images):
There are currently at least 3 ways to get started with distribution testing:
The current generation of distribution tests no longer requires the "lsb" keystone package to be installed as a dependency (although we do check the result of lsb_release). The appbat packages do require a specific version of lsb as a dependency, since verifying that the package is installable without using --force is part of their FVT.
There are other steps to take to prepare a system for successful testing: things that aren't required by the LSB itself, but are needed by the test suites to exercise the interfaces. These are covered on another page.
Note: I have had some success on my test systems adding /opt/lsb/test/manager/bin/dist-checker-start.pl to /etc/rc.local to automate the backend startup.
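If you want to try the same approach, a minimal sketch follows. It works on a scratch copy so it is safe to run as-is; on a real system you would point RC at /etc/rc.local, and the sed expression assumes your rc.local ends with a plain "exit 0" line:

```shell
# Work on a scratch copy for illustration; set RC=/etc/rc.local on a real box.
RC=$(mktemp)
printf '#!/bin/sh\nexit 0\n' > "$RC"

LINE='/opt/lsb/test/manager/bin/dist-checker-start.pl'
# Insert the backend startup just before the final "exit 0", and only once.
grep -qF "$LINE" "$RC" || sed -i "s|^exit 0|$LINE\nexit 0|" "$RC"
cat "$RC"
```

Re-running the snippet is harmless: the grep guard keeps the line from being added twice.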
Tests can be initiated from the command line using /opt/lsb/test/manager/utils/dist-checker.pl. This is the same application run by the GUI front end. The --help output from this command outlines fairly clearly how it is used:
 [stew@acer ~]$ /opt/lsb/test/manager/utils/dist-checker.pl --help
 Usage: /opt/lsb/test/manager/utils/dist-checker.pl [OPTIONS] TESTS
 Runs one or more tests and produces HTML report.
 Examples:
   Run a set of tests:
     /opt/lsb/test/manager/utils/dist-checker.pl -D -s 'LSB 4.1' cmdchk libchk
   Run all automated certification tests from snapshots:
     /opt/lsb/test/manager/utils/dist-checker.pl -v2 -D -s 'LSB 4.1' -S 'snapshot' all
 Options:
  -a,--arch <architecture>     Set machine architecture. [autodetect]
  -b,--batch                   Download all files before running tests.
     --cert                    Enable certification mode.
     --check-only              Do some checks and exit.
  -D,--download                Allow downloading needed files from the Internet.
                               Use -Dftp to prefer FTP protocol rather than HTTP.
     --comment '<text>'        Any comments for this test run.
     --force-reinstall         Reinstall packages.
  -h,--help                    Show this help and exit.
     --ignore-check            Ignore failed checks.
  -I,--ignore-unavailable      Don't fail if can't run some tests.
     --list                    List all available tests.
  -M,--mail-to <email>         Send test results to e-mail.
     --not-run                 Do some initialization, then exit (for debug).
  -m,--package-manager <name>  Package manager: 'rpm' or 'dpkg' [autodetect].
  -f,--profile <file>          Take settings from <file>.
     --report <result dir>     Build report for results in directory <result dir>.
     --post-cmd '<cmd>'        Execute '<cmd> <result>.tgz' at the end.
  -s,--standard 'LSB 4.1'      Set standard version. [autodetect via lsb_release]
  -S,--status <status>         Choose tests with this status (e.g. 'beta').
  -p,--std-profile <prof>      Use standard profile <prof> (e.g. 'core,c++').
  -r,--testrun-id <name>       Use <name> as a result subdirectory name.
     --update                  Download latest data files and test modules.
  -v,--verbose <N>             Verbose level: 0 - quiet, 1 - normal,
                               2 - verbose, 3 - debug.
 Proxy Settings:
  -x,--proxy [<user>:<password>@]<host>:<port>[,<auth>][,notunnel]
                               Setup proxy. <auth> - authentication method
                               (see 'man curl'). 'notunnel' - see
                               '--proxytunnel' on the curl manpage.
     --http-proxy <...>        Proxy settings can be specified separately
     --ftp-proxy <...>         for HTTP and FTP.
     --no-proxy                Don't use proxy at all.
Test results land in /var/opt/lsb/test/manager/results, and can be reviewed with either a browser or tjreport (part of lsb-tet3-lite). There is a directory tree for each test run containing a tarball of all the results as well as HTML for browsing them; the top-level page is report.htm. If you wish to archive the results elsewhere, the tarball contains the journals, the HTML pages, and the raw test logs. A typical directory tree looks like this:
 -rw-r--r-- 1 root root      454 Feb  6 17:01 INFO
 -rw-r--r-- 1 root root  2228768 Feb  6 17:01 log
 -rw-r--r-- 1 root root     6643 Feb  6 09:00 profile
 -rw-r--r-- 1 root root  1275667 Feb  6 17:01 report.htm
 drwxr-xr-x 3 root root     4096 Feb  6 17:01 results
 -rw-r--r-- 1 root root     4342 Feb  6 17:01 runconfig
 -rw-r--r-- 1 root root  2740698 Feb  6 17:01 verbose_log
 -rw-r--r-- 1 root root 19165322 Feb  6 17:20 x86-ubuntu-latest-32-2012-02-06-09h-00m-37s.tgz
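Archiving a run can then be as simple as copying the tarball out of the results tree. A hedged sketch follows, using scratch directories so it is self-contained; on a real system RESULTS would be /var/opt/lsb/test/manager/results and DEST wherever you keep archives:

```shell
# Scratch stand-ins for the real paths, so the sketch runs anywhere.
RESULTS=$(mktemp -d)   # stands in for /var/opt/lsb/test/manager/results
DEST=$(mktemp -d)      # stands in for your archive location

# Fake one completed run: a run directory holding a results tarball.
mkdir -p "$RESULTS/run1"
echo "journal" > "$RESULTS/run1/journal"
tar czf "$RESULTS/run1/run1.tgz" -C "$RESULTS/run1" journal

# Copy every run's tarball into the archive and list the contents.
find "$RESULTS" -name '*.tgz' -exec cp {} "$DEST" \;
tar tzf "$DEST/run1.tgz"
```

On a real run the tarball listing would show the journals, the HTML pages, and the raw test logs described above.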
Over time, as we've added new tests from different sources using different test technologies, we've arrived at a situation where some libraries/interfaces are touched by multiple test packages. This could complicate attempts to modularize things, particularly if we want to drill down to testing specific libraries. All required libraries are of course checked by libchk, and all required commands by cmdchk. This section attempts to describe how coverage is split between those checks and the other tests (probably incomplete):
Much of this split is intentional: cmdchk only checks for the presence of a command, while the other tests exercise some of its functionality. There are open bugs to expand the functionality testing.
|pax (cpio)||cmdchk, core-test|
|comm (li18n)||cmdchk, core-test|
|cpio (li18n)||cmdchk, core-test|
|diff (li18n)||cmdchk, core-test|
|ed (li18n)||cmdchk, core-test|
|egrep-tp (li18n)||cmdchk, core-test|
|ex (li18n)||cmdchk, core-test|
|expand (li18n)||cmdchk, core-test|
|fgrep (li18n)||cmdchk, core-test|
|find (li18n)||cmdchk, core-test|
|fold (li18n)||cmdchk, core-test|
|gencat (li18n)||cmdchk, core-test|
|gettext (li18n)||cmdchk, core-test|
|grep (li18n)||cmdchk, core-test|
|iconv (li18n)||cmdchk, core-test|
|join (li18n)||cmdchk, core-test|
|locale (li18n)||cmdchk, core-test|
|localedef (li18n)||cmdchk, core-test|
|ls (li18n)||cmdchk, core-test|
|msgfmt (li18n)||cmdchk, core-test|
|nm (li18n)||cmdchk, core-test|
|od (li18n)||cmdchk, core-test|
|pr (li18n)||cmdchk, core-test|
|printf (li18n)||cmdchk, core-test|
|sed (li18n)||cmdchk, core-test|
|shell (li18n)||cmdchk, core-test|
|sort (li18n)||cmdchk, core-test|
|tar (li18n)||cmdchk, core-test|
|tr (li18n)||cmdchk, core-test|
|unexpand (li18n)||cmdchk, core-test|
|uniq (li18n)||cmdchk, core-test|
|vi (li18n)||cmdchk, core-test|
|wc (li18n)||cmdchk, core-test|
|libc (probably also includes libm, libpthreads, other libc-related libs)||core-test, core-t2c, olver-core|
|libQt* (Qt4)||desktop-test, qt4-azov|
|libX*||core-test (li18n), xts5-test|
LSB 3.1/3.2 had the concept of Core & C++, Desktop, and Qt4 (optional) subsets, and dist-checker is able to run these subset groups.
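Using the -p/--std-profile option shown in the --help output earlier, such a subset run might look like the following. The 'core,c++' profile string is the one from the help text's own example; treat the available profile names as something to verify on your install with --list:

```
# Run only the core and C++ profile tests for LSB 4.1,
# downloading anything missing (flags as documented by --help).
/opt/lsb/test/manager/utils/dist-checker.pl -D -s 'LSB 4.1' -p 'core,c++' all
```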
Some of our test packages, particularly those that are TET based (core-test, desktop-test, printing-test, xts5-test), have a built-in capability to run subsets of a full test run. The packaging/dist-checker does not currently utilize or expose this, but it is one possible approach to running less than the whole blob. The files tet_scen and scen.exec define groupings which allow subsets of tests to be run. In the current implementation we typically just run "all", but when debugging test issues I often run subsets or individual tests. Look at the printing test as an example (tet_scen):
all "Starting LSB Printing Test Suite" "total tests in cupsConvenience 43" /convenience/cupsConvenience "total tests in cupsPPD 23" /ppd/cupsPPD "total tests in cupsRaster 6" /raster/cupsRaster "total tests in testgs 11" /testgs/testgs "total tests in testfoomaticrip 13" /testfoomaticrip/testfoomaticrip "total tests in printing-fhs 1" /fhs/share-ppd/share-ppd
Everything is currently in "all", but this could be split up. Xts5 is perhaps a better example (tet_scen again):
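As an illustration of what such a split might look like, a scenario could be added alongside "all" in the printing tet_scen that groups just the raster tests. This is a hypothetical fragment (the "raster-only" scenario name is invented, not something shipped with the suite):

```
# hypothetical addition to printing's tet_scen
raster-only
	"total tests in cupsRaster 6"
	/raster/cupsRaster
```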
all "total tests in xts5 4524" "VSW5TESTSUITE SECTION Xproto 122 389 0 0" :include:/scenarios/Xproto_scen "VSW5TESTSUITE SECTION Xlib3 109 161 0 0" :include:/scenarios/Xlib3_scen "VSW5TESTSUITE SECTION Xlib4 29 324 0 0" :include:/scenarios/Xlib4_scen "VSW5TESTSUITE SECTION Xlib5 15 84 0 0" :include:/scenarios/Xlib5_scen "VSW5TESTSUITE SECTION Xlib6 8 50 0 0" :include:/scenarios/Xlib6_scen "VSW5TESTSUITE SECTION Xlib7 58 172 0 0" :include:/scenarios/Xlib7_scen ... MotionNotify "VSW5TESTSUITE CASE MotionNotify 19" /tset/Xlib11/mtnntfy/Test NoExpose "VSW5TESTSUITE CASE NoExpose 1" /tset/Xlib11/nexps/Test PropertyNotify "VSW5TESTSUITE CASE PropertyNotify 2" /tset/Xlib11/prprtyntfy/Test ...
While we run "all" in run_xts5.sh, it would be easy to run just "Xproto" or "Xlib7", or even drill down to a particular test like "NoExpose".
Desktop-test also has some modularity built into its run_tests script, allowing one to specify a particular module:
 [stew@pavilion devel]$ ./desktop-test/scripts/runtime/run_tests -?
 usage:
   -?, -h  this help
   -a      run only automatic tests
   -i      run only interactive tests
   -s      run a single module (FDO, GTKVTS, QT3, QT4, FONTCONFIG,
           XML, PNG, XRENDER, FREETYPE, XFT CAIRO)