The Linux Foundation


Review of Current Test Status/Workflow

Some of this material is covered elsewhere; the best documentation is probably in the ISPRAS wiki, which is still not available. There have been other blog/LDN articles about getting started with testing, and Distribution Checker itself includes documentation. The workflow is relatively simple once you have Distribution Checker installed, and is also diagrammed within dist-checker itself (sorry, image uploads don't work here and our wiki doesn't seem to allow inline external images):


Download/Install of Test Packages.

There are currently at least 3 ways to get started with distribution testing:

  • Use the repos: with the native package manager, install either lsb-distribution-checker (and let it install the tests) or lsb-task-dist-testkit, a meta-package that will pull in all the tests along with dist-checker.
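As a sketch of the repo route, the package-manager invocation might look like this (illustrative only: the package names are from the bullet above, but the exact command depends on your distribution, and the LSB repo must already be configured):

```shell
# RPM-based distributions (illustrative):
yum install lsb-task-dist-testkit      # meta-package: all tests + dist-checker

# Debian-based distributions (illustrative):
apt-get install lsb-task-dist-testkit

# Or install only the checker and let it fetch the tests on demand:
yum install lsb-distribution-checker
```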

System Preparation

The current generation of distribution tests no longer requires the "lsb" keystone package to be installed as a dependency (although we do check the output of lsb_release). The appbat packages do require a specific version of lsb as a dependency, since verifying that the package is installable without resorting to --force is part of their FVT.

There are other steps to be taken to prepare a system for successful testing: things that aren't required by the LSB itself, but are needed by the test suites to exercise the interfaces. These are covered on another page.

Running Tests

GUI Method

  • Start dist-checker by running /opt/lsb/test/manager/bin/ You will be prompted for a root or sudo password (if you're not already root); the backend will then start and attempt to launch a browser. If no browser launches, open http://localhost:8888 (or the appropriate hostname/IP if you want to manage the tests from another machine).
  • For non-certification runs, you want Custom Tests. Select whether to download tests/support files from the internet, the LSB version you want to test against, and which test suites you want to run. You can pick and choose individual test suites, Command/Static/Runtime/Appbat, or a whole Certification set. Everything but the appbat Manual Tests will run unattended, and you can come back to the web interface later to review the results.
  • Test Results are listed under the Results heading by date/time and can be reviewed/analysed within the web interface. Tests with Known Issues that are in the problem_db are flagged, with references to LSB and upstream bug systems where applicable.
  • You can stop the back end either via the web interface under Administration or by running /opt/lsb/test/manager/bin/
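Where the automatic browser launch fails, the interface can be opened by hand. A minimal sketch (xdg-open is assumed here as the desktop URL opener; any browser pointed at the same URL works):

```shell
# Open the dist-checker web interface manually; substitute the test
# machine's hostname/IP for localhost when managing tests remotely.
xdg-open http://localhost:8888
```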

Note: I have had some success on my test systems adding /opt/lsb/test/manager/bin/ to /etc/rc.local to automate the backend startup.

CLI Method

Tests can be initiated from the command line using /opt/lsb/test/manager/utils/ This is the same application run by the GUI front end. The --help output from this command outlines fairly clearly how it is used:

[stew@acer ~]$ /opt/lsb/test/manager/utils/ --help
 Usage: /opt/lsb/test/manager/utils/ [OPTIONS] TESTS
 Runs one or more tests and produces HTML report.
   Run a set of tests:
     /opt/lsb/test/manager/utils/ -D -s 'LSB 4.1'  cmdchk libchk

   Run all automated certification tests from snaphots:
     /opt/lsb/test/manager/utils/ -v2 -D -s 'LSB 4.1' -S 'snapshot'  all
  -a,--arch <architecture>   Set machine architecture. [autodetect]
  -b,--batch                 Download all files before running tests.
  --cert                     Enable certification mode.
  --check-only               Do some checks and exit.
  -D,--download              Allow downloading needed files from the Internet.
                             Use -Dftp to prefer FTP protocol rather than HTTP.
  --comment '<text>'         Any comments for this test run.
  --force-reinstall          Reinstall packages.
  -h,--help                  Show this help and exit.
  --ignore-check             Ignore failed checks.
  -I,--ignore-unavailable    Don't fail if can't run some tests.
  --list                     List all available tests.
  -M,--mail-to <email>       Send test results to e-mail.
  --not-run                  Do some initialization, then exit (for debug).
  -m,--package-manager <name>   Package manager: 'rpm' or 'dpkg' [autodetect].
  -f,--profile <file>        Take settings from <file>.
  --report <result dir>      Build report for results in directory <result dir>.
  --post-cmd '<cmd>'         Execute '<cmd> <result>.tgz' at the end.
  -s,--standard 'LSB 4.1'    Set standard version. [autodetect via lsb_release]
  -S,--status <status>       Choose tests with this status (e.g. 'beta').
  -p,--std-profile <prof>    Use standard profile <prof> (e.g. 'core,c++').
  -r,--testrun-id <name>     Use <name> as a result subdirectory name.
  --update                   Download latest data files and test modules.
  -v,--verbose <N>           Verbose level:
                               0 - quiet, 1 - normal, 2 - verbose, 3 - debug.

 Proxy Settings:
  -x,--proxy [<user>:<password>@]<host>:<port>[,<auth>][,notunnel]   Setup proxy.
                          <auth> - authentication method (see 'man curl')
                          'notunnel' - see '--proxytunnel' on the curl manpage.
  --http-proxy <...>    Proxy settings can be specified separately for HTTP
  --ftp-proxy  <...>      and FTP.
  --no-proxy            Don't use proxy at all.

Test results land in /var/opt/lsb/test/manager/results, and can be reviewed with either a browser or tjreport (part of lsb-tet3-lite).

Reviewing Results

At the end of a test run, results land in /var/opt/lsb/test/manager/results. There is a directory tree for each test run, containing a tarball of all the results as well as HTML for browsing them; the top-level page is report.htm. If you wish to archive the results elsewhere, the tarball contains the journals, the HTML pages, and the raw test logs. A typical directory tree looks like this:
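Since the per-run tarball is self-contained, archiving a run off the test machine is just a matter of copying it, or re-packing the run directory yourself. A minimal sketch, using a scratch directory to stand in for a run under /var/opt/lsb/test/manager/results (the real directory and tarball names encode arch/distro/timestamp as in the listing below):

```shell
# Stand-in for one run directory under the results tree.
run_dir=$(mktemp -d)/demo-run
mkdir -p "$run_dir"
touch "$run_dir/report.htm" "$run_dir/journal"

# Archive the run directory, then list the tarball to verify its contents.
tar -czf /tmp/demo-run.tgz -C "$(dirname "$run_dir")" demo-run
tar -tzf /tmp/demo-run.tgz
```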

-rw-r--r-- 1 root root      454 Feb  6 17:01 INFO
-rw-r--r-- 1 root root  2228768 Feb  6 17:01 log
-rw-r--r-- 1 root root     6643 Feb  6 09:00 profile
-rw-r--r-- 1 root root  1275667 Feb  6 17:01 report.htm
drwxr-xr-x 3 root root     4096 Feb  6 17:01 results
-rw-r--r-- 1 root root     4342 Feb  6 17:01 runconfig
-rw-r--r-- 1 root root  2740698 Feb  6 17:01 verbose_log
-rw-r--r-- 1 root root 19165322 Feb  6 17:20 x86-ubuntu-latest-32-2012-02-06-09h-00m-37s.tgz

Test Fragmentation

Over time, as we've added new tests from different sources using different test technologies, we've arrived at a situation where some libraries/interfaces are touched by multiple test packages. This could complicate attempts to modularize things, particularly if we want to drill down to testing specific libraries. All required libraries are, of course, checked by libchk, and all required commands by cmdchk. This section attempts to describe how testing is fragmented across the various commands/libraries versus the other tests (probably incomplete):

Command Tests

Much of this split is intentional: cmdchk only checks for the presence of each command, while the other tests exercise some of its functionality. There are open bugs to expand the functionality testing.

Command             Tested by
gs                  cmdchk, printing-test
foomatic-rip        cmdchk, printing-test
lsb_release         cmdchk, core-test
passwd              cmdchk, core-test
chgrp               cmdchk, core-test
chown               cmdchk, core-test
chsh                cmdchk, core-test
chfn                cmdchk, core-test
groups              cmdchk, core-test
newgrp              cmdchk, core-test
groupadd            cmdchk, core-test
groupdel            cmdchk, core-test
groupmod            cmdchk, core-test
useradd             cmdchk, core-test
userdel             cmdchk, core-test
usermod             cmdchk, core-test
pax (cpio)          cmdchk, core-test
comm (li18n)        cmdchk, core-test
cpio (li18n)        cmdchk, core-test
diff (li18n)        cmdchk, core-test
ed (li18n)          cmdchk, core-test
egrep-tp (li18n)    cmdchk, core-test
ex (li18n)          cmdchk, core-test
expand (li18n)      cmdchk, core-test
fgrep (li18n)       cmdchk, core-test
find (li18n)        cmdchk, core-test
fold (li18n)        cmdchk, core-test
gencat (li18n)      cmdchk, core-test
gettext (li18n)     cmdchk, core-test
grep (li18n)        cmdchk, core-test
iconv (li18n)       cmdchk, core-test
join (li18n)        cmdchk, core-test
locale (li18n)      cmdchk, core-test
localedef (li18n)   cmdchk, core-test
ls (li18n)          cmdchk, core-test
msgfmt (li18n)      cmdchk, core-test
nm (li18n)          cmdchk, core-test
od (li18n)          cmdchk, core-test
pr (li18n)          cmdchk, core-test
printf (li18n)      cmdchk, core-test
sed (li18n)         cmdchk, core-test
shell (li18n)       cmdchk, core-test
sort (li18n)        cmdchk, core-test
tar (li18n)         cmdchk, core-test
tr (li18n)          cmdchk, core-test
unexpand (li18n)    cmdchk, core-test
uniq (li18n)        cmdchk, core-test
vi (li18n)          cmdchk, core-test
wc (li18n)          cmdchk, core-test

Library Tests

Library                    Tested by
libc (probably also includes libm, libpthreads, other libc-related libs)
                           core-test, core-t2c, olver-core
libstdc++                  libstdc++-test, cpp-t2c
libqt-mt                   desktop-test, qt3-azov
libQt* (Qt4)               desktop-test, qt4-azov
gtk/gdk/gobject/gmodule    desktop-test, desktop-t2c
libxml2                    desktop-test, xml2-azov
fontconfig                 desktop-test, desktop-t2c
freetype                   desktop-test, desktop-t2c
libX*                      core-test (li18n), xts5-test

Past Modularization Attempts

LSB 3.1/3.2 had the concept of Core & C++, Desktop, and optional Qt4 modules, and dist-checker is able to run these subset groups.

Current Subset Test Capability

Some of our test packages, particularly those that are TET-based (core-test, desktop-test, printing-test, xts5-test), have a built-in capability to run subsets of a full test run. The packaging/dist-checker does not currently utilize or expose this, but it is one possible approach to running less than the whole blob. The files tet_scen and scen.exec define a grouping which allows subsets of tests to be run. In the current implementation we typically just run "all", but when debugging test issues I often run subsets or individual tests. Look at the printing test as an example (tet_scen):

	"Starting LSB Printing Test Suite"
	"total tests in cupsConvenience 43"
	"total tests in cupsPPD 23"
	"total tests in cupsRaster 6"
	"total tests in testgs 11"
	"total tests in testfoomaticrip 13"
	"total tests in printing-fhs 1"

Everything is currently in "all", but this could be split up. Xts5 is perhaps a better example (tet_scen again):

        "total tests in xts5 4524"
        "VSW5TESTSUITE SECTION Xproto 122 389 0 0"
        "VSW5TESTSUITE SECTION Xlib3 109 161 0 0"
        "VSW5TESTSUITE SECTION Xlib4 29 324 0 0"
        "VSW5TESTSUITE SECTION Xlib5 15 84 0 0"
        "VSW5TESTSUITE SECTION Xlib6 8 50 0 0"
        "VSW5TESTSUITE SECTION Xlib7 58 172 0 0"
        "VSW5TESTSUITE CASE MotionNotify 19"
        "VSW5TESTSUITE CASE NoExpose 1"
        "VSW5TESTSUITE CASE PropertyNotify 2"

While we currently run "all", it would be easy to run just "Xproto" or "Xlib7", or even drill down to a particular test case like "NoExpose".
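For the TET-based suites, a subset run of this sort would be driven through the TET test case controller. A minimal sketch, assuming the standard tcc execute mode (-e) and a correctly configured TET_ROOT; the exact suite name/path varies per installation, so treat the invocations as illustrative:

```shell
# Illustrative: ask tcc to execute only the Xproto scenario of xts5
# instead of the default "all". Assumes the TET_ROOT/suite setup is in place.
tcc -e xts5 Xproto

# Drilling down further to a single case scenario such as NoExpose:
tcc -e xts5 NoExpose
```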

Desktop-test also has some modularity built into its run_tests script, allowing one to specify a particular module:

[stew@pavilion devel]$ ./desktop-test/scripts/runtime/run_tests -?
-?, -h    this help
-a        run only automatic tests
-i        run only interactive tests
-s        run a single module (FDO, GTKVTS, QT3, QT4, FONTCONFIG, XML, PNG, XRENDER, FREETYPE, XFT CAIRO)
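Putting those flags together, a subset run might look like this (illustrative; assumes the suite is installed, the script is invoked from the location shown in the help example, and the module names are as listed above):

```shell
# Illustrative sketch: run only the Qt4 module of desktop-test,
# per the -s flag documented above.
./desktop-test/scripts/runtime/run_tests -s QT4

# Similarly, -a restricts a run to the automatic (non-interactive) tests:
./desktop-test/scripts/runtime/run_tests -a
```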
